ASN.1 vs Text

Paul Long Plong at SMITHMICRO.COM
Wed Apr 7 11:13:15 EDT 1999


The experiment should implement the entire protocol and include a large set of
messages. The syntax should then be extended in various ways. Text encoding is
seductive. At first, text looks easier than ASN.1, but as more complex
structures are needed and the syntax grows, it becomes cumbersome and
error-prone.
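
A contrived sketch of that progression (hypothetical syntax, not any actual
proposal): a flat text format parses with a one-liner, but the moment a value
is itself structured, the syntax needs quoting, escaping and nesting rules,
which is where the errors creep in.

    # Hypothetical flat text command: verb followed by key=value pairs.
    def parse_flat(msg):
        verb, *pairs = msg.split()
        return verb, dict(p.split("=", 1) for p in pairs)

    parse_flat("Add ctx=5 term=ds0/3 mode=sendrecv")        # fine

    # Once a value is itself structured, naive splitting is silently
    # wrong, and the "simple" text syntax needs a real grammar.
    parse_flat('Add ctx=5 media="audio pt=0 rate=8000"')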

Paul Long
Smith Micro Software, Inc.

        -----Original Message-----
        From:   Tom-PT Taylor [SMTP:taylor at NORTELNETWORKS.COM]
        Sent:   Wednesday, March 31, 1999 9:17 PM
        To:     ITU-SG16 at MAILBAG.INTEL.COM
        Subject:        ASN.1 vs Text

        Since the other responses seem to have gone off into a discussion of
        firewalls, let me respond to this more directly by recalling a bit
        of past discussion.

        1) On the Megaco list, we finally agreed that the text vs. binary
        question is something to be resolved by experiment.

        2) In Turin (I think), the general opinion was that the protocol
        would have to be binary to achieve the required performance.
        Interestingly enough, however, no one that I recall felt that it
        should be PER ASN.1 (or even, I think, BER ASN.1).  Good reasons
        were given at the time.  It would be good if someone can remember
        them, but I think it has something to do with processing
        performance and less risk of fragmentation than call signalling.
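
A toy size comparison for the point above (illustrative only; this is a
hand-packed layout, not PER, and not the actual message set), just to show
where the binary-for-performance argument comes from:

    import struct

    text = b"TransactionID=1234 Context=5 Mode=SendReceive"   # 45 octets
    binary = struct.pack("!IHB", 1234, 5, 2)                  # 7 octets
    print(len(text), len(binary))

Real PER would pack fields to the bit level against the ASN.1 schema and
would typically be smaller still.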

        3) We have had an occasionally recurring suggestion on the Megaco
        list that the Media Gateway control protocol should conform to the
        format of other messaging out of the MGC, for the reason you give
        (i.e. to minimize transcoding).  Interestingly, the person
        advocating this was talking about billing data, not signalling.
        The counter-argument on the list is that the Media Gateway control
        protocol must work with different signalling protocols (for example
        in H.GCP's case, with any of the H-series systems), so there is no
        point in optimizing it for just one.

        The real sticking point of debate is going to be whether the media
        description structures will be those defined for H.245 OLC etc., or
        SDP.  Text vs. binary is a closely related but broader debate.
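
For concreteness, the two candidate styles of media description look very
different on the wire: SDP is a handful of text lines (the example below
follows RFC 2327), while the H.245 equivalent is an ASN.1
OpenLogicalChannel structure encoded in binary.

    m=audio 49170 RTP/AVP 0
    a=rtpmap:0 PCMU/8000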

        I'd suggest we start thinking about experimental design.

        > -----Original Message-----
        > From: Ami Amir [SMTP:amir at RADVISION.RAD.CO.IL]
        > Sent: Wednesday, March 31, 1999 4:15 AM
        > To:   ITU-SG16 at MAILBAG.INTEL.COM
        > Subject:      H.320 gateways a MEGACO / ITU
        >
        > Hi
        >
        > During the last MEGACO conference call - Tom Taylor asked what
        > could hinder ITU acceptance of the MEGACO work. It is obviously in
        > everybody's interest that IETF and ITU standards merge. As an
        > example - it would be great if the MEGACO work could become an ITU
        > SG 16 contribution.
        >
        > An obvious item is efficient multimedia support (as reflected also
        > in John's mail).
        >
        > However, I think that an even greater difficulty facing us will be
        > the encoding scheme (ASN.1 vs Text).
        >
        > There are many who feel that ASN.1 is too heavy and complex for
        > simple devices, and should be avoided. This was one of the major
        > reasons for the emergence of SIP.
        >
        > On the other hand, experience in the ITU H.323 work has shown that
        > since ASN.1 is the encoding scheme on the PSTN side, the use of
        > ASN.1 cleared the way for PSTN to IP interoperability. This
        > feature will be extremely important in hybrid networks that need
        > to provide Intelligent Network (IN) services (e.g. "800"), while
        > retaining the investment in existing IN, billing and directory
        > services (411).
        >
        > Another problem is that if ASN.1 is not chosen, every device that
        > has to connect a MEGACO component and H.323 will need to
        > disassemble and re-assemble (transcode) messages; hence network
        > performance will suffer, and those devices will be more complex.
        > A prime example - MGC to GK communications.
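
A contrived sketch of the transcoding step described above (field names and
encodings are hypothetical, not the real protocols): a device between a
text-encoded control protocol and a binary-encoded H.323 element has to
fully parse one form and re-serialize the other for every message it relays.

    import struct

    def text_to_binary(msg: str) -> bytes:
        fields = dict(p.split("=", 1) for p in msg.split())
        # map and validate field names, then re-encode for the binary side
        return struct.pack("!IH", int(fields["txn"]), int(fields["ctx"]))

    def binary_to_text(buf: bytes) -> str:
        txn, ctx = struct.unpack("!IH", buf)
        return f"txn={txn} ctx={ctx}"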
        >
        > I am not promoting a specific approach. I just think that this
        > complex issue needs to be addressed if we want to be able to
        > accept a universal protocol.
        >
        > Do you think this is really a problem?
        > If so - any ideas on how to bridge the gap?
        >
        > Ami
        >
        >         -----Original Message-----
        >         From:   John Segers [SMTP:jsegers at lucent.com]
        >         Sent:   Tuesday, March 30, 1999 6:27 PM
        >         To:     ITU-SG16 at mailbag.cps.intel.com
        >         Subject:        H.320 gateways
        >
        >         People,
        >
        >         In yesterday's conference call, the subject of H.320 GWs
        >         was raised briefly.  In my opinion, the connection model
        >         and protocol should be able to deal with H.320.  I would
        >         like to continue discussion on it on the mailing list.
        >
        >         H.320 allows a user to have a session with both audio and
        >         video on a single 64 kbit/s channel such as an ISDN
        >         B-channel.  The same channel carries some signalling
        >         information (frame alignment, bitrate allocation).  To a
        >         MG supporting H.320, this means that on a single endpoint,
        >         three streams can come in, carrying different types of
        >         media.  The current connection model of megaco/H.gcp does
        >         not cater to this.  I see two possible solutions:
        >
        >         The first is to allow multiple media in one context and to
        >         describe for terminations the logical streams they carry.
        >         In a picture:
        >
        >                               +----------+
        >                               |          |
        >                               |          +--------- signalling (FAS, BAS)
        >                               |          |
        >         B-channel   ==========+          +--------- audio (16 kbit/s)
        >                               |          |
        >                               |          +--------- video (46.4 kbit/s)
        >                               |          |
        >                               +----------+
        >
        >         The second solution is to have separate terminations for
        >         the different streams.  They would all "connect to" the
        >         same physical endpoint.  In order to properly identify the
        >         terminations, it is necessary to have logical names for
        >         them.  The physical endpoint they connect to may have the
        >         hierarchical name proposed in the megaco document.
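
A minimal data-model sketch of the second solution (all names hypothetical),
showing three logically named terminations that all reference the same
physical endpoint within one context:

    from dataclasses import dataclass, field

    @dataclass
    class Termination:
        logical_name: str              # e.g. "bchan1/audio"
        physical_endpoints: list[str]  # e.g. ["slot2/port3/bchan1"]
        media_type: str                # "signalling", "audio" or "video"

    @dataclass
    class Context:
        context_id: int
        terminations: list[Termination] = field(default_factory=list)

    ctx = Context(1, [
        Termination("bchan1/fas-bas", ["slot2/port3/bchan1"], "signalling"),
        Termination("bchan1/audio",   ["slot2/port3/bchan1"], "audio"),
        Termination("bchan1/video",   ["slot2/port3/bchan1"], "video"),
    ])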
        >
        >         Another example of an H.320 session is the case of two
        >         B-channels being used for an audiovisual call.  The
        >         following frame structure is then possible.
        >
        >            +--------------------------++-----------------------+
        >            | Channel 1                || Channel 2             |
        >            +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
        >            |Bit 1|B2|B3|B4|B5|B6|B7|B8||B1|B2|B3|B4|B5|B6|B7|B8|
        >            +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
        >         1  | a1  |a2|a3|a4|a5|a6|v1|F ||v2|v3|v4|v5|v6|v7|v8|F |
        >         2  | a7  |a8|a9|a |a |a |v9|F ||v |v |v |v |v |v |v |F |
        >         3  | a   |a |a |a |a |a |v |F ||v |v |v |v |v |v |v |F |
        >         4  | a   |  |  |  |  |a |v |F ||v |              |v |F |
        >         5  | a   |  |  |  |  |a |v |F ||v |              |v |F |
        >         6  | a   |  |  |  |  |a |v |F ||v |              |v |F |
        >         7  | a   |  |  |  |  |a |v |F ||v |              |v |F |
        >         8  | a   |  |  |  |  |a |v |F ||v |              |v |F |
        >            +---------------------------------------------------+
        >         9  | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >         10 | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >         11 | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >         12 | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >         13 | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >         14 | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >         15 | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >         16 | a   |  |  |  |  |a |v |B ||v |              |v |B |
        >            +---------------------------------------------------+
        >         17 | a   |  |  |  |  |a |v |v ||v |              |v |v |
        >          .
        >          .
        >          .
        >         80 | a   |  |  |  |  |a |v |v ||v |              |v |v |
        >            +---------------------------------------------------+
        >
        >         (a=audio, v=video, F=FAS, B=BAS).
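
Read as a demultiplexing rule (a rough sketch based only on the table above,
not the full H.221 framing; bit 1 is taken here as the most significant bit
of each octet): in channel 1, bits 1-6 carry audio, bit 7 carries video, and
bit 8 carries FAS in octets 1-8, BAS in octets 9-16, and video thereafter.

    def demux_channel1(octets):
        audio, video, fas, bas = [], [], [], []
        for i, octet in enumerate(octets, start=1):   # 80 octets expected
            bits = [(octet >> (7 - b)) & 1 for b in range(8)]  # bits 1..8
            audio.extend(bits[0:6])
            video.append(bits[6])
            if i <= 8:
                fas.append(bits[7])
            elif i <= 16:
                bas.append(bits[7])
            else:
                video.append(bits[7])
        return audio, video, fas, bas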
        >
        >         We see that the video stream is split up over two
        >         channels.  In order to cater to this, it seems we have to
        >         allow terminations to receive media from and send it to
        >         multiple physical endpoints.  The two approaches outlined
        >         above can both be extended to allow this.  Both extensions
        >         will lead to the introduction of logical names for
        >         terminations.  In the first approach there will be one
        >         termination "containing" two B-channels on one side and
        >         three logical streams on the other.  In the second
        >         approach there will be three terminations, the one for the
        >         video stream referencing both B-channels, the ones for
        >         signalling and audio referencing only channel 1.
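
Continuing the data-model sketch from above (hypothetical names): in the
two-channel case the video termination would reference both physical
B-channels, while the signalling and audio terminations reference only
channel 1.

    two_chan = Context(2, [
        Termination("call7/fas-bas", ["slot2/port3/bchan1"], "signalling"),
        Termination("call7/audio",   ["slot2/port3/bchan1"], "audio"),
        Termination("call7/video",   ["slot2/port3/bchan1",
                                      "slot2/port3/bchan2"], "video"),
    ])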
        >
        >         The second approach allows us to keep separate contexts
        >         for different media types.  It is then easy to delete, for
        >         instance, the video part of a session (session used
        >         loosely to describe the contexts for the audio and video).
        >
        >         The first approach groups the streams coming from/going to
        >         one user, making it possible to remove a user from a
        >         context more easily.
        >
        >
        >         Personally, I can't decide which approach I would prefer.
        >         How do others feel about these ideas?
        >
        >         Regards,
        >
        >         John Segers
        >         --
        >         John Segers                       email: jsegers at lucent.com
        >         Lucent Technologies                            Room HE 344
        >         Dept. Forward Looking Work             phone: +31 35 687 4724
        >         P.O. Box 18, 1270 AA  Huizen             fax: +31 35 687 5954


