H.320 gateways

John Segers jsegers at lucent.com
Tue Mar 30 10:26:56 EST 1999


People,

In yesterday's conference call, the subject of H.320 GWs was raised
briefly. In my opinion, the connection model and protocol should be able
to deal with H.320.  I would like to continue the discussion on the
mailing list.

H.320 allows a user to have a session with both audio and video on a
single 64 kbit/s channel such as an ISDN B-channel.  The same channel
carries some signalling information (frame alignment, bitrate
allocation).  To an MG supporting H.320, this means that three streams,
carrying different types of media, can come in on a single endpoint.
The current connection model of megaco/H.gcp does not cater to this.  I
see two possible solutions:

The first is to allow multiple media in one context and to describe,
for each termination, the logical streams it carries.  In a picture:

                      +----------+
                      |          |
                      |          +--------- signalling (FAS, BAS)
                      |          |
B-channel   ==========+          +--------- audio (16 kbit/s)
                      |          |
                      |          +--------- video (46.4 kbit/s)
                      |          |
                      +----------+
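
For a quick check, the rates in the picture add up: FAS and BAS
together occupy 1.6 kbit/s, and 16 + 46.4 + 1.6 = 64 kbit/s.  As a
sketch of how the first approach might be represented (the names and
types below are mine, purely for illustration; nothing here comes from
the megaco/H.gcp drafts), a termination could carry a list of logical
streams next to its physical endpoint:

  /* Illustrative only: one termination demultiplexing a physical
   * channel into logical streams (first approach). */

  enum media { MEDIA_SIGNALLING, MEDIA_AUDIO, MEDIA_VIDEO };

  struct logical_stream {
      enum media type;
      unsigned   rate_bps;    /* e.g. 16000 for the audio stream */
  };

  struct termination {
      const char            *physical;    /* the B-channel            */
      struct logical_stream  streams[3];  /* signalling, audio, video */
      unsigned               nstreams;
  };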

The second solution is to have separate terminations for the different
streams.  They would all "connect to" the same physical endpoint.  In
order to identify the terminations properly, it is necessary to give
them logical names.  The physical endpoint they connect to may have the
hierarchical name proposed in the megaco document.
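
For instance (invented names, merely in the style of the hierarchical
names proposed in the megaco document), the three terminations for one
B-channel might be called

   ds/ds1-1/B1/sig     (FAS, BAS)
   ds/ds1-1/B1/audio
   ds/ds1-1/B1/video

all three referencing the physical endpoint ds/ds1-1/B1.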

Another example of an H.320 session is the case of two B-channels being
used for an audiovisual call.  The following frame structure is then
possible.

   +--------------------------++-----------------------+
   | Channel 1                || Channel 2             |
   +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
   |Bit 1|B2|B3|B4|B5|B6|B7|B8||B1|B2|B3|B4|B5|B6|B7|B8|
   +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
1  | a1  |a2|a3|a4|a5|a6|v1|F ||v2|v3|v4|v5|v6|v7|v8|F |
2  | a7  |a8|a9|a |a |a |v9|F ||v |v |v |v |v |v |v |F |
3  | a   |a |a |a |a |a |v |F ||v |v |v |v |v |v |v |F |
4  | a   |  |  |  |  |a |v |F ||v |              |v |F |
5  | a   |  |  |  |  |a |v |F ||v |              |v |F |
6  | a   |  |  |  |  |a |v |F ||v |              |v |F |
7  | a   |  |  |  |  |a |v |F ||v |              |v |F |
8  | a   |  |  |  |  |a |v |F ||v |              |v |F |
   +---------------------------------------------------+
9  | a   |  |  |  |  |a |v |B ||v |              |v |B |
10 | a   |  |  |  |  |a |v |B ||v |              |v |B |
11 | a   |  |  |  |  |a |v |B ||v |              |v |B |
12 | a   |  |  |  |  |a |v |B ||v |              |v |B |
13 | a   |  |  |  |  |a |v |B ||v |              |v |B |
14 | a   |  |  |  |  |a |v |B ||v |              |v |B |
15 | a   |  |  |  |  |a |v |B ||v |              |v |B |
16 | a   |  |  |  |  |a |v |B ||v |              |v |B |
   +---------------------------------------------------+
17 | a   |  |  |  |  |a |v |v ||v |              |v |v |
 .
 .
 .
80 | a   |  |  |  |  |a |v |v ||v |              |v |v |
   +---------------------------------------------------+

(a = audio, v = video, F = FAS, frame alignment signal; B = BAS,
bit-rate allocation signal).
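
Assuming H.221 framing (80 octets per 10 ms frame, so that each bit
position is an 8 kbit/s subchannel) and reading the blank cells as
elided repeats of "a" and "v", the rates can be verified with a small
calculation.  The program below is only a sanity check of the figure,
not part of any proposal:

  #include <stdio.h>

  int main(void)
  {
      double subch   = 8.0;                  /* kbit/s per bit position */
      double audio   = 6 * subch;            /* ch 1, bits 1-6: 48      */
      double video   = 1 * subch             /* ch 1, bit 7             */
                     + 7 * subch             /* ch 2, bits 1-7          */
                     + 2 * (64.0 / 80.0) * subch;  /* bit 8, rows 17-80 */
      double fas_bas = 2 * 2 * (8.0 / 80.0) * subch;  /* F and B, both
                                                         channels      */

      printf("audio %g + video %g + FAS/BAS %g = %g kbit/s\n",
             audio, video, fas_bas, audio + video + fas_bas);  /* 128 */
      return 0;
  }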

We see that the video stream is split up over two channels.  In order to
cater to this, it seems we have to allow terminations to receive media
from and send it to multiple physical endpoints.  The two approaches
outlined above can both be extended to allow this. Both extensions will
lead to the introduction of logical names for terminations.  In the
first approach there will be one termination "containing" two B-channels
on one side and three logical streams on the other.  In the second
approach there will be three terminations, the one for the video stream
referencing both B-channels, the ones for signalling and audio
referencing only channel 1.
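
Extending the earlier sketch (again with invented names), the second
approach would let a termination reference a list of physical
endpoints instead of a single one:

  /* Illustrative only: a termination may now reference several
   * physical endpoints; the video termination references both
   * B-channels, the other two only channel 1. */

  struct termination {
      const char *logical_name;  /* e.g. "ds/ds1-1/video"               */
      const char *physical[2];   /* e.g. "ds/ds1-1/B1", "ds/ds1-1/B2"   */
      unsigned    nphysical;     /* 1 for signalling/audio, 2 for video */
  };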

The second approach allows us to keep separate contexts for different
media types.  It is then easy to delete, for instance, the video part
of a session ("session" used loosely to describe the contexts for the
audio and video).

The first approach groups the streams coming from and going to one
user, making it easier to remove a user from a context.


Personally, I can't decide which approach I would prefer.  How do others
feel about these ideas?

Regards,

John Segers
--
John Segers                                  email: jsegers at lucent.com
Lucent Technologies                                        Room HE 344
Dept. Forward Looking Work                      phone: +31 35 687 4724
P.O. Box 18, 1270 AA  Huizen                      fax: +31 35 687 5954


