sg16-avd
March 1999
- 93 participants
- 145 discussions
I'm in favor of the second approach.
You could have a termination class H.320, with H.320/audio,
H.320/video, and H.320/T.120. There would be three contexts,
each with different media. There is a slight ugliness in that
the termination class would define all the parameters for
all of the streams as one big list, and each of the
terminations would have all of them. Unfortunate, but it
works.
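The "one big list" ugliness described above can be pictured with a short sketch. The parameter names and the helper below are invented for illustration; nothing here comes from the draft itself.

```python
# Sketch only: a single H.320 termination class whose parameter set is
# the union of the parameters of all three streams, so every termination
# carries fields it does not use. All field names are invented.

H320_TERMINATION_CLASS = {
    # audio-stream parameters
    "audioCodec": None,
    "audioBitrate": None,
    # video-stream parameters
    "videoCodec": None,
    "videoBitrate": None,
    # T.120 data parameters
    "t120Profile": None,
}

def make_termination(medium, **params):
    """Instantiate a termination: full parameter list, subset populated."""
    unknown = set(params) - set(H320_TERMINATION_CLASS)
    if unknown:
        raise ValueError("not in termination class: %s" % sorted(unknown))
    term = dict(H320_TERMINATION_CLASS)   # every termination gets all fields
    term.update(params)                   # only its own subset is meaningful
    term["medium"] = medium
    return term
```

An audio termination built this way still carries the (unused) video and T.120 fields, which is exactly the awkwardness the message points out.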
Christian did not want to infer anything from the name.
We would need some other way for the MG to know which
medium was on which termination if we couldn't infer it
from the name.
I like this approach because, from the MGC's point of view,
whether the three streams are on one gateway or on multiple
gateways, it operates on them the same way; only the names
are changed (to protect the innocent). The MG is built
differently in these cases, but I don't see any way around that.
BTW, the H.320 stream could be multi-ported, so you could have
H.320/7/video.
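As a sketch of such hierarchical termination names, the parsing rule below assumes names of the form `<class>[/<port>]/<medium>` (e.g. "H.320/audio", "H.320/7/video"); the exact syntax was still under discussion, so this is an assumption, not anything from the draft.

```python
# Illustrative sketch only: split hierarchical termination names of the
# form "<class>[/<port>]/<medium>" into their components. The naming
# syntax itself is an assumption based on the examples in the message.

def parse_termination_name(name):
    """Return (termination_class, port, medium) for a hierarchical name."""
    parts = name.split("/")
    if len(parts) == 2:                    # e.g. "H.320/audio"
        cls, medium = parts
        return cls, None, medium
    if len(parts) == 3:                    # e.g. "H.320/7/video" (multi-ported)
        cls, port, medium = parts
        return cls, int(port), medium
    raise ValueError("unrecognized termination name: %r" % name)
```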
> -----Original Message-----
> From: John Segers [mailto:jsegers@lucent.com]
> Sent: Tuesday, March 30, 1999 10:27 AM
> To: megaco(a)BayNetworks.COM; ITU-SG16(a)mailbag.cps.intel.com
> Subject: H.320 gateways
>
>
> People,
>
> In yesterday's conference call, the subject of H.320 GWs was raised
> briefly. In my opinion, the connection model and protocol
> should be able
> to deal with H.320. I would like to continue discussion on it on the
> mailing list.
>
> H.320 allows a user to have a session with both audio and video on a
> single 64 kbit/s channel such as an ISDN B-channel. The same channel
> carries some signalling information (frame alignment, bitrate
> allocation). To a MG supporting H.320, this means that on a single
> endpoint, three streams can come in, carrying different types
> of media.
> The current connection model of megaco/H.gcp does not cater
> to this. I
> see two possible solutions:
>
> The first is to allow multiple media in one context and to
> describe for
> terminations the logical streams they carry. In a picture:
>
> +----------+
> | |
> | +--------- signalling (FAS, BAS)
> | |
> B-channel ==========+ +--------- audio (16 kbit/s)
> | |
> | +--------- video (46.4 kbit/s)
> | |
> +----------+
>
> The second solution is to have separate terminations for the different
> streams. They would all "connect to" the same physical endpoint. In
> order to properly identify the terminations, it is necessary to have
> logical names for them. The physical endpoint they connect to
> may have the
> hierarchical name proposed in the megaco document.
>
> Another example of an H.320 session is the case of two B-channels being
> used for an audiovisual call. The following frame structure is then
> possible.
>
> +--------------------------++-----------------------+
> | Channel 1 || Channel 2 |
> +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
> |Bit 1|B2|B3|B4|B5|B6|B7|B8||B1|B2|B3|B4|B5|B6|B7|B8|
> +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
> 1 | a1 |a2|a3|a4|a5|a6|v1|F ||v2|v3|v4|v5|v6|v7|v8|F |
> 2 | a7 |a8|a9|a |a |a |v9|F ||v |v |v |v |v |v |v |F |
> 3 | a |a |a |a |a |a |v |F ||v |v |v |v |v |v |v |F |
> 4 | a | | | | |a |v |F ||v | |v |F |
> 5 | a | | | | |a |v |F ||v | |v |F |
> 6 | a | | | | |a |v |F ||v | |v |F |
> 7 | a | | | | |a |v |F ||v | |v |F |
> 8 | a | | | | |a |v |F ||v | |v |F |
> +---------------------------------------------------+
> 9 | a | | | | |a |v |B ||v | |v |B |
> 10 | a | | | | |a |v |B ||v | |v |B |
> 11 | a | | | | |a |v |B ||v | |v |B |
> 12 | a | | | | |a |v |B ||v | |v |B |
> 13 | a | | | | |a |v |B ||v | |v |B |
> 14 | a | | | | |a |v |B ||v | |v |B |
> 15 | a | | | | |a |v |B ||v | |v |B |
> 16 | a | | | | |a |v |B ||v | |v |B |
> +---------------------------------------------------+
> 17 | a | | | | |a |v |v ||v | |v |v |
> .
> .
> .
> 80 | a | | | | |a |v |v ||v | |v |v |
> +---------------------------------------------------+
>
> (a=audio, v=video, F=FAS, B=BAS).
>
> We see that the video stream is split up over two channels.
> In order to
> cater to this, it seems we have to allow terminations to receive media
> from and send it to multiple physical endpoints. The two approaches
> outlined above can both be extended to allow this. Both
> extensions will
> lead to the introduction of logical names for terminations. In the
> first approach there will be one termination "containing" two
> B-channels
> on one side and three logical streams on the other. In the second
> approach there will be three terminations, the one for the
> video stream
> referencing both B-channels, the ones for signalling and audio
> referencing only channel 1.
>
> The second approach allows us to keep separate contexts for different
> media types. It is then easy to delete, for instance, the
> video part of
> a session (session used loosely to describe the contexts for the audio
> and video).
>
> The first approach groups the streams coming from/going to one user,
> making it possible to remove a user from a context more easily.
>
>
> Personally, I can't decide which approach I would prefer.
> How do others
> feel about these ideas?
>
> Regards,
>
> John Segers
> --
> John Segers email: jsegers(a)lucent.com
> Lucent Technologies Room HE 344
> Dept. Forward Looking Work phone: +31 35 687 4724
> P.O. Box 18, 1270 AA Huizen fax: +31 35 687 5954
>
31 Mar '99
I want to add the issue of T.120 to the multimedia discussion. An effort is
being made in SG16 Q3 to develop T.120 as the control interface for
audiographic conferences using the PSTN. So far we talk about H.323 and
H.245, but what about T.120?
Roni Even
Accord Video Telecommunication
email: roni_e@accord.co.il
-----Original Message-----
From: John Segers <jsegers(a)LUCENT.COM>
To: ITU-SG16(a)MAILBAG.INTEL.COM <ITU-SG16(a)MAILBAG.INTEL.COM>
Date: Wednesday, 31 March 1999 14:13
Subject: Re: Multimedia (was Re: Megaco Protocol - IETF/ITU conf call
minutes)
>Mauricio,
>
>You are right that single medium contexts are simpler to reason about
>services and call flows. But we must take care that we don't end up
>with a protocol that cannot be extended to cover multimedia GWs. If
>multimedia applications are within scope, the protocol document should
>not contain a connection model that handles only audio.
>
>What do others think? Is multimedia in the scope of megaco/H.gcp? And
>if so, is Paul's proposal general enough? Doesn't it lead to too much
>overhead for single media GWs? I'd be interested to hear opinions.
>
>John Segers
>
>"Arango, Mauricio" wrote:
>>
>> Please disregard my previous message, which I sent by mistake without
>> erasing my edits. Apologies to the list members.
>>
>> Paul,
>>
>> I understand the rationale for your multiple media context. My preference
is
>> the single medium context because it seems simpler for thinking about
>> services and call flows. This seems to require more discussion. I suggest
we
>> advance the protocol as proposed with single-medium contexts and revisit
>> this later if necessary.
>>
>> Mauricio
>>
>> > -----Original Message-----
>> > From: Paul Sijben [mailto:sijben@lucent.com]
>> > Sent: Tuesday, March 30, 1999 2:45 PM
>> > To: Arango, Mauricio
>> > Cc: 'Greene, Nancy-M [CAR:5N10:EXCH]'; 'ITU-SG16(a)mailbag.intel.com';
>> > 'megaco(a)baynetworks.com'
>> > Subject: Multimedia (was Re: Megaco Protocol - IETF/ITU conf call
>> > minutes)
>> >
>> >
>> > Mauricio,
>> >
>> > you write:
>> > > It could be good idea to have some entity that ties
>> > together multiple
>> > > contexts, this was our experience in the Touring Machine
>> > and we referred to
>> > > it as "session". However, it may not be necessary for the
>> > most common audio
>> > > only operations, and for the sake of simplicity it may be
>> > better to leave it
>> > > as an option.
>> >
>> > I think you are absolutely right. We have been doing some
>> > work on that and
>> > have come up with the following which seems like a sensible
>> > extention to
>> > the H.gcp/MEGACOP work:
>> >
>> > Problem definition:
>> > For multimedia calls you will have multiple media streams
>> > representing
>> > each of the users. these streams need to be appropriately
>> > processed (maybe
>> > mixed or something more clever) and synchronised.
>> >
>> > The current definition in the MEGACO I-D specifies that each
>> > context only
>> > has one type of medium. This makes the logical grouping of
>> > terminations
>> > belonging to one user difficult, creates issues with synchronisation
>> > between media. It also makes it difficult to remove one user
>> > or an entire
>> > conference in one command.
>> >
>> > We have gone over several possible solutions, you can group
>> > terminations
>> > per user, and/or per medium and/or make some kind of links between
>> > terminations or contexts which signify the synchronisation.
>> >
>> > What seems like a workable model is to allow multiple media
>> > in one context
>> > and have a matrix binding the terminations to the right
>> > medium/user. So you
>> > would get something like:
>> >
>> > context
>> >
>> > medium1 | M2 | ....| Mn
>> > ------------------------------
>> > user1 | x | | | x
>> > user2 | | x | | x
>> > ..........
>> > usern | x | x
>> > where x==termination
>> >
>> > such a matrix could be a "scrap paper" used by the MGW and MGC
>> > to keep track
>> > of to which user and which medium each termination belongs.
>> > One could set a
>> > flag on terminations that one wants to synchronise.
>> >
>> > Naturally you would not need this for simple voice gateways so the
>> > extensions needed to the protocol for this would be optional.
>> >
>> > The extensions would look like:
>> >
>> > Commands:
>> > We now have a command structure that states:
>> >
>> > ContextID
>> > *Command(TerminationID ....)
>> >
>> > we would get:
>> >
>> > ContextID
>> > [UserID]
>> > *command( [MediaID] Termination ID....)
>> >
>> > so for example:
>> > ContextID=3
>> > UserID=2
>> > add (MediaID=Video3, TerminationID=RTP:32424, ....)
>> > modify (MediaID=Video3, TerminationID=*, SynchWith:MediaID=Audio2)
>> >
>> > etc.
>> >
>> > A context could have the following properties:
>> > - mode (unidirectional, bidirectional, mix, choose...)
>> > - maxNumberOfUsers (e.g. 3)
>> > - maxNumberOfMedia (e.g. 1 for a simple voice gateway)
>> >
>> > This extension would allow the removal of one medium or one
>> > user from a
>> > conference but not hamper "simple" operations.
>> >
>> > Paul
>> >
>> > "Arango, Mauricio" wrote:
>> > >
>> > > Nancy,
>> > >
>> > > Thanks for preparing the minutes.
>> > >
>> > > >
>> > > > b. Raised by Mauricio Arango: you may want to be able to
>> > > > create a context
>> > > > with parameters before adding any terminations to it - (e.g.
>> > > > turing machine)
>> > >
>> > > I want to clarify that I was not referring to Alan Turing's
>> > grand work but
>> > > to something colleagues and I did some years ago in
>> > multimedia call control,
>> > > called "Touring Machine". This effort developed a
>> > connection model similar
>> > > to the one in MEGACO's protocol and could be worth keeping
>> > in mind as
>> > > previous experience to help this effort.
>> > >
>> > > The January 1993 issue of the Communications of the ACM has an
>> > article on the
>> > > Touring Machine. Following is an excerpt describing its
>> > connection model
>> > > (page 71):
>> > >
>> > > "The logical topology of a session is specified as a set of
>> > typed connectors
>> > > (see Figure 2). A connector represents a multiway transport
>> > connection among
>> > > endpoints (logical ports). A connection is an abstraction of a
>> > > communications bridge, including point-to-point (two way) as well as
>> > > multipoint connections. Since bridging is a medium-specific
>> > operation,
>> > > connectors are typed by medium. A session may have one or
>> > more connectors
>> > > per medium, with source and sink endpoints from
>> > participating clients. An
>> > > endpoint represents a connector termination point. Endpoints are
>> > > distinguished by medium, direction of flow, and the client
>> > receiving or
>> > > providing transport."
>> > >
>> > > In the above excerpt a "connector" is equivalent to
>> > MEGACO's context and an
>> > > "endpoint" is equivalent to MEGACO's termination.
>> > >
>> > > > c. Raised by a few people: we need to see whether we need a
>> > > > meta definition
>> > > > of a "call" in the MG, so that the MG can link together
>> > > > related contexts.
>> > > > May be needed for lipsynch between a call's video context and
>> > > > its audio
>> > > > context. Instead of introducing a callId, this could be done using
>> > > > parameters on each context referring to the other one.
>> > > > Benefit of a callId
>> > > > is that the MGC can bring down the entire call by just
>> > > > referring to the
>> > > > callId. (Note: "call" may be the wrong word - if we need it,
>> > > > let's try to
>> > > > invent a new name).
>> > > >
>> > >
>> > > It could be good idea to have some entity that ties
>> > together multiple
>> > > contexts, this was our experience in the Touring Machine
>> > and we referred to
>> > > it as "session". However, it may not be necessary for the
>> > most common audio
>> > > only operations, and for the sake of simplicity it may be
>> > better to leave it
>> > > as an option.
>> > >
>> > > Regards,
>> > >
>> > > Mauricio Arango
>> > >
>> > > > -----Original Message-----
>> > > > From: Greene, Nancy-M [CAR:5N10:EXCH]
>> > > > [mailto:ngreene@americasm01.nt.com]
>> > > > Sent: Monday, March 29, 1999 8:54 PM
>> > > > To: 'ITU-SG16(a)mailbag.intel.com'; 'megaco(a)baynetworks.com'
>> > > > Subject: Megaco Protocol - IETF/ITU conf call minutes
>> > > >
>> > > >
>> > > > Here are the minutes from the call today. Please email me any
>> > > > corrections. I
>> > > > tried to pair comments made with the person that made them -
>> > > > I may have made
>> > > > some errors, and I know I missed the names of some of the
>> > > > people commenting.
>> > > >
>> > > >
>> > > >
>> > > > Nancy (ngreene(a)nortelnetworks.com)
>> > > > -----------------------------------------------------------
>> > > >
>> > > > Megaco Protocol presentation to IETF megaco and ITU SG16
>> > participants
>> > > > March 29/99 10am-12pm EST
>> > > > =================================================
>> > > >
>> > > > Present: The number of ports being used went as high as 47,
>> > > > and was about 45
>> > > > on average. Number of participants was higher since a number
>> > > > of people were
>> > > > sharing ports.
>> > > >
>> > > > Chair of the call: Glen Freundlich (ggf(a)lucent.com)
>> > > > Minutes: taken by Nancy Greene (ngreene(a)nortelnetworks.com)
>> > > >
>> > > > NOTE: In keeping with IETF and ITU procedures, the audio call
>> > > > cannot make
>> > > > binding decisions. It is a tool for the chairs and
>> > editors, and the
>> > > > participants of both the IETF and ITU working groups, to
>> > > > gauge support for
>> > > > the protocol, and to raise issues with the protocol. Issues
>> > > > may be raised in
>> > > > either an audio call, or on the mailing list
>> > > > (megaco(a)baynetworks.com) These
>> > > > minutes are going to both the Megaco and ITU-SG16 mailing lists.
>> > > >
>> > > > Meeting summary:
>> > > > --------------------------
>> > > > Glen Freundlich summed up results of the meeting. The goal of
>> > > > the meeting
>> > > > was two fold:
>> > > > 1) it was the first opportunity for people to look at the
>> > > > output from the
>> > > > Megaco protocol design/drafting group.
>> > > > 2) it was an opportunity for people to pose questions on
>> > the draft.
>> > > >
>> > > > He saw no opposition to the connection model proposed in the
>> > > > Megaco protocol
>> > > > I-D
>> > > > (ftp://www.ietf.org/internet-drafts/draft-ietf-megaco-protocol
>> > > > -00.txt).
>> > > > To date, the Megaco protocol I-D has been run through audio
>> > > > call scenarios
>> > > > for 3-way call, for call waiting, ... However, it needs to be
>> > > > tested with
>> > > > multimedia scenarios. Any input in this area is appreciated.
>> > > >
>> > > > Because there seems to be agreement on the connection model,
>> > > > Bryan Hill, the
>> > > > H.gcp editor, will start pulling related sections from the
>> > > > Megaco protocol
>> > > > I-D into H.gcp.
>> > > >
>> > > > Glen will hold another audio call next week to see where we
>> > > > stand, and to
>> > > > look towards adding more sections to H.gcp from the Megaco
>> > > > protocol I-D.
>> > > >
>> > > > For the ITU-T SG16 May meeting, H.gcp needs to contain:
>> > > > - a connection model
>> > > > - a set of commands
>> > > > - a start on parameters for those commands
>> > > > - generic syntax
>> > > > - and a list of issues to resolve.
>> > > >
>> > > >
>> > > > Protocol timetable:
>> > > > --------------------------
>> > > > IETF Megaco:
>> > > > - Proposed Standard by summer/99
>> > > > - interoperability testing in fall/99
>> > > > - Draft Standard in Feb/2000
>> > > >
>> > > > This matches up closely with the ITU-T SG16 schedule:
>> > > > - planning to get H.gcp determined in May/99
>> > > > - planning to get H.gcp decided in Feb/2000
>> > > >
>> > > > IETF/ITU collaboration:
>> > > > -------------------------------
>> > > > Question raised: will the two protocols be identical?
>> > > >
>> > > > Tom Taylor proposes two options:
>> > > > 1) SG16 and Megaco completely agree on the protocol
>> > > > requirements. If this is
>> > > > the case, the protocol will be identical
>> > > >
>> > > > 2) Each group has some requirements that are different from
>> > > > the other group.
>> > > > In this case, there would be agreement on a core
>> > protocol, and then
>> > > > additional parts would be defined by each group.
>> > > >
>> > > > BUT how to do this in practice? Tom said that if a huge
>> > > > roadblock comes up
>> > > > again, we would form a new design team to work out a solution.
>> > > >
>> > > > Tom asked whether the group on the call agreed that the
>> > > > Megaco protocol I-D
>> > > > was a reasonable basis for H.gcp. Feeling was that it is, but
>> > > > that we need
>> > > > to do multimedia call walkthroughs. Mike Buckley had some
>> > > > concern about the
>> > > > definition of a context, but thought that the basic model
>> > is workable.
>> > > > Mauricio Arango thought it was a good model for multimedia.
>> > > > Ami Amir raised
>> > > > issue about associating contexts. Glen asked the H.320
>> > > > experts to look at
>> > > > the protocol and see if we need to modify the connection model.
>> > > >
>> > > >
>> > > > Next audio call:
>> > > > ----------------------
>> > > > Probably next Thursday April 8/99, same time. Confirmation of
>> > > > date & time
>> > > > and the call details will be out later this week.
>> > > >
>> > > >
>> > > > Use of mailing lists:
>> > > > ---------------------------
>> > > > It was agreed that we would try to keep all technical
>> > > > discussion of the
>> > > > Megaco protocol on the megaco(a)baynetworks.com mailing list.
>> > > >
>> > > > Invitations to audio calls, and audio call minutes will be
>> > > > sent to both the
>> > > > megaco(a)baynetworks.com and to the ITU-SG16(a)mailbag.intel.com
>> > > > mailing lists.
>> > > >
>> > > > Tom Taylor took the action of posting to each list, how to
>> > > > join the other
>> > > > one.
>> > > >
>> > > > Detailed minutes:
>> > > > ------------------------
>> > > > Brian Rosen presented the Megaco protocol draft
>> > > > (ftp://www.ietf.org/internet-drafts/draft-ietf-megaco-protocol
>> > > > -00.txt). He
>> > > > first noted that this draft is open for discussion. It is not
>> > > > cast in stone.
>> > > > If changes need to be made to any part, they will be
>> > made, if they are
>> > > > deemed necessary to satisfy the requirements of the protocol,
>> > > > and agreed to
>> > > > by the group.
>> > > >
>> > > > 1. New connection model
>> > > > - concept of a termination - have permanent (e.g. DS0) and
>> > > > ephemeral (e.g.
>> > > > RTP port) terminations
>> > > > - a termination is named with a terminationId, this name can
>> > > > have wildcards,
>> > > > for example to allow the MG to choose the actual physical
>> > > > termination, or to
>> > > > request notification of an event that occurs on any DS0
>> > > > - concept of a context - can add, subtract, modify
>> > terminations in a
>> > > > context. A context is created when the first termination is
>> > > > added to it, and
>> > > > goes away when the last termination is subtracted from it.
>> > > > - a context can have parameters associated with it - for
>> > > > example, a video
>> > > > mixing context may have parameters to describe how the video
>> > > > is to be mixed
>> > > > - mosaic, or current speaker/last speaker, ...
>> > > > - a termination class defines parameters on a typical
>> > > > termination - e.g. DS0
>> > > > Termination Class, RTP Termination Class
>> > > > - Signals can be attached to a termination class, events can
>> > > > occur at a
>> > > > termination class, and the MGC can specify which ones it
>> > wants to be
>> > > > notified about.
>> > > > - packages define signals and events
>> > > > - a termination class can have more than one package that
>> > apply to it
>> > > > - event naming structure allows package name to be put in
>> > > > front of the event
>> > > > name
>> > > > - question: how do you apply a call waiting tone? - answer -
>> > > > it is a signal
>> > > > that you apply to a termination
>> > > >
>> > > > 1.1 Proposed Changes and Issues:
>> > > > a. Tom Taylor proposed a more general definition for
>> > context: every
>> > > > termination in a context has connectivity with all other
>> > > > terminations in the
>> > > > context.
>> > > >
>> > > > b. Raised by Mauricio Arango: you may want to be able to
>> > > > create a context
>> > > > with parameters before adding any terminations to it - (e.g.
>> > > > turing machine)
>> > > > - may want to mark a context with the max # of terminations
>> > > > at creation
>> > > > time. - BUT this may bring lack of flexibility - may be
>> > > > better to let the MG
>> > > > decide, as new terminations are added to a context, whether
>> > > > it is able to
>> > > > mix them in. For example, instead of marking a context with
>> > > > one particular
>> > > > type of codec, it is more flexible to let the MG decide what
>> > > > transcoding
>> > > > needs to be done in a context as a function of the types of
>> > > > terminations
>> > > > added to it.
>> > > >
>> > > > c. Raised by a few people: we need to see whether we need a
>> > > > meta definition
>> > > > of a "call" in the MG, so that the MG can link together
>> > > > related contexts.
>> > > > May be needed for lipsynch between a call's video context and
>> > > > its audio
>> > > > context. Instead of introducing a callId, this could be done using
>> > > > parameters on each context referring to the other one.
>> > > > Benefit of a callId
>> > > > is that the MGC can bring down the entire call by just
>> > > > referring to the
>> > > > callId. (Note: "call" may be the wrong word - if we need it,
>> > > > let's try to
>> > > > invent a new name).
>> > > >
>> > > > d. Raised by Steve Davies: with H.320, the MG has one
>> > > > physical termination
>> > > > with audio, video and data on it - can a termination be in
>> > > > more than one
>> > > > context? no - so need to separate the different media out of
>> > > > this physical
>> > > > termination before creating the contexts.
>> > > >
>> > > > e. Raised by Tom Taylor: need to see whether it is
>> > feasible to have
>> > > > decomposed H.320 gateways - if it is necessary (people say
>> > > > yes), then need
>> > > > to be sure the Megaco protocol can handle it.
>> > > >
>> > > >
>> > > > 2. Commands
>> > > > - grouped into commands within an action per context, and
>> > one or more
>> > > > actions are grouped into a transaction. Transaction is
>> > all or nothing.
>> > > > - Add, Modify, Subtract each can contain local and remote
>> > termination
>> > > > descriptors, signalling descriptor, an events descriptor, and
>> > > > a digit map
>> > > > descriptor
>> > > > - question on how to change an event description for a
>> > > > termination - answer:
>> > > > use Modify - can do it within a context, or outside a context
>> > > > - if it is
>> > > > outside the context (i.e. using the NULL context), then that
>> > > > is how you
>> > > > change the default parameters for that termination.
>> > > > - MGCP's NotificationRequest is now covered in Modify
>> > > >
>> > > > 3. Multimedia and Contexts and Terminations
>> > > > - a termination only belongs to one context
>> > > > - for multimedia, there is one context per media type
>> > > > - just as you have separate RTP flows for different media
>> > > > types, you have
>> > > > separate contexts.
>> > > >
>> > > > 3.1 Issues
>> > > > a. Paul Sijben noted that when you try to use the Megaco
>> > protocol with
>> > > > H.320, there may be problems with the model - Paul will bring
>> > > > these up on
>> > > > the mailing list.
>> > > >
>> > > > Also, see Issue c in 1.1 above.
>> > > >
>> > > > 4. H.245 & SDP
>> > > > - Tom Taylor explained that H.245 has at least 2 purposes: 1)
>> > > > for capability
>> > > > negotiation, and 2) for specification of open logical
>> > > > channel. The scope of
>> > > > the Megaco protocol is 2). Capability negotiation may use
>> > > > H.245 between
>> > > > MGCs, but the Megaco protocol is between MGC and MG. SDP may
>> > > > be good enough
>> > > > to be used between the MGC and the MG. Discussion still open
>> > > > here. Need to
>> > > > allow for an environment where H.245 may not be involved.
>> > > > - with H.263 - H.245 provides a tag to link together 2 RTP
>> > > > streams - need to
>> > > > be able to carry this tag from the MGC to the MG.
>> > > >
>> > > > 5. underspecifying termination descriptors
>> > > > - can be used as a way to tell the MG to use default
>> > values for the
>> > > > termination
>> > > > - a termination learns its default values at MG boot-up time
>> > > > - MGC can change these values using Modify with context
>> > set to NULL.
>> > > >
>> > > > 6. underspecifying terminationIds
>> > > > - provides a way of setting default parameters for a T1
>> > for example
>> > > >
>> > > > 7. Case: two audio contexts - in different MGs, and in the same MG
>> > > > - can the connection model handle this? Yes
>> > > > - the protocol should not need to keep track of where resources are.
>> > > > - it is up to the MGC to know what contexts are associated
>> > > > with a call.
>> > > >
>> > > > 8. overview of Audit, Notify, ServiceChange
>> > > > - ServiceChange - MG can use it upon reboot, to register
>> > > > itself with an MGC
>> > > > - MG knows which MGC to send this msg to from some method
>> > > > outside the scope
>> > > > of this protocol (pre-provisioned, for example).
>> > > > - MGC learns capabilities of MG using Audit. - can learn codecs
>> > > >
>> > > > 8.1 Issue
>> > > > - need a way for MGC to learn the QUANTITY of codecs an MG has!
>> > > >
>> > > > 9. overview of Security
>> > > > - this is security between an MGC and an MG
>> > > > - interim method is specified for when an MG does not have IPSEC
>> > > >
>> > > > 10. Question on scope of the protocol
>> > > > - check the requirements draft:
>> > > > ftp://standards.nortelnetworks.com/megaco/docs/minn99
>> > > > - Steve Davies brought up point that some aspects of the
>> > > > Megaco protocol
>> > > > overlap with Policy server protocol proposals.
>> > > >
>> > > > 11. Can the MG do codec renegotiation?
>> > > > - David Featherstone asked if an MG can negotiate use of a
>> > > > new codec on its
>> > > > own
>> > > > - Brian Rosen answered that with AAL2 profiles, the MG can
>> > > >
>> > > > Issue:
>> > > > a. MGC may need to be involved - for example if the Quality
>> > > > goes down too
>> > > > low, call may no longer be billable. Solution - create an
>> > > > event to notify
>> > > > the MGC for this case.
>> > > >
>> > > > 12. How does the MGC find out RTP interfaces?
>> > > > - should be able to do this using Audit in the NULL context.
>> > > >
>> > > > 13. IVR - still open for discussion
>> > > > - view IVR as a termination
>> > > > - issue is how much signalling effort do you need to add to
>> > > > the protocol?
>> > > > - simply playing msgs is ok - they just look like events
>> > > > - problem is when you want to put time and money values into
>> > > > it for example,
>> > > > you have x minutes left on your calling card. - this may be
>> > > > out of the scope
>> > > > of the Megaco protocol
>> > > >
>> > > > 14. QoS reservation
>> > > > - needs more discussion
>> > > >
>> > > > ****see Meeting summary above for audio call conclusions.
>> > > >
>> > > > end of minutes.
>> > > >
>> > > >
>> > > >
>> > > >
>> > > >
>> > > > --------------------------------------------------------------
>> > > > ------------
>> > > > Nancy M. Greene
>> > > > Internet & Service Provider Networks, Nortel Networks
>> > > > T:514-271-7221 (internal:ESN853-1077) E:ngreene@nortelnetworks.com
>> > > >
>> >
>> > --
>> > Paul Sijben Telno:+ 31 35 687 4774 Fax:+31 35 687 5954
>> > Lucent Technologies Home telno: +31 33 4557522
>> > Forward Looking Work e-mail: sijben(a)lucent.com
>> > Huizen, The Netherlands internal http://hzsgp68.nl.lucent.com/
>> >
>
>--
>John Segers email: jsegers(a)lucent.com
>Lucent Technologies Room HE 344
>Dept. Forward Looking Work phone: +31 35 687 4724
>P.O. Box 18, 1270 AA Huizen fax: +31 35 687 5954
>
Re: Multimedia (was Re: Megaco Protocol - IETF/ITU conf call minutes)
by Arango, Mauricio 31 Mar '99
Please disregard my previous message, which I sent by mistake without erasing
my earlier edits. Apologies to the list members.
Paul,
I understand the rationale for your multiple-media context. My preference is
the single-medium context because it seems simpler for thinking about
services and call flows. This seems to require more discussion. I suggest we
advance the protocol as proposed with single-medium contexts and revisit
this later if necessary.
Mauricio
> -----Original Message-----
> From: Paul Sijben [mailto:sijben@lucent.com]
> Sent: Tuesday, March 30, 1999 2:45 PM
> To: Arango, Mauricio
> Cc: 'Greene, Nancy-M [CAR:5N10:EXCH]'; 'ITU-SG16(a)mailbag.intel.com';
> 'megaco(a)baynetworks.com'
> Subject: Multimedia (was Re: Megaco Protocol - IETF/ITU conf call
> minutes)
>
>
> Mauricio,
>
> you write:
> > It could be a good idea to have some entity that ties
> together multiple
> > contexts, this was our experience in the Touring Machine
> and we referred to
> > it as "session". However, it may not be necessary for the
> most common audio
> > only operations, and for the sake of simplicity it may be
> better to leave it
> > as an option.
>
> I think you are absolutely right. We have been doing some
> work on that and
> have come up with the following which seems like a sensible
> extension to
> the H.gcp/MEGACOP work:
>
> Problem definition:
> For multimedia calls you will have multiple media streams
> representing
> each of the users. These streams need to be appropriately
> processed (maybe
> mixed or something more clever) and synchronised.
>
> The current definition in the MEGACO I-D specifies that each
> context only
> has one type of medium. This makes the logical grouping of
> terminations
> belonging to one user difficult and creates issues with synchronisation
> between media. It also makes it difficult to remove one user
> or an entire
> conference in one command.
>
> We have gone over several possible solutions, you can group
> terminations
> per user, and/or per medium and/or make some kind of links between
> terminations or contexts which signify the synchronisation.
>
> What seems like a workable model is to allow multiple media
> in one context
> and have a matrix binding the terminations to the right
> medium/user. So you
> would get something like:
>
> context
>
>         | medium1 | M2 | ... | Mn
> --------+---------+----+-----+----
>  user1  |    x    |    |     | x
>  user2  |         | x  |     | x
>   ...   |         |    |     |
>  usern  |    x    | x  |     |
> where x == termination
>
> such a matrix could be a "scrap paper" used by the MGW and MGC
> to keep track
> of which user and which medium each termination belongs to.
> One could set a
> flag on terminations that one wants to synchronise.
>
> Naturally you would not need this for simple voice gateways, so the
> extensions needed to the protocol for this would be optional.
>
> The extensions would look like:
>
> Commands:
> We now have a command structure that states:
>
> ContextID
> *Command(TerminationID ....)
>
> we would get:
>
> ContextID
> [UserID]
> *command( [MediaID] Termination ID....)
>
> so for example:
> ContextID=3
> UserID=2
> add (MediaID=Video3, TerminationID=RTP:32424, ....)
> modify (MediaID=Video3, TerminationID=*, SynchWith:MediaID=Audio2)
>
> etc.
>
> A context could have the following properties:
> - mode (unidirectional, bidirectional, mix, choose...)
> - maxNumberOfUsers (e.g. 3)
> - maxNumberOfMedia (e.g. 1 for a simple voice gateway)
>
> This extension would allow the removal of one medium or one
> user from a
> conference but not hamper "simple" operations.
>
> Paul
>
> "Arango, Mauricio" wrote:
> >
> > Nancy,
> >
> > Thanks for preparing the minutes.
> >
> > >
> > > b. Raised by Mauricio Arango: you may want to be able to
> > > create a context
> > > with parameters before adding any terminations to it - (e.g.
> > > turing machine)
> >
> > I want to clarify that I was not referring to Alan Turing's
> grand work but
> > to something colleagues and I did some years ago in
> multimedia call control,
> > called "Touring Machine". This effort developed a
> connection model similar
> > to the one in MEGACO's protocol and could be worth keeping
> in mind as
> > previous experience to help this effort.
> >
> > The January 1993 issue of the Communications of the ACM has an
> article on the
> > Touring Machine. Following is an excerpt describing its
> connection model
> > (page 71):
> >
> > "The logical topology of a session is specified as a set of
> typed connectors
> > (see Figure 2). A connector represents a multiway transport
> connection among
> > endpoints (logical ports). A connection is an abstraction of a
> > communications bridge, including point-to-point (two way) as well as
> > multipoint connections. Since bridging is a medium-specific
> operation,
> > connectors are typed by medium. A session may have one or
> more connectors
> > per medium, with source and sink endpoints from
> participating clients. An
> > endpoint represents a connector termination point. Endpoints are
> > distinguished by medium, direction of flow, and the client
> receiving or
> > providing transport."
> >
> > In the above excerpt a "connector" is equivalent to
> MEGACO's context and an
> > "endpoint" is equivalent to MEGACO's termination.
> >
> > > c. Raised by a few people: we need to see whether we need a
> > > meta definition
> > > of a "call" in the MG, so that the MG can link together
> > > related contexts.
> > > May be needed for lipsynch between a call's video context and
> > > its audio
> > > context. Instead of introducing a callId, this could be done using
> > > parameters on each context referring to the other one.
> > > Benefit of a callId
> > > is that the MGC can bring down the entire call by just
> > > referring to the
> > > callId. (Note: "call" may be the wrong word - if we need it,
> > > let's try to
> > > invent a new name).
> > >
> >
> > It could be a good idea to have some entity that ties
> together multiple
> > contexts, this was our experience in the Touring Machine
> and we referred to
> > it as "session". However, it may not be necessary for the
> most common audio
> > only operations, and for the sake of simplicity it may be
> better to leave it
> > as an option.
> >
> > Regards,
> >
> > Mauricio Arango
> >
>
> --
> Paul Sijben Telno:+ 31 35 687 4774 Fax:+31 35 687 5954
> Lucent Technologies Home telno: +31 33 4557522
> Forward Looking Work e-mail: sijben(a)lucent.com
> Huizen, The Netherlands internal http://hzsgp68.nl.lucent.com/
>
People,
In yesterday's conference call, the subject of H.320 GWs was raised
briefly. In my opinion, the connection model and protocol should be able
to deal with H.320. I would like to continue discussion on it on the
mailing list.
H.320 allows a user to have a session with both audio and video on a
single 64 kbit/s channel such as an ISDN B-channel. The same channel
carries some signalling information (frame alignment, bitrate
allocation). To an MG supporting H.320, this means that on a single
endpoint, three streams can come in, carrying different types of media.
The current connection model of megaco/H.gcp does not cater to this. I
see two possible solutions:
The first is to allow multiple media in one context and to describe for
terminations the logical streams they carry. In a picture:
              +----------+
              |          |
              |          +--------- signalling (FAS, BAS)
              |          |
B-channel ====+          +--------- audio (16 kbit/s)
              |          |
              |          +--------- video (46.4 kbit/s)
              |          |
              +----------+
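The fan-out sketched in the picture can be illustrated with a toy demultiplexer. The bit positions below are invented for illustration (the real H.221 allocation varies per frame row, as the frame-structure table further down shows); the point is only that one 64 kbit/s octet stream splits into three logical media streams, e.g. two bits of each octet yielding 2/8 x 64 = 16 kbit/s of audio.

```python
def demux_octets(octets):
    """Split a stream of 8-bit samples from a 64 kbit/s channel into
    three sub-bitstreams.  Bit positions are illustrative only, not
    the real H.221 allocation (which varies per frame row)."""
    audio, video, signalling = [], [], []
    for octet in octets:
        # Number bits 1..8, most significant bit first.
        bits = [(octet >> (7 - i)) & 1 for i in range(8)]
        audio.extend(bits[0:2])     # bits 1-2 -> ~16 kbit/s audio
        video.extend(bits[2:7])     # bits 3-7 -> video
        signalling.append(bits[7])  # bit 8  -> FAS/BAS signalling
    return audio, video, signalling

a, v, s = demux_octets([0b11000001, 0b00111110])
```

Either solution below must expose the three resulting streams to the MGC in some form; the demultiplexing itself stays inside the MG.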
The second solution is to have separate terminations for the different
streams. They would all "connect to" the same physical endpoint. In
order to properly identify the terminations, it is necessary to have
logical names for them. The physical endpoint they connect to may have the
hierarchical name proposed in the megaco document.
Another example of an H.320 session is the case of two B-channels being
used for an audiovisual call. The following frame structure is then
possible.
   +--------------------------++-----------------------+
   |        Channel 1         ||       Channel 2       |
   +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
   |Bit 1|B2|B3|B4|B5|B6|B7|B8||B1|B2|B3|B4|B5|B6|B7|B8|
   +-----+--+--+--+--+--+--+--++--+--+--+--+--+--+--+--+
 1 | a1  |a2|a3|a4|a5|a6|v1|F ||v2|v3|v4|v5|v6|v7|v8|F |
 2 | a7  |a8|a9|a |a |a |v9|F ||v |v |v |v |v |v |v |F |
 3 | a   |a |a |a |a |a |v |F ||v |v |v |v |v |v |v |F |
 4 | a   |  |  |  |  |a |v |F ||v |  |  |  |  |  |v |F |
 5 | a   |  |  |  |  |a |v |F ||v |  |  |  |  |  |v |F |
 6 | a   |  |  |  |  |a |v |F ||v |  |  |  |  |  |v |F |
 7 | a   |  |  |  |  |a |v |F ||v |  |  |  |  |  |v |F |
 8 | a   |  |  |  |  |a |v |F ||v |  |  |  |  |  |v |F |
   +---------------------------------------------------+
 9 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
10 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
11 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
12 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
13 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
14 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
15 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
16 | a   |  |  |  |  |a |v |B ||v |  |  |  |  |  |v |B |
   +---------------------------------------------------+
17 | a   |  |  |  |  |a |v |v ||v |  |  |  |  |  |v |v |
 .
 .
 .
80 | a   |  |  |  |  |a |v |v ||v |  |  |  |  |  |v |v |
   +---------------------------------------------------+
(a=audio, v=video, F=FAS, B=BAS).
We see that the video stream is split up over two channels. In order to
cater to this, it seems we have to allow terminations to receive media
from and send it to multiple physical endpoints. The two approaches
outlined above can both be extended to allow this. Both extensions will
lead to the introduction of logical names for terminations. In the
first approach there will be one termination "containing" two B-channels
on one side and three logical streams on the other. In the second
approach there will be three terminations, the one for the video stream
referencing both B-channels, the ones for signalling and audio
referencing only channel 1.
The second approach allows us to keep separate contexts for different
media types. It is then easy to delete, for instance, the video part of
a session (session used loosely to describe the contexts for the audio
and video).
The first approach groups the streams coming from/going to one user,
making it possible to remove a user from a context more easily.
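For concreteness, the two approaches might be modelled along the following lines. This is a sketch only: the class and field names are invented and do not reflect any agreed protocol syntax.

```python
from dataclasses import dataclass, field

# Approach 1: one termination carries several logical streams and may
# span several physical endpoints (e.g. both B-channels).
@dataclass
class MultiStreamTermination:
    physical_endpoints: list                      # e.g. ["DS0/1/1", "DS0/1/2"]
    streams: dict = field(default_factory=dict)   # stream name -> medium

# Approach 2: one termination per logical stream, each with a logical
# name, referencing the physical endpoint(s) it draws bits from.
@dataclass
class LogicalTermination:
    name: str               # logical name, e.g. "h320/video"
    medium: str
    physical_endpoints: list

# The two-B-channel example above, expressed both ways:
b_channels = MultiStreamTermination(
    physical_endpoints=["DS0/1/1", "DS0/1/2"],
    streams={"s1": "signalling", "s2": "audio", "s3": "video"},
)
video = LogicalTermination("h320/video", "video", ["DS0/1/1", "DS0/1/2"])
audio = LogicalTermination("h320/audio", "audio", ["DS0/1/1"])
```

Note how the trade-off shows up directly: removing a user means deleting one MultiStreamTermination in the first model, but collecting every LogicalTermination of that user in the second.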
Personally, I can't decide which approach I would prefer. How do others
feel about these ideas?
Regards,
John Segers
--
John Segers email: jsegers(a)lucent.com
Lucent Technologies Room HE 344
Dept. Forward Looking Work phone: +31 35 687 4724
P.O. Box 18, 1270 AA Huizen fax: +31 35 687 5954
29 Mar '99
Here are the minutes from the call today. Please email me any corrections. I
tried to pair comments made with the person that made them - I may have made
some errors, and I know I missed the names of some of the people commenting.
Nancy (ngreene(a)nortelnetworks.com)
-----------------------------------------------------------
Megaco Protocol presentation to IETF megaco and ITU SG16 participants
March 29/99 10am-12pm EST
=================================================
Present: The number of ports being used went as high as 47, and was about 45
on average. Number of participants was higher since a number of people were
sharing ports.
Chair of the call: Glen Freundlich (ggf(a)lucent.com)
Minutes: taken by Nancy Greene (ngreene(a)nortelnetworks.com)
NOTE: In keeping with IETF and ITU procedures, the audio call cannot make
binding decisions. It is a tool for the chairs and editors, and the
participants of both the IETF and ITU working groups, to gauge support for
the protocol, and to raise issues with the protocol. Issues may be raised in
either an audio call, or on the mailing list (megaco(a)baynetworks.com). These
minutes are going to both the Megaco and ITU-SG16 mailing lists.
Meeting summary:
--------------------------
Glen Freundlich summed up results of the meeting. The goal of the meeting
was twofold:
1) it was the first opportunity for people to look at the output from the
Megaco protocol design/drafting group.
2) it was an opportunity for people to pose questions on the draft.
He saw no opposition to the connection model proposed in the Megaco protocol
I-D (ftp://www.ietf.org/internet-drafts/draft-ietf-megaco-protocol-00.txt).
To date, the Megaco protocol I-D has been run through audio call scenarios
for 3-way call, for call waiting, ... However, it needs to be tested with
multimedia scenarios. Any input in this area is appreciated.
Because there seems to be agreement on the connection model, Bryan Hill, the
H.gcp editor, will start pulling related sections from the Megaco protocol
I-D into H.gcp.
Glen will hold another audio call next week to see where we stand, and to
look towards adding more sections to H.gcp from the Megaco protocol I-D.
For the ITU-T SG16 May meeting, H.gcp needs to contain:
- a connection model
- a set of commands
- a start on parameters for those commands
- generic syntax
- and a list of issues to resolve.
Protocol timetable:
--------------------------
IETF Megaco:
- Proposed Standard by summer/99
- interoperability testing in fall/99
- Draft Standard in Feb/2000
This matches up closely with the ITU-T SG16 schedule:
- planning to get H.gcp determined in May/99
- planning to get H.gcp decided in Feb/2000
IETF/ITU collaboration:
-------------------------------
Question raised: will the two protocols be identical?
Tom Taylor proposes two options:
1) SG16 and Megaco completely agree on the protocol requirements. If this is
the case, the protocol will be identical
2) Each group has some requirements that are different from the other group.
In this case, there would be agreement on a core protocol, and then
additional parts would be defined by each group.
BUT how to do this in practice? Tom said that if a huge roadblock comes up
again, we would form a new design team to work out a solution.
Tom asked whether the group on the call agreed that the Megaco protocol I-D
was a reasonable basis for H.gcp. Feeling was that it is, but that we need
to do multimedia call walkthroughs. Mike Buckley had some concern about the
definition of a context, but thought that the basic model is workable.
Mauricio Arango thought it was a good model for multimedia. Ami Amir raised
issue about associating contexts. Glen asked the H.320 experts to look at
the protocol and see if we need to modify the connection model.
Next audio call:
----------------------
Probably next Thursday April 8/99, same time. Confirmation of date & time
and the call details will be out later this week.
Use of mailing lists:
---------------------------
It was agreed that we would try to keep all technical discussion of the
Megaco protocol on the megaco(a)baynetworks.com mailing list.
Invitations to audio calls, and audio call minutes will be sent to both the
megaco(a)baynetworks.com and to the ITU-SG16(a)mailbag.intel.com mailing lists.
Tom Taylor took the action of posting to each list how to join the other
one.
Detailed minutes:
------------------------
Brian Rosen presented the Megaco protocol draft
(ftp://www.ietf.org/internet-drafts/draft-ietf-megaco-protocol-00.txt). He
first noted that this draft is open for discussion. It is not cast in stone.
If changes need to be made to any part, they will be made, if they are
deemed necessary to satisfy the requirements of the protocol, and agreed to
by the group.
1. New connection model
- concept of a termination - have permanent (e.g. DS0) and ephemeral (e.g.
RTP port) terminations
- a termination is named with a terminationId; this name can have wildcards,
for example to allow the MG to choose the actual physical termination, or to
request notification of an event that occurs on any DS0
- concept of a context - can add, subtract, modify terminations in a
context. A context is created when the first termination is added to it, and
goes away when the last termination is subtracted from it.
- a context can have parameters associated with it - for example, a video
mixing context may have parameters to describe how the video is to be mixed
- mosaic, or current speaker/last speaker, ...
- a termination class defines parameters on a typical termination - e.g. DS0
Termination Class, RTP Termination Class
- Signals can be attached to a termination class, events can occur at a
termination class, and the MGC can specify which ones it wants to be
notified about.
- packages define signals and events
- a termination class can have more than one package that apply to it
- event naming structure allows package name to be put in front of the event
name
- question: how do you apply a call waiting tone? - answer - it is a signal
that you apply to a termination
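The termination/context lifecycle described above can be sketched in a few lines of Python. This is an illustrative sketch only: the class shapes, the terminationId syntax, and the use of fnmatch-style globbing for wildcards are assumptions made here, not details taken from the draft.

```python
import fnmatch

class Context:
    """Groups terminations that have connectivity with one another.
    May carry parameters, e.g. how a video-mixing context mixes video."""
    def __init__(self, context_id, **parameters):
        self.context_id = context_id
        self.parameters = parameters
        self.terminations = set()

class MediaGateway:
    """Toy MG: a pool of terminations plus the contexts that hold them."""
    def __init__(self, termination_ids):
        self.pool = set(termination_ids)   # permanent (DS0) or ephemeral (RTP)
        self.contexts = {}
        self._next_id = 1

    def _in_use(self, term):
        return any(term in c.terminations for c in self.contexts.values())

    def add(self, pattern, context_id=None):
        # A wildcarded terminationId lets the MG choose the actual termination.
        free = sorted(t for t in self.pool
                      if fnmatch.fnmatch(t, pattern) and not self._in_use(t))
        if not free:
            raise LookupError("no free termination matches %r" % pattern)
        if context_id is None:             # first Add creates the context
            context_id = self._next_id
            self._next_id += 1
            self.contexts[context_id] = Context(context_id)
        self.contexts[context_id].terminations.add(free[0])
        return context_id, free[0]

    def subtract(self, termination_id, context_id):
        ctx = self.contexts[context_id]
        ctx.terminations.discard(termination_id)
        if not ctx.terminations:           # last Subtract destroys the context
            del self.contexts[context_id]
```

For a simple two-party audio call, the MGC would Add a DS0 termination (which creates the context) and then Add an RTP termination into the same context; subtracting both brings the context down again.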
1.1 Proposed Changes and Issues:
a. Tom Taylor proposed a more general definition for context: every
termination in a context has connectivity with all other terminations in the
context.
b. Raised by Mauricio Arango: you may want to be able to create a context
with parameters before adding any terminations to it - (e.g. turing machine)
- may want to mark a context with the max # of terminations at creation
time. - BUT this may bring lack of flexibility - may be better to let the MG
decide, as new terminations are added to a context, whether it is able to
mix them in. For example, instead of marking a context with one particular
type of codec, it is more flexible to let the MG decide what transcoding
needs to be done in a context as a function of the types of terminations
added to it.
c. Raised by a few people: we need to see whether we need a meta definition
of a "call" in the MG, so that the MG can link together related contexts.
May be needed for lipsynch between a call's video context and its audio
context. Instead of introducing a callId, this could be done using
parameters on each context referring to the other one. Benefit of a callId
is that the MGC can bring down the entire call by just referring to the
callId. (Note: "call" may be the wrong word - if we need it, let's try to
invent a new name).
d. Raised by Steve Davies: with H.320, the MG has one physical termination
with audio, video and data on it - can a termination be in more than one
context? no - so need to separate the different media out of this physical
termination before creating the contexts.
e. Raised by Tom Taylor: need to see whether it is feasible to have
decomposed H.320 gateways - if it is necessary (people say yes), then need
to be sure the Megaco protocol can handle it.
2. Commands
- grouped into commands within an action per context, and one or more
actions are grouped into a transaction. Transaction is all or nothing.
- Add, Modify, and Subtract can each contain local and remote termination
descriptors, a signalling descriptor, an events descriptor, and a digit map
descriptor
- question on how to change an event description for a termination - answer:
use Modify - can do it within a context, or outside a context - if it is
outside the context (i.e. using the NULL context), then that is how you
change the default parameters for that termination.
- MGCP's NotificationRequest is now covered in Modify
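The all-or-nothing transaction semantics above can be sketched as follows. The state representation and command tuples are assumptions made for illustration (the draft's actual syntax and descriptors differ); the point is that a failure anywhere in the transaction rolls back every action already applied.

```python
import copy

class TransactionFailed(Exception):
    pass

def run_transaction(mg_state, actions):
    """Apply a transaction: one or more actions, each a list of commands
    against one context. All-or-nothing, as the minutes describe."""
    snapshot = copy.deepcopy(mg_state)     # saved for rollback
    try:
        for context_id, commands in actions:
            ctx = mg_state.setdefault(context_id, {})
            for verb, termination, descriptors in commands:
                if verb == "Add":
                    ctx[termination] = dict(descriptors)
                elif verb == "Modify":
                    if termination not in ctx:
                        raise TransactionFailed("unknown termination %r" % termination)
                    ctx[termination].update(descriptors)
                elif verb == "Subtract":
                    ctx.pop(termination)   # KeyError if not present
                else:
                    raise TransactionFailed("unknown command %r" % verb)
    except (KeyError, TransactionFailed):
        mg_state.clear()
        mg_state.update(snapshot)          # restore pre-transaction state
        return False
    return True
```

A transaction that Modifies one termination and then Subtracts a nonexistent one leaves the gateway state exactly as it was before the transaction started.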
3. Multimedia and Contexts and Terminations
- a termination only belongs to one context
- for multimedia, there is one context per media type
- just as you have separate RTP flows for different media types, you have
separate contexts.
3.1 Issues
a. Paul Sijben noted that when you try to use the Megaco protocol with
H.320, there may be problems with the model - Paul will bring these up on
the mailing list.
Also, see Issue c in 1.1 above.
4. H.245 & SDP
- Tom Taylor explained that H.245 has at least 2 purposes: 1) for capability
negotiation, and 2) for specification of open logical channel. The scope of
the Megaco protocol is 2). Capability negotiation may use H.245 between
MGCs, but the Megaco protocol is between MGC and MG. SDP may be good enough
to be used between the MGC and the MG. Discussion still open here. Need to
allow for an environment where H.245 may not be involved.
- with H.263 - H.245 provides a tag to link together 2 RTP streams - need to
be able to carry this tag from the MGC to the MG.
5. underspecifying termination descriptors
- can be used as a way to tell the MG to use default values for the
termination
- a termination learns its default values at MG boot-up time
- MGC can change these values using Modify with context set to NULL.
6. underspecifying terminationIds
- provides a way of setting default parameters for a T1 for example
7. Case: two audio contexts - in different MGs, and in the same MG
- can the connection model handle this? Yes
- the protocol itself should not need to keep track of where resources are.
- it is up to the MGC to know what contexts are associated with a call.
8. overview of Audit, Notify, ServiceChange
- ServiceChange - MG can use it upon reboot, to register itself with an MGC
- MG knows which MGC to send this msg to from some method outside the scope
of this protocol (pre-provisioned, for example).
- MGC learns capabilities of MG using Audit. - can learn codecs
8.1 Issue
- need a way for MGC to learn the QUANTITY of codecs an MG has!
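A rough sketch of the registration and audit flow described in section 8, with invented method names (the real ServiceChange and Audit commands carry much more structure). Reporting a per-codec count, rather than just a codec list, is one way to address the 8.1 issue.

```python
class MGC:
    """Toy MGC that records gateways as they register."""
    def __init__(self):
        self.gateways = {}

    def service_change(self, mg_id, reason, codec_counts):
        # The MG announces itself, e.g. after a reboot.
        self.gateways[mg_id] = dict(codec_counts)

    def audit(self, mg_id):
        # Returns the codecs and, per issue 8.1, how many of each the MG has.
        return self.gateways[mg_id]

class MG:
    """Toy MG. Which MGC to register with is pre-provisioned,
    outside the scope of the protocol, as the minutes note."""
    def __init__(self, mg_id, mgc, codec_counts):
        self.mg_id, self.mgc, self.codec_counts = mg_id, mgc, codec_counts

    def boot(self):
        self.mgc.service_change(self.mg_id, "restart", self.codec_counts)
```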
9. overview of Security
- this is security between an MGC and an MG
- interim method is specified for when an MG does not have IPSEC
10. Question on scope of the protocol
- check the requirements draft:
ftp://standards.nortelnetworks.com/megaco/docs/minn99
- Steve Davies brought up point that some aspects of the Megaco protocol
overlap with Policy server protocol proposals.
11. Can an MG do codec renegotiation?
- David Featherstone asked if an MG can negotiate use of a new codec on its
own
- Brian Rosen answered that with AAL2 profiles, the MG can
Issue:
a. The MGC may need to be involved - for example, if the quality drops too
low, the call may no longer be billable. Solution: create an event to notify
the MGC for this case.
12. How does the MGC find out RTP interfaces?
- should be able to do this using Audit in the NULL context.
13. IVR - still open for discussion
- view IVR as a termination
- issue is how much signalling effort do you need to add to the protocol?
- simply playing msgs is ok - they just look like events
- problem is when you want to put time and money values into it for example,
you have x minutes left on your calling card. - this may be out of the scope
of the Megaco protocol
14. QoS reservation
- needs more discussion
****see Meeting summary above for audio call conclusions.
end of minutes.
--------------------------------------------------------------------------
Nancy M. Greene
Internet & Service Provider Networks, Nortel Networks
T:514-271-7221 (internal:ESN853-1077) E:ngreene@nortelnetworks.com
29 Mar '99
Just a reminder of the call details, and that the passcode is 131313# for
the ITU/IETF audio call today.
> CALL DETAILS:
> DATE: Monday, March 29/99
> TIME: 10amEST till 12pmEST
> NUMBER: 613-763-6338
> PASSCODE: 131313#
> CHAIR: Nancy Greene
> problems joining? Call 613-765-CONF (613-765-2663)
>
There are 90 ports, and no more than that.
If you cannot get on the call because there are not enough ports, please be
assured that minutes of the call will be published by the end of Monday EST.
PLEASE share ports where possible if more than one person is calling from
one location.
If you have a comment to make, but can't make yourself heard on the call, or
if you can't be on the call at all, just put your comment in an email msg to
the mailing lists.
Nancy
--------------------------------------------------------------------------
Nancy M. Greene
Internet & Service Provider Networks, Nortel Networks
T:514-271-7221 (internal:ESN853-1077) E:ngreene@nortelnetworks.com
> IMPORTANT:
> *****The audio call cannot make any decisions. There will
> certainly be people that won't be available for the call.
> *****Comments sent to the mailing lists carry equal weight
> to any discussions held in the audio call.
> *****The minutes of the audio call will be published to the
> mailing lists.
> Ideally, the SG16 group would agree to as much of the
> internet-draft as they can, and turn it into the current version of H.GCP.
> The more we agree on between the two groups the better.
> CALL DETAILS:
> DATE: Monday, March 29/99
> TIME: 10amEST till 12pmEST
> NUMBER: 613-763-6338
> PASSCODE: 131313#
> CHAIR: Nancy Greene
>
> problems joining? Call 613-765-CONF (613-765-2663)
> 90 ports have been booked.
> NOTES:
> * because of the potentially large # of people, no
> tones will be used to mark people joining or leaving
> * to improve the voice quality, it is important to
> mute your phone when you are not speaking - press 63 to mute, and 66 to
> unmute.
>
> Nancy Greene Bryan Hill
> ngreene(a)nortelnetworks.com bhill(a)videoserver.com
>
>
>
I read the Draft Megaco Protocol proposal. It still makes me wonder how
multimedia calls are going to be handled. I am sorry that I will not be
able to attend the audio call because of another meeting, but I have two
questions I would like to raise.
1. Will the relation between a video and an audio stream be managed in the
media gateway controller?
2. What about T.120? Is it part of the media streams, and if so, how is it
decomposed between the media gateway and the media gateway controller?
Where do we run MCS and GCC?
Roni Even
**********************************************
Roni Even
VP Product Marketing
Accord Video Telecommunication
Email: roni_e(a)accord.co.il
Tel: +972-3-9251412
Fax: +972-3-9211571
***************************************************
Attached are two notes giving the outcome of a review of requirements at the
Megaco meeting two weeks ago. I'm sorry I didn't send this material out to
the SG 16 list earlier. I also attach a set of ATM-related requirements
which the Multi-Service Switching Forum (MSF) provided to Megaco.
It's important that we determine whether Q. 14/16 and Megaco have the same
view of requirements. I've asked Glen Freundlich for time to probe the
issue on tomorrow's H.GCP conference call. The dialogue can continue by
E-mail. In the end, I hope we can determine a core set of requirements
agreed by both groups, plus, if necessary, additional separately-documented
requirements which are specific to one group or the other.
<<Megaco Requirements>> <<Requirements Part II>>
<<draft-ietf-megaco-msf-reqs-00.txt>>
Tom Taylor
E-mail: taylor(a)nortelnetworks.com (internally Tom-PT Taylor)
Tel.: +1 613 765 4167 (internally 395-4167)
This is frequently forwarded to my residence.
Hi folks,
I put the latest document in the avc-site\incoming directory on the
picturetel site as h341wht7.[zip,doc]. The changes have been those of an
editorial nature resulting from email of Mr. Shulman, Ms. Gafni, and Ms.
Shah. The document includes content changes approved from the Monterey
SG-16 meeting. Please review the document for correctness.
The zipped version is also inserted below:
Thanks,
George Kajos
gkajos(a)videoserver.com
VideoServer Inc.
63 Third Avenue
Burlington, MA 01803
phone: 781-505-2193
fax: 781-505-2101
Paul,
you got my points exactly right. The reason that I weakened my arguments by
hinting at the ease of upgrade in an IP environment was to avoid people
coming back with this line of argument. Although using "DTMF digits" as the
means to access services may not be the most elegant one, it is probably the
simplest and it maps well onto a simple "blackphone" type of terminal. If we
do not provide for something fairly simple (both for the users and the
service providers) then the chances are that service providers will come up
with "(IP) Telephony unrelated" means to access services. E.g. they could
provide Web pages that allow customers to specify their service behaviour.
Frank
-----Original Message-----
From: Paul E. Jones [mailto:paul.jones@TIES.ITU.INT]
Sent: 22 March 1999 19:11
To: ITU-SG16(a)MAILBAG.INTEL.COM
Subject: Re: AW: Call hold and transfer in H.323 AnnexF. Too limited??
Frank,
I do agree that two mechanisms for accomplishing the same task is a bad
idea. I, too, would rather see one mechanism employed-- we do want to
create interoperable equipment, after all. Unfortunately, we already have
two ways of doing "call hold"-- H.450.4 and "empty capability sets" (see
7.6.2 of Annex F).
The issues you raise with supplementary services echo the concerns of
those also participating in the TIPHON work. Essentially, service providers
would like to be able to add new services without upgrading software in the
endpoints. Although it may be possible to upgrade IP phone devices, I can
assure you that the average person would never do that-- once the phone is
plugged in, it will stay there until it stops working. More importantly,
why would one want to require somebody who purchased a hardware phone device
to upgrade periodically?
We need to engineer a solution so that the telephony service providers can
introduce new services without requiring software upgrades in the endpoints.
I would like to see SET devices take advantage of those newly introduced
services without software or hardware upgrades.
Paul
-----Original Message-----
From: Derks, Frank <F.Derks(a)PBC.BE.PHILIPS.COM>
To: ITU-SG16(a)MAILBAG.INTEL.COM <ITU-SG16(a)MAILBAG.INTEL.COM>
Date: Monday, March 22, 1999 5:45 AM
Subject: Re: AW: Call hold and transfer in H.323 AnnexF. Too limited??
>Folks,
>
>when talking about a Simple Endpoint Type, I think we should aim for it to
>be something that closely resembles a black phone. This way it becomes a
>lot easier to define what its capabilities are and it makes life easy on
>the users and on those companies that will actually make (physical)
>IP-phones. These phones should probably look and act like the normal
>phones that are currently being used. Looking at how most supplementary
>services are accessed in both the public and the private (PBX) networks, I
>think it is safe to say that in most cases we are talking about "stimulus
>protocols". I.e. DTMF digits are sent to an exchange and the exchange
>interprets certain digit sequences as being the invocation of some service
>rather than a number to be dialled. The big advantage over functional
>protocols (like H.450.x) being that services can be added from the
>exchange side, without the terminal having to be modified as well.
>
>Functional protocols never became a success in the ISDN world and this may
>end up to be the same in the IP world. However, having said this, there is
>a lot more potential for easy upgrading of e.g. terminal software in this
>domain, which reduces the side effects of functional protocols.
>
>It does not seem to make sense to define "alternative" mechanisms to
>provide services, so I would strongly opt for using H.450.x when possible
>and using a simple stimulus protocol otherwise. The latter would allow
>service providers to easily make services available and I see no reason
>why this should be standardised. In practice, today, we already see that
>certain digit sequences for service activation are identical in several
>countries.
>
>Frank
>
>-----------------------------------------------------
>Frank Derks |Tel +31 35 6893238 |
>Advanced Development |Fax +31 35 6891030 |
>Philip Business Communications |P.O. Box 32 |
> |1200 JD Hilversum |
> |The Netherlands |
>----------------------------------------------------|
>E-mail: mailto:f.derks@pbc.be.philips.com |
>WWW: http://www.business-comms.be.philips.com |
>-----------------------------------------------------
>
>
>
>-----Original Message-----
>From: Klaghofer Karl ICN IB NL IP 7
>[mailto:Karl.Klaghofer@ICN.SIEMENS.DE]
>Sent: 18 March 1999 22:36
>To: ITU-SG16(a)MAILBAG.INTEL.COM
>Subject: AW: AW: Call hold and transfer in H.323 AnnexF. Too limited??
>
>
>See comment below.
>
>Karl
>
>> -----Original Message-----
>> From: Paul E. Jones [SMTP:paul.jones@TIES.ITU.INT]
>> Sent: Thursday, 18 March 1999 18:57
>> To: ITU-SG16(a)mailbag.cps.intel.com
>> Subject: Re: AW: Call hold and transfer in H.323 AnnexF. Too limited??
>>
>> Karl,
>>
>> Unfortunately, I will have to disagree with your comments. While it is
>> true that the H.450 supplementary services could be utilized in a SET
>> device, I believe that introducing H.450 into a SET breaks the spirit of
>> that work.
>>
>> The goal of Annex F is to define a "Simple Endpoint Type". There are
>> simpler ways to put a call on hold and to transfer a call. Introducing
>> H.450 introduces a lot more complexity than I believe we want to have.
>> If Annex F is not sufficiently clear on how to simply transfer a call or
>> put a call on hold, we should work on that text-- I will absolutely
>> disagree with introducing H.450 into a SET device.
> [Klaghofer, Karl PN VS LP3] Whatever you mean by "introducing" -
>H.450, as I said in my previous mail, is a way of providing supplementary
>services like call hold and call transfer to a SET device. It IS already
>part of the H.323 Annex F!
>> Paul
>>
>> -----Original Message-----
>> From: Klaghofer Karl ICN IB NL IP 7 <Karl.Klaghofer(a)ICN.SIEMENS.DE>
>> To: ITU-SG16(a)MAILBAG.INTEL.COM <ITU-SG16(a)MAILBAG.INTEL.COM>
>> Date: Wednesday, March 17, 1999 3:27 PM
>> Subject: AW: Call hold and transfer in H.323 AnnexF. Too limited??
>>
>>
>> >Gunnar,
>> >
>> >You are referring to call hold and transfer in conjunction with H.323
>> >Annex F SETs (Audio or Text) and clause 7.6 of H.323 Annex F.
>> >
>> >Talking about call hold, clause 7.6 of H.323 Annex F is not needed for
>> >a SET at all. Call Hold works for a SET as it is defined in H.450.4.
>> >
>> >Talking about Call Transfer, clause 7.6 of H.323 Annex F is not needed
>> >for a SET, if the transfer is executed by the endpoints as defined in
>> >H.450.2. Codec re-negotiation you are referring to is no problem and
>> >takes place between the transferred and the transferred-to endpoint.
>> >This may cover your case with wireless endpoints being involved.
>> >
>> >For call transfer, section 7.6 of H.323 Annex F is only needed if the
>> >gatekeeper or a proxy acts on behalf of the transferred SET endpoint B.
>> >However, media re-negotiation also should work here as part of the
>> >fastStart method.
>> >
>> >Regards,
>> >Karl
>> >------------------------------------------------
>> >Karl Klaghofer, Siemens AG, Dpmt. ICN IB NL IP7
>> >Hofmannstr. 51, D-81359 Munich, Germany
>> >Tel.: +49 89 722 31488, Fax.: +49 89 722 37629
>> >e-mail: karl.klaghofer(a)icn.siemens.de
>> >
>> >
>> >
>> >> -----Original Message-----
>> >> From: Gunnar Hellstrom [SMTP:gunnar.hellstrom@OMNITOR.SE]
>> >> Sent: Tuesday, 16 March 1999 23:01
>> >> To: ITU-SG16(a)mailbag.cps.intel.com
>> >> Subject: Call hold and transfer in H.323 AnnexF. Too limited??
>> >>
>> >> Dear multimedia experts.
>> >>
>> >> In my efforts to establish the simple IP voice and text telephone
>> >> Text SET, I came across a section in H.323 Annex F (Simple Endpoint
>> >> Type, TD11 in Monterey) that I feel is causing a functional obstacle
>> >> also to the voice users. Can anyone clarify if I am correct and why
>> >> it is specified the way it is.
>> >>
>> >> In sections 7.6.1 and 7.6.2 it is specified: "The Audio SET device
>> >> shall then resume transmitting its media stream(s) to the transport
>> >> address(es) newly indicated in the OpenLogicalChannel structures."
>> >> I understand that this means that you cannot re-negotiate audio
>> >> coding, and you cannot add text conversation after rerouting the
>> >> call from a Voice-only SET to a Text SET.
>> >>
>> >> Re-negotiating the audio coding will probably be a desired function,
>> >> e.g. when rerouting from a fixed to a wireless IP phone.
>> >> Adding a data channel for text will also be a desired function, after
>> >> answering a call in an audio-only SET, and then rerouting it to a
>> >> text-capable SET.
>> >> That action is very common in today's text telephone usage, and I
>> >> would expect it to be just as common in the IP telephony world. You
>> >> first receive the call in the terminal that is closest to you, and
>> >> then you get a reason to start text mode. Then you transfer the call
>> >> to another device with text capabilities, where you can switch mode.
>> >>
>> >> Questions:
>> >>
>> >> 1. Is that kind of call transfer handled by the mechanisms in 7.6.1
>> >> and 7.6.2?
>> >>
>> >> 2. Are my conclusions right about the limitations?
>> >>
>> >> 3. Is this limitation a consequence of using Fast Connect?
>> >>
>> >> 4. Do you see any possibility to avoid the negative effects of it -
>> >> to make re-negotiation possible?
>> >>
>> >> 5. Is the specified functionality acceptable in the voice world? If
>> >> two devices have agreed on a voice coder, is it likely that the
>> >> third device supports it? Will this not create a lot of unsuccessful
>> >> call transfers where the users will have no chance to understand why
>> >> they fail?
>> >>
>> >> ----
>> >>
>> >> Another question area:
>> >>
>> >> 6. When selecting the transport protocol for the text conversation,
>> >> the current draft (APC 1504) specifies TCP or UDP. I realize that
>> >> there are situations where TCP must be avoided. One such situation
>> >> is a sub-titled H.332 transmission. Also, other multi-casting
>> >> situations are better off with a UDP-based transport protocol.
>> >> I am therefore now leaning towards using RTP as the transport for
>> >> text conversation. With RTP we can discover dropped frames and
>> >> possibly invent a mechanism to mark that event in the text stream
>> >> for T.140 to display. If we have less than 3% dropped frames, I
>> >> think the users would accept it.
>> >>
>> >> 6.1 Do you agree that there are situations when TCP should be
>> >> avoided, and a UDP-based protocol preferred?
>> >>
>> >> 6.2 Do you agree that RTP is a good alternative, with a thin
>> >> protocol for error indications to the user?
>> >>
>> >> 6.3 Most packets will carry only 1-4 characters. Can anyone give me
>> >> an indication of the expected packet loss rates in different
>> >> situations for such packets, or a document giving such figures. Is
>> >> max 3% loss reachable?
>> >>
>> >> Please give your view on these questions.
>> >>
>> >> Best regards
>> >>
>> >> Gunnar Hellström
>> >> -----------------------------------------------
>> >> Gunnar Hellstrom
>> >> Representing Ericsson in ITU-T
>> >>
>> >> E-mail gunnar.hellstrom(a)omnitor.se
>> >> Tel +46 751 100 501
>> >> fax +46 8 556 002 06
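On Gunnar's question 6.2, the "thin protocol for error indications" could be as simple as watching RTP sequence numbers and inserting a marker wherever a gap shows that text was lost. The sketch below assumes the marker character and the one-marker-per-lost-packet policy; it ignores 16-bit RTP sequence wraparound for brevity.

```python
REPLACEMENT = "\ufffd"   # assumed marker for lost text shown to the
                         # T.140 user; not something T.140 itself defines

def reassemble(packets):
    """Given (rtp_sequence_number, text) pairs, concatenate the text and
    insert one loss marker per missing packet, so the display can show
    that characters were dropped."""
    out = []
    expected = None
    for seq, text in sorted(packets):        # tolerate reordering
        if expected is not None and seq != expected:
            out.append(REPLACEMENT * (seq - expected))
        out.append(text)
        expected = seq + 1
    return "".join(out)
```

With typically 1-4 characters per packet, a 3% packet loss rate translates directly into roughly 3% of characters replaced by markers, which matches the acceptability threshold Gunnar suggests.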