Tom,
I agree with your analysis. We have to investigate the co-ordination problems in the case of multiple contexts for one multimedia conference.
There's another problem related to multimedia, viz. multimedia on the circuit side. How do we fit H.320 streams into the megaco/H.gcp connection model? As I see it, it will require changes to the definition of termination.
In fact,
"Taylor, Tom-PT [CAR:5V00-I:EXCH]" wrote:
Single-medium contexts as a representation scheme are orthogonal to the question of multimedia. Multimedia implies (a) multiple media types and (b) coordination between them.
If I have multiple contexts, each devoted to one medium, that satisfies (a). One suggestion on the call was to do that, then add a context attribute which linked together different contexts. This is a possible solution to (b).
My personal feeling is that the single-medium context is a good direction to explore, because in the degenerate (audio-only) case it turns out simply. The open question, of course, is whether it introduces too many problems of coordination. I'm hoping to work through the two test cases I posed before the Minneapolis meeting to see.
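As a minimal sketch of the linking idea (Python, invented names; nothing here is defined in the Megaco I-D), the per-medium contexts could simply share an association attribute that the MGC uses to find, coordinate, or tear down all of them together:

# Sketch of the "linking attribute" idea above: one context per medium,
# related contexts share an association identifier.  Invented names; not
# part of the Megaco I-D.

class Context:
    def __init__(self, context_id, medium, association_id=None):
        self.context_id = context_id
        self.medium = medium                   # e.g. "audio" or "video"
        self.association_id = association_id   # ties related contexts together
        self.terminations = []

audio_ctx = Context(1, "audio", association_id="confA")
video_ctx = Context(2, "video", association_id="confA")
contexts = [audio_ctx, video_ctx]

def contexts_for(contexts, association_id):
    """Everything the MGC must touch to coordinate or tear down one call."""
    return [c for c in contexts if c.association_id == association_id]

print([c.medium for c in contexts_for(contexts, "confA")])   # -> ['audio', 'video']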
-----Original Message----- From: John Segers [SMTP:jsegers@LUCENT.COM] Sent: Wednesday, March 31, 1999 7:11 AM To: ITU-SG16@MAILBAG.INTEL.COM Subject: Re: Multimedia (was Re: Megaco Protocol - IETF/ITU conf call minutes)
Mauricio,
You are right that single-medium contexts make it simpler to reason about services and call flows. But we must take care that we don't end up with a protocol that cannot be extended to cover multimedia GWs. If multimedia applications are within scope, the protocol document should not contain a connection model that handles only audio.
What do others think? Is multimedia in the scope of megaco/H.gcp? And if so, is Paul's proposal general enough? Doesn't it lead to too much overhead for single media GWs? I'd be interested to hear opinions.
John Segers
"Arango, Mauricio" wrote:
Please disregard my previous message, which I sent by mistake without editing. Apologies to the list members.
Paul,
I understand the rationale for your multiple media context. My preference is the single medium context because it seems simpler for thinking about services and call flows. This seems to require more discussion. I suggest we advance the protocol as proposed with single-medium contexts and revisit this later if necessary.
Mauricio
-----Original Message----- From: Paul Sijben [mailto:sijben@lucent.com] Sent: Tuesday, March 30, 1999 2:45 PM To: Arango, Mauricio Cc: 'Greene, Nancy-M [CAR:5N10:EXCH]'; 'ITU-SG16@mailbag.intel.com'; 'megaco@baynetworks.com' Subject: Multimedia (was Re: Megaco Protocol - IETF/ITU conf call minutes)
Mauricio,
you write:
It could be a good idea to have some entity that ties together multiple contexts; this was our experience in the Touring Machine, and we referred to it as "session". However, it may not be necessary for the most common audio-only operations, and for the sake of simplicity it may be better to leave it as an option.
I think you are absolutely right. We have been doing some work on that and have come up with the following, which seems like a sensible extension to the H.gcp/MEGACO work:
Problem definition: For multimedia calls you will have multiple media streams representing each of the users. These streams need to be appropriately processed (maybe mixed or something more clever) and synchronised.
The current definition in the MEGACO I-D specifies that each context has only one type of medium. This makes the logical grouping of terminations belonging to one user difficult and creates issues with synchronisation between media. It also makes it difficult to remove one user or an entire conference in one command.
We have gone over several possible solutions: you can group terminations per user, and/or per medium, and/or make some kind of links between terminations or contexts to signify the synchronisation.
What seems like a workable model is to allow multiple media in one context and have a matrix binding the terminations to the right medium/user. So you would get something like:
context:

        | medium1 |  M2  | ... |  Mn
  user1 |    x    |      |     |  x
  user2 |         |  x   |     |  x
   ...  |         |      |     |
  usern |    x    |  x   |     |

where x == termination
Such a matrix could be the "scrap paper" used by the MGW and MC to keep track of which user and which medium each termination belongs to. One could set a flag on terminations that one wants to synchronise.
Naturally you would not need this for simple voice gateways, so the extensions needed to the protocol for this would be optional.
The extensions would look like this:
Commands: We now have a command structure that states:
ContextID *Command(TerminationID ....)
We would get:

ContextID [UserID] *Command( [MediaID] TerminationID ....)

So, for example:

ContextID=3 UserID=2
  add(MediaID=Video3, TerminationID=RTP:32424, ....)
  modify(MediaID=Video3, TerminationID=*, SynchWith:MediaID=Audio2)
etc.
A context could have the following properties:
- mode (unidirectional, bidirectional, mix, choose...)
- maxNumberOfUsers (e.g. 3)
- maxNumberOfMedia (e.g. 1 for a simple voice gateway)
This extension would allow the removal of one medium or one user from a conference, but would not hamper "simple" operations.
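As a rough sketch of the bookkeeping this implies (Python, all names invented, not protocol syntax), the user-by-medium matrix could be kept as a mapping from (user, medium) to terminations, with a synchronisation flag per termination; the extended add/modify parameters above would simply populate it:

# Sketch of the user x medium "scrap paper" matrix above.  All names are
# invented; this is bookkeeping, not protocol syntax.

from collections import defaultdict

class MultimediaContext:
    def __init__(self, context_id, max_users=3, max_media=2):
        self.context_id = context_id
        self.max_users = max_users            # cf. maxNumberOfUsers property
        self.max_media = max_media            # cf. maxNumberOfMedia property
        self.matrix = defaultdict(set)        # (user_id, media_id) -> terminations
        self.synchronised = set()             # terminations flagged for synch

    def add(self, user_id, media_id, termination_id):
        """Cf. ContextID UserID add(MediaID, TerminationID, ...)."""
        self.matrix[(user_id, media_id)].add(termination_id)

    def remove_user(self, user_id):
        """Drop every termination of one user in a single step."""
        for key in [k for k in self.matrix if k[0] == user_id]:
            del self.matrix[key]

ctx = MultimediaContext(context_id=3)
ctx.add(user_id=2, media_id="Video3", termination_id="RTP:32424")
ctx.add(user_id=2, media_id="Audio2", termination_id="RTP:32425")
ctx.synchronised.update({"RTP:32424", "RTP:32425"})   # the SynchWith flag

Removing one user or one medium then reduces to deleting one row or one column of the matrix, which is exactly the operation the extension is meant to make cheap.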
Paul
"Arango, Mauricio" wrote:
Nancy,
Thanks for preparing the minutes.
b. Raised by Mauricio Arango: you may want to be able to create a context with parameters before adding any terminations to it - (e.g. turing machine)
I want to clarify that I was not referring to Alan Turing's grand work but to something colleagues and I did some years ago in multimedia call control, called "Touring Machine". This effort developed a connection model similar to the one in MEGACO's protocol and could be worth keeping in mind as previous experience to help this effort.
The January 1993 issue of the Communications of the ACM has an article on the Touring Machine. Following is an excerpt describing its connection model (page 71):
"The logical topology of a session is specified as a set of
typed connectors
(see Figure 2). A connector represents a multiway transport
connection among
endpoints (logical ports). A connection is an abstraction of a communications bridge, including point-to-point (two way) as well as multipoint connections. Since bridging is a medium-specific
operation,
connectors are typed by medium. A session may have one or
more connectors
per medium, with souce and sink endpoints from
participating clients. An
endpoint represents a connector termination point. Endpoints are distinguished by medium, direction of flow, and the client
receiving or
providing transport."
In the above excerpt a "connector" is equivalent to MEGACO's context and an "endpoint" is equivalent to MEGACO's termination.
c. Raised by a few people: we need to see whether we need a meta definition of a "call" in the MG, so that the MG can link together related contexts. May be needed for lipsynch between a call's video context and its audio context. Instead of introducing a callId, this could be done using parameters on each context referring to the other one. Benefit of a callId is that the MGC can bring down the entire call by just referring to the callId. (Note: "call" may be the wrong word - if we need it, let's try to invent a new name).
It could be a good idea to have some entity that ties together multiple contexts; this was our experience in the Touring Machine, and we referred to it as "session". However, it may not be necessary for the most common audio-only operations, and for the sake of simplicity it may be better to leave it as an option.
Regards,
Mauricio Arango
-----Original Message----- From: Greene, Nancy-M [CAR:5N10:EXCH] [mailto:ngreene@americasm01.nt.com] Sent: Monday, March 29, 1999 8:54 PM To: 'ITU-SG16@mailbag.intel.com'; 'megaco@baynetworks.com' Subject: Megaco Protocol - IETF/ITU conf call minutes
Here are the minutes from the call today. Please email me any corrections. I tried to pair comments made with the person that made them - I may have made some errors, and I know I missed the names of some of the people commenting.
Nancy (ngreene@nortelnetworks.com)
Megaco Protocol presentation to IETF megaco and ITU SG16
participants
March 29/99 10am-12pm EST
Present: The number of ports being used went as high as 47, and was about 45 on average. Number of participants was higher since a number of people were sharing ports.
Chair of the call: Glen Freundlich (ggf@lucent.com) Minutes: taken by Nancy Greene (ngreene@nortelnetworks.com)
NOTE: In keeping with IETF and ITU procedures, the audio call cannot make binding decisions. It is a tool for the chairs and editors, and the participants of both the IETF and ITU working groups, to gauge support for the protocol, and to raise issues with the protocol. Issues may be raised in either an audio call, or on the mailing list (megaco@baynetworks.com). These minutes are going to both the Megaco and ITU-SG16 mailing lists.
Meeting summary:
Glen Freundlich summed up the results of the meeting. The goal of the meeting was twofold:
1) it was the first opportunity for people to look at the output from the Megaco protocol design/drafting group;
2) it was an opportunity for people to pose questions on the draft.
He saw no opposition to the connection model proposed in the Megaco protocol I-D (ftp://www.ietf.org/internet-drafts/draft-ietf-megaco-protocol-00.txt). To date, the Megaco protocol I-D has been run through audio call scenarios for 3-way call, for call waiting, ... However, it needs to be tested with multimedia scenarios. Any input in this area is appreciated.
Because there seems to be agreement on the connection model, Bryan Hill, the H.gcp editor, will start pulling related sections from the Megaco protocol I-D into H.gcp.
Glen will hold another audio call next week to see where we stand, and to look towards adding more sections to H.gcp from the Megaco protocol I-D.
For the ITU-T SG16 May meeting, H.gcp needs to contain:
- a connection model
- a set of commands
- a start on parameters for those commands
- generic syntax
- and a list of issues to resolve.
Protocol timetable:
IETF Megaco:
- Proposed Standard by summer/99
- interoperability testing in fall/99
- Draft Standard in Feb/2000
This matches up closely with the ITU-T SG16 schedule:
- planning to get H.gcp determined in May/99
- planning to get H.gcp decided in Feb/2000
IETF/ITU collaboration:
Question raised: will the two protocols be identical?
Tom Taylor proposes two options:
- SG16 and Megaco completely agree on the protocol requirements. If this is the case, the protocol will be identical.
- Each group has some requirements that are different from the other group. In this case, there would be agreement on a core protocol, and then additional parts would be defined by each group.
BUT how to do this in practice? Tom said that if a huge roadblock comes up again, we would form a new design team to work out a solution.
Tom asked whether the group on the call agreed that the Megaco protocol I-D was a reasonable basis for H.gcp. Feeling was that it is, but that we need to do multimedia call walkthroughs. Mike Buckley had some concern about the definition of a context, but thought that the basic model is workable.
Mauricio Arango thought it was a good model for multimedia. Ami Amir raised issue about associating contexts. Glen asked the H.320 experts to look at the protocol and see if we need to modify the connection model.
Next audio call:
Probably next Thursday April 8/99, same time. Confirmation of date & time and the call details will be out later this week.
Use of mailing lists:
It was agreed that we would try to keep all technical discussion of the Megaco protocol on the megaco@baynetworks.com mailing list.
Invitations to audio calls, and audio call minutes will be sent to both the megaco@baynetworks.com and to the ITU-SG16@mailbag.intel.com mailing lists.
Tom Taylor took the action of posting to each list, how to join the other one.
Detailed minutes:
Brian Rosen presented the Megaco protocol draft (ftp://www.ietf.org/internet-drafts/draft-ietf-megaco-protocol-00.txt). He first noted that this draft is open for discussion. It is not cast in stone. If changes need to be made to any part, they will be made, if they are deemed necessary to satisfy the requirements of the protocol, and agreed to by the group.
- New connection model
- concept of a termination - have permanent (e.g. DS0) and ephemeral (e.g. RTP port) terminations
- a termination is named with a terminationId; this name can have wildcards, for example to allow the MG to choose the actual physical termination, or to request notification of an event that occurs on any DS0
- concept of a context - can add, subtract, modify terminations in a context. A context is created when the first termination is added to it, and goes away when the last termination is subtracted from it.
- a context can have parameters associated with it - for example, a video mixing context may have parameters to describe how the video is to be mixed - mosaic, or current speaker/last speaker, ...
- a termination class defines parameters on a typical termination - e.g. DS0 Termination Class, RTP Termination Class
- Signals can be attached to a termination class, events can occur at a termination class, and the MGC can specify which ones it wants to be notified about.
- packages define signals and events
- a termination class can have more than one package that applies to it
- event naming structure allows the package name to be put in front of the event name
- question: how do you apply a call waiting tone? - answer: it is a signal that you apply to a termination
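A minimal sketch of the context lifecycle just described (Python, invented names, illustration only): a context is created by the first Add and disappears with the last Subtract.

# Sketch of the lifecycle above: a context is created when its first
# termination is added and is deleted when its last termination is
# subtracted.  Invented names; illustration only.

class MediaGateway:
    def __init__(self):
        self.contexts = {}        # context_id -> set of termination IDs
        self._next_id = 1

    def add(self, termination_id, context_id=None):
        """Add a termination; create a new context if none is named."""
        if context_id is None:
            context_id = self._next_id
            self._next_id += 1
            self.contexts[context_id] = set()
        self.contexts[context_id].add(termination_id)
        return context_id

    def subtract(self, context_id, termination_id):
        """Remove a termination; the context goes away when it is empty."""
        self.contexts[context_id].discard(termination_id)
        if not self.contexts[context_id]:
            del self.contexts[context_id]

mg = MediaGateway()
ctx = mg.add("DS0/1")          # first Add creates the context
mg.add("RTP/1", ctx)
mg.subtract(ctx, "DS0/1")
mg.subtract(ctx, "RTP/1")      # last Subtract removes the context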
1.1 Proposed Changes and Issues:
a. Tom Taylor proposed a more general definition for context: every termination in a context has connectivity with all other terminations in the context.
b. Raised by Mauricio Arango: you may want to be able to create a context with parameters before adding any terminations to it - (e.g. turing machine)
- may want to mark a context with the max # of terminations at creation time.
- BUT this may bring lack of flexibility - may be better to let the MG decide, as new terminations are added to a context, whether it is able to mix them in. For example, instead of marking a context with one particular type of codec, it is more flexible to let the MG decide what transcoding needs to be done in a context as a function of the types of terminations added to it.
c. Raised by a few people: we need to see whether we need a meta definition of a "call" in the MG, so that the MG can link together related contexts. May be needed for lipsynch between a call's video context and its audio context. Instead of introducing a callId, this could be done using parameters on each context referring to the other one. Benefit of a callId is that the MGC can bring down the entire call by just referring to the callId. (Note: "call" may be the wrong word - if we need it, let's try to invent a new name).
d. Raised by Steve Davies: with H.320, the MG has one physical termination with audio, video and data on it - can a termination be in more than one context? no - so need to separate the different media out of this physical termination before creating the contexts.
e. Raised by Tom Taylor: need to see whether it is feasible to have decomposed H.320 gateways - if it is necessary (people say yes), then need to be sure the Megaco protocol can handle it.
- Commands
- grouped into commands within an action per context, and one or more actions are grouped into a transaction. Transaction is all or nothing.
- Add, Modify, Subtract each can contain local and remote termination descriptors, a signalling descriptor, an events descriptor, and a digit map descriptor
- question on how to change an event description for a termination - answer: use Modify - can do it within a context, or outside a context - if it is outside the context (i.e. using the NULL context), then that is how you change the default parameters for that termination.
- MGCP's NotificationRequest is now covered in Modify
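Purely as an illustration of the grouping just described (Python, invented names, not draft syntax): commands sit inside an action per context, actions sit inside a transaction, and the transaction is applied all or nothing.

# Sketch of the grouping above: transaction -> actions (one per context)
# -> commands.  Invented names; not the protocol encoding.

transaction = {
    "transaction_id": 1001,
    "actions": [
        {"context_id": 3,
         "commands": [("Add", {"termination": "DS0/5"}),
                      ("Modify", {"termination": "RTP/7", "events": ["dtmf"]})]},
        {"context_id": 4,
         "commands": [("Subtract", {"termination": "RTP/9"})]},
    ],
}

def apply_transaction(state, transaction):
    """All-or-nothing: work on a copy and commit only if every command succeeds."""
    trial = dict(state)                       # a failure leaves `state` untouched
    for action in transaction["actions"]:
        for verb, args in action["commands"]:
            trial[(action["context_id"], args["termination"])] = verb
    return trial                              # caller swaps this in atomically

state = {}
state = apply_transaction(state, transaction)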
- Multimedia and Contexts and Terminations
- a termination only belongs to one context
- for multimedia, there is one context per media type
- just as you have separate RTP flows for different media types, you have separate contexts.
3.1 Issues
a. Paul Sijben noted that when you try to use the Megaco protocol with H.320, there may be problems with the model - Paul will bring these up on the mailing list.
Also, see Issue c in 1.1 above.
- H.245 & SDP
- Tom Taylor explained that H.245 has at least 2 purposes: 1) for capability negotiation, and 2) for specification of open logical channel. The scope of the Megaco protocol is 2). Capability negotiation may use H.245 between MGCs, but the Megaco protocol is between MGC and MG. SDP may be good enough to be used between the MGC and the MG. Discussion still open here. Need to allow for an environment where H.245 may not be involved.
- with H.263 - H.245 provides a tag to link together 2 RTP streams - need to be able to carry this tag from the MGC to the MG.
- underspecifying termination descriptors
- can be used as a way to tell the MG to use default values for the termination
- a termination learns its default values at MG boot-up time
- MGC can change these values using Modify with context set to NULL.
- underspecifying terminationIds
- provides a way of setting default parameters for a T1, for example
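A small sketch of the default-value behaviour in the last two items (Python, invented names, illustration only): an underspecified descriptor falls back to defaults learned at boot-up, and a Modify in the NULL context changes those defaults.

# Sketch of the items above: a termination's defaults are learned at MG
# boot-up, Modify in the NULL context changes those defaults, and an
# underspecified descriptor simply falls back to them.  Invented names.

NULL_CONTEXT = None

class Termination:
    def __init__(self, termination_id, boot_defaults):
        self.termination_id = termination_id
        self.defaults = dict(boot_defaults)   # learned at MG boot-up time
        self.active = {}                      # values set for the current context

    def modify(self, context_id, descriptor):
        if context_id is NULL_CONTEXT:
            self.defaults.update(descriptor)  # change the defaults themselves
        else:
            self.active.update(descriptor)    # ordinary in-context Modify

    def value(self, name):
        return self.active.get(name, self.defaults.get(name))

t = Termination("DS0/3", {"codec": "G.711"})
t.modify(NULL_CONTEXT, {"codec": "G.729"})    # MGC changes the default
print(t.value("codec"))                       # -> G.729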
- Case: two audio contexts - in different MGs, and in the same MG
- can the connection model handle this? Yes
- protocol should need to keep track of where resources are.
- it is up to the MGC to know what contexts are associated with a call.
- overview of Audit, Notify, ServiceChange
- ServiceChange - MG can use it upon reboot, to register itself with an MGC
- MG knows which MGC to send this msg to from some method outside the scope of this protocol (pre-provisioned, for example).
- MGC learns capabilities of MG using Audit. - can learn codecs
8.1 Issue
- need a way for MGC to learn the QUANTITY of codecs an MG has!
- overview of Security
- this is security between an MGC and an MG
- interim method is specified for when an MG does not have IPSEC
- Question on scope of the protocol
- check the requirements draft:
ftp://standards.nortelnetworks.com/megaco/docs/minn99
- Steve Davies brought up the point that some aspects of the Megaco protocol overlap with Policy server protocol proposals.
- Can MG do codec renegotiation?
- David Featherstone asked if an MG can negotiate use of a new codec on its own
- Brian Rosen answered that with AAL2 profiles, the MG can
Issue: a. MGC may need to be involved - for example, if the quality goes down too low, the call may no longer be billable. Solution - create an event to notify the MGC for this case.
- How does the MGC find out RTP interfaces?
- should be able to do this using Audit in the NULL context.
- IVR - still open for discussion
- view IVR as a termination
- issue is how much signalling effort do you need to add to the protocol?
- simply playing msgs is ok - they just look like events
- problem is when you want to put time and money values into it - for example, you have x minutes left on your calling card.
- this may be out of the scope of the Megaco protocol
- QoS reservation
- needs more discussion
****see Meeting summary above for audio call conclusions.
end of minutes.
Nancy M. Greene Internet & Service Provider Networks, Nortel Networks T:514-271-7221 (internal:ESN853-1077) E:ngreene@nortelnetworks.com
-- Paul Sijben Telno:+ 31 35 687 4774 Fax:+31 35 687 5954 Lucent Technologies Home telno: +31 33 4557522 Forward Looking Work e-mail: sijben@lucent.com Huizen, The Netherlands internal http://hzsgp68.nl.lucent.com/
-- John Segers email: jsegers@lucent.com Lucent Technologies Room HE 344 Dept. Forward Looking Work phone: +31 35 687 4724 P.O. Box 18, 1270 AA Huizen fax: +31 35 687 5954