We face a continuing tension between the requirement to ensure interoperability and the desirability of minimizing the size of the core protocol, particularly at the MG. We also have to make sure our protocol has well-defined procedures for adding both ad hoc (i.e. proprietary or experimental) and formal extensions. This note suggests some principles which we should follow in response to these considerations.
There are really two levels of interoperability to deal with: syntactic (can the message be parsed?) and semantic (can the message recipient perform the actions which the sender is requesting?).
At the syntactic level, I believe that we can agree on these principles:
I-1: Every message must be fully parsable down to a certain level of detail, even if it contains extensions unsupported by the recipient.
I-2: Our protocol document must specify the syntax by means of which extensions are identified.
I-3: The protocol must provide the means for the recipient to identify in its response to any message all unsupported extensions it encountered in that message.
There is an open issue here: what is the minimal parsability level we want to specify? Suggestion: all messages are parsable down to the command and parameter level at a minimum. Here I use the term "parameter" to mean either a simple data value or a potentially complex data structure identified as a syntactic element in our API definitions. The implication of this suggestion is that extensions can add no new commands and no new command parameters; they can add only new possible values for command parameters and new event packages. I'm sure others will have comments on this.
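As a rough illustration of I-1 and I-3 taken together, here is a small Python sketch. The command names, parameter names and values are invented for the purpose and make no claim about actual Megaco syntax; the point is only that the receiver can always walk a message down to commands and parameters, and can collect every parameter value it does not support so that its response can identify them all:

# Hypothetical per-parameter value sets this receiver supports.
SUPPORTED_VALUES = {
    "Mode": {"send-receive", "receive-only"},
    "SignalType": {"dial-tone", "ringback"},
}

def check_message(commands):
    """commands: list of (command_name, {parameter: value}) pairs."""
    unsupported = []
    for command, params in commands:
        for param, value in params.items():
            known = SUPPORTED_VALUES.get(param)
            if known is not None and value not in known:
                # Syntactically fine (I-1), but semantically unsupported;
                # remember it so the response can report it (I-3).
                unsupported.append((command, param, value))
    return unsupported

# Example: an extension adds a SignalType value this receiver does not know.
msg = [("Modify", {"Mode": "send-receive"}),
       ("Signal", {"SignalType": "country-A-busy"})]
print(check_message(msg))   # [('Signal', 'SignalType', 'country-A-busy')]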
Now we get to the interesting part: semantic interoperability. I'll start with what I hope are a couple of obvious statements.
I-4: Not all implementations will support all event packages, all possible parameter values, or whatever else we allow to be extended.
I-5: The protocol definition must include a specification of what the receiver does if a message contains an unsupported extension.
On this latter point, do we enforce the transaction concept at all levels and fail the transaction, requiring the sender to reissue the command without the offending value? This is definitely a workable solution. We could go further and introduce the ability to say in a message whether a given field is a transaction-stopper, but I'm not sure that capability would be used in practice -- it wasn't when we had it in IPDC.
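To make the all-or-nothing behaviour concrete, here is a continuation of the sketch above (still Python, still invented names), reusing the hypothetical check_message() helper: if any command in the transaction carries an unsupported value, nothing is executed and the reply names every offending item, so the sender can reissue the transaction without them:

def handle_transaction(transaction_id, commands, execute):
    """execute: caller-supplied callable applying one (command, params) pair."""
    unsupported = check_message(commands)   # helper from the earlier sketch
    if unsupported:
        # All-or-nothing: nothing is executed, and the reply identifies
        # every unsupported extension encountered (I-3, I-5).
        return {"transaction": transaction_id,
                "result": "rejected",
                "unsupported": unsupported}
    for command, params in commands:
        execute(command, params)
    return {"transaction": transaction_id, "result": "ok"}

reply = handle_transaction(1, msg, execute=lambda cmd, params: None)
# With msg from the previous sketch, reply["result"] is "rejected" and the
# offending SignalType value is listed in reply["unsupported"].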
Now for the crucial question: if I-4 is true, what criteria do we use to say what features MUST be supported by any given implementation? Can we distinguish between required support in MGC implementations and required support in MG implementations? How does each side of the Megaco interface determine what the other side supports, other than by parsing transaction rejections? Here are some propositions to use as a basis for discussion:
I-6: The Megaco/H.GCP specification should be organized into a basic protocol specification which must be satisfied by all implementations, plus annexes which are conditionally mandatory. The conditions under which an annex is mandatory will be stated within the annex itself, to make it self-contained.
Some annexes will constitute extensions of the protocol (e.g. event package specifications) and must therefore be identifiable as such at the syntactic level. Other annexes may be more like profiles, stating, for example, what extensions must be supported by an MG or MGC which supports a given application. An extension identifier for the complete profile is a "nice-to-have", but would be used only as shorthand in Audit responses, not in other commands.
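Here is a sketch of the "shorthand in Audit responses" idea (the profile and package names are invented): the profile identifier simply expands to the set of extensions it requires, so the MGC can work out what an MG supports from an Audit response without the profile name ever appearing in ordinary commands:

# Hypothetical registered profiles and the extensions each one requires.
PROFILES = {
    "trunking-gateway/1": {"basic-dtmf", "continuity-test", "cas-r2"},
}

def packages_supported(audit_reply):
    """audit_reply: {'profiles': [...], 'packages': [...]} reported by the MG."""
    supported = set(audit_reply.get("packages", []))
    for profile in audit_reply.get("profiles", []):
        supported |= PROFILES.get(profile, set())
    return supported

print(packages_supported({"profiles": ["trunking-gateway/1"],
                          "packages": ["announcement"]}))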
Looking to the future, extensions could be created:
-- by joint Megaco/ITU action, added as new annexes to H.GCP
-- by action of the ITU on its own, added as new annexes to H.GCP or as new Recommendations
-- by the IETF on its own, added as new RFCs
In all cases, I suggest that the extensions be registered with IANA in accordance with procedures and criteria defined in the base protocol document.
I-7: As a general principle in deciding what each implementation must support, MGs should be assumed to be more specialized than MGCs. For example, we may decide (though I doubt it) that all MGCs must support all applications documented in the initial Megaco/H.GCP specification, but MGs may support an arbitrary subset of them.
This is a debatable principle -- some vendors probably have a different view of the architecture. I suspect it is the consensus view, however.
I-8: If at all possible, MGs should hide the details of national and regional variations in channel-associated signalling from the MGC.
This is more debatable than I-7. I propose it as a principle on the grounds that MGs are more likely than MGCs to be serving customers in only one country or region, so they are the more logical point of specialization. I have a concrete suggestion for our event packages which may help to support this principle: event descriptor syntax should include both abstract event names and specialization identifiers indicating, for instance, that the MG should provide the busy tone for Country A rather than Country B on a particular termination. That way the MGC can pass down information it has received from signalling, while the physical details are known only to the MG.
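A sketch of what this might look like in practice (Python again; the field names, tone frequencies and cadences are invented for illustration): the MGC hands down an abstract event name plus the specialization tag it learned from signalling, and only the MG knows how that pair maps onto physical tone parameters for the termination:

from dataclasses import dataclass

@dataclass
class EventDescriptor:
    abstract_name: str     # e.g. "busy-tone": all the MGC needs to know
    specialization: str    # e.g. "country-A": tag passed down from signalling

# Known only inside the MG: physical tone definitions per specialization.
TONE_TABLE = {
    ("busy-tone", "country-A"): {"freq_hz": 425, "on_ms": 500, "off_ms": 500},
    ("busy-tone", "country-B"): {"freq_hz": 480, "on_ms": 250, "off_ms": 250},
}

def apply_event(termination_id, descriptor):
    tone = TONE_TABLE[(descriptor.abstract_name, descriptor.specialization)]
    print(f"{termination_id}: play {descriptor.abstract_name} "
          f"({descriptor.specialization}) -> {tone}")

apply_event("ds0/3", EventDescriptor("busy-tone", "country-A"))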
Tom Taylor
Advisor -- IPConnect Standards
E-mail: taylor@nortelnetworks.com (internally, Tom-PT Taylor)
Phone and FAX: +1 613 736 0961