Ray has raised some excellent points that address fundamental issues worthy of careful consideration.
There seems to be a basic, underlying assumption that the centralization of all gateway control provides the most cost-effective, scalable and flexible solution to building media gateways. In my opinion, this swings the pendulum too far and fails to take advantage of a number of important technical lessons learned over the past few decades.
There has been a natural evolution in communications equipment architectures to distribute the workload among multiple processing elements as a means of achieving scalability. In particular, the trend has been to centralize "global" functions while distributing "local" functions. We have seen this trend on the data side with centralized routing and forwarding being replaced by centralized route computation and distributed (among the line cards) forwarding. We have seen it in the circuit-switching world with signaling and, in some cases, call control functions being distributed among the line cards supporting the corresponding bearer channels.
This approach of distributing purely local functions provides natural system scaling - add more line cards, get more processing with inexpensive, low-end processors. Furthermore, there is no compelling reason to keep all of the detailed state information for all end points in one place - the switch creates connections among end points, and end points that are not connected do not need to share state information. In fact, on a truly global scale, this distributed processing model is what has allowed us to build worldwide telephony and IP networks. The principles of distributed processing at the network level work equally well in building scalable switching and media processing.
So, how does this principle apply to our current situation? To answer this question, we have to understand what we are trying to accomplish. Specifically, carriers/service providers would like to be able to rapidly deliver services on a large scale using hardware and software from multiple suppliers (hence the need for open interfaces and interoperability). The separation of the media controller (MC) from the media gateway (MG) seems to be a basic tenet of how this should be accomplished, and I have no fundamental disagreement with this. However, I believe that we have placed too much functionality in the MC, and that this may work against scalability.
In particular, with the exception of SS7 (which I'll address later), signaling is a function that can and should be distributed among the line cards. When one configures a line or a trunk on the PSTN side, each end of the line/trunk is inherently configured for the bearer channel and the associated signaling protocol that both ends understand. Basic to the signaling is the ability to collect the address (i.e., the dialed number) and other information for incoming calls. Basic to CAS and ISDN interfaces are protocol state machines to handle all this routine processing. This first stage of processing does not vary from call to call - it's configured so that each side can understand the other.
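To make this concrete, here is a minimal sketch (in Python) of the kind of per-line-card signaling state machine I have in mind; the states, events, and fixed digit count are invented for illustration and are not taken from any CAS or ISDN specification:

    from enum import Enum, auto

    class LineState(Enum):
        IDLE = auto()        # channel configured but unused
        COLLECTING = auto()  # far end seized the channel; gathering digits
        REPORTED = auto()    # address complete; handed off for call handling

    class LineCardSignaling:
        """One instance per bearer channel, running on the line card itself."""
        def __init__(self, channel_id, digit_count=10):
            self.channel_id = channel_id
            self.digit_count = digit_count   # fixed-length plan, for simplicity
            self.digits = ""
            self.state = LineState.IDLE

        def on_seizure(self):
            # Far-end seizure (e.g., CAS off-hook): start collecting the address.
            self.state = LineState.COLLECTING

        def on_digit(self, digit):
            # Routine per-digit processing stays local to the line card.
            if self.state is LineState.COLLECTING:
                self.digits += digit
                if len(self.digits) == self.digit_count:
                    self.state = LineState.REPORTED
                    # Only now does anything need to leave the card.
                    return {"channel": self.channel_id, "called": self.digits}
            return None

Note that nothing here varies from call to call; it scales simply by adding line cards.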
The interesting stuff comes after this initial information is collected, and this is where an external control function like the MC can play a significant and important role. What makes a gateway "dumb" should not be the lack of signaling stacks or even call state, but rather the lack of knowledge of what to do with a call when it arrives at the gateway. Each arriving call should trigger a message to the MC that basically says "here's the information that I collected for this arriving call; what should I do?" The MC may determine that it is a simple tandem call, for which the MC will perhaps do some authorization on the calling number and then determine a route (perhaps by consulting a gatekeeper or other backend function). Alternatively, based on the information collected by the signaling function, the MC may determine that it's a calling card call that requires a second stage of dialing. Whether tandem, calling card or other service, the response to the MG is a script (or some form of call processing instructions) from the MC that tells the MG how to process the call. In this way, the MG is "dumb" with respect to how to process each arriving call, yet assumes the signaling and call processing burden that is most cost-effectively performed by processors in the MG. The MC is now more of a pure IT environment, with lots of transaction processing and database queries, best suited to a general purpose computing environment. New services can be introduced by developing new scripts.
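As a rough sketch of the exchange I'm describing (the message shapes and script verbs below are invented for the example; I'm not proposing a particular encoding):

    from dataclasses import dataclass, field

    @dataclass
    class CallNotification:
        """MG -> MC: 'here's the information I collected for this arriving call.'"""
        gateway_id: str
        channel: str
        called_number: str
        calling_number: str

    @dataclass
    class CallScript:
        """MC -> MG: call processing instructions, e.g. [('connect', 'tg-7')]."""
        steps: list = field(default_factory=list)

    def lookup_route(called_number):
        # Placeholder for a gatekeeper or other backend routing query.
        return "tg-7"

    def media_controller(note):
        # Pure transaction processing: authorize, classify, pick a script.
        if note.called_number.startswith("0"):  # say, a calling-card prefix
            return CallScript(steps=[("play_prompt", "enter_card_number"),
                                     ("collect_digits", 16),
                                     ("notify_mc", "second_stage")])
        return CallScript(steps=[("connect", lookup_route(note.called_number))])

The MG executes the returned script; the MC never touches the per-call signaling state machines.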
Now what about SS7? I think that early thinking on how SS7 can be used for providing PSTN interconnection for ISP remote access servers via IMTs led to the current trend of piling lots of signaling and call processing into the MCs. As many others have convincingly argued, it makes good sense to terminate SS7 A and F links on an external signaling gateway (to preserve scarce SS7 point codes; to allow one to separate a significant and complex development, with many country variations and certification requirements, into a separate component; etc.). But an ISUP IAM message signifying an incoming call for a particular CIC (DS0 on a trunk group) can just as easily be processed by a processor on the trunk card that supports that DS0 (as though the signaling arrived via CAS or ISDN). This makes it consistent with how CAS and ISDN trunks are served per the above discussion.
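A sketch of what I mean, with an invented CIC-to-card mapping and message layout:

    class TrunkCard:
        """Runs on the trunk card; handles incoming calls for its own DS0s."""
        def on_incoming_call(self, cic, called, calling):
            # Same local handling as if the call had arrived via CAS or ISDN:
            # do the routine protocol work here, then ask the MC what to do.
            print(f"CIC {cic}: incoming call {calling} -> {called}")

    # Built at configuration time, not per call: which card owns which CIC.
    CIC_TO_TRUNK_CARD = {42: TrunkCard()}

    def on_isup_message(msg):
        # msg is a decoded ISUP message relayed by the signaling gateway, e.g.
        # {"type": "IAM", "cic": 42, "called": "...", "calling": "..."}
        card = CIC_TO_TRUNK_CARD[msg["cic"]]
        if msg["type"] == "IAM":
            card.on_incoming_call(msg["cic"], msg["called"], msg["calling"])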
This approach seems to lead to a scalable decomposition of the problem. Carriers are free to choose best-in-class components, including: MGs with appropriate cost/performance/scalability/reliability tradeoffs, service creation environments (for creating the standards-based scripts), MCs for providing the intelligence to determine services and route calls, and SS7 gateways for dealing with the complex and varied legacy of PSTN signaling. As always, vendors are free to group components together. But if we don't carefully select the appropriate model of decomposition, then service providers may be needlessly locked into vendor relationships because of the close and hidden coupling of what could have been separable components. This includes the close and hidden coupling of software elements within the MC under the current thinking.
Mike Hluchyj
Sonus Networks, Inc.
5 Carlisle Road
Westford, MA 01886 USA
phone: +1-978-692-8999 x227
fax: +1-978-392-9118
email: mhluchyj@sonusnet.com
-----Original Message-----
From: Graham, Gregory [RICH6:B917-M:EXCH] [mailto:ggraham@americasm01.nt.com]
Sent: Thursday, November 05, 1998 8:32 PM
To: sigtran@BayNetworks.COM
Subject: RE: CAS backhaul - Why backhaul
Ray,
It is true that if ISUP and Q.931 signaling is backhauled to the MC, the software on the MC will be complex, but it is better to have the complex software run on a computer that is separate from the gateway devices. You want the complex software to run on off-the-shelf computing equipment where processing power is inexpensive and scalable. Keep the special purpose gateway equipment as simple as possible.
I do agree that you want some layering in your call processing software to separate protocol-specific processing from higher-level service logic, but I prefer both layers to run on the MC rather than splitting them between the MG and MC and having to standardize a protocol between them.
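To sketch what I mean (illustrative Python; both layers live on the MC, so the interface between them is just an internal function call rather than a standardized wire protocol):

    def q931_layer(raw_event):
        # Protocol-specific layer: normalize a raw Q.931 (or ISUP) event.
        return {"event": "setup", "called": raw_event["called_party"]}

    def service_logic(event):
        # Higher-level service logic: protocol-independent decisions.
        if event["event"] == "setup":
            return ("route", event["called"])

    # On the MC, the two layers compose directly:
    decision = service_logic(q931_layer({"called_party": "9785551234"}))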
Greg Graham
ggraham@nortel.com
(972) 684-5218
-----Original Message-----
From: Zibman, Ray [SMTP:izibman@gte.com]
Sent: Tuesday, November 03, 1998 8:22 AM
To: 'David R. Oran'; Mauricio Arango; 'Christian Huitema'
Cc: sgcp@bellcore.com; sigtran@BayNetworks.COM
Subject: RE: CAS backhaul - Why backhaul
I agree that if the call agent (MC) really needs all the rich call state semantics present in Q.931, then the easiest approach is to backhaul the whole thing. What I haven't seen is much discussion of the premise of this proposition. What does the call agent need to do its mission? What is its mission? If there are pointers to good discussion on this topic already, please point me to them.
These questions are equally relevant to backhaul from an SS7 SG to the MC.
If our goal is to build any variety of full-featured end office or tandem switch out of SGs, MGs, and MCs, then we need to get to (and from) the MC every bit of information out of any protocol used for control signaling.
A more modest and possibly more achievable goal is to try to characterize the kind of call control scenarios we want the MC to deal with. Then we could define some relevant abstractions of call state and determine what information reduction can take place inside an SG (or the equivalent part of an MG for CAS, ISDN, ...) so that the call agent gets and sends what is relevant and the MC is protected from the irrelevant. I worry about the complexity of call agent based service logic that needs to deal with these very rich protocols. I expect that if we don't do the information reduction in the SG or MG, it will take place anyway in some layer of the call agent before the information reaches service logic, but service interoperability (e.g. feature interactions, portability) will suffer.
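To make the idea concrete, here's a rough sketch (Python; the event names and mapping are invented, not a proposal) of the kind of reduction the SG or MG could perform:

    # Many raw ISUP/Q.931 messages map onto a small abstract event set,
    # so the call agent's service logic never sees the full protocols.
    ABSTRACT_EVENTS = {
        "IAM":   "call_offered",     # ISUP Initial Address Message
        "ACM":   "call_proceeding",  # ISUP Address Complete
        "ANM":   "call_answered",    # ISUP Answer
        "REL":   "call_released",    # ISUP Release
        "SETUP": "call_offered",     # Q.931 events reduce to the same set
    }

    def reduce_for_call_agent(raw_message_type):
        # Messages with no service-level meaning are absorbed in the SG/MG
        # and never reach the call agent at all.
        return ABSTRACT_EVENTS.get(raw_message_type)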
Intelligent Network reduced the several thousand states of a modern switch to call models with between 2 and 32 points in call. It is recognized that the simpler call models do not support all the services of a class 5 switch, but they support enough to be useful to service developers for a large but limited set of services. (I don't suggest using an Intelligent Network call model for an IP-based service model. I think there are differences in requirements.)
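For illustration only (these names are invented, not taken from the IN standards), a reduced call model might expose just a handful of points in call:

    from enum import Enum, auto

    class PointInCall(Enum):
        NULL = auto()
        COLLECTING_INFO = auto()
        ANALYZING = auto()
        ROUTING = auto()
        ALERTING = auto()
        ACTIVE = auto()
        RELEASING = auto()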
Let's define the mission for a call agent and the reasons for backhaul before trying to define the details.
Ray Zibman
Senior Technologist
GTE Laboratories Incorporated
40 Sylvan Road
Waltham, MA 02454
Office: (781) 466-2291
Fax: (781) 890-9320
mailto:izibman@gte.com