H.323 Robustness

Archana Nehru archie at TRILLIUM.COM
Wed May 3 20:41:20 EDT 2000

hi all,

This follows up on our discussion in the last teleconference, where we
wanted answers to the following questions:

a) Do we need "call-state synchronisation" between the two legs
   of a call when an intermediate GK fails? Is it ok to assume
   that things will sort themselves out without any significant
   problems?

b) Are there any cases where the absence of "call-synchronisation"
   procedures can lead to hung resources or to the premature
   release of a stable call?

c) Are there any other issues (e.g. degraded service) that we
   need to take care of in the absence of "call-synchronisation"?

d) If the answer to b) is yes, do we need an ACK at the H.323
   layer when H.323 runs over SCTP/DDP?

After the teleconference, we went back, did some study, and discussed the
questions with Randy and Qiaobing. Here is a summary of that:

a) While SCTP/DDP provides fault tolerance at the transport layer,
   it cannot handle the case where a GK fails after the message
   is ACKed at the SCTP layer of the GK. So in
   a case like:

                          (CRASH)     RELCOMPLETE
       EP2 <--------------  GK   <------------  EP1
      (SCTP/DDP)           (SCTP/DDP)         (SCTP/DDP)
                           (NODE FAILS)

when EP1 sends a RELCOMPLETE message to the GK, the SCTP/DDP layer at the
GK sends an SCTP-ACK back to EP1. If the GK node fails after this step,
the RELCOMPLETE message is lost. The SCTP/DDP layer cannot detect
such failures, and therefore it is up to the H.323 protocol layer
to recover from them (if required).
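To make the gap concrete, here is a minimal sketch (toy code, not real
SCTP; the class and message names are my own, purely illustrative) of the
scenario above: the transport layer ACKs a message as soon as it is
received, but the node can still fail before the H.323 layer ever sees it.

```python
class Transport:
    """Toy stand-in for the SCTP/DDP layer at the GK."""

    def __init__(self):
        self.delivery_queue = []   # messages ACKed but not yet processed

    def receive(self, msg):
        self.delivery_queue.append(msg)
        return "SCTP-ACK"          # the sender now sees the message as delivered


class Gatekeeper:
    def __init__(self):
        self.transport = Transport()
        self.processed = []        # messages the H.323 layer actually handled

    def process_pending(self):
        while self.transport.delivery_queue:
            self.processed.append(self.transport.delivery_queue.pop(0))

    def crash(self):
        # Node failure: everything still sitting in the queue is gone.
        lost = self.transport.delivery_queue
        self.transport.delivery_queue = []
        return lost


gk = Gatekeeper()
ack = gk.transport.receive("RELCOMPLETE from EP1")
assert ack == "SCTP-ACK"           # EP1's transport considers it delivered
lost = gk.crash()                  # ...but the GK fails before processing
assert "RELCOMPLETE from EP1" in lost
assert gk.processed == []          # the H.323 layer never saw the message
```

The point of the sketch is only that "transport ACKed" and "application
processed" are two different events, and a failure between them is
invisible to the transport layer.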

So, we agreed that messages at the GK can get lost even with SCTP/DDP.
Please note what this implies: in a normal GK implementation, the H.323
layer will probably use a "queue" to exchange messages with the SCTP/DDP
layer. The SCTP/DDP layer places in this "queue" every message for which
it has sent an SCTP-ACK. Therefore, when the H.323 layer fails, we lose
all the messages that were present in the "queue", and these messages may
belong to multiple calls.

In other words, failure of the H.323 layer is not trivial: it does not
mean the loss of just one message belonging to the one particular call
that was being processed at the time of the failure. It means the loss of
every message present in the "queue" at the time of the failure, and those
messages may belong to multiple calls.
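A small sketch of this multi-call point (toy data; the call references and
message names are invented for illustration): the queue between SCTP/DDP
and the H.323 layer can hold messages for several calls at once, so a
single failure damages the state of all of them.

```python
# Messages ACKed by SCTP/DDP but not yet consumed by the H.323 layer,
# keyed by (call reference, message). All of these vanish together on
# an H.323-layer failure.
queue = [
    ("call-17", "RELCOMPLETE"),
    ("call-42", "CONNECT"),
    ("call-99", "FACILITY"),
]

# Group the lost messages by call to see the scope of the damage.
lost_by_call = {}
for call_id, msg in queue:
    lost_by_call.setdefault(call_id, []).append(msg)

# One failure, three affected calls -- not just the call being processed
# at the instant the node went down.
assert len(lost_by_call) == 3
```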

Having said this, we wanted to identify the impact of the lost messages at
the GK and at the endpoints. I am enclosing a table of some of the
possible messages that can be lost and what their loss might potentially
translate to. Please note that this table is not exhaustive. As of now,
the current H.323 specs do not describe the action that should be taken
when, for a particular command/indication, the terminal does not respond
as desired. I guess the assumption is that the message is delivered
reliably. <<CallStateSync.doc>>

We would like to discuss the issues listed in the tables with the group to
get an idea of how current implementations behave if these messages are
lost. Depending on the general consensus, we can conclude whether or not an
ACK should be introduced.
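If the group does decide an H.323-layer ACK is needed, one possible shape
for it is sketched below. This is purely illustrative and not from any
spec: the class name, the 2-second timer value, and the sequence-number
scheme are all my own assumptions. The idea is simply that the sender
keeps a message until an application-level ACK confirms the peer's H.323
layer processed it, and retransmits on timeout, so a GK failure after an
SCTP-ACK no longer loses the message silently.

```python
import time


class ReliableSender:
    """Hypothetical H.323-layer ACK/retransmit bookkeeping (a sketch)."""

    def __init__(self, retransmit_after=2.0):
        self.retransmit_after = retransmit_after
        self.unacked = {}          # seq -> (message, time sent)
        self.next_seq = 0

    def send(self, msg):
        seq = self.next_seq
        self.next_seq += 1
        self.unacked[seq] = (msg, time.monotonic())
        return seq                 # would accompany the message on the wire

    def on_app_ack(self, seq):
        # The peer's H.323 layer confirmed it processed the message.
        self.unacked.pop(seq, None)

    def due_for_retransmit(self, now=None):
        now = time.monotonic() if now is None else now
        return [seq for seq, (_, sent) in self.unacked.items()
                if now - sent >= self.retransmit_after]


s = ReliableSender()
seq = s.send("RELCOMPLETE")
# No application-level ACK arrives: the message becomes due for resend.
assert s.due_for_retransmit(now=time.monotonic() + 5) == [seq]
s.on_app_ack(seq)                  # ACK received: bookkeeping cleared
assert s.unacked == {}
```

The trade-off is the usual one: an application-level ACK adds a round trip
and per-message state, which is why the question of whether the failure
cases above justify it is worth settling first.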


-------------- next part --------------
A non-text attachment was scrubbed...
Name: CallStateSync.doc
Type: application/msword
Size: 25600 bytes
Desc: not available
URL: <https://lists.packetizer.com/pipermail/sg16-avd/attachments/20000503/db1676ed/attachment-0006.doc>
