Hi Everyone:
Please find my replies inline below:
1. We like the idea you are proposing that GKs be able to pass on LRQ messages. The pathValue is analogous to the TTL in IP packets, so a packet would eventually die and not be perpetuated by network devices. While the LRQ is travelling "up", what do you propose to send the originator to keep it happy? RequestInProgress (RIP)? E.g., should a RIP be sent "down" every time there is a retry LRQ? [Radhika: Many thanks for supporting the model proposed in the contribution. Which routing protocols might be used depends on the lower networking layer; it may be RIP, OSPF, or others as appropriate. We are not proposing any routing schemes, since H.323 is independent of lower-layer networking technologies.]
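The following is a minimal, hypothetical Python sketch of the pathValue behaviour discussed in point 1: a GK that cannot resolve an LRQ decrements pathValue before forwarding it, so the request eventually dies just as an IP packet does when its TTL reaches zero, and, as one possible answer to the RIP question above, a RequestInProgress is returned toward the originator each time the LRQ is passed further "up". The class and method names, the neighbour table, and the 5-second RIP delay are illustrative assumptions, not part of the contribution or of H.225.0.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LRQ:
        seq_num: int
        dest_alias: str
        path_value: int          # remaining hop budget, analogous to an IP TTL
        reply_address: str       # where LCF/LRJ/RIP are returned

    class Gatekeeper:
        def __init__(self, name, registrations, next_hop=None):
            self.name = name
            self.registrations = registrations   # alias -> transport address
            self.next_hop = next_hop             # next GK "up", or None

        def send(self, msg_type, to, seq_num, **info):
            # Stand-in for real RAS message transmission.
            print(f"{self.name}: {msg_type} -> {to} (seq={seq_num}) {info}")

        def handle_lrq(self, lrq: LRQ):
            addr: Optional[str] = self.registrations.get(lrq.dest_alias)
            if addr is not None:
                self.send("LCF", lrq.reply_address, lrq.seq_num, address=addr)
                return
            if lrq.path_value <= 1 or self.next_hop is None:
                # Hop budget exhausted (or nowhere left to forward): the request dies here.
                self.send("LRJ", lrq.reply_address, lrq.seq_num)
                return
            # Keep the originator happy while the LRQ keeps travelling "up".
            self.send("RIP", lrq.reply_address, lrq.seq_num, delay_ms=5000)
            lrq.path_value -= 1
            self.next_hop.handle_lrq(lrq)

    # Example: GK1 cannot resolve "bob", GK2 can.
    gk2 = Gatekeeper("GK2", {"bob": "10.0.0.5:1720"})
    gk1 = Gatekeeper("GK1", {}, next_hop=gk2)
    gk1.handle_lrq(LRQ(seq_num=1, dest_alias="bob", path_value=5, reply_address="caller-gk"))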
2. We do not see how the following messages can be passed from one GK to another: GRQ/GCF/GRJ, RRQ/RCF/RRJ, ARQ/ACF/ARJ, BRQ/BCF/BRJ. We do not see, in the present H.323 model, how or what benefit there is to gain by passing these messages. Maybe you can show us a scenario where this is utilized. In the current H.323 model, an endpoint only talks RAS with the GK it is registered with. For example:
* Endpoint A, which is registered to GK1, sends an ARQ to GK1 to call B.
* If GK1 cannot resolve B's address, it shall send an LRQ, and not pass on the ARQ to the next GK.
[Radhika: The proposed model is flexible enough that a zone boundary can be physical or logical. This distributive model does not require an H.323 entity to register with a GK that has "geographical" or "physical" proximity; zones can be logical rather than being constrained to a boundary of physical proximity. Consider an ARQ message sent by an H.323 entity to a GK. The first GK that receives the ARQ may not have the authority to respond, because the H.323 entity may not be registered with it, so the ARQ has to be passed to the next GK. (If the first receiving GK does have the authority, i.e., the H.323 entity is registered with it, the ARQ need not be passed any further.) The GK that has the authority will respond to the ARQ; that GK is also the serving GK. The serving GK may be the first GK, or it may sit several transit GKs away along the path between the calling entity and the serving GK. Moreover, and most importantly, the ARQ also needs to confirm the bandwidth. In a multiple-GK environment, the source-destination path may cover multiple zones, each with its own GK, so the serving GK has to work cooperatively with all other GKs to confirm that the bandwidth in every zone along the source-destination path is sufficient to accept the call before the ACF is sent (see also Section 3.5 of the contribution); a sketch of this is given below. This clearly shows the power of the distributive GK model proposed in the contribution for solving complex problems in the context of a large-scale network. The other messages can be explained in a similar way as to why they need to be passed from one GK to another.]
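Below is a hypothetical Python sketch of the distributive behaviour described in the reply to point 2; it illustrates the proposal, not current H.323 behaviour. A transit GK that does not serve the requesting endpoint passes the ARQ toward the serving GK, and the serving GK admits the call only if every zone crossed can carry the requested bandwidth. How the GKs cooperate on bandwidth is only outlined in the contribution, so the per-zone check here (over the GKs the ARQ traversed) is an assumption, as are all class and method names.

    from dataclasses import dataclass

    @dataclass
    class ARQ:
        endpoint: str
        dest_alias: str
        bandwidth: int           # requested bandwidth

    class ZoneGK:
        def __init__(self, name, registered_endpoints, available_bw, next_gk=None):
            self.name = name
            self.registered = registered_endpoints   # endpoints this GK serves
            self.available_bw = available_bw         # spare bandwidth in this zone
            self.next_gk = next_gk                   # next GK toward the serving GK

        def zone_can_accept(self, bandwidth):
            return self.available_bw >= bandwidth

        def handle_arq(self, arq: ARQ, path_gks):
            path_gks = path_gks + [self]
            if arq.endpoint not in self.registered:
                # Transit GK: not authorized to answer, so pass the ARQ on.
                if self.next_gk is None:
                    return ("ARJ", f"{self.name}: cannot reach the serving GK")
                return self.next_gk.handle_arq(arq, path_gks)
            # Serving GK: confirm bandwidth cooperatively across every zone crossed.
            if all(gk.zone_can_accept(arq.bandwidth) for gk in path_gks):
                return ("ACF", f"{self.name}: call admitted at {arq.bandwidth}")
            return ("ARJ", f"{self.name}: insufficient bandwidth along the path")

    # Example: endpoint A is registered with GK3; GK1 and GK2 are transit GKs.
    gk3 = ZoneGK("GK3", {"A"}, available_bw=1280)
    gk2 = ZoneGK("GK2", set(), available_bw=1280, next_gk=gk3)
    gk1 = ZoneGK("GK1", set(), available_bw=640, next_gk=gk2)
    print(gk1.handle_arq(ARQ(endpoint="A", dest_alias="B", bandwidth=640), []))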
3. Cache management is an area we are not too comfortable with either. Anywhere there is data with some persistence, it can be stale. [Radhika: Cache management is an option that may help resolve things faster, and the model provides flexibility in how that option is used. That is, the model does not mandate that a cache be maintained for every item; in the extreme case, the messages are simply destined to the serving GK. I completely agree with you that cache management has to be looked at very carefully for each item. If we see that cache management for a certain item does not work well, we will not use caching for that item; rather, the serving GK will be authorized to handle it. We can also categorize cache management using certain rules: authoritative and non-authoritative. That is, if a transit GK is authorized to provide a reply (e.g., an address resolution) on behalf of the serving GK, the transit GK will be able to send the reply. In the non-authoritative case, the transit GK will not be able to send the reply, and only the serving GK will be able to do so. We can extend this idea a little further to satisfy all conditions.]
* What order of magnitude are you proposing for the cache TTL: seconds, minutes, or hours? [Radhika: I have not thought about an exact figure. This has been left as a design parameter; that is, H.323 will not specify its exact value.]
* An address resolution request (ARQ, LRQ) may not necessarily resolve to an IP address of the destination. Consider the following cases:
* The GK may return the address of the endpoint (most of the cases), or
* The GK's own address with a dynamic port for this call (in the case of a GK-routed model) [Radhika: In the case of a dynamic port per call, I guess it would be better for the serving GK to return the address; the cached port address will have no significance unless the transit GK knows a priori how ports will be allocated by the serving GK.], or
* The GK can return the address of an endpoint that belongs to a hunt group of endpoints on a round-robin basis (e.g., the alias is "411" and the GK is performing line hunting for the next available directory-assistance operator). [Radhika: In this case, the serving GK will be the appropriate entity to provide the address resolution.]
If there is any caching done anywhere in the network, the last two cases will fail.
* In H.323, address resolution for the most part does not benefit from caching. When endpoint A calls endpoint B, the address resolution is done once at the beginning of the call. Then the call continues for 1, 10, 60 minutes, or longer; no more address resolution from A to B is involved while the call is up. OK, maybe C does an address resolution to B while A and B are in the call. If caching is done, maybe C's address resolution is faster. However, when C uses the information to call B, most of the time it will fail anyway, because B is busy (unless B has two lines). [Radhika: In this case, I guess the result will be the same whether the address resolution is provided by the transit GK or by the serving GK. The only difference is that a call will be set up if the address resolution is provided by the transit GK, and the caller will find the line busy; in the latter case, no call will be set up, since the serving GK will send an ARJ.] In general, we think that address caching can actually be detrimental. [Radhika: It depends on how cache management is used for each item. You are right that we cannot use cache management in all situations blindly. We may propose some rules and exceptions, as indicated above; a sketch of such a rule follows below. One may also look at how the Next Hop Servers (NHSs) use the cache management specified in IETF's NHRP in the context of IP and ATM networks; in an ATM network, a virtual circuit remains up for the entire duration of the call (seconds, minutes, hours, etc.). We can find an analogy to our situation in the NHRP cache management scheme and propose a similar scheme for our items as appropriate.]
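To make the authoritative/non-authoritative rule and the TTL question concrete, here is an illustrative Python sketch only; the contribution leaves the TTL value as a design parameter, and all names used here are assumptions. A transit GK caches, with a TTL, only the resolutions it is authorized to answer on behalf of the serving GK, while per-call dynamic-port and hunt-group (round-robin) results are marked non-authoritative and never cached, so those requests always reach the serving GK.

    import time

    class ResolutionCache:
        def __init__(self, ttl_seconds):
            self.ttl = ttl_seconds            # design parameter, not fixed by H.323
            self.entries = {}                 # alias -> (address, expiry time)

        def store(self, alias, address, authoritative):
            if not authoritative:
                return                        # dynamic-port / hunt-group results are never cached
            self.entries[alias] = (address, time.monotonic() + self.ttl)

        def lookup(self, alias):
            entry = self.entries.get(alias)
            if entry is None:
                return None                   # miss: forward the request toward the serving GK
            address, expiry = entry
            if time.monotonic() > expiry:
                del self.entries[alias]       # stale: drop and fall through to the serving GK
                return None
            return address

    cache = ResolutionCache(ttl_seconds=300)
    cache.store("411", "operator-pool", authoritative=False)   # not cached
    cache.store("alice", "10.0.0.7:1720", authoritative=True)  # cached for 5 minutes
    print(cache.lookup("411"), cache.lookup("alice"))          # -> None 10.0.0.7:1720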
If you have any more questions, please let me know.
Thanks and regards,
Radhika R. Roy
AT&T, USA
Tel: +1 732 949 8657
Email: rrroy@att.com
From: Santo Wiryaman [SMTP:swiryama@VIDEOSERVER.COM]
Reply To: Mailing list for parties associated with ITU-T Study Group 16
Sent: Wednesday, August 19, 1998 9:00 AM
To: ITU-SG16@MAILBAG.INTEL.COM
Subject: Comments on AT&T proposal
Dear Colleagues:
The following are some comments on the AT&T IGCP contribution by Radhika R. Roy:
1. We like the idea of GKs passing on LRQ messages. The pathValue is analogous to TTL in IP packets, so a packet would eventually die and not be perpetuated by network devices. During the time the LRQ is travelling "up", what do you propose to send the originator to keep it happy? RequestInProgress (RIP)? E.g., should a RIP be sent "down" every time there is a retry LRQ?
2. We do not see how the following messages can be passed from one GK to another: GRQ/GCF/GRJ, RRQ/RCF/RRJ, ARQ/ACF/ARJ, BRQ/BCF/BRJ. We do not see, in the present H.323 model, how or what benefit there is to gain by passing these messages. Maybe Mr. Roy can show us a scenario where this is utilized. In the current H.323 model, an endpoint only talks RAS with the GK it is registered with. For example:
* Endpoint A, which is registered to GK1, sends an ARQ to GK1 to call B.
* If GK1 cannot resolve B's address, it shall send an LRQ, and not pass on the ARQ to the next GK.
3. Cache management is an area we are not too comfortable with either. Anywhere there is data with some persistence, it can be stale.
* What order of magnitude are you proposing for the cache TTL: seconds, minutes, or hours?
* An address resolution request (ARQ, LRQ) may not necessarily resolve to an IP address of the destination. Consider the following cases:
* The GK may return the address of the endpoint (most of the cases), or
* The GK's own address with a dynamic port for this call (in the case of a GK-routed model), or
* The GK can return the address of an endpoint that belongs to a hunt group of endpoints on a round-robin basis (e.g., the alias is "411" and the GK is performing line hunting for the next available directory-assistance operator).
If there is any caching done anywhere in the network, the last two cases will fail.
* In H.323, address resolution for the most part does not benefit from caching. When endpoint A calls endpoint B, the address resolution is done once at the beginning of the call. Then the call continues for 1, 10, 60 minutes, or longer; no more address resolution from A to B is involved while the call is up. OK, maybe C does an address resolution to B while A and B are in the call. If caching is done, maybe C's address resolution is faster. However, when C uses the information to call B, most of the time it will fail anyway, because B is busy (unless B has two lines). In general, we think that address caching can actually be detrimental.
Best Regards,
Santo Wiryaman
Videoserver Inc.