DEVICE, METHOD AND ARTICLE OF MANUFACTURE FOR CALL SETUP PACING IN CONNECTION-ORIENTED NETWORKS
This invention relates to signaling congestion control in connection-oriented networks and more particularly to controlling network congestion due to a high number of network-attached devices demanding simultaneous access to a common network-attached resource.
Modern data terminal equipments (DTEs) attached to connection-oriented networks have a very high processing capacity and are consequently very demanding in terms of connections to the network. In order to establish a connection, a data terminal equipment must exchange a signaling protocol message with the network, commonly called a "call setup message". Network access switching nodes can support a large number of attached-to devices and therefore may process hundreds of call setup messages simultaneously. In large networks, nodes may even process thousands of such call setup messages in a very short time.
In some situations, a burst of simultaneous call setup messages may flow into a network causing congestion. For example, when a common resource such as a file server goes down, all attached-to devices will attempt to reconnect simultaneously. This congestion may take place either at a network input access node that supports numerous devices requesting the common resource or at the output access node where the common resource attaches to the network.
Such a situation arises in a LAN (local area network) environment based on a protocol such as ATM (asynchronous transfer mode). Most of the existing ATM LANs rely on the emulation of well-known higher layer LAN protocols such as Ethernet, Token-Ring or Internet Protocol (IP), thereby creating a virtual LAN over the ATM layer. This LAN emulation is enabled by using dedicated protocols, the most widespread being the so-called "Classical IP over ATM" protocol and the so-called "LAN Emulation over ATM" protocol. In each case, a protocol server is required to manage the virtual LAN over the ATM layer, and consequently, any terminal device that wants to enter the virtual LAN must connect to this protocol server prior to proceeding with any other activity such as data transmission. Thus, signaling congestion may occur when too many data terminal equipments (DTEs) try to connect simultaneously to the protocol server.
Several approaches can be used to address this type of signaling congestion problem. One approach consists in "just doing nothing", that is, letting the network recover from congestion by itself. When bursts of call setup messages are received, many of them are rejected. The rejected devices will retry to connect, and hopefully the connection requests will desynchronize over time so as not to create a congestion state again. Unfortunately, there is no guarantee that the connection requests will desynchronize. Also, the time interval for the requests to desynchronize is not determinable. Furthermore, this approach is not scalable in that if more devices share the same resource, the congestion will worsen. Therefore, this approach is not satisfactory in the context of a high speed/performance network.
Another approach is to increase the processing power of the switching nodes. This would be acceptable if networks were static and not continuing to increase in size and utilization. Networks grow faster than the processing power of the switching nodes, which makes this approach a short-term solution and, therefore, unsatisfactory.
Still another approach is to implement random timers in the terminal devices to manage the retry procedure. The random timers can induce the desynchronizing of all the source devices requesting connections, and therefore, they can naturally pace the call setup messages. This approach is similar to the so-called Ethernet Backoff Timer method which is commonly implemented to solve access collision problems. However, it has the disadvantage that it depends on changes to the devices that connect to the network. This makes implementation difficult given the multi-vendor, multi-product nature of most devices attaching to a network. Furthermore, no standard appears to exist that requires the terminal devices to implement such mechanisms. Finally, it is not desirable to rely on the behavior of unknown devices to protect a switching node and the network from call setup congestion.
Yet a further approach consists in limiting the number of call setup messages in the switching node in order to protect it against an overflow of such call setup messages. This solution is not very efficient since it induces a random discarding of the pending call setup requests. This can be prejudicial, for example, when, as it may happen, a group of connections has to be established in such a way that, if only one connection from the group fails, then the whole group is torn down and must be re-established. For instance, this is the case for the control connections in LAN Emulation. Furthermore, this technique is not fair, as all the users are penalized while only a few of them may have caused the congestion.
Therefore, there is a need for a solution to the above problems of the prior art that provides efficient protection to network devices from call setup overflow, while assuring the scalability of the networks. Such a solution is provided by the present invention as described hereinafter.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a method for keeping control of concurrent connection setup requests in a switching node of a connection-oriented network to provide efficient protection against signaling congestion.
Another object of the invention is to provide a switching node of a connection-oriented network with a system to protect efficiently against call setup message overflow.
In accordance with the appended set of claims, these objects are achieved by providing a method and a system to prevent signaling congestion in a connection-oriented network node in situations where a plurality of network-attached data terminal equipments (source DTEs or source devices) concurrently request a connection to at least one network-attached data terminal equipment (destination DTE or destination device), each of the source DTEs sending call setup messages (CSMs) through the network node to the at least one destination DTE. The CSMs are processed by the network to establish the requested connections. The method comprises the steps of: predefining a threshold number (Max) as the maximum allowed number of CSMs from the source DTEs that are actually being processed by the network at a given instant, and predefining a time frame (Window) as the time frame within which no more than Max CSMs are accepted by the network for processing; detecting each new incoming CSM in the network node; rejecting each new incoming CSM if a number of CSMs equal to Max are already being processed by the network, or if less than Max CSMs are actually being processed by the network while Max CSMs have already been accepted for processing during the current Window; and accepting each new incoming CSM otherwise. The step of detecting each new incoming CSM is optionally followed by a further step of filtering each new incoming CSM to determine whether the CSM satisfies at least one predefined filtering criterion, accepting the incoming CSM if it does not satisfy any of said at least one predefined criterion, or proceeding further with the following steps of the method otherwise.
BRIEF DESCRIPTION OF THE DRAWINGS
An embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, wherein:
FIG. 1 shows the call pacing system of the present invention within a switching node of a high speed packet switching network;
FIG. 2 is a flow chart of the call pacing system;
FIG. 3 is a flow chart showing the connection setup acknowledgments processing;
FIG. 4 is a block diagram showing an example of a situation where the call pacing system of the invention may be used to solve signaling congestion problems.
DETAILED DESCRIPTION OF THE INVENTION
The call pacing system of the present invention is embodied in an ATM high speed network and, more particularly, it is implemented at network node level within the Call Admission Control (CAC) module, that is, the module that determines whether a call can be accepted. Accordingly, the call pacing system of the present invention can be considered as a CAC extension. Furthermore, the call pacing system may be implemented in the access nodes of the network where the data terminal equipments or devices attach, or in every node of the network (access or intermediate node). In the preferred embodiment, the call pacing system is implemented in the access nodes in the form of a software system.
The preferred embodiment of the present invention contains one or more software systems or software components or functions. In this context, a software system is a collection of one or more executable software programs, and one or more storage areas (for example, RAM, ROM, cache, disk, flash memory, PCMCIA, CD-ROM, server memory, ftp-accessible memory, etc.). In general terms, a software system should be understood to comprise a fully functional software embodiment of a function or collection of functions, which can be added to an existing processing system to provide new function to that processing system. A software system is thus understood to be a software implementation of a function which can be carried out in a processor system providing new functionality. It should be understood in the context of the present invention that delineations between software systems are representative of the preferred implementation. However, the present invention may be implemented using any combination or separation of software or hardware systems. Software systems may be distributed on a computer usable medium such as floppy disks, diskettes, CD-ROMs, PCMCIA cards, flash memory cards and/or any other computer or processor usable medium. Note that the software system may also be downloaded to a processor via a communications network or from an Internet node accessible via a communications adapter.
Referring to FIG. 1, a switching node 10 of a high speed packet switching network comprises an incoming interface 11 which receives incoming packets through one or more input ports, and an outgoing interface 12 which forwards received packets through one or more output ports. The call pacing may be implemented at three different points (referred to herein as call pacing points) within switching node 10. The first call pacing point "ICP" stands for "Incoming Call Policing". In this configuration, the call pacing is performed at one or more input ports of the switching node 10 and applies to the incoming call setup messages. The second call pacing point "OCS" stands for "Outgoing Call Shaping". In this configuration, call pacing is performed at one or more output ports and the pacing is performed on outgoing call setup messages. The last call pacing point "GCP" stands for "Global Call Pacing". In this configuration, call pacing is performed globally in the switch on all I/O ports in the same way. Finally, call pacing of the present invention may be performed at each call pacing point independently of the other call pacing points. Any combination of call pacing points in which call pacing is performed concurrently may also be obtained.
Call pacing of the present invention makes use of a "thresholding" technique and a "windowing" technique, as explained hereafter. A counter CNT is dedicated to counting the call setup messages (in packet or cell format) which arrive in the switching node. A predefined number Max, called the "call setup threshold", defines a maximum allowed number of concurrent connection requests (i.e., call setup messages) that are actually being processed by the network at a given time. Furthermore, a time frame called "Window" is defined as the time frame within which no more than Max call setup messages are allowed to be accepted by call pacing for processing, even if less than Max concurrent connections are actually being set up. Indeed, when a connection is set up after the corresponding call setup message has been processed by the network, the destination data terminal equipment (DTE) sends an acknowledgment message (in the form of packets or cells) through the switching node to the source DTE which requested the connection. These acknowledgment messages are used to decrement counter CNT.
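The thresholding and windowing state described above might be held per call pacing point in a structure such as the following C sketch; the structure and field names are illustrative assumptions, not part of the specification.

```c
#include <stdbool.h>

/* Illustrative pacing state for one call pacing point; the names,
 * types and the example values below are assumptions. */
struct call_pacing {
    int  max;             /* "Max": the call setup threshold              */
    int  cnt;             /* "CNT": paced CSMs currently being processed  */
    int  win_accepted;    /* CSMs accepted during the current Window      */
    long window_ms;       /* length of the "Window" time frame            */
    bool window_running;  /* whether a Window is currently open           */
};
```

A node implementing several call pacing points (ICP, OCS, GCP) would keep one such structure per point, since each point paces independently.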
Additionally, the present invention may use a "call pacing filter" to select the call setup messages to be paced. The filtering performed is dependent on the particular implementation of the present invention and the nature of the network. For example, call setup messages may be filtered according to the destination DTE address, as in one preferred embodiment, or according to characteristics of the connection requested, such as the type of traffic (e.g., CBR — constant bit rate, VBR — variable bit rate) or the associated QoS (quality of service). If there is more than one call pacing point in the switching node, the type of filter may be different for each call pacing point. It should be noted that, while in the preferred embodiment of the present invention a filter has been implemented, the invention may be practiced without any filter at all.
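A destination-address filter of the kind used in the preferred embodiment might be sketched in C as follows; the address constant, structure and function names are assumed for the example only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical destination address of a file server whose CSMs are
 * to be paced; the value is an assumption for the example. */
#define PACED_SERVER_ADDR 0x0A000001u

struct csm {
    uint32_t dest_addr;   /* destination DTE address carried in the CSM */
};

/* Returns true when the CSM satisfies a filtering criterion and must
 * therefore go through call pacing; false means it is accepted
 * immediately without pacing. */
bool csm_matches_filter(const struct csm *m)
{
    return m->dest_addr == PACED_SERVER_ADDR;
}
```

A traffic-type or QoS filter would follow the same pattern, testing other fields of the connection request instead of the destination address.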
Acceptance of a call setup message (CSM) by the call pacing of the present invention means that the procedures applied to a CSM are performed and, in normal conditions, the CSM is forwarded to the next node of the path towards the destination DTE. Conversely, rejection of a CSM by the call pacing of the present invention means that a message is sent to the source DTE to notify it that the connection request is refused, and the CSM is discarded.
FIG. 2 shows the general flow chart of the call pacing procedure of the present invention. The procedure starts with an init step in box 20 where counter CNT is reset (i.e., set to zero). The init step may be run when the first call setup message is received. The time frame (Window) may also be started when the first call setup message is received. In step 21 a check is made to detect whether a new CSM was received at the call admission control module of the switching node. If no new CSM was received (No), the system recycles for new CSM detection. If a new CSM is detected (Yes), step 22 is performed, which filters the received CSM to determine whether this CSM corresponds to a connection request satisfying at least one of the filtering criteria (i.e., a connection request to be processed by the call pacing system). A filtering criterion may be, for example, that the CSM destination address is a predetermined file server attached to the network. If the CSM does not satisfy the filtering criteria (No), the CSM is accepted in step 23, the CSM is processed for the corresponding connection to be set up, and the procedure returns to wait for a new CSM in step 21 via A. However, if the CSM in step 22 satisfies one of the filtering criteria (Yes), the CSM must be "paced" and step 24 is entered. In step 24, the value of the to-be-paced call setup messages counter CNT is compared to threshold value Max, which defines the maximum allowed number of concurrent CSMs that are actually being processed by the network at the present instant. If counter CNT has already reached number Max (Yes), the current call setup message is rejected in step 25 and the procedure returns to wait for a new CSM in step 21 via A. If counter CNT has not reached Max (No), that is, CNT is strictly less than Max (but greater than or equal to zero), step 26 tests whether a current time frame (Window) is running or not.
If no current time frame (Window) is running (No), a new Window is started in step 27, and in step 28 the CSM is flagged in a memory table and the message counter is incremented, as a new connection corresponding to the current CSM will be set up; the CSM is then accepted at step 23 and processed for the corresponding connection to be set up. The flag set in step 28 is further used each time a call setup acknowledgment is received indicating that a connection has been set up, to determine whether the corresponding CSM was processed or not by the call pacing system. If so, counter CNT will be decremented, as a concurrent connection request has been processed and completed. The acknowledgment processing procedure is detailed further in the description in connection with FIG. 3. Returning to step 26, if a current Window is running (Yes), step 29 tests whether counter CNT has already reached threshold Max during the current Window (even if the current value of counter CNT is less than Max). If so (Yes), the current call setup message is rejected in step 25 and the procedure returns to wait for a new CSM in step 21 via A. If not (No), step 28 is entered to flag the current CSM as explained above and increment counter CNT, before entering box 23 to accept the current CSM and process it for the corresponding connection to be set up. Note that, as described above, only paced CSMs are counted.
As previously said, each time a connection between a source data terminal equipment (DTE) and a destination DTE is set up, the destination DTE sends to the source DTE an acknowledgment message (in the form of packets or cells). When the switching node in which the invention is implemented receives such an acknowledgment message, it is determined whether the corresponding connection call setup message has been processed by the call pacing system. This is done by reading a memory table containing identifiers or indicators of connections that were flagged after their call setup message was accepted and processed through the call pacing system (see FIG. 2, step 28). If so, counter CNT will be decremented, as a connection request has been processed and completed.
FIG. 3 depicts a flow chart illustrating the connection setup acknowledgments processing according to the invention. In step 30, a call setup acknowledgment message is received. Then, in step 32, the corresponding connection is identified. Next, in step 33, it is determined whether the identified connection is a "flagged" connection as explained above. If so (Yes), counter CNT is decremented in step 34. If the identified connection has not been flagged (No), the process recycles to step 30 to receive a new acknowledgment message.
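The acknowledgment processing of FIG. 3 might be sketched in C as follows; the table size, variable names and the use of an integer connection identifier are assumptions for the example, not details from the specification.

```c
#include <stdbool.h>

#define MAX_CONNS 256   /* illustrative table size, not from the text */

/* Memory table of "flagged" connections, set when a paced CSM is
 * accepted (FIG. 2, step 28); all names here are assumptions. */
bool flagged[MAX_CONNS];
int  cnt;               /* counter CNT */

/* FIG. 3 sketch: on receipt of a setup acknowledgment for the
 * connection identified as `id`, decrement CNT only if the
 * corresponding CSM was paced. */
void on_setup_ack(int id)
{
    if (id < 0 || id >= MAX_CONNS)
        return;             /* unknown connection identifier */
    if (flagged[id]) {      /* step 33: was this CSM paced?  */
        flagged[id] = false;
        cnt--;              /* step 34: decrement CNT        */
    }
}
```

Acknowledgments for connections that were never flagged (e.g., CSMs accepted without pacing in step 22) leave CNT untouched, matching the "No" branch of step 33.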
Following is pseudo-code illustrating the implementation of the call pacing system of the invention as illustrated in FIG. 2. This pseudo-code may be implemented using a programming language such as the well-known C Language.
/* pseudo-code */
begin
    CNT = 0; WIN = 0;                        /* Init */
    while (New CSM received) do
        if (Filter = "no") then ACCEPT_CSM   /* CSM not to be paced */
        else if (CNT = Max) then REJECT_CSM
        else /* CNT < Max */
            if (Window Not Running) then
                Start Window; WIN = 0;
                Flag CSM; CNT = CNT + 1; WIN = WIN + 1; ACCEPT_CSM
            else /* window running */
                if (WIN = Max) then REJECT_CSM
                else Flag CSM; CNT = CNT + 1; WIN = WIN + 1; ACCEPT_CSM
end
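The pseudo-code above can be fleshed out into a compilable C sketch of the FIG. 2 decision; the window timer is simplified to a boolean, the threshold value is assumed, and all names are illustrative rather than taken from the specification.

```c
#include <stdbool.h>

#define MAX 100             /* call setup threshold "Max" (assumed value) */

int  cnt;                   /* CNT: paced CSMs currently being processed  */
int  win;                   /* CSMs accepted during the current Window    */
bool window_running;        /* a real implementation would use a timer    */

enum verdict { ACCEPT_CSM, REJECT_CSM };

/* Decide the fate of one incoming CSM per the FIG. 2 flow chart.
 * `to_be_paced` is the result of the optional filtering step 22. */
enum verdict pace_csm(bool to_be_paced)
{
    if (!to_be_paced)
        return ACCEPT_CSM;          /* CSM not subject to pacing        */
    if (cnt >= MAX)
        return REJECT_CSM;          /* step 24: Max CSMs in progress    */
    if (window_running && win >= MAX)
        return REJECT_CSM;          /* step 29: Max reached this Window */
    if (!window_running) {          /* step 27: start a new Window      */
        window_running = true;
        win = 0;
    }
    cnt++;                          /* step 28: flag CSM, count it      */
    win++;
    return ACCEPT_CSM;
}
```

When the Window timer expires, an implementation would clear `window_running` (and `win`); `cnt` is decremented only by the acknowledgment processing of FIG. 3.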
FIG. 4 provides a block diagram showing an example of an environment where the call pacing techniques of the present invention may be used to reduce signaling congestion problems. This example assumes a large number of data terminal equipments (DTEs) are attached to network access nodes in an ATM network. The DTEs want to simultaneously access a file server using Classical IP over ATM protocol (CIP protocol).
Referring to FIG. 4, a number N (where N is an integer) of data terminal equipments (DTEs) 41 connect to network 40 through one or more network access nodes (not shown), a CIP server 42 attaches to the network through access node 46, and a file server 43 attaches to the network through access node 47. According to the CIP protocol, each of the DTEs 41 must first establish a connection 44 to the CIP server 42 (i.e., a registration step), referred to as a "control connection". After a DTE has registered with the CIP server 42, it must establish a data connection 45 (also called a "user connection") to file server 43. Note that the file server 43 must have previously registered with CIP server 42.