|Publication number||US7333425 B2|
|Application number||US 10/685,414|
|Publication date||Feb 19, 2008|
|Filing date||Oct 16, 2003|
|Priority date||Nov 19, 2002|
|Also published as||DE60235094D1, EP1422968A1, EP1422968B1, US20060126503|
|Inventors||Martin Huck, Dieter Beller, Stefan Ansorge|
The invention is based on a priority application EP 02 360 317.8 which is hereby incorporated by reference.
The present invention relates to the field of telecommunications and more particularly to a method and corresponding network devices for performing fault localization in a transmission network, preferably in an automatically switched optical network.
Transmission networks serve for the transport of user signals, commonly referred to as tributary signals, in the form of multiplexed transmission signals. A transmission network consists of a number of physically interconnected network elements such as add/drop multiplexers, terminal multiplexers, and cross-connects. The physical interconnection between two network elements is referred to as a section or link, while the route a particular tributary takes through the transmission network from end to end is known as a path. A path is represented by a multiplexing unit such as a virtual container (VC-N) with its associated path overhead (POH) in SDH (Synchronous Digital Hierarchy). In contrast, a section is represented by an entire transmission frame such as a synchronous transport module (STM-N) with its associated section overhead (SOH).
A very basic aspect of transmission networks is availability of service. Hence, a transmission network needs to provide the means and facilities to ensure sufficient availability. Typically, these network mechanisms are classified as either protection or restoration. The principle of both is to redirect traffic from a failed link or path to a spare link or path, respectively. Restoration involves network management interaction to determine an alternative route through the network after the occurrence of a failure, while protection uses dedicated protection resources that are already available and established in the network before a failure occurs.
In order to restore a failed link or the paths on the link, the management plane needs to locate the failure in the network. This is typically achieved by analyzing alarm reports generated by the various network elements. In particular, monitoring functions are provided at several network elements along a particular path. In the case of a failure, each of these monitors submits an alarm report, so the management plane is flooded with a large number of alarm reports. Fault localization is complex because the manager has to process and correlate all these alarm reports. Moreover, restoration must be delayed because the alarm reports will arrive within a certain time window.
Furthermore, fault localization by means of alarm report analysis is only feasible in centrally managed transmission networks. However, transmission networks are currently being developed where at least some functions residing in the management plane are no longer centralized in a central network management system but will be distributed across the entire network. In such a distributed control plane, a different fault localization mechanism is needed. In a first step, network elements adjacent to the fault location have to detect the failure and update their routing databases accordingly. In a second step, the routing database updates must be propagated throughout the entire network by means of routing protocols, which are running in the control plane of the network. It can take a significant amount of time to propagate the new link state information through the network upon occurrence of a failure. Moreover, this update process is not deterministic.
It is therefore an object of the present invention to provide a method and corresponding network devices which allow simplified and faster fault localization in a transmission network and which can also be employed in a distributed network management plane.
These and other objects that appear below are achieved through the use of a Tandem Connection along a segment of a transmission path to be monitored, non-intrusive intermediate Tandem Connection monitors, and temporary Tandem Connection sources created along the path segment in the case of a failure in order to forward information about the fault location at least in the downstream direction, but preferably also in the upstream direction.
In particular, a network element adjacent to the failure detects the failure and activates a temporary tandem connection source function. This function creates a valid tandem connection signal and inserts therein a failed link identifier corresponding to the failed link. The network element terminating and monitoring the tandem connection generates an alarm report identifying the failed link as indicated by the failed link identifier.
The invention has the advantage that only the tandem connection terminating network elements of a failed path will submit a fault report to the centralized manager. Moreover, restoration activities can be started earlier because no dedicated fault localization procedure must be performed in the manager upon reception of an alarm report. The average path down time is thus shortened.
In another aspect of the present invention, the failed link identifier is used to update local routing databases of intermediate network elements along a failed transmission path. This is particularly useful in label switched transmission networks, e.g., in a GMPLS/ASON. Such networks typically have a distributed control plane and thus no alarm report is sent to a central management system, but the routing information has to be updated in each network element along the failed path.
This second aspect has the advantage that information about the fault location is also available for the “local” nodes along the path. The border node is thus able to perform an optimized, i.e., failure diverse, restoration.
Other objects and advantages of the present invention will become apparent upon reading the following detailed description of preferred embodiments.
Preferred embodiments of the present invention will now be described with reference to the accompanying drawings in which
If the connection from N1 to N4 fails anywhere, subsequent network elements will typically create secondary alarm reports towards the network management plane, which then has to find out the exact location of the primary fault from all these alarms. In order to simplify this fault localization process, use is made of the tandem connection monitoring functions specified in ITU-T recommendations G.707 (SDH), G.709 (OTH), G.798 (OTH Atomic Functions), and G.783, which are incorporated by reference herein.
Tandem connection monitoring in transmission networks utilizing SDH (Synchronous Digital Hierarchy) uses the N1 byte of the path overhead (POH) of a virtual container (VC-4 or VC-3) and creates a 76-byte multiframe that is periodically repeated in the N1 byte. On VC-12 or VC-2 level, the N2 byte is available for this function. A tandem connection is usually defined on a segment of a path, also referred to as a trail, and exists for the purpose of alarm and performance monitoring. For instance, a tandem connection can be transported over a linked sequence of sections on a transmission path. A similar functionality is achieved through the tandem connection overheads in the OTH.
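The repetition of the multiframe over a single overhead byte can be pictured with this small sketch (illustrative only; the actual bit allocation of the N1/N2 multiframe is defined in G.707 and is not reproduced here):

```python
# Sketch: a message cycled through one overhead byte (e.g. N1) across the
# 76-frame multiframe. The real N1 semantics (IEC, TC-REI, OEI, access point
# identifier, etc.) are defined in G.707 and not modeled here.
MULTIFRAME_LEN = 76

def overhead_byte_for_frame(message: bytes, frame_number: int) -> int:
    """Overhead byte value carried in a given frame; the message is padded
    to 76 bytes and repeats every multiframe."""
    padded = message.ljust(MULTIFRAME_LEN, b"\x00")[:MULTIFRAME_LEN]
    return padded[frame_number % MULTIFRAME_LEN]
```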
However, traditional tandem connection monitoring can only detect defects on a tandem connection, not the exact location of a fault. Thus, with traditional tandem connection monitoring, the management system would have to create a separate Tandem Connection for each link and for each network element along the path in order to monitor the particular links for failures. In this case, error reports for a failure would be submitted only from the affected tandem connection, and direct fault localization would thus be possible. This solution has, however, the disadvantage that the overall performance of the domain, i.e., from ingress to egress network element, cannot be monitored and that the fault location is unknown at the nodes along the path. Thus it would not be possible to start a source-based rerouting from a border node in the case of a failure.
Another basic idea of the present invention is thus to introduce non-intrusive tandem connection monitors along a tandem connection created on the path segment to be monitored. In case these monitors detect a failure in the tandem connection, temporary tandem connection sources are created to mask the alarm signal on the tandem connection and to forward the failure location using a reserved byte of the 76-byte tandem connection multiframe in SDH or the TTI of OTH tandem connections.
The monitored path segment is shown in
Upon detection of a TC-SSF, each affected network element activates a temporary tandem connection source function TSn in both directions. The purpose of these temporary source functions is to insert information about the estimated or assumed failure location into the tandem connection.
The temporarily created tandem connection sources in the downstream direction create new AU4 pointers and tandem connection information and therefore mask the TC-SSF alarm towards the subsequent network elements. Moreover, this "renewed" tandem connection contains information about the failure location. Network element N3, for example, receives a TC-SSF on its interface connected to N2. It thus assumes that link 2 has failed and includes a corresponding failure report "Link 2 fails" in the tandem connection information. N4 has also detected TC-SSF and thus includes in its renewed tandem connection the information "Link 3 fails".
In the reverse direction, the tandem connection is not affected by the failure. However, in order to inform the upstream nodes of the failure, the existing tandem connection is overwritten with a renewed one by the upstream temporary tandem connection sources TS3 u and TS4 u. TS3 u reports "Reverse Link 2 fails" and TS4 u reports "Reverse Link 3 fails".
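The two reports emitted per affected node can be summarized in a tiny sketch (illustrative; the report strings follow the wording of the example above):

```python
def temporary_source_reports(failed_link: str) -> dict[str, str]:
    """Reports emitted by a node's temporary TC sources after it detected
    TC-SSF on its upstream link `failed_link`."""
    return {
        "downstream": f"{failed_link} fails",        # masks TC-SSF ahead
        "upstream": f"Reverse {failed_link} fails",  # informs nodes behind
    }
```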
In principle, any available byte of the 76-byte TC multiframe can be used for the failure report. However, we propose to use the so-called TTI field, i.e., the trail trace identifier field, for this purpose. Analogously, the TTI of the OTH tandem connection can be used.
As explained above, the situation in
In other words, if a link is interrupted, the downstream path transports an AU-AIS signal, which produces the TC-SSF for the TC monitors. In the first instant, all downstream TC monitors detect a TC-SSF alarm. All nodes that detect this TC-SSF alarm create temporary TC sources sending new tandem connection information in up- and downstream direction. The TC sources send an identifier of the putative failing link. As soon as a TC source is created, the AU-AIS signal is replaced again with a valid signal. Thus the TC-SSF at all downstream TC monitors disappears and the nodes remove their TC sources. Only the TC-SSF at the TC monitor next to the failing link does not disappear and this node maintains its TC sources. When this transient phase is completed, the border nodes submit an alarm report that contains the location of the faulty link. Several TC monitors and the TC sinks may detect TC-TIM (tandem connection trail trace identifier mismatch, i.e., a wrong TC-TTI is received), but this alarm shall be suppressed and shall not lead to consequent actions like AIS generation; the received TC-TTI contains the fault location.
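This transient phase can be modeled with a small simulation (a sketch under simplifying assumptions: nodes are numbered 1..N along the tandem connection, link i connects nodes i and i+1, and an active TC source masks TC-SSF for every node behind it):

```python
def settle(num_nodes: int, failed_link: int) -> dict[int, str]:
    """Steady state of the transient phase: which nodes keep a temporary
    TC source and what they report. Nodes are 1..num_nodes; link i
    connects node i and node i+1."""
    # Initially every node downstream of the failure detects TC-SSF
    # and creates a temporary TC source.
    active = set(range(failed_link + 1, num_nodes + 1))
    while True:
        # A node keeps seeing TC-SSF only if no active source lies
        # between it and the failed link; otherwise it withdraws its source.
        still_failed = {n for n in active
                        if not any(m in active for m in range(failed_link + 1, n))}
        if still_failed == active:
            break
        active = still_failed
    return {n: f"Link {failed_link} fails" for n in active}
```

With four nodes and a failure on link 2, only node 3 (the node next to the failing link) keeps its source, matching the example above.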
As explained above, the management plane can be either a central network management system or a control plane distributed across the network. The latter case is also referred to as an automatically switched transmission network, e.g., an ASON (automatically switched optical network).
Advantageous improvements of the invention in the case of automatically switched networks provide that the network elements along the path, which are informed of the failure by the received TC-TTI field, update the link status (i.e., link failed) in their routing databases. This has the advantage that the link state information can now be disseminated from several network elements across the network more or less simultaneously, which makes the process much more efficient and reduces the overall convergence time. Another advantageous improvement of the invention is to start rerouting from the node close to the failing link rather than from the border node. The network node closest to the failure determines an alternative route through the network and instructs the affected network elements to set up the corresponding bypass connection. It should be noted that the bypass does not necessarily have to include the network element that has determined it. It should be understood that, in principle, any node along the failed path which immediately knows the failure location by means of this invention is capable of finding an alternative route for the affected connection.
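A sketch of this local routing-database update (illustrative; the `FAIL:` TTI encoding and the dictionary link-state table are assumptions made for the example):

```python
def apply_tti_fault(routing_db: dict[str, str], tti: str) -> dict[str, str]:
    """Mark the link named in a TTI fault report as 'down' in the local
    link-state table; ordinary trail trace values leave the table unchanged."""
    if tti.startswith("FAIL:"):
        routing_db[tti[len("FAIL:"):]] = "down"
    return routing_db
```

Each node along the path applies this update locally as soon as the renewed TTI arrives, instead of waiting for routing-protocol flooding from the failure-adjacent nodes.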
The advantages of the invention will now be explained in more detail in a second embodiment shown in
Each controller stores the transport plane topology of its entire domain together with link state information in a routing database. Hence, each network element NE is in principle capable of calculating a valid route at any time from a given source to a given destination, provided that its routing database is up-to-date. It is therefore necessary that, in the case of a failure, the routing database in each GMRE in the transport plane of a GMPLS/ASON network is updated rapidly.
As already explained above, routing database updates are normally done by means of routing protocols which are running in the control plane of the network. These protocols are responsible for propagating routing database changes throughout the entire network. In the case of a failure, it therefore takes some time to propagate the updated link state information through the network. Moreover, this update process is not deterministic, and the propagation is only initiated by those network elements that detect the failure, i.e., by the network elements adjacent to the failure. Rapid routing database updates are particularly important for those GMREs that have to perform restoration actions, which are typically those network elements located at the domain boundaries (border nodes) of the affected connections.
The use of Tandem Connections in accordance with the present invention, together with non-intrusive intermediate Tandem Connection monitors and temporary Tandem Connection sources along a transmission path, makes it possible to communicate failures efficiently to all network elements along the affected paths. These intermediate network elements will then update their routing databases accordingly and disseminate the information to other network elements.
In the example shown in
In other words, the TC monitors and temporary TC sources along both affected connections send an alarm notification together with the identifier of the failed link to the local GMREs. All GMREs along all affected connections are thus notified more or less simultaneously including those GMREs on the border of the domain that may have to perform restoration. All these GMREs update their routing databases immediately by putting the failed link in the ‘down’ state. The failed link is now excluded for new connection set-up and re-routing. The non-affected nodes 91-96 are informed of the failure by conventional routing protocol mechanisms.
A particular advantage of the invention is that it allows failure-diverse re-routing of failed path signals by network elements close to the failure, without additional intervention at the control or management plane to determine the failure point. NE 83, for example, can determine and establish a new route for path P1 leading via NE 94 to NE 85. Similarly, NE 84 can determine a new route for path P2 leading from NE87 via NE92 to NE83. In these cases, the initiating network element is not involved in the bypass connection, but only triggers connection set-up.
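Failure-diverse rerouting reduces to a path search that excludes links marked 'down'. The sketch below uses a breadth-first search over an illustrative topology (node and link names are assumptions, not the topology of the patent's figures):

```python
from collections import deque

def reroute(adj: dict, link_state: dict, src: str, dst: str):
    """Shortest path (in hops) from src to dst using only links whose
    state is 'up'; returns None if no failure-diverse route exists.
    adj maps a node to a list of (neighbor, link_id) pairs."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:  # walk predecessors back to src
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbor, link in adj.get(node, []):
            if link_state.get(link, "up") == "up" and neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)
    return None
```

Marking a single link down in the link-state table is enough to divert the computed route around the failure.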
While two preferred embodiments of the invention have been described, those skilled in the art will appreciate that various changes, alterations, and substitutions can be made without departing from the spirit and concepts of the present invention.
|U.S. Classification||370/217, 370/242|
|International Classification||H04L12/28, H04Q11/04|
|Cooperative Classification||H04L41/0659, H04L41/0677, H04Q11/0478, H04J2203/006|
|European Classification||H04L41/06D, H04L41/06C1, H04Q11/04S2|
|Oct 16, 2003||AS||Assignment|
Owner name: ALCATEL, FRANCE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ANSORGE, STEFAN;BELLER, DIETER;HUCK, MARTIN;REEL/FRAME:014617/0976
Effective date: 20030210
|Aug 12, 2011||FPAY||Fee payment|
Year of fee payment: 4
|Aug 4, 2015||FPAY||Fee payment|
Year of fee payment: 8