| Field | Value |
| --- | --- |
| Publication number | US7027396 B1 |
| Application number | US 10/076,704 |
| Publication date | Apr 11, 2006 |
| Filing date | Feb 13, 2002 |
| Priority date | Feb 13, 2002 |
| Also published as | US7626936, US8284673, US8811185, US20100110902, US20130279352 |
| Inventors | Joseph Golan, Joseph Thomas O'Neil |
| Original Assignee | AT&T Corp. |
The present invention relates to capacity planning for backbone networks that support virtual private networks. Specifically, it defines a system and method to compute traffic matrixes for these networks. The matrixes report the number of bytes and packets that are exchanged among provider edge routers and/or service nodes. This data is essential input for capacity planning and traffic engineering tools.
A Virtual Private Network (VPN) provides secure connectivity among distributed customer sites. VPNs can be implemented by using Border Gateway Protocol (BGP) and Multiprotocol Label Switching (MPLS) technologies. The official document on this topic is RFC 2547, BGP/MPLS VPNs, by Rosen and Rekhter, available at http://www.ietf.org, and is incorporated by reference. MPLS and VPN Architectures, by Pepelnjak and Guichard, Cisco Press, 2001, is also a valuable source of information and is also incorporated by reference. It provides a practical guide to understanding, designing, and deploying MPLS and MPLS-enabled VPNs.
A backbone network connects a plurality of customer sites that comprise a plurality of VPNs. Each site has one or more customer edge (CE) routers that connect to one or more provider edge (PE) routers in the backbone network. The PE routers in the backbone may be directly connected. Alternatively, the PE routers may be connected via provider (P) routers. The PE and P routers are located in service nodes that are geographically distributed.
Capacity planning and traffic engineering for backbone networks are required to provide adequate quality-of-service. A variety of software tools in the current art can be used for this purpose. One vendor that provides such tools is the Wide Area Network Design Laboratory. A description of their products is available at http://www.wandl.com. A second vendor is Optimum Network Performance. See http://www.opnet.com for more information about their products. Other vendors also offer products in this area.
These products require input that describes the traffic demands on a backbone network. This data can be provided as a matrix that shows the number of bytes and packets transmitted between PE routers. It is necessary to report this data separately for each type-of-service. A traffic matrix is a three dimensional matrix T[x][y][z] where x is the index of an ingress PE router, y is the index of an egress PE router, and z is the type-of-service (TOS). The values of x and y range from 0 to the number of PE routers−1. The value of z ranges from 0 to the number of types of service−1.
Alternatively, a capacity planning or traffic engineering tool may require a traffic matrix that characterizes the number of bytes and packets transmitted between service nodes. A traffic matrix is a three dimensional matrix T[x][y][z] where x is the index of an ingress service node, y is the index of an egress service node, and z is the type-of-service (TOS). The values of x and y range from 0 to the number of service nodes−1. The value of z ranges from 0 to the number of types of service−1.
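For concreteness, the following is a minimal sketch of this data structure; the three indices follow the definitions above, while the dimension sizes, per-cell layout, and all names are illustrative assumptions.

```python
# Minimal sketch of the traffic matrix T[x][y][z] described above.
# x is the ingress index, y is the egress index, z is the type-of-service.

NUM_NODES = 4   # number of PE routers or service nodes (assumed)
NUM_TOS = 8     # number of types of service (assumed)

def empty_matrix(n, tos):
    """Create an n x n x tos matrix of zeroed byte/packet counters."""
    return [[[{"bytes": 0, "packets": 0} for _ in range(tos)]
             for _ in range(n)]
            for _ in range(n)]

T = empty_matrix(NUM_NODES, NUM_TOS)

# Record a flow of 1500 bytes / 3 packets from ingress 0 to egress 2
# with type-of-service 1:
T[0][2][1]["bytes"] += 1500
T[0][2][1]["packets"] += 3
```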
A variety of protocols are used to route packets in the backbone network. These protocols are defined in specifications at http://www.ietf.org. For example, the Open Shortest Path First (OSPF) protocol is used to route within an autonomous system as described in RFC 2328, OSPF Version 2, by J. Moy. The Border Gateway Protocol is used to route among autonomous systems as described in RFC 1771, A Border Gateway Protocol 4 (BGP-4), by Y. Rekhter and T. Li. The Border Gateway Protocol is also described in RFC 1772, Application of the Border Gateway Protocol in the Internet, by Y. Rekhter and P. Gross. The Multiprotocol Label Switching (MPLS) technology is described in RFC 3031, Multiprotocol Label Switching Architecture, by Rosen et al. Many books describe these protocols as well. For example, Computer Networks, Third Edition, by A. Tanenbaum, Prentice-Hall, 1996, is an excellent reference text. Routing in the Internet, by Christian Huitema, Prentice Hall, 1995, is also valuable. BGP4: Inter-Domain Routing in the Internet, by John Stewart III, Addison-Wesley, 1999, describes BGP-4. See MPLS: Technology and Applications, by Davie and Rekhter, Morgan Kaufmann, 2000, for a discussion of that protocol.
PE routers in the current art can be configured to generate records that provide summary information about packet flows. A flow is a sequence of packets from a source to a destination. A PE router identifies a flow by examining the packets that enter its interfaces. Packets having identical values for source address/port, destination address/port, protocol, type-of-service, and input interface address are considered to be part of the same flow.
Flow records contain multiple items (e.g. source address/port, destination address/port, protocol, type-of-service, input interface address). In addition, a PE router counts the number of bytes and packets that comprise this flow and includes these values in the flow record. Flow records provide raw data about packet flows through a network. A PE router is configured to transmit flow records to a specific address and port. This occurs when the flow completes. It may also occur multiple times during a flow.
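As a hedged sketch of this bookkeeping, assuming each packet is presented as a simple record, flows can be keyed on the fields named above (all field names here are illustrative):

```python
from collections import defaultdict

# Group packets into flows using the fields named above; packets are
# assumed to be dicts with these (illustrative) field names.
FLOW_KEY_FIELDS = ("src_addr", "src_port", "dst_addr", "dst_port",
                   "protocol", "tos", "input_interface")

def aggregate_flows(packets):
    """Return per-flow byte and packet counts keyed by the flow tuple."""
    flows = defaultdict(lambda: {"bytes": 0, "packets": 0})
    for pkt in packets:
        key = tuple(pkt[f] for f in FLOW_KEY_FIELDS)
        flows[key]["bytes"] += pkt["length"]
        flows[key]["packets"] += 1
    return flows
```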
Cisco is a network equipment vendor that provides flow record generation. This feature on their products is called NetFlow. Each Version 5 NetFlow record contains source IP address, destination IP address, source TCP or UDP port, destination TCP or UDP port, next hop router IP address, incoming interface address or index, outgoing interface address or index, packet count, byte count, start of flow timestamp, end of flow timestamp, IP protocol, type-of-service, TCP flags, source autonomous system, destination autonomous system, source subnet, and destination subnet. Other formats are also available. See http://www.cisco.com for a detailed description of this feature.
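The container below mirrors the Version 5 record contents listed above. It is a plain illustrative structure for discussion, not Cisco's binary export format; consult the Cisco documentation for the wire layout.

```python
from dataclasses import dataclass

# Illustrative container for the Version 5 NetFlow fields listed above.
@dataclass
class NetFlowV5Record:
    src_addr: str        # source IP address
    dst_addr: str        # destination IP address
    src_port: int        # source TCP or UDP port
    dst_port: int        # destination TCP or UDP port
    next_hop: str        # next hop router IP address
    input_if: int        # incoming interface address or index
    output_if: int       # outgoing interface address or index
    packets: int         # packet count
    bytes: int           # byte count
    first_seen: int      # start-of-flow timestamp
    last_seen: int       # end-of-flow timestamp
    protocol: int        # IP protocol
    tos: int             # type-of-service
    tcp_flags: int
    src_as: int          # source autonomous system
    dst_as: int          # destination autonomous system
    src_subnet: int      # source subnet
    dst_subnet: int      # destination subnet
```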
It is a difficult task to generate traffic matrixes. Traffic volumes through a backbone network are substantial. Flow records may total several megabytes. Centralized architectures that upload these records to one machine for processing are not satisfactory. The time to upload and process the records is substantial.
Therefore, a need exists for a distributed architecture in which records may be processed in each service node. This will significantly reduce the time to generate a matrix. It will also allow matrixes to be generated more frequently. More frequent generation of matrixes provides a more accurate view of the backbone network traffic.
Limitations of the prior art are overcome and a technical advance is achieved by the present invention. The present invention provides a system and method to generate traffic matrixes for backbone networks that support virtual private networks (VPNs). These backbone networks provide high-speed connectivity among a plurality of customer sites for a plurality of VPNs.
Provider edge (PE) routers are configured to generate records for incoming flows on external interfaces. These are the interfaces connected to customer edge (CE) routers.
Flow records are transmitted to a Flow Record Processor (FRP) in the same service node as the PE routers. The FRPs use these flow records in conjunction with configuration data extracted from the PE routers to compute partial traffic matrixes. Each partial traffic matrix indicates how packet flows are distributed from each of the PE routers that are located in a service node.
The partial traffic matrixes are uploaded to a Matrix Generator (MG) to create total traffic matrixes. A total traffic matrix is calculated by adding the partial traffic matrixes that are received from the FRPs.
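Conceptually, the Matrix Generator performs an element-wise sum. A minimal sketch, reusing the cell layout of the earlier matrix example:

```python
# Accumulate one partial traffic matrix into the total, element-wise.
def add_partial(total, partial):
    for x, row in enumerate(partial):
        for y, cells_by_tos in enumerate(row):
            for z, cell in enumerate(cells_by_tos):
                total[x][y][z]["bytes"] += cell["bytes"]
                total[x][y][z]["packets"] += cell["packets"]

# total = empty_matrix(NUM_NODES, NUM_TOS)
# for partial in partial_matrixes_from_frps:
#     add_partial(total, partial)
```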
A second embodiment of this invention generates traffic matrixes by using sampled flow records. Non-sampled flow records require a PE router to examine every packet that passes through the device. This can impose a significant processing overhead on the device. To minimize this overhead, a sampling technique can be used. Every M-th packet is analyzed and flows are identified from sampled packets.
A third embodiment of this invention automatically determines the measurement interval for which a traffic matrix should be generated.
The above-summarized invention will be more fully understood upon consideration of the following detailed description and the attached drawings wherein:
The backbone network 100 contains provider (P) routers 102, 104, 106, and 108. It also contains provider edge (PE) routers 130, 132, 134, and 136. The function of the P routers is to provide connectivity among the PE routers. The function of the PE routers is to provide access to the backbone network for customer sites.
Each of the routers is interconnected by links. For example, PE router 130 connects to P router 102 via link 122. P router 102 connects to P router 108 via link 120.
Customer edge (CE) routers 146, 148, 150, and 152 are situated at the customer sites 154, 156, 158, and 160, respectively. Each CE router connects to a PE router. For example, CE router 146 connects to PE router 130 via link 138.
More complex topologies are also permitted. A CE router may connect to multiple PE routers. A PE router may connect to multiple CE routers. PE routers may directly connect to each other.
Packet traffic is transported between customer sites for a specific VPN via Multiprotocol Label Switching (MPLS) tunnels. Each MPLS tunnel begins at one PE router and ends at a different PE router. For example, packet traffic is transported from customer site 154 to customer site 160 via an MPLS tunnel that starts on PE router 130 and ends on PE router 136.
Packet traffic is typically transported only between customer sites that are in the same VPN. For example, packets from VPN A are not transported to VPN B. However, more complex topologies are also permitted. Several VPNs may be combined to form an extranet. An extranet permits communication among sites in different VPNs.
Customer VPNs may use overlapping addresses. RFC 1918, Address Allocation for Private Internets, by Rekhter et al. at http://www.ietf.org defines portions of the IPV4 address space that are reserved for private use.
Therefore, routing process 206 must translate each IPV4 route that is received from CE router 146. Data from the Route Distinguisher Table 208 is used to convert an IPV4 route to a VPN-IPV4 route. (The Route Distinguisher Table 208 is described later in this document.) The VPN-IPV4 routes are given to the BGP process 214.
The BGP process 214 assigns an MPLS label to each VPN-IPV4 route. This data is stored in the BGP table 218. The labeled VPN-IPV4 routes 220 are exchanged with a peer BGP process 252 on PE router 136.
P routers do not store any VPN routes. VPN routes are stored only on PE routers. This is important because it provides an architecture that scales to large numbers of VPNs.
The routers in the backbone network execute routing processes to exchange Internet Protocol Version 4 (IPV4) routes. These routes are required to transport packets across the backbone network. For example, PE router 130 executes a routing process 222 that peers with routing process 226 on P router 102.
The routers in the backbone network execute MPLS processes to exchange label bindings across the backbone network. MPLS tunnels start and end at PE routers. For example, packets from CE router 146 can be transported to CE router 152 via an MPLS tunnel across the backbone.
The routers in the backbone network execute forwarding processes to transport labeled packets from a source PE router to a destination PE router. For example, PE router 130 executes a forwarding process 240 that receives unlabeled packets on link 238 from CE router 146. It converts these unlabeled packets to labeled packets that are transmitted to the forwarding process 244 on P router 102. The forwarding process 240 also receives labeled packets on link 242 from P router 102. It converts these labeled packets to unlabeled packets that are transmitted to CE router 146.
The P routers in the backbone have no knowledge of VPN routes. Their only function is to provide connectivity for PE routers. For example, P router 102 executes routing process 226 to exchange IPV4 routes with its neighboring routers. An MPLS process 234 exchanges label bindings with its neighboring routers. A forwarding process 244 uses these label bindings to transport packets through the backbone.
A PE router uses the RD Table to translate IPV4 routes that are received from CE routers to which it is directly connected. Assume that PE router 130 receives an IPV4 routing update from CE router 146 via link 138.
The RD Table on each PE router is configured so the non-unique IPV4 addresses from each VPN are converted to unique VPN-IPV4 addresses. The unique VPN-IPV4 addresses are exchanged among the PE routers by the BGP processes that execute on those routers.
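A minimal sketch of this translation, assuming the RD Table is keyed by the incoming interface. Per RFC 2547, a VPN-IPV4 address is an eight-byte route distinguisher prepended to the four-byte IPV4 address; it is rendered here simply as a prefixed string, and the RD values are illustrative.

```python
# Illustrative RD Table: incoming interface -> route distinguisher.
rd_table = {
    "link138": "100:1",   # interface to CE router 146 (assumed RD value)
    "link140": "100:2",   # interface to another VPN's site (assumed)
}

def to_vpn_ipv4(rd_table, interface, ipv4_prefix):
    """Prepend this interface's route distinguisher to an IPV4 route."""
    return f"{rd_table[interface]}:{ipv4_prefix}"

print(to_vpn_ipv4(rd_table, "link138", "10.1.0.0/16"))
# -> "100:1:10.1.0.0/16", now unique even if VPNs reuse 10.0.0.0/8
```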
Assume that CE router 146 transmits an IPV4 route to PE router 130. The destination address is translated to a VPN-IPV4 address by using the RD Table 208, and the translated route is stored in BGP Table 218. The BGP next hop is the loopback address of PE router 130.
Assume that CE router 152 transmits an IPV4 route to PE router 136. The destination address is translated to a VPN-IPV4 address by using the RD Table 264. The translated route is then stored in BGP Table 256. The BGP next hop is the loopback address of the PE router 136. The BGP process 252 exchanges this information with the BGP process that executes on each PE router. For example, BGP process 252 exchanges labeled VPN-IPV4 routes with BGP process 214.
The BGP next hop address is always the loopback address of the PE router that created the VPN-IPV4 route. This fact is critical for the operation of the present invention. It enables the egress PE router for a flow to be efficiently determined from information on the ingress PE router.
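A sketch of how this lookup might be coded on the flow-record side; the table contents, loopback address, and helper names are illustrative assumptions.

```python
# BGP table on the ingress side: VPN-IPV4 prefix -> BGP next hop,
# which is always the loopback address of the egress PE router.
bgp_table = {
    "100:1:10.2.0.0/16": "192.0.2.36",   # assumed loopback of PE 136
}

# Mapping built from node configuration: loopback address -> PE name.
loopback_to_pe = {
    "192.0.2.36": "PE136",
}

def egress_pe(dest_vpn_route):
    """Resolve a flow's egress PE router from its destination route."""
    return loopback_to_pe[bgp_table[dest_vpn_route]]

print(egress_pe("100:1:10.2.0.0/16"))   # -> "PE136"
```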
The sample record shown in
Various protocols in the current art are used to distribute label assignments among P and PE routers. The Label Distribution Protocol (LDP) is an example of such a protocol. More information about this and other alternatives can be found at http://www.ietf.org.
In some circumstances, several labels may be assigned to a packet. These labels are organized in a stack. A label is pushed onto the top of the stack or popped from the top of the stack. Further discussion of label stacks can also be found at http://www.ietf.org.
P router 102 receives this packet 504. It swaps the top label on the stack. The value LA is replaced by the value LB. The packet 506 is then transmitted to P router 108.
P router 108 receives this packet 506. It swaps the top label on the stack. The value LB is replaced by the value LC. The packet 508 is then transmitted to PE router 136.
PE router 136 receives this packet 508. It pops the top label on the stack. It pops the bottom label on this stack. The bottom label indicates the destination CE router 152 for the packet. The unlabeled packet 510 is then transmitted to CE router 152.
This two level label stack makes it unnecessary to store VPN-IPV4 routes on P routers. This design is essential to achieve a scalable system.
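The label operations along this path can be illustrated with symbolic values; the function names and stack representation are assumptions for exposition.

```python
# Two-level label stack: the top label steers the packet across the
# backbone, the bottom label identifies the destination CE router.
packet = {"stack": ["LA", "CE152_LABEL"], "payload": "IPV4 packet"}

def swap_top(pkt, new_label):
    """P routers swap only the top (transport) label."""
    pkt["stack"][0] = new_label

def pop_top(pkt):
    """The egress PE router pops labels off the stack."""
    return pkt["stack"].pop(0)

swap_top(packet, "LB")       # at P router 102
swap_top(packet, "LC")       # at P router 108
pop_top(packet)              # at PE router 136: discard transport label
bottom = pop_top(packet)     # bottom label selects CE router 152
```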
Each FRP connects to the MG 922. The MG transmits configuration files to the FRPs. It receives partial traffic matrixes from the FRPs. A partial traffic matrix shows how traffic entering the PE routers in one service node is distributed. The MG adds all of the partial matrixes together to generate a total traffic matrix.
This specification uses Extensible Markup Language (XML) to format configuration information. Details pertaining to the use of XML are well known to those skilled in the art. It is to be understood by those skilled in the art that other techniques (e.g. binary data, SNMP data) could also be used without departing from the scope and spirit of the present invention.
An excerpt from a sample file for nodes.xml 1002 is shown in the following listing. The name of the service node is Tokyo. It contains one FRP at address 22.214.171.124. It also contains two PE routers. One is named PE1 with a loopback address of 126.96.36.199. The other is named PE2 with a loopback address of 188.8.131.52.
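The listing is reconstructed below from the values just described; the element and attribute names are assumptions.

```xml
<!-- Reconstructed excerpt of nodes.xml; element/attribute names assumed. -->
<nodes>
  <node name="Tokyo">
    <frp address="22.214.171.124"/>
    <pe name="PE1" loopback="126.96.36.199"/>
    <pe name="PE2" loopback="188.8.131.52"/>
  </node>
</nodes>
```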
The index associated with each PE router is determined by its sequence in the nodes.xml file. The first PE router has an index equal to 0. The last PE router has an index equal to the number of PE routers − 1.
A sample file for schedule.xml 1004 is shown in the following listing. It defines two measurement intervals for which traffic matrixes are generated. The first measurement interval starts every Monday at 12:00 Greenwich Mean Time. The duration of the interval is 15 minutes. The second measurement interval starts on 4 Jul. 2001 at 22:00 Greenwich Mean Time. The duration of the interval is 20 minutes.
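A reconstruction of this listing from the description above; element and attribute names are assumptions.

```xml
<!-- Reconstructed schedule.xml; element/attribute names assumed. -->
<schedule>
  <interval start="Monday 12:00 GMT" duration="15 minutes" repeat="weekly"/>
  <interval start="4 Jul 2001 22:00 GMT" duration="20 minutes"/>
</schedule>
```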
A measurement interval should be long enough so representative data can be collected. However, it should be short enough so storage and processing requirements are not excessive.
The format of the partial traffic matrixes 1006 and total traffic matrixes 1008 are described later in this specification.
Assume that PE router 130 receives an incoming packet flow. It stores important information about the flow (e.g. source address/port, destination address/port, type-of-service, input interface). It also counts the number of bytes and packets that comprise the flow. This data is transmitted from PE router 130 to FRP 908. The Controller 1200 receives these flow records 1204 and extracts relevant data as described in further detail hereinafter to create ingress records 1206. The ingress records 1206 are stored in an ingress file 1208.
It is important to note that PE router 130 may export several flow records for the same flow. This is because a PE router contains a fixed-size buffer in which it records the cumulative byte and packet counts for each active flow. If this buffer is not sufficient to contain the counts for all active flows, the data for the oldest flow is exported as a flow record. This is done to allocate buffer space for the newest flow.
After a measurement interval has completed, the Controller 1200 generates a partial traffic matrix 1216 for that interval. This calculation is done by using: (a) the nodes.xml file 1212 that was retrieved from the MG 922, (b) the schedule.xml file 1214 that was retrieved from the MG 922, (c) the BGP table 1202 that was retrieved from each PE router in the service node, (d) the RD Table 1210 that was retrieved from each PE router in the service node, and (e) the ingress file 1208 that was created from the flow records 1204 that were exported by each PE router in the service node.
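A hedged sketch of this end-of-interval computation; the ingress record fields, index mapping, and helper names are illustrative, and the egress PE name is assumed to have been resolved already via the BGP next hop.

```python
# Build this service node's partial traffic matrix from ingress records.
def partial_matrix(ingress_records, pe_index, num_pe, num_tos):
    T = [[[{"bytes": 0, "packets": 0} for _ in range(num_tos)]
          for _ in range(num_pe)]
         for _ in range(num_pe)]
    for rec in ingress_records:
        x = pe_index[rec["ingress_pe"]]   # order taken from nodes.xml
        y = pe_index[rec["egress_pe"]]    # resolved from the BGP next hop
        z = rec["tos"]                    # assumed to be a 0-based index
        T[x][y][z]["bytes"] += rec["bytes"]
        T[x][y][z]["packets"] += rec["packets"]
    return T
```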
The format of the source address 1306, destination address 1308, and type-of-service 1310 depend on the specific technology that is used to implement the network. For example, Internet Protocol Version 4 (IPV4) uses 32 bits for addressing and provides four bits for a priority field. Internet Protocol Version 6 (IPV6) uses 128 bits for addressing and provides eight bits for a class field. More information regarding these protocols may be found on the IETF website, http://www.ietf.org, which is incorporated by reference.
The byte count 1312 and packet count 1314 indicate the number of bytes and packets that are reported by this flow record, respectively. A PE router may export multiple flow records for a flow. Each of these records reports the number of bytes and packets for its portion of the flow.
The egress PE router name 1316 is initialized to an empty string. This element designates the name of the egress PE router for this flow. Ingress records are processed as described in detail hereinafter. When the egress PE router is identified for a flow, the name of that PE router is included in the ingress record 1206.
A traffic matrix contains one row for each PE router. The index of a PE router is determined by its sequence in the nodes.xml file. The first PE router in that file has an index of zero. The last PE router in that file has an index of R−1 where R is the number of PE routers.
Each FRP generates partial traffic matrixes that represent the distribution of packet flows from its PE routers. In the illustrative backbone network 100, each FRP 908, 914, 928, and 936 generates one row of the traffic matrix. These four partial matrixes are uploaded to the MG 922 and are added together to form a total traffic matrix 1008.
A traffic matrix contains one row for each service node. The index of a service node is determined by its sequence in the nodes.xml file. The first service node in that file has an index of zero. The last service node in that file has an index of N−1 where N is the number of service nodes.
Each FRP generates one row of the total traffic matrix. In the illustrative backbone network 100, each FRP 908, 914, 928, and 936 generates one row of the traffic matrix. These four partial matrixes are uploaded to the MG 922 and are added to form a total traffic matrix 1600.
A second embodiment of this invention uses sampled flow records to calculate a traffic matrix. To generate a flow record for every flow on an interface, a PE router must analyze every incoming packet on that interface. This can impose a significant processing overhead. To minimize this overhead, a sampling technique can be used. Every M-th packet is analyzed and flows are identified from sampled packets. The byte and packet counts reported by the PE router are adjusted accordingly (i.e. multiplied by M). Therefore, no change is required to the FRP or MG software.
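A minimal sketch of the adjustment, assuming 1-in-M sampling; the rate and field names are illustrative.

```python
M = 100   # sampling rate: one packet in M is examined (assumed value)

def scale_sampled(flow_record):
    """Scale sampled counts so downstream FRP/MG code is unchanged."""
    flow_record["bytes"] *= M
    flow_record["packets"] *= M
    return flow_record
```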
The first and second embodiments of this invention use schedule.xml to define the start and duration of measurement intervals. A third embodiment of the present invention automatically selects measurement intervals for which a matrix should be generated. For example, the system can compute a matrix each day at the peak traffic interval for a specific PE router (or set of PE routers). Factors such as the traffic load on a communications link or the CPU load on a PE router can be considered when selecting a measurement interval. Historical data can be used to identify times of peak traffic. Faults reported by network elements may also be used to identify intervals when a traffic matrix should be generated. Other factors of interest to the users may also be used and are within the scope of this invention.
Thresholds can be configured to define relevant parameters. Some examples of thresholds are: total traffic on specific interfaces of specific PE routers, incoming traffic on specific interfaces of specific PE routers, outgoing traffic on specific interfaces of specific PE routers, total traffic at specific service nodes, incoming traffic at specific service nodes, and outgoing traffic at specific service nodes.
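A small sketch of how such thresholds might be evaluated; the targets, directions, and limit values are illustrative assumptions.

```python
# Thresholds keyed by (target, direction); byte limits are assumed.
thresholds = {
    ("PE1", "incoming"): 10_000_000,
    ("Tokyo", "total"): 50_000_000,
}

def interval_qualifies(observed):
    """observed maps (target, direction) -> bytes seen in an interval."""
    return any(observed.get(key, 0) >= limit
               for key, limit in thresholds.items())

print(interval_qualifies({("PE1", "incoming"): 12_000_000}))  # -> True
```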
A sample schedule.xml for this embodiment is shown in the following listing. It indicates that ingress files must be generated on a continuous basis every 15 minutes.
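A reconstruction of this listing; element and attribute names are assumptions.

```xml
<!-- Reconstructed schedule.xml for the third embodiment. -->
<schedule>
  <interval duration="15 minutes" repeat="continuous"/>
</schedule>
```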
Some of the preceding embodiments may be combined together. For example, the second and third embodiments may be used to provide a system that automatically generates matrixes for peak traffic intervals from sampled flow records.
Numerous other embodiments are also possible. For example, a system can automatically identify the most active PE routers or service nodes during a 24-hour period and generate a partial matrix that characterizes the distribution of traffic entering and/or exiting those service nodes.
A system can use historical information to minimize the time required for uploading data and computing a matrix. Assume that a matrix is to be computed for the 15-minute interval starting at 17:00 GMT. If the system has previously computed several matrixes for this same interval on this same day of the week, it can identify those matrix elements that have the lowest byte and packet counts. In this manner, it can identify specific ingress and egress files that need not be uploaded. The matrix elements can be estimated based on an average of the stored values.
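One possible sketch of this estimation step, assuming stored matrixes are kept as flat byte-count mappings; the threshold and data layout are illustrative assumptions.

```python
# history: list of prior matrixes for the same weekday/interval, each a
# dict mapping (x, y, z) -> byte count. Elements that stayed below the
# threshold in every prior run are estimated instead of uploaded.
def estimate_low_elements(history, threshold=1000):
    estimates = {}
    for key in history[0]:
        values = [m[key] for m in history]
        if max(values) < threshold:
            estimates[key] = sum(values) / len(values)
    return estimates
```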
While the invention has been described with reference to specific embodiments, modifications and variations of the invention may be constructed without departing from the scope of the invention that is defined in the following claims.
| Cited Patent | Filing date | Publication date | Applicant | Title |
| --- | --- | --- | --- | --- |
| US5450408 * | Sep 30, 1991 | Sep 12, 1995 | Hewlett-Packard Company | Method of ascertaining topology features of a network |
| US5539659 * | Feb 18, 1994 | Jul 23, 1996 | Hewlett-Packard Company | Network analysis method |
| US5966509 * | Jul 9, 1997 | Oct 12, 1999 | Fujitsu Limited | Network management device |
| US6115393 * | Jul 21, 1995 | Sep 5, 2000 | Concord Communications, Inc. | Network monitoring |
| US6339595 * | Dec 23, 1997 | Jan 15, 2002 | Cisco Technology, Inc. | Peer-model support for virtual private networks with potentially overlapping addresses |
| US6754699 * | Jul 19, 2001 | Jun 22, 2004 | Speedera Networks, Inc. | Content delivery and global traffic management network system |
| US20020118678 * | Feb 6, 2002 | Aug 29, 2002 | Tsuneo Hamada | Method of and apparatus for calculating transit traffic in individual routers |
| US20020141343 * | Dec 19, 2001 | Oct 3, 2002 | Bays Robert James | Methods, apparatuses and systems facilitating deployment, support and configuration of network routing policies |
| Citing Patent | Filing date | Publication date | Applicant | Title |
| --- | --- | --- | --- | --- |
| US7500196 * | Mar 23, 2006 | Mar 3, 2009 | Alcatel Lucent | Method and system for generating route distinguishers and targets for a virtual private network |
| US7626936 * | Jan 31, 2006 | Dec 1, 2009 | AT&T Intellectual Property II, L.P. | Traffic matrix computation for a backbone network supporting virtual private networks |
| US7636364 * | Oct 31, 2002 | Dec 22, 2009 | Force 10 Networks, Inc. | Redundant router network |
| US7668181 * | Aug 11, 2003 | Feb 23, 2010 | AT&T Intellectual Property II, L.P. | Virtual private network based upon multi-protocol label switching adapted to measure the traffic flowing between single rate zones |
| US7706298 * | Dec 20, 2006 | Apr 27, 2010 | Cisco Technology, Inc. | Route dependency selective route download |
| US7743139 * | Sep 28, 2005 | Jun 22, 2010 | AT&T Intellectual Property II, L.P. | Method of provisioning a packet network for handling incoming traffic demands |
| US7792057 | Dec 21, 2007 | Sep 7, 2010 | AT&T Labs, Inc. | Method and system for computing multicast traffic matrices |
| US7978708 * | Dec 29, 2004 | Jul 12, 2011 | Cisco Technology, Inc. | Automatic route tagging of BGP next-hop routes in IGP |
| US8064467 * | Nov 30, 2006 | Nov 22, 2011 | Level 3 Communications, LLC | Systems and methods for network routing in a multiple backbone network architecture |
| US8259713 | Feb 6, 2009 | Sep 4, 2012 | Level 3 Communications, LLC | Systems and methods for network routing in a multiple backbone network architecture |
| US8467394 | May 31, 2011 | Jun 18, 2013 | Cisco Technology, Inc. | Automatic route tagging of BGP next-hop routes in IGP |
| US8526325 * | Jan 31, 2007 | Sep 3, 2013 | Hewlett-Packard Development Company, L.P. | Detecting and identifying connectivity in a network |
| US8526446 | Feb 3, 2006 | Sep 3, 2013 | Level 3 Communications, LLC | Ethernet-based systems and methods for improved network routing |
| US8656050 * | Sep 24, 2002 | Feb 18, 2014 | Alcatel Lucent | Methods and systems for efficiently configuring IP-based, virtual private networks |
| US8811185 | Aug 24, 2012 | Aug 19, 2014 | AT&T Intellectual Property II, L.P. | Traffic matrix computation for a backbone network supporting virtual private networks |
| US8923162 * | Feb 7, 2011 | Dec 30, 2014 | Orange | Management of private virtual networks |
| US8953461 * | Jun 17, 2011 | Feb 10, 2015 | Huawei Technologies Co., Ltd. | Method, device, and system for processing border gateway protocol route |
| US8995451 | Aug 31, 2012 | Mar 31, 2015 | Level 3 Communications, LLC | Systems and methods for network routing in a multiple backbone network architecture |
| US9106510 * | Jan 29, 2013 | Aug 11, 2015 | Cisco Technology, Inc. | Distributed demand matrix computations |
| US9237075 | Feb 4, 2013 | Jan 12, 2016 | Cisco Technology, Inc. | Route convergence monitoring and diagnostics |
| US20040059831 * | Sep 24, 2002 | Mar 25, 2004 | Chu Thomas P. | Methods and systems for efficiently configuring IP-based, virtual private networks |
| US20040076165 * | Aug 11, 2003 | Apr 22, 2004 | Le Pennec Jean-Francois | Virtual private network based upon multi-protocol label switching adapted to measure the traffic flowing between single rate zones |
| US20040085965 * | Oct 31, 2002 | May 6, 2004 | Shivi Fotedar | Redundant router network |
| US20040255028 * | May 30, 2003 | Dec 16, 2004 | Lucent Technologies Inc. | Functional decomposition of a router to support virtual private network (VPN) services |
| US20060140136 * | Dec 29, 2004 | Jun 29, 2006 | Clarence Filsfils | Automatic route tagging of BGP next-hop routes in IGP |
| US20060215672 * | Feb 3, 2006 | Sep 28, 2006 | Level 3 Communications, Inc. | Ethernet-based systems and methods for improved network routing |
| US20070086429 * | Nov 30, 2006 | Apr 19, 2007 | Level 3 Communications, Inc. | Systems and Methods for Network Routing in a Multiple Backbone Network Architecture |
| US20070153699 * | Dec 20, 2006 | Jul 5, 2007 | Rex Fernando | Route dependency selective route download |
| US20070223486 * | Mar 23, 2006 | Sep 27, 2007 | Alcatel | Method and system for generating route distinguishers and targets for a virtual private network |
| US20080151863 * | Oct 31, 2007 | Jun 26, 2008 | Level 3 Communications LLC | System and method for switching traffic through a network |
| US20080181219 * | Jan 31, 2007 | Jul 31, 2008 | Wei Wen Chen | Detecting and identifying connectivity in a network |
| US20090141632 * | Feb 6, 2009 | Jun 4, 2009 | Level 3 Communications, LLC | Systems and methods for network routing in a multiple backbone network architecture |
| US20090161673 * | Dec 21, 2007 | Jun 25, 2009 | Lee Breslau | Method and System For Computing Multicast Traffic Matrices |
| US20110228785 * | | Sep 22, 2011 | Cisco Technology, Inc. | Automatic route tagging of BGP next-hop routes in IGP |
| US20110242991 * | | Oct 6, 2011 | Lixin Zhang | Method, device, and system for processing border gateway protocol route |
| US20120314618 * | Feb 7, 2011 | Dec 13, 2012 | France Telecom | Management of Private Virtual Networks |
| US20130265905 * | Jan 29, 2013 | Oct 10, 2013 | Cisco Technology, Inc. | Distributed demand matrix computations |
| WO2012070070A1 * | Nov 24, 2011 | May 31, 2012 | Guavus Network Systems Pvt. Ltd. | System and method for inferring invisible traffic |
| WO2013155021A2 * | Apr 8, 2013 | Oct 17, 2013 | Cisco Technology, Inc. | Distributed demand matrix computations |
| Classification | Codes |
| --- | --- |
| Cooperative Classification | H04L45/50, H04L41/0803, H04L12/4641, H04L45/04, H04L47/11, H04L43/062 |
| European Classification | H04L45/50, H04L41/08A, H04L47/11, H04L45/04, H04L12/46V |
| Date | Code | Event | Details |
| --- | --- | --- | --- |
| Feb 13, 2002 | AS | Assignment | Owner name: AT&T CORP., NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: GOLAN, JOSEPH; O'NEIL, JOSEPH THOMAS; REEL/FRAME: 012617/0640. Effective date: 20020211 |
| Sep 22, 2009 | FPAY | Fee payment | Year of fee payment: 4 |
| Nov 22, 2013 | REMI | Maintenance fee reminder mailed | |
| Apr 11, 2014 | LAPS | Lapse for failure to pay maintenance fees | |
| Jun 3, 2014 | FP | Expired due to failure to pay maintenance fee | Effective date: 20140411 |