|Publication number||US7046630 B2|
|Application number||US 10/286,464|
|Publication date||May 16, 2006|
|Filing date||Nov 1, 2002|
|Priority date||Mar 8, 1996|
|Also published as||US6108304, US6512745, US20030091049|
|Inventors||Hajime Abe, Kazuho Miki, Noboru Endo, Akihiko Takase, Yoshito Sakurai|
|Original Assignee||Hitachi, Ltd.|
|Patent Citations (39), Non-Patent Citations (17), Referenced by (125), Classifications (54), Legal Events (3)|
This application is a continuation of U.S. patent application Ser. No. 09/410,562, filed Oct. 9, 1999, now U.S. Pat. No. 6,512,745, which is a continuation of U.S. patent application Ser. No. 09/093,265, filed Jun. 8, 1998, now U.S. Pat. No. 6,108,304, which in turn claims priority as a continuation-in-part of U.S. patent application Ser. No. 08/810,733, filed Mar. 4, 1997, now U.S. Pat. No. 6,002,668, and of U.S. patent application Ser. No. 08/998,382, filed Dec. 24, 1997, now U.S. Pat. No. 6,304,555.
1. Field of the Invention
This invention relates to a network, a packet switching network, a packet switching system, and network management equipment which efficiently process a large amount of connectionless data traffic using a connection-oriented network such as an ATM network.
2. Description of Related Art
Recently, as the Internet has rapidly evolved, networks and switching systems which efficiently process a large amount of connectionless data traffic with the use of a connection-oriented network, such as an ATM network, have been offered. ‘Connectionless’ means that data is sent without first setting up a connection to the destination, while ‘connection-oriented’ means that data is sent only after a connection to the destination has been set up.
For example, the MPOA protocol architecture is described on page 121 of “ATM Internetworking” (Nikkei BP Publishing Center Inc.; first edition, Sep. 22, 1995). MPOA is an abbreviation for Multi-Protocol Over ATM. When communicating via MPOA, an ATM address, generated at the MPOA server by converting the layer-3 destination address (for example, the destination IP (Internet Protocol) address), is obtained, and then an ATM connection is set up using the ATM signaling protocol. Note that the ATM connection used in this protocol is an SVC (Switched Virtual Connection), which is set up on a request basis when there is data traffic to be sent. The signaling protocol for an SVC is described, for example, in “ATM Forum UNI version 3.1” (Prentice-Hall, Inc.; 1995).
Another communication protocol is RSVP (Resource Reservation Protocol), described in “RSVP: A New Resource ReSerVation Protocol” (September 1993 issue of IEEE Network). RSVP requires that the receiver sequentially reserve communication bandwidth, a router, a buffer, and other resources for a data path between the sender and the receiver. Only after the resources have been reserved is data sent.
A typical connection-oriented communication is a telephone. This communication requires real-time software processing, called call admission control, and resource reservation. Once the resources are reserved, the communication bandwidth, usually the bidirectional bandwidth, is guaranteed. In this communication mode, because the resources are not released even when there is no traffic, the resource usage efficiency is low.
On the other hand, in connectionless communication which is used primarily for LANs, the resources are reserved for each burst of data. This communication is suited for sending a large amount of data instantaneously in one direction only. However, because the communication bandwidth is not always guaranteed in this communication, resource contention occurs as the whole resource usage ratio becomes high. In addition, because data which could not be sent because of insufficient resources must be resent, the resources become more insufficient and, as a result, congestion may result.
ATM was introduced to solve these two problems. ATM contributes to the efficient use of resources. However, ATM still has the two problems described above. That is, ATM still requires complex call admission control and, in addition, results in congestion when the resources become insufficient.
Ideally, all communications should be done via ATM to take full advantage of ATM. However, telephones, LANs, and WANs (Wide Area Networks) are used in real-time communications and, therefore, shifting all communication facilities to ATM is not easy. Because more and more traffic is expected over these networks in the future, ATM networks must co-exist with conventional data communication networks.
As the term LAN implies, emphasis has been placed on local communication in the conventional data communication. Recently, however, the need for global communication, such as the Internet, has arisen. In such global communication, an error at a single site in the connectionless communication mode may cause other sites to resend data, one after another, and may cause immediate congestion around the world. This requires a large network to manage resources (such as bandwidth allocation) and to manage a large amount of resources hierarchically.
The above description deals primarily with the problems of the “quantity” and “scale” of data communication. We must also consider problems of “quality.” As communication finds its way into our lives, a need has arisen for a variety of services using the telephone network, including automatic message transfer, sender's number indication, collect calls, and teleconferencing. To meet these needs, intelligent networks have been built into the telephone network for efficient control signal communication. It is expected that the same needs will also arise for data communication. In data communication networks, intelligent networks may be used as in telephone networks, or a virtual network may be built logically in an ATM network to take full advantage of its characteristics. However, conventional LAN-oriented data communication networks are not fully compatible with ATM networks, meaning that in a large data communication network various operations must be performed: for example, the user must keep track of data traffic, control communication bandwidths dynamically, or provide additional information on services. Also included in the quality features are the network error isolation function and the congestion prevention function.
The following describes in more detail the problems this invention will try to solve.
When communicating via MPOA, a request-based ATM connection is set up in the SVC mode when there is data traffic to be sent. Therefore, the data transfer delay is increased by the time needed to set up the ATM connection. In the worst case, the ATM connection setup time may be longer than the data transfer time itself. In addition, when many users generate data and set up request-based connections, many control packets for connection setup and disconnection are transferred before and after the actual data transfer. This may result in network congestion.
On the other hand, when communicating via RSVP, the data transfer delay and the delay variation become large because the resources must be reserved before data is sent. In addition, the need to hold the resources such as bandwidth requires the sender to send a refresh packet at a regular interval for holding the resources. Therefore, when there are many users who generate data, the communication of control packets necessary for resource reservation uses a lot of bandwidth, making network management more complex.
This invention seeks to solve the following problems.
It is a first object of this invention to provide a packet switching network, a packet switching device, and network management equipment which eliminate the need to set up connections, thereby reducing the delay and delay variation involved in data transfer as well as the number of control packets needed for connection setup and resource reservation.

It is a second object of this invention to provide a packet switching network, network management equipment, and a packet switching device which increase the efficiency of connectionless data flow in a large data network.

It is a third object of this invention to provide a packet switching network, network management equipment, and a packet switching device which are not vulnerable to a physical layer error (transmission path disconnection, and so on) or a logical path error (VC (Virtual Circuit) or VP (Virtual Path) disconnection).

It is a fourth object of this invention to provide a packet switching network, network management equipment, and a packet switching device which avoid non-instantaneous (for example, several seconds), local (for example, in a specific node) congestion caused by a continuous large amount of data called a burst of data.
A network according to this invention is composed of a connection-oriented core network and a plurality of connectionless access networks with a plurality of connections (which are called permanent virtual routes (PVR) in the following description) created among a plurality of edge nodes. Upon receiving a connectionless data flow from one of the access networks, the network management equipment selects one route from the plurality of PVRs and transfers data over that PVR. As the route selection criterion, the network management equipment uses the status of each PVR, for example, an available bandwidth of each PVR.
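The route-selection step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the table layout, field names, and bandwidth values are assumptions made for the example.

```python
# Illustrative sketch of PVR selection at an edge node: each pre-established
# route (PVR) carries a status record, and the route with the largest
# available bandwidth is chosen for an incoming connectionless data flow.
# Field names and numbers are illustrative assumptions.

def select_pvr(pvrs):
    """Return the PVR-ID with the largest available bandwidth."""
    return max(pvrs, key=lambda pvr_id: pvrs[pvr_id]["available_mbps"])

# Two PVRs toward the same destination subnet (values assumed):
pvrs_to_subnet_c = {
    "R2": {"available_mbps": 80, "vpi_vci": (11, 12), "port": 2},
    "R6": {"available_mbps": 45, "vpi_vci": (10, 17), "port": 1},
}

best = select_pvr(pvrs_to_subnet_c)  # selects "R2" here
```

Because the PVRs already exist, this selection replaces per-flow connection setup; only a table lookup is performed when data arrives.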
To check and control the available bandwidth, either the network management equipment keeps track of the traffic of each node, or each edge node uses RM (Resource Management) packets to control the flow.
A plurality of connections are set up in advance and, when congestion or an error is detected, the connection is switched from the main system to a subsystem.
The access network interface in each edge node keeps (performs shaping on) the data flow transmission rate within a predetermined bandwidth for each PVR and sends data over a logical route with a granted bandwidth.
In addition, a plurality of access links are set up between an access network and the core network using a multi-link procedure to divide the amount of traffic to be sent to the core network.
The preferred embodiments of this invention are described with reference to the drawings.
The core network 100 is a connection-oriented network such as an ATM network. In
A routing protocol within an access network, such as IP (Internet Protocol), is terminated at an edge node. Within the core network 100, a connection-oriented protocol such as ATM (Asynchronous Transfer Mode) or FR (Frame Relay) is used.
Assume that the IP addresses “18.104.22.168”, “22.214.171.124”, “126.96.36.199”, and “188.8.131.52” are assigned to edge nodes EA, EB, EC, and ED, respectively.
In the communication between two edge nodes, data in the form of ATM cells is switched and transferred along a PVR. Between edge node EA and edge node EC, two PVRs, R2 and R6, which run along two different data links, are defined in advance.
The transmit and receive module 408, connected to the nodes, transfers status data among edge nodes and relay nodes. The data writing module 404 and the data analyzing module 405 are connected to the network management data storage device 401; the former records status data and the latter analyzes it.
The dynamically changing bandwidth refers to the bandwidth currently used by each node for data transfer. This bandwidth is measured at each node (edge node and relay node), for example, for each connection. The available bandwidth for each route refers to the bandwidth still available on each transfer route for additional use. The network management equipment 200 calculates this available bandwidth for each node based on the dynamically changing bandwidth that was measured. The assigned bandwidth for each route-between-route is the bandwidth assigned to each route between access networks (that is, between edge nodes). The network management equipment determines this assigned bandwidth so that it does not exceed the available bandwidth for each route.
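The relationship between the three bandwidth quantities can be sketched as follows. The link capacity, the per-node usage figures, and the bottleneck rule (available bandwidth = capacity minus the largest measured usage along the route) are all illustrative assumptions, not values or formulas stated in the patent.

```python
# Sketch of the bandwidth bookkeeping described above: measured (dynamically
# changing) bandwidth per node, available bandwidth derived from it, and an
# assigned bandwidth kept within the available bandwidth. All numbers and
# the bottleneck rule are illustrative assumptions.

LINK_CAPACITY_MBPS = 150

measured_mbps = {"EA": 40, "N1": 70, "EC": 55}  # per-node measured usage

def available_bandwidth(route_nodes):
    """Available bandwidth of a route: capacity minus the largest measured
    usage along the route (the bottleneck node)."""
    return LINK_CAPACITY_MBPS - max(measured_mbps[n] for n in route_nodes)

avail = available_bandwidth(["EA", "N1", "EC"])  # 150 - 70 = 80

def assign_bandwidth(requested, available):
    """Grant at most the available bandwidth for a route-between-route."""
    return min(requested, available)

granted = assign_bandwidth(100, avail)  # request of 100 is capped at 80
```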
Each edge node and relay node shown in
The ATM switch uses the common buffer switch technology “Switching System” disclosed, for example, in Japanese Patent Laid-Open Publication (KOKAI) No. Hei 4-276943, U.S. patent application Ser. No. 08/306978, and EP Patent No. 502436.
The ATM handler shown in
The line interface on the access network side 10 in
The following describes the steps in the bandwidth information processing flowchart (
A relay node is also capable of measuring the bandwidth and transmitting the bandwidth information. In addition, as in the above-described edge node, a relay node may store the bandwidth information sent from the network management equipment and may adjust the transmission capacity. These functions, if provided in the relay node, give an appropriate data transfer bandwidth to a route-between-route in the core network. Of course, even when this function is not provided in the relay node, the transmission capacity adjustment function provided on an edge node on the input side adjusts the bandwidth at an appropriate level.
The following explains the operation of the network and the edge node according to this invention. In the description, data originated in subnet #A is sent to subnet #C via border router RA, edge node EA, and edge node EC.
A connectionless data flow, originated within subnet #A for transmission to subnet #C, reaches edge node EA via border router RA. Edge node EA checks the destination address and finds that the destination subnet of this data flow is subnet #C. Then, from the two PVRs, R2 and R6, between edge node EA and subnet #C, edge node EA selects R2, which has the largest bandwidth. The heap sort method, described, for example, on page 239 of “Data Structure and Algorithm” (Baifukan Co., Ltd.; March 1987), is used for fast retrieval of the largest-bandwidth route.
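The heap-based retrieval cited above can be sketched with Python's `heapq` module. Since `heapq` implements a min-heap, bandwidths are negated so the maximum-bandwidth route sits at the top; the route names and bandwidth figures are illustrative assumptions.

```python
import heapq

# Sketch of fast largest-bandwidth route retrieval using a heap, as the
# text suggests. heapq is a min-heap, so bandwidths are stored negated;
# route names and numbers are illustrative assumptions.

routes = [("R2", 80), ("R6", 45)]
heap = [(-bw, pvr_id) for pvr_id, bw in routes]
heapq.heapify(heap)  # O(n) build; each pop/update is O(log n)

neg_bw, best_pvr = heap[0]  # peek the maximum-bandwidth route in O(1)
```

Keeping the candidate PVRs in a heap means the best route is found without scanning the whole table each time a flow arrives, which matters when many PVRs are registered.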
In this example, sending edge node EA selects a PVR. The network management equipment may ask edge node EA to select one of the PVRs.
Next, a pair of VPI/VCI=11/12 and port INF=2 corresponding to the PVR-ID of R2 is selected. The ATM cells generated by converting the connectionless data flow are then sent with VPI/VCI=11/12 in the header.
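The mapping from the selected PVR-ID to the cell header fields can be sketched as follows. The VPI/VCI and port values follow the text; the table layout and the cell representation are illustrative assumptions.

```python
# Sketch of stamping ATM cells with the header of the selected PVR: the
# PVR-ID indexes a table holding the VPI/VCI pair and output port, and each
# payload chunk of the converted connectionless data flow carries that
# VPI/VCI. Table layout and cell representation are illustrative.

pvr_header = {
    "R2": {"vpi": 11, "vci": 12, "port": 2},
    "R6": {"vpi": 10, "vci": 17, "port": 1},
}

def stamp_cells(pvr_id, payloads):
    """Attach the PVR's VPI/VCI header to each payload chunk."""
    hdr = pvr_header[pvr_id]
    return [{"vpi": hdr["vpi"], "vci": hdr["vci"], "payload": p}
            for p in payloads]

cells = stamp_cells("R2", ["chunk0", "chunk1"])  # all cells carry VPI/VCI = 11/12
```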
Also, the cells sent over PVR R2 are assembled into a packet at edge node EC for transfer to subnet #C.
As mentioned above, the network in this embodiment has the PVRs (Permanent Virtual Route) registered in advance in the core network 100, converts the destination IP address to an ATM address when data is transferred, and selects a PVR (permanent virtual route) corresponding to the ATM address for transfer of the IP packet, thus eliminating the connection setup delay time and decreasing the delay and the delay variation involved in the data transfer. At the same time, the number of times the control packets are sent for setting up connections and reserving the resources is reduced.
In addition, within the core network 100, hop-by-hop routing by the processor (that is, the processor interpreting the destination IP address and selecting the output port of an IP packet the way a router distributes IP packets) is not performed. Instead, data is stored in ATM cells and switched by hardware. This reduces the data transfer delay in the core network.
Selecting the largest-bandwidth route from a plurality of PVRs (permanent virtual routes) previously registered with the core network 100 increases the connectionless data flow efficiency in a large data network.
Next, the operation that is performed when there is a change in the core network status is described.
Assume that, while IP data is transferred with PVR R2 selected as shown in
This means that, while IP data is transferred, the network management equipment 200 and the nodes work together to change the status (bandwidth in this example) of the core network.
A connectionless data flow, originated within subnet #A for transmission to subnet #C, reaches edge node EA via border router RA. Edge node EA checks the destination address and finds that the destination subnet of this data flow is subnet #C. Then, from the two PVRs, R2 and R6, between edge node EA and subnet #C, edge node EA selects R6, which now has the largest bandwidth.
A pair of VPI/VCI=10/17 and port INF=1 corresponding to the PVR-ID of the selected R6 is obtained. The ATM cells generated by converting the connectionless data flow are then sent with VPI/VCI=10/17 in the header.
When the old route is changed to the new route as the status changes from that shown in
The ATM cells sent over two different PVRs are assembled into an IP packet at the line interface on the access network side 10 at edge node EC for transfer to subnet #C.
As described above, the network in this embodiment checks the bandwidths of the PVRs in the core network 100 at regular intervals and distributes IP packets to an appropriate PVR according to the bandwidth status at that time, further increasing the efficiency of connectionless data flow transfer in a large data network.
In addition, control packets for connection setup and resource reservation are sent in this embodiment only when the PVRs are set up and when the PVR bandwidth information is updated. A connection need not be set up each time a request for data transfer between two access networks is generated. The number of control packets transferred in the core network is therefore reduced in this embodiment.
As described above, the network in this embodiment performs shaping for each PVR according to the assigned bandwidth in order to allocate a connectionless data flow, sent from each subnet, to a bandwidth granted PVR. Thus, non-instantaneous, local network congestion generated by a burst of data can be avoided.
Although the available bandwidth is used in this embodiment to select a PVR at an edge node, other information may also be used as the route selection criterion. For example, the buffer status of each edge node may be used. A route may also be selected according to the time of day or at random.
Next, the following explains the operation that is performed when congestion or an error occurs in the core network:
When congestion or an error is detected on R2 as shown in
The OAM (Operation and Maintenance) function in ATM detects a transmission path error. The network management equipment 200 detects congestion, for example, when it receives congestion information from a node which detects the congestion. The network management equipment 200 then tells the nodes to switch from the congested PVR to the subsystem PVR.
For example, an edge node checks the amount of data in the logical queue provided for each PVR in the common buffer at regular intervals and, when the amount exceeds a predetermined value, determines that congestion has occurred on that PVR.
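The periodic queue check described above can be sketched as a simple threshold test. The threshold value and the queue lengths are illustrative assumptions; the patent only states that a predetermined value is compared against the per-PVR logical queue occupancy.

```python
# Sketch of congestion detection at an edge node: the logical queue length
# of each PVR in the common buffer is polled at regular intervals, and a PVR
# whose queue exceeds a predetermined threshold is flagged as congested.
# Threshold and queue lengths are illustrative assumptions.

CONGESTION_THRESHOLD_CELLS = 1000  # assumed predetermined value

def congested_pvrs(queue_lengths):
    """Return the PVR-IDs whose logical queue exceeds the threshold."""
    return [pvr for pvr, qlen in queue_lengths.items()
            if qlen > CONGESTION_THRESHOLD_CELLS]

flagged = congested_pvrs({"R2": 1500, "R6": 200})  # only R2 is flagged
```

A flagged PVR would then be reported so that traffic can be switched to the subsystem PVR, as described in the surrounding text.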
Thus, this embodiment provides a packet switching network which is not vulnerable to a physical layer error (transmission path disconnection, and so on) or a logical path error (VC or VP disconnection).
In the example described above, the network management equipment 200 stores PVR bandwidth information in the routing table 1302 based on the bandwidth information measured at each node, as shown in
Next, the dynamic shaping operation for each PVR is described.
An ATM handler 14 (output side) stores the maximum allowable data rate (ACR: Allowed Cell Rate) in the routing table 1302 based on the explicit bandwidth information contained in a captured backward RM cell. A bandwidth control table 2705 is set up according to the ACR value, and the cell output rate is adjusted for each PVR based on this value. The shaper for each PVR uses the common buffer technology described, for example, in “Switching System”, disclosed in Japanese Patent Laid-Open Publication No. Hei 4-276943. That is, in the common buffer a logical queue is provided for each PVR, and cells are read from the common buffer under control of the bandwidth control table (this corresponds to the data rate adjusting device). The bandwidth control table contains data specifying the logical queue in the common buffer from which a packet is to be read and the time at which that packet is to be read.
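The per-PVR read schedule can be sketched as follows. This is a simplified model, not the patent's hardware: the slot arithmetic (spacing = link rate / ACR), the rates, and the queue contents are illustrative assumptions; only the idea of per-PVR logical queues read at table-controlled times comes from the text.

```python
from collections import deque

# Sketch of reading cells from the common buffer under a bandwidth control
# table: each PVR has a logical queue, and the table fixes the next slot at
# which a cell of that PVR may be read, so each PVR's output rate tracks
# its ACR. Slot spacing = LINK_RATE // acr (all values are assumptions).

LINK_RATE = 100  # cell slots per unit time (assumed)

queues = {"R2": deque(["c1", "c2", "c3"]), "R6": deque(["d1"])}
acr = {"R2": 50, "R6": 25}                # allowed cell rates (assumed)
next_slot = {pvr: 0 for pvr in queues}    # earliest slot each PVR may send

def run(slots):
    """Serve the logical queues for a number of slots; return cell order."""
    out = []
    for t in range(slots):
        for pvr, q in queues.items():
            if q and t >= next_slot[pvr]:
                out.append(q.popleft())
                next_slot[pvr] = t + LINK_RATE // acr[pvr]
                break  # at most one cell leaves the link per slot
    return out

sequence = run(8)  # R2's cells are spaced 2 slots apart, R6's 4 apart
```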
Explicit bandwidth information in a backward RM cell is used, as appropriate, to rewrite the bandwidth control table. This enables dynamic shaping operation to be performed for each PVR.
Based on the setting of the CI (Congestion Indication) bit or the NI (No Increase) bit in a backward RM cell captured by the ATM handler (output) 14, a rate calculation circuit 15 calculates the maximum allowable data rate (ACR: Allowed Cell Rate). Binary mode rate control differs from explicit mode rate control in that the ACR value relatively increases, decreases, or remains unchanged according to the setting of the CI bit and the NI bit. Once the ACR value is set up, the subsequent operation is similar to that of explicit rate control.
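The binary-mode ACR update can be sketched as follows, loosely following the ABR convention: CI=1 decreases ACR multiplicatively, CI=0 with NI=1 leaves it unchanged, and CI=0 with NI=0 increases it additively. The PCR, MCR, RIF, and RDF values are illustrative assumptions, not parameters given in the text.

```python
# Sketch of binary-mode rate control from the CI (Congestion Indication)
# and NI (No Increase) bits of a backward RM cell. PCR/MCR bounds and the
# increase/decrease factors are illustrative assumptions.

PCR, MCR = 1000.0, 10.0    # peak / minimum cell rates (assumed)
RIF, RDF = 1 / 16, 1 / 16  # rate increase / decrease factors (assumed)

def update_acr(acr, ci, ni):
    """Return the new allowed cell rate after one backward RM cell."""
    if ci:
        acr -= acr * RDF            # congestion: multiplicative decrease
    elif not ni:
        acr += RIF * PCR            # no congestion, increase permitted
    return min(PCR, max(MCR, acr))  # clamp to [MCR, PCR]

acr = update_acr(800.0, ci=0, ni=0)  # 800 + 62.5 = 862.5
acr = update_acr(acr, ci=1, ni=0)    # decreased by 1/16 of its value
```

This mirrors the relative behavior described above: unlike explicit mode, the RM cell carries no target rate, only a direction in which the current ACR moves.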
The data rate adjusting device comprises a device for generating RM cells and for inserting RM cells into data.
Dynamically shaping the bandwidth for each PVR according to the status of the core network enables a connectionless data flow from each subnet to be assigned efficiently to a PVR. Thus, this method avoids non-instantaneous, local network congestion generated by a burst of data.
The following explains an example in which a data flow is transferred between two subnets via a plurality of PVRs using a multi-link protocol.
The two PVRs, for example R2 and R3, are assigned to the route from edge node EA to edge node EC and to the route from edge node EB to edge node EC, respectively.
A data flow transferred from border router RA to edge node EA is transferred via PVR R2 to edge node EC and, after being assembled into a packet, transferred to border router RC. A data flow transferred from border router RA to edge node EB is transferred via PVR R3 to edge node EC and, after being assembled into a packet, transferred to border router RC. A sequence of packets originated within subnet #A which are transferred via the two PVRs, R2 and R3, are arranged into the original sequence at border router RC.
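The re-sequencing at border router RC can be sketched as follows. Per-packet sequence numbers and the buffering policy are illustrative assumptions; the text only states that packets carried over the two PVRs are arranged back into their original order.

```python
import heapq

# Sketch of re-sequencing at the receiving border router: packets of one
# flow arrive interleaved over two PVRs, each carrying a sequence number
# (as in a multi-link procedure), and are released in original order as
# gaps fill. Sequence numbering and buffering are illustrative assumptions.

def resequence(arrivals):
    """Buffer out-of-order packets and release them in sequence order."""
    out, buf, expected = [], [], 0
    for seq, payload in arrivals:
        heapq.heappush(buf, (seq, payload))
        while buf and buf[0][0] == expected:
            out.append(heapq.heappop(buf)[1])
            expected += 1
    return out

# Interleaved arrivals, e.g. even sequence numbers via R2, odd via R3:
ordered = resequence([(0, "p0"), (2, "p2"), (1, "p1"), (3, "p3")])
```

Packet "p2" is held until "p1" arrives over the other PVR, after which both are released in order; this is what preserves the original packet sequence despite the traffic split.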
As mentioned, data traffic is divided using a multi-link protocol to avoid non-instantaneous, local congestion in the core network which may occur because of a burst of data. Thus, the traffic load in the core network 100 is well-balanced.
This configuration also has a plurality of links from border router RA to the core network 100. Therefore, even if an error occurs in the data link from border router RA to edge node EA, data traffic may be sent from border router RA to edge node EB. This ensures survivability.
In a preferred mode of this invention, setting up connections in advance eliminates the connection setup time, reduces the delay and delay variations involved in data transfer, and decreases the number of times the control packet for connection setup and resource reservation must be sent.
In a preferred mode of this invention, changing the route, over which data traffic from a connectionless access network is sent, according to the status of the connection-oriented core network enables connectionless data flow processing to be performed effectively in a large data network.
In a preferred mode of this invention, a packet switching network which is not vulnerable to a physical layer error (transmission path disconnection, and so on) or a logical path error (VC or VP disconnection) is provided.
In a preferred mode of this invention, a plurality of data links from an access network to the core network, which are set up using a multi-link procedure, allow the amount of input traffic to the core network to be divided, thus avoiding non-instantaneous, local network congestion which may be caused by a burst of data.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4736363||Oct 11, 1985||Apr 5, 1988||Northern Telecom Limited||Path oriented routing system and method for packet switching networks|
|US4939726||Jul 18, 1989||Jul 3, 1990||Metricom, Inc.||Method for routing packets in a packet communication network|
|US5016243||Nov 6, 1989||May 14, 1991||At&T Bell Laboratories||Automatic fault recovery in a packet network|
|US5042027 *||Sep 8, 1989||Aug 20, 1991||Hitachi, Ltd.||Communication network system and method of controlling a communication network|
|US5241534||Jun 17, 1991||Aug 31, 1993||Fujitsu Limited||Rerouting and change-back systems for asynchronous transfer mode network|
|US5359600||Feb 16, 1993||Oct 25, 1994||Nippon Telegraph And Telephone Corporation||High throughput supervisory system for ATM switching systems transporting STM-N signals|
|US5412376||Feb 17, 1994||May 2, 1995||Fujitsu Limited||Method for structuring communications network based on asynchronous transfer mode|
|US5426636||Dec 20, 1993||Jun 20, 1995||At&T Corp.||ATM distribution networks for narrow band communications|
|US5452293||Jan 27, 1994||Sep 19, 1995||Dsc Communications Corporation||Apparatus and method of transmitting call information prior to establishing a connection path|
|US5485455||Jan 28, 1994||Jan 16, 1996||Cabletron Systems, Inc.||Network having secure fast packet switching and guaranteed quality of service|
|US5519700||Dec 7, 1994||May 21, 1996||At&T Corp.||Telecommunication system with synchronous-asynchronous interface|
|US5526353||Dec 20, 1994||Jun 11, 1996||Henley; Arthur||System and method for communication of audio data over a packet-based network|
|US5572678||Jan 25, 1993||Nov 5, 1996||Hitachi, Ltd.||System for sending frames from sender to receiver using connectionless protocol and receiving acknowledging frame and retransmission request frame from receiver using connection oriented protocol|
|US5663959||Dec 15, 1995||Sep 2, 1997||NEC Corporation||ATM cell switching apparatus having a control cell bypass route|
|US5675576||Jun 5, 1995||Oct 7, 1997||Lucent Technologies Inc.||Congestion control system and method for packet switched networks providing max-min fairness|
|US5764740 *||Aug 11, 1995||Jun 9, 1998||Telefonaktiebolaget Lm Ericsson||System and method for optimal logical network capacity dimensioning with broadband traffic|
|US5835710||Sep 5, 1997||Nov 10, 1998||Kabushiki Kaisha Toshiba||Network interconnection apparatus, network node apparatus, and packet transfer method for high speed, large capacity inter-network communication|
|US5917820||Jun 10, 1996||Jun 29, 1999||Cisco Technology, Inc.||Efficient packet forwarding arrangement for routing packets in an internetwork|
|US5956339||Apr 10, 1997||Sep 21, 1999||Fujitsu Limited||Apparatus for selecting a route in a packet-switched communications network|
|US5963555||Sep 2, 1997||Oct 5, 1999||Hitachi Ltd||Router apparatus using ATM switch|
|US6002674||Jun 27, 1997||Dec 14, 1999||Oki Electric Industry Co., Ltd.||Network control system which uses two timers and updates routing information|
|US6046999||Feb 13, 1998||Apr 4, 2000||Hitachi, Ltd.||Router apparatus using ATM switch|
|US6075787||May 8, 1997||Jun 13, 2000||Lucent Technologies Inc.||Method and apparatus for messaging, signaling, and establishing a data link utilizing multiple modes over a multiple access broadband communications network|
|US6108304 *||Jun 8, 1998||Aug 22, 2000||Abe; Hajime||Packet switching network, packet switching equipment, and network management equipment|
|US6118783||Jun 26, 1997||Sep 12, 2000||Sony Corporation||Exchange apparatus and method of the same|
|US6275494 *||May 15, 1998||Aug 14, 2001||Hitachi, Ltd.||Packet switching system, packet switching network and packet switching method|
|US6314098 *||May 12, 1998||Nov 6, 2001||Nec Corporation||ATM connectionless communication system having session supervising and connection supervising functions|
|US6512745 *||Oct 1, 1999||Jan 28, 2003||Hitachi, Ltd.||Packet switching network, packet switching equipment, and network management equipment|
|EP0502436A2||Feb 28, 1992||Sep 9, 1992||Hitachi, Ltd.||ATM cell switching system|
|EP0597487A2||Nov 12, 1993||May 18, 1994||Nec Corporation||Asynchronous transfer mode communication system|
|JPH04260245A||Title not available|
|JPH04276943A||Title not available|
|JPH06244859A||Title not available|
|JPH08125692A||Title not available|
|JPH09130386A||Title not available|
|JPH09181740A||Title not available|
|JPH09247169A||Title not available|
|WO1996007281A1||Sep 1, 1995||Mar 7, 1996||British Telecommunications Plc||Network management system for communications networks|
|WO1997048214A2||Jun 12, 1997||Dec 18, 1997||British Telecommunications Public Limited Company||Atm network management|
|1||"ABR Flow Control", Traffic Management Specification Version 4.0.|
|2||"ATM Forum UNI version 3.1," PTR Prentice Hall Prentice-Hall, Inc., 1995.|
|3||"Data Structure and Algorithm", Baifukan Co., Ltd., Mar. 1987, p. 239.|
|4||A Dynamically Controllable ATM Transport Network Based on the Virtual Path Concept-NTT Transmission Systems Laboratories, Nov. 28, 1998, pp. 1272-1276.|
|5||Anthony Alles, "ATM Internetworking," Sep. 1995, p. 121.|
|6||ATM VP-Based Broadband Networks for Multimedia Services by Aoyama et al, dated Apr. 31, 1993, IEEE Communications Magazine.|
|7||Burak M: "Connectionless Services In an ATM-LAN Provided By a CL-Server an Implementation and Case Study" proceedings of the global telecommunications conference, US, New York, IEEE dated Nov. 28, 1994.|
|8||Centralized Virtual Path Bandwidth Allocation Scheme for ATM Networks by Logothetis et al., dated Oct. 1992, IEICE Transactions on Communications.|
|9||European Office Action dated Feb. 2, 2005.|
|10||European Search Report dated Feb. 13, 2001.|
|11||European Search Report dated Oct. 21, 2003; EP 98 11 0367.|
|12||J. Murayama et al. "Core Network Design for Large-Scale Internetworking", NTT R&D, vol. 46, No. 3, pp. 223-232 dated Mar. 19, 1997.|
|13||Juha Heinanen, "Multiprotocol Encapsulation Over ATM Adaptation Layer 5," Request for Comments: 1483, Jul. 1993.|
|14||K. Sklower et al., "The PPP Multilink Protocol (MP)," Aug. 1996.|
|15||Lixia Zhang et al., "RSVP: A New Resource ReSerVation Protocol", IEEE Network, Sep. 1993.|
|16||Self-sizing Network Operation Systems in ATM Networks by Nakagawa et al., dated Apr. 15, 1996; NTT Telecommunication Network Lab.|
|17||Y. Kamizuru, "Routing Control for Multihomed ASs," IPS Journal vol. 95, No. 61, 95-DPS-71-3 (Multimedia Communication and Distributed Processing 71-3), dated Jul. 13, 1995.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7292584 *||Dec 30, 1999||Nov 6, 2007||Nokia Corporation||Effective multilink flow handling|
|US8185616 *||Jun 16, 2004||May 22, 2012||Fujitsu Limited||Route designing method|
|US8717895||Jul 6, 2011||May 6, 2014||Nicira, Inc.||Network virtualization apparatus and method with a table mapping engine|
|US8718070||Jul 6, 2011||May 6, 2014||Nicira, Inc.||Distributed network virtualization apparatus and method|
|US8743888||Jul 6, 2011||Jun 3, 2014||Nicira, Inc.||Network control apparatus and method|
|US8743889||Jul 6, 2011||Jun 3, 2014||Nicira, Inc.||Method and apparatus for using a network information base to control a plurality of shared network infrastructure switching elements|
|US8750119||Jul 6, 2011||Jun 10, 2014||Nicira, Inc.||Network control apparatus and method with table mapping engine|
|US8750164 *||Jul 6, 2011||Jun 10, 2014||Nicira, Inc.||Hierarchical managed switch architecture|
|US8761036||Jul 6, 2011||Jun 24, 2014||Nicira, Inc.||Network control apparatus and method with quality of service controls|
|US8775594||Aug 25, 2011||Jul 8, 2014||Nicira, Inc.||Distributed network control system with a distributed hash table|
|US8817620||Jul 6, 2011||Aug 26, 2014||Nicira, Inc.||Network virtualization apparatus and method|
|US8817621||Jul 6, 2011||Aug 26, 2014||Nicira, Inc.||Network virtualization apparatus|
|US8830823||Jul 6, 2011||Sep 9, 2014||Nicira, Inc.||Distributed control platform for large-scale production networks|
|US8830835||Aug 17, 2012||Sep 9, 2014||Nicira, Inc.||Generating flows for managed interconnection switches|
|US8837493||Jul 6, 2011||Sep 16, 2014||Nicira, Inc.||Distributed network control apparatus and method|
|US8842679||Jul 6, 2011||Sep 23, 2014||Nicira, Inc.||Control system that elects a master controller instance for switching elements|
|US8880468||Jul 6, 2011||Nov 4, 2014||Nicira, Inc.||Secondary storage architecture for a network control system that utilizes a primary network information base|
|US8913483||Aug 26, 2011||Dec 16, 2014||Nicira, Inc.||Fault tolerant managed switching element architecture|
|US8913611||Nov 15, 2012||Dec 16, 2014||Nicira, Inc.||Connection identifier assignment and source network address translation|
|US8958292||Jul 6, 2011||Feb 17, 2015||Nicira, Inc.||Network control apparatus and method with port security controls|
|US8958298||Aug 17, 2012||Feb 17, 2015||Nicira, Inc.||Centralized logical L3 routing|
|US8959215||Jul 6, 2011||Feb 17, 2015||Nicira, Inc.||Network virtualization|
|US8964528||Aug 26, 2011||Feb 24, 2015||Nicira, Inc.||Method and apparatus for robust packet distribution among hierarchical managed switching elements|
|US8964598||Aug 26, 2011||Feb 24, 2015||Nicira, Inc.||Mesh architectures for managed switching elements|
|US8964767||Aug 17, 2012||Feb 24, 2015||Nicira, Inc.||Packet processing in federated network|
|US8966024||Nov 15, 2012||Feb 24, 2015||Nicira, Inc.||Architecture of networks with middleboxes|
|US8966029||Nov 15, 2012||Feb 24, 2015||Nicira, Inc.||Network control system for configuring middleboxes|
|US8966035||Apr 1, 2010||Feb 24, 2015||Nicira, Inc.||Method and apparatus for implementing and managing distributed virtual switches in several hosts and physical forwarding elements|
|US8966040||Jul 6, 2011||Feb 24, 2015||Nicira, Inc.||Use of network information base structure to establish communication between applications|
|US9007903||Aug 26, 2011||Apr 14, 2015||Nicira, Inc.||Managing a network by controlling edge and non-edge switching elements|
|US9008087||Aug 26, 2011||Apr 14, 2015||Nicira, Inc.||Processing requests in a network control system with multiple controller instances|
|US9015823||Nov 15, 2012||Apr 21, 2015||Nicira, Inc.||Firewalls in logical networks|
|US9043452||Nov 3, 2011||May 26, 2015||Nicira, Inc.||Network control apparatus and method for port isolation|
|US9049153||Aug 26, 2011||Jun 2, 2015||Nicira, Inc.||Logical packet processing pipeline that retains state information to effectuate efficient processing of packets|
|US9059999||Feb 1, 2013||Jun 16, 2015||Nicira, Inc.||Load balancing in a logical pipeline|
|US9077664||Sep 6, 2011||Jul 7, 2015||Nicira, Inc.||One-hop packet processing in a network with managed switching elements|
|US9083609||Sep 26, 2008||Jul 14, 2015||Nicira, Inc.||Network operating system for managing and securing networks|
|US9106587||Aug 25, 2011||Aug 11, 2015||Nicira, Inc.||Distributed network control system with one master controller per managed switching element|
|US9112811||Aug 26, 2011||Aug 18, 2015||Nicira, Inc.||Managed switching elements used as extenders|
|US9137052||Aug 17, 2012||Sep 15, 2015||Nicira, Inc.||Federating interconnection switching element network to two or more levels|
|US9137107||Oct 25, 2012||Sep 15, 2015||Nicira, Inc.||Physical controllers for converting universal flows|
|US9154433||Aug 17, 2012||Oct 6, 2015||Nicira, Inc.||Physical controller|
|US9172603||Nov 15, 2012||Oct 27, 2015||Nicira, Inc.||WAN optimizer for logical networks|
|US9172663||Aug 25, 2011||Oct 27, 2015||Nicira, Inc.||Method and apparatus for replicating network information base in a distributed network control system with multiple controller instances|
|US9178833||Aug 17, 2012||Nov 3, 2015||Nicira, Inc.||Chassis controller|
|US9185069||Feb 1, 2013||Nov 10, 2015||Nicira, Inc.||Handling reverse NAT in logical L3 routing|
|US9195491||Nov 15, 2012||Nov 24, 2015||Nicira, Inc.||Migrating middlebox state for distributed middleboxes|
|US9203701||Oct 25, 2012||Dec 1, 2015||Nicira, Inc.||Network virtualization apparatus and method with scheduling capabilities|
|US9209998||Aug 17, 2012||Dec 8, 2015||Nicira, Inc.||Packet processing in managed interconnection switching elements|
|US9210079||Mar 8, 2013||Dec 8, 2015||Vmware, Inc.||Method and system for virtual and physical network integration|
|US9225597||Mar 14, 2014||Dec 29, 2015||Nicira, Inc.||Managed gateways peering with external router to attract ingress packets|
|US9231882||Jan 31, 2013||Jan 5, 2016||Nicira, Inc.||Maintaining quality of service in shared forwarding elements managed by a network control system|
|US9231891||Nov 2, 2011||Jan 5, 2016||Nicira, Inc.||Deployment of hierarchical managed switching elements|
|US9246833||Jan 31, 2013||Jan 26, 2016||Nicira, Inc.||Pull-based state dissemination between managed forwarding elements|
|US9253109||Jan 31, 2013||Feb 2, 2016||Nicira, Inc.||Communication channel for distributed network control system|
|US9276897||Feb 1, 2013||Mar 1, 2016||Nicira, Inc.||Distributed logical L3 routing|
|US9288081||Aug 17, 2012||Mar 15, 2016||Nicira, Inc.||Connecting unmanaged segmented networks by managing interconnection switching elements|
|US9288104||Oct 25, 2012||Mar 15, 2016||Nicira, Inc.||Chassis controllers for converting universal flows|
|US9300593||Jan 31, 2013||Mar 29, 2016||Nicira, Inc.||Scheduling distribution of logical forwarding plane data|
|US9300603||Aug 26, 2011||Mar 29, 2016||Nicira, Inc.||Use of rich context tags in logical data processing|
|US9306843||Apr 18, 2013||Apr 5, 2016||Nicira, Inc.||Using transactions to compute and propagate network forwarding state|
|US9306864||Jan 31, 2013||Apr 5, 2016||Nicira, Inc.||Scheduling distribution of physical control plane data|
|US9306875||Aug 26, 2011||Apr 5, 2016||Nicira, Inc.||Managed switch architectures for implementing logical datapath sets|
|US9306909||Nov 20, 2014||Apr 5, 2016||Nicira, Inc.||Connection identifier assignment and source network address translation|
|US9306910||Oct 21, 2013||Apr 5, 2016||Vmware, Inc.||Private allocated networks over shared communications infrastructure|
|US9313129||Mar 14, 2014||Apr 12, 2016||Nicira, Inc.||Logical router processing by network controller|
|US9319336||Jan 31, 2013||Apr 19, 2016||Nicira, Inc.||Scheduling distribution of logical control plane data|
|US9319337||Jan 31, 2013||Apr 19, 2016||Nicira, Inc.||Universal physical control plane|
|US9319338||Jan 31, 2013||Apr 19, 2016||Nicira, Inc.||Tunnel creation|
|US9319375||Feb 1, 2013||Apr 19, 2016||Nicira, Inc.||Flow templating in logical L3 routing|
|US9331937||Apr 18, 2013||May 3, 2016||Nicira, Inc.||Exchange of network state information between forwarding elements|
|US9350657||Oct 31, 2013||May 24, 2016||Nicira, Inc.||Encapsulating data packets using an adaptive tunnelling protocol|
|US9350696||Feb 1, 2013||May 24, 2016||Nicira, Inc.||Handling NAT in logical L3 routing|
|US9356906||Feb 1, 2013||May 31, 2016||Nicira, Inc.||Logical L3 routing with DHCP|
|US9363210||Aug 26, 2011||Jun 7, 2016||Nicira, Inc.||Distributed network control system with one master controller per logical datapath set|
|US9369426||Aug 17, 2012||Jun 14, 2016||Nicira, Inc.||Distributed logical L3 routing|
|US9391928||Aug 26, 2011||Jul 12, 2016||Nicira, Inc.||Method and apparatus for interacting with a network information base in a distributed network control system with multiple controller instances|
|US9407566||Jan 31, 2013||Aug 2, 2016||Nicira, Inc.||Distributed network control system|
|US9407599||Feb 1, 2013||Aug 2, 2016||Nicira, Inc.||Handling NAT migration in logical L3 routing|
|US9413644||Mar 27, 2014||Aug 9, 2016||Nicira, Inc.||Ingress ECMP in virtual distributed routing environment|
|US9419855||Mar 14, 2014||Aug 16, 2016||Nicira, Inc.||Static routes for logical routers|
|US9432204||Sep 6, 2013||Aug 30, 2016||Nicira, Inc.||Distributed multicast by endpoints|
|US9432252||Mar 31, 2014||Aug 30, 2016||Nicira, Inc.||Unified replication mechanism for fault-tolerance of state|
|US9444651||Aug 17, 2012||Sep 13, 2016||Nicira, Inc.||Flow generation from second level controller to first level controller to managed switching element|
|US9461960||Feb 1, 2013||Oct 4, 2016||Nicira, Inc.||Logical L3 daemon|
|US9485185||Oct 31, 2013||Nov 1, 2016||Nicira, Inc.||Adjusting connection validating control signals in response to changes in network traffic|
|US9503321||Mar 21, 2014||Nov 22, 2016||Nicira, Inc.||Dynamic routing for logical routers|
|US9503371||Jan 28, 2014||Nov 22, 2016||Nicira, Inc.||High availability L3 gateways for logical networks|
|US9525647||Oct 7, 2011||Dec 20, 2016||Nicira, Inc.||Network control apparatus and method for creating and modifying logical switching elements|
|US9552219||Nov 16, 2015||Jan 24, 2017||Nicira, Inc.||Migrating middlebox state for distributed middleboxes|
|US9558027||Jan 12, 2015||Jan 31, 2017||Nicira, Inc.||Network control system for configuring middleboxes|
|US9559870||Jun 30, 2014||Jan 31, 2017||Nicira, Inc.||Managing forwarding of logical network traffic between physical domains|
|US9571304||Jun 30, 2014||Feb 14, 2017||Nicira, Inc.||Reconciliation of network state across physical domains|
|US9575782||Dec 20, 2013||Feb 21, 2017||Nicira, Inc.||ARP for logical router|
|US9577845||Jan 28, 2014||Feb 21, 2017||Nicira, Inc.||Multiple active L3 gateways for logical networks|
|US9590901||Mar 14, 2014||Mar 7, 2017||Nicira, Inc.||Route advertisement by managed gateways|
|US9590919||Jan 9, 2015||Mar 7, 2017||Nicira, Inc.||Method and apparatus for implementing and managing virtual switches|
|US9602305||Dec 7, 2015||Mar 21, 2017||Nicira, Inc.||Method and system for virtual and physical network integration|
|US9602312||Jun 30, 2014||Mar 21, 2017||Nicira, Inc.||Storing network state at a network controller|
|US9602385||Dec 18, 2013||Mar 21, 2017||Nicira, Inc.||Connectivity segment selection|
|US9602392||Dec 18, 2013||Mar 21, 2017||Nicira, Inc.||Connectivity segment coloring|
|US9602421||Jan 31, 2013||Mar 21, 2017||Nicira, Inc.||Nesting transaction updates to minimize communication|
|US9602422||Jun 26, 2014||Mar 21, 2017||Nicira, Inc.||Implementing fixed points in network state updates using generation numbers|
|US9647883||Mar 21, 2014||May 9, 2017||Nicira, Inc.||Multiple levels of logical routers|
|US9667447||Jun 30, 2014||May 30, 2017||Nicira, Inc.||Managing context identifier assignment across multiple physical domains|
|US9667556||Oct 31, 2013||May 30, 2017||Nicira, Inc.||Adjusting connection validating control signals in response to changes in network traffic|
|US9680750||Nov 4, 2011||Jun 13, 2017||Nicira, Inc.||Use of tunnels to hide network addresses|
|US9692655||Sep 6, 2011||Jun 27, 2017||Nicira, Inc.||Packet processing in a network with hierarchical managed switching elements|
|US9697030||Nov 20, 2014||Jul 4, 2017||Nicira, Inc.||Connection identifier assignment and source network address translation|
|US9697032||Dec 30, 2014||Jul 4, 2017||Vmware, Inc.||Automated network configuration of virtual machines in a virtual lab environment|
|US9697033||Jan 12, 2015||Jul 4, 2017||Nicira, Inc.||Architecture of networks with middleboxes|
|US9742881||Jun 30, 2014||Aug 22, 2017||Nicira, Inc.||Network virtualization using just-in-time distributed capability for classification encoding|
|US9768980||Sep 30, 2014||Sep 19, 2017||Nicira, Inc.||Virtual distributed bridging|
|US9785455||Dec 20, 2013||Oct 10, 2017||Nicira, Inc.||Logical router|
|US9794079||Mar 31, 2014||Oct 17, 2017||Nicira, Inc.||Replicating broadcast, unknown-unicast, and multicast traffic in overlay logical networks bridged with physical networks|
|US20030133411 *||Feb 14, 2003||Jul 17, 2003||Kabushiki Kaisha Toshiba||Communication resource management method and node control device using priority control and admission control|
|US20030195965 *||Feb 20, 2003||Oct 16, 2003||Il-Gyu Choi||Data communication method using resource reservation|
|US20040151187 *||Jan 31, 2003||Aug 5, 2004||Lichtenstein Walter D.||Scheduling data transfers for multiple use requests|
|US20040153567 *||Jan 31, 2003||Aug 5, 2004||Lichtenstein Walter D.||Scheduling data transfers using virtual nodes|
|US20050154790 *||Jun 16, 2004||Jul 14, 2005||Akira Nagata||Route designing method|
|US20050188089 *||Feb 24, 2004||Aug 25, 2005||Lichtenstein Walter D.||Managing reservations for resources|
|US20070086355 *||Jan 31, 2006||Apr 19, 2007||Fujitsu Limited||Data transmission apparatus for traffic control to maintain quality of service|
|US20090138577 *||Sep 26, 2008||May 28, 2009||Nicira Networks||Network operating system for managing and securing networks|
|US20100257263 *||Apr 1, 2010||Oct 7, 2010||Nicira Networks, Inc.||Method and apparatus for implementing and managing virtual switches|
|US20170188059 *||Dec 28, 2016||Jun 29, 2017||Echostar Technologies L.L.C.||Dynamic content delivery routing and related methods and systems|
|U.S. Classification||370/232, 370/397|
|International Classification||H04L12/24, H04L12/56, H04Q11/04, H04L29/06, H04L12/46|
|Cooperative Classification||H04L49/3081, H04L2012/5647, H04L2012/5642, H04L41/00, H04L47/781, H04L47/762, H04L47/724, H04L47/825, H04L12/5601, H04L12/4608, H04L47/746, H04L29/06, H04L2012/5632, H04L12/5602, H04L2012/5679, H04L47/822, H04L2012/5649, H04L2012/5635, H04L2012/563, H04L2012/5627, H04Q11/0478, H04L2012/5658, H04L2012/5619, H04L2012/5645, H04L47/783, H04L47/745, H04L2012/5667, H04L47/10, H04L47/70|
|European Classification||H04L12/56A1, H04L12/56R, H04L47/10, H04L47/72B, H04L47/82B, H04L47/82E, H04L47/76A, H04L49/30J, H04L47/74C, H04L47/74D, H04L47/78A, H04L47/78C, H04L41/00, H04L12/24, H04Q11/04S2, H04L29/06, H04L12/46B1, H04L12/56A|
|Mar 7, 2006||AS||Assignment|
Owner name: HITACHI, LTD., JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YOSHITO, SAKURAI;HAJIME, ABE;NOBORU, ENDO;AND OTHERS;REEL/FRAME:017263/0933
Effective date: 19980602
|Oct 23, 2009||FPAY||Fee payment|
Year of fee payment: 4
|Oct 16, 2013||FPAY||Fee payment|
Year of fee payment: 8