Publication number: US 7545740 B2
Publication type: Grant
Application number: US 11/279,045
Publication date: Jun 9, 2009
Filing date: Apr 7, 2006
Priority date: Apr 7, 2006
Fee status: Paid
Also published as: EP2008387A2, EP2008387A4, US20070237172, WO2007116391A2, WO2007116391A3
Inventors: David Zelig, Ronen Solomon, Uzi Khill
Original assignee: Corrigent Systems Ltd.
Two-way link aggregation
US 7545740 B2
Abstract
A method for communication includes coupling a network node to one or more interface modules using a first group of first physical links arranged in parallel. Each of the one or more interface modules is coupled to a communication network using a second group of second physical links arranged in parallel. A data frame having frame attributes sent between the communication network and the network node is received. A first physical link out of the first group and a second physical link out of the second group are selected in a single computation based on at least one of the frame attributes. The data frame is sent over the selected first and second physical links. This method allows two or more link aggregation groups to be concatenated, using a single processing stage to determine port assignment for each frame in each of the link aggregation groups.
Claims (31)
1. A method for communication, comprising:
coupling a network node to one or more interface modules using a first group of first physical links arranged in parallel, at least one of said first physical links being a bi-directional link operative to communicate in both an upstream direction and a downstream direction;
coupling each of the one or more interface modules to a communication network using a second group of second physical links arranged in parallel, at least one of said second physical links being a bi-directional link operative to communicate in both an upstream direction and a downstream direction;
receiving a data frame having frame attributes sent between the communication network and the network node;
selecting, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group; and
sending the data frame over the selected first and second physical links,
said sending comprising communicating along at least one of said bi-directional links.
2. The method according to claim 1, wherein the network node comprises a user node, and wherein sending the data frame comprises establishing a communication service between the user node and the communication network.
3. The method according to claim 1, wherein the second physical links comprise backplane traces formed on a backplane to which the one or more interface modules are coupled.
4. A method for communication, comprising:
coupling a network node to one or more interface modules using a first group of first physical links arranged in parallel;
coupling each of the one or more interface modules to a communication network using a second group of second physical links arranged in parallel;
receiving a data frame having frame attributes sent between the communication network and the network node;
selecting, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group; and
sending the data frame over the selected first and second physical links,
at least one of the first and second groups of physical links comprising an Ethernet link aggregation (LAG) group.
5. A method for communication, comprising:
coupling a network node to one or more interface modules using a first group of first physical links arranged in parallel;
coupling each of the one or more interface modules to a communication network using a second group of second physical links arranged in parallel;
receiving a data frame having frame attributes sent between the communication network and the network node;
selecting, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group; and
sending the data frame over the selected first and second physical links,
coupling the network node to the one or more interface modules comprising aggregating two or more of the first physical links into an external Ethernet link aggregation (LAG) group so as to increase a data bandwidth provided to the network node.
6. The method according to claim 1, wherein coupling each of the one or more interface modules to the communication network comprises at least one of multiplexing upstream data frames sent from the network node to the communication network, and demultiplexing downstream data frames sent from the communication network to the network node.
7. The method according to claim 1, wherein selecting the first and second physical links comprises balancing a frame data rate among at least some of the first and second physical links.
8. The method according to claim 1, wherein selecting the first and second physical links comprises applying a mapping function to the at least one of the frame attributes.
9. The method according to claim 8, wherein applying the mapping function comprises applying a hashing function.
10. The method according to claim 9, wherein applying the hashing function comprises determining a hashing size responsively to a number of at least some of the first and second physical links, applying the hashing function to the at least one of the frame attributes to produce a hashing key, calculating a modulo of a division operation of the hashing key by the hashing size, and selecting the first and second physical links responsively to the modulo.
11. The method according to claim 10, wherein selecting the first and second physical links responsively to the modulo comprises selecting the first and second physical links responsively to respective first and second subsets of bits in a binary representation of the modulo.
12. The method according to claim 1, wherein the at least one of the frame attributes comprises at least one of a layer 2 header field, a layer 3 header field, a layer 4 header field, a source Internet Protocol (IP) address, a destination IP address, a source medium access control (MAC) address, a destination MAC address, a source Transmission Control Protocol (TCP) port and a destination TCP port.
13. A method for communication, comprising:
coupling a network node to one or more interface modules using a first group of first physical links arranged in parallel;
coupling each of the one or more interface modules to a communication network using a second group of second physical links arranged in parallel;
receiving a data frame having frame attributes sent between the communication network and the network node;
selecting, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group; and
sending the data frame over the selected first and second physical links,
coupling the network node to the one or more interface modules and coupling each of the one or more interface modules to the communication network comprising specifying bandwidth requirements comprising at least one of a committed information rate (CIR), a peak information rate (PIR) and an excess information rate (EIR) of a communication service provided by the communication network to the network node, and allocating a bandwidth for the communication service over the first and second physical links responsively to the bandwidth requirements.
14. A method for connecting user ports to a communication network, comprising:
coupling the user ports to one or more user interface modules;
coupling each user interface module to the communication network via a backplane using two or more backplane traces arranged in parallel, at least one of said backplane traces being bi-directional and operative to communicate in both an upstream direction and a downstream direction;
receiving data frames sent between the user ports and the communication network, the data frames having respective frame attributes;
for each data frame, selecting responsively to at least one of the respective frame attributes a backplane trace from the two or more backplane traces; and
sending the data frame over the selected backplane trace;
said sending comprising communicating along said at least one of said backplane traces.
15. A method for connecting user ports to a communication network, comprising:
coupling the user ports to one or more user interface modules;
coupling each user interface module to the communication network via a backplane using two or more backplane traces arranged in parallel;
receiving data frames sent between the user ports and the communication network, the data frames having respective frame attributes;
for each data frame, selecting responsively to at least one of the respective frame attributes a backplane trace from the two or more backplane traces; and
sending the data frame over the selected backplane trace,
at least some of the backplane traces being aggregated into an Ethernet link aggregation (LAG) group.
16. The method according to claim 14, wherein selecting the backplane trace comprises applying a hashing function to the at least one of the frame attributes.
17. Apparatus for connecting a network node with a communication network, comprising:
one or more interface modules, which are arranged to process data frames having frame attributes sent between the network node and the communication network, at least one of said interface modules being operative to communicate in both an upstream direction and a downstream direction;
a first group of first physical links arranged in parallel so as to couple the network node to the one or more interface modules;
a second group of second physical links arranged in parallel so as to couple the one or more interface modules to the communication network; and
a control module, which is arranged to select for each data frame sent between the communication network and the network node, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group over which to send the data frame;
at least one of said first physical links and at least one of said second links being bi-directional links operative to communicate in both said upstream direction and said downstream direction.
18. The apparatus according to claim 17, and comprising a backplane to which the one or more interface modules are coupled, wherein the second physical links comprise backplane traces formed on the backplane.
19. Apparatus for connecting a network node with a communication network, comprising:
one or more interface modules, which are arranged to process data frames having frame attributes sent between the network node and the communication network;
a first group of first physical links arranged in parallel so as to couple the network node to the one or more interface modules;
a second group of second physical links arranged in parallel so as to couple the one or more interface modules to the communication network; and
a control module, which is arranged to select for each data frame sent between the communication network and the network node, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group over which to send the data frame,
at least one of the first and second groups of physical links comprising an Ethernet link aggregation (LAG) group.
20. Apparatus for connecting a network node with a communication network, comprising:
one or more interface modules, which are arranged to process data frames having frame attributes sent between the network node and the communication network;
a first group of first physical links arranged in parallel so as to couple the network node to the one or more interface modules;
a second group of second physical links arranged in parallel so as to couple the one or more interface modules to the communication network; and
a control module, which is arranged to select for each data frame sent between the communication network and the network node, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group over which to send the data frame,
two or more of the first physical links being aggregated into an external Ethernet link aggregation (LAG) group so as to increase a data bandwidth provided to the network node.
21. The apparatus according to claim 17, and comprising a multiplexer, which is arranged to perform at least one of multiplexing upstream data frames sent from the network node to the communication network, and demultiplexing downstream data frames sent from the communication network to the network node.
22. The apparatus according to claim 17, wherein the control module is arranged to balance a frame data rate among at least some of the first and second physical links.
23. The apparatus according to claim 17, wherein the control module is arranged to apply a mapping function to the at least one of the frame attributes so as to select the first and second physical links.
24. The apparatus according to claim 23, wherein the mapping function comprises a hashing function.
25. The apparatus according to claim 24, wherein the control module is arranged to determine a hashing size responsively to a number of at least some of the first and second physical links, to apply the hashing function to the at least one of the frame attributes to produce a hashing key, to calculate a modulo of a division operation of the hashing key by the hashing size, and to select the first and second physical links responsively to the modulo.
26. The apparatus according to claim 25, wherein the control module is arranged to select the first and second physical links responsively to respective first and second subsets of bits in a binary representation of the modulo.
27. The apparatus according to claim 17, wherein the at least one of the frame attributes comprises at least one of a layer 2 header field, a layer 3 header field, a layer 4 header field, a source Internet Protocol (IP) address, a destination IP address, a source medium access control (MAC) address, a destination MAC address, a source Transmission Control Protocol (TCP) port and a destination TCP port.
28. Apparatus for connecting a network node with a communication network, comprising:
one or more interface modules, which are arranged to process data frames having frame attributes sent between the network node and the communication network;
a first group of first physical links arranged in parallel so as to couple the network node to the one or more interface modules;
a second group of second physical links arranged in parallel so as to couple the one or more interface modules to the communication network; and
a control module, which is arranged to select for each data frame sent between the communication network and the network node, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group over which to send the data frame,
the communication network being arranged to provide a communication service to the network node, the service having specified bandwidth requirements comprising at least one of a committed information rate (CIR), a peak information rate (PIR) and an excess information rate (EIR), and the first and second groups of physical links being dimensioned to provide an allocated bandwidth for the communication service responsively to the bandwidth requirements.
29. Apparatus for connecting user ports to a communication network, comprising:
one or more user interface modules coupled to the user ports, which are arranged to process data frames having frame attributes sent between the user ports and the communication network, at least one of said user interface modules being bi-directional and operative to communicate in both an upstream direction and a downstream direction;
a backplane having the one or more user interface modules coupled thereto and comprising a plurality of backplane traces arranged in parallel so as to transfer the data frames between the one or more user interface modules and the communication network, at least one of said backplane traces being bi-directional and operative to communicate in both said upstream direction and said downstream direction; and
a control module, which is arranged to select, for each data frame, responsively to at least one of the frame attributes, a backplane trace from the plurality of backplane traces over which to send the data frame.
30. Apparatus for connecting user ports to a communication network, comprising:
one or more user interface modules coupled to the user ports, which are arranged to process data frames having frame attributes sent between the user ports and the communication network;
a backplane having the one or more user interface modules coupled thereto and comprising a plurality of backplane traces arranged in parallel so as to transfer the data frames between the one or more user interface modules and the communication network;
a control module, which is arranged to select, for each data frame, responsively to at least one of the frame attributes, a backplane trace from the plurality of backplane traces over which to send the data frame;
at least some of the backplane traces being aggregated into an Ethernet link aggregation (LAG) group.
31. The apparatus according to claim 29, wherein the control module is arranged to apply a hashing function to the at least one of the frame attributes so as to select the backplane trace.
Description
FIELD OF THE INVENTION

The present invention relates generally to communication networks, and particularly to methods and systems for link aggregation in network elements.

BACKGROUND OF THE INVENTION

Link aggregation (LAG) is a technique by which a group of parallel physical links between two endpoints in a data network can be joined together into a single logical link (referred to as the “LAG group”). Traffic transmitted between the endpoints is distributed among the physical links in a manner that is transparent to the clients that send and receive the traffic. For Ethernet™ networks, link aggregation is defined by Clause 43 of IEEE Standard 802.3ad, Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications (2002 Edition), which is incorporated herein by reference. Clause 43 defines a link aggregation protocol sub-layer, which interfaces between the standard Media Access Control (MAC) layer functions of the physical links in a link aggregation group and the MAC clients that transmit and receive traffic over the aggregated links. The link aggregation sub-layer comprises a distributor function, which distributes data frames submitted by MAC clients among the physical links in the group, and a collector function, which receives frames over the aggregated links and passes them to the appropriate MAC clients.

SUMMARY OF THE INVENTION

In various communication applications, users connect to a communication network through a network element, such as an access concentrator or aggregator, in order to obtain different data services.

Embodiments of the present invention that are described hereinbelow provide improved methods and systems for connecting users to a communication network with increased capacity and quality of service. The network element comprises one or more user interface modules (UIMs), each serving one or more user ports. In some embodiments, each UIM is connected to the communication network using two or more physical links arranged in parallel, in order to provide sufficient bandwidth at a given quality-of-service (QoS) level.

Upstream data frames sent from the user ports to the communication network and downstream data frames sent from the communication network to the user ports are distributed among the parallel physical links, so as to balance the traffic load among the links. The load balancing enables each UIM, and the network element as a whole, to deliver a higher bandwidth at a given QoS or to improve the QoS at a given bandwidth.

In some embodiments, the UIMs are coupled to a backplane of the network element, and the parallel physical links comprise backplane traces. In some embodiments, the physical links are configured as an Ethernet link aggregation (LAG) group. Distribution of frames to individual physical links typically comprises applying a suitable mapping function, such as a hashing function. The mapping function typically uses frame attributes, such as various header fields of the frame, to determine a physical link over which to send each frame.

Unlike some known network element configurations, in which each user port is fixedly assigned to a specific backplane trace, the load balancing operation in embodiments of the present invention enables statistical multiplexing of the frames, in which there is no direct relationship or connection between user ports and backplane traces.

In some embodiments, two or more physical user ports are aggregated into a LAG group external to the network element, so as to form an aggregated user port having a higher bandwidth. The frames to and from the aggregated user port are distributed among the physical user ports of the external LAG group to balance the load and enable higher bandwidth and higher QoS. In the downstream direction, a combined mapping operation for frames addressed to the aggregated user port determines an individual backplane trace and a physical user port over which to send each frame in a single mapping computation.
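The combined downstream mapping described above can be sketched as follows. This is an illustrative sketch only: the function and parameter names are assumptions, not taken from the patent, and a real implementation would hash in hardware rather than with a general-purpose digest.

```python
import hashlib

def select_links(frame_attrs: bytes, num_traces: int, num_user_ports: int):
    """Pick a backplane trace and a physical user port for one downstream
    frame in a single mapping computation (illustrative sketch)."""
    # Hash the frame attributes (e.g. concatenated header fields) to a key.
    key = int.from_bytes(hashlib.md5(frame_attrs).digest(), "big")
    # Reduce the key modulo the combined hashing size, then split the result:
    # a single divmod yields both the trace index and the user-port index.
    combined = key % (num_traces * num_user_ports)
    trace_index, port_index = divmod(combined, num_user_ports)
    return trace_index, port_index
```

Because the mapping depends only on the frame attributes, all frames of a given flow follow the same trace/port pair, which preserves per-flow frame order while spreading distinct flows across both groups.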

Several system configurations that implement the disclosed methods are described hereinbelow. Bandwidth allocation considerations for allocating sufficient bandwidth in the various resources of the network element are also described and demonstrated.

Although some of the embodiments described herein relate specifically to access concentrators, aspects of the present invention are also applicable to link aggregation performed by network elements of other sorts.

There is therefore provided, in accordance with an embodiment of the present invention, a method for communication, including:

coupling a network node to one or more interface modules using a first group of first physical links arranged in parallel;

coupling each of the one or more interface modules to a communication network using a second group of second physical links arranged in parallel;

receiving a data frame having frame attributes sent between the communication network and the network node;

selecting, in a single computation based on at least one of the frame attributes, a first physical link out of the first group and a second physical link out of the second group; and

sending the data frame over the selected first and second physical links.

In an embodiment, the network node includes a user node, and sending the data frame includes establishing a communication service between the user node and the communication network.

In another embodiment, the second physical links include backplane traces formed on a backplane to which the one or more interface modules are coupled.

In yet another embodiment, at least one of the first and second groups of physical links includes an Ethernet link aggregation (LAG) group.

In still another embodiment, coupling the network node to the one or more interface modules includes aggregating two or more of the first physical links into an external Ethernet LAG group so as to increase a data bandwidth provided to the network node.

In an embodiment, coupling each of the one or more interface modules to the communication network includes at least one of multiplexing upstream data frames sent from the network node to the communication network, and demultiplexing downstream data frames sent from the communication network to the network node.

In another embodiment, selecting the first and second physical links includes balancing a frame data rate among at least some of the first and second physical links.

In an embodiment, selecting the first and second physical links includes applying a mapping function to the at least one of the frame attributes. In another embodiment, applying the mapping function includes applying a hashing function. In still another embodiment, applying the hashing function includes determining a hashing size responsively to a number of at least some of the first and second physical links, applying the hashing function to the at least one of the frame attributes to produce a hashing key, calculating a modulo of a division operation of the hashing key by the hashing size, and selecting the first and second physical links responsively to the modulo. In an embodiment, selecting the first and second physical links responsively to the modulo includes selecting the first and second physical links responsively to respective first and second subsets of bits in a binary representation of the modulo.
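The bit-subset selection in the last step can be sketched under the simplifying assumption (not required by the patent) that each group size is a power of two, so that disjoint bit fields of the modulo index the two groups directly. Function and parameter names are illustrative:

```python
import hashlib

def select_by_bit_subsets(attrs: bytes, bits_first: int, bits_second: int):
    """Select one link from each group using disjoint bit subsets of the
    modulo, assuming power-of-two group sizes (illustrative sketch)."""
    # Hashing size follows from the link counts: e.g. 4 x 8 links -> 32.
    hashing_size = 1 << (bits_first + bits_second)
    # Hash the frame attributes to produce a hashing key.
    key = int.from_bytes(hashlib.sha1(attrs).digest(), "big")
    modulo = key % hashing_size
    # Low bits select the first-group link, high bits the second-group link.
    first_link = modulo & ((1 << bits_first) - 1)
    second_link = modulo >> bits_first
    return first_link, second_link
```

With `bits_first=2` and `bits_second=3`, for example, one five-bit modulo simultaneously addresses four first-group links and eight second-group links.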

In an embodiment, the at least one of the frame attributes includes at least one of a layer 2 header field, a layer 3 header field, a layer 4 header field, a source Internet Protocol (IP) address, a destination IP address, a source medium access control (MAC) address, a destination MAC address, a source Transmission Control Protocol (TCP) port and a destination TCP port.

In another embodiment, coupling the network node to the one or more interface modules and coupling each of the one or more interface modules to the communication network include specifying bandwidth requirements including at least one of a committed information rate (CIR), a peak information rate (PIR) and an excess information rate (EIR) of a communication service provided by the communication network to the network node, and allocating a bandwidth for the communication service over the first and second physical links responsively to the bandwidth requirements.

There is additionally provided, in accordance with an embodiment of the present invention, a method for connecting user ports to a communication network, including:

coupling the user ports to one or more user interface modules;

coupling each user interface module to the communication network via a backplane using two or more backplane traces arranged in parallel;

receiving data frames sent between the user ports and the communication network, the data frames having respective frame attributes;

for each data frame, selecting responsively to at least one of the respective frame attributes a backplane trace from the two or more backplane traces; and

sending the data frame over the selected backplane trace.

Apparatus for connecting a network node with a communication network and for connecting user ports to a communication network are also provided.

The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1 and 2 are block diagrams that schematically illustrate communication systems, in accordance with embodiments of the present invention;

FIG. 3 is a block diagram that schematically illustrates elements of a communication system, in accordance with an embodiment of the present invention;

FIG. 4 is a flow chart that schematically illustrates a method for single-stage hashing, in accordance with an embodiment of the present invention; and

FIG. 5 is a block diagram that schematically illustrates bandwidth allocation aspects of a communication system, in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION OF EMBODIMENTS

System Description

FIG. 1 is a block diagram that schematically illustrates a communication system 20, in accordance with an embodiment of the present invention. System 20 interconnects a plurality of user ports 24 to a communication network 28. Network 28 may comprise a wide-area network (WAN), such as the Internet, a network internal to a particular organization (Intranet), or any other suitable communication network.

A network element 32, such as an access concentrator, connects user ports 24 to a node 36 in network 28, typically via a network processor (NP) 38. Node 36 may comprise any suitable network element, such as a switch. The network element enables bi-directional communication in both upstream (i.e., user ports to network 28) and downstream (network 28 to user ports) directions.

In some embodiments, system 20 provides data services to users via network element 32. Typically, the system uses a Layer 2 communication protocol, such as an Ethernet™ communication protocol, in which data is transferred among the different system components using Ethernet frames. In some cases, services use higher-level protocols, such as Multiprotocol Label Switching (MPLS). Alternatively, the Internet Protocol (IP) or other layer 2 or layer 3 protocols may be used.

Network element 32 comprises one or more user interface modules (UIMs), such as line cards 40. Each line card is assigned to process data frames of one or more user ports. In some embodiments, the line cards are plugged into, mounted on, or otherwise coupled to a backplane 52, which distributes digital signals carrying the frames to and from line cards 40. Backplane 52 comprises physical links, such as backplane traces 56, typically in the form of printed circuit board (PCB) conductors. Each backplane trace 56 has a finite bandwidth and can support a certain maximum frame throughput.

A multiplexer (MUX) 44 is coupled to backplane traces 56. In the upstream direction, MUX 44 multiplexes upstream frames coming out of line cards 40 to produce an upstream output, which is provided to network processor 38 and sent over a network connection 48 to network 28. In the downstream direction, network processor 38 of network element 32 accepts from node 36, through connection 48, a downstream input comprising downstream frames addressed to user ports 24. MUX 44 sends each frame to the appropriate user port via the appropriate line card, using methods which will be explained below.

In many cases, a particular line card is connected to MUX 44 using two or more parallel backplane traces, in order to support the total bandwidth of the user ports assigned to this line card. In general, the statistical distribution of frames sent over different backplane traces may differ significantly from trace to trace, even for traces that belong to the same line card. In order to improve the capacity of the line cards, and consequently of network element 32 as a whole, it is desirable to balance the load of frames sent over the different traces, so as to avoid situations in which a certain backplane trace is overloaded, while excess capacity is available on a neighboring trace.

In many cases, the bandwidth offered by network element 32 to its users is specified in terms of quality of service (QoS) figures of merit, such as a guaranteed bandwidth (sometimes denoted CIR—Committed Information Rate) and a peak bandwidth (sometimes denoted PIR—Peak Information Rate). An alternative definition sometimes specifies the excess information rate (EIR), wherein CIR+EIR=PIR. Balancing the frame load between backplane traces 56 reduces the probability of lost frames due to overloading of backplane traces. The load balancing thus improves these QoS figures of merit, increasing the quality of service offered by the concentrator and/or enabling a higher bandwidth at a given QoS.
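The arithmetic behind these figures of merit is simple, and can be sketched as follows; the function names and the capacity check are illustrative assumptions, not the patent's dimensioning procedure:

```python
def excess_information_rate(cir_mbps: float, pir_mbps: float) -> float:
    """Derive the EIR from the committed and peak rates (CIR + EIR = PIR)."""
    assert pir_mbps >= cir_mbps, "peak rate cannot be below committed rate"
    return pir_mbps - cir_mbps

def fits_lag_group(service_cirs_mbps, trace_rate_mbps: float,
                   num_traces: int) -> bool:
    """Check that the sum of committed rates fits the aggregated capacity
    of a LAG group of num_traces parallel traces (hypothetical check)."""
    return sum(service_cirs_mbps) <= trace_rate_mbps * num_traces
```

For example, a service with CIR = 100 Mb/s and PIR = 250 Mb/s has EIR = 150 Mb/s, and two services committed at 600 Mb/s each exceed one 1 Gb/s trace but fit a two-trace LAG group.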

One way of balancing the load is to distribute the data rate of the Ethernet frames as uniformly as possible among the backplane traces. Alternatively, any other load balancing criterion can be used. In order to achieve load balancing, each group of backplane traces belonging to a particular line card is configured as an Ethernet link aggregation (LAG) group 58. Each LAG group 58 is considered by the relevant line card and by multiplexer 44 to be a single logical link having an aggregated bandwidth (i.e., capacity) equal to the sum of the bandwidths of the individual backplane traces in the group. As a result, there is no pre-assigned relationship or connection between any given user port 24 and a specific backplane trace 56, as in some conventional network elements. Ethernet frames are statistically multiplexed so as to balance the load among the backplane traces.

In some embodiments, Ethernet frames are mapped to individual backplane traces in the LAG group in accordance with a suitable mapping function, such as a hashing function. The mapping function distributes the frames among the different backplane traces so as to balance the load between the traces. In some embodiments, the mapping function hashes one or more frame attributes, such as header fields of the Ethernet frame, to produce a hashing key. The hashing key corresponds to an index of the backplane trace over which the frame is to be sent. Header fields may comprise any suitable layer 2, 3 or 4 headers of the Ethernet frame, such as source Internet Protocol (IP) address, destination IP address, source medium access control (MAC) address, destination MAC address, source Transmission Control Protocol (TCP) port and destination TCP port, MPLS label, virtual circuit (VC) label, virtual local area network identification (VLAN-ID) tag, as well as any other suitable header field.
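As a minimal sketch of such a hashed mapping, the fragment below hashes a few frame header fields to a backplane-trace index. The choice of CRC-32 as the hashing function and the particular field names are illustrative assumptions, not mandated by the description:

```python
import zlib

def select_trace(frame_headers: dict, num_traces: int) -> int:
    """Hash selected header fields of a frame to a backplane-trace index.

    `frame_headers` maps field names (e.g. source/destination MAC or IP
    address, TCP ports, VLAN-ID) to their byte values. Both the field set
    and the CRC-32 hash are illustrative choices, not taken from the text.
    """
    key = b"".join(frame_headers[f] for f in sorted(frame_headers))
    return zlib.crc32(key) % num_traces

# Frames of the same flow always map to the same trace, while different
# flows spread statistically across the traces of the LAG group.
trace = select_trace(
    {"src_mac": b"\x00\x11\x22\x33\x44\x55", "dst_ip": b"\x0a\x00\x00\x01"},
    num_traces=4,
)
assert 0 <= trace < 4
```

Because the key is derived only from header fields, the mapping preserves frame order within a flow while balancing load across flows.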

In some embodiments, in the upstream direction, each line card 40 performs an upstream mapping of upstream frames to the individual backplane traces in its LAG group using a suitable mapping function. The mapping function typically uses attributes of the upstream frames, as described above.

In some embodiments, in the downstream direction, a control module 60 in network element 32 determines a downstream mapping of each downstream frame received through connection 48 to the appropriate backplane trace, using a suitable mapping function. The mapping performed by module 60 should naturally consider the user port to which the frame is addressed, in order to send the frame to the line card serving this user port. Module 60 controls multiplexer 44 in order to send each frame over the appropriate backplane trace. In addition to determining the appropriate line card for each frame (i.e., determining over which LAG group 58 to send the frame), the downstream mapping also balances the load of frames within each LAG group, responsively to attributes of the downstream frames.

For example, the downstream mapping can be implemented in two sequential stages. First, for each Ethernet frame, module 60 determines the appropriate line card to which the frame should be sent, depending on the destination user port. Then, within the backplane traces belonging to the LAG group of the appropriate line card, module 60 selects a particular trace and controls MUX 44 to send the frame over the selected trace. Alternatively, any other suitable mapping method can be used by module 60.
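The two sequential stages just described can be sketched as follows; the function and table names are illustrative, and the per-frame hash is abstracted into an integer key:

```python
def two_stage_map(dest_user_port, frame_hash, port_to_card, traces_per_card):
    """Two-stage downstream mapping for one frame (illustrative sketch).

    Stage 1 selects the line card serving the destination user port;
    stage 2 load-balances the frame over that card's LAG group of
    backplane traces using the frame's hash key.
    """
    card = port_to_card[dest_user_port]         # stage 1: line card by destination port
    trace = frame_hash % traces_per_card[card]  # stage 2: trace within the card's LAG group
    return card, trace
```

The single-stage method described later collapses these two lookups into one computation per frame.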

Control module 60 may be implemented in hardware, as software code running on a suitable processor, or as a combination of hardware and software elements. Module 60 may comprise an independent module, or be integrated with other components of network element 32. Different embodiments of network element 32 may comprise any number of line cards 40. Each line card may serve any number of user ports 24 and any number of backplane traces 56. In general, there need not be a direct relation between the number of user ports and the number of backplane traces. Each user port and each backplane trace may have any suitable bandwidth. User ports can have equal or different bandwidths. Similarly, the backplane traces within each LAG group 58 can have equal or different bandwidths.

FIG. 2 is a block diagram that schematically illustrates communication system 20, in accordance with another embodiment of the present invention. In some cases, a particular user requires a bandwidth higher than the bandwidth of a single user port 24. For this purpose, in the exemplary embodiment of FIG. 2, a number of user ports 24 are configured to form an aggregated user port 64. User ports 24 forming port 64 are configured as an Ethernet LAG group, referred to as an external LAG group 68. Thus, the bandwidth of port 64 is generally equal to the sum of bandwidths of the individual user ports 24 in external LAG group 68, in both upstream and downstream directions.

External LAG group 68 may comprise any number of user ports 24, belonging to any number of line cards 40. In general, a particular line card 40 may have some of its user ports 24 assigned to an external LAG group and other user ports 24 used individually or assigned to another external LAG group.

An external multiplexer (MUX) 72 performs the multiplexing and de-multiplexing (mapping) functions of the external LAG group. MUX 72 is external to network element 32 and is often located in user equipment 76, separate and distinct from network element 32. Aggregated port 64 is typically connected on the downstream side to a user node, such as a layer 2 or layer 3 switch (not shown).

In the downstream direction, MUX 72 multiplexes the downstream frames arriving over the user ports of external LAG group 68 to port 64. In the upstream direction, MUX 72 applies a suitable mapping function, such as a hashing function, to balance the load of upstream frames sent from port 64 over the different user ports of group 68. The mapping function uses attributes of the upstream frames, as explained above. Such load balancing helps to increase the upstream bandwidth of port 64 and/or its quality of service.

Single-Stage Downstream Mapping Method

In the system configuration of FIG. 2, consider the downstream frames arriving from network 28, via network processor 38, to multiplexer 44 and addressed to aggregated user port 64. In general, the processing of these frames in network element 32 comprises two consecutive mapping operations. First, each frame is mapped to one of user ports 24 in external LAG group 68. Determining the user port implicitly determines through which line card 40 the frame will pass. Then, the same frame is mapped to one of backplane traces 56 in the appropriate LAG group 58 that serves the selected line card. In some embodiments, in order to reduce the hardware complexity and/or computational complexity of network element 32, control module 60 performs a single combined mapping operation that combines the two mapping operations described above.

Thus, the combined mapping comprises a single hashing operation that determines, for each such downstream frame, both the backplane trace 56 over which the frame is to be sent to one of line cards 40, and the user port 24 to be used within external LAG group 68.

FIG. 3 is a block diagram that schematically illustrates elements of system 20, in accordance with an embodiment of the present invention. FIG. 3 is a simplified diagram, shown for the purpose of explaining the single-stage hashing method. As such, system elements unnecessary for explaining the method are omitted from the figure. The exemplary configuration of network element 32 in FIG. 3 comprises four line cards 40. Each line card is connected to MUX 44 using four backplane traces 56. The four backplane traces of each line card are configured as a LAG group. Each line card 40 serves a single user port 24. The four user ports are configured as an external LAG group to form aggregated user port 64, as explained above.

FIG. 4 is a flow chart that schematically illustrates a method for single-stage downstream hashing, in accordance with an embodiment of the present invention. Although the following description demonstrates the method using the simplified system configuration of FIG. 3 above, the method can also be applied in any other system configuration comprising two stages of link aggregation, such as the configurations discussed in the descriptions of FIGS. 1 and 2 above.

The method begins with control module 60 determining a hashing size parameter denoted Nbpow, at a hash size definition step 80. Nbpow is defined as Nbpow=Nextp·Nbpt, wherein Nextp denotes the number of user ports in external LAG group 68, and Nbpt denotes the number of backplane traces 56 in each LAG group 58. For simplicity, it is assumed that all line cards 40 having a user port in external LAG group 68 have the same number of backplane traces 56 connecting them to MUX 44. In the present example, Nextp=4, Nbpt=4, therefore Nbpow=16.

In different embodiments of network element 32, the value of hashing size Nbpow can be hard-wired in the concentrator design. Alternatively, the hashing size can be provided to the network element as part of its configuration, or determined by control module 60 responsively to the configuration setting of network element 32, as detected by module 60. In general, different line cards 40 may comprise different numbers of user ports belonging to external LAG group 68.

During normal operation, network element 32 receives downstream frames via network connection 48, at a frame reception step 82. For each downstream frame, control module 60 calculates a hashing key of the frame, at a hash key calculation step 84. As explained above, the hashing key is typically calculated by applying a suitable hashing function to the frame attributes of the downstream frame.

Module 60 performs an integer modulo-Nbpow division of the hashing key, at a mapping calculation step 86. Module 60 divides the hashing key of the downstream frame by Nbpow and retains the modulo, or the remainder of the division operation, as a mapping index. The mapping index can be represented as a binary number having a fixed number of bits, depending on the value of Nbpow. In the present example in which Nbpow=16, the mapping index, being a remainder after division by 16, is an integer number in the range 0 . . . 15, which can be represented using 4 bits.

Module 60 partitions the binary representation of the mapping index into two parts having N1 and N2 bits. Module 60 uses N1 bits as a user slot/port index, indicating over which user port 24 in external LAG group 68 the frame should be sent. The remaining N2 bits are used as a backplane trace index, indicating over which of the backplane traces of the relevant line card the frame should be sent. In the present example, the four bit mapping index is partitioned so that two bits encode the user port and two bits encode the backplane trace. In other words, N1=N2=2. The user port index and the backplane trace index jointly define the combined mapping operation for determining how the particular downstream frame is to be handled. Alternatively, module 60 can apply any other suitable method for determining the combined mapping responsively to the frame attributes.
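Steps 84-88 can be sketched as below for the Nextp=Nbpt=4 example, with the hashing of frame attributes abstracted into an integer key. Note that the bit partitioning shown assumes, as in the example, that Nbpow is a power of two:

```python
def single_stage_map(hashing_key: int, n1_bits: int = 2, n2_bits: int = 2):
    """Single-stage combined downstream mapping (illustrative sketch).

    With N1 = N2 = 2 this matches the example: Nbpow = 2**(N1+N2) = 16.
    The upper N1 bits of the mapping index give the user port within the
    external LAG group; the lower N2 bits give the backplane trace.
    """
    nbpow = 1 << (n1_bits + n2_bits)
    mapping_index = hashing_key % nbpow                 # modulo-Nbpow division (step 86)
    user_port_index = mapping_index >> n2_bits          # upper N1 bits
    trace_index = mapping_index & ((1 << n2_bits) - 1)  # lower N2 bits
    return user_port_index, trace_index

# A hashing key of 219 gives mapping index 219 mod 16 = 11 = 0b1011:
# user port 0b10 = 2, backplane trace 0b11 = 3.
assert single_stage_map(219) == (2, 3)
```

A single modulo and two bit operations thus replace the two separate per-frame mapping lookups.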

Having determined the combined mapping, module 60 controls MUX 44 so as to send the downstream frame, at a frame sending step 88. MUX 44 sends the frame to the appropriate line card over the appropriate backplane trace, responsively to the user port index and the backplane trace index, respectively. In some embodiments, MUX 44 sends the user port index to the line card along with the frame. The line card then selects the user port over which to send the frame to aggregated port 64 responsively to the user port index.

After sending the frame over the appropriate backplane trace and user port, the method returns to reception step 82 to process the next downstream frame.

Bandwidth Allocation Considerations

FIG. 5 is a block diagram that schematically illustrates bandwidth allocation aspects in communication system 20, in accordance with an embodiment of the present invention. In the exemplary configuration of FIG. 5, network element 32 comprises two line cards 100 and 101, each having a similar structure and functionality to line cards 40 of FIGS. 1-3 above. Network element 32 connects network 28 with a user node, in the present example comprising a layer-2 switch 102. Port 64 is served by three user ports 24 configured as an external LAG group 68. As explained above, frames sent between switch 102 and network 28 undergo two stages of link aggregation. In addition to port 64, network element 32 also supports an independent user port 104 on line card 101. Frames sent between port 104 and network 28 undergo link aggregation only once.

Each of user ports 24 and 104 is coupled to a respective queue 106, which queues the frames of this port. In some embodiments, each port may have separate queues for upstream and downstream frames. Each line card comprises a link aggregator 108, which performs the aggregation of backplane traces 56 of this line card into the respective LAG group 58 in both upstream and downstream directions.

The configuration shown in FIG. 5 is an exemplary configuration chosen for the sake of simplicity. The bandwidth allocation calculations given below can be used in conjunction with any other suitable configuration of system 20, such as configurations having different numbers and arrangements of line cards, user ports, aggregated user ports and backplane traces.

Consider a particular communication service provided by network 28 to aggregated port 64. The bandwidth allocation of this communication service is commonly specified in terms of CIR and PIR or CIR and EIR figures-of-merit, as explained above. In order to provide these bandwidths at the specified quality-of-service, it is desirable to allocate sufficient bandwidth in the different physical resources used for transferring the frames. These resources comprise, for example, user ports 24, backplane traces 56 and different queues 106 in line cards 100 and 101. In some embodiments, a high quality of service is achieved by defining suitable bandwidth margins for the different physical resources. The following description details an exemplary calculation for allocating sufficient bandwidth to the different resources of system 20. Alternatively, any other suitable bandwidth allocation method can be used.

In the upstream direction, let BWSERVICE denote the total bandwidth of the communication service in question. (BWSERVICE may refer to the CIR, EIR or PIR of the service, as applicable. Similarly, the different bandwidth calculations given throughout the description below may be used to allocate CIR, EIR or PIR to the different system resources.) Upstream frames belonging to this service originating from switch 102 are mapped and distributed by MUX 72 among the three user ports 24 of external LAG group 68. In theory, the mapping operation should distribute the frames evenly among the user ports, so that each port receives a bandwidth of BWSERVICE/3. In practice, however, the actual bandwidth distribution may deviate significantly from these values. In particular, deviations are likely to occur in embodiments in which the mapping operation is a data-dependent operation, such as hashing.

Therefore, in order to account for these deviations, each of the three user ports of external LAG group 68 is allocated a higher bandwidth given by:
BWELAGPORT=(BWSERVICE/3)·MARGINUPELAG  [1]
wherein MARGINUPELAG denotes an upstream bandwidth margin of each user port 24 in the external LAG group. (For example, MARGINUPELAG=1.5 corresponds to a 50% bandwidth margin.) In general, the bandwidth allocated to a particular user port 24 should also be allocated to the respective queue 106 that queues the frames of this port.

After being mapped and sent over one of user ports 24, the upstream frames are processed by one of line cards 100 and 101. As part of this processing, the frames are mapped again by link aggregator 108 in the line card, so as to distribute them among the four backplane traces 56 of LAG group 58. It is thus desirable to allocate sufficient bandwidth on each of backplane traces 56. Assuming an optimal (uniform) distribution among the backplane traces, the bandwidth received by each backplane trace can be written as BWELAGPORT*#PORTSELAG/#BPT, wherein #PORTSELAG denotes the number of user ports of external LAG group 68 that are processed by the particular line card, and #BPT denotes the number of backplane traces 56 in the LAG group 58 of this line card.

Since, as explained above, the actual distribution achieved by the mapping operation (in this case, the mapping between backplane traces performed by aggregator 108) often deviates from uniform distribution, a suitable margin denoted MARGINUPBPLAG is added. Thus, the bandwidth allocation of each backplane trace can be written as BWELAGPORT*#PORTSELAG*MARGINUPBPLAG/#BPT, wherein #PORTSELAG denotes the number of user ports in the particular line card. In some cases, however, this bandwidth allocation is greater than BWSERVICE, the total bandwidth of the communication service. Clearly there is no need to assign to any single backplane trace a bandwidth that is greater than the total service bandwidth. Therefore, the bandwidth allocated to each backplane trace can be written as:

BWBPT=Min{BWELAGPORT·#PORTSELAG·MARGINUPBPLAG/#BPT, BWSERVICE}  [2]

Note that according to equation [1], BWELAGPORT already contains MARGINUPELAG, the bandwidth margin of the external LAG. #PORTSELAG refers to the specific line card in question.

The total bandwidth allocated on LAG group 58 is thus given by:

BWBPLAG=Min{BWELAGPORT·#PORTSELAG·MARGINUPBPLAG/#BPT, BWSERVICE}·#BPT  [3]

In cases where external LAG is not used, such as for upstream frames originating from independent port 104, the total bandwidth allocated on LAG group 58 is given by the simpler expression:

BWBPLAG=Min{BWSERVICE·MARGINUPBPLAG/#BPT, BWSERVICE}·#BPT  [4]

In the downstream direction, as explained above, downstream frames undergo only a single mapping operation. When external LAG is used, using a similar calculation, the total bandwidth allocated to LAG group 58 for downstream frames of a particular service can be written as:

BWBPLAG=Min{BWELAGPORT·#PORTSELAG/#BPT, BWSERVICE}·#BPT  [5]
wherein a suitable downstream bandwidth margin is assumed to be already included in BWELAGPORT. When external LAG is not used, such as for downstream frames addressed to independent port 104, the total bandwidth allocation can be written as:

BWBPLAG=Min{BWSERVICE·MARGINDNBPLAG/#BPT, BWSERVICE}·#BPT  [6]
wherein MARGINDNBPLAG denotes a downstream bandwidth margin for the backplane traces, which may be the same as or different from MARGINUPBPLAG.
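Equations [1]-[3] can be checked numerically. In the sketch below, the concrete figures (a 1 Gb/s service, 50% margins, one external-LAG port per line card with four backplane traces) are illustrative, and the "3" of equation [1] is generalized to the number of ports in the external LAG group:

```python
def bw_elag_port(bw_service, n_ext_ports, margin_up_elag):
    # Equation [1]: per-user-port allocation within the external LAG group
    return (bw_service / n_ext_ports) * margin_up_elag

def bw_bplag(bw_elagport, ports_elag, margin, n_bpt, bw_service):
    # Equations [2]-[3]: per-trace allocation, capped at the total service
    # bandwidth, then summed over the #BPT traces of the LAG group
    bw_bpt = min(bw_elagport * ports_elag * margin / n_bpt, bw_service)
    return bw_bpt * n_bpt

# 1 Gb/s (1000 Mb/s) service over 3 external-LAG user ports, 50% margins,
# a line card serving 1 of those ports through 4 backplane traces:
port_bw = bw_elag_port(1000.0, 3, 1.5)           # ~500 Mb/s per user port
total_bw = bw_bplag(port_bw, 1, 1.5, 4, 1000.0)  # ~750 Mb/s on the LAG group
```

The Min{} cap matters when the margins would otherwise drive a single trace's allocation above BWSERVICE, the total bandwidth of the service.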

Although the methods and systems described herein mainly address link aggregation of backplane traces and external link aggregation in Ethernet communication systems, the principles of the present invention can also be used in any system configuration in which a user interface module connects one or more user ports to a communication network via two or more parallel physical links. In particular, the principles of the present invention can also be used in other applications involving the selection of physical links in two successive link aggregation stages using a single computation.

It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
