WO2005067532A2 - Managing processing utilization in a network node


Info

Publication number
WO2005067532A2
WO2005067532A2 (PCT/US2005/001284)
Authority
WO
WIPO (PCT)
Prior art keywords
packet
cpu
learning
packets
reach
Prior art date
Application number
PCT/US2005/001284
Other languages
French (fr)
Other versions
WO2005067532A3 (en)
Inventor
Sandeep Lodha
Thirumalpathy Balakrishnan
Original Assignee
Riverstone Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Riverstone Networks, Inc. filed Critical Riverstone Networks, Inc.
Priority to JP2006549635A priority Critical patent/JP2007525883A/en
Priority to CN2005800064873A priority patent/CN101351995B/en
Priority to EP05705737A priority patent/EP1721411A4/en
Publication of WO2005067532A2 publication Critical patent/WO2005067532A2/en
Publication of WO2005067532A3 publication Critical patent/WO2005067532A3/en

Classifications

    • H04L Transmission of digital information, e.g. telegraphic communication
    • H04L45/54 Organization of routing tables
    • H04L45/38 Flow based routing
    • H04L45/745 Address table lookup; Address filtering
    • H04L47/2483 Traffic characterised by specific attributes, e.g. priority or QoS, involving identification of individual flows
    • H04L49/254 Centralised controller, i.e. arbitration or scheduling
    • H04L49/35 Switches specially adapted for specific applications
    • H04L49/30 Peripheral units, e.g. input or output ports
    • H04L49/3009 Header conversion, routing tables or routing tags

Definitions

  • the invention relates to packet-based communications networks, and more particularly, to techniques for managing the utilization of processing resources in a network node such as a switch or router.
  • Packet-based network nodes, such as switches and routers, generate a database of forwarding information that is used to forward incoming packetized traffic.
  • the forwarding information is generated through software-based protocols that are executed by a central processing unit (CPU).
  • the forwarding information is often programmed into hardware-based forwarding tables.
  • the hardware-based forwarding tables can be rapidly searched to provide forwarding decisions without ever having to utilize the resources of the CPU.
  • When forwarding information for a received flow of packets does not exist in the hardware-based forwarding table, the packets from the flow are sent to the CPU for processing until forwarding information can be learned and a forwarding table entry can be programmed into the hardware-based forwarding table.
  • the CPU of a network node has a finite processing capacity and as more packets are sent to the CPU, more of the finite processing capacity is consumed by processing the received packets. If the load on the CPU is too great, the response time of the CPU will slow and some packets may be dropped.
  • Many of the most advanced switches and routers utilize a chassis-based distributed architecture in which separate linecards are dedicated to different functions. For example, a control module linecard is dedicated to central management and control operations, port interface linecards are dedicated to sending and receiving network traffic and performing hardware-based forwarding, and a switch fabric linecard is dedicated to providing data paths between the various linecards.
  • In a distributed architecture, the control module includes a main CPU that is responsible for generating and managing the forwarding information for the entire network node and for programming the hardware-based forwarding tables of the port interfaces.
  • the wide set of responsibilities of the control module makes the finite processing capacity of the main CPU a very valuable resource.
  • a technique for managing the utilization of processing resources involves filtering packets that are sent to a CPU for learning before allowing the packets to reach the CPU.
  • the filtering involves determining if related packets have already been allowed to reach the CPU for learning and using the knowledge about related packets to determine if a current packet should be allowed to reach the CPU.
  • the processing resources of the CPU are conserved by allowing only one packet per flow to reach the CPU for learning.
  • the one packet is used by the CPU to generate the necessary forwarding information and to initiate programming of the hardware-based forwarding table so that subsequent packets of the same flow can be forwarded directly from the hardware-based forwarding engine. Because only one packet per flow is allowed to reach the CPU for learning, the processing resources of the CPU are not consumed by learning the same forwarding information for multiple packets of the same flow.
  • Fig. 1 depicts a network node that includes a CPU, a hardware-based forwarding table, and a learning filter.
  • Fig. 2 depicts an embodiment of the learning filter from Fig. 1.
  • Fig. 3 depicts a process flow diagram of a technique for managing the utilization of processing resources.
  • Fig. 4 depicts an embodiment of a network node with a distributed architecture that is configured to filter packets that are sent for learning.
  • Fig. 5 depicts another embodiment of a network node with a distributed architecture.
  • Fig. 6 depicts a process flow diagram of a method for managing the utilization of processing resources of a CPU.
  • Fig. 1 depicts a network node 100 that includes a central processing unit (CPU) 102, a hardware-based forwarding engine 104, and a learning filter 106.
  • the network node handles traffic in discrete segments, often referred to as datagrams.
  • the network node is an Ethernet switch/router that forwards traffic within the network node using Layer 2, Layer 3, and/or Layer 4 header information, where the "Layers" are defined in the Open Systems Interconnection (OSI) model by the International Organization for Standardization (ISO).
  • the network node may include port interfaces that support other network protocols such as asynchronous transfer mode (ATM), synchronous optical network (SONET), and Frame Relay.
  • the CPU 102 of the network node 100 runs an operating system and supports software protocols that are necessary to forward network traffic.
  • the CPU may be embodied as a multifunction processor and/or an application-specific processor. Examples of processors include the PowerPC™ family of processors by IBM and the x86 family of processors by Intel. Examples of operating systems that may be run by the CPU include NetBSD, Linux, and VxWorks. Although not shown, the CPU may be supported by other hardware (e.g., memory and application-specific integrated circuits (ASICs)).
  • Among the protocols run by the CPU 102 are the protocols involved with generating forwarding information. These protocols, referred to herein as software-based learning protocols 110, include Layer 2 learning protocols and Layer 3 learning protocols.
  • the Layer 2 protocol that is used to switch traffic is Ethernet and Layer 2 learning involves associating a destination media access control (MAC) address with an output port of the network node.
  • Layer 2 learning may also involve associating virtual local area network (VLAN) identifiers (IDs) with destination MAC addresses and/or output ports.
  • the Layer 3 protocol that is used to route traffic is Internet Protocol (IP)-based (including IP and IPX) and Layer 3 learning involves associating a destination IP address with a next-hop IP address.
  • Examples of common Layer 3 protocols that are run by the CPU 102 include the open shortest path first (OSPF) protocol, the border gateway protocol (BGP), the intermediate system-to-intermediate system (IS-IS) protocol, and multiprotocol label switching (MPLS).
  • Traffic is typically communicated between packet-based network nodes in groups of related packets. The groups of related packets are often referred to as a "flow." Packets of a flow have some common information.
  • common Layer 2 information may include any combination of a destination MAC address, a source MAC address, a VLAN ID, and/or a port of entry.
  • Common Layer 3 information may include any combination of a destination IP address, a source IP address, type of service (TOS), a destination port number, and/or a source port number.
  • the hardware-based forwarding engine 104 of Fig. 1 is responsible for making hardware-based forwarding decisions for incoming traffic.
  • the hardware- based forwarding engine includes a hardware-based forwarding table 112 that is programmed with forwarding table entries.
  • the forwarding table entries associate incoming packet information with output information.
  • hardware-based forwarding tables are typically embodied in random access memory (RAM) and/or content addressable memory (CAM) that can be rapidly accessed and searched.
  • Hardware-based forwarding decisions can only be made on incoming packets if the respective hardware-based forwarding table contains forwarding information that corresponds to the incoming packets.
  • In operation, the hardware-based forwarding engine compares header information from received packets to the forwarding table entries to look for a table entry match. If the hardware-based forwarding engine is not able to make a forwarding decision on the incoming packets, then the hardware-based forwarding table needs to be programmed with a forwarding table entry that corresponds to the incoming packets. The process of obtaining forwarding information is referred to herein as learning.
  • the hardware-based forwarding table may contain forwarding information that corresponds to the incoming packets although for some reason, the forwarding information is inactive (e.g., cannot be used to make a forwarding decision).
  • packets that are sent to the CPU 102 for learning are filtered before being allowed to reach the CPU.
  • the filtering involves determining if related packets have already been allowed to reach the CPU and using the knowledge about related packets to determine if a current packet should be allowed to reach the CPU.
  • the resources of the CPU are conserved by allowing only one packet per flow to reach the CPU for learning.
  • the one packet is used by the CPU to generate the necessary forwarding information and to initiate programming of the hardware-based forwarding table 112 so that subsequent packets of the same flow can be forwarded directly from the hardware-based forwarding engine 104.
  • the filtering of packets that are sent to the CPU 102 for learning is performed by the learning filter 106.
  • the learning filter receives all of the packets that are sent from the hardware-based forwarding engine 104 to the CPU for learning and determines which of the received packets are allowed to reach the CPU. Only a subset of the originally sent packets is allowed to reach the CPU as a result of the filtering.
  • the learning filter may use a variety of techniques to determine which of the received packets are allowed to reach the CPU. Some examples of the learning filter and filtering techniques are described below.
  • the learning filter is an ASIC chip that is located in a data path between the CPU and the hardware-based forwarding engine.
  • Fig. 2 depicts an embodiment of the learning filter 106 from Fig. 1.
  • the learning filter includes a hasher 116, a per-flow state machine 118, and an output controller 120.
  • the learning filter receives packets that are sent by the hardware- based forwarding engine 104 to the CPU 102 for learning.
  • the hasher obtains header information from the received packets and hashes certain header information to generate hash values that identify the flows to which the packets belong.
  • Layer 2 packets are hashed on a combination of the destination MAC address, the source MAC address, the VLAN ID, and the port of entry while Layer 3 packets are hashed on the destination IP address, source IP address, TOS, destination port number, and source port number.
  • the hash value generated by the hasher is provided to the per-flow state machine.
  • the per-flow state machine maintains a state table 122 that indicates a state for each identified flow, where each flow is identified by a hash value.
  • the current state of a flow is provided to the output controller.
  • the output controller determines whether or not a packet is allowed to reach the CPU based on the current state. As a result of the filtering that takes place, only a subset of the packets that were received by the learning filter are allowed to reach the CPU.
  • the per-flow state machine 118 maintains two states for each flow, where the states are identified as state 1 (S1) and state 2 (S2). State 1 indicates that no packets from the corresponding flow have been allowed to reach the CPU 102 and state 2 indicates that a packet of the corresponding flow has been allowed to reach the CPU.
  • the state of a flow is initially set to state 1 and a packet is allowed to reach the CPU when the state is state 1. Once a packet is allowed to reach the CPU, the state is changed to state 2. While the state of a flow is set to state 2, no more packets from the flow are allowed to reach the CPU.
  • the state of a flow can be reset to state 1 according to a pre-established algorithm to ensure that the forwarding information of flows is periodically updated.
  • the state machine may be configured to reset to state 1 after the forwarding table is programmed with the corresponding table entry or after some fixed period of time.
  • the result of the learning filter logic of Fig. 2 is that only one packet per flow is allowed to reach the CPU for processing. This can greatly reduce the load on the CPU without inhibiting the learning process.
  • Although one embodiment of the learning filter and its filtering logic is described, other filtering techniques may be used to reduce the number of packets that are allowed to reach the CPU for learning.
  • Fig. 3 depicts a process flow diagram of a technique for managing the utilization of processing resources of a CPU.
  • a packet is received at a network node.
  • At decision point 202, it is determined whether or not learning is required. If it is determined that learning is not required, then at block 204 the packet is forwarded by the hardware-based forwarding engine using forwarding information that exists in its hardware-based forwarding table. If it is determined that learning is required, then at block 206, the flow to which the packet belongs is identified. For example, the flow is identified by hashing certain fields of the packet header. After the flow is identified, at decision point 208, it is determined if a packet from the identified flow has already been sent to the CPU for learning. For example, a state machine is consulted to determine whether a packet from the identified flow has already been sent to the CPU for learning.
  • Fig. 4 depicts an embodiment of a network node 130 with a distributed architecture that is configured to filter packets that are sent for learning.
  • the distributed architecture of the network node includes a control module linecard 132, a switch fabric linecard 134, and two port interface linecards 136 (port interfaces A and B).
  • a single learning filter 106 is located at the control module to filter packets received from all of the port interfaces.
  • the learning filter depicted in Fig. 4 performs the same filtering functions as the learning filter that is described above with reference to Figs. 1 and 2.
  • the control module 132 includes a CPU 102 (identified as the "main CPU") and the learning filter 106.
  • the control module supports various functions such as network management functions and protocol implementation functions.
  • the control module also includes memory such as electrically erasable programmable read-only memory (EEPROM) or flash ROM for storing operational code and dynamic random access memory (DRAM) for buffering traffic and storing data structures, such as forwarding information.
  • the main CPU may include a multifunction processor and/or an application-specific processor as described above.
  • the main CPU supports the software-based learning as indicated by the software-based learning protocols functional block 110.
  • the software-based learning includes generating Layer 2 and Layer 3 forwarding information as is well-known in the field.
  • the switch fabric 134 provides datapaths between the control module 132 and the port interfaces 136 (e.g., datapaths between the control module and the port interfaces and datapaths between the port interfaces).
  • the switch fabric may utilize, for example, shared memory, a shared bus, or crosspoint matrices.
  • the port interfaces 136 include a port interface CPU 138, a hardware-based forwarding engine 104, and input/output ports 140. In general, functions performed by the port interfaces include receiving traffic into the network node, buffering traffic, storing forwarding information, protocol processing, making forwarding decisions, and transmitting traffic from the network node 130. In the embodiment of Fig. 4, the port interface CPU of each port interface runs its own operating system.
  • the port interface CPU within each port interface linecard may include a multifunction processor (e.g., an IBM PowerPC® processor) and/or an application-specific processor. Operational code is typically stored in non-volatile memory (not shown) such as EEPROM or flash ROM while traffic is typically buffered in volatile memory (not shown) such as RAM.
  • the hardware-based forwarding engines 104 depicted in Fig. 4 perform the same functions as the hardware-based forwarding engine described with reference to Fig. 1.
  • One task performed by the hardware-based forwarding engine is determining if incoming packets need to be learned so that forwarding decisions can be made directly by the hardware-based forwarding engines. Packets that need to be learned are sent to the control module 132 through the switch fabric 134.
  • the hardware-based forwarding engines 104 of the port interfaces 136 determine if received packets need learning. If received packets need learning, then the packets are sent across the switch fabric 134 to the control module 132. At the control module, the packets are first processed by the learning filter 106.
  • the learning filter acts as a gateway that determines whether or not the packets reach the main CPU 102 for learning. Because the learning filter is located on the control module, it can receive packets from all of the different port interfaces and therefore functions as a central filtering point. This enables all of the filtering to be accomplished with a single learning filter ASIC. Additionally, this enables the filtering to be accomplished without requiring changes to the main CPU or the hardware-based forwarding engines.
  • Fig. 5 depicts another embodiment of a network node 150 with a distributed architecture.
  • the embodiment of Fig. 5 is similar to the embodiment of Fig. 4 except that the filtering function is performed in a distributed manner at each port interface 136.
  • each port interface includes an interface-specific learning filter 106A and 106B that filters only packets from its corresponding port interface.
  • the interface-specific learning filters perform the same basic functions as the learning filter described with reference to Figs. 1 and 2. Packets that pass the filtering are sent from the respective port interfaces to the main CPU 102 of the control module 132 through the switch fabric.
  • Fig. 6 depicts a process flow diagram of a method for managing the utilization of processing resources of a CPU.
  • a packet is received.
  • it is determined if forwarding information related to the packet needs to be learned to forward the packet.
  • a decision is made whether to subject the packet to learning. The decision is based on whether any other related packets have already been subjected to learning.
  • the filtering function may be incorporated into the CPU such that all packets sent for learning are received by the CPU but only selected packets are subjected to learning processing.
  • the number of packets allowed to reach the CPU is reduced from the total number of packets of a flow that are initially sent to the CPU for learning.
  • sending a packet within the network node may involve sending only header information of the packet.
  • sending a packet to the CPU for learning may involve sending only header information of the packet to the CPU.
  • the first packet of a flow is allowed to reach the CPU for learning, in other embodiments it is possible that a packet other than the first packet is allowed to reach the CPU.
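The hasher, per-flow state machine, and output controller described above can be sketched in software as follows. This is an illustrative model only: the patent describes a hardware (ASIC) filter, the exact hash function is not specified (SHA-256 truncated to 16 bits is a stand-in), and the class and method names are invented for the sketch.

```python
import hashlib

S1, S2 = 1, 2  # state 1: no packet of the flow sent to the CPU yet; state 2: one sent


class LearningFilter:
    """Toy model of the learning filter of Fig. 2: hash header fields to a
    flow ID, then allow only one packet per flow through to the CPU."""

    def __init__(self):
        self.state = {}  # flow ID -> state; a missing entry means state 1

    @staticmethod
    def flow_id(dst_mac, src_mac, vlan_id, in_port):
        # Layer 2 flows are hashed on these four fields per the text;
        # the stand-in hash is SHA-256 truncated to 16 bits.
        key = f"{dst_mac}|{src_mac}|{vlan_id}|{in_port}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:2], "big")

    def allow(self, flow_id):
        if self.state.get(flow_id, S1) == S1:
            self.state[flow_id] = S2   # this packet goes to the CPU
            return True
        return False                   # further packets of the flow are filtered

    def reset(self, flow_id):
        # E.g. after the forwarding table is programmed, or after a fixed time.
        self.state[flow_id] = S1


f = LearningFilter()
fid = f.flow_id("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66", 10, 4)
print(f.allow(fid))  # True: first packet of the flow reaches the CPU
print(f.allow(fid))  # False: filtered while learning is in progress
f.reset(fid)
print(f.allow(fid))  # True again after the state is reset
```

One design point worth noting: because the table is keyed by a truncated hash, two distinct flows can collide on the same flow ID, in which case the second flow's learning packet would be filtered until the state resets; a real implementation must size the hash accordingly.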

Abstract

A technique for managing the utilization of processing resources involves filtering packets that are sent to a CPU for learning before allowing the packets to reach the CPU. The filtering involves determining if related packets have already been allowed to reach the CPU for learning and using the knowledge about related packets to determine if a current packet should be allowed to reach the CPU. In one embodiment, the processing resources of the CPU are conserved by allowing only one packet per flow to reach the CPU for learning. The one packet is used by the CPU (102) to generate the necessary forwarding information and to initiate programming of the hardware-based forwarding table (112) so that subsequent packets of the same flow can be forwarded directly from the hardware-based forwarding engine (104).

Description

MANAGING PROCESSING UTILIZATION IN A NETWORK NODE
CROSS-REFERENCE TO RELATED APPLICATION
[001] This application is entitled to the benefit of provisional U.S. Patent Application Serial Number 60/536,469, filed 14 January 2004.
FIELD OF THE INVENTION
[002] The invention relates to packet-based communications networks, and more particularly, to techniques for managing the utilization of processing resources in a network node such as a switch or router.
BACKGROUND OF THE INVENTION
[003] Packet-based network nodes, such as switches and routers, generate a database of forwarding information that is used to forward incoming packetized traffic. The forwarding information is generated through software-based protocols that are executed by a central processing unit (CPU). In order to increase the speed and throughput of switches and routers, the forwarding information is often programmed into hardware-based forwarding tables. The hardware-based forwarding tables can be rapidly searched to provide forwarding decisions without ever having to utilize the resources of the CPU. When forwarding information for a received flow of packets does not exist in the hardware-based forwarding table, the packets from the flow are sent to the CPU for processing until forwarding information can be learned and a forwarding table entry can be programmed into the hardware-based forwarding table. The CPU of a network node has a finite processing capacity and as more packets are sent to the CPU, more of the finite processing capacity is consumed by processing the received packets. If the load on the CPU is too great, the response time of the CPU will slow and some packets may be dropped.
[004] Many of the most advanced switches and routers utilize a chassis-based distributed architecture in which separate linecards are dedicated to different functions. For example, a control module linecard is dedicated to central management and control operations, port interface linecards are dedicated to sending and receiving network traffic and performing hardware-based forwarding, and a switch fabric linecard is dedicated to providing data paths between the various linecards. In a distributed architecture, the control module includes a main CPU that is responsible for generating and managing the forwarding information for the entire network node and for programming the hardware-based forwarding tables of the port interfaces.
The wide set of responsibilities of the control module makes the finite processing capacity of the main CPU a very valuable resource.
[005] In view of the foregoing, what is needed is a technique for efficiently managing the utilization of processing resources in a packet-based network node.
SUMMARY OF THE INVENTION
[006] A technique for managing the utilization of processing resources involves filtering packets that are sent to a CPU for learning before allowing the packets to reach the CPU. The filtering involves determining if related packets have already been allowed to reach the CPU for learning and using the knowledge about related packets to determine if a current packet should be allowed to reach the CPU. In one embodiment, the processing resources of the CPU are conserved by allowing only one packet per flow to reach the CPU for learning. The one packet is used by the CPU to generate the necessary forwarding information and to initiate programming of the hardware-based forwarding table so that subsequent packets of the same flow can be forwarded directly from the hardware-based forwarding engine. Because only one packet per flow is allowed to reach the CPU for learning, the processing resources of the CPU are not consumed by learning the same forwarding information for multiple packets of the same flow.
[007] Other aspects and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[008] Fig. 1 depicts a network node that includes a CPU, a hardware-based forwarding table, and a learning filter.
[009] Fig. 2 depicts an embodiment of the learning filter from Fig. 1.
[0010] Fig. 3 depicts a process flow diagram of a technique for managing the utilization of processing resources.
[0011] Fig. 4 depicts an embodiment of a network node with a distributed architecture that is configured to filter packets that are sent for learning.
[0012] Fig. 5 depicts another embodiment of a network node with a distributed architecture.
[0013] Fig. 6 depicts a process flow diagram of a method for managing the utilization of processing resources of a CPU.
[0014] Throughout the description, similar reference numbers may be used to identify similar elements.
DETAILED DESCRIPTION
[0015] Fig. 1 depicts a network node 100 that includes a central processing unit (CPU) 102, a hardware-based forwarding engine 104, and a learning filter 106. The network node handles traffic in discrete segments, often referred to as datagrams. In an embodiment, the network node is an Ethernet switch/router that forwards traffic within the network node using Layer 2, Layer 3, and/or Layer 4 header information, where the "Layers" are defined in the Open Systems Interconnection (OSI) model by the International Organization for Standardization (ISO). The network node may include port interfaces that support other network protocols such as asynchronous transfer mode (ATM), synchronous optical network (SONET), and Frame Relay. Although an Ethernet-based switch/router is described, the disclosed techniques can be applied to network nodes that utilize other protocols to transfer traffic.
[0016] The CPU 102 of the network node 100 runs an operating system and supports software protocols that are necessary to forward network traffic. The CPU may be embodied as a multifunction processor and/or an application-specific processor. Examples of processors include the PowerPC™ family of processors by IBM and the x86 family of processors by Intel. Examples of operating systems that may be run by the CPU include NetBSD, Linux, and VxWorks. Although not shown, the CPU may be supported by other hardware (e.g., memory and application-specific integrated circuits (ASICs)).
[0017] Among the protocols run by the CPU 102 are the protocols involved with generating forwarding information. These protocols, referred to herein as software-based learning protocols 110, include Layer 2 learning protocols and Layer 3 learning protocols. In the embodiment of Fig. 1, the Layer 2 protocol that is used to switch traffic is Ethernet and Layer 2 learning involves associating a destination media access control (MAC) address with an output port of the network node. A destination MAC address is associated with an output port of the network node by learning the input port and source MAC address of received packets. As is well-known in the field, the correct output port for a destination MAC address can be learned by "flooding" packets with the destination MAC address to be learned onto all of the relevant output ports and then watching to see the port on which a corresponding packet is received. Layer 2 learning may also involve associating virtual local area network (VLAN) identifiers (IDs) with destination MAC addresses and/or output ports.
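As a rough illustration of the Layer 2 source learning just described, assuming a simple (source MAC, VLAN) → port table; the function names and table layout are invented for the sketch and are not from the patent:

```python
def learn_source(mac_table, src_mac, vlan_id, in_port):
    # Learning side: remember the port on which this source MAC arrived.
    mac_table[(src_mac, vlan_id)] = in_port


def forward_ports(mac_table, dst_mac, vlan_id, in_port, all_ports):
    # Forwarding side: a known destination goes out one port; an unknown
    # destination is flooded to every relevant port except the port of entry.
    port = mac_table.get((dst_mac, vlan_id))
    if port is not None:
        return [port]
    return [p for p in all_ports if p != in_port]


table = {}
learn_source(table, "aa:aa:aa:aa:aa:aa", 10, 1)   # host A seen on port 1
print(forward_ports(table, "aa:aa:aa:aa:aa:aa", 10, 3, [1, 2, 3]))  # [1]
print(forward_ports(table, "bb:bb:bb:bb:bb:bb", 10, 3, [1, 2, 3]))  # [1, 2]
```

The second lookup floods because host B has not yet been learned; once a packet sourced from B arrives, `learn_source` pins it to a single port.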
[0018] In the embodiment of Fig. 1, the Layer 3 protocol that is used to route traffic is Internet Protocol (IP)-based (including IP and IPX) and Layer 3 learning involves associating a destination IP address with a next-hop IP address. Examples of common Layer 3 protocols that are run by the CPU 102 include the open shortest path first (OSPF) protocol, the border gateway protocol (BGP), the intermediate system-to-intermediate system (IS-IS) protocol, and multiprotocol label switching (MPLS).
[0019] Traffic is typically communicated between packet-based network nodes in groups of related packets. The groups of related packets are often referred to as a "flow." Packets of a flow have some common information. For example, common Layer 2 information may include any combination of a destination MAC address, a source MAC address, a VLAN ID, and/or a port of entry. Common Layer 3 information may include any combination of a destination IP address, a source IP address, type of service (TOS), a destination port number, and/or a source port number.
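For illustration, the common Layer 3 fields listed above can be treated as a five-tuple flow key: two packets that share the key belong to the same flow. The dictionary field names are assumptions made for the sketch:

```python
def l3_flow_key(pkt):
    # Five-tuple from the text: destination IP, source IP, type of service,
    # destination port, source port.
    return (pkt["dst_ip"], pkt["src_ip"], pkt["tos"],
            pkt["dst_port"], pkt["src_port"])


a = {"dst_ip": "10.0.0.2", "src_ip": "10.0.0.1", "tos": 0,
     "dst_port": 80, "src_port": 40001, "payload": b"GET /"}
b = dict(a, payload=b"...continued")  # same headers, different payload

print(l3_flow_key(a) == l3_flow_key(b))  # True: same flow
```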
[0020] The hardware-based forwarding engine 104 of Fig. 1 is responsible for making hardware-based forwarding decisions for incoming traffic. The hardware-based forwarding engine includes a hardware-based forwarding table 112 that is programmed with forwarding table entries. The forwarding table entries associate incoming packet information with output information. As is known in the field, hardware-based forwarding tables are typically embodied in random access memory (RAM) and/or content addressable memory (CAM) that can be rapidly accessed and searched. Hardware-based forwarding decisions can only be made on incoming packets if the respective hardware-based forwarding table contains forwarding information that corresponds to the incoming packets. In operation, the hardware-based forwarding engine compares header information from received packets to the forwarding table entries to look for a table entry match. If the hardware-based forwarding engine is not able to make a forwarding decision on the incoming packets, then the hardware-based forwarding table needs to be programmed with a forwarding table entry that corresponds to the incoming packets. The process of obtaining forwarding information is referred to herein as learning. In some embodiments, the hardware-based forwarding table may contain forwarding information that corresponds to the incoming packets although for some reason, the forwarding information is inactive (e.g., cannot be used to make a forwarding decision). When forwarding information is inactive, no forwarding decision can be made and the related packets are sent to the CPU for learning.
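The lookup logic above can be modeled in software as follows. This is an illustrative sketch only (the real lookup runs in RAM/CAM hardware; the table layout here is an assumption): a lookup miss, or a hit on an inactive entry, means no forwarding decision is possible and the packet must be sent to the CPU for learning.

```python
# Illustrative software model of the hardware-based forwarding lookup.
# A key maps to output information; None signals "send to CPU for learning".

def forwarding_decision(table, key):
    entry = table.get(key)
    if entry is None or not entry["active"]:
        return None  # miss or inactive entry -> no hardware decision
    return entry["out_port"]

table = {
    ("bb:bb:bb:bb:bb:bb", 100): {"out_port": 7, "active": True},
    ("cc:cc:cc:cc:cc:cc", 100): {"out_port": 9, "active": False},
}
hit = forwarding_decision(table, ("bb:bb:bb:bb:bb:bb", 100))       # forwards
inactive = forwarding_decision(table, ("cc:cc:cc:cc:cc:cc", 100))  # to CPU
miss = forwarding_decision(table, ("dd:dd:dd:dd:dd:dd", 100))      # to CPU
```

Note that the inactive-entry case behaves exactly like a miss, matching the text: when forwarding information is inactive, the related packets are sent to the CPU.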
[0021] Since traffic is typically communicated in a flow of packets, if the hardware-based forwarding engine 104 is not able to make forwarding decisions on the incoming packets, then all of the packets of the flow are sent to the CPU 102 for forwarding until the CPU communicates forwarding information to the hardware-based forwarding engine and the forwarding table is programmed with the necessary forwarding information. The more packets that are sent to the CPU for processing, the longer the processing delay can be. If the processing delay is too long, packets may be dropped. Delays in processing and dropped packets negatively affect the performance of the network node.
[0022] In accordance with an embodiment of the invention, packets that are sent to the CPU 102 for learning are filtered before being allowed to reach the CPU. The filtering involves determining if related packets have already been allowed to reach the CPU and using the knowledge about related packets to determine if a current packet should be allowed to reach the CPU. In one embodiment, the resources of the CPU are conserved by allowing only one packet per flow to reach the CPU for learning. The one packet is used by the CPU to generate the necessary forwarding information and to initiate programming of the hardware-based forwarding table 112 so that subsequent packets of the same flow can be forwarded directly from the hardware-based forwarding engine 104. Because only one packet per flow is allowed to reach the CPU for learning, the processing resources of the CPU are not consumed by learning the same forwarding information for multiple packets of the same flow. [0023] In the embodiment of Fig. 1, the filtering of packets that are sent to the CPU 102 for learning is performed by the learning filter 106. The learning filter receives all of the packets that are sent from the hardware-based forwarding engine 104 to the CPU for learning and determines which of the received packets are allowed to reach the CPU. Only a subset of the originally sent packets is allowed to reach the CPU as a result of the filtering. The learning filter may use a variety of techniques to determine which of the received packets are allowed to reach the CPU. Some examples of the learning filter and filtering techniques are described below. In the embodiment of Fig. 1, the learning filter is an ASIC chip that is located in a data path between the CPU and the hardware-based forwarding engine. [0024] Fig. 2 depicts an embodiment of the learning filter 106 from Fig. 1. The learning filter includes a hasher 116, a per-flow state machine 118, and an output controller 120.
The learning filter receives packets that are sent by the hardware-based forwarding engine 104 to the CPU 102 for learning. The hasher obtains header information from the received packets and hashes certain header information to generate hash values that identify the flows to which the packets belong. For example, Layer 2 packets are hashed on a combination of the destination MAC address, the source MAC address, the VLAN ID, and the port of entry while Layer 3 packets are hashed on the destination IP address, source IP address, TOS, destination port number, and source port number. Although some examples of hashing fields are described, other fields or combinations of fields are possible. The hash value generated by the hasher is provided to the per-flow state machine. The per-flow state machine maintains a state table 122 that indicates a state for each identified flow, where each flow is identified by a hash value. The current state of a flow is provided to the output controller. The output controller determines whether or not a packet is allowed to reach the CPU based on the current state. As a result of the filtering that takes place, only a subset of the packets that were received by the learning filter is allowed to reach the CPU.
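The hasher's role can be sketched as follows. This is an assumption-laden illustration: `zlib.crc32` and a 4096-slot table stand in for whatever hash function and state-table size the ASIC actually implements; the point is only that packets of the same flow always map to the same state-table slot.

```python
# Illustrative sketch of the hasher: combine the Layer 3 header fields
# named in the text and hash them down to a flow identifier.
# zlib.crc32 and TABLE_SIZE are assumptions, not the patent's design.
import zlib

TABLE_SIZE = 4096  # hypothetical number of per-flow state-table slots

def flow_hash(dst_ip, src_ip, tos, dst_port, src_port):
    key = f"{dst_ip}|{src_ip}|{tos}|{dst_port}|{src_port}".encode()
    return zlib.crc32(key) % TABLE_SIZE  # index into the state table 122

h1 = flow_hash("10.0.0.2", "10.0.0.1", 0, 80, 12345)
h2 = flow_hash("10.0.0.2", "10.0.0.1", 0, 80, 12345)
# Identical header fields -> identical hash -> same state-table entry.
```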
[0025] In the embodiment of Fig. 2, the per-flow state machine 118 maintains two states for each flow, where the states are identified as state 1 (S1) and state 2 (S2). State 1 indicates that no packets from the corresponding flow have been allowed to reach the CPU 102 and state 2 indicates that a packet of the corresponding flow has been allowed to reach the CPU. In the embodiment of Fig. 2, the state of a flow is initially set to state 1 and a packet is allowed to reach the CPU when the state is state 1. Once a packet is allowed to reach the CPU, the state is changed to state 2. While the state of a flow is set to state 2, no more packets from the flow are allowed to reach the CPU. The state of a flow can be reset to state 1 according to a pre-established algorithm to ensure that the forwarding information of flows is periodically updated. For example, the state machine may be configured to reset to state 1 after the forwarding table is programmed with the corresponding table entry or after some fixed period of time. The result of the learning filter logic of Fig. 2 is that only one packet per flow is allowed to reach the CPU for processing. This can greatly reduce the load on the CPU without inhibiting the learning process. Although one example of the learning filter and filtering logic is described, other filtering techniques may be used to reduce the number of packets that are allowed to reach the CPU for learning. [0026] Fig. 3 depicts a process flow diagram of a technique for managing the utilization of processing resources of a CPU. At block 200, a packet is received at a network node. At decision point 202, it is determined whether or not learning is required. If it is determined that learning is not required, then at block 204 the packet is forwarded by the hardware-based forwarding engine using forwarding information that exists in its hardware-based forwarding table.
If it is determined that learning is required, then at block 206, the flow to which the packet belongs is identified. For example, the flow is identified by hashing certain fields of the packet header. After the flow is identified, at decision point 208, it is determined if a packet from the identified flow has already been sent to the CPU for learning. For example, a state machine is consulted to determine whether a packet from the identified flow has already been sent to the CPU for learning. If a packet from the identified flow has already been sent to the CPU for learning, then the current packet is not sent to the CPU for learning (block 210). If a packet from the identified flow has not already been sent to the CPU for learning, then the current packet is sent to the CPU for learning (block 212). The process flow of Fig. 3 is repeated for each packet that is received at the network node. [0027] Fig. 4 depicts an embodiment of a network node 130 with a distributed architecture that is configured to filter packets that are sent for learning. The distributed architecture of the network node includes a control module linecard 132, a switch fabric linecard 134, and two port interface linecards 136 (port interfaces A and B). In the embodiment of Fig. 4, a single learning filter 106 is located at the control module to filter packets received from all of the port interfaces. The learning filter depicted in Fig. 4 performs the same filtering functions as the learning filter that is described above with reference to Figs. 1 and 2.
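The two-state filter of Fig. 2 and the per-packet decision flow of Fig. 3 can be combined into one end-to-end sketch. Only the logic comes from the text; the data structures, class, and function names are assumptions for illustration. State 1 (S1) means no packet of the flow has reached the CPU yet; state 2 (S2) means one already has, so further packets of the flow are held back until a reset.

```python
# Illustrative sketch of the Fig. 2 two-state filter driving the
# Fig. 3 per-packet decision flow (blocks 200-212). Data structures
# are assumptions, not the patent's hardware.

S1, S2 = 1, 2

class LearningFilter:
    def __init__(self):
        self.state = {}  # flow hash -> S1/S2; absent entries default to S1

    def allow(self, flow_id):
        """Pass only the first packet of each flow through to the CPU."""
        if self.state.get(flow_id, S1) == S1:
            self.state[flow_id] = S2  # one packet is now headed to the CPU
            return True
        return False

    def reset(self, flow_id):
        # Pre-established reset, e.g. after the forwarding table is
        # programmed or after a fixed period, re-enables learning.
        self.state[flow_id] = S1

def handle_packet(fwd_table, flt, flow_id):
    """Blocks 200-212: hardware forward when possible, else filter."""
    if flow_id in fwd_table:                      # decision point 202
        return ("forwarded", fwd_table[flow_id])  # block 204
    if flt.allow(flow_id):                        # decision point 208
        return ("sent_to_cpu", None)              # block 212
    return ("not_sent_to_cpu", None)              # block 210

flt = LearningFilter()
first = handle_packet({}, flt, 42)       # first packet reaches the CPU
second = handle_packet({}, flt, 42)      # same flow: filtered out
later = handle_packet({42: 7}, flt, 42)  # table programmed: hardware path
```

The design point this illustrates: however many packets of flow 42 arrive before the forwarding table is programmed, the CPU sees exactly one of them.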
[0028] The control module 132 includes a CPU 102 (identified as the "main CPU") and the learning filter 106. In general, the control module supports various functions such as network management functions and protocol implementation functions. Although not shown, the control module also includes memory such as electrically erasable programmable read-only memory (EEPROM) or flash ROM for storing operational code and dynamic random access memory (DRAM) for buffering traffic and storing data structures, such as forwarding information. In addition, there may be more than one discrete processor unit and more than one memory unit on the control module. The main CPU may include a multifunction processor and/or an application-specific processor as described above. The main CPU supports the software-based learning as indicated by the software-based learning protocols functional block 110. The software-based learning includes generating Layer 2 and Layer 3 forwarding information as is well-known in the field.
[0029] The switch fabric 134 provides datapaths between the control module 132 and the port interfaces 136 (e.g., datapaths between the control module and the port interfaces and datapaths between the port interfaces). The switch fabric may utilize, for example, shared memory, a shared bus, or crosspoint matrices. [0030] The port interfaces 136 include a port interface CPU 138, a hardware-based forwarding engine 104, and input/output ports 140. In general, functions performed by the port interfaces include receiving traffic into the network node, buffering traffic, storing forwarding information, protocol processing, making forwarding decisions, and transmitting traffic from the network node 130. In the embodiment of Fig. 4, the port interface CPU of each port interface runs its own operating system. The port interface CPU within each port interface linecard may include a multifunction processor (e.g., an IBM PowerPC® processor) and/or an application specific processor. Operational code is typically stored in non-volatile memory (not shown) such as EEPROM or flash ROM while traffic is typically buffered in volatile memory (not shown) such as RAM.
[0031] The hardware-based forwarding engines 104 depicted in Fig. 4 perform the same functions as the hardware-based forwarding engine described with reference to Fig. 1. One task performed by the hardware-based forwarding engines is determining if incoming packets need to be learned so that forwarding decisions can be made directly by the hardware-based forwarding engines. Packets that need to be learned are sent to the control module 132 through the switch fabric 134. [0032] In operation, the hardware-based forwarding engines 104 of the port interfaces 136 determine if received packets need learning. If received packets need learning, then the packets are sent across the switch fabric 134 to the control module 132. At the control module, the packets are first processed by the learning filter 106. The learning filter acts as a gateway that determines whether or not the packets reach the main CPU 102 for learning. Because the learning filter is located on the control module, it can receive packets from all of the different port interfaces and therefore functions as a central filtering point. This enables all of the filtering to be accomplished with a single learning filter ASIC. Additionally, this enables the filtering to be accomplished without requiring changes to the main CPU or the hardware-based forwarding engines.
[0033] Fig. 5 depicts another embodiment of a network node 150 with a distributed architecture. The embodiment of Fig. 5 is similar to the embodiment of Fig. 4 except that the filtering function is performed in a distributed manner at each port interface 136. In particular, each port interface includes an interface-specific learning filter 106A and 106B that filters only packets from its corresponding port interface. The interface-specific learning filters perform the same basic functions as the learning filter described with reference to Figs. 1 and 2. Packets that pass the filtering are sent from the respective port interfaces to the main CPU 102 of the control module 132 through the switch fabric.
[0034] Fig. 6 depicts a process flow diagram of a method for managing the utilization of processing resources of a CPU. At block 220, a packet is received. At block 222, it is determined if forwarding information related to the packet needs to be learned to forward the packet. At block 224, if learning is needed, a decision is made whether to subject the packet to learning. The decision is based on whether any other related packets have already been subjected to learning.
[0035] In the embodiment described herein, only packets that pass the filtering are sent to the CPU 102 for processing. In an alternative embodiment, the filtering function may be incorporated into the CPU such that all packets sent for learning are received by the CPU but only selected packets are subjected to learning processing. [0036] Although in one embodiment only one packet per flow is allowed to reach the CPU 102, in other embodiments, the number of packets allowed to reach the CPU is reduced from the total number of packets of a flow that are initially sent to the CPU for learning.
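The "reduced number of packets" embodiment mentioned above admits a simple variant sketch: instead of exactly one packet per flow, the filter could allow up to a small per-flow budget. The budget value and counter scheme here are assumptions for illustration, not from the patent text.

```python
# Illustrative variant (an assumption): allow up to N packets per flow
# to reach the CPU, rather than exactly one. Still far fewer packets
# than the total number of packets initially sent for learning.

class BudgetedFilter:
    def __init__(self, per_flow_budget=3):
        self.budget = per_flow_budget
        self.sent = {}  # flow hash -> packets already allowed through

    def allow(self, flow_id):
        n = self.sent.get(flow_id, 0)
        if n < self.budget:
            self.sent[flow_id] = n + 1
            return True
        return False  # budget exhausted: filter the packet

f = BudgetedFilter(per_flow_budget=2)
results = [f.allow(7) for _ in range(4)]  # only the first two pass
```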
[0037] In an embodiment, sending a packet within the network node may involve sending only header information of the packet. For example, sending a packet to the CPU for learning may involve sending only header information of the packet to the CPU. [0038] Additionally, although in one embodiment the first packet of a flow is allowed to reach the CPU for learning, in other embodiments it is possible that a packet other than the first packet is allowed to reach the CPU. [0039] Although specific embodiments of the invention have been described and illustrated, the invention is not to be limited to the specific forms or arrangements of parts as described and illustrated herein. The invention is limited only by the claims.

Claims

WHAT IS CLAIMED IS:
1. A method for managing the utilization of processing resources in a packet-based network node comprising: receiving a packet; determining if forwarding information related to the packet needs to be learned to forward the packet; if learning is needed, deciding whether to subject the packet to learning based on whether any other related packets have already been subjected to learning.
2. The method of claim 1 wherein determining if forwarding information related to the packet needs to be learned comprises comparing header information of the packet to entries in a hardware-based forwarding table to find a match.
3. The method of claim 2 wherein learning is needed if no match is found in the comparison of the header information to entries in the hardware-based forwarding table.
4. The method of claim 1 wherein deciding whether to subject the packet to learning comprises identifying a flow to which the packet belongs and determining whether a packet from the same flow has already been subjected to learning.
5. The method of claim 4 further including subjecting the packet to learning only if it is determined that a packet from the same flow has not already been subjected to learning.
6. The method of claim 4 wherein identifying a flow to which the packet belongs involves hashing header information of the packet to produce a hash value and wherein determining whether a packet from the same flow has already been subjected to learning comprises indexing a state table using the hash value to obtain state information.
7. The method of claim 6 wherein the state information indicates whether a packet from the flow has already been subjected to learning.
8. The method of claim 1 further comprising sending the packet to a central processing unit (CPU) for processing if it is determined that the packet should be subjected to learning.
9. A system for managing the utilization of processing resources in a packet-based network node comprising: a central processing unit (CPU) configured to learn forwarding information that is used to forward packets; a hardware-based forwarding engine configured to determine whether a packet should be sent to the CPU for learning; and a learning filter configured to receive packets from the hardware-based forwarding engine that are determined by the hardware-based forwarding engine to need learning and to decide whether to allow the received packets to reach the CPU based on whether any other related packets have already been allowed to reach the CPU.
10. The system of claim 9 wherein the hardware-based forwarding engine includes a hardware-based forwarding table that can be programmed with forwarding table entries and wherein the hardware-based forwarding engine is configured to send a packet to the CPU for learning when the hardware-based forwarding engine does not contain forwarding information corresponding to the received packet.
11. The system of claim 9 wherein related packets are packets from the same flow of packets and wherein the learning filter comprises a hasher that is configured to identify a flow to which the packet belongs and a state table for indicating whether a packet from an identified flow has already been sent to the CPU, wherein the hasher generates a hash value that identifies a flow and wherein the hash value is used to index the state table.
12. The system of claim 9 wherein the learning filter is configured to allow a reduced number of packets from a flow to reach the CPU.
13. The system of claim 9 wherein the CPU and learning filter are located on a control module linecard and the hardware-based forwarding engine is located on a port interface linecard, the system further including a plurality of port interface linecards each having a hardware-based forwarding engine, wherein the learning filter is configured to receive packets from each of the port interface linecards.
14. The system of claim 9 wherein the CPU is located on a control module linecard and the learning filter is located along with the hardware-based forwarding engine on a port interface linecard, the network node further comprising a plurality of port interface linecards, each port interface linecard including a learning filter.
15. A method for managing the utilization of processing resources in a packet-based network node comprising: receiving a packet; sending the packet to a central processing unit (CPU) for learning; before the packet reaches the CPU, determining if a related packet has already been allowed to reach the CPU for learning; and deciding whether to allow the packet to reach the CPU based on whether a related packet has already been allowed to reach the CPU.
16. The method of claim 15 wherein determining if a related packet has already been allowed to reach the CPU for learning comprises identifying a flow to which the packet is associated.
17. The method of claim 16 further comprising determining whether a packet from the same flow has been allowed to reach the CPU.
18. The method of claim 17 further comprising allowing the received packet to reach the CPU for learning only if another packet from the same flow has not already been allowed to reach the CPU for learning.
19. The method of claim 17 further comprising allowing a reduced number of packets from the same flow to reach the CPU for learning.
20. The method of claim 15 wherein determining if a related packet has already been allowed to reach the CPU for learning comprises hashing header information of the received packet to produce a hash value and indexing a state table using the hash value, wherein the state table includes state information that indicates whether a related packet has already been allowed to reach the CPU.
PCT/US2005/001284 2004-01-14 2005-01-14 Managing processing utilization in a network node WO2005067532A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2006549635A JP2007525883A (en) 2004-01-14 2005-01-14 Processing usage management in network nodes
CN2005800064873A CN101351995B (en) 2004-01-14 2005-01-14 Managing processing utilization in a network node
EP05705737A EP1721411A4 (en) 2004-01-14 2005-01-14 Managing processing utilization in a network node

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US53646904P 2004-01-14 2004-01-14
US60/536,469 2004-01-14

Publications (2)

Publication Number Publication Date
WO2005067532A2 true WO2005067532A2 (en) 2005-07-28
WO2005067532A3 WO2005067532A3 (en) 2008-08-21

Family

ID=34794407

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/001284 WO2005067532A2 (en) 2004-01-14 2005-01-14 Managing processing utilization in a network node

Country Status (5)

Country Link
US (1) US7443856B2 (en)
EP (1) EP1721411A4 (en)
JP (1) JP2007525883A (en)
CN (1) CN101351995B (en)
WO (1) WO2005067532A2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007067747A2 (en) 2005-12-07 2007-06-14 Lucent Technologies Managing the distribution of control protocol information in a network node
JP2007221240A (en) * 2006-02-14 2007-08-30 Nippon Telegr & Teleph Corp <Ntt> Device and method for controlling passage of packet
WO2011029361A1 (en) * 2009-09-09 2011-03-17 中兴通讯股份有限公司 Method, device and switch chip for reducing utilization rate of central processing unit of switch
CN101184095B (en) * 2007-12-06 2011-09-21 中兴通讯股份有限公司 Network anti-attack method and system based on strategy control listing of CPU
EP2947827A4 (en) * 2013-01-21 2016-03-02 Zte Corp Method and apparatus for improving forwarding performance of chip

Families Citing this family (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7236490B2 (en) 2000-11-17 2007-06-26 Foundry Networks, Inc. Backplane interface adapter
US7596139B2 (en) 2000-11-17 2009-09-29 Foundry Networks, Inc. Backplane interface adapter with error control and redundant fabric
US7266117B1 (en) 2002-05-06 2007-09-04 Foundry Networks, Inc. System architecture for very fast ethernet blade
US7468975B1 (en) 2002-05-06 2008-12-23 Foundry Networks, Inc. Flexible method for processing data packets in a network routing system for enhanced efficiency and monitoring capability
US20120155466A1 (en) 2002-05-06 2012-06-21 Ian Edward Davis Method and apparatus for efficiently processing data packets in a computer network
US7187687B1 (en) 2002-05-06 2007-03-06 Foundry Networks, Inc. Pipeline method and system for switching packets
US7826452B1 (en) 2003-03-24 2010-11-02 Marvell International Ltd. Efficient host-controller address learning in ethernet switches
US6901072B1 (en) 2003-05-15 2005-05-31 Foundry Networks, Inc. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US7817659B2 (en) 2004-03-26 2010-10-19 Foundry Networks, Llc Method and apparatus for aggregating input data streams
US8730961B1 (en) 2004-04-26 2014-05-20 Foundry Networks, Llc System and method for optimizing router lookup
US7657703B1 (en) 2004-10-29 2010-02-02 Foundry Networks, Inc. Double density content addressable memory (CAM) lookup scheme
US20070081526A1 (en) * 2005-09-27 2007-04-12 Accton Technology Corporation Network switch device
US7620043B2 (en) * 2005-09-29 2009-11-17 Fujitsu Limited Using CRC-15 as hash function for MAC bridge filter design
CN100442772C (en) * 2005-10-19 2008-12-10 华为技术有限公司 Bridge-connection transmitting method
EP1777889B1 (en) * 2005-10-19 2009-04-15 Alcatel Lucent Method of processing information packets and telecommunication apparatus using the same
US8448162B2 (en) 2005-12-28 2013-05-21 Foundry Networks, Llc Hitless software upgrades
US7796590B1 (en) * 2006-02-01 2010-09-14 Marvell Israel (M.I.S.L.) Ltd. Secure automatic learning in ethernet bridges
US7903654B2 (en) 2006-08-22 2011-03-08 Foundry Networks, Llc System and method for ECMP load sharing
US8238255B2 (en) * 2006-11-22 2012-08-07 Foundry Networks, Llc Recovering from failures without impact on data traffic in a shared bus architecture
US8155011B2 (en) * 2007-01-11 2012-04-10 Foundry Networks, Llc Techniques for using dual memory structures for processing failure detection protocol packets
US8271859B2 (en) 2007-07-18 2012-09-18 Foundry Networks Llc Segmented CRC design in high speed networks
US8037399B2 (en) 2007-07-18 2011-10-11 Foundry Networks, Llc Techniques for segmented CRC design in high speed networks
US8509236B2 (en) 2007-09-26 2013-08-13 Foundry Networks, Llc Techniques for selecting paths and/or trunk ports for forwarding traffic flows
US9559987B1 (en) * 2008-09-26 2017-01-31 Tellabs Operations, Inc Method and apparatus for improving CAM learn throughput using a cache
US8090901B2 (en) 2009-05-14 2012-01-03 Brocade Communications Systems, Inc. TCAM management approach that minimize movements
US8599850B2 (en) 2009-09-21 2013-12-03 Brocade Communications Systems, Inc. Provisioning single or multistage networks using ethernet service instances (ESIs)
CN102648455B (en) * 2009-12-04 2015-11-25 日本电气株式会社 Server and stream control routine
US9769016B2 (en) 2010-06-07 2017-09-19 Brocade Communications Systems, Inc. Advanced link tracking for virtual cluster switching
US9270486B2 (en) 2010-06-07 2016-02-23 Brocade Communications Systems, Inc. Name services for virtual cluster switching
US9716672B2 (en) 2010-05-28 2017-07-25 Brocade Communications Systems, Inc. Distributed configuration management for virtual cluster switching
US8867552B2 (en) 2010-05-03 2014-10-21 Brocade Communications Systems, Inc. Virtual cluster switching
US9628293B2 (en) 2010-06-08 2017-04-18 Brocade Communications Systems, Inc. Network layer multicasting in trill networks
US9806906B2 (en) 2010-06-08 2017-10-31 Brocade Communications Systems, Inc. Flooding packets on a per-virtual-network basis
US9807031B2 (en) 2010-07-16 2017-10-31 Brocade Communications Systems, Inc. System and method for network configuration
CN102075421B (en) * 2010-12-30 2013-10-02 杭州华三通信技术有限公司 Service quality processing method and device
JP5787061B2 (en) * 2011-03-30 2015-09-30 日本電気株式会社 Switch system, line card, FDB information learning method and program
US9736085B2 (en) 2011-08-29 2017-08-15 Brocade Communications Systems, Inc. End-to end lossless Ethernet in Ethernet fabric
US9699117B2 (en) 2011-11-08 2017-07-04 Brocade Communications Systems, Inc. Integrated fibre channel support in an ethernet fabric switch
US9450870B2 (en) 2011-11-10 2016-09-20 Brocade Communications Systems, Inc. System and method for flow management in software-defined networks
US8995272B2 (en) 2012-01-26 2015-03-31 Brocade Communication Systems, Inc. Link aggregation in software-defined networks
US9742693B2 (en) 2012-02-27 2017-08-22 Brocade Communications Systems, Inc. Dynamic service insertion in a fabric switch
US9154416B2 (en) 2012-03-22 2015-10-06 Brocade Communications Systems, Inc. Overlay tunnel in a fabric switch
US9374301B2 (en) 2012-05-18 2016-06-21 Brocade Communications Systems, Inc. Network feedback in software-defined networks
US10277464B2 (en) 2012-05-22 2019-04-30 Arris Enterprises Llc Client auto-configuration in a multi-switch link aggregation
US9401872B2 (en) 2012-11-16 2016-07-26 Brocade Communications Systems, Inc. Virtual link aggregations across multiple fabric switches
US9413691B2 (en) 2013-01-11 2016-08-09 Brocade Communications Systems, Inc. MAC address synchronization in a fabric switch
US9548926B2 (en) 2013-01-11 2017-01-17 Brocade Communications Systems, Inc. Multicast traffic load balancing over virtual link aggregation
US9350680B2 (en) 2013-01-11 2016-05-24 Brocade Communications Systems, Inc. Protection switching over a virtual link aggregation
US9565099B2 (en) 2013-03-01 2017-02-07 Brocade Communications Systems, Inc. Spanning tree in fabric switches
US9401818B2 (en) 2013-03-15 2016-07-26 Brocade Communications Systems, Inc. Scalable gateways for a fabric switch
US9699001B2 (en) 2013-06-10 2017-07-04 Brocade Communications Systems, Inc. Scalable and segregated network virtualization
US9800521B2 (en) * 2013-08-26 2017-10-24 Ciena Corporation Network switching systems and methods
US9806949B2 (en) 2013-09-06 2017-10-31 Brocade Communications Systems, Inc. Transparent interconnection of Ethernet fabric switches
US9912612B2 (en) 2013-10-28 2018-03-06 Brocade Communications Systems LLC Extended ethernet fabric switches
US9548873B2 (en) 2014-02-10 2017-01-17 Brocade Communications Systems, Inc. Virtual extensible LAN tunnel keepalives
US10581758B2 (en) 2014-03-19 2020-03-03 Avago Technologies International Sales Pte. Limited Distributed hot standby links for vLAG
US10476698B2 (en) 2014-03-20 2019-11-12 Avago Technologies International Sales Pte. Limited Redundent virtual link aggregation group
US10063473B2 (en) 2014-04-30 2018-08-28 Brocade Communications Systems LLC Method and system for facilitating switch virtualization in a network of interconnected switches
US9800471B2 (en) 2014-05-13 2017-10-24 Brocade Communications Systems, Inc. Network extension groups of global VLANs in a fabric switch
US10616108B2 (en) 2014-07-29 2020-04-07 Avago Technologies International Sales Pte. Limited Scalable MAC address virtualization
US9807007B2 (en) * 2014-08-11 2017-10-31 Brocade Communications Systems, Inc. Progressive MAC address learning
US9699029B2 (en) 2014-10-10 2017-07-04 Brocade Communications Systems, Inc. Distributed configuration management in a switch group
US9626255B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Online restoration of a switch snapshot
US9628407B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Multiple software versions in a switch group
US10003552B2 (en) 2015-01-05 2018-06-19 Brocade Communications Systems, Llc. Distributed bidirectional forwarding detection protocol (D-BFD) for cluster of interconnected switches
US9942097B2 (en) 2015-01-05 2018-04-10 Brocade Communications Systems LLC Power management in a network of interconnected switches
US9807005B2 (en) 2015-03-17 2017-10-31 Brocade Communications Systems, Inc. Multi-fabric manager
US10038592B2 (en) 2015-03-17 2018-07-31 Brocade Communications Systems LLC Identifier assignment to a new switch in a switch group
US10579406B2 (en) 2015-04-08 2020-03-03 Avago Technologies International Sales Pte. Limited Dynamic orchestration of overlay tunnels
US10439929B2 (en) 2015-07-31 2019-10-08 Avago Technologies International Sales Pte. Limited Graceful recovery of a multicast-enabled switch
US10171303B2 (en) 2015-09-16 2019-01-01 Avago Technologies International Sales Pte. Limited IP-based interconnection of switches with a logical chassis
CN105591923B (en) * 2015-10-28 2018-11-27 新华三技术有限公司 A kind of storage method and device of forwarding-table item
US9912614B2 (en) 2015-12-07 2018-03-06 Brocade Communications Systems LLC Interconnection of switches based on hierarchical overlay tunneling
US10164796B2 (en) * 2016-04-19 2018-12-25 Avago Technologies International Sales Pte. Limited Flexible flow table with programmable state machine
US10237090B2 (en) 2016-10-28 2019-03-19 Avago Technologies International Sales Pte. Limited Rule-based network identifier mapping
CN114422365B (en) * 2022-01-21 2024-03-19 成都飞鱼星科技股份有限公司 Internet surfing behavior management method and system based on hardware flow acceleration

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06197111A (en) * 1992-10-26 1994-07-15 Hitachi Ltd Internetwork device
JPH10145417A (en) * 1996-11-15 1998-05-29 Hitachi Ltd Internetwork system
JP2951910B2 (en) * 1997-03-18 1999-09-20 松下電器産業株式会社 Gradation correction device and gradation correction method for imaging device
US6430188B1 (en) * 1998-07-08 2002-08-06 Broadcom Corporation Unified table for L2, L3, L4, switching and filtering
JP4156112B2 (en) * 1998-12-25 2008-09-24 富士通株式会社 High-speed search method and high-speed search device
EP1032164A1 (en) * 1999-02-26 2000-08-30 International Business Machines Corporation Method of self-learning for the switching nodes of a data transmission network
US6483804B1 (en) * 1999-03-01 2002-11-19 Sun Microsystems, Inc. Method and apparatus for dynamic packet batching with a high performance network interface
EP1273139A2 (en) * 2000-04-13 2003-01-08 Advanced Micro Devices, Inc. Method and device for layer 3 address learning
US6999455B2 (en) * 2000-07-25 2006-02-14 Broadcom Corporation Hardware assist for address learning
US20030035430A1 (en) * 2000-10-03 2003-02-20 Junaid Islam Programmable network device
US7313614B2 (en) * 2000-11-02 2007-12-25 Sun Microsystems, Inc. Switching system
JP3711895B2 (en) * 2001-06-13 2005-11-02 日本電気株式会社 Search system, search condition CAM registration method used therefor, and program thereof
CN100338922C (en) * 2001-09-11 2007-09-19 友讯科技股份有限公司 Method for controlling the number of addresses in the address table of a network switching device
US7269663B2 (en) * 2001-09-28 2007-09-11 Intel Corporation Tagging packets with a lookup key to facilitate usage of a unified packet forwarding cache
US6973503B2 (en) * 2002-05-23 2005-12-06 International Business Machines Corporation Preventing at least in part control processors from being overloaded
US7177311B1 (en) * 2002-06-04 2007-02-13 Fortinet, Inc. System and method for routing traffic through a virtual router-based network switch
CN1328889C (en) * 2002-06-06 2007-07-25 中兴通讯股份有限公司 Routing method based on link status
CN100337450C (en) * 2002-08-05 2007-09-12 华为技术有限公司 Communication method between virtual local area networks
US7761876B2 (en) 2003-03-20 2010-07-20 Siemens Enterprise Communications, Inc. Method and system for balancing the load on media processors based upon CPU utilization information

Non-Patent Citations (1)

Title
See references of EP1721411A4 *

Cited By (10)

Publication number Priority date Publication date Assignee Title
WO2007067747A2 (en) 2005-12-07 2007-06-14 Lucent Technologies Managing the distribution of control protocol information in a network node
EP1958400A2 (en) * 2005-12-07 2008-08-20 Lucent Technologies, Inc. Managing the distribution of control protocol information in a network node
JP2009518962A (en) * 2005-12-07 2009-05-07 アルカテル−ルーセント ユーエスエー インコーポレーテッド Distribution management of control protocol information in network nodes
EP1958400A4 (en) * 2005-12-07 2011-02-23 Lucent Technologies Inc Managing the distribution of control protocol information in a network node
US8054830B2 (en) 2005-12-07 2011-11-08 Alcatel Lucent Managing the distribution of control protocol information in a network node
JP2007221240A (en) * 2006-02-14 2007-08-30 Nippon Telegr & Teleph Corp <Ntt> Device and method for controlling passage of packet
CN101184095B (en) * 2007-12-06 2011-09-21 中兴通讯股份有限公司 Network anti-attack method and system based on a CPU policy control list
WO2011029361A1 (en) * 2009-09-09 2011-03-17 中兴通讯股份有限公司 Method, device and switch chip for reducing the CPU utilization rate of a switch
EP2947827A4 (en) * 2013-01-21 2016-03-02 Zte Corp Method and apparatus for improving forwarding performance of chip
US9838312B2 (en) 2013-01-21 2017-12-05 Xi'an Zhongxing New Software Co., Ltd Method and apparatus for improving forwarding performance of chip

Also Published As

Publication number Publication date
JP2007525883A (en) 2007-09-06
US7443856B2 (en) 2008-10-28
CN101351995A (en) 2009-01-21
WO2005067532A3 (en) 2008-08-21
US20050152335A1 (en) 2005-07-14
EP1721411A2 (en) 2006-11-15
CN101351995B (en) 2011-02-02
EP1721411A4 (en) 2010-09-08

Similar Documents

Publication Publication Date Title
US7443856B2 (en) Managing processing utilization in a network node
EP1158725B1 (en) Method and apparatus for multi- redundant router protocol support
US7054311B2 (en) Methods and apparatus for storage and processing of routing information
US8576721B1 (en) Local forwarding bias in a multi-chassis router
US8077613B2 (en) Pinning and protection on link aggregation groups
US7184437B1 (en) Scalable route resolution
US7558268B2 (en) Apparatus and method for combining forwarding tables in a distributed architecture router
US7260096B2 (en) Method and router for forwarding internet data packets
US9065724B2 (en) Managing a flow table
US8774179B1 (en) Member link status change handling for aggregate interfaces
EP3641247B1 (en) Optimized multicast forwarding with a cache
US7764672B2 (en) Packet communication device
US20120127997A1 (en) Method for optimizing a network prefix-list search
US9025601B2 (en) Forwarding ASIC general egress multicast filter method
US7277386B1 (en) Distribution of label switched packets
EP1609279A2 (en) Method for recursive bgp route updates in mpls networks
US8144584B1 (en) WRR scheduler configuration for optimized latency, buffer utilization
EP3507953A1 (en) Techniques for architecture-independent dynamic flow learning in a packet forwarder
US7496096B1 (en) Method and system for defining hardware routing paths for networks having IP and MPLS paths
WO2013051004A2 (en) A low latency carrier class switch-router
US11171883B1 (en) Peering-fabric routing using switches having dynamically configurable forwarding logic
CN114401222A (en) Data forwarding method and device based on policy routing and storage medium
CN114221834A (en) Message forwarding method and device
EP2107724B1 (en) Improved MAC address learning
CN112702265A (en) Method for providing distributed traffic diversion in a virtualized scenario

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 2571/CHENP/2006

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2006549635

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Ref document number: DE

WWE Wipo information: entry into national phase

Ref document number: 2005705737

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 200580006487.3

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2005705737

Country of ref document: EP