WO1999000944A1 - Mechanism for packet field replacement in a multi-layer distributed network element - Google Patents

Mechanism for packet field replacement in a multi-layer distributed network element

Info

Publication number
WO1999000944A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
crc
output
set forth
switch
Application number
PCT/US1998/013200
Other languages
French (fr)
Inventor
Ariel Hendel
Shimon Muller
Louise Yeung
Original Assignee
Sun Microsystems, Inc.
Application filed by Sun Microsystems, Inc. filed Critical Sun Microsystems, Inc.
Priority to EP98931579A priority Critical patent/EP1005741A4/en
Priority to JP50571399A priority patent/JP2002507364A/en
Publication of WO1999000944A1 publication Critical patent/WO1999000944A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00: Packet switching elements
    • H04L 49/30: Peripheral units, e.g. input or output ports
    • H04L 49/3009: Header conversion, routing tables or routing tags
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40: Network security protocols
    • H04L 49/25: Routing or path finding in a switch fabric
    • H04L 49/35: Switches specially adapted for specific applications
    • H04L 49/351: Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • H04L 49/354: Switches specially adapted for specific applications for supporting virtual local area networks [VLAN]
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22: Parsing or analysis of headers

Definitions

  • This invention relates generally to communication systems that couple computers, and more specifically to relaying messages through a network element.
  • a local area network is the most basic and simplest network that allows communication between a source computer and destination computer.
  • the LAN can be envisioned as a cloud to which computers (also called endstations or end-nodes) that wish to communicate with one another are attached.
  • At least one network element will connect with all of the endstations in the LAN.
  • An example of a simple network element is the repeater which is a physical layer relay that forwards bits.
  • the repeater may have a number of ports, each endstation being attached to one port.
  • the repeater receives bits that may form a packet of data that contains a message from a source endstation, and blindly forwards the packet bit-by-bit. The bits are then received by all other endstations in the LAN, including the destination.
  • a single LAN may be insufficient to meet the requirements of an organization that has many endstations, because of the limited number of physical connections available to and the limited message handling capability of a single repeater.
  • the repeater-based approach can support only a limited number of endstations over a limited geographical area.
  • a network is thus said to have a topology which defines the features and hierarchical position of nodes and endstations within the network.
  • endstations through packet switched networks has traditionally followed a peer-to-peer layered architectural abstraction.
  • a given layer in a source computer communicates with the same layer of a peer endstation (usually the destination) across the network.
  • By attaching a header to the data unit received from a higher layer, a layer provides services to enable the operation of the layer above it.
  • a received packet will typically have several headers that were added to the original payload by the different layers operating at the source.
  • The relevant layers here are Layer 1 (physical), Layer 2 (data link), Layer 3 (network), and, to a limited extent, Layer 4 (transport).
  • the physical layer transmits unstructured bits of information across a communication link.
  • the repeater is an example of a network element that operates in this layer.
  • the physical layer concerns itself with such issues as the size and shape of connectors, conversion of bits to electrical signals, and bit-level synchronization.
  • Layer 2 provides for transmission of frames of data and error detection. More importantly, the data link layer as referred to in this invention is typically designed to "bridge," or carry a packet of information across a single hop, i.e., a hop being the journey taken by a packet in going from one node to another. By spending only minimal time processing a received packet before sending the packet to its next destination, the data link layer can forward a packet much faster than the layers above it, which are discussed next.
  • the data link layer provides addressing that may be used to identify a source and a destination between any computers interconnected at or below the data link layer. Examples of Layer 2 bridging protocols include those defined in IEEE 802 such as CSMA/CD, token bus, and token ring (including Fiber Distributed Data Interface, or FDDI).
  • Layer 3 also includes the ability to provide addresses of computers that communicate with each other.
  • the network layer also works with topological information about the network hierarchy.
  • the network layer may also be configured to "route" a packet from the source to a destination using the shortest path.
  • the network layer can control congestion by simply dropping selected packets, which the source might recognize as a request to reduce the packet rate.
  • Layer 4, the transport layer, provides an application program such as an electronic mail program with a "port address" which the application can use to interface with Layer 3.
  • a key difference between the transport layer and the lower layers is that a program on the source computer carries a conversation with a similar program on the destination computer, whereas in the lower layers, the protocols are between each computer and its immediate neighbors in the network, where the ultimate source and destination endstations may be separated by a number of intermediate nodes.
  • Examples of Layer 4 and Layer 3 protocols include the Internet suite of protocols such as TCP (Transmission Control Protocol) and IP (Internet Protocol).
  • Endstations are the source and ultimate destination of a packet, whereas a node refers to an intermediate point between the endstations.
  • a node will typically include a network element which has the capability to receive and forward messages on a packet- by-packet basis.
  • a router can form and store a topological map of the network around it based upon exchanging information with its neighbors. If a LAN is designed with Layer 3 addressing capability, then routers can be used to forward packets between LANs by taking advantage of the hierarchical routing information available from the endstations. Once a table of endstation addresses and routes has been compiled by the router, packets received by the router can be forwarded after comparing the packet's Layer 3 destination address to an existing and matching entry in the memory.
  • the router operates by parsing the header of a received packet, making decisions based on a routing table inside the router, and forwarding the packet, with any required header modifications, to the next node or endstation.
  • the packet will go through several such "hops" before reaching its destination where a hop is defined as the packet traveling from one node or endstation to another node or endstation.
  • bridges are network elements operating in the data link layer (Layer 2) rather than Layer 3. They have the ability to forward a packet based only on the Layer 2 address of the packet's destination, typically called the medium access control (MAC) address. Generally speaking, bridges do not modify the packets. Bridges forward packets in a flat network having no hierarchy without any cooperation from the endstations.
  • Hybrid forms of network elements also exist, such as brouters and switches.
  • a brouter is a router which can also perform as a bridge.
  • the term switch refers to a network element which is capable of forwarding packets at high speed with functions implemented in hardwired logic as opposed to a general purpose processor executing instructions. Switches come in many flavors, operating at both Layer 2 and Layer 3.
  • Building networks using Layer 2 elements such as bridges provides fast packet forwarding between LANs but has no flexibility in traffic isolation, redundant topologies, and end-to-end policies for queuing and access control.
  • Endstations in a subnetwork can invoke conversations based on either Layer 3 or Layer 2 addressing.
  • As bridges forward packets based only on Layer 2 parsing, they provide simple yet speedy forwarding services.
  • the bridge does not support the use of high layer handling directives including queuing, priority, and forwarding constraints between endstations in the same subnetwork.
  • a prior art solution to enhancing bridge-like conversations within a subnetwork relies on a network element that uses a combination of Layer 2 and upper layer headers.
  • the Layer 3 and Layer 4 information of an initial packet are examined, and a "flow" of packets is predicted and identified using a new Layer 2 entry in the forwarding memory, with a fixed quality of service (QOS).
  • subsequent packets are forwarded at Layer 2 speed (with the fixed QOS) based upon a match of the Layer 2 header with the Layer 2 entry in the forwarding memory.
  • no entries with Layer 3 and Layer 4 headers are placed in the forwarding memory to identify the flow.
  • The latter attributes may be met using Layer 3 elements such as routers.
  • packet forwarding speed is sacrificed in return for the greater intelligence and decision making capability provided by the router. Therefore, networks are often built using a combination of Layer 2 and Layer 3 elements.
  • the role of the server has multiplied with browser-based applications that use the Internet, thus leading to increasing variation in traffic distribution.
  • the network was designed with the client and the file server in the same subnetwork to avoid router bottlenecks.
  • more specialized servers like World Wide Web and video servers are typically not on the client's subnetwork, such that crossing routers is unavoidable. Therefore, the need for packets to traverse routers at higher speeds is crucial.
  • the choice of bridge versus router typically results in a significant trade-off, lower functionality when using bridges, and lower speed when using routers.
  • the service characteristics within a network are no longer homogenous, as the performance of a server becomes location dependent if its traffic patterns involve routers.
  • the network element should be able to operate at bridge-like speeds, yet be capable of routing packets across different subnetworks and provide upper layer functionalities such as quality of service.
  • the invention is an apparatus and related method for relaying packets by a multilayer distributed network element according to known routing protocols.
  • the invention is directed at a multi-layer distributed network element (MLDNE) for receiving and forwarding packets using known routing protocols.
  • the MLDNE has a number of subsystems that are coupled by internal links.
  • Each subsystem has a forwarding memory and associated memory.
  • the memories associate packet header information including addresses with routing information.
  • a subsystem also includes external ports that connect with neighboring nodes and endstations, and internal ports that connect with other subsystems through the internal links.
  • the subsystem determines whether the packet should be routed based upon a first header portion, including a Layer 2 destination address of the received packet, matching a Layer 2 address of the MLDNE. If the first header portion of the received packet matches the MLDNE address, then the first subsystem determines, using its forwarding memory, whether a route has been previously determined for a second header portion, including Layer 3 source and destination addresses, of the received packet.
  • a neighbor node's Layer 2 address replaces the Layer 2 destination address of the packet.
  • the neighbor node's address was previously stored in the associated memory as part of the routing information associated with the matching type 2 entry.
  • the routing information in the associated memory also identifies the external ports of the inbound subsystem that connect with the neighbor node. If the neighbor node is connected to a subsystem other than the inbound subsystem, the situation would have been recognized at the time the matching type 2 entry was created such that the associated memory would identify the internal port of the inbound subsystem, rather than external port, that connects with the other subsystem to which the neighbor node or endstation is connected.
  • the packet When the packet is received over the internal link by a second subsystem, the packet is forwarded to the neighbor node in response to the packet's new first header portion matching a type 1 entry in the second forwarding memory.
  • the type 1 entry in the second subsystem contains the address of the neighbor node or endstation and had been created independently of the matching type 2 entry of the inbound subsystem.
  • the inbound subsystem After determining that a received packet should be routed, the inbound subsystem also generates a first control signal which indicates to the external port that eventually forwards the packet that a third header portion identifying the packet's source be modified before sending the packet to the neighbor node. A Layer 2 source address of the packet is replaced with a source address associated with the external port. The control signal is also passed over an internal link to the second subsystem if the neighbor node is reachable through that subsystem.
  • the invention's distributed architecture can also be configured to support routing of multicast packets.
  • a second control signal may be sent across an internal link in response to which the second subsystem performs a type 2 search of the forwarding memory (based on the network layer and higher layer headers of the packet). If a matching type 2 entry is found, then the external ports of the second subsystem check the first control signal (also received from the inbound subsystem) to see if the source address of the packet needs to be replaced, and the packet is then forwarded with the appropriate modifications to its headers.
  • the first control signal may also be received and checked by the external ports of the inbound system where the multicast destination group includes nodes/endstations connected to the inbound subsystem.
  • the invention's search engine, forwarding engine, and data structures are organized in a way that supports bridging and routing functions simultaneously, where if routing criteria are not met for a received packet, then bridging functions are provided automatically.
  • the invention is implemented with the data link layer (Layer 2), the network layer (Layer 3) and higher layers including the transport layer (Layer 4).
  • Figure 1 is a high level view of an exemplary network application of a multilayer distributed network element (MLDNE) of the invention.
  • Figure 2 is an internal view of the MLDNE as an embodiment of the invention.
  • Figure 3 illustrates an exemplary forwarding and associated memory of a subsystem in the MLDNE, including associated data for the routing of packets, according to another embodiment of the invention.
  • Figure 4 is a block diagram of an embodiment of the MLDNE having only two subsystems and acting as a router between a client and a server.
  • Figure 5 is a flow diagram of processing a received packet for routing purposes by the invention's network element.
  • Figure 6 is a continuation of the flow diagram in Figure 5 and includes steps performed in processing a unicast packet.
  • Figure 7 shows exemplary steps and operations performed by the invention's network element for routing a multicast packet.
  • Figure 8A is a simplified block diagram of a packet structure utilized in one embodiment of the invention.
  • Figure 8B is a structure for header field replacement of packets by the invention.
  • the invention defines a network element that is used to interconnect a number of nodes and endstations in a variety of different ways.
  • an application of the multi-layer distributed network element would be to route packets according to predefined routing protocols over a homogenous data link layer such as the IEEE 802.3 standard, or Ethernet.
  • Figure 1 illustrates the invention's use as a router in a network where the MLDNE 201 couples a client C to the Router 107 which in turn couples with the Server 105.
  • the MLDNE 201 can interconnect a number of desktop units (endstations), while acting as an intermediate node, through its external connections 217.
  • the MLDNE 201 is capable of providing a high performance communication path between servers and desktop units while acting as a router, where the Server 105 and the client C reside in different LANs.
  • the MLDNE's distributed architecture can be configured to route message traffic in accordance with a number of known routing algorithms such as RIP and OSPF.
  • the MLDNE is configured to handle message traffic using the Internet suite of protocols, more specifically the Transmission Control Protocol (TCP) and the Internet Protocol (IP), over a medium access control (MAC) data link layer such as Ethernet.
  • a network element is configured to implement packet routing functions in a distributed manner, i.e., different parts of a function are performed by identical building block subsystems in the MLDNE, while the final result of the functions remains transparent to the external nodes and endstations.
  • the MLDNE has a scalable architecture which allows the designer to increase the number of external connections by adding additional subsystems.
  • the MLDNE 201 contains a number of identical subsystems 210 that are fully meshed and interconnected using a number of internal links 241 to create a larger network element. At least one internal link couples any two subsystems.
  • Each subsystem 210 includes a forwarding memory 213 and an associated memory 214.
  • the forwarding memory 213 stores an address table used for matching with the headers of received packets.
  • the associated memory stores data associated with each entry in the forwarding memory that is used to identify forwarding attributes for forwarding the packets through the MLDNE.
  • a number of external ports (not shown) having input and output capability interface the external connections 217.
  • Internal ports (not shown) also having input and output capability in each subsystem couple the internal links 241. In the preferred embodiment, the external and internal ports lie within a hardwired-logic switching element 211 implemented by an application specific integrated circuit (ASIC).
  • a received packet arrives at an inbound subsystem through one of the external connections 217, and will be forwarded to a node or endstation outside the MLDNE through another external connection in an outbound subsystem.
  • the outbound and inbound subsystems can be either the same or different subsystems.
  • the MLDNE 201 includes a central processing system (CPS) 260 that is coupled to the individual subsystems 210 through a communication bus 251 such as the Peripheral Components Interconnect (PCI).
  • the CPS 260 includes a central processing unit (CPU) 261 coupled to a central memory 263.
  • the CPS has a direct control and communication interface to each subsystem 210.
  • the CPS is also configured with a number of routing protocols that are used to identify a neighbor node as part of a route for forwarding a received packet to its ultimate destination, normally specified in the Layer 3 destination address of the packet.
  • Other responsibilities of the CPS 260 include setting data path resources such as packet buffers between the different subsystems.
  • the CPS 260 performs the important task of determining whether or not a type 2 entry should be added to the forwarding memory of each individual subsystem.
  • the forwarding memory includes a number of entries of two types, type 2 entry 321 and type 1 entry 301. Each entry in the forwarding memory includes data to be compared with the headers of received packets.
  • the data fields for each type 2 entry 321 include a class field 323, an IP source field 325, an IP destination field 327, an application source port 333, an application destination port 335, and an Inbound Port field 337.
  • For each type 1 entry 301, a class field, a Layer 2 address field, and a VLAN identification (VID) field are shown in the exemplary embodiment.
  • Associated with each type 2 entry 321 and type 1 entry 301 are data stored in the associated memory 214.
  • the associated data fields contain information needed to forward a matching packet received by the subsystem.
  • the subsystem port field 347 identifies the internal or external ports of the subsystem used for forwarding the matching packet to the neighboring node in the next hop.
  • the next hop address field 357 identifies the neighbor node's Layer 2 address which replaces the original Layer 2 destination address of a received unicast packet to be routed.
  • a priority field 345 is used for queuing purposes by the external port which actually sends the packet outside the MLDNE.
  • the age fields 343 and 344 help minimize the number of entries in the forwarding memory by indicating that a recently received packet has matched the corresponding type 1 or type 2 entry.
  • a NEW VID address field 353 allows the MLDNE to be configured to support virtual LANs (VLANs).
  • the associated data also includes a NEW VLAN identification (VID) TAG field, used to notify the subsystem of a need to change the packet's VID, particularly when forwarding the packet across subnetworks.
  • the inbound subsystem in response will either insert a new tag, or replace an existing tag, with the value in the NEW VID field. For example, when routing between VLANs requires the forwarded packet's tag to be different from the received packet's tag, the NEW VID field will contain the replacement tag for the subsystem to apply before forwarding the packet.
  • additional control information may be made available over the internal link to the outbound subsystem receiving the packet.
  • additional control information includes an orig_tag bit which indicates whether or not the received packet was originally tagged with VLAN information, a mod_tag bit which indicates whether the tag was modified by the inbound subsystem, and a dont_tag bit which indicates that the received packet should not be tagged by the outbound subsystem.
  • the associated memory can be configured to include a multicast route field 355 which activates multicast routing capability in the subsystem as further explained below.
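As a concrete illustration of the entry formats described above, the following C sketch lays out the type 2 and type 1 entries and the associated data fields by name. The struct names, field widths, and ordering are assumptions for illustration only; the patent describes hardware memories, not C structures.

```c
#include <stdbool.h>
#include <stdint.h>

/* Type 2 (Layer 3/4) entry: matched against the headers of received packets. */
struct type2_entry {
    uint8_t  class_field;     /* class field 323, e.g. "route"              */
    uint32_t ip_src;          /* IP source field 325                        */
    uint32_t ip_dst;          /* IP destination field 327                   */
    uint16_t app_src_port;    /* application source port 333                */
    uint16_t app_dst_port;    /* application destination port 335           */
    uint8_t  inbound_port;    /* inbound port of arrival, field 337         */
};

/* Type 1 (Layer 2) entry. */
struct type1_entry {
    uint8_t  class_field;
    uint8_t  mac_addr[6];     /* Layer 2 (MAC) address                      */
    uint16_t vid;             /* VLAN identification (VID)                  */
};

/* Data stored in the associated memory 214 for a matching entry. */
struct assoc_data {
    uint32_t subsystem_ports;  /* port field 347: internal/external ports   */
    uint8_t  next_hop_mac[6];  /* next hop address 357 (neighbor node)      */
    uint8_t  priority;         /* priority field 345 for output queuing     */
    bool     aged;             /* age fields 343/344, simplified to a flag  */
    uint16_t new_vid;          /* NEW VID field 353 for VLAN replacement    */
    bool     new_vid_tag;      /* NEW VID TAG: the packet's VID must change */
    bool     multicast_route;  /* multicast route field 355                 */
};
```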
  • the routing operation of the MLDNE 201 will be described for an exemplary embodiment using the flow diagram of Figures 5-7 in conjunction with the exemplary network application in Figure 4. References to fields in the forwarding and associated memories are found in Figure 3.
  • the journey of a packet is traced beginning with a client C in subnetwork 103 coupled to an external connection of MLDNE 201.
  • the client C sends a packet to server 105 which is identified in the Layer 3 destination address field of the packet's header.
  • the packet must traverse a router 107 which is assumed to have a Layer 2 address known by the MLDNE 201.
  • a packet is received by the MLDNE 201 at external port E1 of the inbound subsystem 410.
  • the packet includes a message originated from a client C having a Layer 3 address in a logically defined network subnetwork 103.
  • Subsystem 410 is configured to recognize that external ports E1 and E2 couple the subnetwork 103.
  • A first header portion, including the Layer 2 destination address in the present embodiment, of the received packet is compared with a router address of the MLDNE 201.
  • the router address may be a Layer 2 address assigned to external port E1, or a Layer 2 address assigned to the MLDNE as a whole. Normally, the MLDNE will be configured so that each external port is assigned its own router address. If the first header portion of the received packet matches the router address, then operation proceeds to block 515 where the packet is declared to be a potential unicast route candidate. If, however, the first header portion does not match the router address, then operation proceeds to block 509 where the packet is declared as not being a unicast routable packet. As will be appreciated below, such a packet can still be a multicast packet having a multicast route available in the MLDNE.
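A minimal C sketch of the decision that leads to block 515 or block 509 above: the packet is a potential unicast route candidate only when its Layer 2 destination address matches the router address assigned to the receiving external port (or to the MLDNE as a whole). Function and parameter names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* First header portion (Layer 2 destination address) versus the router
 * address: a match means the packet is a potential unicast route candidate;
 * a miss means it is not unicast-routable (it may still match a multicast
 * route or simply be bridged). */
static bool is_unicast_route_candidate(const uint8_t l2_dst[6],
                                       const uint8_t router_addr[6])
{
    return memcmp(l2_dst, router_addr, 6) == 0;
}
```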
  • For a unicast packet of the route class, block 517 performs a search of the forwarding memory 413 for a matching type 2 entry using "route" as the class field 323.
  • the search of the forwarding memory in block 517 leads to the decision block 521 where the test is whether a type 2 matching entry exists in the forwarding memory 413. If not, then operation proceeds with block 523 where relevant portions of the received packet headers are sent to the CPS via the CPS port in subsystem 410 and the CPS bus 451.
  • the CPS 460 When the CPS 460 receives the portions of the headers of the "missed" packet from subsystem 410 in block 533, the CPS then examines access policies and class of service policies that have been preconfigured in the CPS, and the CPS Layer 2 and Layer 3 topology tables. The CPS has the option of denying service to the path requested by the received packet, performing the routing function entirely in its own software, or preparing a type 2 entry in the inbound system's forwarding memory for the route.
  • the routing algorithms of the MLDNE 201 are implemented by the CPS. If a unicast route exists or can be readily computed for the received packet, then the CPS decides in decision block 537 to proceed with block 539 and add a route class type 2 entry 321 to the forwarding memory, and associated data to the associated memory, of the inbound subsystem 410. If the neighbor node connects to an external port of the inbound subsystem 410, as determined by the CPS consulting a Layer 2 table in the central memory, then the external port is identified in the new type 2 entry's associated subsystem port field 347. Similarly, if the neighbor node connects to the subsystem 420, then an internal port I1 or I2 is identified.
  • the received packet is forwarded as a unicast packet as illustrated in exemplary form in Figure 6.
  • the switching element 411 evaluates whether the unicast packet's time to live has been exceeded.
  • a time to live field is assumed to exist in the received packet's headers. If the packet has been circulating through the network too long as indicated by its time to live field, then the inbound subsystem only sends the received packet to the CPS, and then a time exceeded error message in accordance with, for example, the Internet Control Message Protocol (ICMP) used by the Internet community is generated by the CPS as in block 609.
  • TTL time to live
  • decision block 615 determines whether a new VLAN identification tag is required by checking the status of the NEW VID tag field 351.
  • Next, a first control signal, such as an sa_replace bit, is generated by the inbound subsystem.
  • the sa_replace bit will be handed off to the external and internal ports indicated in the subsystem port field 347, and thus may be transferred over an internal link 441, together with the packet, to the subsystem 420.
  • the first control signal will notify the subsystem (either the inbound one or another subsystem) to replace the Layer 2 source address of the packet with the source address of the external port used for forwarding the packet.
  • The packet, together with any control information, is processed by internal port I2 in switching element 411, and delivered to the internal link 441 to connect with the outbound subsystem 420 in block 627.
  • the modified packet and control information stay in the inbound subsystem and are processed by an external port, where operation continues in block 630.
  • the packet is received over the internal links in outbound subsystem 420.
  • a type 1 matching cycle then begins and decision block 629 is reached to determine whether a matching type 1 entry exists in the forwarding memory 423. If a type 1 entry exists then operation continues with block 630. The operations from block 630 to block 637 are performed by the "outbound" subsystem where the packet leaves the MLDNE, be it the inbound subsystem 410 or a different subsystem 420. If the sa_replace bit, as checked in decision block 630, is set, then the switching element replaces a third header portion, including at least the Layer 2 source address of the received packet, with the Layer 2 address of the external port E3 through which the packet must be forwarded. The external port E3 was identified in the associated data (in associated memory) corresponding to either the matching type 1 entry found in block 629 (the packet came across an internal link) or the matching type 2 entry found in block 521 (the packet remained in the inbound subsystem).
  • the MLDNE can be configured so that each external port is assigned a unique Layer 2 address. Alternatively, a single source address may be assigned to the MLDNE as a whole and shared by all external ports. In either case, following the replacement of the third header portion, the cyclic redundancy code (CRC) of the packet's headers is recomputed in block 635 and the packet is then forwarded to the neighbor node being the router 107 in Figure 4.
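The following C sketch illustrates the header field replacement and CRC recomputation of blocks 630 and 635, assuming a standard Ethernet frame layout (destination address in bytes 0-5, source address in bytes 6-11, frame check sequence in the last 4 bytes) and the usual reflected CRC-32 used for the Ethernet FCS. It is an illustrative software model, not the patent's hardwired implementation.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Bit-reflected CRC-32 (polynomial 0xEDB88320), as used for the Ethernet FCS. */
static uint32_t crc32_fcs(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* frame points at the start of the Ethernet header:
 * bytes 0-5 destination address, bytes 6-11 source address,
 * last 4 bytes the frame check sequence (CRC). */
static void replace_l2_source_and_recompute_crc(uint8_t *frame, size_t frame_len,
                                                const uint8_t port_mac[6])
{
    memcpy(frame + 6, port_mac, 6);                 /* replace Layer 2 source   */
    uint32_t fcs = crc32_fcs(frame, frame_len - 4); /* CRC over all but old FCS */
    frame[frame_len - 4] = (uint8_t)(fcs);          /* append FCS, LSB first    */
    frame[frame_len - 3] = (uint8_t)(fcs >> 8);
    frame[frame_len - 2] = (uint8_t)(fcs >> 16);
    frame[frame_len - 1] = (uint8_t)(fcs >> 24);
}
```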
  • the packet's journey has been described originating from the client C and traveling through subsystem 410, internal link 441, and subsystem 420 in MLDNE 201.
  • the packet is then received by router 107 and forwarded according to conventional means to server 105.
  • It is assumed that a route to the server 105 as a destination, through router 107, had been previously obtained by the MLDNE 201 using conventional techniques for determining routes.
  • routing policies as well as class of service queuing have the granularity and flexibility of Layer 3 end-to-end addresses and protocol based classification. These routing policies and class of service queuing are identified in the associated data corresponding to each matching type 2 entry, and may be sent across the internal link to a separate outbound subsystem.
  • the routing features of the invention for multicast packets are now presented while referring once again to the entries in the forwarding and associated memories of Figure 3 and the flow diagram of Figure 7.
  • Although multicast routing in the invention's MLDNE can be supported by hardware structures similar to those that implement unicast routing, multicast does present significantly different problems to the network element designer.
  • the routing protocols used to derive the type 2 entries in the forwarding memory include protocols such as MOSPF and DVMRP which are well-known in the art. These multicast routing protocols produce a loop-free distribution tree for the packet's group destination network layer multicast address and a source network layer address for the sender.
  • the MLDNE has a local multicast forwarding rule which yields a number of external ports (and their corresponding subsystems) for forwarding the packet, as a function of a received multicast packet's group destination Layer 3 address, source Layer 3 address, and the inbound subsystem port of arrival.
  • This dependency is reflected in the type 2 entry in the forwarding memory of Figure 3 as the fields 327, 325, and 337, respectively, to be matched with a received packet's headers.
  • the inbound port of arrival field 337 is included to prevent forwarding duplicate packets over alternate paths.
  • the MLDNE is configured to identify a multicast packet based on at least two criteria. First, the packet headers must match a given class. Second, the packet's headers must match an existing type 2 entry that refers to a multicast group destination address. The matching type 2 entry for the multicast case may be created as a result of executing a multicast registration protocol such as IGMP.
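A short C sketch of the local multicast forwarding rule stated above: whether a type 2 multicast entry matches is a function of the group destination Layer 3 address, the source Layer 3 address, and the inbound port of arrival (field 337), which is what prevents duplicate copies arriving over alternate paths from being forwarded. Names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

struct mcast_key {
    uint32_t group_dst;     /* group destination Layer 3 address (field 327) */
    uint32_t src;           /* source Layer 3 address (field 325)            */
    uint8_t  inbound_port;  /* inbound port of arrival (field 337)           */
};

/* A multicast type 2 entry matches only if all three key components match;
 * keying on the port of arrival prevents forwarding duplicate packets that
 * arrive over alternate paths. */
static bool mcast_entry_matches(const struct mcast_key *pkt,
                                const struct mcast_key *entry)
{
    return pkt->group_dst == entry->group_dst &&
           pkt->src == entry->src &&
           pkt->inbound_port == entry->inbound_port;
}
```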
  • Figure 7 illustrates an exemplary flow diagram for routing a received multicast packet through the MLDNE 201 of Figure 4.
  • When a packet is received by the subsystem 410 and the packet headers match a certain class and a type 2 entry 321 which has a multicast route field 355 indicating that the entry is for multicast routing, as in block 703, control is transferred to the decision block 705.
  • the routing operation continues in block 709 in the inbound subsystem 410 by decrementing the time to live field in the received packet's header.
  • If the packet's TTL was exceeded, then in block 707 the packet may be flooded, not routed, to its VLAN.
  • A packet's VLAN, in general, defines the Layer 2 topology used for flooding, in other words the broadcast domain.
  • the inbound subsystem 410 determines whether a new VLAN tag is required for the received packet, based on the NEW VID tag field 351 in the associated memory. If so, then the VID in the Layer 2 header of the packet is replaced with the destination VID of the next hop, as found in the associated memory, as in block 713. Note that block 713 is performed only if the Layer 3 multicast destination address of the received packet refers to endstations that lie within the same VLAN. Such a determination was made by the CPS when the type 2 entry was created.
  • the inbound subsystem 410 prepares to notify the external ports that will forward the packets outside the MLDNE of a need to route the packet by setting the first control signal (sa_replace bit) to indicate to the forwarding external ports that the Layer 2 source address of the packet to be forwarded must be replaced with the source address of the external port.
  • the inbound subsystem compensates the packet's header check sum value in block 717.
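Blocks 709 and 717 decrement the time-to-live and compensate the header checksum. Assuming an IPv4 header, the standard incremental update (in the spirit of RFCs 1141/1624) patches the checksum without recomputing it over the whole header; the sketch below is illustrative and not taken from the patent.

```c
#include <stdint.h>

/* Decrement the IPv4 TTL (byte 8 of the header) and incrementally adjust the
 * header checksum (bytes 10-11).  When the TTL drops by one, the 16-bit word
 * holding TTL and protocol decreases by 0x0100, so adding 0x0100 to the
 * stored checksum (with end-around carry) keeps the header consistent.
 * Assumes TTL > 0; the TTL-exceeded case is handled separately (block 707). */
static void decrement_ttl_and_fix_checksum(uint8_t *ipv4_hdr)
{
    ipv4_hdr[8]--;
    uint32_t sum = ((uint32_t)ipv4_hdr[10] << 8) | ipv4_hdr[11];
    sum += 0x0100;
    sum = (sum & 0xFFFFu) + (sum >> 16);   /* fold the carry back in */
    ipv4_hdr[10] = (uint8_t)(sum >> 8);
    ipv4_hdr[11] = (uint8_t)(sum & 0xFF);
}
```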
  • the inbound subsystem 410 then hands off copies of the packet to the external and internal ports of the inbound subsystem 410 that are identified in the subsystem ports field 347 of the associated memory as corresponding to the matching type 2 entry, as in block 719.
  • When a copy of the packet traverses an internal link and arrives at a different subsystem 420 in block 720, operation proceeds with decision block 721 where a second control signal, here called the distributed flow (DF or distrib_flow) bit, may be received by the outbound subsystem 420. If the DF bit is set, then a class filter determines the class of the packet, based upon the packet's headers, and a type 2 search (with the identified class) is conducted in block 722.
  • the distrib_flow construct allows the CPS to define a type 2 entry in the outbound subsystem 420 corresponding to the matching multicast route entry in the inbound subsystem. This allows different priorities to be assigned by the CPS to the different external ports that will service the multicast route, to further control queuing granularity for packets traversing the MLDNE.
  • A force_be bit (placed by the CPS and obtained after a type 2 search in the outbound subsystem) in the associated data of the matching type 2 entry overrides the priority received over the internal link with the packet, such that the packet will be forced to the lowest priority, thus providing some granularity in queuing at the external ports.
  • If the distrib_flow bit is not set, then a type 1 search is performed on the forwarding memory 423, and the packet is forwarded or flooded accordingly without the type 2 queuing granularity discussed above.
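The outbound-side choice just described can be summarized in a small C sketch: with the distrib_flow bit set and a matching type 2 entry found, the type 2 associated data drives queuing (and force_be can override the priority carried over the internal link); otherwise a plain type 1 search drives forwarding. Names and the exact priority encoding are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

struct out_decision { bool use_type2; uint8_t priority; };

/* link_priority: priority received with the packet over the internal link.
 * force_be:      bit in the associated data of the matching type 2 entry. */
static struct out_decision outbound_queueing(bool distrib_flow, bool type2_hit,
                                             bool force_be, uint8_t link_priority)
{
    struct out_decision d = { false, link_priority };
    if (distrib_flow && type2_hit) {
        d.use_type2 = true;
        if (force_be)
            d.priority = 0;   /* forced to the lowest ("best effort") priority */
    }
    return d;
}
```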
  • a multicast route requires two type 2 entries to be created by the CPS where the inbound and outbound subsystems are different.
  • the operations from block 723 to block 729 are performed by the outbound subsystem, be it the subsystem 410 or subsystem 420.
  • the outbound subsystem in decision block 723 determines whether the sa_replace bit has been set to indicate that the Layer 2 source address of each copy of the packet should be replaced with the Layer 2 address of the corresponding external port used for forwarding the packet outside the MLDNE. If not, then the packet may be forwarded using a Layer 2 search result.
  • The outbound subsystem, in particular an external port of the outbound subsystem, replaces the Layer 2 source address of the packet with a Layer 2 address of the external port. Operation then proceeds with block 727 where a CRC is recomputed for the modified Layer 2 header, and the packet is forwarded in block 729.
  • Figure 8A is a simplified diagram of the packet structure utilized. More particularly, as the inbound subsystem has determined certain information regarding the packet, e.g., routing, it is advantageous to simply convey this information to the outbound subsystem so that subsequent processing, such as the header field replacement, can easily be performed without reperforming the same steps performed by the inbound subsystem. Furthermore, it is desirable to maintain end-to-end error robustness.
  • the inbound subsystem encapsulates the packet 800 with control information 805 and a cyclic redundancy code (CRC) 810.
  • the outbound system receives the encapsulated packet, determines frame validity using CRC 810, strips the CRC 810 and removes the control information 805 to determine the subsequent processing to be performed to output the packet.
  • the control information includes information to instruct the outbound subsystem how to update the header information, if needed, before output.
  • the control information includes the following:
  • orig_tag - when set, indicates that the VLAN tag is the original tag the packet arrived with at the inbound subsystem;
  • mod_tag - when set, indicates that the VLAN tag the packet arrived with has been modified;
  • priority (2) - indicates the queuing priority level in the subsystem external ports for the particular packet.
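A hedged C sketch of the per-packet control information carried over the internal link, limited to the indicators actually named in this document (orig_tag, mod_tag, dont_tag, sa_replace, distrib_flow, and a two-bit priority). The bit ordering and overall width are assumptions; the real format may differ.

```c
#include <stdint.h>

/* Control information prepended to a packet before it crosses an internal
 * link; the outbound subsystem strips it and uses it to decide how the
 * headers must be modified on output.  Bit layout is illustrative. */
struct internal_link_ctrl {
    uint8_t orig_tag     : 1; /* VLAN tag is the one the packet arrived with    */
    uint8_t mod_tag      : 1; /* VLAN tag was modified by the inbound subsystem */
    uint8_t dont_tag     : 1; /* outbound subsystem must not tag the packet     */
    uint8_t sa_replace   : 1; /* replace Layer 2 source address on output       */
    uint8_t distrib_flow : 1; /* outbound subsystem should do a type 2 search   */
    uint8_t priority     : 2; /* queuing priority at the external ports         */
};
```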
  • A simplified block diagram illustrating the process for header field replacement of packets communicated through internal links is shown in Figure 8B.
  • the inbound subsystem includes elements to process the received packet prior to transmission to the outbound system and the outbound system includes elements that perform other functions in addition to those described herein.
  • the inbound system 825 receives the packet and accesses the memory containing the database (not shown) to obtain information regarding the packet, e.g., if the packet is to be routed or if VLAN routing is supported. Certain control information is generated and provided to the cascading output process (COP) 835 which prepends the control information to the packet and outputs the packet with the prepended control information to the output interface 840 which generates and appends a CRC to encapsulate the packet for output to the outbound subsystem 830.
  • the output interface is a media access controller (MAC); however, other interfaces could be used.
  • the outbound subsystem 830 receives the encapsulated packet at the input interface 845, which is preferably a MAC, performs frame validity checking and strips the CRC.
  • the input interface 845 outputs to the cascading input process (CIP) 850 the packet stripped of the CRC and the CIP 850 removes the control information and forwards the packet, stripped of the encapsulating CRC and control information, to the packet memory 855.
  • the control information is stored in the control field 857 corresponding to the packet stored in the memory 855.
  • the output port process 860 retrieves the packet and the control information from the packet memory 855 and based upon the control information, selectively performs modifications to the packet and issues control signals to the output interface 865 (i.e., MAC).
  • the OPP 860 strips the last 4 bytes of the packet corresponding to the CRC and asserts control signals to the MAC 865 to append a CRC and replace the source address with its own MAC address. For example, the OPP 860 issues a replace_SA signal and clears a no_CRC bit in a control word sent to the MAC 865.
  • the OPP 860 removes the VLAN tag field in the packet, strips the last 4 bytes of the packet corresponding to the CRC and issues a control signal to the MAC 865 to append a CRC.
  • the OPP 860 decodes orig_tag, mod_tag and dont_tag, and a fourth indicator, tag_enable.
  • Tag_enable is an internal variable which indicates that the network segment connected to this output port does not support VLAN tagging. This variable is determined by a network management mechanism based on the underlying network topology.
  • the result of the decoding process indicates whether the OPP 860 is to strip the tag and whether the MAC 865 is to generate a CRC.
  • the OPP decodes according to the following table:
  • the OPP 860 removes the tag, preferably as the tag is transferred to the MAC 865. If no CRC is to be generated, the OPP 860 sends a signal indicating that no CRC is to be generated (e.g., set no_CRC) and the MAC 865 transmits the packet as it is received. If the CRC is to be generated, the last 4 bytes are removed from the packet by the OPP 860 and a signal to generate the CRC is sent to the MAC 865 (e.g., clear no_CRC). The MAC 865, based upon the control signals received from the OPP 860, replaces the source address field with its own MAC address and generates a CRC that is appended to the end of the packet as the packet is output.
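The decode table itself is not reproduced in the text above, so the following C sketch is only one plausible reading of the decode based on the surrounding description: strip the tag when the packet must not be tagged or the attached segment does not support tagging, and regenerate the CRC whenever the outgoing frame differs from the received one. Treat the logic as an assumption, not the patent's table.

```c
#include <stdbool.h>

struct opp_decision {
    bool strip_tag;     /* OPP removes the VLAN tag as it transfers the frame */
    bool generate_crc;  /* clear no_CRC so the MAC appends a fresh CRC        */
};

/* One plausible decode of orig_tag, mod_tag, dont_tag and tag_enable.
 * Per the description above, tag_enable set means the network segment on
 * this output port does not support VLAN tagging. */
static struct opp_decision opp_decode(bool orig_tag, bool mod_tag,
                                      bool dont_tag, bool tag_enable)
{
    struct opp_decision d;
    bool tagged = orig_tag || mod_tag;              /* frame carries a tag now */
    d.strip_tag = tagged && (dont_tag || tag_enable);
    /* Regenerate the CRC whenever the frame no longer matches what arrived:
     * its tag was modified, or a tag is being stripped here. */
    d.generate_crc = mod_tag || d.strip_tag;
    return d;
}
```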
  • the encapsulation process can potentially extend the packet by a number of bytes. This can negatively affect the capacity of the link.
  • To compensate, the protocol parameters (in the present embodiment, the Ethernet protocol parameters) are fine tuned to reduce the preamble size by 5 bytes, reduce the interpacket gap by 5 bytes, and increase the maximum packet size by 10 bytes.
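As a back-of-the-envelope check (assuming standard Ethernet framing with an 8-byte preamble/SFD and a 12-byte interpacket gap), the tuning recovers exactly the byte-times consumed by the up-to-10-byte encapsulation, so the internal link's effective capacity is preserved:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed standard Ethernet per-packet overhead: 8 bytes of preamble
     * plus start-of-frame delimiter, and a 12-byte interpacket gap. */
    int standard_overhead = 8 + 12;                 /* 20 byte-times */

    /* Internal-link tuning described above: preamble reduced by 5 bytes,
     * interpacket gap reduced by 5 bytes, while the encapsulation
     * (prepended control information plus the extra CRC) may add up to
     * 10 bytes to the packet itself. */
    int tuned_overhead  = (8 - 5) + (12 - 5);       /* 10 byte-times */
    int encapsulation   = 10;

    printf("standard: %d byte-times, tuned + encapsulation: %d byte-times\n",
           standard_overhead, tuned_overhead + encapsulation);
    return 0;
}
```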

Abstract

A multi-layer distributed network element for relaying packets according to known routing protocols. A distributed architecture of multiple subsystems (210) delivers routing at wire-speed performance across subnetworks. Each subsystem (210) includes a forwarding memory (213) and an associated memory (214) and is configured to identify unicast and multicast packets for routing purposes, modify the packets in hardware, including replacing VLAN information, and forward the packets to the next hop. The routing decisions are made in the inbound subsystem (410), and packets and associated control information are forwarded, if necessary given the network topology, through a separate outbound subsystem (420). When packets traverse the internal links from one subsystem to another, encapsulation operations are conducted, such as appending an additional cyclic redundancy code (CRC) to the packet before it goes through the internal link.

Description

MECHANISM FOR PACKET FIELD REPLACEMENT IN A MULTILAYER DISTRIBUTED NETWORK ELEMENT
BACKGROUND
1. Field of the Invention
This invention relates generally to communication systems that couple computers, and more specifically to relaying messages through a network element.
2. Description of Related Art
Communication between computers has become an important aspect of everyday life in both private and business environments. Computers converse with each other based upon a physical medium for transmitting the messages back and forth, and upon a set of rules implemented by electronic hardware attached to and programs running on the computers. These rules, often called protocols, define the orderly transmission and receipt of messages in a network of connected computers.
A local area network (LAN) is the most basic and simplest network that allows communication between a source computer and destination computer. The LAN can be envisioned as a cloud to which computers (also called endstations or end-nodes) that wish to communicate with one another are attached. At least one network element will connect with all of the endstations in the LAN. An example of a simple network element is the repeater which is a physical layer relay that forwards bits. The repeater may have a number of ports, each endstation being attached to one port. The repeater receives bits that may form a packet of data that contains a message from a source endstation, and blindly forwards the packet bit-by-bit. The bits are then received by all other endstations in the LAN, including the destination.
A single LAN, however, may be insufficient to meet the requirements of an organization that has many endstations, because of the limited number of physical connections available to and the limited message handling capability of a single repeater. Thus, because of these physical limitations, the repeater-based approach can support only a limited number of endstations over a limited geographical area.
The capability of computer networks, however, has been extended by connecting different subnetworks to form larger networks that contain thousands of endstations communicating with each other. These LANs can in turn be connected to each other to create even larger enterprise networks, including wide area network (WAN) links.
To facilitate communication between subnetworks in a larger network, more complex electronic hardware and software have been proposed and are currently used in conventional networks. Also, new sets of rules for reliable and orderly communication among those endstations have been defined by various standards based on the principle that the endstations interconnected by suitable network elements define a network hierarchy, where endstations within the same subnetwork have a common classification. A network is thus said to have a topology which defines the features and hierarchical position of nodes and endstations within the network.
The interconnection of endstations through packet switched networks has traditionally followed a peer-to-peer layered architectural abstraction. In such a model, a given layer in a source computer communicates with the same layer of a peer endstation (usually the destination) across the network. By attaching a header to the data unit received from a higher layer, a layer provides services to enable the operation of the layer above it. A received packet will typically have several headers that were added to the original payload by the different layers operating at the source.
There are several layer partitioning schemes in the prior art, such as the Arpanet and the Open Systems Interconnect (OSI) models. The seven layer OSI model used here to describe the invention is a convenient model for mapping the functionality and detailed implementations of other models. Aspects of the Arpanet, however, (now redefined by the Internet Engineering Task Force, or IETF) will also be used in specific implementations of the invention to be discussed below.
The relevant layers for background purposes here are Layer 1 (physical), Layer 2 (data link), and Layer 3 (network), and to a limited extent Layer 4 (transport). A brief summary of the functions associated with these layers follows.
The physical layer transmits unstructured bits of information across a communication link. The repeater is an example of a network element that operates in this layer. The physical layer concerns itself with such issues as the size and shape of connectors, conversion of bits to electrical signals, and bit-level synchronization.
Layer 2 provides for transmission of frames of data and error detection. More importantly, the data link layer as referred to in this invention is typically designed to "bridge," or carry a packet of information across a single hop, i.e., a hop being the journey taken by a packet in going from one node to another. By spending only minimal time processing a received packet before sending the packet to its next destination, the data link layer can forward a packet much faster than the layers above it, which are discussed next. The data link layer provides addressing that may be used to identify a source and a destination between any computers interconnected at or below the data link layer. Examples of Layer 2 bridging protocols include those defined in IEEE 802 such as CSMA/CD, token bus, and token ring (including Fiber Distributed Data Interface, or FDDI).
Similar to Layer 2, Layer 3 also includes the ability to provide addresses of computers that communicate with each other. The network layer, however, also works with topological information about the network hierarchy. The network layer may also be configured to "route" a packet from the source to a destination using the shortest path. Finally, the network layer can control congestion by simply dropping selected packets, which the source might recognize as a request to reduce the packet rate.
Finally, Layer 4, the transport layer, provides an application program such as an electronic mail program with a "port address" which the application can use to interface with Layer 3. A key difference between the transport layer and the lower layers is that a program on the source computer carries a conversation with a similar program on the destination computer, whereas in the lower layers, the protocols are between each computer and its immediate neighbors in the network, where the ultimate source and destination endstations may be separated by a number of intermediate nodes. Examples of Layer 4 and Layer 3 protocols include the Internet suite of protocols such as TCP (Transmission Control Protocol) and IP (Internet Protocol).
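To make the layering concrete, here is a minimal C sketch of the headers a source endstation stacks in front of the payload: an Ethernet header at Layer 2, an IPv4 header at Layer 3, and a TCP header at Layer 4. The layouts follow the common on-the-wire formats and are not taken from the patent.

```c
#include <stdint.h>

/* Layer 2 (data link): Ethernet header. */
struct eth_hdr {
    uint8_t  dst[6];        /* MAC destination address */
    uint8_t  src[6];        /* MAC source address      */
    uint16_t ethertype;     /* e.g. 0x0800 for IPv4    */
};

/* Layer 3 (network): IPv4 header, options omitted. */
struct ipv4_hdr {
    uint8_t  ver_ihl;       /* version and header length   */
    uint8_t  tos;
    uint16_t total_len;
    uint16_t id;
    uint16_t frag_off;
    uint8_t  ttl;           /* time to live                */
    uint8_t  protocol;      /* e.g. 6 for TCP              */
    uint16_t checksum;
    uint32_t src_addr;      /* Layer 3 source address      */
    uint32_t dst_addr;      /* Layer 3 destination address */
};

/* Layer 4 (transport): TCP header, options omitted. */
struct tcp_hdr {
    uint16_t src_port;      /* application "port address"  */
    uint16_t dst_port;
    uint32_t seq;
    uint32_t ack;
    uint16_t off_flags;     /* data offset and flags       */
    uint16_t window;
    uint16_t checksum;
    uint16_t urgent;
};
```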
Endstations are the source and ultimate destination of a packet, whereas a node refers to an intermediate point between the endstations. A node will typically include a network element which has the capability to receive and forward messages on a packet- by-packet basis.
Generally speaking, the larger and more complex networks typically rely on nodes that have higher layer (Layers 3 and 4) functionalities. A very large network consisting of several smaller subnetworks must typically use a Layer 3 network element known as a router which has knowledge of the topology of the subnetworks.
A router can form and store a topological map of the network around it based upon exchanging information with its neighbors. If a LAN is designed with Layer 3 addressing capability, then routers can be used to forward packets between LANs by taking advantage of the hierarchical routing information available from the endstations. Once a table of endstation addresses and routes has been compiled by the router, packets received by the router can be forwarded after comparing the packet's Layer 3 destination address to an existing and matching entry in the memory.
The router operates by parsing the header of a received packet, making decisions based on a routing table inside the router, and forwarding the packet, with any required header modifications, to the next node or endstation. Thus, the packet will go through several such "hops" before reaching its destination where a hop is defined as the packet traveling from one node or endstation to another node or endstation.
In comparison to routers, bridges are network elements operating in the data link layer (Layer 2) rather than Layer 3. They have the ability to forward a packet based only on the Layer 2 address of the packet's destination, typically called the medium access control (MAC) address. Generally speaking, bridges do not modify the packets. Bridges forward packets in a flat network having no hierarchy without any cooperation from the endstations.
Hybrid forms of network elements also exist, such as brouters and switches. A brouter is a router which can also perform as a bridge. The term switch refers to a network element which is capable of forwarding packets at high speed with functions implemented in hardwired logic as opposed to a general purpose processor executing instructions. Switches come in many flavors, operating at both Layer 2 and Layer 3.
Having discussed the current technology of networking in general, the limitations of such conventional techniques will now be addressed. With an increasing number of users requiring increased bandwidth from existing networks due to multimedia applications to run on the modern day Internet, modern and future networks must be able to support a very high bandwidth and a large number of users. Furthermore, such networks should be able to support multiple traffic types such as voice and video which typically require different service characteristics. Statistical studies show that the network domain, i.e., a group of interconnected LANs, as well as the number of individual endstations connected to each LAN, will grow at a faster rate in the future. Thus, more network bandwidth and more efficient use of resources is needed to meet these requirements.
Building networks using Layer 2 elements such as bridges provides fast packet forwarding between LANs but has no flexibility in traffic isolation, redundant topologies, and end-to-end policies for queuing and access control. Endstations in a subnetwork can invoke conversations based on either Layer 3 or Layer 2 addressing. As bridges forward packets based only on Layer 2 parsing, they provide simple yet speedy forwarding services. However, the bridge does not support the use of high layer handling directives including queuing, priority, and forwarding constraints between endstations in the same subnetwork.
A prior art solution to enhancing bridge-like conversations within a subnetwork relies on a network element that uses a combination of Layer 2 and upper layer headers. In that system, the Layer 3 and Layer 4 information of an initial packet are examined, and a "flow" of packets is predicted and identified using a new Layer 2 entry in the forwarding memory, with a fixed quality of service (QOS). Thereafter, subsequent packets are forwarded at Layer 2 speed (with the fixed QOS) based upon a match of the Layer 2 header with the Layer 2 entry in the forwarding memory. Thus, no entries with Layer 3 and Layer 4 headers are placed in the forwarding memory to identify the flow.
However, consider the scenario where there are two or more programs communicating between the same pair of endstations, such as an electronic mail program and a video conferencing session. If the programs have dissimilar QOS needs, the prior art scheme just presented will not support different QOS characteristics between the same pair of endstations, because the prior art scheme does not consider information in Layer 3 and Layer 4 when forwarding. Thus, there is a need for a network element that is flexible enough to support independent priority requests from applications running on endstations connected to the same subnetwork.
The latter attributes may be met using Layer 3 elements such as routers. But packet forwarding speed is sacrificed in return for the greater intelligence and decision making capability provided by the router. Therefore, networks are often built using a combination of Layer 2 and Layer 3 elements.
The role of the server has multiplied with browser-based applications that use the Internet, leading to increasing variation in traffic distribution. When the role of the server was narrowly limited to a file server, for example, the network was designed with the client and the file server in the same subnetwork to avoid router bottlenecks. However, more specialized servers like World Wide Web and video servers are typically not on the client's subnetwork, such that crossing routers is unavoidable. Therefore, the need for packets to traverse routers at higher speeds is crucial. The choice of bridge versus router typically results in a significant trade-off: lower functionality when using bridges, and lower speed when using routers. Furthermore, the service characteristics within a network are no longer homogeneous, as the performance of a server becomes location dependent if its traffic patterns involve routers.
Therefore, there is a need for a network element that can handle changing network conditions such as topology and message traffic yet make efficient use of high performance hardware to switch packets based on their Layer 2, Layer 3, and Layer 4 headers. The network element should be able to operate at bridge-like speeds, yet be capable of routing packets across different subnetworks and provide upper layer functionalities such as quality of service.
SUMMARY
The invention is an apparatus and related method for relaying packets by a multilayer distributed network element according to known routing protocols.
The invention is directed at a multi-layer distributed network element (MLDNE) for receiving and forwarding packets using known routing protocols. The MLDNE has a number of subsystems that are coupled by internal links. Each subsystem has a forwarding memory and associated memory. The memories associate packet header information including addresses with routing information. A subsystem also includes external ports that connect with neighboring nodes and endstations, and internal ports that connect with other subsystems through the internal links.
When a packet is received by a first "inbound" subsystem, the subsystem determines whether the packet should be routed based upon a first header portion, including a Layer 2 destination address of the received packet, matching a Layer 2 address of the MLDNE. If the first header portion of the received packet matches the MLDNE address, then the first subsystem determines, using its forwarding memory, whether a route has been previously determined for a second header portion, including Layer 3 source and destination addresses, of the received packet.
If a type 2 entry in the forwarding memory matches the received packet's second header portion, then a neighbor node's Layer 2 address (found in associated memory) replaces the Layer 2 destination address of the packet. The neighbor node's address was previously stored in the associated memory as part of the routing information associated with the matching type 2 entry. In addition to Quality of Service information, the routing information in the associated memory also identifies the external ports of the inbound subsystem that connect with the neighbor node. If the neighbor node is connected to a subsystem other than the inbound subsystem, the situation would have been recognized at the time the matching type 2 entry was created such that the associated memory would identify the internal port of the inbound subsystem, rather than external port, that connects with the other subsystem to which the neighbor node or endstation is connected. When the packet is received over the internal link by a second subsystem, the packet is forwarded to the neighbor node in response to the packet's new first header portion matching a type 1 entry in the second forwarding memory. The type 1 entry in the second subsystem contains the address of the neighbor node or endstation and had been created independently of the matching type 2 entry of the inbound subsystem.
After determining that a received packet should be routed, the inbound subsystem also generates a first control signal which indicates to the external port that eventually forwards the packet that a third header portion identifying the packet's source be modified before sending the packet to the neighbor node. A Layer 2 source address of the packet is replaced with a source address associated with the external port. The control signal is also passed over an internal link to the second subsystem if the neighbor node is reachable through that subsystem.
The invention's distributed architecture can also be configured to support routing of multicast packets. Once a multicast routable packet has been identified in the inbound subsystem, a second control signal may be sent across an internal link in response to which the second subsystem performs a type 2 search of the forwarding memory (based on the network layer and higher layer headers of the packet). If a matching type 2 entry is found, then the external ports of the second subsystem check the first control signal (also received from the inbound subsystem) to see if the source address of the packet needs to be replaced, and the packet is then forwarded with the appropriate modifications to its headers. The first control signal may also be received and checked by the external ports of the inbound system where the multicast destination group includes nodes/endstations connected to the inbound subsystem.
The invention's search engine, forwarding engine, and data structures are organized in a way that supports bridging and routing functions simultaneously, where if routing criteria are not met for a received packet, then bridging functions are provided automatically.
In its present embodiment, the invention is implemented with the data link layer (Layer 2), the network layer (Layer 3) and higher layers including the transport layer (Layer 4).
DRAWINGS
The foregoing aspects and other features of the invention will be better understood by referring to the figures, detailed description, and claims below where: Figure 1 is a high level view of an exemplary network application of a multilayer distributed network element (MLDNE) of the invention.
Figure 2 is an internal view of the MLDNE as an embodiment of the invention.
Figure 3 illustrates an exemplary forwarding and associated memory of a subsystem in the MLDNE, including associated data for the routing of packets, according to another embodiment of the invention.
Figure 4 is a block diagram of an embodiment of the MLDNE having only two subsystems and acting as a router between a client and a server.
Figure 5 is a flow diagram of processing a received packet for routing purposes by the invention's network element.
Figure 6 is a continuation of the flow diagram in Figure 5 and includes steps performed in processing a unicast packet.
Figure 7 shows exemplary steps and operations performed by the invention's network element for routing a multicast packet.
Figure 8A is a simplified block diagram of a packet structure utilized in one embodiment of the invention.
Figure 8B is a structure for header field replacement of packets by the invention.
DETAILED DESCRIPTION
As shown in the drawings by way of illustration, the invention defines a network element that is used to interconnect a number of nodes and endstations in a variety of different ways. In particular, an application of the multi-layer distributed network element (MLDNE) would be to route packets according to predefined routing protocols over a homogenous data link layer such as the IEEE 802.3 standard, or Ethernet. Figure 1 illustrates the invention's use as a router in a network where the MLDNE 201 couples a client C to the Router 107 which in turn couples with the Server 105. The MLDNE 201 can interconnect a number of desktop units (endstations), while acting as an intermediate node, through its external connections 217. The MLDNE 201 is capable of providing a high performance communication path between servers and desktop units while acting as a router, where the Server 105 and the client C reside in different LANs. The MLDNE's distributed architecture can be configured to route message traffic in accordance with a number of known routing algorithms such as RIP and OSPF. In a preferred embodiment, the MLDNE is configured to handle message traffic using the
Internet suite of protocols, and more specifically the Transmission Control Protocol
(TCP) and the Internet Protocol (IP) over the Ethernet LAN standard and medium access control (MAC) data link layer. The TCP is also referred to here as an exemplary Layer 4 protocol, while the IP is referred to repeatedly as a Layer 3 protocol. However, other protocols can be used to implement the concepts of the invention.
In a first embodiment of the invention's MLDNE, a network element is configured to implement packet routing functions in a distributed manner, i.e., different parts of a function are performed by identical building block subsystems in the MLDNE, while the final result of the functions remains transparent to the external nodes and endstations. As will be appreciated from the discussion below and the diagram in Figure 2, the MLDNE has a scalable architecture which allows the designer to increase the number of external connections by adding additional subsystems.
As illustrated in block diagram form in Figure 2, the MLDNE 201 contains a number of identical subsystems 210 that are fully meshed and interconnected using a number of internal links 241 to create a larger network element. At least one internal link couples any two subsystems. Each subsystem 210 includes a forwarding memory 213 and an associated memory 214. The forwarding memory 213 stores an address table used for matching with the headers of received packets. The associated memory stores data associated with each entry in the forwarding memory that is used to identify forwarding attributes for forwarding the packets through the MLDNE. A number of external ports (not shown) having input and output capability interface with the external connections 217. Internal ports (not shown) also having input and output capability in each subsystem couple the internal links 241. In the preferred embodiment, the external and internal ports lie within a hardwired-logic switching element 211 implemented by an application specific integrated circuit (ASIC).
A received packet arrives at an inbound subsystem through one of the external connections 217, and will be forwarded to a node or endstation outside the MLDNE through another external connection in an outbound subsystem. The outbound and inbound subsystems can be either the same or different subsystems.
Referring to Figure 2, the MLDNE 201 includes a central processing system (CPS) 260 that is coupled to the individual subsystems 210 through a communication bus 251 such as the Peripheral Components Interconnect (PCI). The CPS 260 includes a central processing unit (CPU) 261 coupled to a central memory 263. Central memory
263 includes a copy of the entries contained in the individual forwarding memories 213 of the various subsystems. The CPS has a direct control and communication interface to each subsystem 210. The CPS is also configured with a number of routing protocols that are used to identify a neighbor node as part of a route for forwarding a received packet to its ultimate destination, normally specified in the Layer 3 destination address of the packet. Other responsibilities of the CPS 260 include setting data path resources such as packet buffers between the different subsystems. Finally, the CPS 260 performs the important task of determining whether or not a type 2 entry should be added to the forwarding memory of each individual subsystem.
Figure 3 takes a closer look at the forwarding and associated memories in each subsystem. The forwarding memory includes a number of entries of two types, type 2 entry 321 and type 1 entry 301. Each entry in the forwarding memory includes data to be compared with the headers of received packets. For the particular embodiment of TCP/IP, the data fields for each type 2 entry 321 include a class field 323, an IP source field 325, an IP destination field 327, an application source port 333, an application destination port 335, and an Inbound Port field 337. For the type 1 entry 301, a class field, a Layer 2 address field, and a VLAN identification (VID) field are shown in the exemplary embodiment. Of course, additional header information and similar definitions using alternate network and transport layer protocols can be developed and included in each entry and used for matching the headers of received packets, as will be apparent to one skilled in the art.
Associated with each type 2 entry 321 and type 1 entry 301 are associated data stored in associated memory 214. The associated data fields contain information needed to forward a matching packet received by the subsystem. The subsystem port field 347 identifies the internal or external ports of the subsystem used for forwarding the matching packet to the neighboring node in the next hop. The next hop address field 357 identifies the neighbor node's Layer 2 address which replaces the original Layer 2 destination address of a received unicast packet to be routed. A priority field 345 is used for queuing purposes by the external port which actually sends the packet outside the MLDNE. The age fields 343 and 344 help minimize the number of entries in the forwarding memory by indicating that a recently received packet has matched the corresponding type 1 or type 2 entry.
A NEW VID address field 353 allows the MLDNE to be configured to support virtual LANs (VLANs). The associated data also includes a NEW VLAN identification (VID) TAG field, used to notify the subsystem of a need to change the packet's VID, particularly when forwarding the packet across subnetworks. The inbound subsystem in response will either insert a new tag, or replace an existing tag with the value in the NEW
VID field. For example, when routing between VLANs requires the forwarded packet's tag to be different from the received packet's tag, then the NEW VID field will contain the replacement tag for the subsystem to replace before forwarding the packet.
Whenever a packet is sent across an internal link, additional control information may be made available over the internal link to the outbound subsystem receiving the packet. Such information, in addition to the sa_replace bit discussed below, includes an orig_tag bit which indicates whether or not the received packet was originally tagged with VLAN information, a mod_tag bit which indicates whether the tag was modified by the inbound subsystem, and a dont_tag bit which indicates that the received packet should not be tagged by the outbound subsystem.
Finally, the associated memory can be configured to include a multicast route field 355 which activates multicast routing capability in the subsystem as further explained below.
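For illustration only, the two entry types and their associated data can be modeled as simple records. The sketch below is a minimal Python rendering under assumed, illustrative names (Type1Entry, Type2Entry, AssociatedData); it is not part of the patent's specification.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Type1Entry:                  # Layer 2 (bridging) entry
        entry_class: str               # class field
        mac: bytes                     # Layer 2 address
        vid: int                       # VLAN identification (VID)

    @dataclass
    class Type2Entry:                  # Layer 3/4 (flow or route) entry
        entry_class: str               # e.g. "route"
        ip_src: str
        ip_dst: str
        app_src_port: int              # application (Layer 4) source port
        app_dst_port: int
        inbound_port: str              # port of arrival

    @dataclass
    class AssociatedData:              # associated memory contents
        subsystem_ports: List[str]     # internal/external ports used to forward
        next_hop_mac: Optional[bytes] = None   # replaces the Layer 2 destination when routing
        priority: int = 0              # queuing priority at the forwarding external port
        new_vid: Optional[int] = None  # replacement VLAN tag, when needed
        multicast_route: bool = False  # activates multicast routing for this entry
        aged: bool = False             # age indication used to limit stale entries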
The routing operation of the MLDNE 201 will be described for an exemplary embodiment using the flow diagram of Figures 5-7 in conjunction with the exemplary network application in Figure 4. References to fields in the forwarding and associated memories are found in Figure 3. In the example below, the journey of a packet is traced beginning with a client C in subnetwork 103 coupled to an external connection of MLDNE 201. The client C sends a packet to server 105 which is identified in the Layer 3 destination address field of the packet's header. The packet must traverse a router 107 which is assumed to have a Layer 2 address known by the MLDNE 201.
Beginning with block 503 in Figure 5, a packet is received by the MLDNE 201 at external port E1 of the inbound subsystem 410. The packet includes a message originated from a client C having a Layer 3 address in a logically defined subnetwork 103. Subsystem 410 is configured to recognize that external ports E1 and E2 couple the subnetwork 103.
When the packet is received by switching element 411, operation continues with decision block 507 where a first header portion, including the Layer 2 destination address in the present embodiment, of the received packet is compared with a router address of the MLDNE 201. The router address may be a Layer 2 address assigned to external port E1, or a Layer 2 address assigned to the MLDNE as a whole. Normally, the MLDNE will be configured so that each external port is assigned its own router address. If the first header portion of the received packet matches the router address, then operation proceeds to block 515 where the packet is declared to be a potential unicast route candidate. If, however, the first header portion does not match the router address, then operation proceeds to block 509 where the packet is declared as not being a unicast routable packet. As will be appreciated below, such a packet can still be a multicast packet having a multicast route available in the MLDNE.
For a unicast packet of the route class, block 517 performs a search of the forwarding memory 413 for a matching type 2 entry using "route" as the class field 323.
The search of the forwarding memory in block 517 leads to the decision block 521 where the test is whether a type 2 matching entry exists in the forwarding memory 413. If not, then operation proceeds with block 523 where relevant portions of the received packet headers are sent to the CPS via the CPS port in subsystem 410 and the CPS bus 451.
When the CPS 460 receives the portions of the headers of the "missed" packet from subsystem 410 in block 533, the CPS then examines access policies and class of service policies that have been preconfigured in the CPS, and the CPS Layer 2 and Layer 3 topology tables. The CPS has the option of denying service to the path requested by the received packet, performing the routing function entirely in its own software, or preparing a type 2 entry in the inbound system's forwarding memory for the route.
The routing algorithms of the MLDNE 201 are implemented by the CPS. If a unicast route exists or can be readily computed for the received packet, then the CPS decides in decision block 537 to proceed with block 539 and add a route class type 2 entry 321 to the forwarding memory, and associated data to the associated memory, of the inbound subsystem 410. If the neighbor node connects to an external port of the inbound subsystem 410, as determined by the CPS consulting a Layer 2 table in the central memory, then the external port is identified in the new type 2 entry's associated subsystem port field 347. Similarly, if the neighbor node connects to the subsystem 420, then an internal port I1 or I2 is identified.
Returning to decision block 521, if the packet matches an existing route class type 2 entry in the forwarding memory 413 of the inbound subsystem 410, then the received packet is forwarded as a unicast packet as illustrated in exemplary form in Figure 6.
Turning now to Figure 6 and staying in the inbound subsystem, the switching element 411 evaluates whether the unicast packet's time to live has been exceeded. A time to live field is assumed to exist in the received packet's headers. If the packet has been circulating through the network too long as indicated by its time to live field, then the inbound subsystem only sends the received packet to the CPS, and a time exceeded error message, in accordance with, for example, the Internet Control Message Protocol (ICMP) or as discussed in the Request For Comments (RFC) maintained by the Internet community, is generated by the CPS as in block 609.
If, on the other hand, the packet's time to live (TTL) has not been exceeded, then operation continues with block 619 where the TTL is decremented. This modification to the packet's header will normally require compensating the packet's Layer 3 header checksum as in block 621. In block 611, the switching element 411 replaces the Layer 2 destination address of the received packet with the next hop Layer 2 address found in the associated memory corresponding to the matching type 2 entry determined in block 521 of Figure 5.
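Because decrementing the TTL changes a Layer 3 header word, the IP header checksum can be compensated incrementally rather than recomputed over the whole header. The sketch below shows the general technique (in the style of RFC 1624) in Python; it illustrates checksum compensation in software, not the patent's hardware.

    def fold16(x: int) -> int:
        # Fold carries for ones'-complement 16-bit arithmetic.
        while x >> 16:
            x = (x & 0xFFFF) + (x >> 16)
        return x

    def incremental_checksum(old_cksum: int, old_word: int, new_word: int) -> int:
        # RFC 1624: HC' = ~(~HC + ~m + m'), with m the old 16-bit word and m' the new one.
        hc = fold16((~old_cksum & 0xFFFF) + (~old_word & 0xFFFF) + (new_word & 0xFFFF))
        return ~hc & 0xFFFF

    def decrement_ttl(ttl: int, protocol: int, cksum: int):
        # The TTL shares a 16-bit header word with the protocol field in the IPv4 header.
        old_word = (ttl << 8) | protocol
        new_word = ((ttl - 1) << 8) | protocol
        return ttl - 1, incremental_checksum(cksum, old_word, new_word)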
If the MLDNE 201 is configured to support VLANs, then decision block 615 determines whether a new VLAN identification tag is required by checking the status of the NEW VID tag field 351.
Whether or not the packet is to be forwarded outside the MLDNE by another subsystem (as indicated by the subsystem port field 347 associated with the matching type 2 entry) a first control signal, such as a sa_replace bit, is prepared in block 621. The sa_replace bit will be handed off to the external and internal ports indicated in the subsystem port field 347, and thus may be transferred over an internal link 441, together with the packet, to the subsystem 420. The first control signal will notify the subsystem (either the inbound one or another subsystem) to replace the Layer 2 source address of the packet with the source address of the external port used for forwarding the packet.
In the example of Figure 4, the packet together with any control information, including the first control signal, are processed by internal port I2 in switching element 411, and delivered to the internal link 441 to connect with the outbound subsystem 420 in block 627. Alternatively, however, the modified packet and control information stay in the inbound subsystem and are processed by an external port, where operation continues in block 630.
In block 627, the packet is received over the internal links in outbound subsystem 420. A type 1 matching cycle then begins and decision block 629 is reached to determine whether a matching type 1 entry exists in the forwarding memory 423. If a type 1 entry exists then operation continues with block 630. The operations from block 630 to block 637 are performed by the "outbound" subsystem where the packet leaves the MLDNE, be it the inbound subsystem 410 or a different subsystem 420. If the sa_replace bit, as checked in decision block 630, is set, then the switching element replaces a third header portion, including at least the Layer 2 source address of the received packet, with the Layer 2 address of the external port E3 through which the packet must be forwarded. The external port E3 was identified in the associated data (in associated memory) corresponding to either the matching type 1 entry found in block 629 (the packet came across the internal link) or the matching type 2 entry found in block 521 (the packet remained in the inbound subsystem).
The MLDNE can be configured so that each external port is assigned a unique Layer 2 address. Alternatively, a single source address may be assigned to the MLDNE as a whole and shared by all external ports. In either case, following the replacement of the third header portion, the cyclic redundancy code (CRC) of the packet's headers is recomputed in block 635 and the packet is then forwarded to the neighbor node, which is the router 107 in Figure 4.
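A minimal sketch of this last header field replacement step, assuming an Ethernet frame whose FCS has already been stripped; zlib.crc32 computes the IEEE 802.3 CRC-32 used for the frame check sequence, and the helper name is illustrative rather than taken from the patent.

    import zlib

    def replace_source_and_fcs(frame_without_fcs: bytes, port_mac: bytes) -> bytes:
        # Ethernet header layout: destination MAC (6 bytes) | source MAC (6 bytes) | ...
        out = frame_without_fcs[:6] + port_mac + frame_without_fcs[12:]
        fcs = zlib.crc32(out) & 0xFFFFFFFF
        # The FCS is appended least-significant byte first.
        return out + fcs.to_bytes(4, "little")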
In the above example, the packet's journey has been described originating from the client C and traveling through subsystem 410, internal link 441, and subsystem 420 in MLDNE 201. The packet is then received by router 107 and forwarded according to conventional means to server 105. The above, of course, assumed that a route for the server 105 as a destination through router 107 had been previously obtained by the MLDNE 201 using conventional techniques for determining the routes.
The above also covered the situation where although a unicast packet falls within the route class, no type 2 matching entry existed in the inbound subsystem to be used for routing the packet through the MLDNE. Thus, the decision as to whether or not a received packet will be routed is made in the inbound subsystem, in particular, in decision blocks 507 and 521 of Figure 5. Note also that routing policies as well as class of service queuing have the granularity and flexibility of Layer 3 end-to-end addresses and protocol based classification. These routing policies and class of service queuing are identified in the associated data corresponding to each matching type 2 entry, and may be sent across the internal link to a separate outbound subsystem.
Multicast Routing
Having discussed the unicast routing aspects of the invention, the routing features of the invention for multicast packets are now presented while referring once again to the entries in the forwarding and associated memories of Figure 3 and the flow diagram of Figure 7. Although multicast routing in the invention's MLDNE can be supported by similar hardware structures that implement unicast routing in the MLDNE, multicast does present significantly different problems to the network element designer. For instance, the routing protocols used to derive the type 2 entries in the forwarding memory include protocols such as MOSPF and DVMRP which are well-known in the art. These multicast routing protocols produce a loop-free distribution tree for the packet's group destination network layer multicast address and a source network layer address for the sender.
The MLDNE has a local multicast forwarding rule which yields a number of external ports (and their corresponding subsystems) for forwarding the packet, as a function of a received multicast packet's group destination Layer 3 address, source Layer 3 address, and the inbound subsystem port of arrival. This dependency is reflected in the type 2 entry in the forwarding memory of Figure 3 as the fields 327, 325, and 337, respectively, to be matched with a received packet's headers. The inbound port of arrival field 337 is included to prevent forwarding duplicate packets over alternate paths.
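The local multicast forwarding rule can be pictured as a table lookup keyed on those three values. The sketch below uses hypothetical addresses and port names purely for illustration.

    # (group destination L3 address, source L3 address, port of arrival) -> output ports
    mcast_table = {
        ("224.1.2.3", "10.0.0.5", "E1"): ["E3", "I2"],
    }

    def multicast_output_ports(group_dst: str, src: str, arrival_port: str) -> list:
        # Keying on the port of arrival prevents forwarding duplicate copies of a
        # packet that also arrives over an alternate path.
        return mcast_table.get((group_dst, src, arrival_port), [])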
To identify a received packet as a candidate for multicast routing, the MLDNE is configured to identify a multicast packet based on at least two criteria. First, the packet headers must match a given class. Second, the packet's headers must match an existing type 2 entry that refers to a multicast group destination address. The matching type 2 entry for the multicast case may be created as a result of executing a multicast registration protocol such as IGMP.
Figure 7 illustrates an exemplary flow diagram for routing a received multicast packet through the MLDNE 201 of Figure 4. When a packet is received by the subsystem 410 and the packet headers match a certain class and a type 2 entry 321 which has a multicast route field 355 indicating that the entry is for multicast routing, as in block 703, control is transferred to the decision block 705. If the packet's time to live has not been exceeded, then the routing operation continues in block 709 in the inbound subsystem 410 by decrementing the time to live field in the received packet's header. If the packet's TTL was exceeded, then in block 707 the packet may be flooded, not routed, to its VLAN. A packet's VLAN, in general, defines the Layer 2 topology used for flooding, in other words the broadcast domain.
Proceeding to block 711, the inbound subsystem 410 determines whether a new VLAN tag is required for the received packet, based on the NEW VID tag field 351 in the associated memory. If so, then the VID in the Layer 2 header of the packet is replaced with the destination VID of the next hop, as found in the associated memory, as in block 713. Note that block 713 is performed only if the Layer 3 multicast destination address of the received packet refers to endstations that lie within the same VLAN. Such a determination was made by the CPS when the type 2 entry was created.
Whether or not VLANs are supported by the MLDNE, in block 715 the inbound subsystem 410 prepares to notify the external ports that will forward the packets outside the MLDNE of a need to route the packet by setting the first control signal (sa_replace bit) to indicate to the forwarding external ports that the Layer 2 source address of the packet to be forwarded must be replaced with the source address of the external port. Once the changes have been made to the network layer header, in particular, the portion that includes the time to live (TTL) field, the inbound subsystem compensates the packet's header check sum value in block 717. The inbound subsystem 410 then hands off copies of the packet to the external and internal ports of the inbound subsystem 410 that are identified in the subsystem ports field 347 of the associated memory as corresponding to the matching type 2 entry, as in block 719.
In the case where a copy of the packet traverses an internal link and arrives at a different subsystem 420 in block 720, operation proceeds with decision block 721 where a second control signal, here called the distributed flow (DF or distrib_flow) bit, may be received by the outbound subsystem 420. If the DF bit is set, then a class filter determines the class of the packet, based upon the packet's headers, and a type 2 search (with the identified class) is conducted in block 722.
The distrib_flow construct allows the CPS to define a type 2 entry in the outbound subsystem 420 corresponding to the matching multicast route entry in the inbound subsystem. This allows different priorities to be assigned by the CPS to the different external ports that will service the multicast route, to further control queuing granularity for packets traversing the MLDNE. A force_be bit (placed by the CPS and obtained after a type 2 search in the outbound subsystem) in the associated data of the matching type 2 entry overrides the priority received over the internal link with the packet, such that the packet will be forced to the lowest priority, thus providing some granularity in queuing at the external ports.
If the distrib_flow bit is not set, then a type 1 search is performed on the forwarding memory 423, and the packet is forwarded or flooded accordingly without the type 2 queuing granularity discussed above.
If a matching type 1 or type 2 entry is found, then the packet is handed off to the external ports identified in the associated memory corresponding to the matching entry. Thereafter, operation proceeds with block 723. Thus, a multicast route requires two type 2 entries to be created by the CPS where the inbound and outbound subsystems are different.
The operations from block 723 to block 729 are performed by the outbound subsystem, be it the subsystem 410 or subsystem 420. The outbound subsystem in decision block 723 determines whether the sa_replace bit has been set to indicate that the Layer 2 source address of each copy of the packet should be replaced with the Layer 2 address of the corresponding external port used for forwarding the packet outside the MLDNE. If not, then the packet may be forwarded using a Layer 2 search result.
If there is an indication to replace the Layer 2 source address for routing purposes, then in block 725, the outbound subsystem, in particular an external port of the outbound subsystem, replaces the Layer 2 source address of the packet with a Layer 2 address of the external port. Operation then proceeds with block 727 where a CRC is recomputed for the modified Layer 2 header, and the packet is forwarded in block 729.
An innovative structure and method for transmitting the packet and control information across the internal link will now be described with reference to Figures 8A and 8B. Figure 8A is a simplified diagram of the packet structure utilized. More particularly, as the inbound subsystem has determined certain information regarding the packet, e.g., routing, it is advantageous to simply convey this information to the outbound subsystem so that subsequent processing, such as the header field replacement, can easily be performed without repeating the same steps performed by the inbound subsystem. Furthermore, it is desirable to maintain end-to-end error robustness. Thus, the inbound subsystem encapsulates the packet 800 with control information 805 and a cyclic redundancy code (CRC) 810. The outbound subsystem receives the encapsulated packet, determines frame validity using CRC 810, strips the CRC 810 and removes the control information 805 to determine the subsequent processing to be performed to output the packet.
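The encapsulation on the inbound side and the validity check and stripping on the outbound side can be sketched as follows. The 2-byte control-word size is an assumption drawn from the bit counts listed below, and CRC-32 stands in for whatever check the internal link actually carries.

    import struct
    import zlib

    CONTROL_LEN = 2  # assumed size of the prepended control information, in bytes

    def encapsulate(packet: bytes, control: bytes) -> bytes:
        # Inbound subsystem: prepend control information, append a CRC over the whole frame.
        frame = control + packet
        return frame + struct.pack("<I", zlib.crc32(frame) & 0xFFFFFFFF)

    def decapsulate(frame: bytes):
        # Outbound subsystem: check frame validity, then strip the CRC and control information.
        body, crc = frame[:-4], struct.unpack("<I", frame[-4:])[0]
        if zlib.crc32(body) & 0xFFFFFFFF != crc:
            raise ValueError("internal link CRC mismatch")
        return body[:CONTROL_LEN], body[CONTROL_LEN:]  # (control information, original packet)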
The control information includes information to instruct the outbound subsystem how to update the header information, if needed, before output. In the present embodiment, the control information includes the following fields (a bit-packing sketch follows the list):
• replace_sa - when set, indicates that the source address field of the header is to be replaced with the outbound subsystem's output MAC address;
• orig_tag - when set, indicates that the VLAN tag is the original tag the packet arrived with at the inbound subsystem;
• mod_tag - when set, indicates that the VLAN tag the packet arrived with has been modified;
• dont_tag - when set, indicates that the VLAN tag is not to be used regardless of the state of the orig_tag and the mod_tag (in the present embodiment, this is typically used when packets arrive from the CPS 260);
• distributed_flow - when set, indicates that a Layer 3, rather than a Layer 2, search should be conducted initially for the packet;
• priority (2) - indicates the queuing priority level in the subsystem external ports for the particular packet;
• reserved (9)
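The fields listed above total 16 bits (1+1+1+1+1+2+9). The packing sketch below assumes an arbitrary bit ordering, since the publication does not specify one.

    def pack_control_word(replace_sa: int, orig_tag: int, mod_tag: int,
                          dont_tag: int, distributed_flow: int, priority: int) -> bytes:
        # Assumed layout: bit 0 = replace_sa ... bits 5-6 = priority, bits 7-15 reserved.
        word = ((replace_sa & 1)
                | (orig_tag & 1) << 1
                | (mod_tag & 1) << 2
                | (dont_tag & 1) << 3
                | (distributed_flow & 1) << 4
                | (priority & 0x3) << 5)
        return word.to_bytes(2, "big")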
A simplified block diagram illustrating the process for header field replacement of packets communicated through internal links is illustrated in the diagram of Figure 8B. For purposes of explanation, a number of functional elements not relevant to the process of performing header field replacement are not shown or described. However, it is readily apparent to one skilled in the art that the inbound subsystem includes elements to process the received packet prior to transmission to the outbound subsystem, and the outbound subsystem includes elements that perform other functions in addition to those described herein.
Referring to Figure 8B, the inbound subsystem 825 receives the packet and accesses the memory containing the database (not shown) to obtain information regarding the packet, e.g., whether the packet is to be routed or whether VLAN routing is supported. Certain control information is generated and provided to the cascading output process (COP) 835, which prepends the control information to the packet and outputs the packet with the prepended control information to the output interface 840, which generates and appends a CRC to encapsulate the packet for output to the outbound subsystem 830. Preferably the output interface is a media access controller (MAC); however, other interfaces could be used.
The outbound subsystem 830 receives the encapsulated packet at the input interface 845, which is preferably a MAC, performs frame validity checking and strips the CRC. The input interface 845 outputs to the cascading input process (CIP) 850 the packet stripped of the CRC, and the CIP 850 removes the control information and forwards the packet, stripped of the encapsulating CRC and control information, to the packet memory 855. The control information is stored in the control field 857 corresponding to the packet stored in the memory 855. The output port process (OPP) 860 retrieves the packet and the control information from the packet memory 855 and, based upon the control information, selectively performs modifications to the packet and issues control signals to the output interface 865 (i.e., MAC).
In one embodiment, which occurs when the packet is to be routed, the OPP 860 strips the last 4 bytes of the packet corresponding to the CRC and asserts control signals to the MAC 865 to append a CRC and replace the source address with its own MAC address. For example, the OPP 860 issues a replace_SA signal and clears a no_CRC bit in a control word sent to the MAC 865. In another embodiment, when VLAN routing is supported, depending upon the state of the control signals, the OPP 860 removes the VLAN tag field in the packet, strips the last 4 bytes of the packet corresponding to the CRC and issues a control signal to the MAC 865 to append a CRC. More particularly, the OPP 860 decodes orig_tag, mod_tag and dont_tag and a fourth indicator, tag_enable. Tag_enable is an internal variable which indicates that the network segment connected to this output port does not support VLAN tagging. This variable is determined by a network management mechanism based on the underlying network topology. The result of the decoding process indicates whether the OPP 860 is to strip the tag and whether the MAC 865 is to generate a CRC. The OPP decodes according to the following table:
[Table: decode of orig_tag, mod_tag, dont_tag and tag_enable, indicating whether the OPP 860 strips the VLAN tag and whether the MAC 865 generates a CRC; shown as an image in the original publication.]
Thus, if the tag is to be stripped, the OPP 860 removes the tag, preferably as the tag is transferred to the MAC 865. If no CRC is to be generated, the OPP 860 sends a signal indicating that no CRC is to be generated (e.g., sets no_CRC) and the MAC 865 transmits the packet as it is received. If the CRC is to be generated, the last 4 bytes are removed from the packet by the OPP 860 and a signal to generate the CRC (e.g., clear no_CRC) is sent to the MAC 865. The MAC 865, based upon the control signals received from the OPP 860, replaces the source address field with its own MAC address and generates a CRC that is appended to the end of the packet as the packet is output.
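The decode table itself appears only as an image in the publication, so the function below is a plausible reading of the surrounding text rather than the patent's table: it assumes the tag is stripped whenever dont_tag is set or the attached segment does not support tagging, and that any tag change forces the MAC to regenerate the CRC.

    def decode_tag_handling(orig_tag: bool, mod_tag: bool, dont_tag: bool,
                            tag_enable: bool):
        # tag_enable, as described above, is set when the network segment attached
        # to this output port does NOT support VLAN tagging.
        strip_tag = dont_tag or tag_enable    # assumed condition for stripping the tag
        generate_crc = strip_tag or mod_tag   # assumed: any header change invalidates the old CRC
        return strip_tag, generate_crc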
The encapsulation process can potentially extend the packet by a number of bytes. This can negatively affect the capacity of the link. In order to compensate for this capacity loss, and also to allow the reception of frames that may be longer than standard protocols define, the protocol parameters (in the present embodiment, the Ethernet protocol) are fine-tuned to reduce the preamble size by 5 bytes, reduce the interpacket gap by 5 bytes, and increase the maximum packet size by 10 bytes.
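As a quick arithmetic check of that tuning, assuming the standard Ethernet values of an 8-byte preamble (including the start-of-frame delimiter) and a 12-byte interpacket gap:

    STD_PREAMBLE, STD_IPG = 8, 12                              # bytes
    tuned_preamble, tuned_ipg = STD_PREAMBLE - 5, STD_IPG - 5
    recovered = (STD_PREAMBLE - tuned_preamble) + (STD_IPG - tuned_ipg)
    assert recovered == 10  # matches the 10-byte increase in maximum packet size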
The embodiments of the routing apparatus and methods in the MLDNE 201 described above for exemplary purposes are, of course, subject to other variations in structure and implementation within the capabilities of one reasonably skilled in the art. Thus, the details above should be interpreted as illustrative and not in a limiting sense.

Claims

What is claimed is:
1. In a switch comprising a plurality of switch elements, an apparatus for selectively performing header field replacement of packets communicated between two switch elements, comprising:
a cascading output process (COP) located in a first switch element and configured to receive a packet, said packet comprising a header, data and a cyclic redundancy code (CRC), said COP further configured to receive control information and modify the packet by prepending the control information to the packet, said control information providing information regarding a type of the packet, said COP further configured to output the modified packet;
an output interface located in the first switch element, said output interface coupled to receive the modified packet and configured to selectively generate a CRC to append to the modified packet, said output interface further configured to output the modified packet;
an input interface located in a second switch element and configured to receive the packet output by the output interface of the first switch element, check frame validity of the packet using the appended CRC, and strip the appended CRC from the packet;
a cascading input process (CIP) located in the second switch element and coupled to the input interface, said CIP configured to strip the control information to provide the control information to the second switch element to enable the second switch element to selectively modify the header of the packet prior to output from the second switch element.
2. The switch as set forth in claim 1, wherein said CIP is further configured to output the packet and the control information to indicate additional modification of the packet prior to output from the switch; said second switch element further comprising:
an output port process (OPP) configured to receive the packet and the control information, said OPP configured, in response to said control information, to selectively generate at least one control signal to notify that the packet is to be modified prior to output from the switch and to output a selectively modified packet;
an output interface, said output interface coupled to receive the at least one control signal and the selectively modified packet and configured to output a packet from the switch that corresponds to the selectively modified input packet, said output interface further configured to selectively modify, in response to the at least one control signal, at least one header field and the CRC of the selectively modified packet prior to transmission of the output packet from the switch.
3. The apparatus as set forth in claim 1, wherein said control information comprises a field to indicate that the source address field of the header is to be replaced prior to output of the modified input packet, said field set when the input packet is to be routed.
4. The apparatus as set forth in claim 1, wherein the control signals selectively indicate generation of a CRC and a replacement of a source address.
5. The apparatus as set forth in claim 1, wherein the output interface of the second switch element is configured to insert the address of the output interface in a source address field of the header in response to the receipt of the at least one control signal indicating replacement of the source address, and to generate a CRC in response to the at least one control signal indicating regeneration of the CRC.
6. The apparatus as set forth in claim 1, wherein the OPP is further configured to strip off the CRC during transmission of the modified input packet to the output interface if the output interface is to generate the CRC.
7. The apparatus as set forth in claim 1, wherein the output interface is a MAC.
8. The apparatus as set forth in claim 1, wherein the at least one control signal comprises a replace_sa signal.
9. The apparatus as set forth in claim 1, wherein the at least one control signal comprises a state of a NO_CRC bit in a control word transmitted to the MAC by the OPP.
10. The apparatus as set forth in claim 1, wherein the switch supports virtual local area networks (VLANs) and the control information comprises an indication of whether the tagged packet is tagged as it arrived, whether the tagged packet arrived tagged but the tag has been modified, and whether tags are not to be used.
11. The apparatus as set forth in claim 10, wherein said OPP determines whether to strip the tag and send a control signal to the output interface to regenerate and append a CRC according to the following table:
[Table: decode of orig_tag, mod_tag, dont_tag and tag_enable; shown as an image in the original publication.]
wherein tag_enable is a network variable indicating that the receiving node does not support VLAN routing.
12. The apparatus as set forth in claim 1, wherein the packet further comprises a preamble and an interpacket gap which are reduced in size in order to append the CRC and prepend the control information without slowing down the data rates.
13. In a switch comprising a plurality of switch elements, a method for selectively performing header field replacement of packets communicated between two switch elements of the plurality of switch elements, comprising:
modifying a packet in a first switch element by prepending control information to the packet, said control information providing information regarding a type of the packet;
generating a cyclic redundancy code (CRC) in the first switch element to append to the modified packet to produce an encapsulated packet;
said first switch element communicating the encapsulated packet to a second switch element;
checking frame validity of the encapsulated packet received at the second switch element;
stripping the appended CRC and the control information from the encapsulated packet;
providing the control information to the second switch element to enable the second switch element to selectively modify the header of the packet prior to output from the second switch element.
14. The method as set forth in claim 13, further comprising the steps of: said second switch element selectively generating at least one control signal to notify that the packet is to be modified prior to output from the switch; selectively modifying in response to the at least one control signal, at least one header field and the CRC of the selectively modified packet prior to transmission of the output packet from the switch.
15. The method as set forth in claim 13, wherein said control information comprises a field to indicate that the source address field of the header is to be replaced prior to output of the modified input packet, said field set when the input packet is to be routed.
16. The method as set forth in claim 13, wherein the control signals selectively indicate generation of a CRC and a replacement of a source address.
17. The method as set forth in claim 13 further comprising the steps of: inserting the address of the output interface in a source address field of the header in response to the receipt of the at least one control signal indicating replacement of the source address; and generating a CRC in response to the at least one control signal indicating regeneration of the CRC.
18. The method as set forth in claim 17, further comprising the step of stripping off the CRC if a CRC is to be generated.
19. The method as set forth in claim 13, wherein the switch supports virtual local area networks (VLANs) and the control information comprises an indication of whether the tagged packet is tagged as it arrived, whether the tagged packet arrived tagged but the tag has been modified, and whether tags are not to be used.
20. The method as set forth in claim 19, further comprising the step of determining whether to strip the tag and regenerate and append a CRC according to the following table:
[Table: decode of orig_tag, mod_tag, dont_tag and tag_enable; shown as an image in the original publication.]
wherein tag_enable is a network variable indicating that the receiving node does not support VLAN routing.
21. The method as set forth in claim 13, wherein the packet further comprises a preamble and an interpacket gap, said method further comprising the step of reducing the packet in size by reducing a size of the preamble and interpacket gap.
PCT/US1998/013200 1997-06-30 1998-06-24 Mechanism for packet field replacement in a multi-layer distributed network element WO1999000944A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP98931579A EP1005741A4 (en) 1997-06-30 1998-06-24 Mechanism for packet field replacement in a multi-layer distributed network element
JP50571399A JP2002507364A (en) 1997-06-30 1998-06-24 A mechanism for packet field replacement in multilayer distributed network elements

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/885,257 1997-06-30
US08/885,257 US6014380A (en) 1997-06-30 1997-06-30 Mechanism for packet field replacement in a multi-layer distributed network element

Publications (1)

Publication Number Publication Date
WO1999000944A1 true WO1999000944A1 (en) 1999-01-07

Family

ID=25386498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/013200 WO1999000944A1 (en) 1997-06-30 1998-06-24 Mechanism for packet field replacement in a multi-layer distributed network element

Country Status (4)

Country Link
US (1) US6014380A (en)
EP (1) EP1005741A4 (en)
JP (1) JP2002507364A (en)
WO (1) WO1999000944A1 (en)

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000056024A2 (en) * 1999-03-17 2000-09-21 Broadcom Corporation Network switch
WO2000072533A1 (en) * 1999-05-21 2000-11-30 Broadcom Corporation Stacked network switch configuration
WO2001001724A2 (en) * 1999-06-30 2001-01-04 Broadcom Corporation Method and network switch for constructing an address table in a network switch
WO2001015393A1 (en) * 1999-08-20 2001-03-01 Broadcom Corporation Cluster switching architecture
EP1093266A2 (en) * 1999-09-23 2001-04-18 Nortel Networks Limited Telecommunications switches and methods for their operation
US6335935B2 (en) 1998-07-08 2002-01-01 Broadcom Corporation Network switching architecture with fast filtering processor
US6430188B1 (en) 1998-07-08 2002-08-06 Broadcom Corporation Unified table for L2, L3, L4, switching and filtering
US6535510B2 (en) 2000-06-19 2003-03-18 Broadcom Corporation Switch fabric with path redundancy
US6678678B2 (en) 2000-03-09 2004-01-13 Braodcom Corporation Method and apparatus for high speed table search
US6810037B1 (en) 1999-03-17 2004-10-26 Broadcom Corporation Apparatus and method for sorted table binary search acceleration
US6813268B1 (en) 1999-05-21 2004-11-02 Broadcom Corporation Stacked network switch configuration
US6826561B2 (en) 2000-05-22 2004-11-30 Broadcom Corporation Method and apparatus for performing a binary search on an expanded tree
US6839349B2 (en) 1999-12-07 2005-01-04 Broadcom Corporation Mirroring in a stacked network switch configuration
US6850542B2 (en) 2000-11-14 2005-02-01 Broadcom Corporation Linked network switch configuration
US6851000B2 (en) 2000-10-03 2005-02-01 Broadcom Corporation Switch having flow control management
US6859454B1 (en) 1999-06-30 2005-02-22 Broadcom Corporation Network switch with high-speed serializing/deserializing hazard-free double data rate switching
US6876653B2 (en) 1998-07-08 2005-04-05 Broadcom Corporation Fast flexible filter processor based architecture for a network device
US6988177B2 (en) 2000-10-03 2006-01-17 Broadcom Corporation Switch memory management using a linked list structure
US6993027B1 (en) 1999-03-17 2006-01-31 Broadcom Corporation Method for sending a switch indicator to avoid out-of-ordering of frames in a network switch
US6996099B1 (en) 1999-03-17 2006-02-07 Broadcom Corporation Network switch having a programmable counter
US6999455B2 (en) 2000-07-25 2006-02-14 Broadcom Corporation Hardware assist for address learning
US7009973B2 (en) 2000-02-28 2006-03-07 Broadcom Corporation Switch using a segmented ring
US7009968B2 (en) 2000-06-09 2006-03-07 Broadcom Corporation Gigabit switch supporting improved layer 3 switching
US7020166B2 (en) 2000-10-03 2006-03-28 Broadcom Corporation Switch transferring data using data encapsulation and decapsulation
US7031302B1 (en) 1999-05-21 2006-04-18 Broadcom Corporation High-speed stats gathering in a network switch
US7035255B2 (en) 2000-11-14 2006-04-25 Broadcom Corporation Linked network switch configuration
US7035286B2 (en) 2000-11-14 2006-04-25 Broadcom Corporation Linked network switch configuration
US7082133B1 (en) 1999-09-03 2006-07-25 Broadcom Corporation Apparatus and method for enabling voice over IP support for a network switch
US7103053B2 (en) 2000-05-03 2006-09-05 Broadcom Corporation Gigabit switch on chip architecture
US7120117B1 (en) 2000-08-29 2006-10-10 Broadcom Corporation Starvation free flow control in a shared memory switching device
US7120155B2 (en) 2000-10-03 2006-10-10 Broadcom Corporation Switch having virtual shared memory
US7126947B2 (en) 2000-06-23 2006-10-24 Broadcom Corporation Switch having external address resolution interface
US7131001B1 (en) 1999-10-29 2006-10-31 Broadcom Corporation Apparatus and method for secure filed upgradability with hard wired public key
US7143294B1 (en) 1999-10-29 2006-11-28 Broadcom Corporation Apparatus and method for secure field upgradability with unpredictable ciphertext
US7227862B2 (en) 2000-09-20 2007-06-05 Broadcom Corporation Network switch having port blocking capability
US7274705B2 (en) 2000-10-03 2007-09-25 Broadcom Corporation Method and apparatus for reducing clock speed and power consumption
US7315552B2 (en) 1999-06-30 2008-01-01 Broadcom Corporation Frame forwarding in a switch fabric
US7355970B2 (en) 2001-10-05 2008-04-08 Broadcom Corporation Method and apparatus for enabling access on a network switch
US7366208B2 (en) 1999-11-16 2008-04-29 Broadcom Network switch with high-speed serializing/deserializing hazard-free double data rate switch
US7366171B2 (en) 1999-03-17 2008-04-29 Broadcom Corporation Network switch
US7420977B2 (en) 2000-10-03 2008-09-02 Broadcom Corporation Method and apparatus of inter-chip bus shared by message passing and memory access
US7424012B2 (en) 2000-11-14 2008-09-09 Broadcom Corporation Linked network switch configuration
US7539134B1 (en) 1999-11-16 2009-05-26 Broadcom Corporation High speed flow control methodology
US7593953B1 (en) 1999-11-18 2009-09-22 Broadcom Corporation Table lookup mechanism for address resolution
US7624324B2 (en) 2005-02-18 2009-11-24 Fujitsu Limited File control system and file control device
US7787471B2 (en) 2003-11-10 2010-08-31 Broadcom Corporation Field processor for a network device
US7869411B2 (en) 2005-11-21 2011-01-11 Broadcom Corporation Compact packet operation device and method
US7983291B2 (en) 2005-02-18 2011-07-19 Broadcom Corporation Flexible packet modification engine for a network device
US8103800B2 (en) 2003-06-26 2012-01-24 Broadcom Corporation Method and apparatus for multi-chip address resolution lookup synchronization in a network environment
EP2497023A1 (en) * 2009-11-02 2012-09-12 Hewlett Packard Development Company, L.P. Multiprocessing computing with distributed embedded switching
US8320240B2 (en) 2004-11-30 2012-11-27 Broadcom Corporation Rate limiting and minimum and maximum shaping in a network device

Families Citing this family (108)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088749A (en) * 1995-11-30 2000-07-11 Excel Switching Corp. Universal API with generic call processing message having user-defined PPL event ID and generic response message for communications between telecommunications switch and host application
US6493347B2 (en) * 1996-12-16 2002-12-10 Juniper Networks, Inc. Memory organization in a switching device
US5991305A (en) * 1997-02-14 1999-11-23 Advanced Micro Devices, Inc. Integrated multiport switch having independently resettable management information base (MIB)
US6185207B1 (en) * 1997-06-19 2001-02-06 International Business Machines Corporation Communication system having a local area network adapter for selectively deleting information and method therefor
US6304912B1 (en) * 1997-07-24 2001-10-16 Fujitsu Limited Process and apparatus for speeding-up layer-2 and layer-3 routing, and for determining layer-2 reachability, through a plurality of subnetworks
US6172980B1 (en) * 1997-09-11 2001-01-09 3Com Corporation Multiple protocol support
US6115379A (en) * 1997-09-11 2000-09-05 3Com Corporation Unicast, multicast, and broadcast method and apparatus
US8782199B2 (en) * 1997-10-14 2014-07-15 A-Tech Llc Parsing a packet header
US6434620B1 (en) * 1998-08-27 2002-08-13 Alacritech, Inc. TCP/IP offload network interface device
US6757746B2 (en) * 1997-10-14 2004-06-29 Alacritech, Inc. Obtaining a destination address so that a network interface device can write network data without headers directly into host memory
US8621101B1 (en) 2000-09-29 2013-12-31 Alacritech, Inc. Intelligent network storage interface device
US6226680B1 (en) 1997-10-14 2001-05-01 Alacritech, Inc. Intelligent network interface system method for protocol processing
US8539112B2 (en) 1997-10-14 2013-09-17 Alacritech, Inc. TCP/IP offload device
US6697868B2 (en) * 2000-02-28 2004-02-24 Alacritech, Inc. Protocol processing stack for use with intelligent network interface device
DE19882822T1 (en) * 1997-11-17 2001-03-22 Seagate Technology Method and dedicated frame buffer for loop initialization and for responses
US6084878A (en) * 1997-12-18 2000-07-04 Advanced Micro Devices, Inc. External rules checker interface
US6188694B1 (en) * 1997-12-23 2001-02-13 Cisco Technology, Inc. Shared spanning tree protocol
US6301224B1 (en) * 1998-01-13 2001-10-09 Enterasys Networks, Inc. Network switch with panic mode
US6469987B1 (en) 1998-01-13 2002-10-22 Enterasys Networks, Inc. Virtual local area network with trunk stations
US6112251A (en) * 1998-01-13 2000-08-29 Cabletron Systems, Inc. Virtual local network for sending multicast transmissions to trunk stations
US6295296B1 (en) * 1998-09-08 2001-09-25 Cisco Technology, Inc. Use of a single data structure for label forwarding and imposition
US6785274B2 (en) * 1998-10-07 2004-08-31 Cisco Technology, Inc. Efficient network multicast switching apparatus and methods
US6912223B1 (en) * 1998-11-03 2005-06-28 Network Technologies Inc. Automatic router configuration
GB9824594D0 (en) * 1998-11-11 1999-01-06 3Com Technologies Ltd Modifying tag fields in ethernet data packets
US6704318B1 (en) * 1998-11-30 2004-03-09 Cisco Technology, Inc. Switched token ring over ISL (TR-ISL) network
US6526052B1 (en) 1998-12-23 2003-02-25 Enterasys Networks, Inc. Virtual local area networks having rules of precedence
JP3645735B2 (en) * 1999-02-24 2005-05-11 株式会社日立製作所 Network relay device and network relay method
US6542470B1 (en) * 1999-05-26 2003-04-01 3Com Corporation Packet expansion with preservation of original cyclic redundancy code check indication
JP4110671B2 (en) 1999-05-27 2008-07-02 株式会社日立製作所 Data transfer device
US6557044B1 (en) 1999-06-01 2003-04-29 Nortel Networks Limited Method and apparatus for exchange of routing database information
US6625773B1 (en) * 1999-06-09 2003-09-23 International Business Machines Corporation System for multicast communications in packet switched networks
SE9902336A0 (en) * 1999-06-18 2000-12-19 Ericsson Telefon Ab L M Method and system of communication
US6633565B1 (en) * 1999-06-29 2003-10-14 3Com Corporation Apparatus for and method of flow switching in a data communications network
US6789116B1 (en) 1999-06-30 2004-09-07 Hi/Fn, Inc. State processor for pattern matching in a network monitor device
US6771646B1 (en) 1999-06-30 2004-08-03 Hi/Fn, Inc. Associative cache structure for lookups and updates of flow records in a network monitor
CN1293478C (en) * 1999-06-30 2007-01-03 倾向探测公司 Method and apparatus for monitoring traffic in a network
US6826195B1 (en) 1999-12-28 2004-11-30 Bigband Networks Bas, Inc. System and process for high-availability, direct, flexible and scalable switching of data packets in broadband networks
US6611526B1 (en) 2000-05-08 2003-08-26 Adc Broadband Access Systems, Inc. System having a meshed backplane and process for transferring data therethrough
US6853680B1 (en) 2000-05-10 2005-02-08 Bigband Networks Bas, Inc. System and process for embedded cable modem in a cable modem termination system to enable diagnostics and monitoring
US6671739B1 (en) * 2000-07-10 2003-12-30 International Business Machines Corporation Controlling network access by modifying packet headers at a local hub
US7924837B1 (en) * 2000-07-31 2011-04-12 Avaya Communication Israel Ltd. IP multicast in VLAN environment
US6850495B1 (en) * 2000-08-31 2005-02-01 Verizon Communications Inc. Methods, apparatus and data structures for segmenting customers using at least a portion of a layer 2 address header or bits in the place of a layer 2 address header
US8087064B1 (en) 2000-08-31 2011-12-27 Verizon Communications Inc. Security extensions using at least a portion of layer 2 information or bits in the place of layer 2 information
US8019901B2 (en) 2000-09-29 2011-09-13 Alacritech, Inc. Intelligent network storage interface system
CA2358607A1 (en) * 2000-10-24 2002-04-24 General Instrument Corporation Packet identifier (pid) aliasing for a broadband audio, video and data router
US20020191603A1 (en) * 2000-11-22 2002-12-19 Yeshik Shin Method and system for dynamic segmentation of communications packets
US6963569B1 (en) 2000-12-29 2005-11-08 Cisco Technology, Inc. Device for interworking asynchronous transfer mode cells
FR2823042B1 (en) * 2001-03-29 2003-07-04 Cit Alcatel PACKET SYNCHRONIZED BIDIRECTIONAL TRANSMISSION METHOD
US20020184368A1 (en) * 2001-04-06 2002-12-05 Yunsen Wang Network system, method and protocols for hierarchical service and content distribution via directory enabled network
US7609689B1 (en) * 2001-09-27 2009-10-27 Cisco Technology, Inc. System and method for mapping an index into an IPv6 address
US8543681B2 (en) * 2001-10-15 2013-09-24 Volli Polymer Gmbh Llc Network topology discovery systems and methods
US8868715B2 (en) * 2001-10-15 2014-10-21 Volli Polymer Gmbh Llc Report generation and visualization systems and methods and their use in testing frameworks for determining suitability of a network for target applications
US7283504B1 (en) * 2001-10-24 2007-10-16 Bbn Technologies Corp. Radio with internal packet network
US7088739B2 (en) * 2001-11-09 2006-08-08 Ericsson Inc. Method and apparatus for creating a packet using a digital signal processor
US7209435B1 (en) 2002-04-16 2007-04-24 Foundry Networks, Inc. System and method for providing network route redundancy across Layer 2 devices
US7543087B2 (en) * 2002-04-22 2009-06-02 Alacritech, Inc. Freeing transmit memory on a network interface device prior to receiving an acknowledgement that transmit data has been received by a remote device
AU2003230913A1 (en) * 2002-05-01 2003-11-17 Manticom Network, Inc. Method and system to implement a simplified shortest path routing scheme in a shared access ring topology
CN1663147A (en) 2002-06-21 2005-08-31 威德菲公司 Wireless local area network repeater
US6895481B1 (en) 2002-07-03 2005-05-17 Cisco Technology, Inc. System and method for decrementing a reference count in a multicast environment
US20040141356A1 (en) * 2002-08-29 2004-07-22 Maria Gabrani Data communications method and apparatus
US8885688B2 (en) * 2002-10-01 2014-11-11 Qualcomm Incorporated Control message management in physical layer repeater
US8462668B2 (en) * 2002-10-01 2013-06-11 Foundry Networks, Llc System and method for implementation of layer 2 redundancy protocols across multiple networks
WO2004034600A1 (en) 2002-10-11 2004-04-22 Widefi, Inc. Reducing loop effects in a wireless local area network repeater
US8078100B2 (en) 2002-10-15 2011-12-13 Qualcomm Incorporated Physical layer repeater with discrete time filter for all-digital detection and delay generation
MXPA05003929A (en) * 2002-10-15 2005-06-17 Widefi Inc Wireless local area network repeater with automatic gain control for extending network coverage.
GB2411324B (en) * 2002-10-24 2006-02-01 Widefi Inc Wireless local area network repeater with in-band control channel
US7230935B2 (en) * 2002-10-24 2007-06-12 Widefi, Inc. Physical layer repeater with selective use of higher layer functions based on network operating conditions
EP1568167A4 (en) * 2002-11-15 2010-06-16 Qualcomm Inc Wireless local area network repeater with detection
JP2006510326A (en) * 2002-12-16 2006-03-23 ワイデファイ インコーポレイテッド Improved wireless network repeater
US7292569B1 (en) 2003-02-26 2007-11-06 Cisco Technology, Inc. Distributed router forwarding architecture employing global translation indices
US6996070B2 (en) * 2003-12-05 2006-02-07 Alacritech, Inc. TCP/IP offload device with reduced sequential processing
KR100547828B1 (en) * 2003-12-18 2006-01-31 삼성전자주식회사 Gigabit Ethernet-based passive optical subscriber network and method for more accurately detecting data errors for secure data transmission
CN1300992C (en) * 2003-12-30 2007-02-14 华为技术有限公司 Method of realizing multicast transmission
US8027642B2 (en) 2004-04-06 2011-09-27 Qualcomm Incorporated Transmission canceller for wireless local area network
EP1745567B1 (en) 2004-05-13 2017-06-14 QUALCOMM Incorporated Non-frequency translating repeater with detection and media access control
EP1769645A4 (en) * 2004-06-03 2010-07-21 Qualcomm Inc Frequency translating repeater with low cost high performance local oscillator architecture
US8248939B1 (en) 2004-10-08 2012-08-21 Alacritech, Inc. Transferring control of TCP connections between hierarchy of processing mechanisms
US20060078127A1 (en) * 2004-10-08 2006-04-13 Philip Cacayorin Dispersed data storage using cryptographic scrambling
WO2006081405A2 (en) * 2005-01-28 2006-08-03 Widefi, Inc. Physical layer repeater configuration for increasing MIMO performance
US7865624B1 (en) 2005-04-04 2011-01-04 Oracle America, Inc. Lookup mechanism based on link layer semantics
US7529245B1 (en) 2005-04-04 2009-05-05 Sun Microsystems, Inc. Reorder mechanism for use in a relaxed order input/output system
US7415034B2 (en) * 2005-04-04 2008-08-19 Sun Microsystems, Inc. Virtualized partitionable shared network interface
US7987306B2 (en) * 2005-04-04 2011-07-26 Oracle America, Inc. Hiding system latencies in a throughput networking system
US7779164B2 (en) * 2005-04-04 2010-08-17 Oracle America, Inc. Asymmetrical data processing partition
US7443878B2 (en) * 2005-04-04 2008-10-28 Sun Microsystems, Inc. System for scaling by parallelizing network workload
US7992144B1 (en) 2005-04-04 2011-08-02 Oracle America, Inc. Method and apparatus for separating and isolating control of processing entities in a network interface
US7415035B1 (en) 2005-04-04 2008-08-19 Sun Microsystems, Inc. Device driver access method into a virtualized network interface
US7664127B1 (en) * 2005-04-05 2010-02-16 Sun Microsystems, Inc. Method for resolving mutex contention in a network system
US7567567B2 (en) * 2005-04-05 2009-07-28 Sun Microsystems, Inc. Network system including packet classification for partitioned resources
US8510491B1 (en) 2005-04-05 2013-08-13 Oracle America, Inc. Method and apparatus for efficient interrupt event notification for a scalable input/output device
US7843926B1 (en) 2005-04-05 2010-11-30 Oracle America, Inc. System for providing virtualization of network interfaces at various layers
US7353360B1 (en) 2005-04-05 2008-04-01 Sun Microsystems, Inc. Method for maximizing page locality
US7889734B1 (en) 2005-04-05 2011-02-15 Oracle America, Inc. Method and apparatus for arbitrarily mapping functions to preassigned processing entities in a network system
US8762595B1 (en) 2005-04-05 2014-06-24 Oracle America, Inc. Method for sharing interfaces among multiple domain environments with enhanced hooks for exclusiveness
US8274989B1 (en) * 2006-03-31 2012-09-25 Rockstar Bidco, LP Point-to-multipoint (P2MP) resilience for GMPLS control of Ethernet
US8379676B1 (en) * 2006-06-01 2013-02-19 World Wide Packets, Inc. Injecting in-band control messages without impacting a data rate
WO2008027531A2 (en) * 2006-09-01 2008-03-06 Qualcomm Incorporated Repeater having dual receiver or transmitter antenna configuration with adaptation for increased isolation
WO2008036401A2 (en) * 2006-09-21 2008-03-27 Qualcomm Incorporated Method and apparatus for mitigating oscillation between repeaters
RU2414064C2 (en) 2006-10-26 2011-03-10 Квэлкомм Инкорпорейтед Repeater techniques for multiple input multiple output system using beam formers
US8458350B2 (en) * 2006-11-03 2013-06-04 Rockwell Automation Technologies, Inc. Control and communications architecture
US8023973B2 (en) * 2007-01-03 2011-09-20 Motorola Solutions, Inc. Expandable text messaging service protocol for use with a two-way radio transceiver
US20080263171A1 (en) * 2007-04-19 2008-10-23 Alacritech, Inc. Peripheral device that DMAS the same data to different locations in a computer
US8539513B1 (en) 2008-04-01 2013-09-17 Alacritech, Inc. Accelerating data transfer in a virtual computer system with tightly coupled TCP connections
US8341286B1 (en) 2008-07-31 2012-12-25 Alacritech, Inc. TCP offload send optimization
US9306793B1 (en) 2008-10-22 2016-04-05 Alacritech, Inc. TCP offload device that batches session layer headers to reduce interrupts as well as CPU copies
US9565132B2 (en) * 2011-12-27 2017-02-07 Intel Corporation Multi-protocol I/O interconnect including a switching fabric
US9712323B2 (en) * 2014-10-09 2017-07-18 Fujitsu Limited Detection of unauthorized entities in communication systems
US11706607B1 (en) 2021-06-16 2023-07-18 T-Mobile Usa, Inc. Location based routing that bypasses circuit-based networks

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5742604A (en) * 1996-03-28 1998-04-21 Cisco Systems, Inc. Interswitch link mechanism for connecting high-performance network switches

Family Cites Families (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4652874A (en) * 1984-12-24 1987-03-24 Motorola, Inc. Serial communication interface for a local network controller
US4850042A (en) * 1987-04-14 1989-07-18 Westinghouse Electric Corp. Dual media local area network interfacing
US4807111A (en) * 1987-06-19 1989-02-21 International Business Machines Corporation Dynamic queueing method
US4899333A (en) * 1988-03-31 1990-02-06 American Telephone And Telegraph Company At&T Bell Laboratories Architecture of the control of a high performance packet switching distribution network
US4922503A (en) * 1988-10-28 1990-05-01 Infotron Systems Corporation Local area network bridge
US4933938A (en) * 1989-03-22 1990-06-12 Hewlett-Packard Company Group address translation through a network bridge
US5220562A (en) * 1989-05-12 1993-06-15 Hitachi, Ltd. Bridge apparatus and a communication system between networks using the bridge apparatus
US5179557A (en) * 1989-07-04 1993-01-12 Kabushiki Kaisha Toshiba Data packet communication system in which data packet transmittal is prioritized with queues having respective assigned priorities and frequency weighted counting of queue wait time
US5210746A (en) * 1990-04-16 1993-05-11 Motorola, Inc. Communication system network having communication system fallback operation
US5301333A (en) * 1990-06-14 1994-04-05 Bell Communications Research, Inc. Tree structured variable priority arbitration implementing a round-robin scheduling policy
US5309437A (en) * 1990-06-29 1994-05-03 Digital Equipment Corporation Bridge-like internet protocol router
US5231633A (en) * 1990-07-11 1993-07-27 Codex Corporation Method for prioritizing, selectively discarding, and multiplexing differing traffic type fast packets
US5150358A (en) * 1990-08-23 1992-09-22 At&T Bell Laboratories Serving constant bit rate traffic in a broadband data switch
US5481540A (en) * 1990-08-24 1996-01-02 At&T Corp. FDDI bridge frame learning and filtering apparatus and method
US5251205A (en) * 1990-09-04 1993-10-05 Digital Equipment Corporation Multiple protocol routing
CA2065578C (en) * 1991-04-22 1999-02-23 David W. Carr Packet-based data compression method
US5500860A (en) * 1991-06-14 1996-03-19 Digital Equipment Corporation Router using multiple hop redirect messages to enable bridge like data forwarding
US5392432A (en) * 1991-08-27 1995-02-21 At&T Corp. Method for automatic system resource reclamation for object-oriented systems with real-time constraints
CA2092134C (en) * 1992-03-24 1998-07-21 Anthony J. Mazzola Distributed routing network element
US5343471A (en) * 1992-05-11 1994-08-30 Hughes Aircraft Company Address filter for a transparent bridge interconnecting local area networks
US5742760A (en) * 1992-05-12 1998-04-21 Compaq Computer Corporation Network packet switch using shared memory for repeating and bridging packets at media rate
US5457681A (en) * 1992-06-05 1995-10-10 Washington University ATM-Ethernet portal/concentrator
US5425028A (en) * 1992-07-16 1995-06-13 International Business Machines Corporation Protocol selection and address resolution for programs running in heterogeneous networks
US5291482A (en) * 1992-07-24 1994-03-01 At&T Bell Laboratories High bandwidth packet switch
US5490252A (en) * 1992-09-30 1996-02-06 Bay Networks Group, Inc. System having central processor for transmitting generic packets to another processor to be altered and transmitting altered packets back to central processor for routing
JP3104429B2 (en) * 1992-10-08 2000-10-30 株式会社日立製作所 Common buffer type ATM switch having copy function and copy method thereof
US5649109A (en) * 1992-10-22 1997-07-15 Digital Equipment Corporation Apparatus and method for maintaining forwarding information in a bridge or router using multiple free queues having associated free space sizes
US5410722A (en) * 1993-01-21 1995-04-25 Conner Peripherals, Inc. Queue system for dynamically allocating and moving memory registers between a plurality of pseudo queues
US5459714A (en) * 1993-02-22 1995-10-17 Advanced Micro Devices, Inc. Enhanced port activity monitor for an integrated multiport repeater
US5485578A (en) * 1993-03-08 1996-01-16 Apple Computer, Inc. Topology discovery in a multiple-ring network
US5386413A (en) * 1993-03-19 1995-01-31 Bell Communications Research, Inc. Fast multilevel hierarchical routing table lookup using content addressable memory
JPH077524A (en) * 1993-04-06 1995-01-10 Siemens Ag Method for accessing of communication subscriber to address identifier
AU675302B2 (en) * 1993-05-20 1997-01-30 Nec Corporation Output-buffer switch for asynchronous transfer mode
US5426736A (en) * 1993-05-26 1995-06-20 Digital Equipment Corporation Method and apparatus for processing input/output commands in a storage system having a command queue
US5394402A (en) * 1993-06-17 1995-02-28 Ascom Timeplex Trading Ag Hub for segmented virtual local area network with shared media access
JP2546505B2 (en) * 1993-06-23 1996-10-23 日本電気株式会社 Address learning device in CLAD
US5555405A (en) * 1993-07-06 1996-09-10 Digital Equipment Corporation Method and apparatus for free space management in a forwarding database having forwarding entry sets and multiple free space segment queues
US5515376A (en) * 1993-07-19 1996-05-07 Alantec, Inc. Communication apparatus and methods
US5473607A (en) * 1993-08-09 1995-12-05 Grand Junction Networks, Inc. Packet filtering for data networks
US5422838A (en) * 1993-10-25 1995-06-06 At&T Corp. Content-addressable memory with programmable field masking
US5485455A (en) * 1994-01-28 1996-01-16 Cabletron Systems, Inc. Network having secure fast packet switching and guaranteed quality of service
JP2713153B2 (en) * 1994-03-09 1998-02-16 日本電気株式会社 Bridge device
JPH07254906A (en) * 1994-03-16 1995-10-03 Mitsubishi Electric Corp Shift register having priority processing function, packet communication switching device using it, ATM network using it, packet communication system having priority processing and ATM communication system with priority processing
US5459717A (en) * 1994-03-25 1995-10-17 Sprint International Communications Corporation Method and apparatus for routing messages in an electronic messaging system
EP0676878A1 (en) * 1994-04-07 1995-10-11 International Business Machines Corporation Efficient point to point and multi point routing mechanism for programmable packet switching nodes in high speed data transmission networks
DE69428186T2 (en) * 1994-04-28 2002-03-28 Hewlett Packard Co Multicast device
US5461611A (en) * 1994-06-07 1995-10-24 International Business Machines Corporation Quality of service management for source routing multimedia packet networks
US5583981A (en) * 1994-06-28 1996-12-10 Microsoft Corporation Method and system for changing the size of edit controls on a graphical user interface
EP0691769A1 (en) * 1994-07-07 1996-01-10 International Business Machines Corporation Voice circuit emulation system in a packet switching network
US5751967A (en) * 1994-07-25 1998-05-12 Bay Networks Group, Inc. Method and apparatus for automatically configuring a network device to support a virtual network
US5640605A (en) * 1994-08-26 1997-06-17 3Com Corporation Method and apparatus for synchronized transmission of data between a network adaptor and multiple transmission channels using a shared clocking frequency and multilevel data encoding
US5619500A (en) * 1994-09-01 1997-04-08 Digital Link Corporation ATM network interface
US5594727A (en) * 1994-09-19 1997-01-14 Summa Four, Inc. Telephone switch providing dynamic allocation of time division multiplex resources
US5490139A (en) * 1994-09-28 1996-02-06 International Business Machines Corporation Mobility enabling access point architecture for wireless attachment to source routing networks
US5675741A (en) * 1994-10-25 1997-10-07 Cabletron Systems, Inc. Method and apparatus for determining a communications path between two nodes in an Internet Protocol (IP) network
US5784573A (en) * 1994-11-04 1998-07-21 Texas Instruments Incorporated Multi-protocol local area network controller
KR0132960B1 (en) * 1994-12-22 1998-04-21 양승택 Method and apparatus for detecting congestion status of
US5550816A (en) * 1994-12-29 1996-08-27 Storage Technology Corporation Method and apparatus for virtual switching
JP3099663B2 (en) * 1995-02-09 2000-10-16 株式会社デンソー Communications system
US5706472A (en) * 1995-02-23 1998-01-06 Powerquest Corporation Method for manipulating disk partitions
US5561666A (en) * 1995-03-06 1996-10-01 International Business Machines Corporation Apparatus and method for determining operational mode for a station entering a network
US5633865A (en) * 1995-03-31 1997-05-27 Netvantage Apparatus for selectively transferring data packets between local area networks
US5619661A (en) * 1995-06-05 1997-04-08 Vlsi Technology, Inc. Dynamic arbitration system and method
US5636371A (en) * 1995-06-07 1997-06-03 Bull Hn Information Systems Inc. Virtual network mechanism to access well known port application programs running on a single host system
US5734865A (en) * 1995-06-07 1998-03-31 Bull Hn Information Systems Inc. Virtual local area network well-known port routing mechanism for multi-emulators in an open system environment
US5651002A (en) * 1995-07-12 1997-07-22 3Com Corporation Internetworking device with enhanced packet header translation and memory
US5754540A (en) * 1995-07-18 1998-05-19 Macronix International Co., Ltd. Expandable integrated circuit multiport repeater controller with multiple media independent interfaces and mixed media connections
US5691984A (en) * 1995-08-15 1997-11-25 Honeywell Inc. Compact, adaptable brouting switch
US5740175A (en) * 1995-10-03 1998-04-14 National Semiconductor Corporation Forwarding database cache for integrated switch controller
US5757771A (en) * 1995-11-14 1998-05-26 Yurie Systems, Inc. Queue management to serve variable and constant bit rate traffic at multiple quality of service levels in a ATM switch
EP0873626B1 (en) * 1995-11-15 2006-05-10 Enterasys Networks, Inc. Distributed connection-oriented services for switched communications networks
US5684800A (en) * 1995-11-15 1997-11-04 Cabletron Systems, Inc. Method for establishing restricted broadcast groups in a switched network
US5754801A (en) * 1995-11-20 1998-05-19 Advanced Micro Devices, Inc. Computer system having a multimedia bus and comprising a centralized I/O processor which performs intelligent data transfers
US5740375A (en) * 1996-02-15 1998-04-14 Bay Networks, Inc. Forwarding internetwork packets by replacing the destination address
US5724358A (en) * 1996-02-23 1998-03-03 Zeitnet, Inc. High speed packet-switched digital switch and method
US5781549A (en) * 1996-02-23 1998-07-14 Allied Telesyn International Corp. Method and apparatus for switching data packets in a data network
US5764634A (en) * 1996-03-13 1998-06-09 International Business Machines Corporation Lan switch with zero latency
US5740171A (en) * 1996-03-28 1998-04-14 Cisco Systems, Inc. Address translation mechanism for a high-performance network switch
US5764636A (en) * 1996-03-28 1998-06-09 Cisco Technology, Inc. Color blocking logic mechanism for a high-performance network switch
US5923654A (en) * 1996-04-25 1999-07-13 Compaq Computer Corp. Network switch that includes a plurality of shared packet buffers
US5802052A (en) * 1996-06-26 1998-09-01 Level One Communication, Inc. Scalable high performance switch element for a shared memory packet or ATM cell switch fabric
US5748905A (en) * 1996-08-30 1998-05-05 Fujitsu Network Communications, Inc. Frame classification using classification keys
US5827508A (en) * 1996-09-27 1998-10-27 The Procter & Gamble Company Stable photoprotective compositions

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5742604A (en) * 1996-03-28 1998-04-21 Cisco Systems, Inc. Interswitch link mechanism for connecting high-performance network switches

Cited By (96)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6795447B2 (en) 1998-07-08 2004-09-21 Broadcom Corporation High performance self balancing low cost network switching architecture based on distributed hierarchical shared memory
US7415022B2 (en) 1998-07-08 2008-08-19 Broadcom Corporation Network switching architecture with multiple table synchronization, and forwarding of both IP and IPX packets
US7020137B2 (en) 1998-07-08 2006-03-28 Broadcom Corporation Network switching architecture with fast filtering processor
US6876653B2 (en) 1998-07-08 2005-04-05 Broadcom Corporation Fast flexible filter processor based architecture for a network device
US7103055B2 (en) 1998-07-08 2006-09-05 Broadcom Corporation Unified table for L2, L3, L4, switching and filtering
US7746854B2 (en) 1998-07-08 2010-06-29 Broadcom Corporation Fast flexible filter processor based architecture for a network device
US6643261B2 (en) 1998-07-08 2003-11-04 Broadcom Corporation High performance self balancing low cost network switching architecture based on distributed hierarchical shared memory
US6335935B2 (en) 1998-07-08 2002-01-01 Broadcom Corporation Network switching architecture with fast filtering processor
US6335932B2 (en) 1998-07-08 2002-01-01 Broadcom Corporation High performance self balancing low cost network switching architecture based on distributed hierarchical shared memory
US6430188B1 (en) 1998-07-08 2002-08-06 Broadcom Corporation Unified table for L2, L3, L4, switching and filtering
US6560229B1 (en) 1998-07-08 2003-05-06 Broadcom Corporation Network switching architecture with multiple table synchronization, and forwarding of both IP and IPX packets
US7106734B2 (en) 1998-07-08 2006-09-12 Broadcom Corporation Network switching architecture with multiple table synchronization, and forwarding of both IP and IPX packets
US8411574B2 (en) 1999-03-05 2013-04-02 Broadcom Corporation Starvation free flow control in a shared memory switching device
US6996099B1 (en) 1999-03-17 2006-02-07 Broadcom Corporation Network switch having a programmable counter
US6993027B1 (en) 1999-03-17 2006-01-31 Broadcom Corporation Method for sending a switch indicator to avoid out-of-ordering of frames in a network switch
US7310332B2 (en) 1999-03-17 2007-12-18 Broadcom Corporation Network switch memory interface configuration
US6707817B1 (en) 1999-03-17 2004-03-16 Broadcom Corporation Method for handling IP multicast packets in network switch
US6707818B1 (en) 1999-03-17 2004-03-16 Broadcom Corporation Network switch memory interface configuration
US7184441B1 (en) 1999-03-17 2007-02-27 Broadcom Corporation Network switch stacking configuration
US6810037B1 (en) 1999-03-17 2004-10-26 Broadcom Corporation Apparatus and method for sorted table binary search acceleration
US7366171B2 (en) 1999-03-17 2008-04-29 Broadcom Corporation Network switch
WO2000056024A3 (en) * 1999-03-17 2001-01-04 Broadcom Corp Network switch
WO2000056024A2 (en) * 1999-03-17 2000-09-21 Broadcom Corporation Network switch
US6813268B1 (en) 1999-05-21 2004-11-02 Broadcom Corporation Stacked network switch configuration
US7031302B1 (en) 1999-05-21 2006-04-18 Broadcom Corporation High-speed stats gathering in a network switch
US7593403B2 (en) 1999-05-21 2009-09-22 Broadcom Corporation Stacked network switch configuration
WO2000072533A1 (en) * 1999-05-21 2000-11-30 Broadcom Corporation Stacked network switch configuration
US7315552B2 (en) 1999-06-30 2008-01-01 Broadcom Corporation Frame forwarding in a switch fabric
WO2001001724A3 (en) * 1999-06-30 2001-05-17 Broadcom Corp Method and network switch for constructing an address table in a network switch
US6859454B1 (en) 1999-06-30 2005-02-22 Broadcom Corporation Network switch with high-speed serializing/deserializing hazard-free double data rate switching
WO2001001724A2 (en) * 1999-06-30 2001-01-04 Broadcom Corporation Method and network switch for constructing an address table in a network switch
WO2001015393A1 (en) * 1999-08-20 2001-03-01 Broadcom Corporation Cluster switching architecture
US7082133B1 (en) 1999-09-03 2006-07-25 Broadcom Corporation Apparatus and method for enabling voice over IP support for a network switch
US7577148B2 (en) 1999-09-03 2009-08-18 Broadcom Corporation Apparatus and method for enabling Voice Over IP support for a network switch
EP1093266A3 (en) * 1999-09-23 2003-09-03 Nortel Networks Limited Telecommunications switches and methods for their operation
EP1093266A2 (en) * 1999-09-23 2001-04-18 Nortel Networks Limited Telecommunications switches and methods for their operation
US7143294B1 (en) 1999-10-29 2006-11-28 Broadcom Corporation Apparatus and method for secure field upgradability with unpredictable ciphertext
US7634665B2 (en) 1999-10-29 2009-12-15 Broadcom Corporation Apparatus and method for secure field upgradability with unpredictable ciphertext
US7131001B1 (en) 1999-10-29 2006-10-31 Broadcom Corporation Apparatus and method for secure field upgradability with hard wired public key
US7539134B1 (en) 1999-11-16 2009-05-26 Broadcom Corporation High speed flow control methodology
US7366208B2 (en) 1999-11-16 2008-04-29 Broadcom Corporation Network switch with high-speed serializing/deserializing hazard-free double data rate switch
US8081570B2 (en) 1999-11-16 2011-12-20 Broadcom Corporation High speed flow control methodology
US8086571B2 (en) 1999-11-18 2011-12-27 Broadcom Corporation Table lookup mechanism for address resolution
US7593953B1 (en) 1999-11-18 2009-09-22 Broadcom Corporation Table lookup mechanism for address resolution
US7715328B2 (en) 1999-12-07 2010-05-11 Broadcom Corporation Mirroring in a stacked network switch configuration
US6839349B2 (en) 1999-12-07 2005-01-04 Broadcom Corporation Mirroring in a stacked network switch configuration
US7009973B2 (en) 2000-02-28 2006-03-07 Broadcom Corporation Switch using a segmented ring
US7260565B2 (en) 2000-03-09 2007-08-21 Broadcom Corporation Method and apparatus for high speed table search
US6678678B2 (en) 2000-03-09 2004-01-13 Broadcom Corporation Method and apparatus for high speed table search
US7103053B2 (en) 2000-05-03 2006-09-05 Broadcom Corporation Gigabit switch on chip architecture
US7675924B2 (en) 2000-05-03 2010-03-09 Broadcom Corporation Gigabit switch on chip architecture
US6826561B2 (en) 2000-05-22 2004-11-30 Broadcom Corporation Method and apparatus for performing a binary search on an expanded tree
US7610271B2 (en) 2000-05-22 2009-10-27 Broadcom Corporation Method and apparatus for performing a binary search on an expanded tree
US7020139B2 (en) 2000-06-09 2006-03-28 Broadcom Corporation Trunking and mirroring across stacked gigabit switches
US7009968B2 (en) 2000-06-09 2006-03-07 Broadcom Corporation Gigabit switch supporting improved layer 3 switching
US7139269B2 (en) 2000-06-09 2006-11-21 Broadcom Corporation Cascading of gigabit switches
US7099317B2 (en) 2000-06-09 2006-08-29 Broadcom Corporation Gigabit switch with multicast handling
US7075939B2 (en) 2000-06-09 2006-07-11 Broadcom Corporation Flexible header protocol for network switch
US7106736B2 (en) 2000-06-09 2006-09-12 Broadcom Corporation Gigabit switch supporting multiple stacking configurations
US7050430B2 (en) 2000-06-09 2006-05-23 Broadcom Corporation Gigabit switch with fast filtering processor
US7046679B2 (en) 2000-06-09 2006-05-16 Broadcom Corporation Gigabit switch with frame forwarding and address learning
US7519059B2 (en) 2000-06-19 2009-04-14 Broadcom Corporation Switch fabric with memory management unit for improved flow control
US6950430B2 (en) 2000-06-19 2005-09-27 Broadcom Corporation Switch fabric with path redundancy
US6535510B2 (en) 2000-06-19 2003-03-18 Broadcom Corporation Switch fabric with path redundancy
US8274971B2 (en) 2000-06-19 2012-09-25 Broadcom Corporation Switch fabric with memory management unit for improved flow control
US6567417B2 (en) 2000-06-19 2003-05-20 Broadcom Corporation Frame forwarding in a switch fabric
US7136381B2 (en) 2000-06-19 2006-11-14 Broadcom Corporation Memory management unit architecture for switch fabric
US8027341B2 (en) 2000-06-23 2011-09-27 Broadcom Corporation Switch having external address resolution interface
US7126947B2 (en) 2000-06-23 2006-10-24 Broadcom Corporation Switch having external address resolution interface
US6999455B2 (en) 2000-07-25 2006-02-14 Broadcom Corporation Hardware assist for address learning
US7120117B1 (en) 2000-08-29 2006-10-10 Broadcom Corporation Starvation free flow control in a shared memory switching device
US7227862B2 (en) 2000-09-20 2007-06-05 Broadcom Corporation Network switch having port blocking capability
US7856015B2 (en) 2000-09-20 2010-12-21 Broadcom Corporation Network switch having port blocking capability
US7420977B2 (en) 2000-10-03 2008-09-02 Broadcom Corporation Method and apparatus of inter-chip bus shared by message passing and memory access
US7020166B2 (en) 2000-10-03 2006-03-28 Broadcom Corporation Switch transferring data using data encapsulation and decapsulation
US6851000B2 (en) 2000-10-03 2005-02-01 Broadcom Corporation Switch having flow control management
US6988177B2 (en) 2000-10-03 2006-01-17 Broadcom Corporation Switch memory management using a linked list structure
US7274705B2 (en) 2000-10-03 2007-09-25 Broadcom Corporation Method and apparatus for reducing clock speed and power consumption
US7656907B2 (en) 2000-10-03 2010-02-02 Broadcom Corporation Method and apparatus for reducing clock speed and power consumption
US7120155B2 (en) 2000-10-03 2006-10-10 Broadcom Corporation Switch having virtual shared memory
US7050431B2 (en) 2000-11-14 2006-05-23 Broadcom Corporation Linked network switch configuration
US7035255B2 (en) 2000-11-14 2006-04-25 Broadcom Corporation Linked network switch configuration
US7339938B2 (en) 2000-11-14 2008-03-04 Broadcom Corporation Linked network switch configuration
US7792104B2 (en) 2000-11-14 2010-09-07 Broadcom Corporation Linked network switch configuration
US6850542B2 (en) 2000-11-14 2005-02-01 Broadcom Corporation Linked network switch configuration
US7035286B2 (en) 2000-11-14 2006-04-25 Broadcom Corporation Linked network switch configuration
US7424012B2 (en) 2000-11-14 2008-09-09 Broadcom Corporation Linked network switch configuration
US7355970B2 (en) 2001-10-05 2008-04-08 Broadcom Corporation Method and apparatus for enabling access on a network switch
US8103800B2 (en) 2003-06-26 2012-01-24 Broadcom Corporation Method and apparatus for multi-chip address resolution lookup synchronization in a network environment
US7787471B2 (en) 2003-11-10 2010-08-31 Broadcom Corporation Field processor for a network device
US8320240B2 (en) 2004-11-30 2012-11-27 Broadcom Corporation Rate limiting and minimum and maximum shaping in a network device
US7983291B2 (en) 2005-02-18 2011-07-19 Broadcom Corporation Flexible packet modification engine for a network device
US7624324B2 (en) 2005-02-18 2009-11-24 Fujitsu Limited File control system and file control device
US7869411B2 (en) 2005-11-21 2011-01-11 Broadcom Corporation Compact packet operation device and method
EP2497023A1 (en) * 2009-11-02 2012-09-12 Hewlett Packard Development Company, L.P. Multiprocessing computing with distributed embedded switching
EP2497023A4 (en) * 2009-11-02 2013-09-18 Hewlett Packard Development Co Multiprocessing computing with distributed embedded switching

Also Published As

Publication number Publication date
EP1005741A1 (en) 2000-06-07
US6014380A (en) 2000-01-11
EP1005741A4 (en) 2005-03-30
JP2002507364A (en) 2002-03-05

Similar Documents

Publication Publication Date Title
US6014380A (en) Mechanism for packet field replacement in a multi-layer distributed network element
US5920566A (en) Routing in a multi-layer distributed network element
US6115378A (en) Multi-layer distributed network element
JP4076586B2 (en) Systems and methods for multilayer network elements
US11303515B2 (en) IP MPLS PoP virtualization and fault tolerant virtual router
JP3842303B2 (en) System and method for multilayer network elements
EP1035685B1 (en) Data communication system with distributed multicasting
KR100612318B1 (en) Apparatus and method for implementing VLAN bridging and a VPN in a distributed architecture router
EP1468528B1 (en) Method and apparatus for priority-based load balancing for use in an extended local area network
EP0978977A1 (en) A method and system for improving high speed internetwork data transfers
US20080198849A1 (en) Scaling virtual private networks using service insertion architecture
JP2004534431A (en) Network tunneling
JPH05199229A (en) Router using multiple-hop transfer message enabling bridge-type data transfer
US6343330B1 (en) Arrangement for preventing looping of explorer frames in a transparent bridging domain having multiple entry points
US6947415B1 (en) Method and apparatus for processing packets in a routing switch
JPH10190715A (en) Network switching system
JP2022074129A (en) Method for sending BIERv6 packet and first network device
Cisco Configuring Transparent Bridging
Cisco Internetworking Technology Overview
CN108199960B (en) Multicast data message forwarding method, ingress routing bridge, egress routing bridge and system
WO2024061184A1 (en) Correspondence acquisition method, parameter notification method, and apparatus, device and medium
CN116886663A (en) E-TREE implementation mode, device and communication equipment based on RFC 8317
Jamoussi et al. Nortel's Virtual Network Switching (VNS) Overview

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1998931579

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1998931579

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1998931579

Country of ref document: EP