WO2002082723A2 - Multiprotocol wireless gateway - Google Patents

Multiprotocol wireless gateway Download PDF

Info

Publication number
WO2002082723A2
WO2002082723A2 (PCT/US2002/008170)
Authority
WO
WIPO (PCT)
Prior art keywords
processing
packets
ingress
egress
card
Prior art date
Application number
PCT/US2002/008170
Other languages
French (fr)
Other versions
WO2002082723A3 (en)
Inventor
Michael J. Badamo
David G. Barger
Tony M. Cantrell
Wayne Mcninch
Christopher C. Skiscim
David M. Summers
Peter Szydlo
Original Assignee
Megisto Systems
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Megisto Systems filed Critical Megisto Systems
Priority to EP02763851A priority Critical patent/EP1371198A2/en
Priority to AU2002338382A priority patent/AU2002338382A1/en
Priority to JP2002580556A priority patent/JP2005503691A/en
Publication of WO2002082723A2 publication Critical patent/WO2002082723A2/en
Publication of WO2002082723A3 publication Critical patent/WO2002082723A3/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/66: Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H04L 61/00: Network arrangements, protocols or services for addressing or naming
    • H04L 61/09: Mapping addresses
    • H04L 61/25: Mapping addresses of the same type
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/02: Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L 63/0272: Virtual private networks
    • H04L 63/04: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L 63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/10: Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L 63/102: Entity profiles
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/08: Protocols for interworking; Protocol conversion
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00: Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/03: Protecting confidentiality, e.g. by encryption
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/10: Flow control between communication endpoints
    • H04W 28/14: Flow control between communication endpoints using intermediate storage
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/18: Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
    • H04W 8/00: Network data management
    • H04W 8/26: Network addressing or numbering for mobility support
    • H04W 88/00: Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W 88/16: Gateway arrangements

Definitions

  • the present invention generally relates to the mobile Internet and more particularly relates to network infrastructure devices such as mobile Internet gateways that allow wireless data communication users to access content through the Internet protocol (IP) network.
  • the invention also relates to a process by which users of the IP network (or users connected through the IP network) can communicate with users of wireless data communications devices.
  • In order for users of wireless data communications devices to access content on or through the IP network, a gateway device is required that provides various access services and subscriber management. Such a gateway also provides a means by which users on the IP network (or connected through the IP network) can communicate with users of wireless data communications devices.
  • the architecture of such a device must adhere to and process the mobile protocols, be scalable and reliable, and be capable of flexibly providing protocol services to and from the IP network.
  • Traffic arriving from, or destined for, the IP router network (e.g. the Internet) can use a variety of IP-based protocols, sometimes in combination.
  • the device should also be able to provide protocol services to the radio access network (RAN) and to the IP Network, scale to large numbers of users without significant degradation in performance and provide a highly reliable system.
  • Devices have been used that include line cards directly connected to a forwarding device connected to the bus and a control device connected to the bus. The forwarding device performs the transmit, receive, buffering, encapsulation, de-encapsulation and filtering functions.
  • the forwarding device performs all processes related to layer two tunnel traffic. All forwarding decisions, as to ingress processing (including de-encapsulation, decryption, etc.), are made in one location. Given the dynamics of a system requiring access by multiple users and the possible transfer of large amounts of data, such a system must either limit the number of users to avoid data processing bottlenecks, or the system must seek faster and faster processing with faster and higher volume buses.
  • a network infrastructure device, particularly for handling traffic arriving from or destined to RAN users, including users of data communications protocol(s) specific to mobile and RAN technology, and for handling traffic arriving from, or destined to, the IP router network (e.g. the Internet)
  • a network gateway device with a physical interface for connection to a medium.
  • the device includes an ingress processor system for ingress processing of all or part of packets received from the physical interface and for sending ingress processed packets for egress processing.
  • the device also includes an egress processor system for receiving ingress processed packets and for egress processing of all or part of received packets for sending to the physical interface.
  • Interconnections are provided including an interconnection between the ingress processor system and the egress processor system, an interconnection between the ingress processor system and the physical interface and an interconnection between the egress processor system and the physical interface.
  • the device may have a single packet queue establishing a queue of packets awaiting transmission.
  • the packet queue may be the exclusive buffer for packets between packets entering the device and packet transmission.
  • the device allows packets to exit the device at a rate of the line established at the physical interface.
  • the ingress processing system processes packets including at least one or more of protocol translation, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT).
  • the egress processing system processes packets including at least one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT.
  • the ingress and egress processor systems may advantageously respectively include a fast path processor subsystem processing packets at speeds greater than or equal to the rate at which they enter the device.
  • the fast path processor system may provide protocol translation processing converting packets from one protocol to another protocol.
  • Each of the ingress and egress processor system may also include a security processor subsystem for processing security packets requiring one or more of decryption and authentication, the processing occurring concurrently with fast path processor packet processing.
  • the processor systems may also include a special care packet processor for additional packet processing concurrently with fast path processor packet processing.
  • the special care packet processor preferably processes packets including one or more of network address translation (NAT) processing and NAT processing coupled with application layer gateway processing (NAT-ALG).
  • the processor systems may also include a control packet processor for additional packet processing concurrently with fast path processor packet processing, including processing packets signaling the start and end of data sessions, packets used to convey information to a particular protocol and packets dependent on interaction with external entities.
  • the physical interface may include one or more line cards.
  • the ingress processor system may be provided as part of a service card.
  • the egress processor system may be provided as part of the service card or as part of another service card.
  • Such a card arrangement may be interconnected with a line card bus connected to the line card, a service card bus connected to at least one of the service card and the another service card and a switch fabric connecting the line card to at least one of the service card and the another service card.
  • the switch fabric may be used to connect any one of the line cards to any one of the service cards, whereby any line card can send packet traffic to any service card and routing of packet traffic is configured as one of statically and dynamically by the line card.
  • the service card bus may include a static bus part for connection of one of the service cards through the switch fabric to one of the line cards and a dynamic bus for connecting a service card to another service card through a fabric card.
  • This allows any service card to send packet traffic requiring ingress processing to any other service card for ingress processing and allowing any service card to send traffic requiring egress processing to any other service card for egress processing. With this the system can make use of unused capacity that may exist on other service cards.
  • a gateway process is provided including receiving packets from a network via a physical interface connected to a medium. The process includes the ingress processing of packets with an ingress processing system.
  • This processing includes one or more of protocol translation processing, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT).
  • the packets are then transferred to an egress packet processing subsystem.
  • the process also includes the egress processing of the packets with an egress processing system.
  • the processing includes one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT processing.
  • the line cards can be for various media and protocols.
  • the line cards may have one or multiple ports.
  • One or more of the line cards may be a gigabit Ethernet module, an OC-12 module or modules for other media types such as a 155-Mbps ATM OC-3c Multimode Fiber (MMF) module, a 155-Mbps ATM OC-3c Single-Mode Fiber (SMF) module, a 45-Mbps ATM DS-3 module, a 10/100-Mbps Ethernet I/O module, a 45-Mbps Clear-Channel DS-3 I/O module, a 52-Mbps HSSI I/O module, a 45-Mbps Channelized DS-3 I/O module, a 1.544-Mbps Packet T1 I/O module and others.
  • Fig. 1 A is a schematic drawing of a system using the device according to the invention.
  • Fig. 1B is a schematic drawing of another system using the device according to the invention.
  • Fig. 2A is a diagram showing a processing method and system according to the invention
  • Fig. 2B is a diagram showing further processing aspects of the processing method shown in Figure 2A
  • Fig. 3 is a diagram showing system components of an embodiment of the device according to the invention
  • Fig. 4A is a schematic representation of ingress protocol stack implementation, enabling processing of packets to produce an end-to-end packet (i.e. tunnels are terminated, IPSec packets are decrypted)
  • Fig. 4B is a schematic representation of egress protocol stack implementation, enabling processing of packets including necessary encapsulation and encryption
  • Fig. 5 is a diagram showing service card architecture according to an embodiment of the invention
  • Fig. 6 is a diagram showing the peripheral component interconnect (PCI) data bus structure of a service card according to the embodiment of Fig. 5;
  • Fig. 7 is a diagram showing the common switch interface (CSIX) data bus structure of a service card according to the embodiment of Figure 5;
  • Fig. 8 is a flow diagram showing a process according to the invention.
  • Fig. 9 is a diagram showing single point of queuing features of the invention.
  • the invention comprises a network infrastructure device or mobile Internet gateway 10 as well as a method of communication using the gateway 10.
  • Figures 1A and 1B depict two possible deployments of the invention.
  • the invention can form a separation point between two or more networks, or belong to one or more networks.
  • Gateway 10 handles data traffic to and from mobile subscribers via RAN 14.
  • data traffic arriving from, or destined to users on the RAN 14 must use one or more data communication protocols specific to mobile users and the RAN technology.
  • Traffic arriving from, or destined for the IP Router Network (e.g. the Internet) 12 can use a variety of IP-based protocols, sometimes in combination.
  • the architecture of the gateway 10, described here as the Packet Gateway Node (PGN) 10, solves the problem of being able to provide protocol services to the RAN 14 and to the IP Network 12, and to scale to large numbers of users without significant degradation in performance and provide a highly reliable system. It also provides for management of mobile subscribers (e.g., usage restrictions, policy enforcement) as well as tracking usage for purposes of billing and/or accounting.
  • the IP router network generally designated 12 may include connections to various different networks.
  • the IP router network 12 may include the Internet and may have connections to external Internet protocol networks 19 which in turn provide connection to Internet service provider/active server pages 18, or which may also provide a connection to a corporate network 17.
  • the IP router network 12 may also provide connections to the public switched telephone network (PSTN) gateway 16 or for example to local resources (data storage etc.) 15.
  • the showing of Figs. 1A and 1B is not meant to be all-inclusive. Other networks and network connections of various different protocols may be provided.
  • the PGN 10 may provide communications between one or more of the networks or provide communications between users of the same network.
  • the amount of ingress processing differs from egress processing.
  • a request sent for Web content might be very small (with a small amount of ingress processing and a small amount of egress processing).
  • the response, however, might be extremely large (e.g., a music file). This may require a great deal of ingress processing and a great deal of egress processing.
  • the serial handling of the ingress and egress processing for both the request and the response for a line card (for a particular physical interface connection) may cause problems such as delays. That is, when ingress and egress processing are performed serially, e.g., in the same processor or serially with multiple processors, traffic awaiting service can suffer unpredictable delays due to the asymmetric nature of the data flow.
  • FIG 2A shows an aspect of the PGN 10 and of the method of the invention whereby the ingress processing and egress processing are divided among different processing systems.
  • Packets are received at the PGN 10 at physical interface 11 and packets are transmitted from the PGN 10 via the physical interface 11.
  • the physical interface 11 may be provided as one or more line cards 22 as discussed below.
  • An ingress processing system 13 is connected to the physical interface 11 via interconnections 17.
  • the ingress processing system 13 performs the ingress processing of received packets.
  • This ingress processing of packets includes at least one or more of protocol translation, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT).
  • An egress processing system 15 is connected to the physical interface 11 via interconnections 17 and is also connected to the ingress processing system 13 by interconnections 17.
  • the egress processing system 15 performs the egress processing of received packets.
  • This egress processing of packets includes at least one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT.
  • the ingress processor 13 and egress processor 15 may be provided as part of a device integrated with the physical interface. Additionally, the ingress processor 13 and egress processor 15 may be provided as part of one or more service cards 24 connected to one or more line cards 22 via the interconnections 17.
  • the processing method and arrangement allows ingress and egress processing to proceed concurrently. As shown in Fig. 2B, one service card 24' may provide the ingress processing and another service card 24" may provide the egress processing; the ingress processing or egress processing may also be distributed between more than one service card 24.
  • a service card 24' includes ingress processor system 50 and egress processor system 52. Packets are received from a line card LC1 designated 22' and packets enter the ingress processor 50 where they are processed to produce end-to-end packets, i.e., tunnels (wherein the original IP packet header is encapsulated) are terminated, Internet protocol security (IPSec) packets are decrypted, Point-to-Point Protocol (PPP) is terminated and NAT or NAT-ALG is performed.
  • the end-to-end packets are then sent to another service card 24" via interconnections 17.
  • the egress processor system 56 encapsulates and encrypts the end-to-end packets and the packets are then sent to the LC2 designated 22" for transmission into the network at interface 11.
  • Each of the processor systems 13 and 15 in the example of Fig. 2A, and 50, 52, 54 and 56 in the example of Fig. 2B, is preferably provided with purpose-built processors. This allows the processing of special packets, security packets, control packets and simple protocol translation concurrently. This allows the PGN 10 to use a single point of queuing for the device.
  • a packet queue establishes a queue of packets awaiting transmission.
  • This packet queue is the exclusive buffer for packets between packets entering the device and packet transmission.
  • the packets exit the device or complete processing at a rate of the line established at the physical interface (at the rate of the packet ingress).
  • Each processor system preferably includes a fast path processor subsystem processing packets at speeds greater than or equal to the rate at which they enter the device.
  • the fast path processor system provides protocol translation processing converting packets from one protocol to another protocol.
  • Each processor preferably includes a security processor subsystem for processing security packets and preferably a control subsystem for control packets and a special care subsystem for special care packets.
  • the processor subsystems process concurrently.
  • the device allows context (information related to user traffic) to be virtually segregated from other context. Further, the use of multiple service cards allows context to be physically segregated, if this is required.
  • FIG. 3 shows a diagram of an embodiment of the hardware architecture.
  • the system architecture of device 10 separates packet processing from the traffic flowing to and from the line cards (LCs) 22 via a switch fabric or fabric card (FC) 20. Processing is performed in service cards (SCs) 24.
  • the LCs 22 are each connected to the FC 20 via an LC bus 26 (static LC bus).
  • the SCs 24 are connected by an SC static bus 28, SC dynamic bus (primary) 30 and SC dynamic bus (secondary) 32.
  • a control card (CC) 36 is connected to the LCs 22 via serial control bus 38.
  • the CC 36 is connected to SCs 24 via PCI bus 34.
  • a display card (DC) 42 may be connected to the CC 36 via DC buses 44.
  • DC display card
  • One or more redundant cards may be provided for any of the cards (modules) described herein (plural SCs, LCs, CCs, FCs may be provided). Also, multiple PCI buses may be provided for redundancy.
  • the architecture of the PGN 10 allows all major component types making up the device 10 to be identical. This allows for N+1 redundancy (N active components, 1 spare), or 1+1 redundancy (1 spare for each active component).
  • Several LCs 22 and several SCs 24 may be used as part of a single PGN 10. The number may vary depending upon the access need (types of connection and number of users) as well as upon the redundancy provided.
  • the LCs 22 each provide a network interface 11 for network traffic 13.
  • the LCs 22 handle all media access controller (MAC) and physical layer (Phy) functions for the system.
  • the FC 20 handles inter-card routing of data packets.
  • the SCs 24 each may implement forwarding path and protocol stacks.
  • the packets handled within the architecture are broadly categorized as fast path packets, special care packets, security packets and control packets.
  • Fast path packets are those packets requiring protocol processing and protocol translation (converting from one protocol to another) at speeds greater than or equal to the rate at which they enter the device.
  • Special care packets require additional processing in addition to the fast path packets. This might include Network Address Translation (NAT) or NAT coupled with application layer gateway processing (NAT-ALG).
  • Security packets require encryption, decryption, authentication or the generation of authentication data.
  • Control packets signal the start and end of data sessions, or are used to convey information to a particular protocol (e.g., that the destination is unreachable). Control packets may also be dependent on interaction with external entities such as policy servers.
  • the processing is divided according to the amount of processing required of the packet.
  • the different classes of packet traffic are then dispatched to specialized processing elements so they may be processed concurrently.
  • the concurrent nature of the processing allows for gains in throughput and speed not achievable by the usual serial processing approaches.
  • all fast path processing is performed at a rate greater than or equal to that of the rate of ingress to the PGN 10. This eliminates the need for any queuing of packets until the point at which they are awaiting transmission. Thus the users of the device do not experience delays due to fast path protocol processing or protocol translation.
  • Packet manipulation with respect to tunnel termination, encryption, queuing and scheduling takes place on the SC 24.
  • the master of the system is the CC 36.
  • the CC 36 manages the system, and acts as the point of communication with other entities in the network, i.e. the policy servers and the accounting manager.
  • the flexible routing therefore enables any service card 24 or line card 22, in particular a spare service card 24 or line card 22, to assume the role of another service card 24 or line card 22 by only changing the routing through the switch fabric card (FC) 20.
  • the PGN 10 divides the processing of in-bound protocols (e.g., the ingress path of LCI 22' through ingress processor 50 as shown in Fig. 2B), the out-bound protocols (e.g., the egress path of LC2 22" through egress processor 56 as shown in Fig. 2B), protocol control messaging, and the special handling of traffic requiring encryption.
  • IP Internet protocol
  • the Internet protocol preferably is used at the network layer functioning above the physical/link layer (physical infrastructure, link protocols - PPP, Ethernet, etc.) and below the application layer (interface with user, transport protocols etc.).
  • the device 10 can be used with the IPSec protocol for securing a stream of IP packets.
  • the PGN 10 will perform ingress processing, including implementing protocol stacks 55 in a software process including de-encapsulating and decrypting on the ingress side, and implementing protocol stack 57 including encapsulating and encrypting on the egress side.
  • Fig. 4A illustrates this schematically with the ingress protocol stack 55 implementation being shown with processing proceeding from the IP layer 53 to the IP security layer 51. This can involve, for example, de-encapsulating and decrypting, protocol translating, authenticating, PPP terminating and NAT, with the output being end-to-end packets.
  • Fig. 4B schematically illustrates the egress side protocol stack 57 implementation, wherein the end-to-end packets may be encapsulated, encrypted and protocol translated, with authentication data generation, PPP generation and NAT.
  • the IPSec encapsulation and/or encryption is shown moving from the IP security layer 51 to the IP layer 53.
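The ingress and egress protocol stack behavior described above can be illustrated with a toy example. The following Python sketch is purely illustrative and is not the PGN's hardware implementation: the XOR step merely stands in for IPSec decryption/encryption, and the field and function names are invented for the example. It shows the ingress stack terminating a tunnel and decrypting to yield an end-to-end packet, and the egress stack re-encrypting and re-encapsulating it.

```python
# Illustrative protocol-stack sketch (toy "encryption" and header handling,
# invented for the example): the ingress stack strips the tunnel header and
# decrypts to yield an end-to-end IP packet; the egress stack re-encrypts and
# re-encapsulates before transmission.
def ingress_stack(frame):
    inner = frame["payload"]                                  # tunnel terminated: outer header dropped
    clear = bytes(b ^ 0x5A for b in inner["ciphertext"])      # stand-in for IPSec decryption
    return {"src": inner["src"], "dst": inner["dst"], "data": clear}   # end-to-end packet

def egress_stack(packet, tunnel_endpoint):
    cipher = bytes(b ^ 0x5A for b in packet["data"])          # stand-in for IPSec encryption
    inner = {"src": packet["src"], "dst": packet["dst"], "ciphertext": cipher}
    return {"outer_dst": tunnel_endpoint, "payload": inner}   # re-encapsulated frame

frame = {"outer_dst": "gw",
         "payload": {"src": "mobile", "dst": "server",
                     "ciphertext": bytes(b ^ 0x5A for b in b"GET /")}}
e2e = ingress_stack(frame)
print(e2e["data"])                                            # b'GET /'
print(egress_stack(e2e, "corp-vpn")["outer_dst"])             # corp-vpn
```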
  • Any line card 22 can send traffic to any service card 24. This routing can be configured statically or can be determined dynamically by the line card 22.
  • Any service card 24 can send traffic requiring ingress processing (e.g. from SCI 24' to SC2 24") to any other service card 24 for ingress processing.
  • Line cards 22 with the capability to classify ingress traffic can thus make use of unused capacity on the ingress service cards 24 by changing the routing.
  • Ingress processing 50 is physically separate from egress processing 56 (and also separate from processing at 52 and 54). This enables ingress processing to proceed concurrently with egress processing resulting in a performance gain over a serialized approach.
  • Any service card 24 handling ingress processing (e.g., at 50) can send traffic to any other service card 24 for egress processing (e.g., at 56).
  • the device can make use of unused capacity that may exist on other service cards 24.
  • the line cards (LC-x) 22 handle the physical interfaces.
  • the line cards 22 are connected via the LC bus 26 to the (redundant) switch fabric card(s) (FC) 20.
  • Line cards 22 may be provided as two types, intelligent and non-intelligent.
  • An intelligent line card 22 can perform packet classification (up to Layer 3, network layer) whereas the non-intelligent line cards 22 cannot.
  • classified packets can be routed, via the FC 20, to any service card 24 (SC) where ingress and egress processing occurs.
  • SC service card 24
  • This allows for load balancing since the LC 22 can route to the SC 24 with the least loaded ingress processor.
  • the assignment of LCs 22 to SCs 24 is static, but programmable.
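As a rough software illustration of the load-balancing idea (the load metric, card names and flow-stickiness rule are assumptions, not taken from the patent), an intelligent line card could route each new flow to the service card whose ingress processor currently reports the lowest load:

```python
# Illustrative load-balancing sketch (the load values and card names are
# assumptions): an intelligent line card that can classify packets routes each
# new flow to the service card whose ingress processor is least loaded.
ingress_load = {"SC1": 0.72, "SC2": 0.35, "SC3": 0.90}   # fraction of capacity in use

def pick_ingress_sc(loads):
    return min(loads, key=loads.get)

def route_flow(flow_id, loads, assignment):
    # keep an established flow on the SC it was first assigned to
    return assignment.setdefault(flow_id, pick_ingress_sc(loads))

assignment = {}
print(route_flow("subscriber-42", ingress_load, assignment))   # SC2 (least loaded)
ingress_load["SC2"] = 0.80
print(route_flow("subscriber-42", ingress_load, assignment))   # still SC2 (sticky)
print(route_flow("subscriber-77", ingress_load, assignment))   # SC1 (now least loaded)
```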
  • FIG. 5 shows the arrangement of service cards 24 (SC-x).
  • SC 24 provides ingress processing with ingress processing subsystem 62 (for fast path processing) and egress processing with physically separate egress processing subsystem 64 (for fast path processing).
  • the processing functions of these subsystems 62 and 64 are separate.
  • Each ingress processing system contains separate paths 66 for special processing and separate components 68, 70 and 73 for special processing.
  • Each egress processing system contains a separate path 69 for special processing and the separate components 68, 70 and 74 for special processing.
  • IP packets enter the SC 24' through the FC interface 20; this is traffic coming, e.g., from LC1 22'.
  • Packets enter the ingress processor system 50, where they are classified as subscriber data or control data packets. Control packets are sent up to one of two microprocessors, the control processor 70 or the special care processor 68. Protocol stacks (e.g., 55 or 57), implemented in software, process the packets at the control processor 70 or the special care processor 68.
  • a subscriber data packet is processed by the ingress processing subsystem 62 and/or security subsystem 73 to produce an end-to-end packet (i.e., tunnels are terminated and IPSec packets are decrypted).
  • the end-to-end packet is sent to another SC 24" via the FC 20. Packets enter the SC 24" through the interface 72 to the FC 20 and enter the egress processor system of that service card (e.g., SC 24"), where all the necessary encapsulation and encryption is performed. The packet is next sent to, e.g., LC2 22", which must transmit the packet into the network. Protocol stacks running on the control and special care processors may also inject a packet into the egress processor for transmission.
  • the flexibility of routing ingress-to-egress, ingress-to-ingress (dividing ingress processing over more than one service card 24) and egress-to-egress allows the device to dynamically adapt to changing network loads as sessions are established and torn down.
  • Processing resources for ingress and egress can be allocated on different service cards 24 for a given subscriber's traffic to balance the processing load, thus providing a mechanism to maintain high levels of throughput.
  • a subscriber data session is established on a given SC 24 for ingress and the same, or another SC 24 for egress. Information associated with this session, its context, is maintained or persists on the ingress and egress processor (e.g., of the processing subsystems 62 and 64).
  • ingress-to-ingress routing permits the traffic to enter via a different LC 22 (because of the nature of the mobile user, such a user could have moved and may now be coming in via a different path) and be handled by the ingress processing subsystem of the SC 24 holding the context (e.g., by ingress processing subsystem 62 of SC 24').
  • the context information may be held and controlled by memory controller 76. Moving context data can be problematic.
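A minimal sketch of the session-context idea follows. The data layout and names are invented for illustration; the point is only that the context stays on the service card where the session was established, so traffic that later arrives on a different line card is still steered to that card:

```python
# Illustrative context-lookup sketch (data layout invented for the example): a
# subscriber's session context persists on the service card where it was
# created, so a packet arriving on a different line card after the subscriber
# moves is still steered to the card holding the ingress context.
session_context = {}   # subscriber id -> {"ingress_sc": ..., "egress_sc": ..., "state": ...}

def establish_session(subscriber, ingress_sc, egress_sc):
    session_context[subscriber] = {"ingress_sc": ingress_sc,
                                   "egress_sc": egress_sc,
                                   "state": {"pkts": 0}}

def steer_ingress(subscriber, arriving_lc):
    ctx = session_context[subscriber]
    ctx["state"]["pkts"] += 1
    # whichever LC the traffic now enters on, it goes to the SC holding the context
    return arriving_lc, ctx["ingress_sc"]

establish_session("sub-7", ingress_sc="SC1", egress_sc="SC4")
print(steer_ingress("sub-7", "LC2"))   # ('LC2', 'SC1')
print(steer_ingress("sub-7", "LC5"))   # ('LC5', 'SC1') -- subscriber moved, context did not
```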
  • Processing subscriber data packets on the SC 24 occurs in one of three modes: fast path, security and special care.
  • Fast path processing is aptly named because it includes any processing of packets through the SC 24 at a rate greater than or equal to the ingress rate of the packets.
  • These processing functions are implemented in the ingress processing subsystem 62 and egress processing subsystem 64 using custom-built hardware. Packets that require processing that cannot be done in the fast path are shunted off on the path 66 or 69 for either special care processing with processor 68 or security processing with processor 73 or 74.
  • Special care processing includes packets requiring PPP and GTP re-ordering or packets requiring NAT-ALG.
  • Security processing is performed for IPSec packets or packets requiring IPSec treatment.
  • the internal interfaces of PGN 10 enable the connections amongst ingress and egress processing functions.
  • the ingress and egress PCI buses 66 and 69 are the central data plane interfaces from the control plane to the data plane.
  • the ingress PCI bus 66 (see Fig. 6) connects the ingress processing subsystem 62, the security subsystem 73, the special care processor 68 and the control processor subsystem 70.
  • the control processor subsystem 70 includes local system controller 86, synchronous dynamic random access memory (SDRAM) 87, cache 88, global system controller 83 (providing a connection to PCI bus 34), SDRAM 85 and control processor 90.
  • the global system controller 83, the control processor 90 and the local system controller 86 are connected together via a bus connection 67.
  • the egress PCI bus 69 connects egress processor FPGA 81, encryption subsystem or security subsystem 74, special care processor 68 and control processor system 70.
  • Each of the ingress PCI bus 66 and the egress PCI bus 69 has an aggregate bandwidth of approximately 4 Gb/s. They are used to pass data packets to and from the fast path hardware. For this reason, the ingress processor FPGA 62 is the controller on the ingress PCI bus 66, and the egress processor FPGA 64 (connected to egress processor 81) is the controller on the egress PCI bus 69. These PCI buses 66 and 69 are shared with the control plane. Control plane functions on the PCI bus 34 are discussed below.
  • the special care subsystem 68, the control processor system 70 and the security subsystems 73 and 74 interface to the ingress and egress processing subsystems 62 and 64 via the pair of PCI buses 66 and 69.
  • Figure 6 shows how these buses 66 and 69 connect system components together.
  • One PCI bus 66 is specific to ingress traffic, the other PCI bus 69 carries egress traffic.
  • the ingress processor subsystem (ingress FPGA) 62 is connected to ingress PCI bus 66.
  • the egress processor subsystem (egress FPGA) 64, with connected egress processor 81, is connected to egress PCI bus 69.
  • the control processor system 70, including local system controller 86 (e.g., Galileo 64260) with SDRAM 87, control processor 90 and cache 88, works with the special care subsystem 68, acting as a bridge between the buses 66 and 69.
  • the security subsystems 73 and 74 are respectively connected to buses 66 and 69. This arrangement will allow egress traffic to get to the ingress bus on the same SC and vice-versa. This may be utilized only for the case of IPSec processing.
  • Each of the PCI buses 66 and 69 is 64 bits wide and runs at 66 MHz. This provides a bus bandwidth of 4.2 Gb/s. Assuming 60% utilization on these buses, they have an effective bandwidth of 2.5 Gb/s. If the system is loaded with 50% of the line traffic going to the special care processors of the special care subsystem and 25% going to the security subsystem 74, half of which goes over the bridge, this would use up 1.75 Gb/s.
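The bus budget quoted above can be reproduced with simple arithmetic. In the sketch below the 2 Gb/s line rate is an assumption chosen so that the stated 1.75 Gb/s figure falls out; the text itself does not give the line rate:

```python
# Worked version of the PCI bus budget above. The 2 Gb/s line rate is an
# assumption chosen so that the quoted 1.75 Gb/s figure falls out; the patent
# text does not state the line rate explicitly.
bus_width_bits = 64
bus_clock_hz = 66e6
raw_bw = bus_width_bits * bus_clock_hz            # ~4.2 Gb/s raw
effective_bw = raw_bw * 0.60                      # ~2.5 Gb/s at 60% utilization

line_rate = 2e9                                   # assumed line rate in bits/s
special_care_share = 0.50                         # 50% of line traffic
security_share = 0.25                             # 25% of line traffic
bridged_share = security_share / 2                # half of the security traffic crosses the bridge

used = line_rate * (special_care_share + security_share + bridged_share)
print(f"raw {raw_bw/1e9:.1f} Gb/s, effective {effective_bw/1e9:.2f} Gb/s, used {used/1e9:.2f} Gb/s")
# raw 4.2 Gb/s, effective 2.53 Gb/s, used 1.75 Gb/s
```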
  • Figure 7 shows the data buses 28, 32 and 30 on which packets are carried to and from the ingress and egress processing cores 62 and 64 via CSIX buses.
  • the ingress processor subsystem 62 has a 3.2 Gb/s (32 bits x 100 MHz) primary input from CSIX bus 91 with switch fabric interface part (e.g., VSC872) 71.
  • Bus 91 carries data from the line card 22' via bus 28 and via the FC 20.
  • the ingress processor subsystem 62 has a set of two (2) 3.2 Gb/s primary outputs with CSIX buses 77 with switch fabric interface part (e.g., VSC872) 72" that will carry end-to-end data packets to the switch fabric (dynamic section) 20 for egress processing on the egress service card (the connected service card, e.g., SC 24").
  • the ingress processing element 62 has a secondary output in addition.
  • This 3.2 Gb/s bi-directional CSIX link 80/83 with switch fabric interface part (VSC872) 72' to the switch fabric 20 is for ingress processor system 50 (e.g., of one SC 24') to ingress processor 54 (cross-service-card, e.g., to another service card 24") packet transfers.
  • the egress processing subsystem 64 receives data at inputs from two 3.2 Gb/s CSIX links 77 out of the switch fabric interface part (e.g., VSC872) 72". Packets coming to the egress processor subsystem 64 on these links have already been processed down to the end-to-end packet.
  • after processing by the egress processor (e.g., 52 or 56), the packet traverses the static switch fabric 20 on its way to the line card 22.
  • Each of the static buses 26 and 28 is comprised of 4 high-speed unidirectional differential pairs. Two pairs support subscriber data in the ingress direction while the other two pairs support subscriber data in the egress direction. Each differential pair is a 2.64384 Gbps high-speed LVDS channel. Each channel contains both clock and data information and is encoded to aid in clock recovery at the receiver. At this channel rate the information rate is 2.5 Gbps. Since unidirectional subscriber data flows in 2 channels, or pairs, between LCs 22 and SCs 24 for each static bus 26 and 28, the aggregate information rate is 5 Gbps per direction per bus.
  • the primary dynamic buses 30 connect the ingress processor of one service card 24 to the egress processor of another service card 24 via the fabric card 20 on a frame-by-frame basis.
  • Each primary dynamic bus 30 is comprised of 8 high-speed unidirectional differential pairs. Four pairs support subscriber data in the ingress direction while the other four pairs support subscriber data in the egress direction.
  • Each differential pair is a 2.64384 Gbps high-speed LVDS channel. Each channel contains both clock and data information and is encoded to aid in clock recovery at the receiver. At this channel rate the information rate is 2.5 Gbps. Since unidirectional subscriber data flows in 4 channels, or pairs, the aggregate information rate for a given direction is 10 Gbps.
  • Secondary dynamic buses 32 are electrically identical to the static buses, but since they are dynamic, subscriber data may be rerouted on a frame-by-frame basis.
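The channel and bus figures above reduce to straightforward arithmetic; the short sketch below only restates the numbers given in the text (no assumptions beyond rounding):

```python
# Quick arithmetic for the bus figures above: each LVDS pair carries
# 2.64384 Gbps on the wire, of which 2.5 Gbps is information after the
# clock/encoding overhead.
channel_line_rate = 2.64384    # Gbps per differential pair on the wire
channel_info_rate = 2.5        # Gbps of subscriber data per pair
overhead = 1 - channel_info_rate / channel_line_rate

static_bus_per_direction = 2 * channel_info_rate    # 2 pairs per direction -> 5 Gbps
dynamic_bus_per_direction = 4 * channel_info_rate   # 4 pairs per direction -> 10 Gbps

print(f"encoding/clock overhead: {overhead:.1%}")                 # ~5.4%
print(f"static bus:  {static_bus_per_direction} Gbps per direction")
print(f"dynamic bus: {dynamic_bus_per_direction} Gbps per direction")
```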
  • the process of the invention is illustrated generally in the flow diagram of Fig. 8.
  • the process begins at 100 by providing the device infrastructure in the form of connection buses 28, 30 and 32 and providing a switch fabric 20 for selectively interconnecting the connection buses.
  • At least a first line card 22', second line card 22", a first service card 24', a second service card 24", and a control card 36 are provided.
  • a redundant line card 22, redundant service card 24, a redundant fabric card 20 and a redundant control card 36 may be provided.
  • the fabric card 20 or fabric cards 20 are connected and configured to establish a substantially static connection from first line card 22' via line card bus 26 through fabric card 20 to service card static bus 28 to service card 1 designated 24'.
  • the fabric card 20, as indicated at 102, also provides a connection from line card 22 designated 22", the associated line card bus 26, the fabric card 20 and the service card static bus 28 associated with service card 2 designated 24".
  • Step 104 shows the further steps of receiving packets at the first line card 22' and transferring the packets via LC bus 26, fabric card 20 and SC static bus 28 to the first service card 24'.
  • the first service card 24' processes packets with ingress processing system 50.
  • control packets are sent to either the control processor 70 or the special care processor 68 and subscriber data packets are processed to produce the end-to-end packets as shown at 106.
  • the necessary de-encapsulation and decryption are performed.
  • the end-to-end packets are transferred via FC 20 to the egress processing system 56 of the second service card 24" via dynamic bus 30 (primary dynamic bus).
  • the egress packet processor of second service card 24" processes the end-to-end packets including encapsulation and encryption.
  • the packets are then sent to a line card, such as second line card 22" as indicated at step 112.
  • the line card then transmits packets into the network as shown at 114.
  • the protocol stack 55 running on the control processor 70 and special care subsystem 68 may also inject a packet into the ingress processor for transmission.
  • the control processor 70 of service card 24" and the special care processor 68 of service card 24" may also treat further packets for egress processing.
  • the entire system may be monitored using a display card 42 via display buses 44.
  • the line cards may be monitored via serial control buses 38.
  • the control card 36 may have other output interfaces such as EMS interfaces 48 which can include any one or several of 10/100 base T outputs 43 and serial output 47 and a PCMCIA (or compact flash) output 49.
  • the device 10 supports a single point of queuing.
  • a customer set 120, each set 120 comprising multiple individuals, will be assured of a certain set of protocol services and a portion of the total bandwidth available within the device. It is therefore necessary to be able to monitor the rate of egress of the customer set's traffic.
  • Figure 9 shows multiple customer sets 120 entering the device using different physical interfaces 22.
  • customer set #5 can enter the device using LC-5 and LC-7.
  • the ingress protocol processing for this customer set #5 is hosted on SC-3 and SC-4 as indicated by ingress traffic 122 while egress processing is hosted on SC-6 as shown by traffic after ingress protocol processing 124.
  • the FC switches the ingress traffic from LC-5 and LC-7 to the two SCs 3 and 4 for ingress protocol processing.
  • SC-6 provides the common point of aggregation and contains one or more queues (at the single location) for holding a customer set's traffic awaiting egress 126 to the LC. Queuing is necessary as the ingress rate of the customer set's aggregated traffic may, at times, exceed the egress rate of a particular physical interface. Monitoring of the egress rate of the customer set's traffic then occurs at the point of aggregation.
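The single point of queuing and egress-rate control can be pictured with a small sketch. The class, rate values and pacing rule below are invented for illustration and are not the patent's scheduler; the sketch only shows an aggregation queue per customer set that buffers ingress bursts and drains them against an assured egress share:

```python
# Illustrative aggregation-queue sketch (rate numbers and names are invented):
# ingress from several line cards is aggregated into one egress queue per
# customer set, and the dequeue loop both paces and measures the customer
# set's egress rate against its assured share.
from collections import deque

class CustomerSetQueue:
    def __init__(self, assured_rate_bps):
        self.q = deque()
        self.assured_rate_bps = assured_rate_bps
        self.sent_bits = 0

    def enqueue(self, pkt_bits):          # ingress may arrive faster than egress
        self.q.append(pkt_bits)

    def dequeue_for(self, interval_s):    # drain at most the assured share this interval
        budget = self.assured_rate_bps * interval_s
        drained = 0
        while self.q and drained + self.q[0] <= budget:
            drained += self.q.popleft()
        self.sent_bits += drained
        return drained

cs5 = CustomerSetQueue(assured_rate_bps=1_000_000)    # customer set #5, 1 Mbps assured
for bits in (400_000, 400_000, 400_000):              # bursts in from LC-5 and LC-7
    cs5.enqueue(bits)
print(f"{cs5.dequeue_for(1.0)} bits sent, {len(cs5.q)} packet(s) still queued")
# 800000 bits sent, 1 packet(s) still queued
```

Monitoring the egress rate at this single aggregation point is what lets the device enforce a customer set's bandwidth share even though its traffic may enter through several line cards.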
  • the invention provides a device based on modular units.
  • the term card is used to denote such a modular unit.
  • the modules may be added and subtracted and combined with identical redundant modules.
  • the principles of this invention may be practiced with a single unit (without modules) or with features of modules described herein combined with other features in different functional groups.

Abstract

A network gateway device has a physical interface for connection to a medium. The device has an ingress processor system for ingress processing of all or part of packets received from the physical interface and for sending ingress processed packets for egress processing. The device has an egress processor system for receiving ingress processed packets and for egress processing of all or part of received packets for sending to the physical interface. Interconnections are provided, including an interconnection between the ingress processor and the egress processor, an interconnection between the ingress processor and the physical interface, and an interconnection between the egress processor and the physical interface. A packet queue is provided with packets awaiting transmission. The packet queue may be the exclusive buffer for packets between packets entering the device and packet transmission. The packets may exit the device at a rate of the line established at the physical interface. The ingress processing system processes packets including at least one or more of protocol translation, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT). The egress processing system processes packets including at least one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT.

Description

NETWORK INFRASTRUCTURE DEVICE FOR DATA TRAFFIC TO AND FROM MOBILE UNITS
FIELD OF THE INVENTION
The present invention generally relates to the mobile Internet and more particularly relates to network infrastructure devices such as mobile Internet gateways that allow wireless data communication users to access content through the Internet protocol (IP) network. The invention also relates to a process by which users of the IP network (or users connected through the IP network) can communicate with users of wireless data communications devices.
BACKGROUND OF THE INVENTION
In order for users of wireless data communications devices to access content on or through the IP network, a gateway device is required that provides various access services and subscriber management. Such a gateway also provides a means by which users on the IP network (or connected through the IP network) can communicate with users of wireless data communications devices.
The architecture of such a device must adhere to and process the mobile protocols, be scalable and reliable, and be capable of flexibly providing protocol services to and from the IP network. Traffic arriving from, or destined for the IP Router Network (e.g. the Internet) can use a variety of IP-based protocols, sometimes in combination. The device should also be able to provide protocol services to the radio access network (RAN) and to the IP Network, scale to large numbers of users without significant degradation in performance and provide a highly reliable system. Devices have been used that include line cards directly connected to a forwarding device connected to the bus and a control device connected to the bus. The forwarding device performs the transmit, receive, buffering, encapsulation, de-encapsulation and filtering functions. In such an arrangement the forwarding device performs all processes related to layer two tunnel traffic. All forwarding decisions, as to ingress processing (including de-encapsulation, decryption, etc.), are made in one location. Given the dynamics of a system requiring access by multiple users and the possible transfer of large amounts of data, such a system must either limit the number of users to avoid data processing bottlenecks, or the system must seek faster and faster processing with faster and higher volume buses.
SUMMARY AND OBJECTS OF THE INVENTION
It is an object of the invention to provide a network device, particularly a gateway device with an ingress processor system for ingress processing of all or part of received packets, which is at least partially separate from an egress processor system for receiving ingress processed packets and for egress processing of all or part of received packets whereby packet processing is efficiently handled. It is another object of the invention to provide a network infrastructure device, particularly for handling traffic arriving from or destined to RAN users, including users of data communications protocol(s) specific to mobile and RAN technology and for handling traffic arriving from, or destined to the IP router network (e.g. the Internet), in which the system architecture of the device provides protocol services to the RAN and the IP network and is able to scale to large numbers of users without processing or transfer bottlenecks, and without significant degradation in performance while providing a highly reliable device.
It is a further object of the invention to provide a network gateway device for communications back and forth between RAN technology and IP network systems providing protocol services for handling traffic between the systems and for processing packets from line cards connected as part of the gateway device with ingress packet processing at least partially physically separate from egress packet processing.
According to the invention, a network gateway device is provided with a physical interface for connection to a medium. The device includes an ingress processor system for ingress processing of all or part of packets received from the physical interface and for sending ingress processed packets for egress processing. The device also includes an egress processor system for receiving ingress processed packets and for egress processing of all or part of received packets for sending to the physical interface. Interconnections are provided including an interconnection between the ingress processor system and the egress processor system, an interconnection between the ingress processor system and the physical interface and an interconnection between the egress processor system and the physical interface. Advantageously, the device may have a single packet queue establishing a queue of packets awaiting transmission. The packet queue may be the exclusive buffer for packets between packets entering the device and packet transmission. The device allows packets to exit the device at a rate of the line established at the physical interface. The ingress processing system processes packets including at least one or more of protocol translation, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT). The egress processing system processes packets including at least one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT. The ingress and egress processor systems may advantageously respectively include a fast path processor subsystem processing packets at speeds greater than or equal to the rate at which they enter the device. The fast path processor system may provide protocol translation processing converting packets from one protocol to another protocol.
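As a software analogy of the arrangement just described (illustrative only; the patent describes purpose-built hardware, and every name below is invented for the example), the ingress and egress processor systems can be pictured as two concurrently running stages joined by an interconnection, with a single queue holding packets that await transmission at the physical interface:

```python
# Illustrative sketch (not the patented hardware): models the split of ingress
# and egress processing into separate, concurrently running systems joined by
# an interconnection, with a single transmit queue holding packets awaiting
# transmission at the line rate of the physical interface.
import queue
import threading

fabric = queue.Queue()      # stands in for the ingress-to-egress interconnection
tx_queue = queue.Queue()    # the single point of queuing: packets awaiting transmission

def ingress_process(pkt):
    # de-encapsulation, decryption, authentication, PPP termination, NAT...
    pkt = dict(pkt)
    pkt.pop("tunnel_header", None)
    pkt["decrypted"] = True
    return pkt              # end-to-end packet

def egress_process(pkt):
    # encapsulation, encryption, authentication data, PPP generation, NAT...
    pkt = dict(pkt)
    pkt["tunnel_header"] = "new-outer-header"
    pkt["encrypted"] = True
    return pkt

def ingress_system(rx_packets):
    for pkt in rx_packets:
        fabric.put(ingress_process(pkt))
    fabric.put(None)        # end-of-stream marker for the sketch

def egress_system():
    while (pkt := fabric.get()) is not None:
        tx_queue.put(egress_process(pkt))

rx = [{"tunnel_header": "outer", "payload": f"p{i}"} for i in range(4)]
t_in = threading.Thread(target=ingress_system, args=(rx,))
t_eg = threading.Thread(target=egress_system)
t_in.start(); t_eg.start(); t_in.join(); t_eg.join()

while not tx_queue.empty():             # drained at the line rate in real hardware
    print(tx_queue.get()["payload"])
```

In the real device the hand-off between the two systems is the switch fabric interconnection and the only buffering occurs in the queue of packets awaiting transmission; the in-memory queue used for the hand-off here is just a stand-in for that interconnection.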
Each of the ingress and egress processor system may also include a security processor subsystem for processing security packets requiring one or more of decryption and authentication, the processing occurring concurrently with fast path processor packet processing. The processor systems may also include a special care packet processor for additional packet processing concurrently with fast path processor packet processing. The special care packet processor preferably processes packets including one or more of network address translation (NAT) processing and NAT processing coupled with application layer gateway processing (NAT-ALG). The processor systems may also include a control packet processor for additional packet processing concurrently with fast path processor packet processing, including processing packets signaling the start and end of data sessions, packets used to convey information to a particular protocol and packets dependent on interaction with external entities.
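The concurrent handling of fast path, security, special care and control packets can be sketched as a classify-and-dispatch step. The classification keys and handlers below are assumptions made for the example, not the device's actual logic:

```python
# Illustrative dispatch sketch (assumed classification rules, not the patent's
# hardware logic): packets are sorted into fast path, security, special care
# and control classes and handed to separate workers so the classes can be
# processed concurrently.
from concurrent.futures import ThreadPoolExecutor

def classify(pkt):
    if pkt.get("control"):
        return "control"        # session start/end, protocol signalling
    if pkt.get("ipsec"):
        return "security"       # needs decryption/authentication
    if pkt.get("needs_nat_alg"):
        return "special_care"   # NAT or NAT-ALG handling
    return "fast_path"          # protocol translation at line rate

handlers = {
    "fast_path":    lambda p: {**p, "translated": True},
    "security":     lambda p: {**p, "decrypted": True},
    "special_care": lambda p: {**p, "nat_applied": True},
    "control":      lambda p: {**p, "signalled": True},
}

packets = [{"id": 1}, {"id": 2, "ipsec": True},
           {"id": 3, "needs_nat_alg": True}, {"id": 4, "control": True}]

# one worker per packet class, mirroring the dedicated processor subsystems
with ThreadPoolExecutor(max_workers=len(handlers)) as pool:
    futures = [pool.submit(handlers[classify(p)], p) for p in packets]
    results = [f.result() for f in futures]
print(results)
```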
The physical interface may include one or more line cards. The ingress processor system may be provided as part of a service card. The egress processor system may be provided as part of the service card or as part of another service card. Such a card arrangement may be interconnected with a line card bus connected to the line card, a service card bus connected to at least one of the service card and the another service card and a switch fabric connecting the line card to at least one of the service card and the another service card. The switch fabric may be used to connect any one of the line cards to any one of the service cards, whereby any line card can send packet traffic to any service card and routing of packet traffic is configured as one of statically and dynamically by the line card. The service card bus may include a static bus part for connection of one of the service cards through the switch fabric to one of the line cards and a dynamic bus for connecting a service card to another service card through a fabric card. This allows any service card to send packet traffic requiring ingress processing to any other service card for ingress processing and allowing any service card to send traffic requiring egress processing to any other service card for egress processing. With this the system can make use of unused capacity that may exist on other service cards. According to another aspect of the invention, a gateway process is provided including receiving packets from a network via a physical interface connected to a medium. The process includes the ingress processing of packets with an ingress processing system. This processing includes one or more of protocol translation processing, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT). The packets are then transferred to an egress packet processing subsystem. The process also includes the egress processing of the packets with an egress processing system. The processing includes one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT processing.
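The card arrangement can be summarized as a small routing sketch (card names and the per-session mapping are invented for the example): a static bus ties a line card to a service card through the switch fabric, while the dynamic bus lets the ingress service card hand end-to-end packets to another service card for egress processing:

```python
# Minimal topology sketch (names invented for the example): a static bus ties
# each line card to a service card through the switch fabric, while the
# dynamic bus lets one service card forward end-to-end packets to another
# service card for egress processing.
STATIC_BUS = {"LC1": "SC1", "LC2": "SC2"}     # LC -> ingress SC (programmable)
DYNAMIC_BUS = {"SC1": "SC2", "SC2": "SC1"}    # ingress SC -> egress SC (per session)

def packet_path(in_lc, out_lc):
    ingress_sc = STATIC_BUS[in_lc]            # static bus through the fabric
    egress_sc = DYNAMIC_BUS[ingress_sc]       # dynamic bus through the fabric card
    return [in_lc, ingress_sc, egress_sc, out_lc]

print(packet_path("LC1", "LC2"))              # ['LC1', 'SC1', 'SC2', 'LC2']
```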
The line cards can be for various media and protocols. The line cards may have one or multiple ports. One or more of the line cards may be a gigabit Ethernet module, an OC-12 module or modules for other media types such as a 155-Mbps ATM OC-3c Multimode Fiber (MMF) module, a 155-Mbps ATM OC-3c Single-Mode Fiber (SMF) module, a 45-Mbps ATM DS-3 module, a 10/100-Mbps Ethernet I/O module, a 45-Mbps Clear-Channel DS-3 I/O module, a 52-Mbps HSSI I/O module, a 45-Mbps Channelized DS-3 I/O module, a 1.544-Mbps Packet T1 I/O module and others.
The various features of novelty which characterize the invention are pointed out with particularity in the claims annexed to and forming a part of this disclosure. For a better understanding of the invention, its operating advantages and specific objects attained by its uses, reference is made to the accompanying drawings and descriptive matter in which preferred embodiments of the invention are illustrated.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
Fig. 1 A is a schematic drawing of a system using the device according to the invention;
Fig. 1B is a schematic drawing of another system using the device according to the invention;
Fig. 2A is a diagram showing a processing method and system according to the invention;
Fig. 2B is a diagram showing further processing aspects of the processing method shown in Figure 2A;
Fig. 3 is a diagram showing system components of an embodiment of the device according to the invention;
Fig. 4A is a schematic representation of ingress protocol stack implementation, enabling processing of packets to produce an end-to-end packet (i.e. tunnels are terminated, IPSec packets are decrypted);
Fig. 4B is a schematic representation of egress protocol stack implementation, enabling processing of packets including necessary encapsulation and encryption;
Fig. 5 is a diagram showing service card architecture according to an embodiment of the invention;
Fig. 6 is a diagram showing the peripheral component interconnect (PCI) data bus structure of a service card according to the embodiment of Fig. 5;
Fig. 7 is a diagram showing the common switch interface (CSIX) data bus structure of a service card according to the embodiment of Figure 5;
Fig. 8 is a flow diagram showing a process according to the invention; and
Fig. 9 is a diagram showing single point of queuing features of the invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring to the drawings in particular, the invention comprises a network infrastructure device or mobile Internet gateway 10 as well as a method of communication using the gateway 10. Figures 1A and 1B depict two possible deployments of the invention. The invention can form a separation point between two or more networks, or belong to one or more networks. Gateway 10 handles data traffic to and from mobile subscribers via RAN 14. As shown in Figures 1A and 1B, data traffic arriving from, or destined to, users on the RAN 14 must use one or more data communication protocols specific to mobile users and the RAN technology. Traffic arriving from, or destined for, the IP Router Network (e.g. the Internet) 12 can use a variety of IP-based protocols, sometimes in combination. The architecture of the gateway 10, described here as the Packet Gateway Node (PGN) 10, provides protocol services to the RAN 14 and to the IP Network 12, scales to large numbers of users without significant degradation in performance, and provides a highly reliable system. It also provides for management of mobile subscribers (e.g., usage restrictions, policy enforcement) as well as tracking usage for purposes of billing and/or accounting.
The IP router network generally designated 12 may include connections to various different networks. The IP router network 12, for example, may include the Internet and may have connections to external Internet protocol networks 19 which in turn provide connection to Internet service provider/active server pages 18, or which may also provide a connection to a corporate network 17. The IP router network 12 may also provide connections to the public switched telephone network (PSTN) gateway 16 or for example to local resources (data storage etc.) 15. The showing of Figs. 1A and 1B is not meant to be all-inclusive. Other networks and network connections of various different protocols may be provided. The PGN 10 may provide communications between one or more of the networks or provide communications between users of the same network.
It is often the case that the amount of ingress processing differs from egress processing. For example, a request sent for Web content might be very small (with a small amount of ingress processing and a small amount of egress processing). However, the response might be extremely large (e.g., a music file). This may require a great deal of ingress processing and a great deal of egress processing. The serial handling of the ingress and egress processing for both the request and the response for a line card (for a particular physical interface connection) may cause problems such as delays. That is, when ingress and egress processing are performed serially, e.g., in the same processor or serially with multiple processors, traffic awaiting service can suffer unpredictable delays due to the asymmetric nature of the data flow. Figure 2A shows an aspect of the PGN 10 and of the method of the invention whereby the ingress processing and egress processing are divided among different processing systems. Packets are received at the PGN 10 at physical interface 11 and packets are transmitted from the PGN 10 via the physical interface 11. The physical interface 11 may be provided as one or more line cards 22 as discussed below. An ingress processing system 13 is connected to the physical interface 11 via interconnections 17. The ingress processing system 13 performs the ingress processing of received packets. This ingress processing of packets includes at least one or more of protocol translation, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT). An egress processing system 15 is connected to the physical interface 11 via interconnections 17 and is also connected to the ingress processing system 13 by interconnections 17. The egress processing system 15 performs the egress processing of received packets. This egress processing of packets includes at least one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT. The ingress processor 13 and egress processor 15 may be provided as part of a device integrated with the physical interface. Additionally, the ingress processor 13 and egress processor 15 may be provided as part of one or more service cards 24 connected to one or more line cards 22 via the interconnections 17. The processing method and arrangement allow ingress and egress processing to proceed concurrently. As shown in Fig. 2B one service card 24' may provide the ingress processing and another service card 24" may provide the egress processing. The ingress processing or egress processing may be distributed between more than one service card 24. As shown in Fig. 2B a service card 24' includes ingress processor system 50 and egress processor system 52. Packets are received from a line card LC1 designated 22' and packets enter the ingress processor 50 where they are processed to produce end-to-end packets, i.e., tunnels (wherein the original IP packet header is encapsulated) are terminated, Internet protocol security (IPSec) packets are decrypted, Point-to-Point Protocol (PPP) is terminated and NAT or NAT-ALG is performed. The end-to-end packets are then sent to another service card 24" via interconnections 17. At this other service card 24" the egress processor system 56 encapsulates and encrypts the end-to-end packets and the packets are then sent to LC2 designated 22" for transmission into the network at interface 11.
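Purely as an illustration of the concurrency just described (this sketch is not part of the original disclosure), the following Python fragment models ingress and egress as independent workers fed by separate queues; the packet fields and processing delays are hypothetical stand-ins for real ingress and egress work.

```python
import queue
import threading
import time

def ingress_worker(in_q, handoff_q):
    """Ingress side: stands in for de-encapsulation, decryption, PPP termination."""
    while True:
        pkt = in_q.get()
        if pkt is None:              # sentinel: propagate shutdown and stop
            handoff_q.put(None)
            break
        time.sleep(pkt["ingress_cost"])   # placeholder for ingress work
        handoff_q.put(pkt)

def egress_worker(handoff_q, out_q):
    """Egress side: stands in for encapsulation, encryption, PPP generation."""
    while True:
        pkt = handoff_q.get()
        if pkt is None:
            break
        time.sleep(pkt["egress_cost"])    # placeholder for egress work
        out_q.put(pkt)

if __name__ == "__main__":
    rx, handoff, tx = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=ingress_worker, args=(rx, handoff), daemon=True).start()
    threading.Thread(target=egress_worker, args=(handoff, tx), daemon=True).start()

    # A small request followed by a large response, as in the Web example above.
    rx.put({"id": "request", "ingress_cost": 0.01, "egress_cost": 0.01})
    rx.put({"id": "response", "ingress_cost": 0.05, "egress_cost": 0.05})
    rx.put(None)

    for _ in range(2):
        print("transmitted", tx.get()["id"])
```

Because the two workers run concurrently, egress processing of the request can begin while ingress processing of the response is still underway, which is the performance point of separating the two systems.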
Each of the processor systems 13 and 15 in the example of Fig. 2A and 50, 52, 54 and 56 in the example of Fig. 2B is preferably provided with purpose-built processors. This allows special packets, security packets, control packets and simple protocol translation to be processed concurrently. This allows the PGN 10 to use a single point of queuing for the device. A packet queue establishes a queue of packets awaiting transmission. This packet queue is the exclusive buffer for packets between packets entering the device and packet transmission. The packets exit the device or complete processing at a rate of the line established at the physical interface (at the rate of the packet ingress). Each processor system preferably includes a fast path processor subsystem processing packets at speeds greater than or equal to the rate at which they enter the device. The fast path processor system provides protocol translation processing converting packets from one protocol to another protocol. Each processor preferably includes a security processor subsystem for processing security packets and preferably a control subsystem for control packets and a special care subsystem for special care packets. The processor subsystems process concurrently. The device allows context (information related to user traffic) to be virtually segregated from other context. Further, the use of multiple service cards allows context to be physically segregated, if this is required.
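The interplay between the concurrent subsystems and the single point of queuing can be sketched as follows (illustrative only; the thread pool and packet flags are assumptions and do not represent the purpose-built hardware): fast path packets go straight to the one transmit queue, while security and special care packets are processed on the side and join the same queue when finished.

```python
from concurrent.futures import ThreadPoolExecutor
import queue

transmit_q = queue.Queue()                       # single point of queuing before the line card
side_pool = ThreadPoolExecutor(max_workers=2)    # stand-in for security / special care processors

def fast_path(pkt):
    pkt["translated"] = True                     # line-rate protocol translation
    transmit_q.put(pkt)                          # only buffering happens here

def side_path(pkt):
    pkt["extra_work_done"] = True                # stand-in for IPSec or NAT-ALG work
    fast_path(pkt)                               # re-inject into the fast path when finished

def ingress(pkt):
    if pkt.get("needs_side_processing"):
        side_pool.submit(side_path, pkt)         # shunt off; fast path keeps flowing
    else:
        fast_path(pkt)

for p in [{"id": 1}, {"id": 2, "needs_side_processing": True}, {"id": 3}]:
    ingress(p)
side_pool.shutdown(wait=True)
print(sorted(transmit_q.get()["id"] for _ in range(3)))   # all three reach the single queue
```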
Figure 3 shows a diagram of an embodiment of the hardware architecture. The system architecture of device 10 separates packet processing from the handling of traffic to and from the line cards (LCs) 22, which are connected via a switch fabric or fabric card (FC) 20. Processing is performed in service cards (SC) 24. The LCs 22 are each connected to the FC 20 via a LC bus 26 (static LC bus). The SCs 24 are connected by an SC static bus 28, SC dynamic bus (primary) 30 and SC dynamic bus (secondary) 32. A control card (CC) 36 is connected to LCs 22 via serial control bus 38. The CC 36 is connected to SCs 24 via PCI bus 34. A display card (DC) 42 may be connected to the CC 36 via DC buses 44. One or more redundant cards may be provided for any of the cards (modules) described herein (plural SCs, LCs, CCs, FCs may be provided). Also, multiple PCI buses may be provided for redundancy. The architecture of the PGN 10 allows all major component types making up the device 10 to be identical. This allows for N+1 redundancy (N active components, 1 spare), or 1+1 redundancy (1 spare for each active component).
Several LCs 22 and several SCs 24 may be used as part of a single PGN 10. The number may vary depending upon the access need (types of connection and number of users) as well as upon the redundancy provided. The LCs 22 each provide a network interface 11 for network traffic 13. The LCs 22 handle all media access controller (MAC) and physical layer (Phy) functions for the system. The FC 20 handles inter-card routing of data packets. The SCs 24 each may implement forwarding path and protocol stacks.
The packets handled within the architecture are broadly categorized as fast path packets, special care packets, security packets and control packets. Fast path packets are those packets requiring protocol processing and protocol translation (converting from one protocol to another) at speeds greater than or equal to the rate at which they enter the device. Special care packets require processing in addition to that of the fast path packets. This might include Network Address Translation (NAT) or NAT coupled with application layer gateway processing (NAT-ALG). Security packets require encryption, decryption, authentication or the generation of authentication data. Control packets signal the start and end of data sessions, or are used to convey information to a particular protocol (e.g., that a destination is unreachable). Control packets may also be dependent on interaction with external entities such as policy servers. The processing is divided according to the amount of processing required of the packet. The different classes of packet traffic are then dispatched to specialized processing elements so they may be processed concurrently. The concurrent nature of the processing allows for gains in throughput and speed not achievable by the usual serial processing approaches. In addition, all fast path processing is performed at a rate greater than or equal to that of the rate of ingress to the PGN 10. This eliminates the need for any queuing of packets until the point at which they are awaiting transmission. Thus the users of the device do not experience delays due to fast path protocol processing or protocol translation.
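As a rough sketch of this four-way classification (illustrative only; the field names are assumptions, not data structures defined in this disclosure), a dispatcher might look like the following:

```python
FAST_PATH, SPECIAL_CARE, SECURITY, CONTROL = "fast", "special", "security", "control"

def classify(pkt):
    """Map a packet descriptor to one of the four processing classes."""
    if pkt.get("is_control"):                        # session start/end or protocol signalling
        return CONTROL
    if pkt.get("needs_ipsec"):                       # encryption/decryption or authentication
        return SECURITY
    if pkt.get("needs_nat_alg") or pkt.get("needs_reorder"):
        return SPECIAL_CARE                          # NAT-ALG, PPP/GTP re-ordering, etc.
    return FAST_PATH                                 # plain protocol translation at line rate

# Example: a subscriber data packet inside an IPSec tunnel is security traffic.
print(classify({"needs_ipsec": True}))               # -> security
print(classify({}))                                  # -> fast
```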
Packet manipulation with respect to tunnel termination, encryption, queuing and scheduling takes place on the SC 24. The master of the system is the CC 36. The CC 36 manages the system, and acts as the point of communication with other entities in the network, such as the policy servers and the accounting manager.
The flexible routing therefore enables any service card 24 or line card 22, in particular a spare service card 24 or line card 22, to assume the role of another service card 24 or line card 22 by only changing the routing through the switch fabric card (FC) 20. To support scalable performance, the PGN 10 divides the processing of in-bound protocols (e.g., the ingress path of LCI 22' through ingress processor 50 as shown in Fig. 2B), the out-bound protocols (e.g., the egress path of LC2 22" through egress processor 56 as shown in Fig. 2B), protocol control messaging, and the special handling of traffic requiring encryption.
Various protocols may be implemented. The Internet protocol (IP) preferably is used at the network layer functioning above the physical/link layer (physical infrastructure, link protocols - PPP, Ethernet, etc.) and below the application layer (interface with user, transport protocols etc.). The device 10 can be used with the IPSec protocol for securing a stream of IP packets. In such a situation, where secure virtual private networks are established, the PGN 10 will perform ingress processing including implementing protocol stack 55 in a software process including de-encapsulating and decrypting on the ingress side and implementing protocol stack 57 including encapsulating and encrypting on the egress side. Fig. 4A illustrates this schematically with the ingress protocol stack 55 implementation being shown with processing proceeding from the IP layer 53 to the IP security layer 51. This can involve for example de-encapsulating and decrypting, protocol translating, authenticating, PPP terminating and NAT with the output being end-to-end packets. Fig. 4B schematically illustrates the egress side protocol stack 57 implementation, wherein the end-to-end packets may be encapsulated, encrypted and protocol translated, with authentication data generation, PPP generation and NAT. The IPSec encapsulation and/or encryption is shown moving from the IP security layer 51 to the IP layer 53. Any line card 22 can send traffic to any service card 24. This routing can be configured statically or can be determined dynamically by the line card 22. Any service card 24 can send traffic requiring ingress processing (e.g. from SC1 24' to SC2 24") to any other service card 24 for ingress processing. Line cards 22 with the capability to classify ingress traffic can thus make use of unused capacity on the ingress service cards 24 by changing the routing. Ingress processing 50 is physically separate from egress processing 56 (and also separate from processing at 52 and 54). This enables ingress processing to proceed concurrently with egress processing, resulting in a performance gain over a serialized approach. Any service card 24 handling ingress processing (e.g., at 50) can send traffic to any other service card 24 for egress processing (e.g., at 56). Thus, the device can make use of unused capacity that may exist on other service cards 24.
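The mirror-image relationship between the ingress stack 55 and the egress stack 57 can be sketched as follows. The XOR "cipher" and the fixed 4-byte tunnel header are toy stand-ins (not IPSec and not part of this disclosure), used only to show the ordering of operations: the egress side adds the tunnel header and then encrypts, and the ingress side reverses those steps to recover the end-to-end packet.

```python
# Toy stand-ins only: a real implementation would use ESP with proper ciphers
# and integrity checks, not a byte-wise XOR.
def toy_decrypt(payload, key):
    return bytes(b ^ key for b in payload)

def toy_encrypt(payload, key):
    return bytes(b ^ key for b in payload)

def ingress_stack(on_the_wire, key):
    """Fig. 4A direction: decrypt, then strip the (toy) tunnel header."""
    decrypted = toy_decrypt(on_the_wire, key)
    tunnel_header, end_to_end = decrypted[:4], decrypted[4:]
    return end_to_end

def egress_stack(end_to_end, key):
    """Fig. 4B direction: add a (toy) tunnel header, then encrypt."""
    return toy_encrypt(b"TUN0" + end_to_end, key)

if __name__ == "__main__":
    key = 0x5A
    original = b"subscriber IP packet"
    wire = egress_stack(original, key)
    assert ingress_stack(wire, key) == original
    print("round trip OK")
```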
The line cards (LC-x) 22 handle the physical interfaces. The line cards 22 are connected via the LC buses 26 to the (redundant) switch fabric card(s) (FC) 20. Line cards 22 may be provided as two types, intelligent and non-intelligent. An intelligent line card 22 can perform packet classification (up to Layer 3, the network layer) whereas the non-intelligent line cards 22 cannot. In the former case, classified packets can be routed, via the FC 20, to any service card 24 (SC) where ingress and egress processing occurs. This allows for load balancing since the LC 22 can route to the SC 24 with the least loaded ingress processor. In the latter case, the assignment of LCs 22 to SCs 24 is static, but programmable. Redundancy management is also made easier: in the event of failure of a line card 22, a standby spare can be switched in by re-directing the flow through the FC 20. Figure 5 shows the arrangement of service cards 24 (SC-x). Each SC 24 provides ingress processing with ingress processing subsystem 62 (for fast path processing) and egress processing with physically separate egress processing subsystem 64 (for fast path processing). The processing functions of these subsystems 62 and 64 are separate. Each ingress processing system contains separate paths 66 for special processing and separate components 68, 70 and 73 for special processing. Each egress processing system contains a separate path 69 for special processing and the separate components 68, 70 and 74 for special processing.
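The routing decision made by an intelligent line card reduces to picking the least-loaded ingress service card, while a non-intelligent card follows a static but programmable map; a minimal sketch, with hypothetical card names and an assumed load metric, follows.

```python
def pick_service_card(loads):
    """Return the least-loaded ingress service card.
    `loads` maps SC name -> current load (e.g., queue depth or utilization)."""
    return min(loads, key=loads.get)

# Intelligent LC: choose per packet or per flow, based on current SC load.
loads = {"SC-1": 0.72, "SC-2": 0.35, "SC-3": 0.90}
print(pick_service_card(loads))            # -> SC-2

# Non-intelligent LC: static but programmable LC -> SC mapping, changed only
# by reconfiguring the switch fabric (e.g., for protection switchover).
STATIC_MAP = {"LC-1": "SC-1", "LC-2": "SC-2"}
print(STATIC_MAP["LC-1"])                  # -> SC-1
```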
The role of the service cards, such as SC 24', is to process IP packets. IP packets enter the SC 24' through the interface to the FC 20; this is traffic coming, e.g., from LC1 22'. Packets enter the ingress processor system 50, where they are classified as subscriber data or control data packets. Control packets are sent up to one of two microprocessors, the control processor 70 or the special care processor 68. Protocol stacks (e.g., 55 or 57), implemented in software, process the packets at the control processor 70 or the special care processor 68. A subscriber data packet is processed by the ingress processing subsystem 62 and/or security subsystem 73 to produce an end-to-end packet (i.e. tunnels are terminated, IPSec packets are decrypted). The end-to-end packet is sent to another SC 24" via the FC 20. Packets enter the SC 24" through the interface 72 to the FC 20. The packets then enter the egress processor system. This may be by use of another service card (e.g., SC 24") where all the necessary encapsulation and encryption are performed. The packet is next sent to, e.g., LC2 22", which must transmit the packet into the network. Protocol stacks running on the control and special care processors may also inject a packet into the egress processor for transmission.
The flexible routing of ingress-to-egress, ingress-to-ingress (dividing ingress processing over more than one service card 24) and egress-to-egress allows the device to dynamically adapt to changing network loads as sessions are established and torn down. Processing resources for ingress and egress can be allocated on different service cards 24 for a given subscriber's traffic to balance the processing load, thus providing a mechanism to maintain high levels of throughput. Typically, a subscriber data session is established on a given SC 24 for ingress and the same, or another, SC 24 for egress. Information associated with this session, its context, is maintained or persists on the ingress and egress processor (e.g., of the processing subsystems 62 and 64). The routing of ingress to ingress (e.g., from SC 24' to SC 24" via bus 32, FC 20, FC interface 72 and CSIX link 80) permits the traffic to enter via a different LC 22 (because of the nature of the mobile user, such a user could have moved and may now be coming in via a different path) and be handled by the ingress processing subsystem of the SC 24 holding the context (e.g., by ingress processing subsystem 62 of SC 24'). This eliminates the need to move the context at the price of maintaining context location. For example, the context information may be held and controlled by memory controller 76. Moving context data can be problematic.
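A minimal sketch of such session context, assuming a simple keyed table (the subscriber identifier and the stored fields are hypothetical; the disclosure only requires that context persist on the processors handling the session), is:

```python
class ContextTable:
    """Per-subscriber session context pinned to the service card where it was created."""

    def __init__(self):
        self._ctx = {}

    def establish(self, subscriber_id, ingress_sc, egress_sc):
        self._ctx[subscriber_id] = {
            "ingress_sc": ingress_sc,
            "egress_sc": egress_sc,
            "tunnel_state": {},          # placeholder for per-session protocol state
        }

    def owner_for_ingress(self, subscriber_id):
        # Traffic arriving on any line card is steered to the SC holding the context,
        # so the context never has to move when a mobile user changes path.
        return self._ctx[subscriber_id]["ingress_sc"]

table = ContextTable()
table.establish("subscriber-001", ingress_sc="SC-1", egress_sc="SC-4")
print(table.owner_for_ingress("subscriber-001"))   # -> SC-1
```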
Processing subscriber data packets on the SC 24 occurs in one of three modes: fast path, security and special care. Fast path processing is aptly named because it includes any processing of packets through the SC 24 at a rate greater than or equal to the ingress rate of the packets. These processing functions are implemented in the ingress processing subsystem 62 and egress processing subsystem 64 using custom-built hardware. Packets that require processing that cannot be done in the fast path are shunted off on the path 66 or 69 for either special care processing with processor 68 or security processing with processor 73 or 74. Special care processing includes packets requiring PPP and GTP re-ordering or packets requiring NAT-ALG. Security processing is performed for IPSec packets or packets requiring IPSec treatment. When special care or security processing is completed, these packets are injected back into the fast path. Thus, while special care or security processing is in progress, the flow of packets not requiring such processing can proceed at a rate greater than or equal to their rate of ingress. This method of concurrent processing eliminates the need to queue fast path packets, thus enabling the device to sustain high and consistent levels of throughput. The internal interfaces of the PGN 10 enable the connections among the ingress and egress processing functions. The ingress and egress PCI buses 66 and 69 are the central data plane interfaces from the control plane to the data plane. The ingress PCI bus 66 (see Fig. 6) provides a connection between the ingress processor field programmable gate array (FPGA) 62, encryption subsystem or security subsystem 73, special care processor subsystem 68 and control processor subsystem 70. The control processor subsystem 70 includes local system controller 86, synchronous dynamic random access memory (SDRAM) 87, cache 88, global system controller 83 (providing a connection to PCI bus 34), SDRAM 85 and control processor 90. The global system controller 83, the control processor 90 and the local system controller 86 are connected together via a bus connection 67. The egress PCI bus 69 connects egress processor FPGA 81, encryption subsystem or security subsystem 74, special care processor 68 and control processor system 70.
The ingress PCI bus 66 and the egress PCI bus 69 each have an aggregate bandwidth of approximately 4 Gb/s. They are used to pass data packets to and from the fast path hardware. For this reason, the ingress processor FPGA 62 is the controller on the ingress PCI bus 66, and the egress processor FPGA 81 (of the egress processing subsystem 64) is the controller on the egress PCI bus 69. These PCI buses 66 and 69 are shared with the control plane. Control plane functions on the PCI bus 34 are discussed below.
The special care subsystem 68, the control processor system 70 and the security subsystems 73 and 74 interface to the ingress and egress processing subsystems 62 and 64 via the pair of PCI buses 66 and 69. Figure 6 shows how these buses 66 and 69 connect system components together. One PCI bus 66 is specific to ingress traffic, while the other PCI bus 69 carries egress traffic. The ingress processor subsystem (ingress FPGA) 62 is connected to ingress PCI bus 66. The egress processor subsystem 64 (with connected egress processor FPGA 81) is connected to egress PCI bus 69.
The controller 70, including local system controller 86 (e.g., Galileo 64260) with SDRAM 87, control processor 90 and cache 88, works with the special care subsystem 68, acting as a bridge between the buses 66 and 69. The security subsystems 73 and 74 are respectively connected to buses 66 and 69. This arrangement allows egress traffic to get to the ingress bus on the same SC and vice-versa. This may be utilized only for the case of IPSec processing. Each of the PCI buses 66 and 69 is 64 bits wide and runs at 66 MHz. This provides a bus bandwidth of 4.2 Gb/s. Assuming 60% utilization on these buses, they have an effective bandwidth of 2.5 Gb/s. If the system is loaded with 50% of the line traffic going to the special care processors of the special care subsystem and 25% going to the security subsystem 74, half of which goes over the bridge, this would use up 1.75 Gb/s:
2 × ((1 Gb/s × 0.50) + (1 Gb/s × 0.25) + (1 Gb/s × 0.25 / 2)) = 1.75 Gb/s. This leaves 1.5 Gb/s for control traffic to pass between the control processor, the special care processor, the ingress processor, the egress processor and the security subsystem.
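The budget above can be reproduced with a few lines of arithmetic; the 1 Gb/s line rate and the traffic split are the assumptions stated in the text, and the factor of 2 is the one applied in the formula.

```python
bus_raw_gbps  = 64 * 66e6 / 1e9          # 64-bit PCI at 66 MHz ~= 4.2 Gb/s
bus_effective = bus_raw_gbps * 0.60      # assume 60% utilization ~= 2.5 Gb/s

line_rate_gbps = 1.0                     # per-direction line traffic assumed in the text
special_care   = line_rate_gbps * 0.50   # 50% of traffic to the special care processors
security       = line_rate_gbps * 0.25   # 25% of traffic to the security subsystem
bridged        = security / 2            # half of the security traffic crosses the bridge

used = 2 * (special_care + security + bridged)   # factor of 2 as in the text's formula
print(round(bus_raw_gbps, 2), round(bus_effective, 2), used)   # 4.22 2.53 1.75
```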
Figure 7 shows the data buses 28, 32 and 30 on which packets are carried to and from the ingress and egress processing cores 62 and 64 via CSIX buses. The ingress processor subsystem 62 has a 3.2 Gb/s (32 bits × 100 MHz) primary input from CSIX bus 91 with switch fabric interface part (e.g., VSC872) 71. Bus 91 carries data from the line card 22' via bus 28 and via the FC 20. The ingress processor subsystem 62 has a set of two (2) 3.2 Gb/s primary outputs with CSIX buses 77 with switch fabric interface part (e.g., VSC872) 72" that will carry end-to-end data packets to the switch fabric (dynamic section) 20 for egress processing on the egress service card 24". The connected service card (e.g., SC 24") is packet dependent. The ingress processing element 62 has a secondary output in addition. This 3.2 Gb/s bi-directional CSIX link 80/83 with switch fabric interface part (VSC872) 72' to the switch fabric 20 is for ingress processor system 50 (e.g., of one SC 24') to ingress processor system 54 (cross service card, e.g., of another service card 24") packet transfers.
The egress processing subsystem 64 receives data at inputs from two 3.2 Gb/s CSIX links 77 out of the switch fabric interface part (e.g., VSC872) 72". Packets coming to the egress processor subsystem 64 on these links have already been processed down to the end-to-end packet. The egress processor (e.g., 52 or 56) sends a completely processed packet out to the line card 22 via a 3.2 Gb/s CSIX link 95 to the switch fabric interface part 71. The packet traverses the static switch fabric 20 on its way to the line card 22.
The LC static buses 26 and SC static buses 28 interconnect line cards 22 and service cards 24 through the fabric card 20. These connections are established when the control card configures the fabric card 20. Connections made between LCs 22 and SCs 24 may be virtually static. The connections may rarely change. Some reasons for connection changes are protection switchover and re-provisioning of hardware.
Each of the static buses 26 and 28 is comprised of 4 high-speed unidirectional differential pairs. Two pairs support subscriber data in the ingress direction while the other two pairs support subscriber data in the egress direction. Each differential pair is a 2.64384 Gbps high-speed LVDS channel. Each channel contains both clock and data information and is encoded to aid in clock recovery at the receiver. At this channel rate the information rate is 2.5 Gbps. Since unidirectional subscriber data flows in 2 channels, or pairs, between LCs 22 and SCs 24 for each static bus 26 and 28, the aggregate information rate is 5 Gbps per direction per bus.
The primary dynamic buses 30 connect the ingress processor of one service card 24 to the egress processor of another service card 24 via the fabric card 20 on a frame-by-frame basis. Each primary dynamic bus 30 is comprised of 8 high-speed unidirectional differential pairs. Four pairs support subscriber data in the ingress direction while the other four pairs support subscriber data in the egress direction. Each differential pair is a 2.64384 Gbps high-speed LVDS channel. Each channel contains both clock and data information and is encoded to aid in clock recovery at the receiver. At this channel rate the information rate is 2.5 Gbps. Since unidirectional subscriber data flows in 4 channels, or pairs, the aggregate information rate for a given direction is 10 Gbps. Secondary dynamic buses 32 are electrically identical to the static buses, but since they are dynamic, subscriber data may be rerouted on a frame-by-frame basis.
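The aggregate rates quoted for the static and primary dynamic buses follow directly from the per-channel information rate; a small check, with the channel counts taken from the text:

```python
CHANNEL_RATE_GBPS = 2.64384   # raw LVDS channel rate (clock and data encoded together)
INFO_RATE_GBPS    = 2.5       # information rate after encoding overhead

def aggregate(info_rate_gbps, channels_per_direction):
    """Aggregate information rate per direction for a bus."""
    return info_rate_gbps * channels_per_direction

print(CHANNEL_RATE_GBPS, "Gbps raw per channel,", INFO_RATE_GBPS, "Gbps information rate")
print(aggregate(INFO_RATE_GBPS, 2))   # static LC/SC buses: 2 pairs per direction ->  5.0 Gbps
print(aggregate(INFO_RATE_GBPS, 4))   # primary dynamic bus: 4 pairs per direction -> 10.0 Gbps
```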
The process of the invention is illustrated generally in the flow diagram of Fig. 8. The process begins at 100 by providing the device infrastructure in the form of connection buses 28, 30 and 32 and providing a switch fabric 20 for selectively interconnecting the connection buses. At least a first line card 22', a second line card 22", a first service card 24', a second service card 24" and a control card 36 are provided. Advantageously, a redundant line card 22, a redundant service card 24, a redundant fabric card 20 and a redundant control card 36 may be provided. The fabric card 20 or fabric cards 20 are connected and configured to establish a substantially static connection from the first line card 22' via line card bus 26 through fabric card 20 to the service card static bus 28 of service card 1 designated 24'. In this configuration, the fabric card 20, as indicated at 102, also provides a connection from the second line card designated 22", through the associated line card bus 26 and the fabric card 20, to the service card static bus 28 associated with service card 2 designated 24". Step 104 shows the further steps of receiving packets at the first line card 22' and transferring the packets via LC bus 26, fabric card 20 and SC static bus 28 to the first service card 24'. As can be appreciated from Fig. 5, the first service card 24' processes packets with ingress processing system 50. As indicated above, control packets are sent to either control processor 70 or special care processor 68 and subscriber data packets are processed to produce the end-to-end packets as shown at 106. At step 106 the necessary de-encapsulation and decryption are performed. As shown at 108, the end-to-end packets are transferred via FC 20 to the egress processing system 56 of the second service card 24" via dynamic bus 30 (primary dynamic bus). At step 110 the egress packet processor of the second service card 24" processes the end-to-end packets including encapsulation and encryption. The packets are then sent to a line card, such as the second line card 22", as indicated at step 112. The line card then transmits packets into the network as shown at 114. The protocol stack 55 running on the control processor 70 and special care subsystem 68 may also inject a packet into the ingress processor for transmission. The control processor 70 of service card 24" and the special care processor 68 of service card 24" may also treat further packets for egress processing.
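A highly simplified walk-through of the Fig. 8 flow, with toy string transforms standing in for the real de-encapsulation/decryption and encapsulation/encryption steps (the step numbers in the comments refer to the flow diagram; everything else is an illustrative assumption):

```python
def gateway_process(raw_packets):
    """Walk packets through the steps of Fig. 8 using toy transforms."""
    transmitted = []
    for pkt in raw_packets:                              # 104: receive at the first line card
        e2e = pkt.replace("tunnel(", "").rstrip(")")     # 106: de-encapsulate / decrypt
        # 108: transfer the end-to-end packet over the primary dynamic bus to the egress SC
        egress_pkt = "tunnel2(" + e2e + ")"              # 110: encapsulate / encrypt
        transmitted.append(egress_pkt)                   # 112/114: send to LC2 and transmit
    return transmitted

print(gateway_process(["tunnel(payload-A)", "tunnel(payload-B)"]))
```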
The entire system may be monitored using a display card 42 via display buses 44. The line cards may be monitored via serial control buses 38. The control card 36 may have other output interfaces such as EMS interfaces 48, which can include any one or several of 10/100Base-T outputs 43, a serial output 47 and a PCMCIA (or compact flash) output 49.
To support quality of service for multiple sets of customers, the device 10 supports a single point of queuing. Typically, a customer set 120, each set 120 comprising multiple individuals, will be assured of a certain set of protocol services and a portion of the total bandwidth available within the device. It is therefore necessary to be able to monitor the rate of egress of the customer set's traffic. Figure 9 shows multiple customer sets 120 entering the device using different physical interfaces 22.
Because of the distributed nature of the physical ingress, in particular because members of a customer set 120 may ingress on any physical interface and because all processing is performed at a rate greater than or equal to the ingress rate, a common point of aggregation is established on the egress portion of the SC. Referring to Figure 9, customer set #5 can enter the device using LC-5 and LC-7. The ingress protocol processing for this customer set #5 is hosted on SC-3 and SC-4 as indicated by ingress traffic 122, while egress processing is hosted on SC-6 as shown by the traffic after ingress protocol processing 124. The FC 20 switches the ingress traffic from LC-5 and LC-7 to the two SCs 3 and 4 for ingress protocol processing. Since egress processing is hosted on SC-6, the FC 20 switches this traffic 124 to SC-6 for egress processing following ingress protocol processing. SC-6 provides the common point of aggregation and contains one or more queues (at the single location) for holding a customer set's traffic awaiting egress 126 to the LC. Queuing is necessary as the ingress rate of the customer set's aggregated traffic may, at times, exceed the egress rate of a particular physical interface. Monitoring of the egress rate of the customer set's traffic then occurs at the point of aggregation.
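A sketch of such a per-customer-set aggregation queue with egress-rate monitoring follows; the byte accounting and the rate window are illustrative assumptions, not taken from this disclosure.

```python
import collections
import time

class EgressAggregator:
    """Single point of queuing per customer set, with a simple egress-rate monitor."""

    def __init__(self):
        self.queues = collections.defaultdict(collections.deque)
        self.sent_bytes = collections.defaultdict(int)
        self.started = time.monotonic()

    def enqueue(self, customer_set, pkt_bytes):
        # Ingress-processed traffic from any LC/SC pair lands in the set's one queue.
        self.queues[customer_set].append(pkt_bytes)

    def dequeue(self, customer_set):
        pkt_bytes = self.queues[customer_set].popleft()
        self.sent_bytes[customer_set] += pkt_bytes
        return pkt_bytes

    def egress_rate_bps(self, customer_set):
        elapsed = max(time.monotonic() - self.started, 1e-9)
        return 8 * self.sent_bytes[customer_set] / elapsed

agg = EgressAggregator()
agg.enqueue("set-5", 1500)     # e.g., traffic from LC-5/LC-7, ingress-processed on SC-3/SC-4
agg.enqueue("set-5", 1500)
agg.dequeue("set-5")
print(round(agg.egress_rate_bps("set-5")))   # monitored at the point of aggregation
```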
The invention provides a device based on modular units. The term card is used to denote such a modular unit. The modules may be added and subtracted and combined with identical redundant modules. However, the principles of this invention may be practiced with a single unit (without modules) or with features of modules described herein combined with other features in different functional groups.
While specific embodiments of the invention have been shown and described in detail to illustrate the application of the principles of the invention, it will be understood that the invention may be embodied otherwise without departing from such principles.

Claims

WHAT IS CLAIMED IS:
1. A network gateway device, comprising: a physical interface for connection to a medium; an ingress processor system for ingress processing of all or part of packets received from said physical interface and for sending ingress processed packets for egress processing; an egress processor system for receiving ingress processed packets and for egress processing of all or part of said received packets for sending to the physical interface; interconnections including an interconnection between said ingress processor system and said egress processor system, an interconnection between said ingress processor system and said physical interface and an interconnection between said egress processor system and said physical interface.
2. A network gateway device according to claim 1, further comprising a packet queue establishing a queue of packet locations awaiting transmission, said packet queue being the exclusive buffer location for packets between packets entering the device and packet transmission.
3. A network gateway device according to claim 1, wherein packets exit the device at a rate of a line established at the physical interface.
4. A network gateway device according to claim 1, wherein said ingress processing system processes packets including at least one or more of protocol translation, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT) and said egress processing system processes packets including at least one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT.
5. A network gateway device according to claim 1, wherein said ingress processor system includes a fast path processor subsystem processing packets at speeds greater than or equal to a rate at which they enter the device.
6. A network gateway device according to claim 5, wherein said fast path processor system provides protocol translation processing converting packets from one protocol to another protocol.
7. A network gateway device according to claim 5, wherein said egress processor system includes a fast path processor subsystem processing packets at speeds greater than or equal to a rate at which they are to leave the device.
8. A network device according to claim 5, wherein said ingress processor system includes a security processor subsystem for processing security packets requiring one or more of decryption and authentication, said processing occurring concurrently with fast path processor packet processing.
9. A network device according to claim 7, wherein said egress processor system includes a security processor subsystem for processing security packets requiring one or more of encryption and generation of authentication data, said processing occurring concurrently with fast path processor packet processing.
10. A network device according to claim 7, wherein said ingress processor system includes a special care packet processor for additional packet processing concurrently with fast path processor packet processing, said special care packet processor processing packets including one or more of network address translation (NAT) processing and NAT processing coupled with application layer gateway processing (NAT-ALG).
11. A network device according to claim 7, wherein said ingress processor system includes a control packet processor for additional packet processing concurrently with fast path processor packet processing, including processing packets signaling the start and end of data sessions, packets used to convey information to a particular protocol and packets dependent on interaction with external entities.
12. A network device according to claim 1, wherein said physical interface includes a line card and said ingress processor system is provided as part of a service card and said egress processor system is provided in one of said service card and another service card and said interconnections include: a line card bus connected to said line card; a service card bus connected to at least one of said service card and said another service card; and a switch fabric connecting said line card to at least one of said service card and said another service card.
13. A network device according to claim 12, wherein said service card includes said ingress processor system and said egress processor system and said another service card includes another ingress processor system for processing all or part of packets received from said line card and for sending ingress processed packets for egress processing and another egress processor system for receiving ingress processed packets and for processing all or part of received packets for sending to said line card, whereby packets may be sent between service cards for ingress processing by one service card and egress processing by another service card or for ingress processing using more than one service card.
14. A network gateway device according to claim 13, wherein each of said service cards is identical and a spare service card is provided for functionally replacing any one of the other service cards to provide redundancy.
15. A network gateway device according to claim 13, wherein said physical interface includes another line card connected by said switch fabric to at least one of said service card and said another service card.
16. A network gateway device according to claim 15, wherein said switch fabric connects any one of said line cards to any one of said service cards, whereby any line card can send packet traffic to any service card and routing of packet traffic is configured one of statically and dynamically by the said line card.
17. A network gateway device according to claim 13, wherein: said service card bus includes a static bus part for connection of one of said service cards through said switch fabric to one of said line cards and a dynamic bus for connecting a service card to another service card through said switch fabric allowing any service card to send packet traffic requiring ingress processing to any other service card for ingress processing and allowing any service card to send traffic requiring egress processing to any other service card for egress processing, whereby the system can make use of unused capacity that may exist on other service cards.
18. A network gateway process, comprising: receiving packets from a network via a physical interface connected to a medium; ingress processing of packets, with an ingress processing system, including one or more of protocol translation processing, de-encapsulation, decryption, authentication, point-to-point protocol (PPP) termination and network address translation (NAT); transferring packets to an egress packet processing system; egress processing said packets, with the egress processing system, including one or more of protocol translation, encapsulation, encryption, generation of authentication data, PPP generation and NAT processing.
19. A process according to claim 18, further comprising: establishing a queue of packets awaiting transmission; and transmitting queued packets via the physical interface, said packet queue being the exclusive buffer for packets between packets entering the ingress processing system and packet transmission.
20. A process according to claim 18, wherein packets are processed by said ingress processor at a rate of ingress at the physical interface.
21. A process according to claim 18, wherein said ingress processor system includes a fast path processor subsystem processing packets at speeds greater than or equal to the rate at which packets enter the ingress processor system.
22. A process according to claim 21, wherein said fast path processor subsystem provides protocol translation processing converting packets from one protocol to another protocol.
23. A process according to claim 21, wherein said ingress processor system includes a security processor subsystem for processing security packets requiring one or more of decryption and authentication, said processing occurring concurrently with fast path processor packet processing.
24. A process according to claim 21, wherein said ingress processor system includes a special care packet processor for additional packet processing concurrently with fast path processor packet processing, said special care packet processor processing packets including one or more of network address translation (NAT) processing and NAT processing coupled with application layer gateway processing (NAT-ALG).
25. A process according to claim 21, wherein said ingress processor system includes a control packet processor for additional packet processing concurrently with fast path processor packet processing, including processing packets signaling the start and end of data sessions, packets used to convey information to a particular protocol and packets dependent on interaction with external entities.
26. A process according to claim 21, further comprising: providing said physical interface including a line card; providing said ingress processor system as part of a service card; providing said egress processor system in one of the service card and another service card; providing a line card bus connected to the line card; providing a service card bus connected to at least one of the service card and the another service card; and providing a switch fabric connecting the line card to at least one of the service card and the another service card.
27. A process according to claim 26, further comprising: providing said ingress processor system and said egress processor system as part of said service card; providing another service card with another ingress processor system for processing all or part of packets received from said line card and for sending ingress processed packets for egress processing and another egress processor system for receiving ingress processed packets and for processing all or part of received packets for sending to the line card; sending packets between service cards for ingress processing by one service card and egress processing by another service card or for ingress processing using more than one service card.
28. A process according to claim 26, further comprising: providing another line card as part of said physical interface; connecting said another line card, via said switch fabric to at least one of said service card and said another service card.
29. A process according to claim 28, further comprising: using said switch fabric to connect any one of said line cards to any one of said service cards, whereby any line card can send packet traffic to any service card and routing of packet traffic is configured one of statically and dynamically by said line card.
30. A process according to claim 28, further comprising: providing said service card bus as a static bus for connection of one of said service cards through said switch fabric to one of said line cards and a dynamic bus for connecting a service card to another service card through said switch fabric allowing any service card to send packet traffic requiring ingress processing to any other service card for ingress processing and allowing any service card to send traffic requiring egress processing to any other service card for egress processing, whereby the system can make use of unused capacity that may exist on other service cards.
31. A process according to claim 18, further comprising: receiving packets from a network with a first packet protocol as part of said step of receiving packets; using a first module ingress processing subsystem for said step of ingress processing of packets to produce end-to-end packets; transferring the end-to-end packets to a second module egress packet processing subsystem; using a second module egress processing subsystem for egress packet processing to produce packets for sending to a network with a second packet protocol; receiving packets from the network with the second packet protocol; using a second module ingress processing subsystem for ingress processing to produce end-to-end packets; transferring the end-to-end packets to a first module egress processing subsystem; using the first module egress packet processing subsystem for egress packet processing to produce packets for sending to the network with the first packet protocol.
32. A process according to claim 18, further comprising: providing a switch fabric; connecting a first line card to the switch fabric via a bus, the first line card providing a network interface; connecting a first service card to the switch fabric via a bus; connecting a second line card to the switch fabric via a bus, the second line card providing a network interface; connecting a second service card to the switch fabric via a bus; transferring packets from the first line card to the first service card; processing packets at the first service card including one or more of de-encapsulation and decryption as part of said step of ingress processing of packets; transferring packets from the first service card to the second service card; processing packets at the second service card including one or more of encapsulation and encryption as part of said step of egress processing packets; transferring packets from the second service card to the second line card.
33. A process according to claim 32, wherein each of said first service card and said second service card processes ingress packets from a line card, including de-encapsulation and decryption processing, separately from processing egress packets to a line card, including encapsulation and encryption, with separate processing subsystems.
34. A process according to claim 29, further comprising: segregating traffic including physically segregating data traffic using one or more service card and one or more line card with traffic flows segregated from data traffic on one or more other service card and one or more other line card.
PCT/US2002/008170 2001-03-17 2002-03-15 Multiprotocol wireless gateway WO2002082723A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP02763851A EP1371198A2 (en) 2001-03-17 2002-03-15 Multiprotocol wireless gateway
AU2002338382A AU2002338382A1 (en) 2001-03-17 2002-03-15 Multiprotocol wireless gateway
JP2002580556A JP2005503691A (en) 2001-03-17 2002-03-15 Network infrastructure device for data traffic to or from a mobile device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/811,204 2001-03-17
US09/811,204 US20020181476A1 (en) 2001-03-17 2001-03-17 Network infrastructure device for data traffic to and from mobile units

Publications (2)

Publication Number Publication Date
WO2002082723A2 true WO2002082723A2 (en) 2002-10-17
WO2002082723A3 WO2002082723A3 (en) 2003-08-07

Family

ID=25205872

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/008170 WO2002082723A2 (en) 2001-03-17 2002-03-15 Multiprotocol wireless gateway

Country Status (5)

Country Link
US (1) US20020181476A1 (en)
EP (1) EP1371198A2 (en)
JP (1) JP2005503691A (en)
AU (1) AU2002338382A1 (en)
WO (1) WO2002082723A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007535001A (en) * 2004-04-27 2007-11-29 Intel Corporation Device and method for performing cryptographic processing
JP2008500590A (en) * 2004-06-25 2008-01-10 Intel Corporation Apparatus and method for performing MD5 digesting

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7596139B2 (en) 2000-11-17 2009-09-29 Foundry Networks, Inc. Backplane interface adapter with error control and redundant fabric
US7016361B2 (en) * 2002-03-02 2006-03-21 Toshiba America Information Systems, Inc. Virtual switch in a wide area network
US20120155466A1 (en) 2002-05-06 2012-06-21 Ian Edward Davis Method and apparatus for efficiently processing data packets in a computer network
US7187687B1 (en) 2002-05-06 2007-03-06 Foundry Networks, Inc. Pipeline method and system for switching packets
US20040098510A1 (en) * 2002-11-15 2004-05-20 Ewert Peter M. Communicating between network processors
JP4431315B2 (en) * 2003-01-14 2010-03-10 Hitachi, Ltd. Packet communication method and packet communication apparatus
US7337314B2 (en) * 2003-04-12 2008-02-26 Cavium Networks, Inc. Apparatus and method for allocating resources within a security processor
US7661130B2 (en) * 2003-04-12 2010-02-09 Cavium Networks, Inc. Apparatus and method for allocating resources within a security processing architecture using multiple queuing mechanisms
US7657933B2 (en) 2003-04-12 2010-02-02 Cavium Networks, Inc. Apparatus and method for allocating resources within a security processing architecture using multiple groups
US6901072B1 (en) * 2003-05-15 2005-05-31 Foundry Networks, Inc. System and method for high speed packet transmission implementing dual transmit and receive pipelines
US20050102474A1 (en) * 2003-11-06 2005-05-12 Sridhar Lakshmanamurthy Dynamically caching engine instructions
US20050108479A1 (en) * 2003-11-06 2005-05-19 Sridhar Lakshmanamurthy Servicing engine cache requests
US7536692B2 (en) 2003-11-06 2009-05-19 Intel Corporation Thread-based engine cache partitioning
US7721300B2 (en) * 2004-01-07 2010-05-18 Ge Fanuc Automation North America, Inc. Methods and systems for managing a network
US20050193178A1 (en) * 2004-02-27 2005-09-01 William Voorhees Systems and methods for flexible extension of SAS expander ports
US7817659B2 (en) * 2004-03-26 2010-10-19 Foundry Networks, Llc Method and apparatus for aggregating input data streams
US8730961B1 (en) 2004-04-26 2014-05-20 Foundry Networks, Llc System and method for optimizing router lookup
US7920542B1 (en) * 2004-04-28 2011-04-05 At&T Intellectual Property Ii, L.P. Method and apparatus for providing secure voice/multimedia communications over internet protocol
US7466712B2 (en) * 2004-07-30 2008-12-16 Brocade Communications Systems, Inc. System and method for providing proxy and translation domains in a fibre channel router
US8059664B2 (en) * 2004-07-30 2011-11-15 Brocade Communications Systems, Inc. Multifabric global header
US7936769B2 (en) 2004-07-30 2011-05-03 Brocade Communications System, Inc. Multifabric zone device import and export
US8448162B2 (en) 2005-12-28 2013-05-21 Foundry Networks, Llc Hitless software upgrades
US8238255B2 (en) 2006-11-22 2012-08-07 Foundry Networks, Llc Recovering from failures without impact on data traffic in a shared bus architecture
US7626982B2 (en) * 2006-12-01 2009-12-01 Time Warner Cable, Inc. System and method for communication over an adaptive service bus
CN101202719A (en) * 2006-12-15 2008-06-18 Hongfujin Precision Industry (Shenzhen) Co., Ltd. Network device and communication redundancy method thereof
US8155011B2 (en) 2007-01-11 2012-04-10 Foundry Networks, Llc Techniques for using dual memory structures for processing failure detection protocol packets
US8509236B2 (en) 2007-09-26 2013-08-13 Foundry Networks, Llc Techniques for selecting paths and/or trunk ports for forwarding traffic flows
US8599850B2 (en) * 2009-09-21 2013-12-03 Brocade Communications Systems, Inc. Provisioning single or multistage networks using ethernet service instances (ESIs)
US8830930B2 (en) * 2010-08-16 2014-09-09 Electronics And Telecommunications Research Institute Device in wireless network, device resource management apparatus, gateway and network server, and control method of the network server
JP6429188B2 (en) * 2014-11-25 2018-11-28 APRESIA Systems, Ltd. Relay device
CN108886495B (en) * 2016-02-18 2022-07-05 瑞萨电子株式会社 Message processor
US20220374376A1 (en) * 2021-05-19 2022-11-24 Sony Semiconductor Solutions Corporation Memory mapping of legacy i/f protocols over tdd

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997008838A2 (en) * 1995-08-14 1997-03-06 Ericsson Inc. Method and apparatus for modifying a standard internetwork protocol layer header
EP0838930A2 (en) * 1996-10-25 1998-04-29 Digital Equipment Corporation Pseudo network adapter for frame capture, encapsulation and encryption
US5949785A (en) * 1995-11-01 1999-09-07 Whittaker Corporation Network access communications system and methodology

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB8425375D0 (en) * 1984-10-08 1984-11-14 Gen Electric Co Plc Data communication systems
US5229990A (en) * 1990-10-03 1993-07-20 At&T Bell Laboratories N+K sparing in a telecommunications switching environment
US5276684A (en) * 1991-07-22 1994-01-04 International Business Machines Corporation High performance I/O processor
US5495478A (en) * 1994-11-14 1996-02-27 Dsc Communications Corporation Apparatus and method for processing asynchronous transfer mode cells
US5615211A (en) * 1995-09-22 1997-03-25 General Datacomm, Inc. Time division multiplexed backplane with packet mode capability
US5781320A (en) * 1996-08-23 1998-07-14 Lucent Technologies Inc. Fiber access architecture for use in telecommunications networks
US6038228A (en) * 1997-04-15 2000-03-14 Alcatel Usa Sourcing, L.P. Processing call information within a telecommunications network
US6259699B1 (en) * 1997-12-30 2001-07-10 Nexabit Networks, Llc System architecture for and method of processing packets and/or cells in a common switch
US6272129B1 (en) * 1999-01-19 2001-08-07 3Com Corporation Dynamic allocation of wireless mobile nodes over an internet protocol (IP) network
US6591306B1 (en) * 1999-04-01 2003-07-08 Nec Corporation IP network access for portable devices
US6680933B1 (en) * 1999-09-23 2004-01-20 Nortel Networks Limited Telecommunications switches and methods for their operation

Also Published As

Publication number Publication date
WO2002082723A3 (en) 2003-08-07
US20020181476A1 (en) 2002-12-05
JP2005503691A (en) 2005-02-03
AU2002338382A1 (en) 2002-10-21
EP1371198A2 (en) 2003-12-17

Similar Documents

Publication Publication Date Title
US20020181476A1 (en) Network infrastructure device for data traffic to and from mobile units
US20070280223A1 (en) Hybrid data switching for efficient packet processing
US20020184487A1 (en) System and method for distributing security processing functions for network applications
US7283538B2 (en) Load balanced scalable network gateway processor architecture
McAuley Protocol design for high speed networks
US6157649A (en) Method and system for coordination and control of data streams that terminate at different termination units using virtual tunneling
US20180287818A1 (en) Non-blocking any-to-any data center network having multiplexed packet spraying within access node groups
US7836443B2 (en) Network application apparatus
US5280481A (en) Local area network transmission emulator
US6160811A (en) Data packet router
JP3873639B2 (en) Network connection device
US20030074473A1 (en) Scalable network gateway processor architecture
JPH1132059A (en) High-speed internet access
US20090323554A1 (en) Inter-office communication methods and devices
US6947416B1 (en) Generalized asynchronous HDLC services
US7535895B2 (en) Selectively switching data between link interfaces and processing engines in a network switch
US7680102B2 (en) Method and system for connecting manipulation equipment between operator's premises and the internet
Dayananda et al. Architecture for inter-cloud services using IPsec VPN
EP1636926B1 (en) Network switch for link interfaces and processing engines
JP4189965B2 (en) Communication node
US11929934B2 (en) Reliable credit-based communication over long-haul links
WO2018093290A1 (en) Method for providing broadband data transmission services
US7535894B2 (en) System and method for a communication network
Mandviwalla et al. DRA: A dependable architecture for high-performance routers

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2002763851

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2002580556

Country of ref document: JP

WWP Wipo information: published in national office

Ref document number: 2002763851

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 2002763851

Country of ref document: EP