WO2004045162A2 - Traffic management architecture - Google Patents

Traffic management architecture

Info

Publication number
WO2004045162A2
Authority
WO
WIPO (PCT)
Prior art keywords
processor
packets
packet
sorting
queue
Application number
PCT/GB2003/004893
Other languages
French (fr)
Other versions
WO2004045162A3 (en)
Inventor
Anthony Spencer
Original Assignee
Clearspeed Technology Plc
Application filed by Clearspeed Technology Plc filed Critical Clearspeed Technology Plc
Priority to CN2003801085295A priority Critical patent/CN1736068B/en
Priority to US10/534,346 priority patent/US20050243829A1/en
Priority to GB0511589A priority patent/GB2412035B/en
Priority to AU2003283559A priority patent/AU2003283559A1/en
Publication of WO2004045162A2 publication Critical patent/WO2004045162A2/en
Publication of WO2004045162A3 publication Critical patent/WO2004045162A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/76 Architectures of general purpose stored program computers
    • G06F 15/80 Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/466 Transaction processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/56 Queue scheduling implementing delay-aware scheduling
    • H04L 47/562 Attaching a time tag to queues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/60 Queue scheduling implementing hierarchical scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6215 Individual queue per QOS, rate or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/624 Altering the ordering of packets in an individual queue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H04L 49/9042 Separate storage for different parts of the packet, e.g. header and payload

Abstract

An architecture for sorting incoming data packets in real time, on the fly, processes the packets and places them into an exit order queue before storing the packets. This is in contrast to the traditional way of storing first then sorting later, and provides rapid processing capability. A processor (22) generates packet records from an input stream (20) and determines an exit order number for each related packet. The records are stored in an orderlist manager (24) whilst the data portions are stored in a memory hub (21) for later retrieval in the exit order held in the manager (24). The processor (22) is preferably a parallel SIMD processor array, provided with rapid access to shared state (23) by a state engine.

Description

TRAFFIC MANAGEMENT ARCHITECTURE
Field of the Invention
The present invention concerns the management of traffic, such as data and communications traffic, and provides an architecture for a traffic manager that surpasses known traffic management schemes in terms of speed, efficiency and reliability.
Background to the Invention
The problem that modern traffic management schemes have to contend with is the sheer volume of traffic. Data arrives at a traffic handler from multiple sources at unknown rates and volumes and has to be received, sorted and passed on "on the fly" to the next handling stage downstream. Received data may be associated with a number of attributes by which priority allocation, for example, is applied to individual data packets or streams, depending on the class of service offered to an individual client. Some traffic may therefore have to be queued whilst later arriving but higher priority traffic is processed.
A router's switch fabric can deliver packets from multiple ingress ports to one of a number of egress ports. The linecard connected to this egress port must then transmit these packets over some communication medium to the next router in the network. The rate of transmission is normally limited to a standard rate. For instance, an OC-768 link would transmit packets over an optical fibre at a rate of 40 Gbits/s. With many independent ingress paths delivering packets for transmission at egress, the time-averaged rate of delivery cannot exceed 40 Gbits/s for this example. Although over time the input and output rates are equivalent, the short term delivery of traffic by the fabric is "bursty" in nature, with rates often peaking above the 40 Gbits/s threshold. Since the rate of receipt can be greater than the rate of transmission, short term packet queueing is required at egress to prevent packet loss. A simple FIFO queue is adequate for this purpose in routers which provide a flat grade of service to all packets. However, more complex schemes are required in routers which provide Traffic Management.
In a converged internetwork, different end user applications require different grades of service in order to run effectively. Email can be carried on a best effort service where no guarantees are made regarding rate of or delay in delivery. Real-time voice data has a much more demanding requirement for reserved transmission bandwidth and guaranteed minimum delay in delivery. This cannot be achieved if all traffic is buffered in the same FIFO queue. A queue per so-called "Class of Service" is required so that traffic routed through higher priority queues can bypass that in lower priority queues. Certain queues may also be assured a guaranteed portion of the available output line bandwidth.
At first sight the traffic handling task appears to be straightforward. Packets are placed in queues according to their required class of service. For every forwarding treatment that a system provides, a queue must be implemented. These queues are then managed by the following mechanisms:
• Queue management assigns buffer space to queues and prevents overflow
• Measures are implemented to cause traffic sources to slow their transmission rates if queues become backlogged
• Scheduling controls the de-queuing process by dividing the available output line bandwidth between the queues.
Different service levels can be provided by weighting the amount of bandwidth and buffer space allocated to different queues, and by prioritised packet dropping in times of congestion. Weighted Fair Queueing (WFQ), Deficit Round Robin (DRR) scheduling and Weighted Random Early Detect (WRED) are just a few of the many algorithms which might be employed to perform these scheduling and congestion avoidance tasks; a minimal scheduling sketch follows the list of implementation issues below. In reality, system realisation is confounded by some difficult implementation issues:
• High line speeds can cause large packet backlogs to rapidly develop during brief congestion events. Large memories of the order of 500 MBytes to 1 GByte are required for 40 Gbits/s line rates.
• The packet arrival rate can be very high due to overspeed in the packet delivery from the switch fabric. This demands high data read and write bandwidth into memory. More importantly, high address bandwidth is also required.
• The processing overhead of some scheduling and congestion avoidance algorithms is high.
• Priority queue ordering for some fair queueing (FQ) scheduling algorithms is a non-trivial problem at high speeds.
• A considerable volume of state must be maintained in support of scheduling and congestion avoidance algorithms, to which low latency access is required. The volume of state increases with the number of queues implemented.
• As new standards and algorithms emerge, the specification is a moving target. To find a flexible (ideally programmable) solution is therefore a high priority.
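By way of illustration only, the following minimal sketch shows Deficit Round Robin dequeuing, one of the scheduling algorithms named above. The structures (queue identifiers, a fixed byte quantum) are illustrative assumptions, not any particular embodiment:

    from collections import deque

    class DrrScheduler:
        # Deficit Round Robin: each backlogged queue receives a quantum of
        # byte credit per round and may dequeue packets while its deficit
        # counter covers the packet at the head of the queue.
        def __init__(self, quantum=1500):
            self.quantum = quantum        # bytes of credit granted per round
            self.queues = {}              # queue id -> deque of (length, payload)
            self.deficit = {}             # queue id -> accumulated byte credit

        def enqueue(self, qid, length, payload):
            self.queues.setdefault(qid, deque()).append((length, payload))
            self.deficit.setdefault(qid, 0)

        def service_round(self):
            # One DRR round: visit every queue once, in a fixed order.
            sent = []
            for qid, q in self.queues.items():
                if not q:
                    continue
                self.deficit[qid] += self.quantum
                while q and q[0][0] <= self.deficit[qid]:
                    length, payload = q.popleft()
                    self.deficit[qid] -= length
                    sent.append(payload)
                if not q:
                    self.deficit[qid] = 0  # an emptied queue forfeits leftover credit
            return sent

With equal quanta, each backlogged queue drains roughly one quantum of bytes per round, which is how DRR approximates fair bandwidth shares without per-packet timestamps.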
In a conventional approach to traffic scheduling, one might typically place packets directly into an appropriate queue on arrival, and then subsequently dequeue packets from those queues into an output stream. Figure 1 shows the basic layout of the current approach to traffic management. It can be thought of as a "queue first, think later" strategy. Data received at the input 1 is split into a number of queues in parallel channels 2.1 to 2.n. A traffic scheduler processor 3 receives the data from the parallel channels and sorts them into order. The order may be determined by the priority attributes, for example, mentioned above. State is stored in memory 4 accessible by the processor. The output from the processor represents the new order as determined by the processor in dependence on the quality of service attributes assigned to the data at the outset. The traffic scheduler 3 determines the order of de-queuing. Since the scheduling decision can be processing-intensive as the number of input queues increases, queues are often arranged into small groups which are locally scheduled into an intermediate output queue. This output queue is then the input queue to a following scheduling stage. The scheduling problem is thus simplified using a "divide-and-conquer" approach, whereby high performance can be achieved through parallelism between groups of queues in a tree type structure, or so-called hierarchical link sharing scheme.
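A toy sketch of this divide-and-conquer tree follows, assuming plain round robin at every node purely for brevity (in practice each node might run WFQ, DRR or similar):

    from collections import deque

    class SchedulerNode:
        # A node schedules its children locally into one intermediate
        # output; a parent node then treats that output as a single input
        # queue, giving the tree-structured hierarchical link sharing
        # scheme described above.
        def __init__(self, children):
            self.children = children      # SchedulerNode instances or deques
            self.turn = 0

        def dequeue(self):
            for _ in range(len(self.children)):
                child = self.children[self.turn]
                self.turn = (self.turn + 1) % len(self.children)
                if isinstance(child, SchedulerNode):
                    pkt = child.dequeue()
                else:
                    pkt = child.popleft() if child else None
                if pkt is not None:
                    return pkt
            return None

    # Two leaf groups, each locally scheduled, feeding one root scheduler.
    group_a = SchedulerNode([deque(["a1", "a2"]), deque(["b1"])])
    group_b = SchedulerNode([deque(["c1"])])
    root = SchedulerNode([group_a, group_b])
    print([root.dequeue() for _ in range(4)])   # a1, c1, b1, a2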
This approach works in hardware up to a point. For the exceptionally large numbers of input queues (of the order of 64k) required for per-flow traffic handling, the first stage becomes unmanageably wide, to the point that it becomes impractical to implement the required number of schedulers. Alternatively, in systems which aggregate all traffic into a small number of queues, parallelism between hardware schedulers cannot be exploited. It then becomes extremely difficult to implement a single scheduler - even in optimised hardware - that can meet the required performance point. With other congestion avoidance and queue management tasks to perform in addition to scheduling, it is apparent that a new approach to traffic handling is required. The queue first, think later strategy often fails and data simply has to be jettisoned. There is therefore a need for an approach to traffic management that does not suffer from the same defects as the prior art and does not introduce its own fallibilities.
Summary of the invention
In one aspect, the invention provides a system comprising means for sorting incoming data packets in real time before said packets are stored in memory.
In another aspect, the invention provides a data packet handling system, comprising means whereby incoming data packets are assigned an exit order before being stored in memory.
In yet another aspect, the invention provides a method for sorting incoming data packets in real time, comprising sorting the packets into an exit order before storing them in memory.
The sorting means may be responsive to information contained within a packet and/or within a table and/or information associated with a data packet stream in which said packet is located, whereby to determine an exit order number for that packet. The packets may be inserted into one or more queues by a queue manager adapted to insert packets into the queue means in exit order. There may be means to drop certain packets before being output from said queue means or before being queued in the queue means.
The system may be such that the sorting means and the queue means process only packet records containing information about the packets, whereas data portions of the packets are stored in the memory for output in accordance with an exit order determined for the corresponding packet record.
The sorting means preferably comprises a parallel processor, such as an array processor, more preferably a SIMD processor. There may be further means to provide the parallel processor with access to shared state.
A state engine may control access to the shared state.
Tables of information for sorting said packets or said packet records may be provided, wherein said tables are stored locally to each processor or to each processor element of a parallel processor. The tables may be the same on each processor or on each processor element of a parallel processor. The tables may be different on different processors or on different processor elements of a parallel processor.
The processors or processor elements may share information from their respective tables, such that: (a) the information held in the table for one processor is directly accessible by a different processor or the information held in the table in one processor element may be accessible by other processing element(s) of the processor; and (b) processors may have access to tables in other processors or processor elements have access to other processor elements in the processor, whereby processors or processor elements can perform table lookups on behalf of other processor(s) or processor elements of the processor. The invention also encompasses a computer system, comprising a data handling system as previously specified; a network processing system, comprising a data handling system as previously specified; and a data carrier containing program means adapted to perform a corresponding method.
Brief Description of the Drawings
The invention will be described with reference to the following drawings, in which: Figure 1 is a schematic representation of a prior art traffic handler; and Figure 2 is a schematic representation of a traffic handler in accordance with the invention.
Detailed Description of the Illustrated Embodiments
The present invention turns current thinking on its head. Figure 2 shows schematically the basic structure underlying the new strategy for effective traffic management. It could be described as a "think first, queue later"™ strategy. Packet data (traffic) received at the input 20 has the header portions stripped off and record portions of fixed length generated therefrom, containing information about the data, so that the record portions and the data portions can be handled separately. Thus, the data portions take the lower path and are stored in Memory Hub 21. At this stage, no attempt is made to organise the data portions in any particular order. However, the record portions are passed to a processor 22, such as a SIMD parallel processor, comprising one or more arrays of processor elements (PEs). Typically, each PE contains its own processor unit, local memory and register(s).
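By way of illustration, a minimal sketch of this record/data split is given below; the field names (flow_id, data_ptr) and the one-byte header parse are invented for the example, not taken from the patent:

    from dataclasses import dataclass

    @dataclass
    class PacketRecord:
        # Fixed-length record: every field has a known size, so the PE
        # array's handling of records is predictable however large the
        # packet's data portion is. The field choice is illustrative.
        flow_id: int        # identifies the packet's flow / class of service
        length: int         # length of the stored data portion, in bytes
        data_ptr: int       # handle returned by the memory hub's write path
        exit_order: float   # stamped later by the PE array

    def split_packet(raw: bytes, header_len: int, store) -> PacketRecord:
        # Strip the header, hand the data portion to the memory hub
        # ('store' stands in for its write path) and keep only the
        # fixed-size record for processing on the upper path.
        header, data = raw[:header_len], raw[header_len:]
        flow_id = header[0]               # toy one-byte header parse
        return PacketRecord(flow_id, len(data), store(data), 0.0)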
In contrast to the prior architecture outlined in Figure 1, the present architecture shares state 23 in the PE arrays under the control of a State Engine (not shown) communicating with the PE array(s). It should be emphasised that only the record portions are processed in the PE array. The record portions are all the same length, so their handling is predictable, at least in terms of length. The record portions are handled in the processor 22. Here, information about the incoming packets is distributed amongst the PEs in the array. This array basically performs the same function as the processor 3 in the prior art (Figure 1) but the operations are spread over the PE array for vastly more rapid processing. This processing effectively "time-stamps" the packet records to indicate when the corresponding data should be exited, assuming that it should actually be exited and not jettisoned, for example. The results of this processing are sent to the orderlist manager 24, which is an "intelligent" queue system which places the record portions in the appropriate exit order, for example in bins allocated to groups of data exit order numbers. The manager 24 is preferably dynamic, so that new data packets with exit numbers having a higher priority than those already in an appropriate exit number bin can take over the position previously allocated. It should be noted that the PE array 22 simply calculates the order in which the data portions are to be output but the record portions themselves do not have to be put in that order. In other words, the PEs do not have to maintain the order of packets being processed nor sort them before they are queued.
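The following sketch illustrates one plausible form of this processing, using a WFQ-style virtual finish number as the exit order number and a binary heap standing in for the orderlist manager's bins. Records are assumed to carry the illustrative flow_id, length and exit_order fields sketched above; the real PE array could apply any scheduling discipline:

    import heapq

    class ExitOrderEngine:
        # Stamps each arriving record with an exit order number and feeds
        # an orderlist; records may arrive and be stamped in any order,
        # since the heap keeps them sorted by exit number.
        def __init__(self, weights):
            self.weights = weights        # flow id -> bandwidth share
            self.last_finish = {}         # flow id -> finish number of last record
            self.orderlist = []           # min-heap of (exit_order, seq, record)
            self.seq = 0                  # tie-breaker for equal exit numbers

        def stamp_and_insert(self, record):
            # Finish number: no earlier than the flow's previous finish,
            # advanced by length / weight. (Tracking of a global system
            # virtual time, as in full WFQ, is elided in this sketch.)
            start = self.last_finish.get(record.flow_id, 0.0)
            record.exit_order = start + record.length / self.weights[record.flow_id]
            self.last_finish[record.flow_id] = record.exit_order
            heapq.heappush(self.orderlist, (record.exit_order, self.seq, record))
            self.seq += 1

        def next_to_exit(self):
            # Record whose data portion the memory hub should read out next.
            return heapq.heappop(self.orderlist)[2] if self.orderlist else None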
Previous systems in which header and data portions were treated as one entity became unwieldy, slow and cumbersome because of the innate difficulty of preserving the integrity of the whole packet yet still providing enough bandwidth to handle the combination. In the present invention, it is only necessary for the Memory Hub 21 to provide sufficient bandwidth to handle just the data portions. The memory hub can handle packets streaming in in real time. The memory hub can nevertheless divide larger data portions into fragments, if necessary, and store them in physically different locations, provided, of course, there are pointers to the different fragments to ensure read out of the entire content of such data packets. In order to overcome the problem of sharing state over all the PEs in the array, multiple PEs are permitted to access (and modify) the state variables. Such access is under the control of a State Engine (not shown), which automatically handles the "serialisation" problem of parallel access to shared state. The output 25, in dependence on the exit order queue held in the Orderlist Manager 24, instructs the Memory Hub 21 to read out the corresponding packets in that required order, thereby releasing memory locations for newly received data packets in the process. The chain-dotted line 26 enclosing the PE array 22, shared state/State Engine 23 and Orderlist Manager 24 signifies that this combination of elements can be placed on a single chip and that this chip can be replicated, so that there may be one or two (or more) chips interfacing with a single input 20, output 25 and Memory Hub 21. As is customary, the chip will also include necessary additional components, such as a distributor and a collector per PE array to distribute data to the individual PEs and to collect processed data from the PEs, plus semaphore block(s) and interface elements.
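A minimal sketch of the memory hub's fragment storage described above, assuming an illustrative fragment size and a simple pointer chain between slots:

    FRAGMENT = 64    # illustrative fragment size, in bytes

    class MemoryHub:
        # A large data portion is split across non-contiguous slots, each
        # slot holding (bytes, pointer to next), so the whole packet can
        # be recovered by chasing pointers from the head slot.
        def __init__(self):
            self.slots = {}
            self.next_free = 0

        def store(self, payload: bytes) -> int:
            head, prev = None, None
            for off in range(0, len(payload), FRAGMENT):
                slot = self.next_free
                self.next_free += 1
                self.slots[slot] = [payload[off:off + FRAGMENT], None]
                if prev is None:
                    head = slot
                else:
                    self.slots[prev][1] = slot
                prev = slot
            return head                   # handle kept in the packet record

        def read_out(self, head: int) -> bytes:
            # Read-out frees each slot, releasing memory for new packets.
            out, slot = b"", head
            while slot is not None:
                frag, slot = self.slots.pop(slot)
                out += frag
            return out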
The following features are significant to the new architecture:
• There are no separate, physical stage one input queues.
• Packets are effectively sorted directly into the output queue on arrival. A group of input queues thus exists in the sense of being interleaved together within the single output queue.
• These interleaved "input queues" are represented by state in the queue state engine. This state may track queue occupancy, the finish time/number of the last packet in the queue, etc. Occupancy can be used to determine whether or not a newly arrived packet should be placed in the output queue or whether it should be dropped (congestion management). Finish numbers are used to preserve the order of the "input queues" within the output queue and to determine an appropriate position in the output queue for newly arrived packets (scheduling); a sketch of this per-queue state follows the list.
• Scheduling and congestion avoidance decisions are thus made "on the fly" prior to enqueuing (ie "Think first, queue later"™).
• This technique is made possible by the deployment of a high performance data flow processor which can perform the required functions at wire speed. Applicant's array processor is ideal for this purpose, providing a large number of processing cycles per packet for packets arriving at rates as high as one every couple of system clock cycles.
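The per-queue state sketch promised above. The plain occupancy threshold is an illustrative stand-in for a congestion avoidance scheme such as WRED, and the record is assumed to carry the exit_order field used earlier:

    class QueueState:
        # State for one virtual "input queue" of the kind the queue state
        # engine might hold: occupancy for drop decisions, the finish
        # number of the last packet for ordering.
        def __init__(self, max_occupancy):
            self.occupancy = 0
            self.max_occupancy = max_occupancy
            self.last_finish = 0.0

    def on_arrival(state, record):
        # Think first: decide drop versus enqueue, and fix the record's
        # position in the single output queue, before anything is queued.
        if state.occupancy >= state.max_occupancy:
            return False                  # congestion management: drop on arrival
        record.exit_order = max(record.exit_order, state.last_finish)
        state.last_finish = record.exit_order
        state.occupancy += 1              # decremented again when the packet exits
        return True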
Ancillary features
Class of Service (CoS) tables:
CoS parameters are used in scheduling and congestion avoidance calculations. They are conventionally read by processors as a fixed group of values from a class of service table in a shared memory. This places further demands on system bus and memory access bandwidth. The table size also limits the number of different classes of service which may be stored. An intrinsic capability of Applicant's array processor is rapid, parallel local memory access. This can be used to advantage as follows:
• The Class of Service table is mapped into each PE's memory. This means that passive state does not require lookup from external memory. The enormous internal memory addressing bandwidth of the SIMD processor is utilised.
• By performing multiple lookups into local memories in a massively parallel fashion, instead of single large lookups from a shared external table, a huge number of different Class of Service combinations is available from a relatively small volume of memory.
• Table sharing between PEs: PEs can perform proxy lookups on behalf of each other. A single CoS table can therefore be split across two PEs, thus halving the memory requirement.
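A minimal sketch of such table splitting with proxy lookups, assuming a two-PE pairing; the PE names and CoS parameters are invented for illustration:

    class ProcessorElement:
        # CoS tables held in PE-local memory: each PE stores half of one
        # table and answers proxy lookups for its partner, halving the
        # per-PE memory needed for a full table.
        def __init__(self, local_rows, partner=None):
            self.local_rows = local_rows  # cos id -> parameter tuple
            self.partner = partner

        def lookup(self, cos_id):
            if cos_id in self.local_rows:
                return self.local_rows[cos_id]
            return self.partner.lookup(cos_id)   # proxy lookup on the partner PE

    # Even-numbered CoS rows on one PE, odd-numbered rows on its neighbour.
    pe0 = ProcessorElement({0: ("best effort", 1), 2: ("gold", 8)})
    pe1 = ProcessorElement({1: ("silver", 4)}, partner=pe0)
    pe0.partner = pe1
    assert pe0.lookup(1) == ("silver", 4)    # answered by pe1 on pe0's behalf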
Summary
It can thus be appreciated that the present invention is capable of providing the following key features, marking considerable improvements over the prior art:
• Traditional packet scheduling involves parallel enqueuing and then serialised scheduling from those queues. For high performance traffic handling we have turned this around. Arriving packets are first processed in parallel and subsequently enqueued in a serial orderlist. This is referred to as "Think First, Queue Later"™.
• The deployment of a single pipeline parallel processing architecture (Applicant's array processor) is innovative in a Traffic Handling application. It provides the wire speed processing capability which is essential for the implementation of this concept.
• An alternative form of parallelism (compared to independent parallel schedulers) is thus exploited in order to solve the processing issues in high speed Traffic Handling.

Claims

1. A system comprising means for sorting incoming data packets in real time before said packets are stored in memory.
2. A data packet handling system, comprising means whereby incoming data packets are assigned an exit order before being stored in memory.
3. A system as claimed in claim 1 or claim 2, wherein the sorting means is responsive to information contained within a packet whereby to determine an exit order number for that packet.
4. A system as claimed in claim 2, wherein the sorting means is responsive to information contained in a table whereby to determine an exit order number for that packet.
5. A system as claimed in claim 2, wherein the sorting means is responsive to information associated with a data packet stream in which said packet is located whereby to determine an exit order number for that packet.
6. A system as claimed in claim 1 or claim 2, comprising queue means to queue sorted packets for output in exit order.
7. A system as claimed in claim 6, wherein said sorting means is adapted to insert sorted packets in said queue means in exit order.
8. A system as claimed in claim 6 or 7, wherein said queue means is a single queue.
9. A system as claimed in claim 8, wherein said single queue provides a plurality of virtual queues.
10. A system as claimed in claim 6, further comprising a queue manager adapted to insert packets into said queue means in exit order.
11. A system as claimed in claim 6, further comprising means to drop certain packets before being output from said queue means.
12. A system as claimed in claim 6, further comprising means to drop certain packets before being queued in said queue means.
13. A system as claimed in any of the preceding claims, wherein: said sorting means and said queue means process only packet records containing information about said packets, and data portions of said packets are stored in said memory for output in accordance with an exit order determined for the corresponding packet record.
14. A system as claimed in any of the preceding claims, wherein said sorting means comprises a parallel processor.
15. A system as claimed in claim 14, wherein said parallel processor is an array processor.
16. A system as claimed in claim 15, wherein said array processor is a SIMD processor.
17. A system as claimed in claim 14, 15 or 16, further comprising means to provide access for said parallel processors to shared state.
18. A system as claimed in claim 17, further comprising a state engine to control said access to said shared state.
19. A system as claimed in any of claims 1 to 18, further comprising tables of information for sorting said packets or said packet records, wherein said tables are stored locally to each processor or to each processor element of a parallel processor.
20. A system as claimed in claim 19, wherein said tables are the same on each processor or on each processor element of a parallel processor.
21. A system as claimed in claim 19, wherein said tables are different on different processors or on different processor elements of a parallel processor.
22. A system as claimed in claim 19, wherein said processors or processor elements share information from their respective tables, such that:
(a) the information held in the table for one processor is directly accessible by a different processor or the information held in the table in one processor element is accessible by other processing element(s) of the processor; and
(b) processors have access to tables in other processors or processor elements have access to other processor elements in the processor, whereby processors or processor elements can perform table lookups on behalf of other processor(s) or processor elements of the processor.
23. A system as claimed in any of the preceding claims, wherein said sorting means implement algorithms for packet scheduling in accordance with predetermined criteria, such as WFQ, DRR, congestion avoidance (eg WRED) or other prioritisation and sorting.
24. A method for sorting incoming data packets in real time, comprising sorting the packets into an exit order before storing them in memory.
25. A method as claimed in claim 24, wherein the sorting is responsive to information contained within a packet whereby to assign an exit order number for that packet.
26. A method as claimed in claim 24, wherein the sorting is responsive to information contained in a table whereby to determine an exit order number for that packet.
27. A method as claimed in claim 24, wherein the sorting is responsive to information associated with a data packet stream in which said packet is located whereby to determine an exit order number for that packet.
28. A method as claimed in claim 24, further comprising queuing sorted packets for output in exit order.
29. A method as claimed in claim 28, wherein said packets are inserted into a queue means in exit order determined by the means performing the sorting.
30. A method as claimed in claim 28, comprising inserting sorted packets into a queue means in exit order under control of a queue manager.
31. A method as claimed in claim 29 or 30, wherein said queuing is performed using a single output queue.
32. A method as claimed in claim 31, further comprising providing a plurality of virtual queues by means of said single output queue.
33. A method as claimed in claim 28, further comprising dropping certain packets before being output from said queue means.
34. A method as claimed in claim 28, further comprising dropping certain packets before being queued in said queue means.
35. A method as claimed in any of claims 24-34, wherein: said sorting and said queuing operations are performed only on packet records containing information about said packets, said method further comprising: storing data portions of said packets in said memory for output in accordance with an exit order number determined for the corresponding packet record.
36. A method as claimed in any of claims 24-34, wherein said sorting is performed by a parallel processor.
37. A method as claimed in claim 36, wherein said parallel processor is an array processor.
38. A method as claimed in claim 36, wherein said array processor is a SIMD processor.
39. A method as claimed in claim 36, 37 or 38, further comprising providing access for said processors to shared state under control of a state engine.
40. A method as claimed in claim 39, further comprising providing tables of information for sorting said packets or said packet records, wherein said tables are stored locally to each processor or to each processor element of a parallel processor.
41. A method as claimed in claim 40, wherein said tables are the same on each processor or on each processor element of a parallel processor.
42. A method as claimed in claim 40, wherein said tables are different on different processors or on different processor elements of a parallel processor.
43. A method as claimed in claim 40, wherein said processors or processor elements share information from their respective tables, such that:
(a) the information held in the table for one processor is made directly accessible by a different processor or the information held in the table of one processor element is made directly accessible to other processor element(s) of the processor; and
(b) access is provided for said processor or processor elements to tables in other processors or processor elements, whereby processors or processor elements can perform table lookups on behalf of another processor or processor element.
44. A system as claimed in any of claims 1-23, wherein said sorting means implement algorithms for packet scheduling in accordance with predetermined criteria, such as WFQ, DRR, congestion avoidance (eg WRED) or other prioritisation and sorting.
45. A computer system, comprising a data handling system as claimed in any of claims 1-23.
46. A network processing system, comprising a data handling system as claimed in any of claims 1-23.
47. A computer system adapted to perform the method as claimed in any of claims 24-43.
48. A network processing system adapted to perform the method as claimed in any of claims 24-43.
49. A computer system as claimed in claim 45 implemented as one or more silicon integrated circuits.
50. A data carrier containing program means adapted to perform the method as claimed in any of claims 24 to 43.
PCT/GB2003/004893 2002-11-11 2003-11-11 Traffic management architecture WO2004045162A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN2003801085295A CN1736068B (en) 2002-11-11 2003-11-11 Flow management structure system
US10/534,346 US20050243829A1 (en) 2002-11-11 2003-11-11 Traffic management architecture
GB0511589A GB2412035B (en) 2002-11-11 2003-11-11 Traffic management architecture
AU2003283559A AU2003283559A1 (en) 2002-11-11 2003-11-11 Traffic management architecture

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0226249.1 2002-11-11
GBGB0226249.1A GB0226249D0 (en) 2002-11-11 2002-11-11 Traffic handling system

Publications (2)

Publication Number Publication Date
WO2004045162A2 true WO2004045162A2 (en) 2004-05-27
WO2004045162A3 WO2004045162A3 (en) 2004-09-16

Family

ID=9947583

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/GB2003/004893 WO2004045162A2 (en) 2002-11-11 2003-11-11 Traffic management architecture
PCT/GB2003/004867 WO2004044733A2 (en) 2002-11-11 2003-11-11 State engine for data processor
PCT/GB2003/004866 WO2004045161A1 (en) 2002-11-11 2003-11-11 Packet storage system for traffic handling
PCT/GB2003/004854 WO2004045160A2 (en) 2002-11-11 2003-11-11 Data packet handling in computer or communication systems

Family Applications After (3)

Application Number Title Priority Date Filing Date
PCT/GB2003/004867 WO2004044733A2 (en) 2002-11-11 2003-11-11 State engine for data processor
PCT/GB2003/004866 WO2004045161A1 (en) 2002-11-11 2003-11-11 Packet storage system for traffic handling
PCT/GB2003/004854 WO2004045160A2 (en) 2002-11-11 2003-11-11 Data packet handling in computer or communication systems

Country Status (5)

Country Link
US (5) US7522605B2 (en)
CN (4) CN1736069B (en)
AU (4) AU2003283539A1 (en)
GB (5) GB0226249D0 (en)
WO (4) WO2004045162A2 (en)

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004524617A (en) 2001-02-14 2004-08-12 クリアスピード・テクノロジー・リミテッド Clock distribution system
GB0226249D0 (en) * 2002-11-11 2002-12-18 Clearspeed Technology Ltd Traffic handling system
US7210059B2 (en) 2003-08-19 2007-04-24 Micron Technology, Inc. System and method for on-board diagnostics of memory modules
US7310752B2 (en) * 2003-09-12 2007-12-18 Micron Technology, Inc. System and method for on-board timing margin testing of memory modules
US7120743B2 (en) 2003-10-20 2006-10-10 Micron Technology, Inc. Arbitration system and method for memory responses in a hub-based memory system
US6944636B1 (en) * 2004-04-30 2005-09-13 Microsoft Corporation Maintaining time-date information for syncing low fidelity devices
US7310748B2 (en) * 2004-06-04 2007-12-18 Micron Technology, Inc. Memory hub tester interface and method for use thereof
US8316431B2 (en) * 2004-10-12 2012-11-20 Canon Kabushiki Kaisha Concurrent IPsec processing system and method
US20060101210A1 (en) * 2004-10-15 2006-05-11 Lance Dover Register-based memory command architecture
US20060156316A1 (en) * 2004-12-18 2006-07-13 Gray Area Technologies System and method for application specific array processing
EP1832054B1 (en) * 2004-12-23 2018-03-21 Symantec Corporation Method and apparatus for network packet capture distributed storage system
US20100195538A1 (en) * 2009-02-04 2010-08-05 Merkey Jeffrey V Method and apparatus for network packet capture distributed storage system
US7392229B2 (en) * 2005-02-12 2008-06-24 Curtis L. Harris General purpose set theoretic processor
US7746784B2 (en) * 2006-03-23 2010-06-29 Alcatel-Lucent Usa Inc. Method and apparatus for improving traffic distribution in load-balancing networks
US8065249B1 (en) 2006-10-13 2011-11-22 Harris Curtis L GPSTP with enhanced aggregation functionality
US7774286B1 (en) 2006-10-24 2010-08-10 Harris Curtis L GPSTP with multiple thread functionality
US8166212B2 (en) * 2007-06-26 2012-04-24 Xerox Corporation Predictive DMA data transfer
US7830918B2 (en) * 2007-08-10 2010-11-09 Eaton Corporation Method of network communication, and node and system employing the same
JP5068125B2 (en) * 2007-09-25 2012-11-07 株式会社日立国際電気 Communication device
US8521732B2 (en) 2008-05-23 2013-08-27 Solera Networks, Inc. Presentation of an extracted artifact based on an indexing technique
US8625642B2 (en) 2008-05-23 2014-01-07 Solera Networks, Inc. Method and apparatus of network artifact indentification and extraction
US8004998B2 (en) * 2008-05-23 2011-08-23 Solera Networks, Inc. Capture and regeneration of a network data using a virtual software switch
US20090292736A1 (en) * 2008-05-23 2009-11-26 Matthew Scott Wood On demand network activity reporting through a dynamic file system and method
JP5300355B2 (en) * 2008-07-14 2013-09-25 キヤノン株式会社 Network protocol processing apparatus and processing method thereof
US9213665B2 (en) * 2008-10-28 2015-12-15 Freescale Semiconductor, Inc. Data processor for processing a decorated storage notify
US8627471B2 (en) * 2008-10-28 2014-01-07 Freescale Semiconductor, Inc. Permissions checking for data processing instructions
US8266498B2 (en) 2009-03-31 2012-09-11 Freescale Semiconductor, Inc. Implementation of multiple error detection schemes for a cache
US20110125748A1 (en) * 2009-11-15 2011-05-26 Solera Networks, Inc. Method and Apparatus for Real Time Identification and Recording of Artifacts
US20110125749A1 (en) * 2009-11-15 2011-05-26 Solera Networks, Inc. Method and Apparatus for Storing and Indexing High-Speed Network Traffic Data
US8472455B2 (en) * 2010-01-08 2013-06-25 Nvidia Corporation System and method for traversing a treelet-composed hierarchical structure
US8295287B2 (en) * 2010-01-27 2012-10-23 National Instruments Corporation Network traffic shaping for reducing bus jitter on a real time controller
US8990660B2 (en) 2010-09-13 2015-03-24 Freescale Semiconductor, Inc. Data processing system having end-to-end error correction and method therefor
US8504777B2 (en) 2010-09-21 2013-08-06 Freescale Semiconductor, Inc. Data processor for processing decorated instructions with cache bypass
US8667230B1 (en) 2010-10-19 2014-03-04 Curtis L. Harris Recognition and recall memory
KR20120055779A (en) * 2010-11-23 2012-06-01 한국전자통신연구원 System and method for communicating audio data based zigbee and method thereof
KR20120064576A (en) * 2010-12-09 2012-06-19 한국전자통신연구원 Apparatus for surpporting continuous read/write in asymmetric storage system and method thereof
US8849991B2 (en) 2010-12-15 2014-09-30 Blue Coat Systems, Inc. System and method for hypertext transfer protocol layered reconstruction
US8666985B2 (en) 2011-03-16 2014-03-04 Solera Networks, Inc. Hardware accelerated application-based pattern matching for real time classification and recording of network traffic
US8566672B2 (en) 2011-03-22 2013-10-22 Freescale Semiconductor, Inc. Selective checkbit modification for error correction
US8607121B2 (en) 2011-04-29 2013-12-10 Freescale Semiconductor, Inc. Selective error detection and error correction for a memory interface
US8990657B2 (en) 2011-06-14 2015-03-24 Freescale Semiconductor, Inc. Selective masking for error correction
US9525642B2 (en) 2012-01-31 2016-12-20 Db Networks, Inc. Ordering traffic captured on a data connection
US9100291B2 (en) 2012-01-31 2015-08-04 Db Networks, Inc. Systems and methods for extracting structured application data from a communications link
US9092318B2 (en) * 2012-02-06 2015-07-28 Vmware, Inc. Method of allocating referenced memory pages from a free list
US9665233B2 (en) * 2012-02-16 2017-05-30 The University Utah Research Foundation Visualization of software memory usage
WO2014110281A1 (en) 2013-01-11 2014-07-17 Db Networks, Inc. Systems and methods for detecting and mitigating threats to a structured data storage system
CN103338159B (en) * 2013-06-19 2016-08-10 华为技术有限公司 Polling dispatching implementation method and device
WO2015085087A1 (en) * 2013-12-04 2015-06-11 Db Networks, Inc. Ordering traffic captured on a data connection
JP6249403B2 (en) * 2014-02-27 2017-12-20 国立研究開発法人情報通信研究機構 Optical delay line and electronic buffer fusion type optical packet buffer control device
US10210592B2 (en) 2014-03-30 2019-02-19 Teoco Ltd. System, method, and computer program product for efficient aggregation of data records of big data
WO2016145405A1 (en) * 2015-03-11 2016-09-15 Protocol Insight, Llc Intelligent packet analyzer circuits, systems, and methods
KR102449333B1 (en) 2015-10-30 2022-10-04 삼성전자주식회사 Memory system and read request management method thereof
US10924416B2 (en) 2016-03-23 2021-02-16 Clavister Ab Method for traffic shaping using a serial packet processing algorithm and a parallel packet processing algorithm
SE1751244A1 (en) * 2016-03-23 2017-10-09 Clavister Ab Method for traffic shaping using a serial packet processing algorithm and a parallel packet processing algorithm
CN107786465B (en) * 2016-08-27 2021-06-04 华为技术有限公司 Method and device for processing low-delay service flow
WO2018081582A1 (en) * 2016-10-28 2018-05-03 Atavium, Inc. Systems and methods for random to sequential storage mapping
CN107656895B (en) * 2017-10-27 2023-07-28 上海力诺通信科技有限公司 Orthogonal platform high-density computing architecture with standard height of 1U
RU2718215C2 (en) * 2018-09-14 2020-03-31 Общество С Ограниченной Ответственностью "Яндекс" Data processing system and method for detecting jam in data processing system
US11138044B2 (en) * 2018-09-26 2021-10-05 Micron Technology, Inc. Memory pooling between selected memory resources
US11093403B2 (en) 2018-12-04 2021-08-17 Vmware, Inc. System and methods of a self-tuning cache sizing system in a cache partitioning system
EP3866417A1 (en) * 2020-02-14 2021-08-18 Deutsche Telekom AG Method for an improved traffic shaping and/or management of ip traffic in a packet processing system, telecommunications network, network node or network element, program and computer program product

Family Cites Families (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5187780A (en) * 1989-04-07 1993-02-16 Digital Equipment Corporation Dual-path computer interconnect system with zone manager for packet memory
DE69132495T2 (en) * 1990-03-16 2001-06-13 Texas Instruments Inc Distributed processing memory
US5280483A (en) * 1990-08-09 1994-01-18 Fujitsu Limited Traffic control system for asynchronous transfer mode exchange
US5765011A (en) * 1990-11-13 1998-06-09 International Business Machines Corporation Parallel processing system having a synchronous SIMD processing with processing elements emulating SIMD operation using individual instruction streams
ATE180586T1 (en) * 1990-11-13 1999-06-15 Ibm PARALLEL ASSOCIATIVE PROCESSOR SYSTEM
JP2596718B2 (en) * 1993-12-21 1997-04-02 インターナショナル・ビジネス・マシーンズ・コーポレイション How to manage network communication buffers
US5949781A (en) * 1994-08-31 1999-09-07 Brooktree Corporation Controller for ATM segmentation and reassembly
US5513134A (en) 1995-02-21 1996-04-30 Gte Laboratories Incorporated ATM shared memory switch with content addressing
US5633865A (en) * 1995-03-31 1997-05-27 Netvantage Apparatus for selectively transferring data packets between local area networks
DE69841486D1 (en) * 1997-05-31 2010-03-25 Texas Instruments Inc Improved packet switching
US6757798B2 (en) * 1997-06-30 2004-06-29 Intel Corporation Method and apparatus for arbitrating deferred read requests
US5956340A (en) * 1997-08-05 1999-09-21 Ramot University Authority For Applied Research And Industrial Development Ltd. Space efficient fair queuing by stochastic Memory multiplexing
US6088771A (en) * 1997-10-24 2000-07-11 Digital Equipment Corporation Mechanism for reducing latency of memory barrier operations on a multiprocessor system
US6052375A (en) * 1997-11-26 2000-04-18 International Business Machines Corporation High speed internetworking traffic scaler and shaper
US6097403A (en) * 1998-03-02 2000-08-01 Advanced Micro Devices, Inc. Memory including logic for operating upon graphics primitives
US6359879B1 (en) * 1998-04-24 2002-03-19 Avici Systems Composite trunking
US6314489B1 (en) * 1998-07-10 2001-11-06 Nortel Networks Limited Methods and systems for storing cell data using a bank of cell buffers
US6356546B1 (en) * 1998-08-11 2002-03-12 Nortel Networks Limited Universal transfer method and network with distributed switch
US6829218B1 (en) * 1998-09-15 2004-12-07 Lucent Technologies Inc. High speed weighted fair queuing system for ATM switches
US6396843B1 (en) * 1998-10-30 2002-05-28 Agere Systems Guardian Corp. Method and apparatus for guaranteeing data transfer rates and delays in data packet networks using logarithmic calendar queues
SE9803901D0 (en) * 1998-11-16 1998-11-16 Ericsson Telefon Ab L M a device for a service network
US6246682B1 (en) * 1999-03-05 2001-06-12 Transwitch Corp. Method and apparatus for managing multiple ATM cell queues
US6952401B1 (en) * 1999-03-17 2005-10-04 Broadcom Corporation Method for load balancing in a network switch
US6574231B1 (en) * 1999-05-21 2003-06-03 Advanced Micro Devices, Inc. Method and apparatus for queuing data frames in a network switch port
US6671292B1 (en) * 1999-06-25 2003-12-30 Telefonaktiebolaget Lm Ericsson (Publ) Method and system for adaptive voice buffering
US6643298B1 (en) * 1999-11-23 2003-11-04 International Business Machines Corporation Method and apparatus for MPEG-2 program ID re-mapping for multiplexing several programs into a single transport stream
US7102999B1 (en) * 1999-11-24 2006-09-05 Juniper Networks, Inc. Switching device
US6662263B1 (en) * 2000-03-03 2003-12-09 Multi Level Memory Technology Sectorless flash memory architecture
ATE331369T1 (en) * 2000-03-06 2006-07-15 Ibm SWITCHING DEVICE AND METHOD
US6907041B1 (en) * 2000-03-07 2005-06-14 Cisco Technology, Inc. Communications interconnection network with distributed resequencing
CA2301973A1 (en) * 2000-03-21 2001-09-21 Spacebridge Networks Corporation System and method for adaptive slot-mapping input/output queuing for tdm/tdma systems
US6975629B2 (en) * 2000-03-22 2005-12-13 Texas Instruments Incorporated Processing packets based on deadline intervals
US7139282B1 (en) * 2000-03-24 2006-11-21 Juniper Networks, Inc. Bandwidth division for packet processing
CA2337674A1 (en) * 2000-04-20 2001-10-20 International Business Machines Corporation Switching arrangement and method
JP4484317B2 (en) * 2000-05-17 2010-06-16 株式会社日立製作所 Shaping device
US6937561B2 (en) 2000-06-02 2005-08-30 Agere Systems Inc. Method and apparatus for guaranteeing data transfer rates and enforcing conformance with traffic profiles in a packet network
JP3640160B2 (en) * 2000-07-26 2005-04-20 日本電気株式会社 Router device and priority control method used therefor
DE60119866T2 (en) * 2000-09-27 2007-05-10 International Business Machines Corp. Switching device and method with separate output buffers
US20020062415A1 (en) * 2000-09-29 2002-05-23 Zarlink Semiconductor N.V. Inc. Slotted memory access method
US6647477B2 (en) * 2000-10-06 2003-11-11 Pmc-Sierra Ltd. Transporting data transmission units of different sizes using segments of fixed sizes
US6871780B2 (en) * 2000-11-27 2005-03-29 Airclic, Inc. Scalable distributed database system and method for linking codes to internet information
US6888848B2 (en) * 2000-12-14 2005-05-03 Nortel Networks Limited Compact segmentation of variable-size packet streams
US7035212B1 (en) * 2001-01-25 2006-04-25 Optim Networks Method and apparatus for end to end forwarding architecture
US20020126659A1 (en) * 2001-03-07 2002-09-12 Ling-Zhong Liu Unified software architecture for switch connection management
US6728857B1 (en) * 2001-06-20 2004-04-27 Cisco Technology, Inc. Method and system for storing and retrieving data using linked lists
US7382787B1 (en) * 2001-07-30 2008-06-03 Cisco Technology, Inc. Packet routing and switching device
US7349403B2 (en) * 2001-09-19 2008-03-25 Bay Microsystems, Inc. Differentiated services for a network processor
US6900920B2 (en) * 2001-09-21 2005-05-31 The Regents Of The University Of California Variable semiconductor all-optical buffer using slow light based on electromagnetically induced transparency
US20030081623A1 (en) * 2001-10-27 2003-05-01 Amplify.Net, Inc. Virtual queues in a single queue in the bandwidth management traffic-shaping cell
US7215666B1 (en) * 2001-11-13 2007-05-08 Nortel Networks Limited Data burst scheduling
US20030145086A1 (en) * 2002-01-29 2003-07-31 O'Reilly James Scalable network-attached storage system
US20040022094A1 (en) * 2002-02-25 2004-02-05 Sivakumar Radhakrishnan Cache usage for concurrent multiple streams
US6862639B2 (en) * 2002-03-11 2005-03-01 Harris Corporation Computer system including a receiver interface circuit with a scatter pointer queue and related methods
US7126959B2 (en) * 2002-03-12 2006-10-24 Tropic Networks Inc. High-speed packet memory
US6928026B2 (en) * 2002-03-19 2005-08-09 Broadcom Corporation Synchronous global controller for enhanced pipelining
US20030188056A1 (en) * 2002-03-27 2003-10-02 Suresh Chemudupati Method and apparatus for packet reformatting
US7239608B2 (en) * 2002-04-26 2007-07-03 Samsung Electronics Co., Ltd. Router using measurement-based adaptable load traffic balancing system and method of operation
JP3789395B2 (en) * 2002-06-07 2006-06-21 富士通株式会社 Packet processing device
US20040039884A1 (en) * 2002-08-21 2004-02-26 Qing Li System and method for managing the memory in a computer system
US6950894B2 (en) * 2002-08-28 2005-09-27 Intel Corporation Techniques using integrated circuit chip capable of being coupled to storage system
US7180899B2 (en) * 2002-10-29 2007-02-20 Cisco Technology, Inc. Multi-tiered Virtual Local Area Network (VLAN) domain mapping mechanism
GB0226249D0 (en) * 2002-11-11 2002-12-18 Clearspeed Technology Ltd Traffic handling system
KR100532325B1 (en) * 2002-11-23 2005-11-29 삼성전자주식회사 Input control method and apparatus for turbo decoder
GB2421158B (en) * 2003-10-03 2007-07-11 Avici Systems Inc Rapid alternate paths for network destinations
US7668100B2 (en) * 2005-06-28 2010-02-23 Avaya Inc. Efficient load balancing and heartbeat mechanism for telecommunication endpoints

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0372795A2 (en) * 1988-12-06 1990-06-13 AT&T Corp. Bandwidth allocation and congestion control scheme for an integrated voice and data network
US20020075882A1 (en) * 1998-05-07 2002-06-20 Marc Donis Multiple priority buffering in a computer network
EP1137225A1 (en) * 2000-02-28 2001-09-26 Alcatel A switch and a switching method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010107413A1 (en) 2009-03-18 2010-09-23 Texas Research International, Inc. Environmental damage sensor

Also Published As

Publication number Publication date
GB2413031A (en) 2005-10-12
WO2004045160A2 (en) 2004-05-27
GB2412035B (en) 2006-12-20
CN1736066A (en) 2006-02-15
US20110069716A1 (en) 2011-03-24
GB2411271B (en) 2006-07-26
GB0226249D0 (en) 2002-12-18
CN1735878A (en) 2006-02-15
GB2412035A (en) 2005-09-14
WO2004044733A3 (en) 2005-03-31
GB2412537B (en) 2006-02-01
CN100557594C (en) 2009-11-04
GB0509997D0 (en) 2005-06-22
AU2003283539A1 (en) 2004-06-03
GB2413031B (en) 2006-03-15
US20050246452A1 (en) 2005-11-03
US7522605B2 (en) 2009-04-21
WO2004045160A8 (en) 2005-04-14
CN1736066B (en) 2011-10-05
AU2003283544A1 (en) 2004-06-03
GB0511588D0 (en) 2005-07-13
GB2411271A (en) 2005-08-24
WO2004045162A3 (en) 2004-09-16
US7882312B2 (en) 2011-02-01
AU2003283559A1 (en) 2004-06-03
AU2003283545A1 (en) 2004-06-03
CN1736068B (en) 2012-02-29
WO2004045160A3 (en) 2004-12-02
GB2412537A (en) 2005-09-28
WO2004045161A1 (en) 2004-05-27
US8472457B2 (en) 2013-06-25
WO2004044733A2 (en) 2004-05-27
GB0511587D0 (en) 2005-07-13
CN1736069A (en) 2006-02-15
GB0511589D0 (en) 2005-07-13
CN1736069B (en) 2012-07-04
US7843951B2 (en) 2010-11-30
US20050243829A1 (en) 2005-11-03
CN1736068A (en) 2006-02-15
US20050257025A1 (en) 2005-11-17
AU2003283545A8 (en) 2004-06-03
US20050265368A1 (en) 2005-12-01

Similar Documents

Publication Publication Date Title
US20050243829A1 (en) Traffic management architecture
US6959002B2 (en) Traffic manager for network switch port
US7426185B1 (en) Backpressure mechanism for switching fabric
JP4605911B2 (en) Packet transmission device
US8644327B2 (en) Switching arrangement and method with separated output buffers
US6687781B2 (en) Fair weighted queuing bandwidth allocation system for network switch port
US7953002B2 (en) Buffer management and flow control mechanism including packet-based dynamic thresholding
US7990858B2 (en) Method, device and system of scheduling data transport over a fabric
CA2575869C (en) Hierarchal scheduler with multiple scheduling lanes
US7190674B2 (en) Apparatus for controlling packet output
US7346067B2 (en) High efficiency data buffering in a computer network device
US20050018601A1 (en) Traffic management
GB2339371A (en) Rate guarantees through buffer management
US7522620B2 (en) Method and apparatus for scheduling packets
US7116680B1 (en) Processor architecture and a method of processing
US7269180B2 (en) System and method for prioritizing and queuing traffic
US8879578B2 (en) Reducing store and forward delay in distributed systems
US7623456B1 (en) Apparatus and method for implementing comprehensive QoS independent of the fabric system
US11824791B2 (en) Virtual channel starvation-free arbitration for switches
Benet et al. Providing in-network support to coflow scheduling
EP1774721B1 (en) Propagation of minimum guaranteed scheduling rates
Hou et al. Service disciplines for guaranteed performance service
Feng Design of per Flow Queuing Buffer Management and Scheduling for IP Routers
Martínez et al. Towards a cost-effective interconnection network architecture with QoS and congestion management support
Roidel et al. Fair Scheduling for Input-Queued Switches

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
121 EP: the EPO has been informed by WIPO that EP was designated in this application
ENP Entry into the national phase

Ref document number: 0511589

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20031111

WWE WIPO information: entry into national phase

Ref document number: 20038A85295

Country of ref document: CN

WWE WIPO information: entry into national phase

Ref document number: 10534346

Country of ref document: US

122 EP: PCT application non-entry in European phase
NENP Non-entry into the national phase

Ref country code: JP

WWW WIPO information: withdrawn in national office

Country of ref document: JP