WO2001026309A1 - Hierarchical output-queued packet-buffering system and method - Google Patents
Hierarchical output-queued packet-buffering system and method
- Publication number
- WO2001026309A1 (PCT/US2000/027753)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- packet
- queues
- level
- priority
- buffer
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2441—Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/52—Queue scheduling by attributing bandwidth to queues
- H04L47/521—Static queue service slot or fixed bandwidth allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/6205—Arrangements for avoiding head of line blocking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
- H04L47/62—Queue scheduling characterised by scheduling criteria
- H04L47/6215—Individual queue per QOS, rate or priority
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3027—Output queuing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9047—Buffering arrangements including multiple buffers, e.g. buffer pools
- H04L49/9052—Buffering arrangements including multiple buffers, e.g. buffer pools with buffers of different sizes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5678—Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
- H04L2012/5679—Arbitration or scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5678—Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
- H04L2012/5681—Buffer or queue management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/20—Support for services
- H04L49/205—Quality of Service based
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/25—Routing or path finding in a switch fabric
- H04L49/253—Routing or path finding in a switch fabric using establishment or release of connections between ports
- H04L49/254—Centralised controller, i.e. arbitration or scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3009—Header conversion, routing tables or routing tags
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/50—Overload detection or protection within a single switching element
- H04L49/505—Corrective measures
- H04L49/508—Head of Line Blocking Avoidance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
- H04L49/9023—Buffering arrangements for implementing a jitter-buffer
Definitions
- the present invention relates generally to communication systems, and in particular to movement of data flows in packet-based communication architectures.
- Data communication involves the exchange of data between two or more entities
- the data can be, for example, information transferred
- the protocols define how the packets are constructed and treated as they travel from source to
- bandwidth information-carrying capacity at high speeds with substantial reliability.
- Bandwidth is further increased by "multiplexing" strategies, which allow multiple data streams to be sent over the same communication medium without interfering with each other.
- TDM (time-division multiplexing)
- time slot, i.e., a short window of availability recurring at fixed intervals (with other time slots scheduled during the intervals).
- Each time slot represents a separate communication channel.
- time slots are then multiplexed onto higher speed lines in a predefined bandwidth hierarchy.
- DWDM (dense wavelength-division multiplexing)
- the channels are different wavelengths of light, which may be carried simultaneously over the same fiber without
- networks are designed to balance traffic across different branches as well as to other networks, so that
- Packet routing is handled by communication devices such as switches, routers, and bridges.
- a communication device 150 receives information (in the form of packets/frames, cells, or TDM frames) from a communication
- the communication device 150 can contain a number of network interface cards (NICs), such as NIC 160 and NIC 180, each having
- Input ports 162, 164, and 166 receive information from the communication network 110 and transfer them to a number of packet processing engines (not shown) that process the packets and prepare them for transmission at one of the output ports 168, 170, and 172, which correspond to a
- An ideal communication device would be capable of aggregating incoming data from numerous input channels and outputting
- congestion (i.e., high quality of service, or QoS)
- QoS (quality of service)
- a switch 200 includes a series of p input ports denoted as IN₁...INₚ and a series of p output ports denoted as OUT₁...OUTₚ.
- a typical switch is configured to accommodate multiple plug-in network interface cards, with each card carrying a fixed number of input and output ports.
- each input port is directly connected to every output port; as a result, packets can travel between ports with minimal delay.
- An incoming packet is examined to
- Full-mesh switches can also be used to implement an output-buffered architecture that can accommodate rich QoS mechanisms; for example, some customers may pay higher fees for better service guarantees, and different kinds of traffic may be accorded different priorities.
- output ports output the packets in accordance with the priority levels associated with their respective queues. As shown in Fig. 2A, for example, a series of n priority queues 205₁...205ₙ is associated with output port OUT₁, and a distributed scheduler module 210 selects packets from these queues for transmission in accordance with their queue-level priorities.
- Proportional fairness recognizes that packet size can vary, so that if prioritization were applied strictly on a per-packet basis, larger
- a switch 250 based on a partial-mesh design is depicted in Fig. 2B.
- the switch 250 also contains a series of p input ports and a complementary series of p output ports. In this case, however, each input port
- a central scheduling module 255 connects input ports to output ports on an as-needed basis.
- partial-mesh architectures support high aggregate bandwidths, but will block, or congest, when certain traffic patterns appear at the
- output queues 260 organized as p sets of q queues - that is, q priority queues for each output port 1 through p. In this way, incoming packets can be prioritized before they have a chance to cause
- the present invention utilizes a hierarchically organized output-queuing system that
- the architecture of the present invention facilitates output-
- a packet-buffering system and method incorporating aspects of the present invention is used in transferring packets from a series of input ports to a series of output ports in a communication device that is coupled to a communications network.
- a first packet buffer is organized into a first series of queues.
- the first-series queues can
- Each first-series priority queue set is also associated with one of the output ports of the
- a second packet buffer (and, if desired, additional packet buffers) is also organized into a series of queues that can be grouped into priority queue sets
- the first packet buffer receives packets from the input ports of the communication device at the aggregate network rate (i.e., the overall transmission rate of the network itself).
- received packets are then examined by an address lookup engine to ascertain their forwarding
- the packets are transferred at the aggregate network rate to first-series queues having priority levels consistent
- second-series queues at a rate less than the aggregate network rate. These second-series queues are part of the second-series priority queue set whose priorities are consistent with those of the received packets and which are also associated with the designated output ports. The order in which the packets are transferred from the first-series queues to the second-series queues is based
- any of various dequeuing systems associated with that second packet buffer, together with a scheduler, may schedule and transfer the packets to the designated output ports. Alternatively (and as discussed below), the packets may be transferred to additional, similarly organized packet
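The enqueue-and-funnel flow summarized above can be sketched in code. This is an illustrative model only: the class and method names, the three-level priority scheme, and the queue depths are assumptions, not taken from the patent.

```python
from collections import deque

# Illustrative sketch of the two-level output-queued buffering summarized
# above. All names, the three priority levels, and the depths are assumed.
PRIORITIES = ("high", "medium", "low")

class PriorityQueueSet:
    """A set of per-priority queues associated with one output port."""
    def __init__(self, depth):
        self.depth = depth                       # maximum packets per queue
        self.queues = {p: deque() for p in PRIORITIES}

    def enqueue(self, priority, packet):
        q = self.queues[priority]
        if len(q) >= self.depth:
            return False                         # queue full; caller may drop
        q.append(packet)
        return True

    def dequeue_highest(self):
        """Pop the head packet of the highest non-empty priority queue."""
        for p in PRIORITIES:
            if self.queues[p]:
                return p, self.queues[p].popleft()
        return None

class HierarchicalOutputQueue:
    """A shallow, fast first-series set funneling into a deep second-series set."""
    def __init__(self, first_depth=4, second_depth=1024):
        self.first = PriorityQueueSet(first_depth)
        self.second = PriorityQueueSet(second_depth)

    def receive(self, priority, packet):
        # Packets arrive at the aggregate network rate into first-series queues.
        return self.first.enqueue(priority, packet)

    def funnel(self):
        # Move one packet to the second-series queue of the same priority,
        # at a rate that may be lower than the aggregate network rate.
        item = self.first.dequeue_highest()
        if item:
            priority, packet = item
            self.second.enqueue(priority, packet)
        return item
```

In this sketch, one instance would exist per output port, and a separate dequeue/scheduler stage would drain the second-series queues toward the physical port.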
- the type of memory selected for use as the first packet buffer should have performance characteristics that include relatively fast access times, e.g., embedded ASIC packet buffers,
- the first-series queues have a relatively shallow
- bandwidth means the speed at which the queues can absorb
- the second packet buffer is able to receive packets from the first packet buffer at less
- the queue depth of the second-series queues is typically larger than the queue depth of the first-series queues. Consequently, the performance characteristics of the memory forming the second packet buffer do not require access times as fast as those of the first packet buffer (e.g., field-configurable memory elements such as DRAM,
- packet buffers is equal to or greater than a sum of the first packet-buffer bandwidths, although the individual second packet buffer bandwidths are less than the aggregate first buffer bandwidth.
- second packet buffers can exhibit substantially similar performance characteristics.
- a homogeneous memory can be organized to accommodate both first-series and second-series
- the present invention can accommodate a third packet
- This third packet buffer is coupled to, and receives packets from, at least one of the second packet buffers for subsequent transfer to a designated output port.
- This third packet buffer would also be comprised of third-series queues grouped as third-series priority queue sets so that third-series
- the sum of the third packet-buffer bandwidths would generally be equal to or greater than that of the corresponding second packet-buffer bandwidths and the sum of third packet-buffer depths would generally exceed the sum of the second packet-buffer depths.
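The bandwidth and depth relationships stated for the buffer levels can be checked with hypothetical numbers. The figures below are assumptions chosen only to satisfy the stated inequalities, not values from the patent:

```python
# Hypothetical figures illustrating the stated invariants of the hierarchy:
# at each deeper level the aggregate bandwidth must not shrink (so the
# hierarchy stays non-blocking) while the aggregate depth grows (so it can
# absorb bursts). All numbers below are assumptions.

levels = [
    # (per-buffer bandwidth in Gb/s, per-buffer depth in packets, buffer count)
    (40.0, 4_000,     1),   # level one: a single fast, shallow buffer
    (10.0, 1_000_000, 4),   # level two: slower but much deeper buffers
    (5.0,  4_000_000, 8),   # level three: slower and deeper still
]

def totals(level):
    """Aggregate (bandwidth, depth) across all buffers at one level."""
    bw, depth, count = level
    return bw * count, depth * count

for shallow, deep in zip(levels, levels[1:]):
    bw_s, depth_s = totals(shallow)
    bw_d, depth_d = totals(deep)
    assert bw_d >= bw_s        # aggregate bandwidth non-decreasing
    assert depth_d > depth_s   # aggregate depth strictly increasing
```

Note that each individual level-two bandwidth (10 Gb/s) is below the aggregate level-one bandwidth (40 Gb/s), matching the text: only the sum across buffers must keep up.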
- packets may be aggregated into queue flows with a
- the hierarchical memory architecture of the present invention overcomes
- the benefits of the present invention include not only enhancing the scalability of full-mesh (output-queued) systems while avoiding head-of-line blocking; the invention is also beneficial in partial-mesh systems.
- queued packet-buffering systems can be interconnected by a partial-mesh interconnect and still preserve many of the QoS features of the singular system.
- FIG. 1 schematically illustrates a prior-art communication device coupling a communication network to other networks, such as LANs, MANs, and WANs;
- FIG. 2A schematically illustrates a prior-art, full-mesh interconnect system implementing
- FIG. 2B schematically illustrates a prior-art, partial-mesh interconnect system exhibiting
- FIG. 2C schematically illustrates a prior-art, partial-mesh interconnect system
- FIG. 3A schematically illustrates a hierarchical queue system in accordance with an embodiment of the present invention
- FIG. 3B schematically illustrates several components in a network interface card that are
- FIG. 4 provides a flow diagram of the steps performed when operating the network interface card of FIG. 3B, in accordance with one embodiment of the present invention
- FIG. 5 illustrates the memory, packet, and queue structure of the hierarchical queue system of the network interface card of FIG. 3B, in accordance with one embodiment of the
- FIG. 6 provides a flow diagram of the steps performed by the dequeue and hierarchical queue system of FIG. 5, in accordance with one embodiment of the present invention
- FIG. 7 illustrates the memory, packet, and queue structure of the hierarchical queue
- FIG. 8 illustrates an embodiment of the hierarchical queue system in a partial-mesh
- the present invention incorporates a hierarchical queue system 320 to transfer packets received over the communication network 110 from a plurality of input
- the hierarchical queue system 320 buffers the received packets in a plurality of memory elements, such as a level-one memory 312, a level-two memory 314, and a level-X memory 316.
- Level-one memory 312 must be fast enough to buffer at line rate the aggregate traffic of all input ports 302, 304, 306 without loss.
- Level-one memory can typically be constructed of
- memory bandwidth can be increased by making the memory width wider. But because memory storage density is also limited by the technology of the day, making the memories wider necessitates that they become shallower. The resulting reduction in memory depth can be recovered by adding a plurality of level-two memories 314, 316 whose aggregate bandwidth is equal to or greater than the bandwidth of the level-one memory 312.
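The width/depth trade-off described above is simple arithmetic; the capacity and clock figures below are assumptions for illustration only:

```python
# Illustrative arithmetic (capacity and clock are assumed figures):
# widening a memory's data bus raises bandwidth proportionally, but at a
# fixed total capacity the wider memory holds fewer words, i.e. it is
# shallower, which is the loss the added level-two memories recover.

CAPACITY_BITS = 64 * 1024 * 1024   # fixed on-chip buffer capacity (assumed)
CLOCK_HZ = 200e6                   # assumed memory clock rate

def bandwidth_bps(width_bits):
    """Bandwidth grows linearly with data-bus width."""
    return width_bits * CLOCK_HZ

def depth_words(width_bits):
    """At fixed capacity, depth shrinks as the bus widens."""
    return CAPACITY_BITS // width_bits

# Doubling the width doubles bandwidth but halves depth:
assert bandwidth_bps(256) == 2 * bandwidth_bps(128)
assert depth_words(256) == depth_words(128) // 2
```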
- network environment may be achieved as memory technology improves, the problem resurfaces when trying to scale the communication device 150 at even higher packet-buffer bandwidths.
- Hierarchical queue system 320 incorporates memory levels 314, 316 that are organized according to successively deeper packet-buffer depths (i.e., capable of storing more bytes) and that exhibit
- level-two memory 314 and level-X memory 316 essentially make up for the sacrifice in packet-buffer depth in the level-one memory 312 through organization into deeper packet-buffer depths.
- Hierarchical queue system 320 can exhibit substantially similar performance characteristics
- level-two memory 314 and level- X memory 316 allow the use of denser memory types (i.e., greater packet-buffer depth) for the
- system 320 of the present invention can be implemented in a wide variety of communication devices (e.g., switches and routers), in a shared memory accessible to one or more
- NIC network interface card
- the NIC 328 receives packets from the packet-based communication
- the forwarding engine 330, together with the ALE 332, determines the destination output ports of the packets by looking up the
- the modified packets are then routed to the full-mesh interconnect 311 via the
- the hierarchical queue system 320 of the NIC 328 normally receives the modified packets via the full-mesh interconnect 311 so that it can funnel packets originally received at the input ports 162, 164, 166, 224, 226, 228 of any NIC installed within the communication device 150, including the packets received by the input ports 302, 304, 306 of its own NIC 328, to one or more of the output ports 322, 324, 326 of its own NIC 328.
- packets received at input ports 302, 304, 306 are transferred directly to the
- the forwarding engine 330 bypasses the interconnect interface 310 and full-mesh interconnect 311 altogether.
- forwarding engine 330 transfers the packets to the interconnect interface 310, which then directly forwards the packets to the hierarchical queue system 320, thus bypassing the full-mesh
- the modified packets are received at a first-level memory 312 of the hierarchical queue system (step 418).
- step 420 corresponding to memory elements organized into increasingly deeper queue depths as described below.
- packets are scheduled for transmission to the selected output ports 322, 324, 326 (step 424).
- the packets are then transmitted from the selected output ports 322, 324, 326 to a communication network such as the LAN 120, MAN 130, or WAN 140.
- a forwarding engine 330 associated with the input port 302 is selected.
- the selected forwarding engine parses the received packet header.
- the forwarding engine 330 processes the packet header by checking the integrity of the
- ALE 332 are used to report the processing activity involving this packet header to modules external to the selected forwarding engine, and communicating with the ALE 332 to obtain routing information for one of the output ports 322, 324, 326 associated with the destination of the packet.
- the engine can modify the packet header to include routing information (e.g., by prepending a
- the modified packet header is then written to a buffer of the forwarding engine 330 where it is
- the modified packets 510, which are received at the first-level memory or first packet buffer 312 (step 610), comprise a plurality of packets having varying priority levels and designated for various output ports (i.e., physical or virtual ports) of the NIC 328.
- packets 510 may include a plurality of high-priority packets 512, medium-priority packets 514, and low-priority packets 516, some of which are destined for output port 322 and others for one
- the present invention examines the forwarding vectors and the packet header information in the received packets 510 to determine their destination output port 322 (step 612).
- the received packets 510 for a particular output port 322 are
- step 614 organized into groups of queues or priority queue sets that correspond, for example, to
- a high-priority queue set 520 (including high-priority packets 512), a medium-priority queue set 522 (including medium-priority packets 514), and a low-priority queue set 524 (including low-
- the packets in the first-series priority queue sets 520, 522, 524 of the first packet buffer 312 are then funneled into second-series priority queue sets 530, 532, 534 in the second level
- the second-series queue sets 530, 532, 534 are associated with the same output port 322 as the first-series priority queue sets 520, 522, 524.
- the second-series queue sets 530, 532, 534 comprise second-series queues that have a greater buffer depth 536 than the corresponding first-series queues in the first-series queue sets so as to provide deeper buffering at a slower operating rate (and thus enable the use of less expensive memory as
- buffer depth refers to the maximum
- first packet buffer 312 operates at the aggregate network
- the first packet-buffer 312 is able to receive packet data in the amount and rate that such data is provided by the communication network 110. In order to support these operating parameters while remaining non-blocking and output buffered, the first
- the packet buffer 312 uses a wide data bus (to achieve high data rates) and a multiple bank architecture (to achieve high frame rates).
- the first packet buffer 312 is also relatively shallow (e.g., tens of thousands of packets of storage) so that the first packet-buffer depth 526 of the first-
- the second-series queues have a greater packet- buffer depth 536 (e.g., millions of packets of storage).
- the second packet-buffer depth is often
- a sum of the second packet-buffer bandwidths of all the second packet buffers can exceed the sum of the first packet-buffer bandwidths of all the first packet buffers.
- the packet-handling capabilities of the second packet buffers are equal to, and may in fact be greater than, the capabilities of the first packet buffers.
- individual second packet-buffer bandwidths are typically less than the aggregate bandwidth of the
- queues in the hierarchical queue system 340 enables the use of different memory types for the first and second packet buffers and can thus result in significant cost savings without material
- first and second packet buffers can be organized within the same pool of memory and exhibit the same performance characteristics (with just a difference in their buffer depths), but this implementation is not as cost effective.
- the hierarchical queue system 320 incorporates more than two levels of packet buffering, such as a level-X memory 316. Similarly, the level-X memory 316 would provide a packet-buffer depth 542 that exceeds the depth 536 of the corresponding second packet buffer. Once the received packets 510 have been funneled down to the lowest level of memory (with the
- the first packet buffer 312 receives packets in parallel from all of the NICs 160, 180, 328 of the communication device 150 via the
- Enqueue engines 313 parse the forwarding vectors to determine whether the received packets are destined for this NIC 328. If the packets are destined for an output port 322, 326 of the NIC 328, the enqueue engines further determine the priority level for the received packets 510 and determine which of the queues (with a consistent priority
- each memory level of the hierarchical queue system 320 will buffer the received packet.
- the received packets 510 are then sorted by output port and priority level and grouped into first-series queues in the first packet buffer 312.
- the packets in the first-series queues are then transferred to corresponding second-series queues in the second packet buffer 314.
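The enqueue-engine decision described above can be sketched as follows. The forwarding-vector layout (dest_nic, dest_port, priority fields) is an assumed encoding for illustration; the patent does not specify one.

```python
# Minimal sketch of the enqueue-engine decision: parse the forwarding
# vector, discard packets destined for other cards, and sort the rest
# into a first-series queue keyed by (output port, priority).

def classify(packet, this_nic):
    """Return (output port, priority) if the packet belongs on this NIC,
    otherwise None (the packet is destined for another card)."""
    vec = packet["forwarding_vector"]
    if vec["dest_nic"] != this_nic:
        return None
    return vec["dest_port"], vec["priority"]

def enqueue(first_buffer, packet, this_nic):
    """Place a received packet in the first-series queue for its
    (output port, priority) pair, as the enqueue engines 313 do."""
    decision = classify(packet, this_nic)
    if decision is None:
        return False
    first_buffer.setdefault(decision, []).append(packet)
    return True
```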
- the second packet buffer 314 provides the bulk of the
- RED (Random Early Detection)
- wRED (weighted RED)
- level-X memories 314, 316 facilitates the implementation of a richer set of QoS mechanisms.
- the distributed scheduler 210 can donate bandwidth from idle high-priority queues to busy lower-priority queues that have packets to transmit.
- the higher-priority queues are
- the reverse may also be done (i.e., donating bandwidth from idle low-priority queues to higher-priority queues).
- QoS techniques may be used such as combining pure priority scheduling with Weighted Fair Queuing and bandwidth donation.
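The bandwidth-donation idea above can be sketched as a per-round slot scheduler. The three-level priority scheme and slot counts are assumptions for illustration, not taken from the patent:

```python
from collections import deque

# Sketch of priority scheduling with bandwidth donation: each queue owns a
# share of transmission slots per round, and slots owned by an idle queue
# are donated to busy queues rather than left unused.

PRIORITY_ORDER = ["high", "medium", "low"]   # assumed three-level scheme

def schedule_round(queues, slots):
    """queues: {priority: deque of packets}; slots: {priority: slot count}.
    Returns the packets transmitted in one scheduling round."""
    sent = []
    spare = 0
    # First pass: each queue consumes its own slots; idle queues donate theirs.
    for p in PRIORITY_ORDER:
        n = slots[p]
        while n and queues[p]:
            sent.append(queues[p].popleft())
            n -= 1
        spare += n
    # Second pass: donated slots go to queues that still hold packets,
    # highest priority first (the reverse donation direction is symmetric).
    for p in PRIORITY_ORDER:
        while spare and queues[p]:
            sent.append(queues[p].popleft())
            spare -= 1
    return sent
```

With an idle high-priority queue, its unused slots flow to a backlogged lower-priority queue in the same round, so no owned bandwidth is wasted.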
- the hierarchical queue system 320 can also be used to aggregate
- the sorting burden on the first-level memory 710 is alleviated, because the first-level memory 710 need only sort through the prioritized queue flows to locate packets destined for the output port 322 associated with the first-level memory 710 rather than sort by both priority level and output
- a level- zero memory 710 sorts the received packets 510 by priority level into priority queue sets 712,
- a subset of the packets in the level-zero memory 710 that correspond to a particular output port 322 of the NIC 328 are then transferred to
- the first-level memory 710, which organizes the packet data into priority queue sets 520, 522,
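The queue-flow aggregation just described can be sketched in two steps: a level-zero memory sorts by priority only, and each per-port first-level memory then extracts just the packets bound for its own output port. The packet dictionary fields ("priority", "port") are assumed for illustration.

```python
# Sketch of queue-flow aggregation: level-zero groups packets into
# per-priority flows regardless of destination port, which lightens the
# sorting burden on the downstream per-port first-level memories.

def level_zero_sort(packets):
    """Group packets into per-priority queue flows, ignoring the output port."""
    flows = {}
    for pkt in packets:
        flows.setdefault(pkt["priority"], []).append(pkt)
    return flows

def first_level_extract(flows, my_port):
    """A first-level memory scans the prioritized flows and keeps only the
    packets destined for its associated output port."""
    return {prio: [p for p in q if p["port"] == my_port]
            for prio, q in flows.items()}
```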
- a communication device 810 includes a plurality of instances 820′, 820″, 820‴ of the hierarchical queue system of the present invention.
- the communication device 810 receives packets from a full-mesh or partial-mesh interconnect 850.
- Incoming packets enter a level-zero memory 840 and are prioritized/sorted by an enqueue engine 842.
- the prioritized packets are routed to one of the plurality of instances of the hierarchical queue system 820′, 820″, 820‴ that is associated with a particular destination output port (not shown) of the communication device 810 for which the packets are destined.
- the level-zero memory 840 will route the packets to a level-zero memory 880 of the communication device 870 via the full-mesh or partial-mesh interconnect 850.
- the packets will then be prioritized/sorted by enqueue engine 882 and routed
- the interconnection of the level-zero memories 840, 880 via a partial-mesh interconnect is
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002388348A CA2388348A1 (en) | 1999-10-06 | 2000-10-06 | Hierarchical output-queued packet-buffering system and method |
JP2001529151A JP2003511909A (en) | 1999-10-06 | 2000-10-06 | Packet buffering system and method with output queued in a hierarchy |
EP00973429A EP1222780A1 (en) | 1999-10-06 | 2000-10-06 | Hierarchical output-queued packet-buffering system and method |
AU11934/01A AU1193401A (en) | 1999-10-06 | 2000-10-06 | Hierarchical output-queued packet-buffering system and method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15792599P | 1999-10-06 | 1999-10-06 | |
US60/157,925 | 1999-10-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001026309A1 true WO2001026309A1 (en) | 2001-04-12 |
Family
ID=22565924
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2000/027753 WO2001026309A1 (en) | 1999-10-06 | 2000-10-06 | Hierarchical output-queued packet-buffering system and method |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1222780A1 (en) |
JP (1) | JP2003511909A (en) |
AU (1) | AU1193401A (en) |
CA (1) | CA2388348A1 (en) |
WO (1) | WO2001026309A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014000467A1 (en) * | 2012-06-29 | 2014-01-03 | 华为技术有限公司 | Method for adjusting bandwidth in network virtualization system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5440523A (en) * | 1993-08-19 | 1995-08-08 | Multimedia Communications, Inc. | Multiple-port shared memory interface and associated method |
DE19617816A1 (en) * | 1996-05-03 | 1997-11-13 | Siemens Ag | Method for the optimized transmission of ATM cells over connection sections |
US5831980A (en) * | 1996-09-13 | 1998-11-03 | Lsi Logic Corporation | Shared memory fabric architecture for very high speed ATM switches |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2682434B2 (en) * | 1994-04-05 | 1997-11-26 | 日本電気株式会社 | Output buffer type ATM switch |
JP3673025B2 (en) * | 1995-09-18 | 2005-07-20 | 株式会社東芝 | Packet transfer device |
JP2827998B2 (en) * | 1995-12-13 | 1998-11-25 | 日本電気株式会社 | ATM exchange method |
US6324165B1 (en) * | 1997-09-05 | 2001-11-27 | Nec Usa, Inc. | Large capacity, multiclass core ATM switch architecture |
-
2000
- 2000-10-06 AU AU11934/01A patent/AU1193401A/en not_active Abandoned
- 2000-10-06 CA CA002388348A patent/CA2388348A1/en not_active Abandoned
- 2000-10-06 JP JP2001529151A patent/JP2003511909A/en active Pending
- 2000-10-06 EP EP00973429A patent/EP1222780A1/en not_active Withdrawn
- 2000-10-06 WO PCT/US2000/027753 patent/WO2001026309A1/en not_active Application Discontinuation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5440523A (en) * | 1993-08-19 | 1995-08-08 | Multimedia Communications, Inc. | Multiple-port shared memory interface and associated method |
DE19617816A1 (en) * | 1996-05-03 | 1997-11-13 | Siemens Ag | Method for the optimized transmission of ATM cells over connection sections |
US5831980A (en) * | 1996-09-13 | 1998-11-03 | Lsi Logic Corporation | Shared memory fabric architecture for very high speed ATM switches |
Non-Patent Citations (1)
Title |
---|
WOODWORTH C B ET AL: "A FLEXIBLE BROADBAND PACKET SWITCH FOR A MULTIMEDIA INTEGRATED NETWORK", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMMUNICATIONS,US,NEW YORK, IEEE, vol. -, 23 June 1991 (1991-06-23), pages 78 - 85, XP000269383, ISBN: 0-7803-0006-8 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014000467A1 (en) * | 2012-06-29 | 2014-01-03 | 华为技术有限公司 | Method for adjusting bandwidth in network virtualization system |
US9699113B2 (en) | 2012-06-29 | 2017-07-04 | Huawei Technologies Co., Ltd. | Method and apparatus for bandwidth adjustment in network virtualization system |
Also Published As
Publication number | Publication date |
---|---|
AU1193401A (en) | 2001-05-10 |
JP2003511909A (en) | 2003-03-25 |
EP1222780A1 (en) | 2002-07-17 |
CA2388348A1 (en) | 2001-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6850490B1 (en) | Hierarchical output-queued packet-buffering system and method | |
US7099275B2 (en) | Programmable multi-service queue scheduler | |
US8023521B2 (en) | Methods and apparatus for differentiated services over a packet-based network | |
US7936770B1 (en) | Method and apparatus of virtual class of service and logical queue representation through network traffic distribution over multiple port interfaces | |
US6680933B1 (en) | Telecommunications switches and methods for their operation | |
US7796610B2 (en) | Pipeline scheduler with fairness and minimum bandwidth guarantee | |
US20030048792A1 (en) | Forwarding device for communication networks | |
US7065089B2 (en) | Method and system for mediating traffic between an asynchronous transfer mode (ATM) network and an adjacent network | |
US7023856B1 (en) | Method and system for providing differentiated service on a per virtual circuit basis within a packet-based switch/router | |
US20030198241A1 (en) | Allocating buffers for data transmission in a network communication device | |
US7385993B2 (en) | Queue scheduling mechanism in a data packet transmission system | |
GB2339371A (en) | Rate guarantees through buffer management | |
KR20060023579A (en) | Method and system for open-loop congestion control in a system fabric | |
US7197051B1 (en) | System and method for efficient packetization of ATM cells transmitted over a packet network | |
US7382792B2 (en) | Queue scheduling mechanism in a data packet transmission system | |
US7324536B1 (en) | Queue scheduling with priority and weight sharing | |
WO2001026309A1 (en) | Hierarchical output-queued packet-buffering system and method | |
EP1521411A2 (en) | Method and apparatus for request/grant priority scheduling | |
JP3570991B2 (en) | Frame discard mechanism for packet switching | |
Song et al. | Two scheduling algorithms for input-queued switches guaranteeing voice QoS | |
Li | System architecture and hardware implementations for a reconfigurable MPLS router | |
Katevenis et al. | ATLAS I: A Single-Chip ATM Switch with HIC Links and Multi-Lane Back-Pressure | |
Li et al. | Performance evaluation of crossbar switch fabrics in core routers | |
Li et al. | Architecture and performance of a multi-Tbps protocol independent switching fabric | |
Pi et al. | An integrated scheduling and buffer management scheme for packet-switched routers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2388348 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref country code: JP Ref document number: 2001 529151 Kind code of ref document: A Format of ref document f/p: F |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2000973429 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2000973429 Country of ref document: EP |
|
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8642 |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2000973429 Country of ref document: EP |