Publication number: US 20030131198 A1
Publication type: Application
Application number: US 10/041,678
Publication date: Jul 10, 2003
Filing date: Jan 7, 2002
Priority date: Jan 7, 2002
Also published as: US 7181573
Inventors: Gilbert Wolrich, Mark Rosenbluth, Debra Bernstein
Original Assignee: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein
External Links: USPTO, USPTO Assignment, Espacenet
Queue array caching in network devices
US 20030131198 A1
Abstract
In response to receiving a request to perform an enqueue or dequeue operation, a corresponding queue descriptor specifying the structure of the queue is referenced to execute the operation. The queue descriptor is stored in the processor's memory controller logic.
Claims (23)
What is claimed is:
1. A method executed in a processor comprising:
receiving a request to perform an enqueue or a dequeue operation with respect to a particular queue; and
referencing a corresponding queue descriptor stored in a cache in a processor's memory controller logic to execute the operation, the queue descriptor specifying a structure of the queue.
2. The method of claim 1 further comprising:
maintaining, in a content addressable memory, a list of addresses of a subset of queue descriptors stored in a memory.
3. The method of claim 2 further comprising:
storing in the cache a queue descriptor corresponding to each address in the list.
4. The method of claim 3 further comprising:
tracking an address stored in the content addressable memory, the address corresponding to a queue descriptor that was least recently used for an enqueue or dequeue operation.
5. The method of claim 4 further comprising:
removing the least-recently-used address from the list if the list lacks an entry corresponding to the queue specified by the request; and
replacing the removed address with an address corresponding to the specified queue.
6. The method of claim 3 further comprising:
issuing commands to the memory controller logic to return and fetch queue descriptors to and from the memory to maintain coherence between the queue descriptors in the cache and the list of addresses in the content addressable memory.
7. The method of claim 6 further comprising:
modifying the queue descriptor referenced by the enqueue or dequeue operation; and
returning the modified queue descriptor from the cache to memory.
8. The method of claim 1 further comprising:
executing an enqueue operation without waiting for completion of a previous dequeue operation.
9. An apparatus comprising:
a memory to store queue descriptors, each of which specifies a structure of a respective queue;
a network processor coupled to the memory further comprising:
memory controller logic that includes a cache to store a subset of the queue descriptors in the memory; and
a programming engine that accesses a list of addresses in the memory corresponding to the queue descriptors stored in the cache; and
wherein the processor is configured to reference a corresponding queue descriptor in the cache in response to a request to perform an enqueue or a dequeue operation with respect to a particular queue.
10. The apparatus of claim 9 wherein the programming engine includes a content addressable memory to store the list of addresses.
11. The apparatus of claim 10 wherein the content addressable memory is configured to track which address in the list was least recently used by the processor for an enqueue or dequeue operation.
12. The apparatus of claim 9 wherein the programming engine is configured to:
remove the least-recently-used address from its list of addresses if the list lacks an entry corresponding to the queue specified by the request; and
replace the removed address with an address corresponding to the specified queue.
13. The apparatus of claim 9 wherein the programming engine is configured to issue commands to the memory controller logic to return and fetch queue descriptors to and from memory to maintain coherence between the queue descriptors in the cache and the list of addresses in the programming engine.
14. The apparatus of claim 9 wherein the processor is configured to return to memory from the cache a queue descriptor modified by an enqueue or dequeue operation.
15. The apparatus of claim 9 wherein the processor is configured to execute an enqueue operation without waiting for completion of a previous dequeue operation if the queue would otherwise be non-empty upon completion of the dequeue operation.
16. An article comprising a computer-readable medium that stores computer-executable instructions for causing a computer system to:
reference a queue descriptor stored in a cache in a processor's memory controller logic, in response to receiving a request to perform an enqueue or dequeue operation with respect to a particular queue, the queue descriptor specifying the structure of the queue.
17. The article of claim 16 comprising instructions for causing the computer system to:
maintain in a content addressable memory a list of addresses of a subset of queue descriptors stored in a memory.
18. The article of claim 17 comprising instructions for causing the computer system to:
store in the cache a queue descriptor corresponding to each address in the list.
19. The article of claim 18 comprising instructions for causing the computer system to:
track an address in the content addressable memory, the address corresponding to a queue descriptor that was least recently used for an enqueue or dequeue operation.
20. The article of claim 19 comprising instructions for causing the computer system to:
remove the least-recently-used address from the list if the list lacks an entry corresponding to the queue specified by the request; and
replace the removed address with an address corresponding to the specified queue.
21. The article of claim 18 comprising instructions for causing the computer system to:
issue commands to the memory controller logic to return and fetch queue descriptors to and from the memory to maintain coherence between the queue descriptors in the cache and the list of addresses in the content addressable memory.
22. The article of claim 21 comprising instructions for causing the computer system to:
return a queue descriptor modified by an enqueue or dequeue operation from the cache to memory.
23. The article of claim 16 comprising instructions for causing a computer system to:
execute an enqueue operation without waiting for completion of a previous dequeue operation if the queue would otherwise be non-empty upon completion of the dequeue operation.
Description
    BACKGROUND
  • [0001]
    This invention relates to queue arrays for use in network devices.
  • [0002]
    Network devices such as routers and switches can have line speeds faster than 10 Gigabits per second. For maximum efficiency the network device should be able to process data packets, storing them to and retrieving them from memory, at a rate at least equal to the line rate. However, current network devices may lack the speed necessary to process data packets at these line rates.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0003]
    FIG. 1 is a block diagram of a network system.
  • [0004]
    FIG. 2 is a block diagram of a network device.
  • [0005]
    FIG. 3 shows a queue and queue descriptor.
  • [0006]
    FIG. 4 illustrates an enqueue and a dequeue operation.
  • DETAILED DESCRIPTION
  • [0007]
    Referring to FIG. 1, a network system 2 for processing data packets includes one or more sources 4 of data packets coupled to a network device 6 and one or more destinations 8 for the data packets. Each source 4 can include other network devices connected over a communications path operating at high data packet transfer line speeds. Examples of such communications paths include an optical carrier (OC)-192 line, and a 10-Gigabit line. Likewise, the destinations 8 also can include other network devices, as well as a similar network connection.
  • [0008]
    The network device 6 includes a processor 10 that uses a memory (not shown) storing memory data structures. The processor executes instructions and operates with the memory data structures as configured to receive, store and forward the data packets to a specified destination. The network device 6 can be part of a network switch, a network router, and so forth. The processor 10 also includes one or more programming engines. The programming engine (“PE”) includes a sixteen-entry content addressable memory (“CAM”), which tracks which of its entries is the least recently used (“LRU”).
  • [0009]
    Referring to FIG. 2, the network device 6 includes memory 14 coupled to the processor 10. The memory 14 stores output queues 18 and their corresponding queue descriptors 20. The processor 10 includes memory controller logic 38 that includes a cache 12 to store some of the queue descriptors 20 as described below. The processor 10 also has a queue manager 42 that can be implemented as a programming engine. A CAM 44 serves as a tag store holding the addresses of queue descriptors 20 that are stored in the cache.
  • [0010]
    The queue manager 42 receives enqueue requests from a set of programming engines that function as a receive pipeline 46. The receive pipeline 46 is programmed to process and classify data packets received by the network device 6 from sources 4 (FIG. 1). The enqueue requests specify which output queue 18 an arriving packet should be added to. Another programming engine functions as a transmit scheduler 48 to send dequeue requests to the queue manager 42. The dequeue requests specify the output queue 18 from which a packet is to be removed for transmittal to a destination 8 (FIG. 1).
  • [0011]
    An enqueue operation adds information that arrived in a data packet to one of the output queues 18 and updates the corresponding queue descriptor 20. A dequeue operation removes information from one of the output queues 18 and updates the corresponding queue descriptor 20, to allow the network device 6 to transmit the information to the appropriate destination 8.
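    The two kinds of request can be pictured as small messages handed to the queue manager 42. The following C sketch is illustrative only: the field names, the 32-bit widths, and the handler declarations are assumptions, not the actual programming-engine message format (the enqueue and dequeue paths themselves are sketched further below).

        #include <stdint.h>

        /* Enqueue request from the receive pipeline 46: identifies the output
         * queue 18 to which the arriving packet's element should be appended. */
        struct enqueue_request {
            uint32_t queue_id;          /* which output queue 18 (assumed encoding)  */
            uint32_t element_addr;      /* address of the element holding the packet */
        };

        /* Dequeue request from the transmit scheduler 48: identifies the output
         * queue 18 from which a packet is to be removed for transmittal. */
        struct dequeue_request {
            uint32_t queue_id;
        };

        /* Hypothetical handlers invoked by the queue manager 42. */
        void handle_enqueue(const struct enqueue_request *req);
        void handle_dequeue(const struct dequeue_request *req);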
  • [0012]
    An example of an output queue 18 and its corresponding queue descriptor 20 is shown in FIG. 3. The output queue 18 includes a linked list of elements 22, each of which contains a pointer 24 to the next element 22 in the output queue 18. The pointer 24 of the last element 22 in the queue 18 contains a null value. A function of the address of each element 22 implicitly maps to the location in memory 14 of the information 26 that the element 22 represents. For example, the first element 22a of the output queue 18 shown in FIG. 3 is located at address A. The location in memory of the information 26a that element 22a represents is implicit from the element's address A, illustrated by dashed arrow 27a. Element 22a contains the address B, which serves as a pointer 24 to the next element 22b in the output queue 18, located at address B.
  • [0013]
    The queue descriptor 20 includes a head pointer 28, a tail pointer 30 and a count 32. The head pointer 28 points to the first element 22 of the output queue 18, and the tail pointer 30 points to the last element 22 of the output queue 18. The count 32 identifies the number (N) of elements 22 in the output queue 18.
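    The element and queue-descriptor layouts just described might be modeled in C as follows. This is a sketch only; the 32-bit field widths and the fixed-offset implicit mapping in info_addr() are assumptions chosen for illustration, not the device's actual memory layout.

        #include <stdint.h>

        /* One element 22 of an output queue 18. Each element holds only a
         * pointer 24 to the next element; the location of the information 26 it
         * represents is implied by the element's own address (dashed arrows 27
         * in FIG. 3). A null pointer marks the last element. */
        struct queue_element {
            uint32_t next;              /* pointer 24: address of next element, 0 == null */
        };

        /* Queue descriptor 20 as stored in memory 14 and cached in the memory
         * controller logic 38. */
        struct queue_descriptor {
            uint32_t head;              /* head pointer 28: address of first element */
            uint32_t tail;              /* tail pointer 30: address of last element  */
            uint32_t count;             /* count 32: number N of elements in queue   */
        };

        /* Hypothetical implicit mapping: the information block an element
         * represents sits at a fixed offset from the element's address. */
        static inline uint32_t info_addr(uint32_t element_addr) {
            return element_addr + 0x100;   /* offset chosen only for illustration */
        }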
  • [0014]
    Executing enqueue and dequeue operations for a large number of queues 18 in the memory 14 at high-bandwidth line rates can be accomplished by storing some of the queue descriptors 20 in the cache 12 (FIG. 2). The queue manager 42 implements a software-controlled tag store in its CAM 44 to identify the addresses in memory 14 of the sixteen queue descriptors 20 most recently used in enqueue or dequeue operations. The cache 12 stores the corresponding queue descriptors 20 (the head pointer 28, tail pointer 30 and count 32) stored at the addresses identified in the tag store 44.
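    One way to picture the software-controlled tag store is as a sixteen-entry, fully associative lookup with an age counter per entry. The sketch below is an assumption about how such a structure could behave, not the processor's actual CAM microcode; the names are invented for illustration.

        #define CAM_ENTRIES 16

        struct tag_store {
            uint32_t addr[CAM_ENTRIES];   /* memory 14 addresses of cached queue descriptors */
            uint8_t  lru[CAM_ENTRIES];    /* age counters: 0 == most recently used */
            int      valid[CAM_ENTRIES];
        };

        /* Look up a queue descriptor address; returns the cache entry index on a
         * hit (refreshing the LRU state), or -1 on a miss. */
        static int cam_lookup(struct tag_store *ts, uint32_t qd_addr) {
            for (int i = 0; i < CAM_ENTRIES; i++) {
                if (ts->valid[i] && ts->addr[i] == qd_addr) {
                    for (int j = 0; j < CAM_ENTRIES; j++)
                        if (ts->valid[j] && ts->lru[j] < ts->lru[i])
                            ts->lru[j]++;          /* age the younger entries */
                    ts->lru[i] = 0;                /* this entry is now MRU   */
                    return i;
                }
            }
            return -1;
        }

        /* Entry whose descriptor was least recently used for an enqueue or
         * dequeue operation; this is the candidate for replacement on a miss. */
        static int cam_lru_entry(const struct tag_store *ts) {
            int victim = 0;
            for (int i = 0; i < CAM_ENTRIES; i++) {
                if (!ts->valid[i])
                    return i;                      /* unused entry available  */
                if (ts->lru[i] > ts->lru[victim])
                    victim = i;
            }
            return victim;
        }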
  • [0015]
    The queue manager 42 issues commands to return queue descriptors 20 to memory 14 and fetch new queue descriptors from memory such that the queue descriptors stored in the cache 12 remain coherent with the addresses in the tag store 44. The queue manager 42 also issues commands to the memory controller logic 38 to indicate which queue descriptor 20 in the cache 12 should be used to execute a given command. Commands that reference the head pointer 28 or tail pointer 30 (see FIG. 3) of a queue descriptor 20 in the cache 12 are executed in the order in which they arrive at the memory controller logic 38.
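    Continuing the tag-store sketch above, a miss could be handled by returning the least-recently-used descriptor to memory 14 and fetching the requested one into the freed entry, which keeps the cache 12 coherent with the address list in the CAM 44. The mc_writeback_descriptor() and mc_fetch_descriptor() helpers below merely stand in for the commands issued to the memory controller logic 38; they are assumptions, not the actual command interface.

        /* Placeholder commands to the memory controller logic 38. */
        void mc_writeback_descriptor(int cache_entry, uint32_t qd_addr);
        void mc_fetch_descriptor(int cache_entry, uint32_t qd_addr);

        /* Ensure the descriptor at qd_addr is resident in the cache 12 and
         * return its cache entry. On a miss, the LRU entry is evicted: its
         * (possibly modified) descriptor is written back to memory 14 before
         * the new descriptor is fetched, so the cached descriptors stay
         * coherent with the addresses held in the tag store 44. */
        static int queue_descriptor_resident(struct tag_store *ts, uint32_t qd_addr) {
            int entry = cam_lookup(ts, qd_addr);
            if (entry >= 0)
                return entry;                          /* hit: already cached     */

            entry = cam_lru_entry(ts);                 /* victim entry            */
            if (ts->valid[entry])
                mc_writeback_descriptor(entry, ts->addr[entry]);
            mc_fetch_descriptor(entry, qd_addr);

            for (int i = 0; i < CAM_ENTRIES; i++)      /* age the other entries   */
                if (i != entry && ts->valid[i])
                    ts->lru[i]++;
            ts->addr[entry]  = qd_addr;                /* replace evicted address */
            ts->valid[entry] = 1;
            ts->lru[entry]   = 0;                      /* now most recently used  */
            return entry;
        }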
  • [0016]
    Referring to FIG. 4, when performing an enqueue operation, the address in memory 14 of a new element 22e to be added to the queue 18 is stored (as indicated by dashed line 40) in the pointer 24d of the element 22d that currently is at the address indicated by the tail pointer 30 for that queue. The address of the new element 22e is then stored in the tail pointer 30 of the corresponding queue descriptor 20 in the cache 12, as indicated by dashed line 31. Because only a single write operation to memory 14 is required for an enqueue operation, only two cycles are needed to update the cache 12. Subsequent enqueue operations to the same queue 18 then can be initiated.
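    A minimal C sketch of the enqueue path just described, reusing the queue_descriptor structure from the earlier sketch. mem_write32() is an assumed stand-in for the single write to memory 14, and the empty-queue branch is an assumption added for completeness.

        void mem_write32(uint32_t addr, uint32_t value);   /* placeholder write to memory 14 */

        static void enqueue(struct queue_descriptor *qd, uint32_t new_elem_addr) {
            /* The new element's own next-pointer is assumed to already be null. */
            if (qd->count == 0)
                qd->head = new_elem_addr;              /* queue was empty          */
            else
                mem_write32(qd->tail, new_elem_addr);  /* link old tail 22d to 22e */
            qd->tail = new_elem_addr;                  /* cached tail pointer 30   */
            qd->count++;                               /* cached count 32          */
        }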
  • [0017]
    For dequeue operations, the address contained in the head pointer 28 is returned to the queue manager 42 (FIG. 2) to indicate (by implicit mapping) the location in memory 14 of the information 26a to be sent to a specified destination device 8 (FIG. 1). The pointer 24a in the element 22a is read to obtain the address of the next element 22b in the queue 18. The address of the next element 22b is written to the head pointer of the corresponding queue descriptor 20 in the cache 12 (indicated by dashed line 29). Subsequent dequeue operations to the same queue 18 are delayed until the head pointer 28 in the cache 12 is updated. However, so long as the element 22 being read is not the only element in the queue 18, an enqueue operation with respect to the queue 18 can proceed even while a dequeue operation is in progress because the tail pointer 30 is not affected by the dequeue operation.
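    A matching sketch of the dequeue path, with mem_read32() as an assumed stand-in for the read of the departing element's pointer. Note that the tail pointer 30 is never touched here, which is why an enqueue to the same queue can overlap with a dequeue whenever more than one element remains.

        uint32_t mem_read32(uint32_t addr);            /* placeholder read from memory 14 */

        static uint32_t dequeue(struct queue_descriptor *qd) {
            if (qd->count == 0)
                return 0;                              /* nothing to remove                      */
            uint32_t departing = qd->head;             /* information is at info_addr(departing) */
            qd->head = mem_read32(departing);          /* read pointer 24a: new head             */
            qd->count--;                               /* cached count 32                        */
            return departing;
        }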
  • [0018]
    An advantage of locating the cache 12 of queue descriptors 20 at the memory controller logic 38 is that it allows low-latency access between the cache 12 and the memory 14. Also, having the control structure for queue operations in a programming engine allows flexible, high performance while using existing micro-engine hardware.
  • [0019]
    Various features of the system can be implemented in hardware, software or a combination of hardware and software. For example, some aspects of the system can be implemented in computer programs executing on programmable computers. Each program can be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. Furthermore, each such computer program can be stored on a storage medium, such as read only memory (ROM) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage medium is read by the computer to perform the functions described above.
  • [0020]
    Other implementations are within the scope of the following claims.
Classifications
U.S. Classification: 711/136, 711/108, 711/E12.02
International Classification: H04L12/861, H04L12/883, H04L12/879, H04L12/743, G06F12/08, G06F12/00
Cooperative Classification: H04L49/9047, H04L49/90, H04L45/7453, H04L49/901, G06F12/0875, H04L49/9021
European Classification: H04L49/90E, H04L49/90C, H04L49/90M, H04L45/7453, H04L49/90, G06F12/08B14
Legal Events
Date          Code  Event
Apr 4, 2002   AS    Assignment
                    Owner name: INTEL CORPORATION, CALIFORNIA
                    Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WOLRICH, GILBERT;ROSENBLUTH, MARK B.;BERNSTEIN, DEBRA;REEL/FRAME:012770/0678
                    Effective date: 20020307
Sep 27, 2010  REMI  Maintenance fee reminder mailed
Feb 20, 2011  LAPS  Lapse for failure to pay maintenance fees
Apr 12, 2011  FP    Expired due to failure to pay maintenance fee
                    Effective date: 20110220