CA2328268A1 - Queue manager for a buffer - Google Patents
Queue manager for a buffer
- Publication number
- CA2328268A1
- Authority
- CA
- Canada
- Prior art keywords
- fifo buffer
- data
- output
- input
- storage device
- Prior art date
- Legal status
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F5/00—Methods or arrangements for data conversion without changing the order or content of the data handled
- G06F5/06—Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
- G06F13/1673—Details of memory controller using buffers
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Transfer Systems (AREA)
- Communication Control (AREA)
- Dram (AREA)
- Memory System Of A Hierarchy Structure (AREA)
- Memory System (AREA)
Abstract
A bandwidth conserving queue manager for a FIFO buffer is provided, preferably on an ASIC chip and preferably including separate DRAM storage that maintains a FIFO
queue which can extend beyond the data storage space of the FIFO buffer to provide additional data storage space as needed. FIFO buffers are used on the ASIC chip to store and retrieve multiple queue entries. As long as the total size of the queue does not exceed the storage available in the buffers, no additional data storage is needed. However, when some predetermined amount of the buffer storage space in the FIFO buffers is exceeded, data are written to and read from the additional data storage, and preferably in packets which are of optimum size for maintaining peak performance of the data storage device and which are written to the data storage device in such a way that they are queued in a first-in, first-out (FIFO) sequence of addresses. Preferably, the data are written to and are read from the DRAM in burst mode.
Description
QUEUE MANAGER FOR A BUFFER
Field of the Invention This invention relates generally to management of queues of data being received from an outside source and inputted into a device for further processing. In more particular aspects, this invention relates to an improved DRAM used in conjunction with a FIFO buffer for controlling the queue of received data.
Background of the Invention There are many applications in which data is received, for short periods of time, at a higher rate than it can be utilized by a particular device, thus necessitating queuing of the data for orderly input into the device on which it is to be used. A common type of queue uses first-in, first-out (FIFO) buffers, which temporarily store the data being received from some outside source for input into the receiving device at a rate the receiving device can accommodate. One problem encountered is that the FIFO buffers may exceed their capacity when data is inputted faster than it can be outputted. Thus, there is a need for a technique for managing data in an orderly way, with minimum overhead, during periods when the data being inputted exceeds the storage capacity of the FIFO buffer or buffers.
Summary of the Invention According to the present invention, a bandwidth conserving queue manager for a FIFO buffer is provided, preferably on an ASIC chip and preferably including a separate DRAM that maintains a FIFO queue which can extend beyond the data storage space of the FIFO buffer to provide additional data storage space as needed. FIFO buffers are used on the ASIC
chip to store and retrieve multiple queue entries. As long as the total size of the queue does not exceed the storage available in the buffers, no additional data storage is needed. However, when the buffer storage space in the FIFO buffers is exceeded, data are written to and read from the additional data storage, preferably a DRAM and preferably in packets which are of optimum size for maintaining peak performance of the data storage device and which are written to the data storage device in such a way that they are queued in a first-in, first-out (FIFO) sequence of addresses.
The DRAM can be a separate chip, or it can be formed on the ASIC. In either case, its memory is separate from the FIFO
buffer or buffers.
Description of the Drawings Figure 1 is a high level diagrammatic view of the structure of the managed DRAM queue manager of the present invention;
Figure 2 is a detailed view, somewhat diagrammatic, of the input FIFO buffer;
and Figure 3 is a detailed view, somewhat diagrammatic, of the output FIFO buffer.
Description of the Preferred Embodiment Referring now to the drawings and, for the present, to Figure 1, an overview of the structure and operation of the bandwidth conserving DRAM queue manager according to the present invention is shown. The queue manager is formed on an ASIC chip 10. The queue manager receives data input 12 from an outside source which is inputted to an input FIFO (first-in, first-out) buffer 14 in which the data is arranged in a queue. Data 16 is outputted from the input FIFO buffer 14 to a memory interface 18 and to a multiplexor (Mux) 20. The memory interface 18 connects to a DRAM
chip 22 which is a separate chip. (However, the DRAM could be formed on the ASIC 10.) The multiplexor 20 is controlled by multiplexor control logic 24 to output data 16 from FIFO buffer 14 selectively to the DRAM chip 22 or to an output FIFO buffer 32. The FIFO
buffer 32 outputs data 34 to the device (not shown) to which data is being supplied.
In general, the queue manager shown in Figure 1 operates in the following manner: Data 12 to be written into the queue is inputted to the input FIFO buffer 14. Data 16 leaving the FIFO may go either to the output FIFO 32 or to the external memory interface 18 and then to the DRAM chip 22 as controlled by the mux 20 and by mux control logic 24 depending on whether or not there is enough room in the input FIFO buffer 14 and the output FIFO buffer 32 for the data being read from an external source. The mux 20 is controlled based on this condition; i.e., whether the input FIFO
buffer 14 and output FIFO buffer 32 are full or at least have a predetermined percentage of capacity filled. When there is more data to be stored in the input FIFO buffer 14 and output FIFO buffer 32 than the maximum permitted, the mux 20 selects data to be written to the external memory interface 18 and the data is then stored in the DRAM chip 22. As the output FIFO buffer 32 is read out, the data is read from the DRAM chip 22 through the memory interface, to the output FIFO buffer 32 under the control of the mux control logic 24. Thus, as long as the amount of input data 12 being read from an external source does not exceed a preselected capacity of the input FIFO buffer 14 and output FIFO buffer 32, the data is passed from the input FIFO buffer 14 directly to the output FIFO
buffer 32.
However, when the amount of data 12 being inputted exceeds the capacity or predetermined percentage of capacity of the input FIFO buffer 14 and the output FIFO buffer 32, then the data is written by the input FIFO buffer 14 to the DRAM chip 22 through the memory interface 18. The DRAM chip 22 is structured to be written and read on a first-in, first-out basis at contiguous addresses so that address mapping is not required as in a conventional cache memory. The data is written to the input FIFO buffer 14 from the external source and to the output FIFO buffer 32 from the input FIFO buffer 14 one data item at a time. However, preferably the data is written to the memory interface 18 and thence to the DRAM chip 22, and read from the DRAM
chip 22 by output FIFO 32 in bursts of multiple data items to utilize the optimum transfer rate of the DRAM chip 22.
Moreover, because the DRAM is arranged so that it is ordered on a first-in, first-out basis, the burst capabilities can be used and no address tags need be applied to the data written thereto. Thus, for example, the data can be written to and read from the DRAM chip 22 in data packets of three items, rather than have to read each data item individually by address. It is also preferred that the DRAM
be a DDR (double data rate) DRAM. Double data rate DRAM allows twice the data bandwidth for a given number of I/O pins on the ASIC package as does standard synchronous DRAM. This is accomplished by launching and capturing data on both the rising and falling edge of the clock signal.
RAMBUS is another scheme for increasing the bandwidth per pin which may be beneficial in some applications.
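The bandwidth claim above (DDR doubles throughput for a given pin count by transferring on both clock edges) reduces to simple arithmetic. The following sketch uses made-up figures purely for illustration; the bus width and clock rate are not taken from the patent.

```python
# Rough peak-bandwidth arithmetic for the DDR discussion above.
# All figures are illustrative assumptions, not from the patent.

def peak_bandwidth_bytes_per_sec(bus_width_bits, clock_hz, transfers_per_cycle):
    """Peak bandwidth = bus width (in bytes) * clock rate * transfers per cycle."""
    return (bus_width_bits // 8) * clock_hz * transfers_per_cycle

# A hypothetical 16-bit memory interface at 100 MHz:
sdr = peak_bandwidth_bytes_per_sec(16, 100_000_000, 1)  # standard synchronous DRAM
ddr = peak_bandwidth_bytes_per_sec(16, 100_000_000, 2)  # data on both clock edges

assert ddr == 2 * sdr  # DDR doubles bandwidth for the same number of I/O pins
```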
Referring now to Figure 2, a more detailed depiction of the input FIFO buffer 14 is shown.
The input FIFO buffer 14 includes latches at storage locations 40a, 40b, 40c, 40d, 40e and 40f for six different data items. The data items are read one data item at a time from an external source and are written in the FIFO buffer 14, one data item at a time, under the control of selectors 42a, 42b and 42c.
A write pointer 44 and read pointer 46 are both provided which provide outputs to a comparator 48.
The output of the comparator 48 goes to the mux control logic 24. As indicated above, the data is written in bursts, e.g. three data items at a time, from the FIFO buffer 14 to the DRAM chip 22, or one data item at a time to the output FIFO buffer 32, responsive to the control of the mux 20. A
detailed view of the output FIFO buffer 32 is shown in Figure 3.
Shown in Figure 3 are data item latches at storage locations 50a, 50b, 50c, 50d, 50e and 50f and selectors 52a, 52b, 52c, 52d, 52e and 52f which control the inputs 54a, 54b, 54c, 54d, 54e and 54f to storage locations 50a - 50f. Data outputs 56a, 56b, 56c, 56d, 56e and 56f from the data item storage 50a - 50f are provided which go to a selector 58 to provide the data output 34, the data being outputted one data item at a time. A write pointer 62 and a read pointer 64 are provided which output signals to a comparator 66. Comparator 66 outputs its difference to the mux control logic 24.
Also, the DRAM chip 22 has a write pointer, a read pointer and a comparator (all not shown), the output of which DRAM comparator is also provided to the mux control logic 24. As indicated above, the data is written to the output FIFO 32 from the DRAM in multiple data items to utilize the optimum data transfer rate of the DRAM. The memory interface is responsible for maintaining pointers to the head and tail portions of the queue which is stored in the DRAM chip 22. By having contiguous addresses and head and tail pointers, the need for individual addresses is eliminated, and the DRAM chip 22 acts in a FIFO mode.
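The head-and-tail bookkeeping described above can be sketched as a ring buffer over contiguous addresses that only ever moves data in whole bursts, so no per-item address tags are needed. This is a minimal illustrative model, not the patent's implementation; the class name, burst size of three (matching the example in the text), and capacity are assumptions.

```python
class DramFifo:
    """Sketch of the DRAM used in FIFO mode: contiguous addresses plus
    head and tail pointers, so individual address tags are unnecessary.
    BURST is the packet size (e.g. three data items, as in the text)."""

    BURST = 3

    def __init__(self, capacity_bursts=4):
        self.mem = [None] * (capacity_bursts * self.BURST)
        self.head = 0   # next address to read (front of the queue)
        self.tail = 0   # next address to write (back of the queue)
        self.count = 0  # items currently stored

    def write_burst(self, items):
        """Write one whole packet at contiguous, wrapping addresses."""
        assert len(items) == self.BURST
        assert self.count + self.BURST <= len(self.mem)
        for item in items:
            self.mem[self.tail] = item
            self.tail = (self.tail + 1) % len(self.mem)
        self.count += self.BURST

    def read_burst(self):
        """Read one whole packet; items come back in first-in, first-out order."""
        assert self.count >= self.BURST
        items = []
        for _ in range(self.BURST):
            items.append(self.mem[self.head])
            self.head = (self.head + 1) % len(self.mem)
        self.count -= self.BURST
        return items

    def empty(self):
        return self.count == 0
```

Because reads always resume at the head pointer and writes at the tail pointer, the burst capability of the DRAM can be used without any address mapping of the kind a conventional cache requires.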
The multiplexor 20 is controlled by the multiplexor control logic 24 in the following way:
Initially, data 12 is inputted to the input FIFO queue in the FIFO buffer 14 one data item at a time;
and, assuming the output FIFO buffer 32 is empty, the data is passed from the input FIFO buffer 14 directly to the output FIFO buffer 32 by the action of the mux 20. When the output FIFO buffer 32 is completely full and the input FIFO buffer 14 is half full, the mux 20 is switched by the control logic 24 responsive to the comparators 48 and 66 to pass data through the memory interface 18 to the DRAM chip 22 on the write cycle in multiple data items and for the output FIFO 32 to read data from the DRAM chip 22 through the memory interface 18 on the read cycle in multiple data items.
When the comparator in the DRAM indicates that there are no more data items stored in the DRAM
chip 22, the mux 20 is switched back to pass the data from the input FIFO
buffer 14 to the output FIFO buffer 32.
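The switching policy just described (direct path until the output FIFO is full and the input FIFO half full; DRAM path until the DRAM drains) amounts to a two-state controller. The sketch below is an illustrative reading of the control logic 24; the state names and the exact switch-back condition (DRAM empty, per the paragraph above) are assumptions, not a definitive implementation.

```python
def next_mux_state(state, input_half_full, output_full, dram_empty):
    """Two-state sketch of the mux control logic 24.
    'DIRECT': input FIFO 14 feeds output FIFO 32 straight through.
    'DRAM'  : input FIFO spills to DRAM; output FIFO refills from DRAM."""
    if state == "DIRECT" and output_full and input_half_full:
        return "DRAM"    # comparators 48 and 66 both report high occupancy
    if state == "DRAM" and dram_empty:
        return "DIRECT"  # the queue has drained back on-chip
    return state

# The direct path is kept until both thresholds are hit:
assert next_mux_state("DIRECT", True, False, True) == "DIRECT"
assert next_mux_state("DIRECT", True, True, False) == "DRAM"
# The DRAM path is kept until the DRAM comparator reports empty:
assert next_mux_state("DRAM", False, False, False) == "DRAM"
assert next_mux_state("DRAM", False, False, True) == "DIRECT"
```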
The control of the memory interface, as indicated above, is accomplished by a write pointer to keep track of where the next group of data items will be written and a read pointer to keep track of from where the next group of data items will be read. The comparator determines if these two pointers are the same, which indicates the buffer is either full or empty. The read and write pointers work in the following way: When the read and write pointers are at the same data location on a read cycle, it means the storage locations are empty, and when the read and write pointers are at the same location on a write cycle, it means that the storage locations are full.
Thus, the read and write pointers and comparators 44, 46 and 48 and read and write pointers and comparators 62, 64 and 66, operate to indicate whether the data storage in the input FIFO buffer 14 is full or empty and the data storage in output FIFO buffer 32 is full or empty and to control the operation of the mux 20 accordingly. The read and write and comparator in the DRAM operate in the same way. (It should be noted that in some applications a linked list of data items can be used rather than read and write pointers).
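The pointer-comparison rule above (same location after a read means empty, same location after a write means full) is the classic FIFO full/empty test. One common way to realize it, sketched below with a six-deep buffer matching Figure 2, is to give each pointer one extra wrap bit so the two equal-pointer cases can be told apart; this variant is an assumption of the sketch, not a detail stated in the patent.

```python
class PointerFifo:
    """Full/empty detection via read and write pointers, as with
    comparators 48 and 66. Pointers count over twice the depth so an
    extra wrap bit disambiguates the equal-pointer condition."""

    def __init__(self, depth=6):        # six storage locations, as in Figure 2
        self.depth = depth
        self.store = [None] * depth
        self.wr = 0   # write pointer, range 0 .. 2*depth - 1
        self.rd = 0   # read pointer, same range

    def empty(self):
        # Pointers coincide with the same wrap bit: the last move was a read.
        return self.wr == self.rd

    def full(self):
        # Pointers coincide with opposite wrap bits: the last move was a write.
        return (self.wr - self.rd) % (2 * self.depth) == self.depth

    def push(self, item):
        assert not self.full()
        self.store[self.wr % self.depth] = item
        self.wr = (self.wr + 1) % (2 * self.depth)

    def pop(self):
        assert not self.empty()
        item = self.store[self.rd % self.depth]
        self.rd = (self.rd + 1) % (2 * self.depth)
        return item
```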
The bus width of the interfaces to the input data 12 and output data 34 can be the same as the bus width at the memory bus interface. However, different bus widths may be desirable, especially if a DDR DRAM is used. The trade-off which must be made based on the particular application is the amount of on-chip buffering which will be provided (silicon area) versus the efficiency of the data transfer (bandwidth). In most cases, the bandwidth is more important. The maximum bandwidth is determined by the width of the DRAM interface and the rate at which it can accept commands and data. These rates are a property of the DRAM and the width is selectable, although the number of I/Os on an ASIC is usually a limiting factor. When these issues are weighed, there will be a particular minimum packet size required to maintain this maximum bandwidth. The input data 12 and output data 34 widths will usually be dictated by the particular application, so the variable is the on-chip buffer size, which would be the minimum DRAM packet size divided by the data item size, times four. (The input and output FIFOs each need to be able to store two memory packets' worth of data.)
To summarize the operation of the device of this invention, data is read into the input FIFO
buffer 14 from an outside source and is written from the input FIFO buffer 14 to the output FIFO
buffer 32 as long as the output FIFO buffer 32 is not full. When the output FIFO buffer 32 becomes full and the input FIFO buffer 14 becomes half full, the mux 20 shifts and allows the input FIFO
buffer 14 to write data to the DRAM chip 22 and allows the output FIFO buffer 32 to read data from the DRAM chip 22. The output from the output FIFO buffer 32 is outputted as output 34. When the output FIFO buffer 32 and the DRAM chip 22 are empty, the mux 20 then allows the input FIFO
buffer 14 to write directly to the output FIFO buffer 32. Thus, the DRAM chip 22 acts as an additional buffer space when the data input 12 is greater than input FIFO
buffer 14 and output FIFO
buffer 32 can handle.
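The buffer-sizing rule stated earlier (total on-chip buffer size equals the minimum DRAM packet size divided by the data item size, times four, i.e. two packets' worth per FIFO) works out as in this small illustration. The packet and item sizes here are hypothetical figures chosen only to make the arithmetic concrete.

```python
def on_chip_buffer_items(min_packet_bytes, item_bytes):
    """Total on-chip FIFO depth in data items: each of the two FIFOs must
    hold two memory packets' worth of data, hence the factor of four."""
    assert min_packet_bytes % item_bytes == 0
    return (min_packet_bytes // item_bytes) * 4

# Hypothetical figures: 48-byte minimum DRAM packet, 8-byte data items.
total_items = on_chip_buffer_items(48, 8)   # 6 items per packet, times 4
per_fifo = total_items // 2                 # split across input and output FIFOs
assert total_items == 24 and per_fifo == 12
```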
Claims (13)
1. A queue manager for managing data input to a system from an outside source, comprising:
an input FIFO buffer for receiving and storing data items from said outside source;
an output FIFO buffer for receiving, storing and outputting data items to said system;
a memory storage device interfacing with said input FIFO buffer, and said output FIFO
buffer, and a control mechanism to selectively write data from said input FIFO
buffer to said memory storage device, and read data from said memory storage device to said output FIFO buffer.
2. The invention as defined in claim 1 wherein said data is stored in said input FIFO
buffer and said output FIFO buffer as data items, and said control circuit includes circuit logic to write data to said output data buffer and read data from said input FIFO
buffer in multiple packets of data items.
3. The invention as defined in claim 2 wherein said memory storage device is configured to read and write data in burst mode.
4. The invention as defined in claim 1 wherein said memory storage device includes at least one DRAM chip.
5. The invention as defined in claim 1 wherein said control logic includes logic to connect said input FIFO buffer to said output FIFO buffer until said output FIFO buffer is filled to a first predetermined amount and said input FIFO buffer is filled to a second predetermined amount, and thereafter connect said input FIFO buffer to said memory storage device until said memory storage device is empty and said output FIFO buffer is empty, and then connect said input FIFO
buffer to said output FIFO buffer.
6. The invention as defined in claim 5 wherein said first predetermined amount is completely full and said second predetermined amount is one-half full.
7. The invention as defined in claim 1 wherein said control mechanism includes a multiplexor.
8. The invention as defined in claim 4 wherein said DRAM chip is a double density DRAM chip.
9. A method for managing data input to a system from an outside source, comprising the steps of:
providing an input FIFO buffer for receiving and storing data items from said outside source;
providing an output FIFO buffer for receiving, storing and outputting data items to said system;
providing a memory storage device interfacing with said input FIFO buffer, and said output FIFO buffer;
providing input data to said input FIFO buffer and output data from said output FIFO buffer, and controlling the data flow so as to connect said input FIFO buffer to said output FIFO buffer until said output FIFO buffer is filled to a first predetermined amount and said input FIFO buffer is filled to a second predetermined amount, and thereafter connecting said input FIFO buffer to said memory storage device and said output FIFO buffer to said memory storage device until said memory storage device is empty and said output FIFO buffer is empty, and then connecting said input FIFO buffer to said output FIFO buffer.
10. The invention as defined in claim 9 wherein said data is written to said input FIFO
buffer and said output FIFO buffer as data items, and data is written to said output FIFO buffer and read from said input FIFO buffer in multiple packets of data items.
buffer and said output FIFO buffer as data items, and data is written to said output data buffer and read data from said input FIFO buffer in multiple packets of data items.
11. The invention as defined in claim 9 wherein data is written to and read from said memory storage device in burst mode.
12. The invention as defined in claim 9 wherein said memory storage device includes at least one DRAM chip.
13. The invention as defined in claim 9 wherein said first predetermined amount is completely full and said second predetermined amount is one-half full.
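The control flow recited in claims 5, 9, and 13 can be sketched in software. The following is a minimal illustrative model, not the patented implementation: the FIFO depth, method names, and the use of Python deques for the FIFOs and the backing store (standing in for the claimed DRAM) are all assumptions made for clarity.

```python
from collections import deque

class QueueManager:
    """Illustrative sketch of the claimed control flow.

    Data moves directly from the input FIFO to the output FIFO until the
    output FIFO is full (first predetermined amount) and the input FIFO is
    half full (second predetermined amount). The manager then spills the
    input FIFO to the backing store and refills the output FIFO from it
    until both the store and the output FIFO have drained, at which point
    the direct path is restored.
    """

    def __init__(self, fifo_depth=8):
        self.depth = fifo_depth
        self.input_fifo = deque()
        self.output_fifo = deque()
        self.memory = deque()      # backing store; stands in for the DRAM chip
        self.overflow_mode = False

    def enqueue(self, item):
        self.input_fifo.append(item)
        self._transfer()

    def dequeue(self):
        item = self.output_fifo.popleft() if self.output_fifo else None
        self._transfer()
        return item

    def _transfer(self):
        if not self.overflow_mode:
            # Direct path: move items input -> output while the output has room.
            while self.input_fifo and len(self.output_fifo) < self.depth:
                self.output_fifo.append(self.input_fifo.popleft())
            # Output full and input half full: switch to the memory path.
            if (len(self.output_fifo) >= self.depth
                    and len(self.input_fifo) >= self.depth // 2):
                self.overflow_mode = True
        if self.overflow_mode:
            # Spill the input FIFO to the backing store (a burst write in hardware).
            while self.input_fifo:
                self.memory.append(self.input_fifo.popleft())
            # Refill the output FIFO from the backing store (a burst read).
            while self.memory and len(self.output_fifo) < self.depth:
                self.output_fifo.append(self.memory.popleft())
            # Store and output FIFO both empty: restore the direct path.
            if not self.memory and not self.output_fifo:
                self.overflow_mode = False
```

Because every stage is first-in-first-out, items leave in arrival order whether they travel the direct path or detour through the backing store; in hardware the spill and refill would be done in multi-item bursts to suit DRAM access patterns.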
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/477,179 | 2000-01-04 | ||
US09/477,179 US6557053B1 (en) | 2000-01-04 | 2000-01-04 | Queue manager for a buffer |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2328268A1 true CA2328268A1 (en) | 2001-07-04 |
Family
ID=23894844
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002328268A Abandoned CA2328268A1 (en) | 2000-01-04 | 2000-12-12 | Queue manager for a buffer |
Country Status (6)
Country | Link |
---|---|
US (1) | US6557053B1 (en) |
JP (1) | JP3560056B2 (en) |
KR (1) | KR100420422B1 (en) |
CN (1) | CN1128410C (en) |
CA (1) | CA2328268A1 (en) |
TW (1) | TW563018B (en) |
Families Citing this family (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7406554B1 (en) * | 2000-07-20 | 2008-07-29 | Silicon Graphics, Inc. | Queue circuit and method for memory arbitration employing same |
US20040047209A1 (en) * | 2000-11-22 | 2004-03-11 | Chuen-Der Lien | FIFO memory devices having multi-port cache memory arrays therein that support hidden EDC latency and bus matching and methods of operating same |
US6546461B1 (en) * | 2000-11-22 | 2003-04-08 | Integrated Device Technology, Inc. | Multi-port cache memory devices and FIFO memory devices having multi-port cache memory devices therein |
US7076610B2 (en) * | 2000-11-22 | 2006-07-11 | Integrated Device Technology, Inc. | FIFO memory devices having multi-port cache memory arrays therein that support hidden EDC latency and bus matching and methods of operating same |
US6987775B1 (en) * | 2001-08-15 | 2006-01-17 | Internet Machines Corp. | Variable size First In First Out (FIFO) memory with head and tail caching |
US6967951B2 (en) | 2002-01-11 | 2005-11-22 | Internet Machines Corp. | System for reordering sequenced based packets in a switching network |
US6892285B1 (en) * | 2002-04-30 | 2005-05-10 | Cisco Technology, Inc. | System and method for operating a packet buffer |
US20040028164A1 (en) * | 2002-08-07 | 2004-02-12 | Hongtao Jiang | System and method for data transition control in a multirate communication system |
US7093047B2 (en) * | 2003-07-03 | 2006-08-15 | Integrated Device Technology, Inc. | Integrated circuit memory devices having clock signal arbitration circuits therein and methods of performing clock signal arbitration |
EP1505506A1 (en) * | 2003-08-05 | 2005-02-09 | Sap Ag | A method of data caching |
US7515584B2 (en) * | 2003-09-19 | 2009-04-07 | Infineon Technologies Ag | Switching data packets in an ethernet switch |
US7421532B2 (en) * | 2003-11-18 | 2008-09-02 | Topside Research, Llc | Switching with transparent and non-transparent ports |
US7454552B2 (en) * | 2003-11-18 | 2008-11-18 | Topside Research, Llc | Switch with transparent and non-transparent ports |
US7539190B2 (en) * | 2004-01-05 | 2009-05-26 | Topside Research, Llc | Multicasting in a shared address space |
US7426602B2 (en) * | 2004-01-08 | 2008-09-16 | Topside Research, Llc | Switch for bus optimization |
US7042792B2 (en) * | 2004-01-14 | 2006-05-09 | Integrated Device Technology, Inc. | Multi-port memory cells for use in FIFO applications that support data transfers between cache and supplemental memory arrays |
US20050188125A1 (en) * | 2004-02-20 | 2005-08-25 | Lim Ricardo T. | Method and apparatus for burst mode data transfers between a CPU and a FIFO |
GB0404696D0 (en) * | 2004-03-02 | 2004-04-07 | Level 5 Networks Ltd | Dual driver interface |
US7484045B2 (en) * | 2004-03-30 | 2009-01-27 | Intel Corporation | Store performance in strongly-ordered microprocessor architecture |
US20060031565A1 (en) * | 2004-07-16 | 2006-02-09 | Sundar Iyer | High speed packet-buffering system |
US7246300B1 (en) | 2004-08-06 | 2007-07-17 | Integrated Device Technology Inc. | Sequential flow-control and FIFO memory devices having error detection and correction capability with diagnostic bit generation |
JP2006178618A (en) * | 2004-12-21 | 2006-07-06 | Nec Corp | Fault tolerant computer and data transmission control method |
CN100369019C (en) * | 2005-01-10 | 2008-02-13 | 英业达股份有限公司 | Electronic datagram queue processing method and system |
CN100372406C (en) * | 2005-02-25 | 2008-02-27 | 华为技术有限公司 | Method and apparatus for transmitting data between substation plates |
US8065493B2 (en) * | 2005-06-09 | 2011-11-22 | Nxp B.V. | Memory controller and method for coupling a network and a memory |
FR2889328B1 (en) * | 2005-07-26 | 2007-09-28 | Atmel Nantes Sa Sa | FIFO-TYPE UNIDIRECTIONAL INTERFACING DEVICE BETWEEN A MASTER BLOCK AND AN SLAVE BLOCK, MASTER BLOCK, AND CORRESPONDING SLAVE BLOCK |
US20070216696A1 (en) * | 2006-03-16 | 2007-09-20 | Toshiba (Australia) Pty. Limited | System and method for document rendering employing bit-band instructions |
US7756134B2 (en) | 2006-05-02 | 2010-07-13 | Harris Corporation | Systems and methods for close queuing to support quality of service |
US7894509B2 (en) * | 2006-05-18 | 2011-02-22 | Harris Corporation | Method and system for functional redundancy based quality of service |
US7856012B2 (en) | 2006-06-16 | 2010-12-21 | Harris Corporation | System and methods for generic data transparent rules to support quality of service |
US8064464B2 (en) | 2006-06-16 | 2011-11-22 | Harris Corporation | Method and system for inbound content-based QoS |
US7990860B2 (en) | 2006-06-16 | 2011-08-02 | Harris Corporation | Method and system for rule-based sequencing for QoS |
US8516153B2 (en) | 2006-06-16 | 2013-08-20 | Harris Corporation | Method and system for network-independent QoS |
US7916626B2 (en) | 2006-06-19 | 2011-03-29 | Harris Corporation | Method and system for fault-tolerant quality of service |
US8730981B2 (en) | 2006-06-20 | 2014-05-20 | Harris Corporation | Method and system for compression based quality of service |
US7769028B2 (en) | 2006-06-21 | 2010-08-03 | Harris Corporation | Systems and methods for adaptive throughput management for event-driven message-based data |
US8300653B2 (en) | 2006-07-31 | 2012-10-30 | Harris Corporation | Systems and methods for assured communications with quality of service |
JP2008165485A (en) * | 2006-12-28 | 2008-07-17 | Fujitsu Ltd | Semiconductor device and buffer control circuit |
CN101232434B (en) * | 2007-01-22 | 2011-08-24 | 中兴通讯股份有限公司 | Apparatus for performing asynchronous data transmission with double port RAM |
US7594047B2 (en) * | 2007-07-09 | 2009-09-22 | Hewlett-Packard Development Company, L.P. | Buffer circuit |
CN101552702B (en) * | 2008-12-31 | 2011-12-21 | 成都市华为赛门铁克科技有限公司 | Detection system and method for data processing system |
WO2010122613A1 (en) * | 2009-04-24 | 2010-10-28 | パナソニック株式会社 | Fifo buffer device |
CN102609235B (en) * | 2011-01-25 | 2014-08-20 | 中兴通讯股份有限公司 | Method and system for updating data after data reading of double-port RAM (random-access memory) |
US9417935B2 (en) | 2012-05-01 | 2016-08-16 | Microsoft Technology Licensing, Llc | Many-core process scheduling to maximize cache usage |
US8650538B2 (en) | 2012-05-01 | 2014-02-11 | Concurix Corporation | Meta garbage collection for functional code |
US8793669B2 (en) | 2012-07-17 | 2014-07-29 | Concurix Corporation | Pattern extraction from executable code in message passing environments |
US9575813B2 (en) | 2012-07-17 | 2017-02-21 | Microsoft Technology Licensing, Llc | Pattern matching process scheduler with upstream optimization |
KR20140078912A (en) * | 2012-12-18 | 2014-06-26 | 삼성전자주식회사 | Memory system and SoC comprising thereof |
US11099746B2 (en) * | 2015-04-29 | 2021-08-24 | Marvell Israel (M.I.S.L) Ltd. | Multi-bank memory with one read port and one or more write ports per cycle |
US11023275B2 (en) * | 2017-02-09 | 2021-06-01 | Intel Corporation | Technologies for queue management by a host fabric interface |
CN110058816B (en) * | 2019-04-10 | 2020-09-18 | 中国人民解放军陆军工程大学 | DDR-based high-speed multi-user queue manager and method |
CN110688238B (en) * | 2019-09-09 | 2021-05-07 | 无锡江南计算技术研究所 | Method and device for realizing queue of separated storage |
CN114546263B (en) * | 2022-01-23 | 2023-08-18 | 苏州浪潮智能科技有限公司 | Data storage method, system, equipment and medium |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR890004820B1 (en) * | 1984-03-28 | 1989-11-27 | 인터내셔널 비지네스 머신즈 코포레이션 | Stacked double density memory module using industry standard memory chips |
US5043981A (en) | 1990-05-29 | 1991-08-27 | Advanced Micro Devices, Inc. | Method of and system for transferring multiple priority queues into multiple logical FIFOs using a single physical FIFO |
US5524265A (en) | 1994-03-08 | 1996-06-04 | Texas Instruments Incorporated | Architecture of transfer processor |
US5590304A (en) * | 1994-06-13 | 1996-12-31 | Covex Computer Corporation | Circuits, systems and methods for preventing queue overflow in data processing systems |
US5553061A (en) * | 1994-06-27 | 1996-09-03 | Loral Fairchild Corporation | Packet processor having service priority and loss priority features |
US5638503A (en) | 1994-07-07 | 1997-06-10 | Adobe Systems, Inc. | Method and apparatus for generating bitmaps from outlines containing bezier curves |
JP3810449B2 (en) * | 1994-07-20 | 2006-08-16 | 富士通株式会社 | Queue device |
JPH08202566A (en) * | 1995-01-24 | 1996-08-09 | Nissin Electric Co Ltd | Inter-processor communication system |
US5519701A (en) | 1995-03-29 | 1996-05-21 | International Business Machines Corporation | Architecture for high performance management of multiple circular FIFO storage means |
US5604742A (en) | 1995-05-31 | 1997-02-18 | International Business Machines Corporation | Communications system and method for efficient management of bandwidth in a FDDI station |
US5673416A (en) * | 1995-06-07 | 1997-09-30 | Seiko Epson Corporation | Memory request and control unit including a mechanism for issuing and removing requests for memory access |
US5781182A (en) | 1996-11-19 | 1998-07-14 | Winbond Electronics Corp. | Line buffer apparatus with an extendible command |
KR100245276B1 (en) * | 1997-03-15 | 2000-02-15 | 윤종용 | Burst mode random access memory device |
US6058439A (en) * | 1997-03-31 | 2000-05-02 | Arm Limited | Asynchronous first-in-first-out buffer circuit burst mode control |
US6044419A (en) * | 1997-09-30 | 2000-03-28 | Intel Corporation | Memory handling system that backfills dual-port buffer from overflow buffer when dual-port buffer is no longer full |
KR100256967B1 (en) * | 1997-12-31 | 2000-05-15 | 윤종용 | Message buffering method |
US6295563B1 (en) * | 1998-01-30 | 2001-09-25 | Unisys Corporation | Control system for recreating of data output clock frequency which matches data input clock frequency during data transferring |
US6314478B1 (en) * | 1998-12-29 | 2001-11-06 | Nec America, Inc. | System for accessing a space appended to a circular queue after traversing an end of the queue and upon completion copying data back to the queue |
- 2000
- 2000-01-04 US US09/477,179 patent/US6557053B1/en not_active Expired - Fee Related
- 2000-10-11 TW TW089121153A patent/TW563018B/en active
- 2000-12-12 CA CA002328268A patent/CA2328268A1/en not_active Abandoned
- 2000-12-18 KR KR10-2000-0077615A patent/KR100420422B1/en not_active IP Right Cessation
- 2000-12-18 JP JP2000384352A patent/JP3560056B2/en not_active Expired - Fee Related
- 2000-12-27 CN CN00137006A patent/CN1128410C/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
US6557053B1 (en) | 2003-04-29 |
TW563018B (en) | 2003-11-21 |
KR100420422B1 (en) | 2004-03-04 |
CN1303053A (en) | 2001-07-11 |
JP3560056B2 (en) | 2004-09-02 |
JP2001222505A (en) | 2001-08-17 |
KR20010070306A (en) | 2001-07-25 |
CN1128410C (en) | 2003-11-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6557053B1 (en) | Queue manager for a buffer | |
US7675925B2 (en) | Method and apparatus for providing a packet buffer random access memory | |
US6226338B1 (en) | Multiple channel data communication buffer with single transmit and receive memories | |
CN104821887B (en) | The device and method of processing are grouped by the memory with different delays | |
US20060072598A1 (en) | Variable size FIFO memory | |
EP1345125A2 (en) | Dynamic random access memory system with bank conflict avoidance feature | |
US20080294856A1 (en) | Memory arbitration system and method having an arbitration packet protocol | |
US6304936B1 (en) | One-to-many bus bridge using independently and simultaneously selectable logical FIFOS | |
US6993602B2 (en) | Configuring queues based on a given parameter | |
US7126959B2 (en) | High-speed packet memory | |
US20030174708A1 (en) | High-speed memory having a modular structure | |
US20030229734A1 (en) | FIFO scheduling time sharing | |
US20060039284A1 (en) | Method and apparatus for processing a complete burst of data | |
US7304949B2 (en) | Scalable link-level flow-control for a switching device | |
US7216185B2 (en) | Buffering apparatus and buffering method | |
US7991968B1 (en) | Queue memory management | |
WO2005101762A1 (en) | Method and apparatus for processing a complete burst of data | |
US20080114926A1 (en) | Device and Method for Storing and Processing Data Units | |
US6625711B1 (en) | Method and/or architecture for implementing queue expansion in multiqueue devices | |
US5732011A (en) | Digital system having high speed buffering | |
US6831920B1 (en) | Memory vacancy management apparatus and line interface unit | |
CN113821457B (en) | High-performance read-write linked list caching device and method | |
JP4904136B2 (en) | Single-port memory controller for bidirectional data communication and control method thereof | |
KR100785892B1 (en) | Apparatus and method for controlling single port memory of bi-directional data communication | |
WO2005101763A1 (en) | Method and apparatus for forwarding bursty data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
FZDE | Discontinued |