WO2002082286A2 - Apparatus and method for efficiently sharing memory bandwidth in a network processor - Google Patents


Info

Publication number
WO2002082286A2
WO2002082286A2 (PCT/GB2002/001484)
Authority
WO
WIPO (PCT)
Prior art keywords
memory
controller
request
slice
epc
Prior art date
Application number
PCT/GB2002/001484
Other languages
French (fr)
Other versions
WO2002082286A3 (en)
Inventor
Peter Barri
Jean Calvignac
Marco Heddes
Joseph Logan
Alex Niemegeers
Fabrice Verplanken
Miroslav Vrana
Original Assignee
International Business Machines Corporation
Ibm United Kingdom Limited
Alcatel
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corporation, Ibm United Kingdom Limited, Alcatel filed Critical International Business Machines Corporation
Priority to JP2002580183A priority Critical patent/JP4336108B2/en
Priority to KR1020037011580A priority patent/KR100590387B1/en
Priority to DE60205231T priority patent/DE60205231T2/en
Priority to AT02708513T priority patent/ATE300763T1/en
Priority to EP02708513A priority patent/EP1374072B1/en
Priority to AU2002242874A priority patent/AU2002242874A1/en
Publication of WO2002082286A2 publication Critical patent/WO2002082286A2/en
Publication of WO2002082286A3 publication Critical patent/WO2002082286A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/18 Handling requests for interconnection or transfer for access to memory bus based on priority control
    • G06F 13/1605 Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F 13/161 Handling requests for interconnection or transfer for access to memory bus based on arbitration with latency improvement

Definitions

  • the present invention relates to computers and network processors in general and in particular to memory systems to be used with said computers and network processors.
  • a Clear Channel is a channel with high bandwidth which transmits large amounts of data on a single flow, as opposed to channelized links which carry several lower bandwidth data flows on a single physical link.
  • High speed storage systems, such as Static Random Access Memories (SRAM), can be used to meet high bandwidth requirements. But these memories are expensive and as a result increase the price of the devices in which they are used. The cost problem is further worsened if such high-price memories were to be used to build storage systems for computers and Network Processors.
  • the present invention provides a method for optimizing memory utilization comprising the acts of: receiving in a memory arbiter a plurality of memory access requests, at least one of the memory access requests being associated with a priority designation (the priority designation could arrive with the request, or be assigned subsequent to arrival); analyzing by said memory arbiter the memory access requests having a particular priority designation to determine the magnitude of memory bandwidth required; and sharing memory access with at least one other request if the memory request with the particular priority designation does not require full memory bandwidth.
  • high speed memories that are low cost and have high densities.
  • high speed memories preferably have large Bandwidth (BW) providing large amounts of data in a relatively short time interval.
  • the memory access request having a particular priority designation is permitted to utilize full memory bandwidth, if the memory request requires full memory bandwidth.
  • the particular priority designation is the highest.
  • the memory is provided having multiple buffers arranged in at least one slice and each buffer in the at least one slice partitioned into multiple Quadwords, and wherein the memory requests are seeking access to the at least one slice from multiple requesters.
  • the invention preferably includes methods that optimize utilization of a memory system by the resources using said memory system.
  • the requests, read or write, from multiple requesters are preferably bundled so that for each memory access cycle the maximum allowable unit of information is read or written. By so doing, information throughput is enhanced, thereby allowing relatively low-cost, high-density, relatively slow-access-time memories, such as DDR DRAM, to be used to build storage for computer Network Processors or similar devices.
  • the requesters are the sources of the memory access requests.
  • EPC: Embedded Processor Complex
  • the memory system preferably includes a plurality of buffers, formed from DDR DRAM modules, which are arranged into groups termed "slices". Each DDR DRAM is preferably partitioned into a plurality of buffers (1 through N) and is controlled by a DRAM controller. Each Buffer is preferably partitioned into sections termed "Quadwords". In one embodiment the Buffer is partitioned in Quadwords A, B, C, D. Both the buffer and Quadwords in the buffer are addressable.
  • a memory arbiter monitors requests from the Receiver Controller, EPC Controller and Transmitter Controller.
  • the memory arbiter preferably uses the requests to form Memory Access Vector per slice of memory.
  • memory access priority is preferably given to the Transmitter Controller. If the request from the Transmitter Controller requires full memory bandwidth, the Memory Access Vector is preferably based upon the Transmitter Controller Request only. If less than full memory bandwidth is required by the Transmitter Controller, any pending read requests by the EPC Controller are preferably merged with that from the Transmitter Controller to form the memory access vector per slice.
  • the DRAM Controller, in response to the memory access vector, preferably outputs a full buffer of information containing data for the Transmitter Controller (if a full buffer of data has been requested) or data for the Transmitter Controller and EPC (if the transmitter had requested less than a full buffer).
  • any excess capacity, resulting from the Transmitter Controller requesting less than a full buffer of data, is preferably allocated to the EPC Controller.
  • the Arbiter gives priority to the Receiver Controller.
  • any write requests in which the Receiver Controller has less than a full buffer payload is preferably augmented with data from the EPC.
  • the invention provides a network processor which includes: a memory system for storing information; and a memory arbiter for granting access operatively coupled to said memory system; said memory arbiter including one or more request registers in which memory access requests are received, at least one priority register for storing priority designations for the requesters, and a controller operatively coupled to the request registers and priority register, said controller including circuits for monitoring requests and request priorities to generate memory access vectors in which the highest priority request is allowed to share memory bandwidth with at least one other memory request if the highest priority request demands less than the full bandwidth.
  • a method to optimize utilization of memory comprising the acts of: a) providing the memory having multiple buffers arranged in at least one slice and each buffer in said at least one slice partitioned into multiple Quadwords; b) accepting in a memory arbiter multiple memory requests seeking access to the at least one slice from multiple requesters; c) assigning a predetermined priority by said memory arbiter to each one of the requests; d) analyzing by said memory arbiter the highest priority request to detect percentage of memory bandwidth required for said highest priority request; and e) sharing memory access with a lower priority request if the highest priority request does not utilize full memory bandwidth.
  • a method to optimize utilization of memory comprising the acts of: a) providing the memory having multiple buffers arranged in at least one slice and each buffer in said at least one slice partitioned into multiple Quadwords; b) accepting in a memory arbiter multiple memory requests seeking access to the at least one slice from multiple requesters; c) assigning a predetermined priority by said memory arbiter to each one of the requests; d) analyzing by said memory arbiter the highest priority request to detect the percentage of memory bandwidth required for said highest priority request; and e) permitting the highest priority request to utilize full memory bandwidth if said highest priority request requires full memory bandwidth, or sharing the memory bandwidth with a lower priority request if said highest priority request demands less than full memory bandwidth.
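The claimed arbitration method can be sketched as follows. This is an illustrative model only, not the patent's implementation: the names (`MemoryRequest`, `build_access_vector`) and the bitmask encoding of Quadwords are our own assumptions.

```python
QUADWORDS_PER_BUFFER = 4  # each 64-byte buffer holds Quadwords A, B, C, D

class MemoryRequest:
    def __init__(self, requester, quadword_mask):
        self.requester = requester
        # Bit i set => Quadword i (A..D) is actually needed by this request.
        self.quadword_mask = quadword_mask

def build_access_vector(requests_by_priority):
    """Grant the highest-priority request first; if it needs fewer than
    four Quadwords, fill the unused Quadwords from lower-priority requests."""
    grants = {}  # quadword index -> requester name
    for req in requests_by_priority:          # highest priority first
        for qw in range(QUADWORDS_PER_BUFFER):
            if qw in grants:
                continue                       # already taken by higher priority
            if req.quadword_mask & (1 << qw):
                grants[qw] = req.requester
    return grants

# Transmitter needs only Quadwords A and C; the EPC would take all four.
xmit = MemoryRequest("transmitter", 0b0101)
epc = MemoryRequest("epc", 0b1111)
print(sorted(build_access_vector([xmit, epc]).items()))
# -> [(0, 'transmitter'), (1, 'epc'), (2, 'transmitter'), (3, 'epc')]
```

Every Quadword of the access is thus assigned to some requester, which is the full-bandwidth property the claims describe.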
  • the multiple requesters include a receiver controller, transmitter controller and Embedded Processor Complex (EPC) Controller operably coupled within a Network Processor.
  • the transmitter controller has the highest priority.
  • the receiver controller may have the next highest priority.
  • the EPC shares memory bandwidth with the transmitter controller or the receiver controller.
  • the request may include reads and writes.
  • the read request is generated by the transmitter controller.
  • the write request is provided by the receiver controller. Read and write requests may be provided by the EPC controller.
  • a method for optimizing memory utilization comprising the acts of: receiving in a memory arbiter a plurality of memory access requests; providing in said memory arbiter a priority designation for at least one of the memory access requests; analyzing by said memory arbiter the memory access requests having the priority designation to determine the magnitude of memory bandwidth required; and permitting the memory access request having the priority designation to utilize full memory bandwidth if said memory request with the priority designation requires full memory bandwidth, or sharing the memory bandwidth with other requests if said memory access request having the priority designation demands less than full memory bandwidth.
  • sharing further includes combining memory bandwidth of the highest priority request with memory bandwidth of a lower priority request.
  • a network processor which includes: a memory system that stores information; and a memory arbiter that grants access operatively coupled to said memory system; said memory arbiter including one or more request registers in which memory access requests are received, at least one priority register that stores priority designations for the requesters, and a first controller operatively coupled to the request registers and priority register, said controller including circuits that monitor requests and request priorities to generate memory access vectors in which the highest priority request is allowed utilization of full memory bandwidth if said highest priority request so demands, or in which the highest priority request and a lower priority request share the full memory bandwidth.
  • the memory system includes a plurality of buffers arranged in at least one slice, and each buffer is partitioned into Quadwords.
  • each slice is operatively coupled to at least one buffer controller.
  • each buffer is 64 bytes partitioned into four Quadwords of 16 bytes each.
  • Each slice may be fabricated from DDR DRAM.
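As a rough illustration of the layout just described (64-byte buffers, four 16-byte Quadwords A-D per buffer), the byte offset of a Quadword within a slice might be computed as below. The addressing arithmetic is assumed for illustration; the patent states only the sizes.

```python
# Sizes taken from the text; the offset formula is our own illustration.
BUFFER_BYTES = 64
QUADWORD_BYTES = 16
QUADWORDS = ["A", "B", "C", "D"]

def quadword_offset(buffer_index, quadword):
    """Byte offset of a Quadword within a slice of 64-byte buffers."""
    return buffer_index * BUFFER_BYTES + QUADWORDS.index(quadword) * QUADWORD_BYTES

print(quadword_offset(0, "A"))  # 0   (first Quadword of buffer 0)
print(quadword_offset(1, "C"))  # 96  (64 bytes for buffer 0, plus A and B)
```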
  • a receiver controller operatively coupled to the memory arbiter.
  • a transmitter controller operatively coupled to said memory arbiter.
  • EPC embedded processor complex
  • a scheduler operatively coupled thereto.
  • the first controller selectively performs the following steps to construct the buffer memory access vector: a) Exclude slices scheduled for a refresh cycle (indicated by each DRAM controller); b) Assign slices for all R (Read) requests of the Transmitter Controller; c) Complement R-accesses from the corresponding EPC queue [Slice; Quadword]; d) Assign slices to the EPC for globally W (Write) excluded slices (e.g.
  • an apparatus comprising: a memory partitioned into N sectors, N greater than 1; and a memory arbiter controller operatively coupled to said memory: said memory arbiter controller receiving at least two memory access requests, assigning memory access priority for said requests, analyzing a selected one of said requests to determine if said selected one of said memory requests will use full memory bandwidth for a particular memory access cycle, generating a memory access vector which assigns full memory bandwidth to the selected one of said requests if full memory bandwidth was requested and sharing memory bandwidth with another request if the full memory bandwidth was not used.
  • a method to access memory comprising the acts of: a) receiving in a memory arbiter Read Requests from a first Requester, said Read Request including information identifying a portion of the memory from which data is to be read; b) determining if the data to be returned uses all of the available memory bandwidth; and c) complementing the data to be returned for the first requester with data for a second requester if the full memory bandwidth is not used by the first requester.
  • a method comprising the acts of: a) receiving in an arbiter a request from a first requester seeking access to memory; b) determining what portion of memory bandwidth is to be used as a result of said request; c) assigning usage of the total memory bandwidth to the first Requester if the determination in step b) indicates full usage of the memory bandwidth; d) complementing the bandwidth usage of the first Requester with bandwidth usage requested by a second Requester if determination in step b) indicates the bandwidth usage of the first Requester is less than full memory bandwidth.
  • the request may be a read or a write.
  • the second requester includes the EPC (Embedded Processor Complex).
  • the first Requester includes Receiver Controller.
  • Figure 1A shows a block diagram of a Network Processor in which a preferred embodiment of the present invention is used.
  • Figure IB shows a block diagram of Network Processor Requesters and memory system according to teachings of a preferred embodiment of the present invention.
  • Figure 2 shows a block diagram of the memory arbiter according to the teachings of a preferred embodiment of the present invention.
  • Figure 3 shows in accordance with a preferred embodiment of the present invention a functional block diagram illustrating a read request in which the full bandwidth is used by the Transmitter Controller.
  • Figure 4 shows in accordance with a preferred embodiment of the present invention a functional block diagram illustrating a Read Request in which different buffers from the same slice are accessed and assigned to different Target Port (TP) FIFO buffers.
  • the Figure can also be used to illustrate a Write Request in which data would flow in the opposite direction in accordance with an embodiment of the present invention.
  • Figure 5 shows in accordance with a preferred embodiment of the present invention a functional block diagram illustrating a Read Request in which different buffers from different slices are accessed and one Quadword is allotted to the EPC Controller.
  • Figure 6 shows an alternate embodiment of the memory arbiter.
  • Figure 7 shows a flowchart for the algorithm used in a Write or Read operation in accordance with an embodiment of the present invention.
  • FIG. 1A shows a block diagram of a network processor (NP) in which a preferred embodiment of the present invention set forth below is implemented.
  • the network processor includes an Ingress Side 10 and an Egress Side 12.
  • the Ingress Side 10 and Egress Side 12 are symmetrical about the imaginary axis AA.
  • Data traffic accessing the Ingress Side 10 enters on Flexbus 4 or the WRAP transmission line and exits on Switch Interface 10'.
  • Data entering Egress Side 12 enters on Switch Interface 12' and exits on the conductor labelled Flexbus 4'.
  • the Flexbusses 4, 4' and Switch Interfaces (Int) 10', 12' could be considered as Clear Channels delivering 10 Gbps and 14 Gbps, respectively.
  • the Switch Interface 10' is coupled to Switch Interface 12' by a switch assembly (not shown) such as Prizma, developed and marketed by IBM.
  • the Ingress Side 10 includes data flow chip 14, EPC chip 16, data buffers 18 and scheduler chip 20.
  • the Egress Side 12 includes data flow chip 14', EPC chip 16', data buffers 18' and scheduler chip 20'.
  • a WRAP transmission line interconnects data flow chip 14' to data flow chip 14.
  • the named components are interconnected as shown in the figure. It should be noted that elements with like names in the figure are substantially similar and the description of one covers the description of the other. By way of example, EPC chip 16 and EPC chip 16' are substantially identical. Likewise, components with similar names are also identical.
  • the configuration shown in Figure 1A provides bidirectional functionality. A packet walk through the chipset will be used to explain its functionality.
  • An HDLC frame extracted from Sonet/SDH stream by a framer device (not shown) is received on Flexbus 4 (Ingress side) and forwarded into data flow chip 14.
  • the data from the frame is written into data buffers 18.
  • the frame context (FCB) pointer is enqueued in the EPC (embedded Processor Complex) queues.
  • the EPC chip 16 reads out the frame pointers and processes the respective frame in one of its picoprocessors.
  • EPC can issue requests for reading/writing the appropriate parts of the frame (e.g. L2 and L3 headers) from/into the data buffer.
  • EPC passes the frame context to data flow chip which enqueues frame into the queueing structure of the scheduler chip 20.
  • the scheduler in scheduler chip 20 selects the frame from the appropriate queue for transmission, which means that the corresponding frame pointer is passed to the data flow chip.
  • the transmitted frame is read out from data buffer and is transmitted on switch interface 10' in the form of PRIZMA cells.
  • PRIZMA cells are sixty-four-byte cells carrying segments of variable-size frames, or complete Asynchronous Transfer Mode (ATM) cells.
  • the cells are fed over switch interface 10' into a cross point switch (not shown).
  • the cross point switch is a product named PRIZMA manufactured and marketed by International Business Machines Corporation.
  • PRIZMA cells are received on switch interface 12' and forwarded to data buffers 18'.
  • the frame pointer is enqueued for processing in EPC queues.
  • Egress EPC 16' retrieves the frame context from the data flow chip and processes the frame header in the picocode running in the EPC picoprocessors (not shown).
  • the result of the processing is the frame context, which is passed to data flow chip 14' and is enqueued in appropriate queue of the scheduler 20'.
  • the scheduler in scheduler chip 20' selects the frame to be transmitted and this is then read out from the data buffer 18' and transmitted on the line interface labelled Flexbus 4' of data flow chip 14'.
  • FIG. IB shows a block diagram of the data flow chip and memory system (Sys) 21 according to the teachings of a preferred embodiment of the present invention.
  • Data into the data flow chip is provided on the bus labelled Data_in and data out of the chip is transmitted on the bus labelled Data_out.
  • Data_in and Data_out are Clear Channels transmitting large amounts of data.
  • each slice is made up of a plurality of buffers and is connected by an individual bus (0-5) to a separate DRAM controller in the data flow chip.
  • the DRAM controllers are conventional DRAM controllers and provide write, read, refresh and other functions to the slices which they service. Because DRAM controllers are well known within the art they will not be discussed further.
  • the functional blocks in the data flow chip include Receiver Controller 22, Memory arbiter 24, FCB arbiter 26, BCB arbiter 28, EPC controller 30, Buffer acceptance and accounting 32, scheduler interface controller 34, and transmitter controller 36.
  • the QDR SRAM 38 stores a list of buffers that are in memory system 21.
  • QDR SRAM 40 stores information relative to frames in the target blade (T/B) and target port (TP) queues.
  • the memory arbiter 24 interfaces the data flow chip to the memory system 21. To this end the memory arbiter collects read (R)/write (W) requests from the receiver, transmitter and embedded processor complex (EPC) controllers 22, 36 and 30, and schedules access towards the individual data store memory slices.
  • each memory slice includes a plurality of buffers, each buffer being 64 bytes. Frame data are written into different buffers spread over different slices of memory in order to maximize use of memory bandwidth. When reading data from memory, the data is likewise pulled out in 64-byte units. Stated another way, the bandwidth going into or out of memory is 64 bytes per access. It should be noted that other bandwidth sizes can be designed.
  • the memory arbiter makes sure that any access to the memory (Read or Write) has a payload of 64 bytes. If a request, from a requester, has less than 64 bytes, the payload is augmented by data from another requester.
  • receiver controller 22 receives data from incoming bus labelled Data_in and issues write requests in order to write receive data into individual buffers in memory system 21.
  • transmitter controller 36 issues read requests in order to transmit selected frames on DATA OUT.
  • the EPC controller 30 terminates different messages from/to the EPC and issues read/write requests to the data store (memory system 21). It also keeps track of frames waiting before processing (G-FIFOs).
  • the buffer acceptance and accounting block 32 is responsible for the enqueue/discard decision on a per-frame basis. It also maintains the queue-filling level on a TB/TP basis and provides this information to the switch fabric on the switch interface.
  • BCB and FCB memory arbiters provide scheduling of different accesses for link lists operation such as chain/dechain FCB or BCB, lease/release FCB or BCB.
  • FIG. 2 shows a functional block diagram of the memory arbiter according to a preferred embodiment of the present invention.
  • the function of the memory arbiter is to provide access to memory system 21.
  • the memory arbiter accepts requests from receiver controller 22, transmitter controller 36 and EPC controller 30.
  • the requests are prioritized with the transmitter controller 36 having the highest priority, the receiver controller 22 the next highest priority and the EPC controller 30 the lowest. Of course different order or priorities may be selected.
  • In granting permission for access to the memory, the arbiter makes sure that for each memory access the maximum bandwidth of data permitted by the memory is utilized. As a consequence, if the memory request is for reading data, the request would be a read request from the transmitter controller 36 and/or the EPC controller 30.
  • the memory arbiter analyzes the request from the transmitter controller; if the request requires the full memory bandwidth then the arbiter generates the access vector having a command per slice part and an address per bank part. The access vector would be delivered to the appropriate memory controller and the data is extracted from memory.
  • each buffer in the memory has 64 bytes partitioned into 4 Quadwords A, B, C and D, with each Quadword having 16 bytes. If the transmitter controller requires less than 4 Quadwords in any memory access, the unused Quadwords are given to the EPC controller. Any request for writing is issued by the receive controller and the EPC controller. Like the read request, all write requests require 4 Quadwords of information to be delivered to the memory controller. If the receiver controller writes less than 4 Quadwords, the non-used Quadwords are assigned to the EPC controller. As a consequence, every access to the memory writes or reads 4 Quadwords (64 bytes) of data. By so doing, the maximum memory bandwidth is utilized with no room for wasted cycles.
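The write-side augmentation described above can be sketched as follows: every access must move exactly four 16-byte Quadwords, so Quadwords the receiver controller does not fill are taken from the EPC's pending write data. The function and data shapes (`merge_write_payload`, dict keyed by Quadword index) are illustrative assumptions, not the patent's structures.

```python
QW = 16  # bytes per Quadword, per the text

def merge_write_payload(receiver_quadwords, epc_quadwords):
    """receiver_quadwords: dict {quadword_index: 16-byte payload} from the
    receiver controller; Quadwords it leaves unused are filled from the
    EPC's queued write data so the access always carries 64 bytes."""
    payload = []
    epc_iter = iter(epc_quadwords)
    for qw in range(4):
        if qw in receiver_quadwords:
            payload.append(receiver_quadwords[qw])
        else:
            payload.append(next(epc_iter, bytes(QW)))  # pad if EPC has nothing
    return b"".join(payload)

recv = {0: b"R" * QW, 1: b"R" * QW}   # receiver fills Quadwords A and B only
epc = [b"E" * QW]                     # one pending EPC write Quadword
buf = merge_write_payload(recv, epc)
assert len(buf) == 64                 # full memory bandwidth on every access
```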
  • the memory arbiter includes a bus structure 40 interconnecting memory system 21, receiver controller 22, EPC controller 30 and transmitter controller 36.
  • the receiver controller 22 interfaces the memory arbiter to the switch or line interface.
  • the EPC controller interfaces the arbiter to the EPC chip (Figure 1).
  • the transmitter controller 36 interfaces the memory arbiter to the Data_out bus.
  • the arbiter includes memory arbiter controller 42, which receives the respective requests shown in the figure and generates the access vector that is supplied to the respective controller to access individual slices in the memory system.
  • the memory arbiter arbitration works in 11-cycle windows (one cycle equals 6 ns). At the start of the access window, the memory arbiter receives the following inputs (requests) to be scheduled in the next window:
  • Transmitter Controller Request - represented by the BCB address (BCBA) of the buffer to be read; an RF flag indicating whether the buffer can be released; and a Quadword mask accompanying each BCBA address.
  • the Quadword mask, indicating which Quadwords within this buffer are to be effectively read, allows the memory arbiter to complement the access for unused Quadwords with ones from the EPC.
  • 0, 1, 2 or 3 requests can be made by the transmitter controller in one memory access window. The respective requests are shown by arrows interconnecting the transmitter controller 36 with memory arbiter controller 42. The direction of the arrow shows the direction in which the requests flow.
  • Receiver Controller Requests - represented by slice exclusion masks and two Quadword masks. The two Quadword masks indirectly indicate how many buffers preferably need to be allocated per request (e.g. if one of the Quadword masks is '0000' it means only one buffer shall be allocated) and which memory banks (Quadwords) should be used in the different buffers.
  • the receiver controller 22 has the second highest priority to access memory. The requests allowed to the receiver controller are shown as arrows emanating from the receiver controller to the memory arbiter controller 42.
  • EPC Controller requests - represented by queues per slice, per action and per Quadword.
  • the memory arbiter shall assign all remaining slices based on the weight of individual Quadword requests. This weight is proportional to the age of the Quadword request, expressed by a 6-bit value.
  • the memory arbiter can complement accesses of the transmitter and receiver by reading corresponding Quadword access requests from the EPC request queueing system. In a preferred embodiment of the invention the maximum number of Quadword accesses that the EPC can be granted in one memory access window is limited to 8 read and 8 write Quadwords.
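The EPC grant policy just described (pending Quadword requests weighted by a 6-bit age, capped at 8 read and 8 write Quadwords per access window) might be sketched as below. The tuple layout for a pending request is an assumption made for illustration.

```python
# Caps from the text: at most 8 read and 8 write EPC Quadwords per window.
MAX_GRANTS = {"R": 8, "W": 8}

def grant_epc_quadwords(pending):
    """pending: list of (action 'R'/'W', slice_id, quadword, age 0..63).
    Oldest (highest-weight) requests win, subject to the per-action caps."""
    granted, used = [], {"R": 0, "W": 0}
    for req in sorted(pending, key=lambda r: -r[3]):   # highest age first
        action = req[0]
        if used[action] < MAX_GRANTS[action]:
            granted.append(req)
            used[action] += 1
    return granted

# Ten pending reads with ages 0..9: only the 8 oldest fit in one window.
reads = [("R", 0, q % 4, age) for q, age in enumerate(range(10))]
winners = grant_epc_quadwords(reads)
print(len(winners))  # 8
```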
  • the single headed arrow indicates information flow passing from the EPC controller and memory arbiter controller 42.
  • the memory arbiter controller 42 receives the requests from the respective requesters (receiver controller 22, transmitter controller 36 and EPC controller 30) and generates an access vector which is delivered to the appropriate slice controller (Ctrl) for reading or writing information from or into buffers in the selected slice. Excess bandwidth that is not utilized by the receiver controller or the transmitter controller is assigned to the EPC controller. As a consequence, all accesses to memory utilize the full memory bandwidth. Based upon the input from the requesters, the memory arbiter controller 42 performs the following slice selection algorithm in order to construct the buffer memory access vector for the next window:
  • the slice selection algorithm can be implemented in logic hardware or can be coded as microcode running in a picoprocessor.
  • the selection of hardware logic or picocode to implement the above selection algorithm is a design of choice and is well within the skill of one skilled in the art given the algorithm set forth above.
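The slice selection steps described above (exclude slices in a refresh cycle, serve the Transmitter Controller's reads first, complement unused Quadwords from the EPC queues) might be coded roughly as the following sketch; the data structures are assumptions, since the patent describes the algorithm only in prose.

```python
def select_slices(num_slices, refreshing, xmit_reads, epc_read_queue):
    """refreshing: set of slice ids in a refresh cycle this window.
    xmit_reads: dict {slice: quadword_mask} of transmitter read requests.
    epc_read_queue: dict {slice: quadword_mask} of pending EPC reads."""
    vector = {}
    for s in range(num_slices):
        if s in refreshing:                       # step a: skip refresh slices
            continue
        mask = xmit_reads.get(s, 0)               # step b: transmitter first
        free = ~mask & 0b1111                     # Quadwords the xmit left unused
        mask |= epc_read_queue.get(s, 0) & free   # step c: complement from EPC
        if mask:
            vector[s] = mask
    return vector

# Slice 2 is refreshing; the transmitter reads A,B of slice 0; the EPC
# fills slice 0's C,D and gets slice 3's Quadword A to itself.
print(select_slices(4, {2}, {0: 0b0011}, {0: 0b1100, 2: 0b1111, 3: 0b0001}))
# -> {0: 15, 3: 1}
```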
  • the alternate memory arbiter includes memory arbiter controller (CTRL) 42 connected to request registers A, B, C and priority table 44.
  • the request registers store requests from the respective requester.
  • Request register A stores request from transmitter (XMIT) controller
  • request register B stores request from receiver (RECV) controller
  • register C stores requests from EPC controller.
  • the priority table 44 stores a priority designation for each of the requesters.
  • the transmitter controller, which has the highest priority, is assigned a 1; the receiver controller, with the next highest priority, a 2; and the EPC controller, with the lowest priority, a 3.
  • the memory arbiter controller, designed in accordance with the above selection algorithm, uses the information in the registers together with the priority levels to generate the memory access vector.
  • FIGS 3, 4 and 5 show examples which further explain the invention in accordance with a preferred embodiment.
  • the feature of utilizing full memory bandwidth when data is extracted or read from memory is demonstrated. The data in memory was written in earlier write cycles.
  • Figure 3 shows a functional block diagram illustrating a read request in which the full bandwidth is used by the transmitter controller.
  • the illustration includes memory 51, timing representation 48, memory arbiter controller 42, and preparation areas 46.
  • Memory 51 includes a plurality of buffers arranged in sets termed slices. In the figure, slices 0, 1, 2, 3 and 4 are shown. However, this should not be construed as a limitation since more or fewer slices can be used. In addition, 3 buffers (labelled 1, 2 and 3) are shown in each slice. However, this should be construed as an example and not a limitation. The number of buffers that are used in a slice depends on the choice of the designer.
  • each of the buffers is 64 bytes, partitioned into sectors termed Quadwords A, B, C and D.
  • Each Quadword is 16 bytes.
  • Each slice is fabricated from a DDR DRAM module partitioned into N buffers.
  • the showing of numerals in the buffer 1 means that slice 1 buffer 1 is filled with data containing Quadwords 3, 4, 1 and 2.
  • slice 3, buffer 1 is filled with data containing Quadwords 8, 5, 6 and 7.
  • Each Quadword stores a double word (16 bytes). It should be noted that other granularity could be selected. Quite likely this data was received in the receive controller 22 (Figure 1) and at an earlier time loaded or written into memory 51.
  • Timing representation 48 shows graphically the timing of the transfer of the data removed from storage and rearranged (rotated).
  • the graphical representation has space for Quadword A, B, C and D.
  • the memory access is 11 cycles, approximately 66 nanoseconds. In each memory cycle two buffers in different memory slices can be accessed simultaneously.
  • Memory arbiter 42 was previously discussed and will not be repeated in detail. Suffice it to say that it takes requests from the transmit controller and the EPC and arranges them so as to extract the requested information from memory 51.
  • preparation area 46 contains the resources necessary to manage data removed from storage at the request of the transmitter controller 36 (Figure 1B).
  • the preparation area 46 includes port control blocks (PCBs) and a set of target port buffers labelled TP 0, TP 1, TP 3, TP 4 ... TP N.
  • the number of TP (Target Port) buffers depends on the designer. Therefore, the showing of 5 should not be construed as a limitation.
  • the closed arrow labelled RR indicates a round robin procedure whereby the buffers are filled or serviced in a clockwise direction.
  • the PCB contains a listing of buffers in the Preparation Area. In the example shown there are 64 PCBs.
  • the PCB also identifies the next slices which are to be extracted from memory.
  • slices 1 and 3 are to be removed and loaded into target port buffer 0.
  • the information in this buffer will subsequently be transported through port 0 onto data_out line 1 (Figure 1B).
  • Each of the target port buffers contains 128 bytes (8 Quadwords) .
  • the transmitter controller raises requests to the memory arbiter controller requesting that the information in slice 1, buffer 1 and slice 3, buffer 1 of the memory is read and loaded into Target Port buffer 0.
  • the EPC Read Request is also presented to the memory arbiter controller 42. Since the request by the transmitter controller requires transmission of two full buffers there is no room, during this read memory cycle, to accommodate any requests from the EPC for read action. It should be noted that since the request is for slices 1 and 3 the information can be read out simultaneously, rearranged in staging area 48 and transported out into TP buffer 0 for subsequent transmission through port 0. The information from staging area 48 is read into TP buffer 0 in correct sequential order.
  • Figure 3 depicts a case when the transmit controller accesses memory and utilizes the entire bandwidth. In this case the EPC does not have access to memory during this access window.
  • the Transmitter Controller is receiving two full buffers (one from slice 1 and one from slice 3) and queueing this data into target port buffer 0 (TP 0).
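The slice/buffer/Quadword organization of Figure 3 can be summarized with a small model. The constants mirror the example in the text (5 slices, 3 buffers per slice, four 16-byte Quadwords per buffer); they are the figure's example values, not limits of the design:

```python
# Example geometry from Figure 3 (not a limitation of the design)
NUM_SLICES = 5            # slices 0..4
BUFFERS_PER_SLICE = 3     # buffers 1..3 in each slice
QUADWORDS_PER_BUFFER = 4  # Quadwords A, B, C, D
QUADWORD_BYTES = 16

def buffer_bytes():
    """Capacity of one buffer: 4 Quadwords x 16 bytes."""
    return QUADWORDS_PER_BUFFER * QUADWORD_BYTES

def memory_cycle_payload():
    """Two buffers in different slices can be accessed per ~66 ns
    memory cycle, so the full-bandwidth payload is two buffers."""
    return 2 * buffer_bytes()
```

With these values each buffer holds 64 bytes, and a full-bandwidth access (as in the Figure 3 read into TP 0) moves 128 bytes per memory cycle.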
  • Figure 4 shows an example in which different buffers from the same slice are accessed and distributed to different FIFO port buffers to optimize memory access. As can be seen except for the data pattern stored in Memory 51 the structure of Figure 4 is substantially similar to Figure 3, previously described. Therefore, only the additional features in Figure 4 will be discussed further.
  • Information is loaded into slice 1, buffer 1, Quadwords C and D; slice 1, buffer 2, Quadwords A, B, C and D; Slice 2, buffer 1, Quadwords A and D; and slice 2, buffer 3, Quadwords B, C and D.
  • the Quadwords are identified by respective numbers.
  • the information in the PCBs indicates data at slice 1, buffer 1 is to be loaded into TP buffer 0. Likewise, data in slice 2, buffer 1 is to be loaded into TP 1. It should be noted that additional information may be required in the PCBs in order to give complete and accurate instructions regarding moving data from memory to appropriate TP buffers . In order to make the description less complicated the additional information is omitted. But the disclosure is sufficient to enable one skilled in the art to provide the additional information.
  • arrow 52 indicates Quadwords 1 and 2 Slice 1 is sequentially arranged in TP 0; arrow 54 indicates Quadwords 1 and 2, Slice 2 are sequentially placed in TP 1; arrow 56 shows Quadwords 1 and 4, Slice 2 are sequentially placed in TP 2; arrow 58 shows Quadword 2 buffer 2, slice 2, is placed in TP 3 and arrow 60 shows Quadword 2, buffer 2 is placed in TP 4.
  • in Figure 4 the total memory bandwidth is used in each memory cycle even though the data is distributed to different TP (Target Port First In First Out) FIFO buffers.
  • in Figure 5 the schematic demonstrates the situation wherein the memory request from the transmitter controller is less than the full memory bandwidth and the unused Quadword in Slice 2 is filled with Quadword "2", buffer 3, which is delivered via arrow 62 to the EPC in a RR (Round Robin) manner.
  • the remaining portion of Figure 5 is substantially similar to Figure 4 whose previous description is applicable and incorporated by reference.
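The Figure 5 idea, granting Quadwords left unused by the transmitter's read request to pending EPC requests, can be sketched as below. The queue representation and function name are assumptions for illustration; the patent specifies only that unused Quadwords are complemented with EPC accesses in round robin fashion:

```python
from collections import deque

def complement_with_epc(used_quadwords, epc_queue):
    """Grant Quadwords (of A-D) not used by the transmitter's read
    request to pending EPC requests, oldest first."""
    grants = {}
    for qw in "ABCD":
        if qw not in used_quadwords and epc_queue:
            grants[qw] = epc_queue.popleft()
    return grants
```

So if the transmitter only needs Quadwords A, B and C of a slice, the D slot of that access window carries the oldest waiting EPC request, and the full memory bandwidth is still consumed.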
  • Step A determines slice eligibility for writing or reading information. If the slice does not need to be refreshed then it is eligible for assignment.
  • Steps B and C relate to read activity performed by the transmit controller whereas Steps D through G relate to write activity performed by the receive controller.
  • the EPC can perform either Write or Read and can be associated either with the Read routine or the Write routine.
  • Step B grants access to the transmit controller to read a requested number of slices R from the memory.
  • R can be 1 through a maximum number. In the embodiment shown in this application R is set to 3. Of course other values of R can be used depending on the choice of the designer.
  • Step C complements the Read operation by granting unused Quadwords in a read request to the EPC. Stated another way, if the Read request does not use all the Quadwords then the unused ones are assigned to the EPC. The EPC can have several Read requests waiting to go to memory. Therefore, the algorithm assigns the Read requests to the EPC in a round robin fashion.
  • the WRITE request is controlled by steps D through G and is performed by the Receive controller or EPC.
  • Step D for the Receive Controller is based on the principle that buffers holding adjoining portions of the same frame cannot be written in the same slice. This means the adjoining information is spread across different slices of the memory. As a consequence, certain slices in memory are not eligible to be written into.
  • in Step D the receiver controller identifies those slices that cannot be written.
  • the slices are identified by X which can be from 1 to a max value.
  • in Step E the algorithm grants to the EPC eligible slices requested for Write that are among the X slices identified in Step D but not among those assigned in Step B.
  • X represents the slices excluded in Step D while R represents the slices used for reading in Step B.
  • in Step F the algorithm grants in a round robin fashion the slices requested by the receive controller.
  • the round robin assignment is necessary because the Receive controller can request N number of slices where N is greater than 1. It should be noted that the Receive controller will not ask for slices that are excluded in Step D. In addition, the slices granted to the Receive controller are those not granted for Read (R) .
  • in Step G the Quadwords which are not used by the Receive controller for writing are granted to the EPC. The grant is done in a round robin fashion because the EPC can make multiple Write requests.
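The slice-assignment portion of Steps A through G can be condensed into the following sketch. It is a simplified illustration under stated assumptions: Quadword-level complementing (Steps C and G) and round-robin starting points are omitted, and the request shapes (lists of slice numbers) are invented for the example:

```python
def build_grant(all_slices, refreshing, xmit_reads, recv_excluded,
                recv_writes, epc_write_wants):
    """One arbitration window over the memory slices.

    Step A: slices under refresh are not eligible this window.
    Step B: the transmitter is granted its requested read slices.
    Step E: write-excluded slices (from Step D) not used for reads
            may serve EPC write requests.
    Step F: remaining eligible, non-excluded slices go to the
            receiver's write request.
    """
    eligible = set(all_slices) - set(refreshing)                 # Step A
    reads = [s for s in xmit_reads if s in eligible]             # Step B
    epc_writes = [s for s in epc_write_wants                     # Step E
                  if s in recv_excluded and s in eligible and s not in reads]
    writes = [s for s in recv_writes                             # Step F
              if s in eligible and s not in reads and s not in recv_excluded]
    return {"read": reads, "write": writes, "epc_write": epc_writes}
```

For example, with slice 4 refreshing, the transmitter reading slices 1 and 3, and slice 0 write-excluded for the receiver, the receiver's write lands in slice 2 and the EPC's write in slice 0, so every eligible slice is put to work in the window.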

Abstract

A Network Processor (NP) includes a controller that allows maximum utilization of the memory. The controller includes a memory arbiter that monitors memory access requests from requesters in the NP and awards high priority requesters all the memory bandwidth requested per access to the memory. If the memory bandwidth requested by the high priority requester is less than the full memory bandwidth, the difference between the requested bandwidth and full memory bandwidth is assigned to lower priority requesters. By so doing every memory access utilizes the full memory bandwidth.

Description

APPARATUS AND METHOD FOR EFFICIENTLY SHARING MEMORY BANDWIDTH IN A NETWORK PROCESSOR
BACKGROUND OF THE INVENTION
FIELD OF THE INVENTION
The present invention relates to computers and network processors in general and in particular to memory systems to be used with said computers and network processors.
PRIOR ART
The use of network devices such as switches, bridges, computers, network processors etc., for transferring information within communications network is well known in the prior art. One of the requirements placed on these devices is the need for them to transport large volumes of data often referred to as bandwidth.
To meet the high bandwidth requirement the devices are provided with Clear Channels. A Clear Channel is a channel with high bandwidth which transmits large amounts of data on a single flow, as opposed to channelized links which carry several lower bandwidth data flows on a single physical link.
In order to provide Clear Channels with an ample supply of data, high speed storage sub-systems are required. High speed storage systems such as Static Random Access Memories (SRAM) etc., can be used to meet high bandwidth requirements. But these memories are expensive and as a result increase the price of the devices in which they are used. The cost problem is further worsened if such high price memories were to be used to build storage systems for computers and Network Processors.
In addition to being expensive, the prior art high speed memories are low density. They can only store a limited amount of data. However, most applications, especially those related to internet and other technologies, require high density memories or storage systems. As a consequence even the prior art high speed memories are not suitable for many applications.
SUMMARY OF THE INVENTION
Accordingly the present invention provides a method for optimizing memory utilization comprising the acts of: receiving in a memory arbiter a plurality of memory access requests, at least one of the memory access requests being associated with a priority designation (the priority designation could arrive with the request, or be assigned subsequent to arrival); analyzing by said memory arbiter the memory access requests having a particular priority designation to determine the magnitude of memory bandwidth required; and sharing memory access with at least one other request if the memory request with the particular priority designation does not require full memory bandwidth.
Preferably, there is provided high speed memories that are low cost and have high densities. As used in this document high speed memories preferably have large Bandwidth (BW) providing large amounts of data in a relatively short time interval.
Preferably the memory access request having a particular priority designation is permitted to utilize full memory bandwidth, if the memory request requires full memory bandwidth. In one embodiment, the particular priority designation is the highest.
Preferably the memory is provided having multiple buffers arranged in at least one slice and each buffer in the at least one slice partitioned into multiple Quadwords, and wherein the memory requests are seeking access to the at least one slice from multiple requesters.
The invention preferably includes methods that optimize utilization of a memory system by the resources using said memory system. In particular, the requests, read or write, from multiple requesters are preferably bundled so that for each memory access cycle the maximum allowable unit of information is read or written. By so doing the information throughput is enhanced, thereby allowing relatively low cost, high density, relatively slow access time memories, such as DDR DRAM, to be used to build storage for computers, Network Processors or similar devices. In accordance with a preferred embodiment, the requesters (i.e. sources of memory access requests) include the Receiver Controller, Embedded Processor Complex (EPC) Controller and Transmitter Controller in a Network Processor or like devices. The memory system preferably includes a plurality of buffers, formed from DDR DRAM modules, which are arranged into groups termed "slices". Each DDR DRAM is preferably partitioned into a plurality of buffers (1 through N) and is controlled by a DRAM controller. Each buffer is preferably partitioned into sections termed "Quadwords". In one embodiment the buffer is partitioned into Quadwords A, B, C, D. Both the buffer and the Quadwords in the buffer are addressable.
In accordance with a preferred embodiment, a memory arbiter monitors requests from the Receiver Controller, EPC Controller and Transmitter Controller. The memory arbiter preferably uses the requests to form a Memory Access Vector per slice of memory. For read requests, memory access priority is preferably given to the Transmitter Controller. If the request from the Transmitter Controller requires full memory bandwidth, the Memory Access Vector is preferably based upon the Transmitter Controller request only. If less than full memory bandwidth is required by the Transmitter Controller, any pending read requests by the EPC Controller are preferably merged with that from the Transmitter Controller to form the memory access vector per slice. The DRAM Controller in response to the memory access vector preferably outputs a full buffer of information containing data for the Transmitter Controller (if a full buffer of data has been requested) or data for the Transmitter Controller and EPC (if the transmitter had requested less than a full buffer). In essence any excess capacity, resulting from the Transmitter Controller requesting less than a full buffer of data, is preferably allocated to the EPC Controller.
In accordance with a preferred embodiment for Write Request, the Arbiter gives priority to the Receiver Controller. In a similar manner, any write requests in which the Receiver Controller has less than a full buffer payload is preferably augmented with data from the EPC.
As a consequence a full buffer of data is preferably always written or read on each memory access.
In accordance with another aspect, the invention provides a network processor including: a memory system for storing information; and a memory arbiter for granting access operatively coupled to said memory system; said memory arbiter including one or more request registers in which memory access requests are received, at least one priority register for storing priority designations for the requesters and a controller operatively coupled to the request registers and priority register, said controller including circuits for monitoring requests and request priorities to generate memory access vectors in which the highest priority request is allowed to share memory bandwidth with at least one other memory request if the highest priority request demands less than the full bandwidth.
According to one embodiment, there is provided a method to optimize utilization of memory comprising the acts of: a) providing the memory having multiple buffers arranged in at least one slice and each buffer in said at least one slice partitioned into multiple Quadwords; b) accepting in a memory arbiter multiple memory requests seeking access to the at least one slice from multiple requesters; c) assigning a predetermined priority by said memory arbiter to each one of the requests; d) analyzing by said memory arbiter the highest priority request to detect percentage of memory bandwidth required for said highest priority request; and e) sharing memory access with a lower priority request if the highest priority request does not utilize full memory bandwidth.
According to one embodiment, there is provided a method to optimize utilization of memory comprising the acts of: a) providing the memory having multiple buffers arranged in at least one slice and each buffer in said at least one slice partitioned into multiple Quadwords; b) accepting in a memory arbiter multiple memory requests seeking access to the at least one slice from multiple requesters; c) assigning a predetermined priority by said memory arbiter to each one of the requests; d) analyzing by said memory arbiter the highest priority request to detect the percentage of memory bandwidth required for said highest priority request; and e) permitting the highest priority request to utilize full memory bandwidth if said highest priority request requires full memory bandwidth or sharing the memory bandwidth with a lower priority request if said highest priority request demands less than full memory bandwidth.
Preferably the multiple requesters includes a receiver controller, transmitter controller and Embedded Processor Complex (EPC) Controller operably coupled within a Network Processor. In one embodiment the transmitter controller has the highest priority. The receiver controller may have the next highest priority. In one embodiment, the EPC shares memory bandwidth with the transmitter controller or the receiver controller. The request may include reads and writes. In one embodiment the read request is generated by the transmitter controller. In one embodiment the write request is provided by the receiver controller. Read and write requests may be provided by the EPC controller.
According to one embodiment there is provided a method for optimizing memory utilization comprising the acts of: receiving in a memory arbiter a plurality of memory access requests; providing in said memory arbiter a priority designation for at least one of the memory access requests; analyzing by said memory arbiter the memory access requests having the priority designation to determine magnitude of memory bandwidth required; and permitting the memory access request having the priority designation to utilize full memory bandwidth if said memory request with the priority designation requires full memory bandwidth or sharing the memory bandwidth with other requests if said memory access request having the priority designation demands less than full memory bandwidth.
Preferably the priority designation is the highest. In one embodiment sharing further includes combining memory bandwidth of the highest priority request with memory bandwidth of a lower priority request.
According to one embodiment, a network processor is provided which includes: a memory system that stores information; and a memory arbiter that grants access operatively coupled to said memory system; said memory arbiter including one or more request registers in which memory access requests are received, at least one priority register that stores priority designation for the requesters and a first controller operatively coupled to the request registers and priority register said controller including circuits that monitors requests and request priorities to generate memory access vectors in which the highest priority request is allowed utilization of full memory bandwidth if said highest priority request so demands or generate memory access request in which the highest priority request and a lower priority request share the full memory bandwidth.
Preferably the memory system includes a plurality of buffers arranged in at least one slice, and each buffer is partitioned into Quadwords . Preferably each slice is operatively coupled to at least one buffer controller. Preferably each buffer is 64 bytes partitioned into four Quadwords of 16 bytes each. Each slice may be fabricated from DDR DRAM.
Preferably there is further provided a receiver controller operatively coupled to the memory arbiter. There may also be a transmitter controller operatively coupled to said memory arbiter. There may also be an embedded processor complex, EPC, operatively coupled to the memory arbiter. Preferably in the embodiment including a transmitter controller there is a scheduler operatively coupled thereto.
In one embodiment, the first controller selectively performs the following steps to construct the buffer memory access vector: a) exclude slices scheduled for a refresh cycle (indicated by each DRAM controller); b) assign slices for all R (Read) requests of the Transmitter Controller; c) complement R-accesses from the corresponding EPC queue [Slice; Quadword]; d) assign slices to the EPC for globally W (Write) excluded slices (e.g. a slice excluded by all slice exclusion rules from the Receiver); e) assign slices to W requests in RR (Round Robin) fashion between non-excluded slices starting from the last assigned slice (the slice assigned to the Receiver Controller in the previous window); f) complement W-accesses from the corresponding EPC queue [Slice; Quadword]; and g) assign slices to EPC requests according to priority expressed by Weight.
According to one embodiment, there is provided an apparatus comprising: a memory partitioned into N sectors, N greater than 1; and a memory arbiter controller operatively coupled to said memory: said memory arbiter controller receiving at least two memory access requests, assigning memory access priority for said requests, analyzing a selected one of said requests to determine if said selected one of said memory requests will use full memory bandwidth for a particular memory access cycle, generating a memory access vector which assigns full memory bandwidth to the selected one of said requests if full memory bandwidth was requested and sharing memory bandwidth with another request if the full memory bandwidth was not used.
According to one embodiment, there is provided a method to access memory comprising the acts of: a) receiving in a memory arbiter a Read Request from a first Requester, said Read Request including information identifying a portion of the memory from which data is to be read; b) determining if the data to be returned uses all of the available memory bandwidth; and c) complementing the data to be returned for the first requester with data for a second requester if the full memory bandwidth is not used by the first requester.
According to one embodiment, there is provided a method comprising the acts of: a) receiving in an arbiter a request from a first requester seeking access to memory; b) determining what portion of memory bandwidth is to be used as a result of said request; c) assigning usage of the total memory bandwidth to the first Requester if the determination in step b) indicates full usage of the memory bandwidth; d) complementing the bandwidth usage of the first Requester with bandwidth usage requested by a second Requester if determination in step b) indicates the bandwidth usage of the first Requester is less than full memory bandwidth.
The request may be a read or a write. Preferably the second requester includes EPC (Embedded Processor Complex) . Preferably, the first Requester includes Receiver Controller.
It will be appreciated that the aforementioned could be implemented as a computer program.
BRIEF DESCRIPTION OF THE DRAWINGS
A preferred embodiment of the present invention will now be described, by way of example only, and with reference to the following drawings :
Figure 1A shows a block diagram of a Network Processor in which a preferred embodiment of the present invention is used.
Figure 1B shows a block diagram of Network Processor Requesters and memory system according to teachings of a preferred embodiment of the present invention.
Figure 2 shows a block diagram of the memory arbiter according to the teachings of a preferred embodiment of the present invention.
Figure 3 shows in accordance with a preferred embodiment of the present invention a functional block diagram illustrating a read request in which the full bandwidth is used by the Transmitter Controller.
Figure 4 shows in accordance with a preferred embodiment of the present invention a functional block diagram illustrating a Read Request in which different buffers from the same slice are accessed and assigned to different Target Port (TP) FIFO buffers. The Figure can also be used to illustrate a Write Request in which data would flow in the opposite direction in accordance with an embodiment of the present invention.
Figure 5 shows in accordance with a preferred embodiment of the present invention a functional block diagram illustrating a Read Request in which different buffers from different slices are accessed and one Quadword is allotted to the EPC Controller.
Figure 6 shows an alternate embodiment of the memory arbiter.
Figure 7 shows a flowchart for the algorithm used in a Write or Read operation in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Figure 1A shows a block diagram of a network processor (NP) in which a preferred embodiment of the present invention set forth below is implemented. The network processor includes an Ingress Side 10 and an Egress Side 12. The Ingress Side 10 and Egress Side 12 are symmetrical about the imaginary axis AA. Data traffic accessing the Ingress Side 10 enters on Flexbus 4 or the WRAP transmission line and exits on Switch Interface 10'. Likewise, data entering Egress Side 12 enters on Switch Interface 12' and exits on the conductor labelled Flexbus 4'. The Flexbusses 4, 4' and Switch Interfaces (Int) 10', 12' could be considered as Clear Channels delivering 10 Gbps and 14 Gbps, respectively. The Switch Interface 10' is coupled to Switch Interface 12' by a switch assembly (not shown) such as Prizma developed and marketed by IBM.
Still referring to Figure 1A, the Ingress Side 10 includes data flow chip 14, EPC chip 16, data buffers 18 and scheduler chip 20. The Egress Side 12 includes data flow chip 14', EPC chip 16', data buffers 18' and scheduler chip 20'. A WRAP transmission line interconnects data flow chip 14' to data flow chip 14. The named components are interconnected as shown in the figure. It should be noted that elements with like names in the figure are substantially similar and the description of one covers the description of the other. By way of example, EPC chip 16 and EPC chip 16' are substantially identical. Likewise, components with similar names are also identical. The configuration shown in Figure 1A provides bidirectional functionality. A packet walk through the chipset will be used to explain its functionality. An HDLC frame extracted from a Sonet/SDH stream by a framer device (not shown) is received on Flexbus 4 (Ingress side) and forwarded into data flow chip 14. The data from the frame is written into data buffers 18. Once a complete frame has been written into the data buffers 18, the frame context (FCB) is enqueued into one of the EPC (Embedded Processor Complex) queues (G-FIFO in Figure 1B). The EPC chip 16 reads out the frame pointers and processes the respective frame in one of its picoprocessors. During frame processing the EPC can issue requests for reading/writing the appropriate parts of the frame (e.g. L2 and L3 headers) from/into the data buffer. After frame processing is completed, the EPC passes the frame context to the data flow chip, which enqueues the frame into the queueing structure of the scheduler chip 20. The scheduler in scheduler chip 20 selects the frame from the appropriate queue for transmission, which means that the corresponding frame pointer is passed to the data flow chip. The transmitted frame is read out from the data buffer and is transmitted on switch interface 10' in the form of PRIZMA cells.
PRIZMA cells are sixty-four bytes cells carrying segments of variable-size frames, or complete Asynchronous Transfer Mode (ATM) cells. The cells are fed over switch interface 10 ' into a cross point switch (not shown) . In a preferred embodiment the cross point switch is a product named PRIZMA manufactured and marketed by International Business Machines Corporation.
Still referring to Figure 1A, in the Egress direction on Egress Side 12 PRIZMA cells are received on switch interface 12' and forwarded to data buffers 18'. After receiving a complete frame the frame pointer is enqueued for processing in the EPC queues. Egress EPC 16' retrieves the frame context from the data flow chip and processes the frame header in the picocode running in the EPC picoprocessors (not shown). The result of the processing is the frame context, which is passed to data flow chip 14' and is enqueued in the appropriate queue of the scheduler 20'. The scheduler in scheduler chip 20' selects the frame to be transmitted and this is then read out from the data buffer 18' and transmitted on the line interface labelled Flexbus 4' of data flow chip 14'. For brevity, only the portion of the network processor that is relevant to the invention in accordance with a preferred embodiment will be discussed further.
Figure 1B shows a block diagram of the data flow chip and memory system (Sys) 21 according to the teachings of a preferred embodiment of the present invention. Data into the data flow chip is provided on the bus labelled Data_in and data out of the chip is transmitted on the bus labelled Data_out. As stated previously, Data_in and Data_out are Clear Channels transmitting large amounts of data. The memory system 21 is made up of a plurality of DDR DRAM modules termed slice 0 through slice N. In the embodiment shown in Figure 1B, N=5. As will be explained below, each slice is made up of a plurality of buffers and is connected by an individual bus 0-5 to a separate DRAM controller in the data flow chip. The DRAM controller is a conventional DRAM controller and provides write, read, refresh and other functions to the slice which it services. Because DRAM Controllers are well known within the art they will not be discussed further.
Still referring to Figure 1B, the functional blocks in the data flow chip include Receiver Controller 22, Memory arbiter 24, FCB arbiter 26, BCB arbiter 28, EPC controller 30, Buffer acceptance and accounting 32, scheduler interface controller 34, and transmitter controller 36. The QDR SRAM 38 stores a list of buffers that are in memory system 21. QDR SRAM 40 stores information relative to frames in the target blade (T/B) and target port (TP) queues. The memory arbiter 24 interfaces the data flow chip to the memory system 21. To this end the memory arbiter collects read (R)/write (W) requests from the transmitter, receiver and embedded processor complex (EPC) controllers 36, 22 and 30, and schedules access towards each individual data store memory slice. As will be explained subsequently, each memory slice includes a plurality of buffers, each buffer being 64 bytes. It should be noted that other size bandwidths of data can be designed. Frame data are then written into different buffers spread over different slices of memory in order to maximize use of memory bandwidth. When reading data from memory, the data is pulled out in 64-byte units. Stated another way, the bandwidth going into or out of memory is 64 bytes. The memory arbiter according to the preferred embodiment makes sure that any access to the memory (Read or Write) has a payload of 64 bytes. If a request, from a requester, has less than 64 bytes, the payload is augmented by data from another requester.
Still referring to Figure 1B, receiver controller 22 receives data from the incoming bus labelled Data_in and issues write requests in order to write received data into individual buffers in memory system 21. Likewise, transmitter controller 36 issues read requests in order to transmit selected frames on Data_out. The EPC controller 30 terminates different messages from/to the EPC and issues read/write requests to the data store (memory system 21). It also keeps track of frames waiting before processing (G-FIFOs). The buffer acceptance and accounting block 32 is responsible for the enqueue/discard decision on a per-frame basis. It also maintains the queue-filling level on a TB/TP basis and provides this information to the switch fabric on the switch interface. The BCB and FCB memory arbiters provide scheduling of different accesses for linked-list operations such as chain/dechain FCB or BCB and lease/release FCB or BCB.
Figure 2 shows a functional block diagram of the memory arbiter according to a preferred embodiment of the present invention. The function of the memory arbiter is to provide access to memory system 21. The memory arbiter accepts requests from receiver controller 22, transmitter controller 36 and EPC controller 30. The requests are prioritized, with transmitter controller 36 having the highest priority, receiver controller 22 the next highest priority and EPC controller 30 the lowest. Of course, a different order of priorities may be selected. In granting permission for access to the memory, the arbiter makes sure that each memory access utilizes the maximum bandwidth of data permitted by the memory. If the memory request is for reading data, the request would be a read request from transmitter controller 36 and/or EPC controller 30. The memory arbiter analyzes the request from the transmitter controller; if the request requires the full memory bandwidth, the arbiter generates an access vector having a command-per-slice part and an address-per-bank part. The access vector is delivered to the appropriate memory controller and the data is extracted from memory.
As explained herein, each buffer in the memory has 64 bytes partitioned into four Quadwords A, B, C and D, with each Quadword having 16 bytes. If the transmitter controller requires less than 4 Quadwords in any memory access, the unused Quadwords are given to the EPC controller. Write requests are issued by the receiver controller and the EPC controller. Like read requests, all write requests require 4 Quadwords of information to be delivered to the memory controller. If the receiver controller writes less than 4 Quadwords, the unused Quadwords are assigned to the EPC controller. As a consequence, every access to the memory writes or reads 4 Quadwords, 64 bytes of data. By so doing, the maximum memory bandwidth is utilized with no wasted cycles.
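By way of illustration only, the 4-Quadword complementing rule described above can be sketched in software as follows. This is a toy model, not the claimed hardware; the function name and the owner tags ("primary", "EPC:…", "idle") are invented for the example.

```python
QUADWORD_SLOTS = "ABCD"  # four 16-byte Quadwords per 64-byte buffer

def fill_access(primary_slots, epc_pending):
    """Build the owner map for one 64-byte memory access.

    primary_slots: Quadword letters the high-priority requester needs
                   (transmitter on reads, receiver on writes).
    epc_pending:   queued EPC Quadword requests for the same slice.
    Unused slots are complemented with EPC requests so that every
    access moves a full 4-Quadword (64-byte) payload when possible.
    """
    owners = {}
    epc = list(epc_pending)
    for slot in QUADWORD_SLOTS:
        if slot in primary_slots:
            owners[slot] = "primary"
        elif epc:
            owners[slot] = "EPC:" + epc.pop(0)  # spare slot given to EPC
        else:
            owners[slot] = "idle"
    return owners
```

With a transmitter request for Quadwords C and D only, slots A and B are handed to the first two pending EPC requests, so the full 64-byte payload is still used.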
Still referring to Figure 2, the memory arbiter includes a bus structure 40 interconnecting memory system 21, receiver controller 22, EPC controller 30 and transmitter controller 36. The receiver controller 22 interfaces the memory arbiter to the switch or line interface. The EPC controller interfaces the arbiter to the EPC chip (Figure 1). The transmitter controller 36 interfaces the memory arbiter to the Data_out bus (Figure 1B). The arbiter includes memory arbiter controller 42, which receives the respective requests shown in the figure and generates the access vector that is supplied to the respective controller to access individual slices in the memory system. The memory arbiter arbitration works in 11-cycle windows (one cycle equals 6 ns). At the start of the access window, the memory arbiter receives the following inputs (requests) to be scheduled in the next window:
Transmitter Controller Requests - each represented by the BCB address (BCBA) of the buffer to be read, an RF flag indicating whether the buffer can be released, and a Quadword mask complementing each BCBA. The Quadword mask, indicating which Quadwords within this buffer are to be effectively read, allows the memory arbiter to complement the access for unused Quadwords with ones from the EPC. In a preferred embodiment of this invention 0, 1, 2 or 3 requests can be made by the transmitter controller in one memory access window. The respective requests are shown by arrows interconnecting the transmitter controller 36 with memory arbiter controller 42. The direction of the arrow shows the direction in which the requests flow.
Receiver Controller Requests - represented by slice exclusion masks and two Quadword masks. The two Quadword masks indirectly indicate how many buffers preferably need to be allocated per request (e.g. if one of the Quadword masks is '0000' it means only one buffer shall be allocated) and which memory banks (Quadwords) should be used in the different buffers. As stated previously, the receiver controller 22 has the second highest priority to access memory. The requests allowed to the receiver controller are shown as arrows emanating from the receiver controller to the memory arbiter controller 42.
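The mask convention can be illustrated with a small decoder sketch. It assumes, purely for the example, that each mask is a 4-character bit string whose bits mark Quadwords A through D in order; the function name is hypothetical.

```python
def buffers_needed(mask_a, mask_b):
    """Decode a receiver request's two 4-bit Quadword masks.

    A mask of '0000' means the corresponding buffer need not be
    allocated; a non-zero mask names the Quadwords (A..D) the
    receiver will write in that buffer.
    Returns (buffer count, list of Quadword letters per buffer).
    """
    active = [m for m in (mask_a, mask_b) if m != "0000"]
    quadwords = [[qw for qw, bit in zip("ABCD", m) if bit == "1"]
                 for m in active]
    return len(active), quadwords
```

So a request carrying masks '1100' and '0000' asks for a single buffer using Quadwords A and B, matching the '0000'-means-one-buffer example in the text.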
EPC Controller Requests - represented by queues per slice, per action and per Quadword. The memory arbiter shall assign all remaining slices based on the weight of individual Quadword requests. This weight is proportional to the age of the Quadword request, expressed by a 6-bit value. Moreover, the memory arbiter can complement accesses of the transmitter and receiver by reading corresponding Quadword access requests from the EPC request queueing system. In a preferred embodiment of the invention the maximum number of Quadword accesses that the EPC can be granted in one memory access window is limited to 8 read and 8 write Quadwords. The single-headed arrow indicates information flow passing between the EPC controller and memory arbiter controller 42. Still referring to Figure 2, the memory arbiter controller 42 receives the requests from the respective requesters (receiver controller 22, transmitter controller 36 and EPC controller 30) and generates the access vector which is delivered to the appropriate slice controller (Ctrl) for reading or writing information from or into buffers in the selected slice. Excess bandwidth that is not utilized by the receiver controller or the transmitter controller is assigned to the EPC controller. As a consequence, all accesses to memory utilize the full memory bandwidth. Based upon the input from the requesters, the memory arbiter controller 42 performs the following slice selection algorithm in order to construct the buffer memory access vector for the next window:
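The age-weighted EPC grant limit can be sketched as below. The text states only that the weight is a 6-bit value proportional to age and that at most 8 Quadwords per direction are granted per window; the increment-per-window aging policy and the saturation behaviour in this sketch are assumptions for illustration.

```python
EPC_MAX_QW_PER_WINDOW = 8   # per direction (read or write), as in the text
AGE_MAX = 63                # weight is expressed by a 6-bit value

def grant_epc(requests):
    """Grant up to 8 EPC Quadword accesses, oldest (highest weight) first.

    requests: list of (name, age) pairs.  Ungranted requests age by one
    per window (an assumed policy), saturating at the 6-bit maximum.
    Returns (granted names, remaining requests with ages bumped).
    """
    ranked = sorted(requests, key=lambda r: r[1], reverse=True)
    granted = [name for name, _ in ranked[:EPC_MAX_QW_PER_WINDOW]]
    remaining = [(name, min(age + 1, AGE_MAX))
                 for name, age in ranked[EPC_MAX_QW_PER_WINDOW:]]
    return granted, remaining
```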
• Exclude slices scheduled for refresh cycle (indicated by each DRAM controller)
• Assign slices for all R (Read) requests of transmitter controller
• Complement R-accesses from corresponding EPC queue (slice: Quadword)
• Assign slice to EPC for globally W (Write) excluded slices (slices excluded by all slice exclusion rules from receiver)
• Assign slices to W requests in round robin (RR) fashion between non-excluded slices, starting from the last assigned slice (slice assigned to receiver controller in previous window)
• Complement W-access by EPC access from corresponding EPC queue (slice: Quadword)
• Assign slice to EPC requests according to priority expressed by weight.
The slice selection algorithm can be implemented in logic hardware or can be coded as microcode running in a picoprocessor. The selection of hardware logic or picocode to implement the above selection algorithm is a design choice and is well within the skill of one skilled in the art given the algorithm set forth above.
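As a rough software sketch only, the bullet steps above could be expressed as follows. The data structures (dicts of slice-to-Quadword lists, the "RECV" tag) are simplifications invented for the example; the weight-based EPC ordering of the last step is omitted, and real picocode or hardware logic would differ.

```python
def build_access_vector(slices, refresh, xmit_reads, recv_excluded,
                        recv_writes, epc_reads, epc_writes, last_write_slice):
    """Construct a per-window access vector per the listed steps.

    slices:        list of slice numbers.
    refresh:       slices the DRAM controllers will refresh this window.
    xmit_reads:    {slice: Quadwords} read requests from the transmitter.
    recv_excluded: slices excluded by the receiver's exclusion masks.
    recv_writes:   number of write buffers the receiver needs.
    epc_reads/epc_writes: per-slice EPC Quadword request queues.
    last_write_slice: where the previous window's round robin stopped.
    """
    vector = {}
    available = [s for s in slices if s not in refresh]  # exclude refresh

    # Assign slices for transmitter R requests, complementing unused
    # Quadwords from the corresponding EPC read queue
    for s, qws in xmit_reads.items():
        if s in available:
            vector[s] = ("R", qws + epc_reads.get(s, [])[:4 - len(qws)])
            available.remove(s)

    # Give globally W-excluded slices to EPC write requests
    for s in list(available):
        if s in recv_excluded and s in epc_writes:
            vector[s] = ("W", epc_writes[s][:4])
            available.remove(s)

    # Round-robin receiver W requests over non-excluded slices,
    # starting after the slice assigned in the previous window,
    # complementing with EPC write Quadwords
    writable = [s for s in available if s not in recv_excluded]
    n = len(slices)
    order = sorted(writable, key=lambda s: (s - last_write_slice - 1) % n)
    for s in order[:recv_writes]:
        vector[s] = ("W", ["RECV"] + epc_writes.get(s, [])[:3])
        available.remove(s)

    # Remaining slices go to EPC requests (weight ordering omitted)
    for s in available:
        if s in epc_reads:
            vector[s] = ("R", epc_reads[s][:4])
        elif s in epc_writes:
            vector[s] = ("W", epc_writes[s][:4])
    return vector
```

A refreshing slice never appears in the vector, a partially filled transmitter read is topped up from the EPC read queue, and the receiver's write lands on the next eligible slice after the previous window's assignment.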
Turning to Figure 6 for the moment, an alternate embodiment for the memory arbiter is shown. The alternate memory arbiter includes memory arbiter controller (CTRL) 42 connected to request registers A, B, C and priority table 44. The request registers store requests from the respective requesters: request register A stores requests from the transmitter (XMIT) controller, request register B stores requests from the receiver (RECV) controller and register C stores requests from the EPC controller. The priority table 44 stores a priority designation for each of the requesters. Thus, the transmitter controller, which has the highest priority, is assigned 1; the receiver controller, with the next highest priority, 2; and the EPC controller, with the lowest priority, 3. In operation the memory arbiter controller, designed in accordance with the above selection algorithm, uses the information in the registers together with the priority levels to generate the memory access vector.
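A minimal model of this register-plus-priority-table arrangement might look as follows; the class and method names are invented, and real hardware would of course implement this as registers and combinational logic rather than objects.

```python
class MemoryArbiter:
    """Toy model of the Figure 6 arbiter: three request registers
    plus a priority table consulted by the controller."""

    def __init__(self):
        # A = XMIT, B = RECV, C = EPC, as in the alternate embodiment
        self.registers = {"A": None, "B": None, "C": None}
        self.priority = {"A": 1, "B": 2, "C": 3}  # 1 = highest

    def post(self, reg, request):
        """Latch a request into the named request register."""
        self.registers[reg] = request

    def next_grants(self):
        """Return pending requests ordered by the priority table."""
        pending = [(self.priority[r], req)
                   for r, req in self.registers.items() if req is not None]
        return [req for _, req in sorted(pending)]
```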
Figures 3, 4 and 5 show examples which further explain the invention in accordance with a preferred embodiment. In the figures the feature of utilizing full memory bandwidth when data is extracted or read from memory is demonstrated. The data in memory was written in earlier write cycles.
Figure 3 shows a functional block diagram illustrating a read request in which the full bandwidth is used by the transmitter controller. As stated, the transmitter controller has the highest priority for reading data from memory. The illustration includes memory 51, timing representation 48, memory arbiter controller 42, and preparation area 46. Memory 51 includes a plurality of buffers arranged in sets termed slices. In the figure, slices 0, 1, 2, 3 and 4 are shown. However, this should not be construed as a limitation, since more or fewer slices can be used. In addition, 3 buffers (labelled 1, 2 and 3) are shown in each slice. However, this should be construed as an example and not a limitation. The number of buffers used in a slice depends on the choice of the designer. As a consequence, the number of buffers in a slice is N, where N can be any number dependent on the designer's choice. As stated above, each of the buffers is 64 bytes partitioned into sectors termed Quadwords A, B, C and D. Each Quadword is 16 bytes. Each slice is fabricated from a DDR DRAM module partitioned into N buffers. The numerals in buffer 1 mean that slice 1, buffer 1 is filled with data containing Quadwords 3, 4, 1 and 2. Likewise, slice 3, buffer 1 is filled with data containing Quadwords 8, 5, 6 and 7. Each Quadword stores a double word (16 bytes). It should be noted other granularities could be selected. Quite likely this data was received in the receiver controller 22 (Figure 1) and at an earlier time loaded or written into memory 51.
Still referring to Figure 3, 48 shows the graphical representation of the timing of the transfer of the data removed from storage and rearranged (rotated). The graphical representation has space for Quadwords A, B, C and D. The memory access is 11 cycles, approximately 66 nanoseconds. In each memory cycle two buffers in different memory slices can be accessed simultaneously. Memory arbiter 42 was previously discussed and will not be described again in detail. Suffice it to say that it takes requests from the transmitter controller and the EPC and arranges them so as to extract the requested information from memory 51.
Still referring to Figure 3, preparation area 46 contains the resources that are necessary to manage data removed from storage at the request of the transmitter controller 36 (Figure 1B). The preparation area 46 includes port control blocks (PCBs) and a set of target port buffers labelled TP 0, TP 1, TP 2, TP 3, TP 4 ... TP N. The number of TP (Target Port) buffers depends on the designer; therefore, the showing of 5 should not be construed as a limitation. The closed arrow labelled RR indicates the round robin procedure whereby the buffers are filled or serviced in a clockwise direction. The PCB contains a listing of buffers in the preparation area. In the example shown there are 64 PCBs. The PCB also includes the next slices which are to be extracted from memory. With reference to the figure, slices 1 and 3 are to be removed and loaded into target port buffer 0. The information in this buffer will subsequently be transported through port 0 onto the Data_out line (Figure 1B). Each of the target port buffers contains 128 bytes (8 Quadwords).
Still referring to Figure 3, in operation the transmitter controller raises requests to the memory arbiter controller requesting that the information in slice 1, buffer 1 and slice 3, buffer 1 of the memory be read and loaded into target port buffer 0. The EPC read request is also presented to the memory arbiter controller 42. Since the request by the transmitter controller requires transmission of two full buffers, there is no room, during this read memory cycle, to accommodate any requests from the EPC for read action. It should be noted that since the request is for slices 1 and 3, the information can be read out simultaneously, rearranged in staging area 48 and transported out into TP buffer 0 for subsequent transmission through port 0. The information from staging area 48 is read into TP buffer 0 in correct sequential order. In summary, Figure 3 depicts a case where the transmitter controller accesses memory and utilizes the entire bandwidth. In this case the EPC does not have access to memory during this access window. The transmitter controller is receiving two full buffers (one from slice 1 and one from slice 3) and queueing this data into target port buffer 0 (TP 0).
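The staging-area rotation described above can be illustrated with the Figure 3 data itself (slice 1 holds Quadwords 3, 4, 1, 2 and slice 3 holds 8, 5, 6, 7). The functions below are a sketch only, with invented names; they map each buffer's physical slots back into frame order and concatenate two buffers into a 128-byte TP entry.

```python
def rotate_buffer(buffer_qws):
    """Order in which the staging area must emit a buffer's slots.

    buffer_qws maps physical slot (A..D) to the sequence number that
    Quadword carries within the frame.
    """
    return [slot for slot, _ in
            sorted(buffer_qws.items(), key=lambda kv: kv[1])]

def fill_target_port(buffers):
    """Concatenate rotated buffers into one 128-byte (8-Quadword) TP entry."""
    out = []
    for qws in buffers:
        out.extend(seq for _, seq in
                   sorted(qws.items(), key=lambda kv: kv[1]))
    return out
```

Applied to the Figure 3 contents, slice 1's slots are read out C, D, A, B, and TP 0 receives the frame data in correct sequential order 1 through 8.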
For a write operation the description of Figure 3 is equally applicable, with data flowing in the opposite direction; as a consequence, further description of the write operation is not given. Figure 4 shows an example in which different buffers from the same slice are accessed and distributed to different FIFO port buffers to optimize memory access. As can be seen, except for the data pattern stored in memory 51, the structure of Figure 4 is substantially similar to that of Figure 3, previously described. Therefore, only the additional features in Figure 4 will be discussed further. Regarding memory 51, information is loaded into slice 1, buffer 1, Quadwords C and D; slice 1, buffer 2, Quadwords A, B, C and D; slice 2, buffer 1, Quadwords A and D; and slice 2, buffer 3, Quadwords B, C and D. The Quadwords are identified by respective numbers. The information in the PCBs indicates that data at slice 1, buffer 1 is to be loaded into TP buffer 0. Likewise, data in slice 2, buffer 1 is to be loaded into TP 1. It should be noted that additional information may be required in the PCBs in order to give complete and accurate instructions regarding moving data from memory to the appropriate TP buffers. In order to make the description less complicated the additional information is omitted, but the disclosure is sufficient to enable one skilled in the art to provide the additional information.
Because the transmitter request is less than a full memory bandwidth of data (for any of the TPs), a full memory bandwidth of Quadwords is read in each memory cycle and redistributed to the TPs.
Still referring to Figure 4, for slice 1, buffer 1 only Quadwords C and D are requested by the transmitter controller for TP buffer 0; A and B are unused and are filled with Quadwords 4 and 1 from buffer 2 for TP 2. Likewise, for slice 2, buffer 1, only Quadwords A and D are used by the transmitter controller for TP buffer 1. Therefore, unused Quadword B is filled with data labelled "2" from slice 2, buffer 2 and Quadword C is filled with data labelled "2" from buffer 3. The data is arranged by numbers as shown in the staging area and in the respective target port FIFO buffers. The movement of data from the memory 51 to the respective TP buffers is indicated by appropriate single-headed arrows. By way of example, arrow 52 indicates Quadwords 1 and 2 of slice 1 are sequentially arranged in TP 0; arrow 54 indicates Quadwords 1 and 2 of slice 2 are sequentially placed in TP 1; arrow 56 shows Quadwords 1 and 4 of slice 2 are sequentially placed in TP 2; arrow 58 shows Quadword 2 of buffer 2, slice 2 is placed in TP 3; and arrow 60 shows Quadword 2 of buffer 2 is placed in TP 4.
It should be noted that in Figure 4 the total memory bandwidth is used in each memory cycle even though the data is distributed to different TP FIFO (Target Port First In First Out) buffers. Turning to Figure 5, the schematic demonstrates the situation wherein the memory request from the transmitter controller is less than a full memory bandwidth and the unused Quadword in slice 2 is filled with Quadword "2" of buffer 3, which is delivered via arrow 62 to the EPC in an RR (Round Robin) manner. The remaining portion of Figure 5 is substantially similar to Figure 4, whose previous description is applicable and incorporated by reference.
Figure 7 shows a flowchart for the algorithm used to grant memory access to the transmitter controller, receiver controller or EPC. The algorithm includes process steps labelled Step A through Step G. Step A is a global step relating to all the slices (S=0 through N, N being the total number of slices in the system; for example, in the present embodiment S is labelled 0 through 5). In summary, Step A determines slice eligibility for writing or reading information. If the slice does not need to be refreshed then it is eligible for assignment. Steps B and C relate to read activity performed by the transmitter controller, whereas Steps D through G relate to write activity performed by the receiver controller. The EPC can perform either writes or reads and can be associated with either the read routine or the write routine.
Still referring to Figure 7, Step B grants access to the transmitter controller to read a requested number of slices R from the memory. R can be 1 through a maximum number. In the embodiment shown in this application R is set to 3. Of course other values of R can be used depending on the choice of the designer. Step C complements the read operation by granting unused Quadwords in a read request to the EPC. Stated another way, if the read request does not use all the Quadwords, then the unused ones are assigned to the EPC. The EPC can have several read requests waiting to go to memory. Therefore, the algorithm assigns the read requests to the EPC in a round robin fashion.
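The round robin service of Step C can be sketched with a simple rotating pointer; the class below is illustrative only (the name and interface are invented) and stands in for what would be a small counter in hardware.

```python
class RoundRobin:
    """Rotating pointer used when several EPC requesters compete
    for the Quadwords left over by a read grant."""

    def __init__(self, n):
        self.n = n
        self.last = n - 1  # so the first grant starts at requester 0

    def grant(self, ready):
        """Pick the next ready requester after the last one served."""
        for i in range(1, self.n + 1):
            cand = (self.last + i) % self.n
            if cand in ready:
                self.last = cand
                return cand
        return None  # nothing pending this window
```

Each grant advances the pointer past the requester just served, so requesters that stay ready are served in strict rotation.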
Still referring to Figure 7, the write request is controlled by Steps D through G and is performed by the receiver controller or EPC. Step D is based on the principle that buffers from adjoining portions of the same frame cannot be written in the same slice. This means the adjoining information is spread across different slices of the memory. As a consequence, certain slices in memory are not eligible to be written into. In Step D the receiver controller identifies those slices that cannot be written; the slices are identified by X, which can be from 1 to a maximum value. In Step E the algorithm grants to the EPC eligible slices requested for write that are located in the X slices identified in Step D but not in Step B. In Step E, X represents the slices excluded in Step D while R represents slices used for reading in Step B.
In Step F the algorithm grants in a round robin fashion the slices requested by the receiver controller. The round robin assignment is necessary because the receiver controller can request N slices, where N is greater than 1. It should be noted that the receiver controller will not ask for slices that are excluded in Step D. In addition, the slices granted to the receiver controller are those not granted for read (R). In Step G the Quadwords which are not used by the receiver controller for writing are granted to the EPC. The grant is done in a round robin fashion because the EPC can make multiple write requests.
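The eligibility filtering behind Steps D and F can be condensed into one predicate. This sketch assumes, for illustration, that the only exclusion tracked per frame is the slice holding the frame's previous buffer; the function name and arguments are hypothetical.

```python
def eligible_write_slices(all_slices, frame_last_slice, refresh, read_assigned):
    """Slices where the receiver may write the frame's next buffer.

    Per Step D, a buffer may not land in the slice holding the frame's
    previous buffer, so adjoining frame data spreads across slices;
    slices being refreshed (Step A) or already granted for reads
    (Step B) are also ineligible.
    """
    return [s for s in all_slices
            if s != frame_last_slice      # Step D exclusion
            and s not in refresh          # Step A: refresh this window
            and s not in read_assigned]   # Step B: granted to reads
```

With six slices, the previous buffer in slice 2, slice 5 refreshing and slices 1 and 3 granted for reads, only slices 0 and 4 remain for the round robin of Step F.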

Claims

1. A method for optimizing memory utilization comprising the steps of:
receiving in a memory arbiter a plurality of memory access requests, at least one of the memory access requests being associated with a priority designation;
analyzing by said memory arbiter the memory access requests having a particular priority designation to determine the magnitude of memory bandwidth required; and
sharing memory access with at least one other request if the memory request with the particular priority designation does not require full memory bandwidth.
2. The method of claim 1 further comprising the step of:
permitting the memory access request having a particular priority designation to utilize full memory bandwidth if said memory request with the particular priority designation requires full memory bandwidth.
3. The method of claim 1 or 2, wherein the particular priority designation is the highest.
4. The method of claim 1, 2 or 3 comprising:
providing the memory having multiple buffers arranged in at least one slice, each buffer in said at least one slice partitioned into multiple Quadwords, and wherein the memory requests are seeking access to the at least one slice from multiple requesters.
5. The method of any preceding claim, wherein multiple requesters make memory access requests, said multiple requesters including a receiver controller, transmitter controller and Embedded Processor Complex (EPC) Controller operably coupled within a Network Processor.
6. The method of claim 5 wherein the transmitter controller has the highest priority and the receiver controller has next highest priority.
7. The method of Claim 6 wherein the EPC controller shares memory bandwidth with the transmitter controller or the receiver controller.
8. The method of any of claims 5 to 7 wherein the requests include reads and writes and wherein at least one read request is generated by the transmitter controller and at least one write request is provided by the receiver controller.
9. The method of Claim 8 wherein read and write requests are provided by the EPC controller.
10. A network processor includes:
a memory system for storing information; and
a memory arbiter for granting access operatively coupled to said memory system; said memory arbiter including one or more request registers in which memory access requests are received, at least one priority register for storing priority designations for the requesters, and a controller operatively coupled to the request registers and priority register, said controller including circuits for monitoring requests and request priorities to generate memory access vectors in which the highest priority request is allowed to share memory bandwidth with at least one other memory request if the highest priority request demands less than the full bandwidth.
11. The network processor of claim 10, wherein the highest priority request is allowed utilization of full memory bandwidth if said highest priority request so demands.
12. The network processor of Claim 10 or 11 wherein the memory system includes a plurality of buffers arranged in at least one slice, and each buffer is partitioned into Quadwords.
13. The network processor of Claim 12 further including a receiver controller operatively coupled to the memory arbiter; a transmitter controller operatively coupled to said memory arbiter; and an Embedded Processor Complex (EPC) Controller, operatively coupled to the memory arbiter.
14. The network processor of Claim 13 wherein the controller selectively performs the following steps to construct buffer memory access vector:
a) Excluding slices scheduled for refresh cycle; b) Assigning slices for all R (Read) requests of Transmitter Controller;
c) Complementing R-accesses from corresponding EPC queue [Slice; Quadword];
d) Assigning slice to EPC for globally W (Write) excluded slices;
e) Assigning slices to W requests in RR (Round Robin) fashion between non-excluded slices starting from last assigned slice (slice assigned to Receiver Controller in previous window) ;
f) Complementing W-accesses from corresponding EPC queue [Slice; Quadword]; and
g) Assigning slice to EPC requests according to priority expressed by weight.
15. A computer program for optimizing memory utilization comprising program code means adapted to perform the method of any of claims 1 to 9.
PCT/GB2002/001484 2001-04-03 2002-03-28 Apparatus and method for efficiently sharing memory bandwidth in a network processor WO2002082286A2 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2002580183A JP4336108B2 (en) 2001-04-03 2002-03-28 Apparatus and method for efficiently sharing memory bandwidth in a network processor
KR1020037011580A KR100590387B1 (en) 2001-04-03 2002-03-28 Apparatus and method for efficiently sharing memory bandwidth in a network processor
DE60205231T DE60205231T2 (en) 2001-04-03 2002-03-28 DEVICE AND METHOD FOR EFFICIENT ALLOCATION OF MEMORY BAND WIDTH IN A NETWORK PROCESSOR
AT02708513T ATE300763T1 (en) 2001-04-03 2002-03-28 APPARATUS AND METHOD FOR EFFICIENT ALLOCATION OF MEMORY BANDWIDTH IN A NETWORK PROCESSOR
EP02708513A EP1374072B1 (en) 2001-04-03 2002-03-28 Apparatus and method for efficiently sharing memory bandwidth in a network processor
AU2002242874A AU2002242874A1 (en) 2001-04-03 2002-03-28 Apparatus and method for efficiently sharing memory bandwidth in a network processor

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US28106301P 2001-04-03 2001-04-03
US60/281,063 2001-04-03
US10/068,392 US6757795B2 (en) 2001-04-03 2002-02-05 Apparatus and method for efficiently sharing memory bandwidth in a network processor
US10/068,392 2002-02-05

Publications (2)

Publication Number Publication Date
WO2002082286A2 true WO2002082286A2 (en) 2002-10-17
WO2002082286A3 WO2002082286A3 (en) 2003-10-30

Family

ID=26748923

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2002/001484 WO2002082286A2 (en) 2001-04-03 2002-03-28 Apparatus and method for efficiently sharing memory bandwidth in a network processor

Country Status (10)

Country Link
US (1) US6757795B2 (en)
EP (1) EP1374072B1 (en)
JP (1) JP4336108B2 (en)
KR (1) KR100590387B1 (en)
CN (1) CN1251102C (en)
AT (1) ATE300763T1 (en)
AU (1) AU2002242874A1 (en)
DE (1) DE60205231T2 (en)
TW (1) TW563028B (en)
WO (1) WO2002082286A2 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7353288B1 (en) * 2001-10-17 2008-04-01 Ciena Corporation SONET/SDH payload re-mapping and cross-connect
JP3970581B2 (en) * 2001-11-09 2007-09-05 富士通株式会社 Transmission apparatus and transmission system
US7346067B2 (en) * 2001-11-16 2008-03-18 Force 10 Networks, Inc. High efficiency data buffering in a computer network device
JP3864250B2 (en) * 2002-10-31 2006-12-27 インターナショナル・ビジネス・マシーンズ・コーポレーション Exclusive control device, exclusive control method, program, and recording medium
US6954387B2 (en) * 2003-07-15 2005-10-11 International Business Machines Corporation Dynamic random access memory with smart refresh scheduler
US20050160188A1 (en) * 2004-01-20 2005-07-21 Zohar Bogin Method and apparatus to manage memory access requests
US7137091B1 (en) * 2004-02-13 2006-11-14 Sun Microsystems, Inc. Hierarchical repeater insertion
US7602777B2 (en) 2004-12-17 2009-10-13 Michael Ho Cascaded connection matrices in a distributed cross-connection system
US8254411B2 (en) * 2005-02-10 2012-08-28 International Business Machines Corporation Data processing system, method and interconnect fabric having a flow governor
CN100432957C (en) * 2005-02-12 2008-11-12 美国博通公司 Method for management memory and memory
CN100438693C (en) * 2005-03-21 2008-11-26 华为技术有限公司 Service access method for packet domain
US7987306B2 (en) * 2005-04-04 2011-07-26 Oracle America, Inc. Hiding system latencies in a throughput networking system
US20060248375A1 (en) 2005-04-18 2006-11-02 Bertan Tezcan Packet processing switch and methods of operation thereof
US7474662B2 (en) * 2005-04-29 2009-01-06 International Business Machines Corporation Systems and methods for rate-limited weighted best effort scheduling
US20070011287A1 (en) * 2005-05-16 2007-01-11 Charbel Khawand Systems and methods for seamless handover in a streaming data application
EP2016496B1 (en) * 2006-04-21 2014-03-12 Oracle America, Inc. Hiding system latencies in a throughput networking system
US7817652B1 (en) * 2006-05-12 2010-10-19 Integrated Device Technology, Inc. System and method of constructing data packets in a packet switch
US7747904B1 (en) 2006-05-12 2010-06-29 Integrated Device Technology, Inc. Error management system and method for a packet switch
US7706387B1 (en) 2006-05-31 2010-04-27 Integrated Device Technology, Inc. System and method for round robin arbitration
US7693040B1 (en) 2007-05-01 2010-04-06 Integrated Device Technology, Inc. Processing switch for orthogonal frequency division multiplexing
US8433859B2 (en) * 2008-11-25 2013-04-30 Mediatek Inc. Apparatus and method for buffer management for a memory operating
US20110320699A1 (en) * 2010-06-24 2011-12-29 International Business Machines Corporation System Refresh in Cache Memory
US8490107B2 (en) 2011-08-08 2013-07-16 Arm Limited Processing resource allocation within an integrated circuit supporting transaction requests of different priority levels
KR102364382B1 (en) * 2017-12-20 2022-02-16 한국전기연구원 Apparatus and method for controlling dual-port memory

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4325116A (en) * 1979-08-21 1982-04-13 International Business Machines Corporation Parallel storage access by multiprocessors
US4797815A (en) * 1985-11-22 1989-01-10 Paradyne Corporation Interleaved synchronous bus access protocol for a shared memory multi-processor system
US5603061A (en) * 1991-07-23 1997-02-11 Ncr Corporation Method for prioritizing memory access requests using a selected priority code
EP1059587A1 (en) * 1999-06-09 2000-12-13 Texas Instruments Incorporated Host access to shared memory with a high priority mode

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4633434A (en) 1984-04-02 1986-12-30 Sperry Corporation High performance storage unit
US4761736A (en) 1986-01-02 1988-08-02 Commodore Business Machines, Inc. Memory management unit for addressing an expanded memory in groups of non-contiguous blocks
US5034914A (en) 1986-05-15 1991-07-23 Aquidneck Systems International, Inc. Optical disk data storage method and apparatus with buffered interface
EP0407697A1 (en) 1989-07-10 1991-01-16 Seiko Epson Corporation Memory apparatus
US5559953A (en) 1994-07-01 1996-09-24 Digital Equipment Corporation Method for increasing the performance of lines drawn into a framebuffer memory
US5581310A (en) 1995-01-26 1996-12-03 Hitachi America, Ltd. Architecture for a high definition video frame memory and an accompanying data organization for use therewith and efficient access therefrom
US5781201A (en) 1996-05-01 1998-07-14 Digital Equipment Corporation Method for providing improved graphics performance through atypical pixel storage in video memory
US5920898A (en) 1996-08-16 1999-07-06 Unisys Corporation Memory control unit providing optimal timing of memory control sequences between different memory segments by optimally selecting among a plurality of memory requests
US5907863A (en) 1996-08-16 1999-05-25 Unisys Corporation Memory control unit using preloaded values to generate optimal timing of memory control sequences between different memory segments
US6031842A (en) 1996-09-11 2000-02-29 Mcdata Corporation Low latency shared memory switch architecture
US5966143A (en) 1997-10-14 1999-10-12 Motorola, Inc. Data allocation into multiple memories for concurrent access
US5870325A (en) 1998-04-14 1999-02-09 Silicon Graphics, Inc. Memory system with multiple addressing and control busses

Also Published As

Publication number Publication date
EP1374072A2 (en) 2004-01-02
AU2002242874A1 (en) 2002-10-21
US6757795B2 (en) 2004-06-29
KR100590387B1 (en) 2006-06-15
JP4336108B2 (en) 2009-09-30
DE60205231T2 (en) 2006-05-24
ATE300763T1 (en) 2005-08-15
CN1498374A (en) 2004-05-19
EP1374072B1 (en) 2005-07-27
JP2004523853A (en) 2004-08-05
CN1251102C (en) 2006-04-12
KR20040028725A (en) 2004-04-03
DE60205231D1 (en) 2005-09-01
TW563028B (en) 2003-11-21
US20020141256A1 (en) 2002-10-03
WO2002082286A3 (en) 2003-10-30

Similar Documents

Publication Publication Date Title
EP1374072B1 (en) Apparatus and method for efficiently sharing memory bandwidth in a network processor
EP1435043B1 (en) Method and apparatus for scheduling a resource to meet quality-of-service restrictions
US6976135B1 (en) Memory request reordering in a data processing system
US8990498B2 (en) Access scheduler
US5487170A (en) Data processing system having dynamic priority task scheduling capabilities
EP0993680B1 (en) Method and apparatus in a packet routing switch for controlling access at different data rates to a shared memory
US7684424B2 (en) Memory interleaving in a high-speed switching environment
US20030223453A1 (en) Round-robin arbiter with low jitter
US7506081B2 (en) System and method of maintaining high bandwidth requirement of a data pipe from low bandwidth memories
JP2004242333A (en) System, method, and logic for managing memory resources shared in high-speed exchange environment
US20040199704A1 (en) Apparatus for use in a computer system
US9104531B1 (en) Multi-core device with multi-bank memory
US6330632B1 (en) System for arbitrating access from multiple requestors to multiple shared resources over a shared communications link and giving preference for accessing idle shared resources
US7242684B2 (en) Architecture for switching packets in a high-speed switching environment
US20040243770A1 (en) Data transfer system
US20040190524A1 (en) Scheduler device for a system having asymmetrically-shared resources
JP5691419B2 (en) Request transfer apparatus and request transfer method
GB2341771A (en) Address decoding
GB2341766A (en) Bus architecture
Akesson An analytical model for a memory controller offering hard-real-time guarantees
GB2341772A (en) Primary and secondary bus architecture
GB2341765A (en) Bus idle usage
GB2341767A (en) Bus arbitration
GB2341769A (en) Data packet reordering
GB2341699A (en) Inter-module data transfer

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A2
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents
Kind code of ref document: A2
Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase
Ref document number: 1020037011580
Country of ref document: KR

WWE Wipo information: entry into national phase
Ref document number: 028071433
Country of ref document: CN

WWE Wipo information: entry into national phase
Ref document number: 2002580183
Country of ref document: JP

WWE Wipo information: entry into national phase
Ref document number: 2002708513
Country of ref document: EP

WWP Wipo information: published in national office
Ref document number: 2002708513
Country of ref document: EP

REG Reference to national code
Ref country code: DE
Ref legal event code: 8642

WWP Wipo information: published in national office
Ref document number: 1020037011580
Country of ref document: KR

WWG Wipo information: grant in national office
Ref document number: 2002708513
Country of ref document: EP

WWG Wipo information: grant in national office
Ref document number: 1020037011580
Country of ref document: KR