Publication number: US 20060271746 A1
Publication type: Application
Application number: US 11/499,232
Publication date: Nov 30, 2006
Filing date: Aug 3, 2006
Priority date: Oct 20, 2003
Also published as: US7120743, US8589643, US20050086441, US20060136683
Inventors: James Meyer, Cory Kanski
Original Assignee: Meyer James W, Cory Kanski
External Links: USPTO, USPTO Assignment, Espacenet
Arbitration system and method for memory responses in a hub-based memory system
US 20060271746 A1
Abstract
A memory hub includes a local queue that stores local memory responses, a bypass path that passes downstream memory responses, and a buffered queue coupled to the bypass path that stores downstream memory responses from the bypass path. A multiplexer is coupled to the local queue, buffered queue, and the bypass path and outputs responses from a selected one of the queues or the bypass path responsive to a control signal. Arbitration control logic is coupled to the multiplexer and the queues and develops the control signal to control the response output by the multiplexer.
Claims(2)
1. A memory hub, comprising:
a local queue adapted to receive local memory responses, and operable to store the local memory responses;
a bypass path adapted to receive downstream memory responses, and operable to pass the downstream memory responses;
a buffered queue coupled to the bypass path and operable to store downstream memory responses;
a multiplexer coupled to the local queue, buffered queue and bypass path, the multiplexer being operable to output responses from a selected one of the queues or the bypass path responsive to a control signal; and
arbitration control logic coupled to the multiplexer, the arbitration logic operable to develop the control signal to control the selection of responses output by the multiplexer.
2-47. (canceled)
Description
    TECHNICAL FIELD
  • [0001]
    This invention relates to computer systems, and, more particularly, to a computer system including a system memory having a memory hub architecture.
  • BACKGROUND OF THE INVENTION
  • [0002]
    Computer systems use memory devices, such as dynamic random access memory (“DRAM”) devices, to store data that are accessed by a processor. These memory devices are normally used as system memory in a computer system. In a typical computer system, the processor communicates with the system memory through a processor bus and a memory controller. The processor issues a memory request, which includes a memory command, such as a read command, and an address designating the location from which data or instructions are to be read. The memory controller uses the command and address to generate appropriate command signals as well as row and column addresses, which are applied to the system memory. In response to the commands and addresses, data are transferred between the system memory and the processor. The memory controller is often part of a system controller, which also includes bus bridge circuitry for coupling the processor bus to an expansion bus, such as a PCI bus.
  • [0003]
    Although the operating speed of memory devices has continuously increased, this increase in operating speed has not kept pace with increases in the operating speed of processors. Even slower has been the increase in operating speed of memory controllers coupling processors to memory devices. The relatively slow speed of memory controllers and memory devices limits the data bandwidth between the processor and the memory devices.
  • [0004]
    In addition to the limited bandwidth between processors and memory devices, the performance of computer systems is also limited by latency problems that increase the time required to read data from system memory devices. More specifically, when a memory device read command is coupled to a system memory device, such as a synchronous DRAM (“SDRAM”) device, the read data are output from the SDRAM device only after a delay of several clock periods. Therefore, although SDRAM devices can synchronously output burst data at a high data rate, the delay in initially providing the data can significantly slow the operating speed of a computer system using such SDRAM devices.
  • [0005]
    One approach to alleviating the memory latency problem is to use multiple memory devices coupled to the processor through a memory hub. In a memory hub architecture, a memory hub controller is coupled over a high speed data link to several memory modules. Typically, the memory modules are coupled in a point-to-point or daisy chain architecture such that the memory modules are connected one to another in series. Thus, the memory hub controller is coupled to a first memory module over a first high speed data link, with the first memory module connected to a second memory module through a second high speed data link, and the second memory module coupled to a third memory module through a third high speed data link, and so on in a daisy chain fashion.
  • [0006]
    Each memory module includes a memory hub that is coupled to the corresponding high speed data links and a number of memory devices on the module, with the memory hubs efficiently routing memory requests and memory responses between the controller and the memory devices over the high speed data links. Each memory request typically includes a memory command specifying the type of memory access (e.g., a read or a write) called for by the request, a memory address specifying a memory location that is to be accessed, and, in the case of a write memory request, write data. The memory request also normally includes information identifying the memory module that is being accessed, but this can be accomplished by mapping different addresses to different memory modules. A memory response is typically provided only for a read memory request, and typically includes read data as well as an identifying header that allows the memory hub controller to identify the memory request corresponding to the memory response. However, it should be understood that memory requests and memory responses having other characteristics may be used. In any case, in the following description, memory requests issued by the memory hub controller propagate downstream from one memory hub to another, while memory responses propagate upstream from one memory hub to another until reaching the memory hub controller. Computer systems employing this architecture can have a higher bandwidth because a processor can access one memory device while another memory device is responding to a prior memory access. For example, the processor can output write data to one of the memory devices in the system while another memory device in the system is preparing to provide read data to the processor. Moreover, this architecture also provides for easy expansion of the system memory without concern for degradation in signal quality as more memory modules are added, such as occurs in conventional multi-drop bus architectures.
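    The request and response formats described above can be summarized with a brief illustrative sketch. The following Python fragment is provided for illustration only and is not part of the patent disclosure; the class and field names are assumptions rather than terms used in the specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MemoryRequest:
    request_id: int                       # identifier generated by the memory hub controller
    command: str                          # type of access, e.g. "read" or "write"
    address: int                          # memory location to access; module selection may be address-mapped
    write_data: Optional[bytes] = None    # present only for write memory requests

@dataclass
class MemoryResponse:
    response_id: int                      # identifying header matching the originating request
    read_data: bytes                      # read data returned upstream to the memory hub controller
```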
  • [0007]
    Although computer systems using memory hubs may provide superior performance, they may nevertheless fail to operate at optimum speeds for a variety of reasons. For example, even though memory hubs can provide computer systems with a greater memory bandwidth, they still suffer from latency problems of the type described above. More specifically, although the processor may communicate with one memory device while another memory device is preparing to transfer data, it is sometimes necessary to receive data from one memory device before the data from another memory device can be used. When that is the case, the latency problem continues to slow the operating speed of such computer systems.
  • [0008]
    Another factor that can reduce the speed of memory transfers in a memory hub system is the transferring of read data upstream (i.e., back to the memory hub controller) over the high-speed links from one hub to another. Each hub must determine whether to send local responses first or to forward responses from downstream memory hubs first, and the way in which this is done affects the actual latency of a specific response and, even more, the overall latency of the system memory. This determination may be referred to as arbitration, with each hub arbitrating between local responses and downstream responses that must be forwarded upstream.
  • [0009]
    There is a need for a system and method for arbitrating data transfers in a system memory having a memory hub architecture to lower the latency of the system memory.
  • SUMMARY OF THE INVENTION
  • [0010]
    According to one aspect of the present invention, a memory hub includes a local queue that receives and stores local memory responses. A bypass path receives downstream memory responses and passes the downstream memory responses while a buffered queue is coupled to the bypass path and stores downstream memory responses. A multiplexer is coupled to the local queue, the bypass path, and the buffered queue, and outputs one of the responses responsive to a control signal. Arbitration control logic is coupled to the multiplexer and develops the control signal to control the source of the responses output by the multiplexer.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0011]
    FIG. 1 is a block diagram of a computer system including a system memory having a high bandwidth memory hub architecture according to one example of the present invention.
  • [0012]
    FIG. 2 is a functional block diagram illustrating an arbitration control component contained in each of the memory hubs of FIG. 1 according to one example of the present invention.
  • [0013]
    FIG. 3 is a functional flow diagram illustrating the flow of upstream memory responses in a process executed by the arbitration control component of FIG. 2 where downstream responses are given priority over local responses according to one embodiment of the present invention.
  • [0014]
    FIG. 4 is a functional flow diagram illustrating the flow of upstream memory responses in a process executed by the arbitration control component of FIG. 2 to provide equal bandwidth for local and downstream memory responses.
  • DETAILED DESCRIPTION OF THE INVENTION
  • [0015]
    A computer system 100 according to one example of the present invention is shown in FIG. 1. The computer system 100 includes a system memory 102 having a memory hub architecture including a plurality of memory modules 130, each memory module including a corresponding memory hub 140. Each of the memory hubs 140 arbitrates between memory responses from the memory module 130 on which the hub is contained and memory responses from downstream memory modules, and in this way the memory hubs effectively control the latency of respective memory modules in the system memory by controlling how quickly responses are returned to a system controller 110, as will be described in more detail below. In the following description, certain details are set forth to provide a sufficient understanding of the present invention. One skilled in the art will understand, however, that the invention may be practiced without these particular details. In other instances, well-known circuits, control signals, timing protocols, and/or software operations have not been shown in detail, or have been omitted entirely, in order to avoid unnecessarily obscuring the present invention.
  • [0016]
    The computer system 100 includes a processor 104 for performing various computing functions, such as executing specific software to perform specific calculations or tasks. The processor 104 is typically a central processing unit (“CPU”) having a processor bus 106 that normally includes an address bus, a control bus, and a data bus. The processor bus 106 is typically coupled to cache memory 108, which is usually static random access memory (“SRAM”). Finally, the processor bus 106 is coupled to the system controller 110, which is also sometimes referred to as a “North Bridge” or “memory controller.”
  • [0017]
    The system controller 110 serves as a communications path to the processor 104 for the memory modules 130 and for a variety of other components. More specifically, the system controller 110 includes a graphics port that is typically coupled to a graphics controller 112, which is, in turn, coupled to a video terminal 114. The system controller 110 is also coupled to one or more input devices 118, such as a keyboard or a mouse, to allow an operator to interface with the computer system 100. Typically, the computer system 100 also includes one or more output devices 120, such as a printer, coupled to the processor 104 through the system controller 110. One or more data storage devices 124 are also typically coupled to the processor 104 through the system controller 110 to allow the processor 104 to store data or retrieve data from internal or external storage media (not shown). Examples of typical storage devices 124 include hard and floppy disks, tape cassettes, and compact disk read-only memories (CD-ROMs).
  • [0018]
    The system controller 110 also includes a memory hub controller (“MHC”) 132 that is coupled to the system memory 102 including the memory modules 130 a,b . . . n, and operates to apply commands to control and access data in the memory modules. The memory modules 130 are coupled in a point-to-point or daisy chain architecture through respective high speed links 134 coupled between the modules and the memory hub controller 132. The high-speed links 134 may be optical, RF, or electrical communications paths, or may be some other suitable type of communications paths, as will be appreciated by those skilled in the art. In the event the high-speed links 134 are implemented as optical communications paths, each optical communication path may be in the form of one or more optical fibers, for example. In such a system, the memory hub controller 132 and the memory modules 130 will each include an optical input/output port or separate input and output ports coupled to the corresponding optical communications paths. Although the memory modules 130 are shown coupled to the memory hub controller 132 in a daisy chain architecture, other topologies that may be used, such as a ring topology, will be apparent to those skilled in the art.
  • [0019]
    Each of the memory modules 130 includes the memory hub 140 for communicating over the corresponding high-speed links 134 and for controlling access to six memory devices 148, which are synchronous dynamic random access memory (“SDRAM”) devices in the example of FIG. 1. The memory hubs 140 each include input and output ports that are coupled to the corresponding high-speed links 134, with the nature and number of ports depending on the characteristics of the high-speed links. A fewer or greater number of memory devices 148 may be used, and memory devices other than SDRAM devices may also be used. The memory hub 140 is coupled to each of the system memory devices 148 through a bus system 150, which normally includes a control bus, an address bus, and a data bus.
  • [0020]
    As previously mentioned, each of the memory hubs 140 executes an arbitration process that controls the way in which memory responses associated with the memory module 130 containing that hub and memory responses from downstream memory modules are returned to the memory hub controller 132. In the following description, upstream memory responses associated with the particular memory hub 140 and the corresponding memory module 130 will be referred to as “local” upstream memory responses or simply “local responses,” while upstream memory responses from downstream memory modules will be referred to as downstream memory responses or simply “downstream responses.” In operation, each memory hub 140 executes a desired arbitration process to control the way in which local and downstream responses are returned to the memory hub controller 132. For example, each hub 140 may give priority to downstream responses and thereby forward such downstream responses upstream prior to local responses that need to be sent upstream. Conversely, each memory hub 140 may give priority to local responses and thereby forward such local responses upstream prior to downstream responses that need to be sent upstream. Examples of arbitration processes that may be executed by the memory hubs 140 will be described in more detail below.
  • [0021]
    Each memory hub 140 may execute a different arbitration process or all the hubs may execute the same process, with this determination depending on the desired characteristics of the system memory 102. It should be noted that the arbitration process executed by each memory hub 140 is only applied when a conflict exists between local and downstream memory responses. Thus, each memory hub 140 need only execute the corresponding arbitration process when both local and downstream memory responses need to be returned upstream.
  • [0022]
    FIG. 2 is a functional block diagram illustrating an arbitration control component 200 contained in the memory hubs 140 of FIG. 1 according to one embodiment of the present invention. The arbitration control component 200 includes two queues for storing associated memory responses. A local queue 202 receives and stores local memory responses LMR from the memory devices 148 on the associated memory module 130. A buffered queue 206 receives and stores downstream memory responses which cannot be immediately forwarded upstream through a bypass path 204. A multiplexer 208 selects responses from one of the queues 202, 206 or the bypass path 204 under control of arbitration control logic 210 and supplies the memory responses in the selected queue upstream over the corresponding high-speed link 134. The arbitration control logic 210 is coupled to the queues 202, 206 through a control/status bus 136, which allows the logic 210 to monitor the contents of each of the queues 202, 206, and utilizes this information in controlling the multiplexer 208 to thereby control the overall arbitration process executed by the memory hub 140. The control/status bus 136 also allows “handshaking” signals to be coupled from the queues 202, 206 to the arbitration logic 210 to coordinate the transfer of control signals from the arbitration logic 210 to the queues 202, 206.
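    For illustration only, the following Python sketch models the FIG. 2 component described above: the local queue 202, the bypass path 204, the buffered queue 206, and a multiplexer that forwards a response from whichever source the arbitration control logic selects. It is a behavioral approximation, not the patent's implementation, and all class and method names are assumptions.

```python
from collections import deque

class ArbitrationComponent:
    def __init__(self):
        self.local_queue = deque()     # local memory responses (queue 202)
        self.bypass = deque()          # downstream responses passing straight through (path 204)
        self.buffered_queue = deque()  # downstream responses that could not be forwarded at once (queue 206)

    def receive_local(self, response):
        self.local_queue.append(response)

    def receive_downstream(self, response, link_free=True):
        # A downstream response uses the bypass path if the upstream link is free;
        # otherwise it is parked in the buffered queue.
        (self.bypass if link_free else self.buffered_queue).append(response)

    def forward(self, source):
        # The multiplexer: output the next response from the source selected by the
        # arbitration control logic ("local", "bypass" or "buffered").
        queue = {"local": self.local_queue,
                 "bypass": self.bypass,
                 "buffered": self.buffered_queue}[source]
        return queue.popleft() if queue else None
```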
  • [0023]
    The specific operation of the arbitration control logic 210 in controlling the multiplexer 208 to provide responses from one of the queues 202, 206 or the bypass path 204 depends on the particular arbitration process being executed by the control logic. Several example arbitration processes that may be executed by the control logic 210 will now be described in more detail with reference to FIGS. 3 and 4. FIG. 3 is a functional flow diagram illustrating the flow of upstream memory responses in a process executed by the arbitration control component 200 of FIG. 2 where downstream responses are given priority over local responses according to one embodiment of the present invention. In the example of FIG. 3, the memory hub controller 132 applies a memory request to each of the memory modules 130 a, 130 b, and 130 c. Each of the memory modules 130 a-c provides a corresponding upstream response in response to the applied request, with the responses for the modules 130 a, 130 b, and 130 c being designated A1, B1, and C1, respectively. The responses B1 and C1 are assumed to arrive at the local queue 202 and bypass path 204 in the hub 140 of the module 130 b at approximately the same time. In this embodiment, the arbitration control logic 210 gives priority to downstream responses, and as a result the hub 140 in module 130 b forwards upstream the downstream response C1 first and thereafter forwards upstream the local response B1, as shown in FIG. 3.
  • [0024]
    If the response C1 arrives in the bypass path 204 in the hub 140 of the module 130 a at approximately the same time as the local response A1 arrives in the local queue 202, the arbitration control logic 210 forwards upstream the downstream response C1 prior to the local response A1. Moreover, if the response B1 arrives in the bypass path 204 in the hub 140 of module 130 a at approximately the same time as the downstream response C1, then arbitration control logic 210 forwards upstream the downstream response C1 followed by response B1 followed by local response A1, as shown in FIG. 3. The system controller 110 thus receives the responses C1, B1, and A1 in that order.
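    A brief illustrative sketch (not part of the patent disclosure) of the downstream-priority policy just described follows; the function and variable names are assumptions. It reproduces the FIG. 3 ordering in which the memory hub controller receives C1, B1, and A1.

```python
from collections import deque

def arbitrate_downstream_first(local_queue, downstream_queue):
    """Forward downstream responses before any local response."""
    if downstream_queue:
        return downstream_queue.popleft()
    if local_queue:
        return local_queue.popleft()
    return None

# FIG. 3 scenario at module 130 a: local response A1 waits while downstream C1 and B1 arrive.
local = deque(["A1"])
downstream = deque(["C1", "B1"])
print([arbitrate_downstream_first(local, downstream) for _ in range(3)])
# ['C1', 'B1', 'A1'] -- the order in which the memory hub controller receives the responses
```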
  • [0025]
    Because the arbitration control logic 210 in each memory hub 140 may execute an independent arbitration process, the arbitration control logic in the memory hub of the module 130 a could give priority to local responses over downstream responses. In this situation, if the responses C1 and B1 arrive at the bypass path 204 in the hub 140 of the module 130 a at approximately the same time as the local response A1 arrives in the local queue 202, the arbitration control logic 210 forwards upstream the local response A1 prior to the downstream responses C1 and B1. The memory hub controller 132 thus receives the responses A1, C1 and B1 in that order, as shown in parentheses in FIG. 3. Thus, by assigning different arbitration processes to different memory hubs 140, the latency of the corresponding memory modules 130 may be controlled. For example, in the first example of FIG. 3 where priority is given to downstream responses, the latency of the module 130 a is higher than in the second example, in which the module 130 a gives priority to local responses. In the second example, the memory hub controller 132 could utilize the module 130 a to store frequently accessed data so that the system controller can more quickly access this data. Note that in the second example the responses C1, B1 would first be transferred to the buffered queue 206 since they could not be forwarded upstream immediately, and after response A1 is forwarded the responses C1, B1 would be forwarded from the buffered queue.
  • [0026]
    FIG. 4 is a functional flow diagram illustrating the flow of upstream memory responses in a process executed by the arbitration control component 200 of FIG. 2 to alternate between a predetermined number of responses from local and downstream memory. In the example of FIG. 4, the memory hub controller 132 applies two memory requests to each of the memory modules 130 a, 130 b, and 130 c, with the requests applied to module 130 a being designated A1, A2, requests applied to module 130 b being designated B1, B2, and requests to module 130 c being designated C1, C2. The responses C1 and C2 are assumed to arrive at the bypass path 204 in the hub 140 of the module 130 b at approximately the same time as the local responses B1, B2 arrive at the local queue 202. The responses C1, C2 are transferred to the buffered queue 206 since they cannot be forwarded upstream immediately. The arbitration control logic 210 thereafter alternately forwards responses from the local queue 202 and the buffered queue 206. In the example of FIG. 4, the local response B1 from the local queue 202 is forwarded first, followed by the downstream response C1 from the buffered queue 206, then the local response B2 and finally the downstream response C2.
  • [0027]
    In the module 130 a, the responses B1, C1, B2, C2 are assumed to arrive at the bypass path 204 in the hub 140 at approximately the same time as the local responses A1, A2 arrive at the local queue 202. The responses B1, C1, B2, C2 are transferred to the buffered queue 206 since they cannot be forwarded upstream immediately. The arbitration control logic 210 thereafter operates in the same way to alternately forward responses from the local queue 202 and the buffered queue 206. The local response A1 from the local queue 202 is forwarded first, followed by the downstream response B1 from the buffered queue 206, then the local response A2 followed by the downstream response C1. At this point, the local queue 202 is empty while the buffered queue 206 still contains the responses B2, C2. No conflict between local and downstream responses exists, and the arbitration control logic 210 accordingly forwards upstream the remaining responses B2, C2 to empty the buffered queue 206.
  • [0028]
    In the arbitration process illustrated by FIG. 4, the arbitration control logic 210 forwards a predetermined number of either local or downstream responses prior to forwarding the other type of response. For example, in the process just described, the arbitration control logic 210 forwards one local response and then one downstream response. Alternatively, the arbitration control logic 210 could forward two local responses followed by two downstream responses, or three local responses followed by three downstream responses, and so on. Furthermore, the arbitration control logic 210 could forward N local responses followed by M downstream responses, where N and M may be selected to give either local or downstream responses priority.
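    The following illustrative sketch (again, not part of the patent disclosure) shows this bandwidth-sharing policy generalized to N local responses followed by M downstream responses; with N = M = 1 it reproduces the FIG. 4 ordering at module 130 b. The function name and the deque-based queue representation are assumptions.

```python
from collections import deque

def arbitrate_n_then_m(local_queue, buffered_queue, n, m):
    forwarded = []
    while local_queue or buffered_queue:
        for _ in range(n):                      # forward up to N local responses first
            if local_queue:
                forwarded.append(local_queue.popleft())
        for _ in range(m):                      # then up to M downstream responses
            if buffered_queue:
                forwarded.append(buffered_queue.popleft())
    return forwarded

# With N = M = 1 this reproduces the FIG. 4 ordering at module 130 b: B1, C1, B2, C2.
print(arbitrate_n_then_m(deque(["B1", "B2"]), deque(["C1", "C2"]), 1, 1))
```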
  • [0029]
    In another embodiment, the arbitration control logic 210 of FIG. 2 executes an oldest first algorithm in arbitrating between local and downstream memory responses. In this embodiment, each memory response includes a response identifier portion and a data payload portion. The response identifier portion identifies a particular memory response and enables the arbitration control logic 210 to determine the age of a particular memory response. The data payload portion includes data being forwarded upstream to the memory hub controller 132, such as read data. In operation, the arbitration control logic 210 monitors the response identifier portions of the memory responses stored in the local queue 202 and the buffered queue 206 and selects the oldest response contained in either of these queues as the next response to be forwarded upstream. Thus, independent of queue 202, 206 in which a memory response is stored, the arbitration control logic 210 forwards the oldest responses first.
  • [0030]
    In determining the oldest response, the arbitration control logic 210 utilizes the response identifier portion of the memory response and a time stamp assigned to the memory request corresponding to the response. More specifically, the memory hub controller 132 generates a memory request identifier for each memory request. As the memory request passes through each memory hub 140, the arbitration control logic 210 of each hub assigns a time stamp to each request, with the time stamp indicating when the request passed through the memory hub 140. Thus, the control logic 210 in each hub 140 builds a table containing a unique memory request identifier and a corresponding time stamp for each memory request passing through the hub.
  • [0031]
    In each memory response, the response identifier portion corresponds to the memory request identifier, and thus the response for a given request is identified by the same identifier. The arbitration control logic 210 thus identifies each memory response stored in the local queue 202 and buffered queue 206 by the corresponding response identifier portion. The control logic 210 then compares the response identifier portion of each response in the queues 202, 206 to the table of request identifiers, and identifies the time stamp of the response identifier as the time stamp associated with the corresponding request identifier in the table. The control logic 210 does this for each response, and then forwards upstream the oldest response as indicated by the corresponding time stamp. The arbitration control logic 210 repeats this process to determine the next oldest response and then forwards that response upstream, and so on.
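    The oldest-first arbitration described in the two preceding paragraphs can be sketched as follows. This Python fragment is illustrative only; the table layout, method names, and use of a monotonic clock as the time stamp are assumptions rather than details taken from the specification.

```python
import time

class OldestFirstHub:
    def __init__(self):
        self.timestamps = {}       # request identifier -> time stamp recorded as the request passed through
        self.local_queue = []      # pending local responses as (response_id, data_payload) tuples
        self.buffered_queue = []   # pending downstream responses, same representation

    def note_request(self, request_id):
        # Record a time stamp for each memory request that passes through this hub.
        self.timestamps[request_id] = time.monotonic()

    def next_response(self):
        # Select, from either queue, the response whose matching request has the oldest
        # time stamp, remove it from its queue, and return it for forwarding upstream.
        candidates = [(queue, index)
                      for queue in (self.local_queue, self.buffered_queue)
                      for index in range(len(queue))]
        if not candidates:
            return None

        def age(entry):
            queue, index = entry
            response_id, _payload = queue[index]
            return self.timestamps[response_id]

        queue, index = min(candidates, key=age)
        return queue.pop(index)

# Example: request 1 passed through before request 2, so its response is forwarded first.
hub = OldestFirstHub()
for request_id in (1, 2):
    hub.note_request(request_id)
hub.buffered_queue.append((2, b"downstream read data"))
hub.local_queue.append((1, b"local read data"))
print(hub.next_response())   # (1, b'local read data')
```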
  • [0032]
    In the preceding description, certain details were set forth to provide a sufficient understanding of the present invention. One skilled in the art will appreciate, however, that the invention may be practiced without these particular details. Furthermore, one skilled in the art will appreciate that the example embodiments described above do not limit the scope of the present invention, and will also understand that various equivalent embodiments or combinations of the disclosed example embodiments are within the scope of the present invention. Illustrative examples set forth above are intended only to further illustrate certain details of the various embodiments, and should not be interpreted as limiting the scope of the present invention. Also, in the description above the operation of well known components has not been shown or described in detail to avoid unnecessarily obscuring the present invention. Finally, the invention is to be limited only by the appended claims, and is not limited to the described examples or embodiments of the invention.
Classifications
U.S. Classification: 711/148
International Classification: G06F12/00
Cooperative Classification: G06F13/1605
European Classification: G06F13/16A
Legal Events
Date: Jun 14, 2011
Code: AS
Event: Assignment
Owner name: ROUND ROCK RESEARCH, LLC, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:026445/0017
Effective date: 20110520