|Publication number||US20060095592 A1|
|Application number||US 10/977,770|
|Publication date||May 4, 2006|
|Filing date||Oct 29, 2004|
|Priority date||Oct 29, 2004|
|Also published as||US7334070|
|Original Assignee||International Business Machines Corporation|
The invention relates to computers and data processing systems, and in particular to communication links used to couple multiple nodes in a data processing system together in a daisy chain arrangement.
Computer technology continues to advance at a remarkable pace, with numerous improvements being made to the performance of both processors—the “brains” of a computer—and the memory that stores the information processed by a computer.
One aspect of computer technology that can have a significant impact on system performance is the communication between various components in a computer or other data processing system. The communications between components such as processors, memory devices, processing complexes (sets of interconnected processors and memory devices), peripheral devices, and even separate computers, can have a significant effect on the overall performance of a computer system. Moreover, even from the perspective of individual components, and the various sub-components that may be disposed on the same or separate integrated circuit chips, the manner in which data is communicated within a computer system is often a significant contributor to the speed and computing power of the system.
For example, one prevalent architecture utilized to connect memory devices to a processor is a multidrop bus architecture, where a plurality of address and data lines are routed between a processor or intermediate memory controller to a plurality of memory devices. The various lines in the bus essentially couple the memory devices in parallel with one another, and each device receives the same signals. Typically, shared bus architectures of this type, despite improvements in terms of greater width (number of address and/or data lines) and data transmission rates, have been hampered by a number of drawbacks. First, the parallel nature of the architecture, and the resulting signal alignment issues that are raised by communicating data in a parallel fashion, have become limiting factors on the overall performance of the architecture. Moreover, the aforementioned issues also place limits on the lengths of the interconnects, and thus the types of connectors and form factors that are supported. Furthermore, these architectures are characterized by relatively high connector counts, thus requiring a high number of signal paths between devices.
One relatively recent memory architecture that has been utilized to address some of the shortcomings of a shared bus architecture involves the use of point-to-point interconnects between multiple nodes or components in a data processing system. Often, the point-to-point interconnects utilize serial transmission as opposed to parallel transmission, which can reduce the number of interconnects, while providing comparable or greater transmission speed due to the elimination of many of the signal alignment issues raised by parallel architectures. Some point-to-point architectures rely on complex switching to route data to desired components or nodes; however, other point-to-point architectures rely on individual nodes or components to forward data intended for other components coupled to the architecture.
In many applications, the use of point-to-point interconnects provides comparatively greater performance, as well as reduced connection counts and greater flexibility in terms of interconnecting components or nodes coupled to the architecture. Moreover, through the use of redundant connections, greater reliability may be provided, whereby the failure of a connection or a particular node may be overcome by routing data communications around a failed node.
As noted above, while some point-to-point architectures rely on complex switching or redundant connections, other point-to-point architectures desirably omit comparable data routing functionality to reduce complexity and cost, and to increase overall performance in some applications.
One such architecture is often referred to as a daisy chain architecture, where a sequence of nodes or components is interconnected by means of point-to-point interconnects coupled between adjacent nodes in the system. Often, the point-to-point interconnects comprise pairs of unidirectional interconnects, with one unidirectional interconnect used for communicating data in one direction between the adjacent nodes, and the other interconnect used to forward data in the opposite direction between the nodes. In such a configuration, the unidirectional interconnects form two unidirectional communication links, ensuring that data can be communicated in either direction between any two nodes in the architecture.
Incumbent in a daisy chain architecture is a capability within each node for forwarding data destined for a subsequent node in the architecture to the next adjacent node. In this regard, many daisy chain architectures provide driver circuits that essentially relay or repeat received signals and forward such signals as necessary to the next node in the architecture.
One specific example of a daisy chain architecture is implemented in the fully buffered dual inline memory module (FB-DIMM) memory architecture, for which a formal specification has been established by the Joint Electron Device Engineering Council (JEDEC) of the Electronic Industries Alliance (EIA). The FB-DIMM specification defines a high speed serial interface in which a memory controller is coupled to an FB-DIMM, upon which are disposed multiple memory devices and a controller device incorporating an interface between the memory devices and the high speed serial interface. The controller device also includes driver circuitry for repowering received signals and passing those signals along to the next FB-DIMM in the chain.
As with other memory controller designs, many FB-DIMM memory controllers support multiple memory channels, whereby separate daisy chain arrangements of FB-DIMM's are coupled to each memory channel, permitting the memory channels to operate independently and in parallel with one another.
The high speed serial communication links between the components in an FB-DIMM architecture include separate unidirectional read and write channels made up of sets of differential signal pairs, and over which data and address information is passed. Separate clocking and control buses are also provided, but not implemented using point-to-point interconnects.
It has been found, however, that a conventional daisy chain architecture such as the FB-DIMM architecture is not readily suited for use in some high availability applications. In particular, one benefit of a conventional shared bus architecture is the ability to provide “hot” replacement or swapping of individual devices in an architecture. For example, some conventional shared bus memory architectures support the ability to remove and replace individual memory devices while a system is running, and without requiring the system to be shut down. In such circumstances, power is typically removed from an individual device, the device is physically removed from its connector (e.g., a slot for a memory device disposed on a module or card), a new device is inserted into the connector, and power is applied to the new device. So long as the system logic avoids attempts to access the device being replaced during the replacement procedure, other devices may continue to be accessed during the procedure, thus ensuring continued system availability. Furthermore, since the devices are essentially coupled in parallel via a shared bus, and all signals are propagated to all devices, the unavailability of one particular device does not interrupt the communication of signals to other devices.
A daisy chain architecture such as FB-DIMM, on the other hand, relies on individual components (here each FB-DIMM) to forward signals received from previous components in the chain to subsequent components in the chain. As such, an individual FB-DIMM could not be powered off and removed from the system without causing a discontinuity in the high speed serial interface that would prevent data from being communicated between the memory controller and any subsequent FB-DIMM's in the daisy chain.
As a result, conventional FB-DIMM and other daisy chain configurations may not be suitable for use in applications where high availability is desired.
The invention addresses these and other problems associated with the prior art by effectively bridging multiple memory channels together in a multi-channel memory architecture to enable data traffic associated with various nodes in a daisy chain arrangement to be communicated over both memory channels. Specifically, embodiments consistent with the invention couple a daisy chain arrangement of nodes, e.g., memory modules, disposed in a first memory channel to a second memory channel, with support for communicating data associated with one of the nodes over either or both of the first and second memory channels.
In one embodiment, for example, a multi-channel memory controller may couple a first memory channel to one end of a daisy chain arrangement of memory modules, and couple a second memory channel to the opposite end of the daisy chain arrangement (either directly or indirectly through another daisy chain arrangement of memory modules). By doing so, a discontinuity introduced in the daisy chain arrangement (e.g., due to a failure or removal of a node or a failure in a communication link coupled to a node), which would otherwise inhibit communication between the memory controller and any nodes located downstream of the discontinuity over the first memory channel, can be overcome by communicating data associated with any such downstream nodes over the second memory channel. In addition, in some embodiments, load balancing may be utilized to optimize bandwidth utilization and latencies over both memory channels, thus improving overall memory system performance.
Consistent with one aspect of the invention, a circuit arrangement may be utilized in a multi-channel memory system of the type including first and second memory channels, wherein each memory channel is configured to couple a plurality of nodes to one another in a daisy chain arrangement. The circuit arrangement may include a memory port configured to be coupled to the first memory channel, and a control circuit coupled to the memory port and configured to communicate data associated with a node in the second memory channel through the memory port and over the first memory channel. Consistent with another aspect of the invention, the circuit arrangement may be disposed in a memory controller circuit. Consistent with another aspect of the invention, the circuit arrangement may be disposed in a memory module.
Consistent with yet another aspect of the invention, an apparatus is provided, which includes a memory controller, a daisy chain arrangement of memory modules, and a bridging interconnect. The memory controller includes first and second memory ports respectively configured to drive first and second memory channels, with the daisy chain arrangement of memory modules disposed in the first memory channel and coupled at a first end to the first memory port. The bridging interconnect is coupled between the second memory port and a second end of the daisy chain arrangement of memory modules to enable the memory controller to communicate data associated with a memory module in the daisy chain arrangement over the second memory channel.
These and other advantages and features, which characterize the invention, are set forth in the claims annexed hereto and forming a further part hereof. However, for a better understanding of the invention, and of the advantages and objectives attained through its use, reference should be made to the Drawings, and to the accompanying descriptive matter, in which there are described exemplary embodiments of the invention.
The embodiments discussed and illustrated hereinafter utilize bridging between multiple memory channels in a multi-channel memory architecture or system to enable data traffic associated with nodes disposed in a daisy chain configuration coupled to a particular memory channel to be communicated over multiple memory channels. In the context of the invention, data is associated with a particular node when that data either is output by, or directed to, that node. Moreover, such data may include various types of information, including for example, write data, read data, command data, address information, status information, configuration information or practically any other type of information that may be input to or output by a node.
Also in the context of the invention, a daisy chain configuration is a point-to-point configuration whereby multiple nodes are chained together via point-to-point interconnects between adjacent nodes (i.e., nodes that are immediately next to one another in a defined sequence of nodes), which is in contrast to a multi-drop bus architecture whereby a shared bus is used to communicate information to all nodes. Individual nodes in a daisy chain configuration are typically able to receive data from a communication link and automatically forward such data along the communication link to subsequent nodes in the configuration if such data is not intended for local consumption.
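The forward-or-consume behavior of a daisy chain node described above can be sketched as follows (a minimal illustration; the `Node` class and `deliver` method are hypothetical names, not part of the specification):

```python
# Minimal sketch of daisy chain forwarding: each node consumes frames
# addressed to it and relays all other frames to the next adjacent node
# in the sequence.

class Node:
    def __init__(self, node_id, next_node=None):
        self.node_id = node_id
        self.next_node = next_node   # adjacent downstream node, or None
        self.received = []           # payloads consumed locally

    def deliver(self, dest_id, payload):
        if dest_id == self.node_id:
            self.received.append(payload)             # local consumption
        elif self.next_node is not None:
            self.next_node.deliver(dest_id, payload)  # repower and forward
        # else: end of chain (or a discontinuity) -- frame is lost

# Build a three-node chain: A -> B -> C
c = Node("C")
b = Node("B", c)
a = Node("A", b)
a.deliver("C", "write-data")   # relayed through A and B, consumed by C
```

Note that a frame for "C" passes through "A" and "B" without being consumed there, mirroring the repeat/relay behavior the text attributes to each node's driver circuitry.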
For example, as shown in the Drawings, wherein like numbers denote like parts throughout the several views, and in particular in
It will be appreciated that each communication link 18, 30, as well as the individual interconnect paths therein, may be implemented using any number of serial and/or parallel data signal paths, including, for example, any number of differential signal pairs and/or single-ended signal paths. It will also be appreciated that additional interconnects, e.g., a shared or multi-drop bus, may also be provided between memory controller 14 and nodes 12A-12C and 26A-26C in some applications, e.g., to provide power, clocking, or additional control signals.
Of note, nodes 12A-12C and nodes 26A-26C each define a sequence of nodes, with both starting (nodes 12A and 26A) and ending (nodes 12C and 26C) nodes defined for the sequence. In addition, a node that is farther away from a memory controller is considered to be “downstream” of a node that is closer to the memory controller, while a node that is closer to the memory controller than another node is considered to be “upstream” of the other node. It should also be appreciated that each daisy chain arrangement of nodes can include any number of nodes, and furthermore, that memory controller 14 may support any number of memory channels.
As will be apparent to one of ordinary skill in the art, one characteristic of a daisy chain configuration of nodes is that a discontinuity in a communication link (e.g., due to failure or shutdown of a node, or a failure in an interconnect between two nodes) conventionally inhibits the ability to relay data across the discontinuity. As such, a discontinuity such as the unavailability of node 12B, for example, would inhibit data from being communicated over the first memory channel from memory controller 14 to node 12C, and vice versa.
Embodiments consistent with the invention address this difficulty in part by bridging together multiple memory channels to permit data traffic associated with a node on one memory channel to be communicated over another memory channel.
The significance of such a configuration will be appreciated in the context of the scenario where a discontinuity arises in the first daisy chain arrangement 16, e.g., due to the unavailability of node 12B (which may be due to a failure in node 12B, a failure in an interconnect 18 coupled to node 12B, or simply due to node 12B being taken off-line). In this configuration, so long as the second memory channel supports the communication of data associated with nodes coupled to the first memory channel, data associated with node 12C from first daisy chain arrangement 16 may be communicated between node 12C and memory controller 14 via the path defined by communication links 30, nodes 26A-26C and bridging interconnect 36.
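The alternate-path selection in that scenario can be sketched as follows (a simplified model under assumed names; `route` and `reachable` are illustrative, and each channel is modeled as the ordered node sequence seen from the memory controller):

```python
# Sketch of channel selection around a discontinuity: prefer the node's
# own memory channel, and fall back to the bridged path through the
# second channel when a failed node blocks the first.

def reachable(chain, target, failed):
    """True if `target` can be reached along `chain` without first
    crossing the `failed` node."""
    for node in chain:
        if node == failed:
            return False
        if node == target:
            return True
    return False

def route(target, chain1, chain2, failed=None):
    """Pick a path to `target` in chain 1: the first memory channel if
    intact, otherwise through the second channel, across the bridging
    interconnect, and backwards along chain 1 from its far end."""
    if reachable(chain1, target, failed):
        return ("channel-1", chain1[:chain1.index(target) + 1])
    back = list(reversed(chain1))
    if reachable(back, target, failed):
        return ("channel-2", chain2 + back[:back.index(target) + 1])
    raise RuntimeError("node unreachable on both channels")

chain1 = ["12A", "12B", "12C"]
chain2 = ["26A", "26B", "26C"]
# With node 12B unavailable, traffic for 12C falls back to the
# path through nodes 26A-26C and the bridging interconnect.
```

A node can thus become unreachable only if discontinuities exist on both sides of it at once.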
To support the ability to communicate data associated with a node on one memory channel over another memory channel, typically each node and the memory controller are configured to pass the data in such a manner that the data is identified as being associated with the proper node on the proper memory channel, as well as to ensure that all of the data necessary to perform a desired operation is communicated over the appropriate memory channel. In the FB-DIMM implementation discussed hereinafter, for example, the Advanced Memory Buffer (AMB) chip on each memory module is specifically configured to support all types of data traffic (i.e., read data, write data, command data and status data) on both the read and write channels. Furthermore, the memory controller is specifically configured to direct data traffic to the proper memory channel, as appropriate. It will be appreciated that the implementation of such functionality into a memory controller and an AMB chip in an FB-DIMM environment, as well as in other multi-channel memory systems, would be well within the abilities of one of ordinary skill in the art having the benefit of the instant disclosure.
Again referring to
It will be appreciated that the number of nodes in each daisy chain arrangement may differ from one another. Moreover, it will be appreciated that the principles of the invention may be utilized in situations where no daisy chain arrangement of nodes is resident in a particular memory channel. As shown in multi-channel memory system 10′ of
As noted above, a multi-channel memory system may be used in a number of applications consistent with the invention.
Computer 50 generally includes one or more processors 52 coupled to a main storage 54 through one or more levels of cache memory disposed within a cache system 56. In some embodiments each processor 52 may include multiple processing cores. Furthermore, main storage 54 is coupled to a number of types of external devices via a system input/output (I/O) system 58, e.g., one or more networks 60, one or more workstations 62 and one or more mass storage devices 64. Any number of alternate computer architectures may be used in the alternative.
Also shown resident in main storage 54 is a typical software configuration for computer 50, including an operating system 66 (which may include various components such as kernels, device drivers, runtime libraries, etc.) accessible by one or more applications 68.
Computer 50, or any subset of components therein, may also be referred to hereinafter as an “apparatus”. It should be recognized that the term “apparatus” may be considered to incorporate various data processing systems such as computers and other electronic devices, as well as various components within such systems, including individual integrated circuit devices or combinations thereof. Moreover, within an apparatus may be incorporated one or more circuit arrangements, typically implemented on one or more integrated circuit devices, and optionally including additional discrete components interfaced therewith.
It should also be recognized that circuit arrangements are typically designed and fabricated at least in part using one or more computer data files, referred to herein as hardware definition programs, that define the layout of the circuit arrangements on integrated circuit devices. The programs are typically generated in a known manner by a design tool and are subsequently used during manufacturing to create the layout masks that define the circuit arrangements applied to a semiconductor wafer. Typically, the programs are provided in a predefined format using a hardware definition language (HDL) such as VHDL, Verilog, EDIF, etc. Thus, while the invention has and hereinafter will be described in the context of circuit arrangements implemented in fully functioning integrated circuit devices, those skilled in the art will appreciate that circuit arrangements consistent with the invention are capable of being distributed as program products in a variety of forms, and that the invention applies equally regardless of the particular type of computer readable signal bearing media used to actually carry out the distribution. Examples of computer readable signal bearing media include but are not limited to recordable type media such as volatile and non-volatile memory devices, floppy disks, hard disk drives, CD-ROM's, and DVD's, among others, and transmission type media such as digital and analog communications links.
Memory modules 74A-74E are interconnected with one another and with memory controller 72 via pairs of unidirectional high speed differential serial communication links defined by a plurality of point-to-point interconnects 82, 84.
As shown in
In this configuration, first daisy chain arrangement 86 is shown coupled to a first memory channel 90 driven by memory controller 72, while second daisy chain arrangement 88 is shown coupled to a second memory channel 92 driven by memory controller 72. Given the FB-DIMM standard's reliance on pairs of unidirectional high speed differential serial communication links in each memory channel, interconnects 82 are write channel interconnects that define a write data channel over which write data and commands are communicated from memory controller 72, while interconnects 84 are read channel interconnects that define a read data channel over which data and status information is provided to memory controller 72 by one or more of modules 74A-74H. As such, it will be appreciated that each memory channel 90, 92 is itself comprised of individual read and write data channels. It will be appreciated that additional interconnects, e.g., power, clocking and other control interconnects, are also provided by the FB-DIMM standard, but are not shown in
To implement bridging between the first and second memory channels, memory architecture 70 additionally includes a bridging interconnect comprising a pair of point-to-point interconnects 94, 96, both of which are coupled between otherwise unused end connections of ending or last memory modules (here memory modules 74D and 74H) of each daisy chain arrangement 86, 88. In addition, it should be noted that interconnect 94 couples the write data channel of first memory channel 90 to the read data channel of second memory channel 92, while interconnect 96 couples the read data channel of first memory channel 90 to the write data channel of second memory channel 92.
By configuring interconnects 94, 96 in this manner, write data and/or commands emitted from second memory channel 92 and intended for consumption by one of memory modules 74A-74D in first daisy chain arrangement 86 may be propagated along write channel interconnects 82 coupling memory modules 74E-74H to one another and to memory controller 72, over bridging interconnect 96 and then along read channel interconnects 84 coupling memory modules 74A-74D to one another until the desired destination is reached. Likewise, read data and/or commands intended to be supplied by one of memory modules 74A-74D to memory controller 72 via second memory channel 92 may be propagated along write channel interconnects 82 coupling memory modules 74A-74D to one another, over bridging interconnect 94 and then along read channel interconnects 84 through nodes 74E-74H and ultimately to the memory port for second memory channel 92 in memory controller 72. For communicating data associated with a memory module 74E-74H over first memory channel 90, a corresponding flow of data occurs in a similar manner to that described above for communicating data associated with a memory module 74A-74D over second memory channel 92.
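The crossover wiring just described can be sketched as a hop list (illustrative only, reusing the reference numerals from the text; the function name is hypothetical):

```python
# Sketch of the bridged write path: the bridge couples the write data
# channel of one memory channel to the read data channel of the other,
# so a write issued over the second channel reaches a module in the
# first chain by traversing all of chain 2, crossing bridging
# interconnect 96, then walking chain 1 backwards from its far end.

CHAIN_1 = ["74A", "74B", "74C", "74D"]   # first memory channel 90
CHAIN_2 = ["74E", "74F", "74G", "74H"]   # second memory channel 92

def bridged_write_path(target):
    """Hops taken by command/write data directed over the second memory
    channel to a module in the first daisy chain arrangement."""
    back = list(reversed(CHAIN_1))
    return CHAIN_2 + ["bridge-96"] + back[:back.index(target) + 1]

# A write to 74B via the second channel traverses 74E..74H on write
# channel interconnects 82, crosses the bridge, then reaches 74B via
# the read channel interconnects 84 of 74D and 74C.
path = bridged_write_path("74B")
```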
Memory controller 72 may include, for example, data and command logic block 100, which is utilized to initiate read and write operations in the memory storage and interface with a host processor. Incorporated into block 100 is load balancing logic 102, which is capable of implementing any number of load balancing algorithms to balance traffic between the first and second memory channels. Block 100 also includes failure detection logic 104, which is used to monitor the memory devices in the memory architecture, and may include, for example, Error Correcting Code (ECC) circuitry for correcting single or multi-bit failures, in a manner known in the art.
To drive the write data channel of the first memory channel, memory controller 72 includes a driver I/O block 106, which receives command/write data traffic over a data interconnect 114. Block 106 outputs to write channel data port 122 to drive an interconnect 82. Likewise, for the read data channel of the first memory channel, a receiver I/O block 108 is coupled to an interconnect 84 via a read channel data port 124, outputting status/read data traffic to block 100 via a data interconnect 116.
For the write data channel of the second memory channel, memory controller 72 includes a driver I/O block 110, which receives command/write data traffic over a data interconnect 118, and which outputs to write channel data port 126 to drive an interconnect 82. Likewise, for the read data channel of the second memory channel, a receiver I/O block 112 is coupled to an interconnect 84 via a read channel data port 128, outputting status/read data traffic to block 100 via a data interconnect 120.
Buffer device 80 includes a local DIMM DRAM control and data logic block 170, which is utilized to provide an interface between the write and read data channels and the various memory devices 78 on the respective module (e.g., via an internal memory bus 166). For the write data channel, a receiver I/O block 138 is coupled to data port 130 via a data interconnect 134, and outputs over an interconnect 142 both to block 170 (via interconnect 160) and to a driver I/O block 146. Block 146 is used to repower/repeat the data traffic received by block 146, for outputting to a subsequent node via data interconnect 150 coupled to data port 154.
Likewise, for the read data channel, incoming data traffic from port 156 is received by a receiver I/O block 148 over a data interconnect 152. The output of block 148 is fed over a data interconnect 144 to a driver I/O block 140, which repowers/repeats the data traffic over port 132 via data interconnect 136. In addition, block 170 is also capable of outputting data to data interconnect 144 via data interconnect 164.
In a conventional FB-DIMM AMB design, command and write data forwarded to the AMB via the write data channel is received by block 170 via data port 130, receiver I/O block 138 and interconnects 134, 142 and 160. Likewise, status and read data is output by block 170 over the read data channel via data port 132, driver I/O block 140 and interconnects 136, 144 and 164.
To support the ability to receive and/or transmit data associated with the local memory module over a different memory channel (i.e., a memory channel other than that within which the local memory module is disposed), buffer device 80 includes an additional pair of interconnects 158, 162. Interconnect 158 is configured to output status and read data for the local memory module over the write data channel via data port 154, driver I/O block 146 and interconnects 142 and 150. Interconnect 162 is configured to receive command and write data directed to the local memory module over the read data channel via data port 156, receiver I/O block 148 and interconnects 152 and 144. As such, it will be appreciated that interconnects 134, 142 and 150, receiver I/O block 138, driver I/O block 146 and data ports 130, 154, which are normally used in a write data channel, are additionally configured to communicate status and read data. Furthermore, interconnects 136, 144 and 152, driver I/O block 140, receiver I/O block 148 and data ports 132, 156, which are normally used in a read data channel, are additionally configured to communicate command and write data.
In addition, it will be appreciated that block 170 is typically configured to monitor both interconnects 160, 162 for command and write data directed to the local memory module, and to output any status or read data over both interconnects 158, 164. Block 170 may be configured to always output in such a manner, or alternatively may be configurable (either dynamically or statically) to operate in a special mode, whereby when the special mode is not enabled, the buffer device 80 operates in a conventional manner. As another alternative, block 170 may be configurable to selectively output status or read data over only one of interconnects 158, 164 (e.g., to switch between the interconnects). Various manners of configuring block 170 to operate in a different mode may be used, e.g., via directing a command to the block over the read or write data channel, or via sideband signals or dedicated control lines coupled to the buffer device 80.
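The configurable behavior of block 170 might be sketched as follows (purely illustrative; the class, flag, and method names are assumptions, and only the interconnect numerals come from the text):

```python
# Sketch of the special-mode behavior described for block 170: in the
# special mode the buffer monitors both interconnects 160 and 162 for
# inbound command/write data, and mirrors status/read data onto both
# interconnects 164 and 158. Conventional mode uses one of each.

class BufferLogic:
    def __init__(self, special_mode=False):
        self.special_mode = special_mode

    def inbound_interconnects(self):
        # Conventional mode: commands arrive only via interconnect 160
        # (from the write data channel); special mode adds 162.
        return ["160", "162"] if self.special_mode else ["160"]

    def outbound_interconnects(self):
        # Conventional mode: status/read data leaves only via
        # interconnect 164 (to the read data channel); special mode
        # adds 158.
        return ["164", "158"] if self.special_mode else ["164"]
```

When the special mode is disabled the buffer behaves as a conventional FB-DIMM AMB, matching the fallback behavior the text describes.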
As with memory controller 72, buffer device 80 may be implemented in a number of alternate manners consistent with the invention. Moreover, the implementation of the functionality of memory controller 72 and device 80 in integrated circuit devices would be within the ability of one of ordinary skill in the art having the benefit of the instant disclosure.
In normal operation, memory controller 72 may selectively route command and write data intended for any of memory modules 74A-74H over either (or both) of the first and second memory channels. In one embodiment, for example, conventional FB-DIMM protocols may be used, whereby all data related to a memory module disposed in one memory channel is routed only over that memory channel. In the alternative, as noted above, any number of load balancing algorithms may be utilized to optimize bandwidth and latency in the memory system, whereby command and write data directed to a memory module in one memory channel is selectively routed over either of the memory channels. In other embodiments, write and command data may be output over both memory channels even when no discontinuity is detected.
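One of the "any number of load balancing algorithms" mentioned above might look like the following (an entirely illustrative policy under assumed names, not a policy the text prescribes):

```python
# Sketch of a simple load balancing policy for the memory controller:
# route each new request to whichever memory channel currently has the
# fewer outstanding transactions.

class ChannelBalancer:
    def __init__(self):
        self.outstanding = {"channel-1": 0, "channel-2": 0}

    def pick(self):
        # Choose the least-loaded channel (ties favor channel-1,
        # by dict iteration order).
        ch = min(self.outstanding, key=self.outstanding.get)
        self.outstanding[ch] += 1
        return ch

    def complete(self, ch):
        # Called when a transaction on `ch` finishes.
        self.outstanding[ch] -= 1

bal = ChannelBalancer()
first = bal.pick()    # both idle: channel-1
second = bal.pick()   # channel-1 now busier: channel-2
```

A real controller would also weigh latency and the presence of any detected discontinuity when picking a channel, per the failure-handling behavior described below.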
From the standpoint of status and read data output by any given memory module, the memory module may output the data only over its associated memory channel, or in the alternative, may route data over both memory channels. Furthermore, load balancing may be utilized within a memory module to balance data traffic. A memory module may alternatively route status or read data over a selected memory channel, e.g., based upon an indicator provided in the command to which the memory module is returning the data, based upon a sideband or external control signal, or based upon the port from which the command was initially received.
In addition, whenever a faulty memory module or interconnect is detected, whenever it is desired to replace a specific memory module, or otherwise whenever a discontinuity arises in a daisy chain arrangement of memory modules, command and write data may be routed from the memory controller over the appropriate memory channel, and status and read data may be routed by a particular memory module over the appropriate memory channel, to avoid the discontinuity. In one embodiment, all data traffic for each memory channel is replicated on the other memory channel. Furthermore, in some embodiments, the presence of a discontinuity may invoke a special mode whereby the data traffic flow is altered to account for the discontinuity.
Now turning to the flowchart, an exemplary routine for hot replacement of a failing FB-DIMM is illustrated.
Next, block 210 alters the read and write/command flow to effectively route data traffic around the failing FB-DIMM. The altering of the data flow may be implemented in a number of manners, e.g., by transitioning the memory controller and/or each memory module into a special mode via a command, or through the use of sideband signals or dedicated control lines. In the alternative, where each memory module is normally configured to relay data traffic associated with other memory modules on the read and write channels, to monitor both the read and write channels for command and write data, and to output status and read data over both the read and write channels, no modification or reconfiguration of each memory module may be required, with the only change in data flow being effected by the memory controller. Furthermore, where the memory controller normally replicates data flow over both memory channels, no alteration of data flow may be necessary, whereby block 210 may be omitted.
Next, block 212 removes power from the slot for the failing FB-DIMM to enable a user to replace that failing FB-DIMM. Thereafter, once the failing FB-DIMM has been physically replaced with a replacement FB-DIMM, block 214 applies power to the replacement FB-DIMM, which typically initiates an initialization process for the replacement FB-DIMM. In addition, it may also be desirable to transmit configuration information from the memory controller to the replacement FB-DIMM to configure the replacement FB-DIMM to operate in the current environment.
Thereafter, block 216 restores the read and write/command flow (if necessary), thus restoring normal operation. Block 218 then adds the replacement FB-DIMM to the usable address range for the main storage, whereby the replacement FB-DIMM may then be utilized for the storage of working data in a manner known in the art.
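The replacement sequence of blocks 210-218 can be summarized as a controller-side procedure. This is a hedged sketch only: the controller methods shown (`reroute_around`, `power_off`, and so on) are hypothetical placeholders for the platform-specific operations the text describes, not an API from the patent.

```python
# Illustrative ordering of the hot-replacement steps described above.
# Each method is a hypothetical stand-in for a platform-specific action.

def replace_failing_dimm(controller, slot):
    controller.reroute_around(slot)        # block 210: alter read/write flow
    controller.power_off(slot)             # block 212: remove slot power
    controller.wait_for_replacement(slot)  # user physically swaps the module
    controller.power_on(slot)              # block 214: power/initialize
    controller.send_configuration(slot)    # configure for current environment
    controller.restore_normal_flow()       # block 216: restore data flow
    controller.add_to_address_range(slot)  # block 218: back into main storage
```

As the text notes, the rerouting and restoration steps may be omitted when the controller already replicates traffic over both channels.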
It will be appreciated that any of blocks 206-218 may be initiated automatically, or alternatively, may be initiated in response to user control, e.g., after a notification to a user of a potential failure condition. It will also be appreciated that, in addition to enabling hot replacement of failing FB-DIMMs, the herein-described configuration may be utilized to address other situations in which a discontinuity exists in a daisy chain architecture, e.g., in the event of a failed interconnect or a total failure of an FB-DIMM.
It will also be appreciated that, in connection with the normal operation in block 202 and/or during the error recovery operation, load balancing may be utilized in the manner described herein.
Additional modifications may be made consistent with the invention. Therefore, the invention lies in the claims hereinafter appended.
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7324352 *||Mar 1, 2005||Jan 29, 2008||Staktek Group L.P.||High capacity thin module system and method|
|US7442050||Aug 28, 2006||Oct 28, 2008||Netlist, Inc.||Circuit card with flexible connection for memory module with heat spreader|
|US7464225 *||Sep 26, 2005||Dec 9, 2008||Rambus Inc.||Memory module including a plurality of integrated circuit memory devices and a plurality of buffer devices in a matrix topology|
|US7478005 *||Apr 28, 2005||Jan 13, 2009||Rambus Inc.||Technique for testing interconnections between electronic components|
|US7487428||Jul 24, 2006||Feb 3, 2009||Kingston Technology Corp.||Fully-buffered memory-module with error-correction code (ECC) controller in serializing advanced-memory buffer (AMB) that is transparent to motherboard memory controller|
|US7511968 *||Dec 8, 2004||Mar 31, 2009||Entorian Technologies, Lp||Buffered thin module system and method|
|US7558887||Sep 5, 2007||Jul 7, 2009||International Business Machines Corporation||Method for supporting partial cache line read and write operations to a memory module to reduce read and write data traffic on a memory channel|
|US7584308||Aug 31, 2007||Sep 1, 2009||International Business Machines Corporation||System for supporting partial cache line write operations to a memory module to reduce write data traffic on a memory channel|
|US7636867 *||Mar 24, 2006||Dec 22, 2009||Nec Corporation||Memory system with hot swapping function and method for replacing defective memory module|
|US7640386 *||May 24, 2006||Dec 29, 2009||International Business Machines Corporation||Systems and methods for providing memory modules with multiple hub devices|
|US7644248 *||Sep 27, 2006||Jan 5, 2010||Intel Corporation||Mechanism to generate logically dedicated read and write channels in a memory controller|
|US7669086||Aug 2, 2006||Feb 23, 2010||International Business Machines Corporation||Systems and methods for providing collision detection in a memory system|
|US7685364||Apr 15, 2009||Mar 23, 2010||Rambus Inc.||Memory system topologies including a buffer device and an integrated circuit memory device|
|US7685392||Nov 28, 2005||Mar 23, 2010||International Business Machines Corporation||Providing indeterminate read data latency in a memory system|
|US7721140||Jan 2, 2007||May 18, 2010||International Business Machines Corporation||Systems and methods for improving serviceability of a memory system|
|US7729151||Jul 28, 2006||Jun 1, 2010||Rambus Inc.||System including a buffered memory module|
|US7737549||Oct 31, 2008||Jun 15, 2010||Entorian Technologies Lp||Circuit module with thermal casing systems|
|US7752490 *||Mar 24, 2006||Jul 6, 2010||Nec Corporation||Memory system having a hot-swap function|
|US7760513||Apr 3, 2006||Jul 20, 2010||Entorian Technologies Lp||Modified core for circuit module system and method|
|US7765368||Jul 5, 2007||Jul 27, 2010||International Business Machines Corporation||System, method and storage medium for providing a serialized memory interface with a bus repeater|
|US7768796||Jun 26, 2008||Aug 3, 2010||Entorian Technologies L.P.||Die module system|
|US7770077||Jan 24, 2008||Aug 3, 2010||International Business Machines Corporation||Using cache that is embedded in a memory hub to replace failed memory cells in a memory subsystem|
|US7811097||Sep 24, 2008||Oct 12, 2010||Netlist, Inc.||Circuit with flexible portion|
|US7818497||Aug 31, 2007||Oct 19, 2010||International Business Machines Corporation||Buffered memory module supporting two independent memory channels|
|US7839643||Nov 12, 2009||Nov 23, 2010||Netlist, Inc.||Heat spreader for memory modules|
|US7839645||Oct 26, 2009||Nov 23, 2010||Netlist, Inc.||Module having at least two surfaces and at least one thermally conductive layer therebetween|
|US7840748||Aug 31, 2007||Nov 23, 2010||International Business Machines Corporation||Buffered memory module with multiple memory device data interface ports supporting double the memory capacity|
|US7844769 *||Jul 26, 2006||Nov 30, 2010||International Business Machines Corporation||Computer system having an apportionable data bus and daisy chained memory chips|
|US7844771||Mar 31, 2008||Nov 30, 2010||International Business Machines Corporation||System, method and storage medium for a memory subsystem command interface|
|US7861014||Aug 31, 2007||Dec 28, 2010||International Business Machines Corporation||System for supporting partial cache line read operations to a memory module to reduce read data traffic on a memory channel|
|US7865674||Aug 31, 2007||Jan 4, 2011||International Business Machines Corporation||System for enhancing the memory bandwidth available through a memory module|
|US7870459||Oct 23, 2006||Jan 11, 2011||International Business Machines Corporation||High density high reliability memory module with power gating and a fault tolerant address and command bus|
|US7899983||Aug 31, 2007||Mar 1, 2011||International Business Machines Corporation||Buffered memory module supporting double the memory device data width in the same physical space as a conventional memory module|
|US7925824||Jan 24, 2008||Apr 12, 2011||International Business Machines Corporation||System to reduce latency by running a memory channel frequency fully asynchronous from a memory device frequency|
|US7925825||Jan 24, 2008||Apr 12, 2011||International Business Machines Corporation||System to support a full asynchronous interface within a memory hub device|
|US7925826||Jan 24, 2008||Apr 12, 2011||International Business Machines Corporation||System to increase the overall bandwidth of a memory channel by allowing the memory channel to operate at a frequency independent from a memory device frequency|
|US7930469||Jan 24, 2008||Apr 19, 2011||International Business Machines Corporation||System to provide memory system power reduction without reducing overall memory system performance|
|US7930470||Jan 24, 2008||Apr 19, 2011||International Business Machines Corporation||System to enable a memory hub device to manage thermal conditions at a memory device level transparent to a memory controller|
|US8019919||Sep 5, 2007||Sep 13, 2011||International Business Machines Corporation||Method for enhancing the memory bandwidth available through a memory module|
|US8037355 *||Jun 6, 2008||Oct 11, 2011||Texas Instruments Incorporated||Powering up adapter and scan test logic TAP controllers|
|US8078898 *||Jun 6, 2008||Dec 13, 2011||Texas Instruments Incorporated||Synchronizing TAP controllers with sequence on TMS lead|
|US8225126 *||Nov 9, 2011||Jul 17, 2012||Texas Instruments Incorporated||Adaptor detecting sequence on TMS and coupling TAP to TCK|
|US8458505 *||Jun 6, 2012||Jun 4, 2013||Texas Instruments Incorporated||Adapter and scan test logic synchronizing from idle state|
|US8463959||Jan 24, 2011||Jun 11, 2013||Mosaid Technologies Incorporated||High-speed interface for daisy-chained devices|
|US8503211||Apr 29, 2010||Aug 6, 2013||Mosaid Technologies Incorporated||Configurable module and memory subsystem|
|US8607088 *||Sep 6, 2011||Dec 10, 2013||Texas Instruments Incorporated||Synchronizing remote devices with synchronization sequence on JTAG control lead|
|US8984319 *||Nov 8, 2013||Mar 17, 2015||Texas Instruments Incorporated||Adapter power up circuitry forcing tap states and decoupling tap|
|US20100005220 *||Jan 7, 2010||International Business Machines Corporation||276-pin buffered memory module with enhanced memory system interconnect and features|
|US20110213908 *||Sep 1, 2011||Bennett Jon C R||Configurable interconnection system|
|US20110320850 *||Dec 29, 2011||Texas Instruments Incorporated||Offline at start up of a powered on device|
|US20120060068 *||Nov 9, 2011||Mar 8, 2012||Texas Instruments Incorporated||Synchronizing a device that has been power cycled to an already operational system|
|US20120254681 *||Oct 4, 2012||Texas Instruments Incorporated||Synchronizing a device that has been power cycled to an already operational system|
|US20140068361 *||Nov 8, 2013||Mar 6, 2014||Texas Instruments Incorporated||Offline at start up of a powered on device|
|EP2887223A4 *||Oct 12, 2012||Aug 19, 2015||Huawei Tech Co Ltd||Memory system, memory module, memory module access method and computer system|
|WO2010023355A1 *||Aug 19, 2009||Mar 4, 2010||Nokia Corporation||Method, apparatus and software product for multi-channel memory sandbox|
|WO2010132995A1 *||May 20, 2010||Nov 25, 2010||Mosaid Technologies Incorporated||Configurable module and memory subsystem|
|WO2011150496A1 *||May 31, 2011||Dec 8, 2011||Mosaid Technologies Incorporated||High speed interface for daisy-chained devices|
|Cooperative Classification||G06F13/1684, G06F13/1673|
|European Classification||G06F13/16D6, G06F13/16D2|
|Nov 11, 2004||AS||Assignment|
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BORKENHAGEN, JOHN MICHAEL;REEL/FRAME:015371/0186
Effective date: 20041025
|Aug 19, 2011||FPAY||Fee payment|
Year of fee payment: 4
|Sep 13, 2011||AS||Assignment|
Effective date: 20110817
Owner name: GOOGLE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:026894/0001