US 8181094 B2
A system to improve error correction includes a fast decoder to process data packets until the fast decoder finds an uncorrectable error in a data packet at which point a request for at least two data packets is generated. The system also includes a slow decoder to correct the uncorrectable error in a data packet based upon the at least two data packets.
1. A system to improve error correction using variable latency, the system comprising:
a first decoder to process data packets until said first decoder finds an uncorrectable error for a data packet at which point a request for at least two data packets is generated; and
a second decoder to correct the uncorrectable error in the data packet based upon the at least two data packets, wherein the first decoder processes the data packets faster than the second decoder.
2. The system of
3. The system of
4. The system of
5. The system of
6. The system of
7. The system of
8. The system of
9. The system of
10. The system of
11. The system of
12. The system of
13. A method to improve error correction using variable latency, the method comprising:
processing data packets until a first decoder finds an uncorrectable error in a data packet at which point a request for at least two data packets is generated; and
correcting the uncorrectable error in the data packet via a second decoder using the at least two data packets, wherein the first decoder processes the data packets faster than the second decoder.
14. The method of
15. The method of
16. The method of
17. A computer program product embodied in a non-transient tangible media comprising:
computer readable program codes coupled to the tangible media to improve error correction using variable latency, the computer readable program codes configured to cause the program to:
process data packets until a first decoder finds an uncorrectable error in a data packet at which point a request for at least two data packets is generated; and
correct the uncorrectable error in the data packet via a second decoder using the at least two data packets, wherein the first decoder processes the data packets faster than the second decoder.
18. The computer program product of
provide the at least two data packets based upon the request from memory that is partitioned into memory ranks and comprised of memory chips.
19. The computer program product of
20. The computer program product of
This invention was made with Government support under Agreement No. HR0011-07-9-0002 awarded by DARPA. The Government has certain rights in the invention.
This application contains subject matter related to the following co-pending applications: “System for Error Decoding with Retries and Associated Methods,” Ser. No. 12/023,356; “System to Improve Memory Reliability and Associated Methods,” Ser. No. 12/023,374; “System for Error Control Coding for Memories of Different Types and Associated Methods,” Ser. No. 12/023,408; “System to Improve Error Code Decoding Using Historical Information and Associated Methods,” Ser. No. 12/023,445; “System to Improve Memory Failure Management and Associated Methods,” Ser. No. 12/023,498; and “System to Improve Miscorrection Rates in Error Control Code Through Buffering and Associated Methods,” Ser. No. 12/023,516; the entire subject matter of each of which is incorporated herein by reference in its entirety. The aforementioned applications are assigned to the same assignee as this application, International Business Machines Corporation of Armonk, New York.
The invention relates to the field of computer systems, and, more particularly, to error correction systems and related methods.
This invention relates generally to computer memory, and more particularly to providing a high fault tolerant memory system.
Computer systems often require a considerable amount of high speed RAM (random access memory) to hold information such as operating system software, programs and other data while a computer is powered on and operational. This information is normally binary, composed of patterns of 1's and 0's known as bits of data. The bits of data are often grouped and organized at a higher level. A byte, for example, is typically composed of 8 bits; more generally these groups are called symbols and may consist of any number of bits.
Computer RAM is often designed with pluggable subsystems, often in the form of modules, so that incremental amounts of RAM can be added to each computer, dictated by the specific memory requirements for each system and application. The acronym, “DIMM” refers to dual in-line memory modules, which are perhaps the most prevalent memory module currently in use. A DIMM is a thin rectangular card comprising one or more memory devices, and may also include one or more of registers, buffers, hub devices, and/or non-volatile storage (e.g., erasable programmable read only memory or “EPROM”) as well as various passive devices (e.g. resistors and capacitors), all mounted to the card.
DIMMs are often designed with dynamic memory chips or DRAMs that need to be regularly refreshed to prevent the data stored within them from being lost. Originally, DRAM chips were asynchronous devices, however contemporary chips, synchronous DRAM (SDRAM) (e.g. single data rate or “SDR”, double data rate or “DDR”, DDR2, DDR3, etc) have synchronous interfaces to improve performance. DDR devices are available that use pre-fetching along with other speed enhancements to improve memory bandwidth and to reduce latency. DDR3, for example, has a standard burst length of 8, where the term burst length refers to the number of DRAM transfers in which information is conveyed from or to the DRAM during a read or write. Another important parameter of a DRAM device is the number of I/O pins it has to convey read/write data. When a DRAM device has 4 pins, it is said to be a “by 4” (or ×4) device. When it has 8 pins, it is said to be a “by 8” (or ×8) device, and so on.
Memory device densities have continued to grow as computer systems have become more powerful. Currently it is not uncommon to have the RAM content of a single computer be composed of hundreds of trillions of bits. Unfortunately, the failure of just a portion of a single RAM device can cause the entire computer system to fail. When memory errors occur, which may be “hard” (repeating) or “soft” (one-time or intermittent) failures, these failures may occur as single cell, multi-bit, full chip or full DIMM failures and all or part of the system RAM may be unusable until it is repaired. Repair turn-around-times can be hours or even days, which can have a substantial impact to a business dependent on the computer systems.
The probability of encountering a RAM failure during normal operations has continued to increase as the amount of memory storage in contemporary computers continues to grow.
Techniques to detect and correct bit errors have evolved into an elaborate science over the past several decades. Perhaps the most basic detection technique is the generation of odd or even parity where the number of 1's or 0's in a data word are “exclusive or-ed” (XOR-ed) together to produce a parity bit. For example, a data word with an even number of 1's will have a parity bit of 0 and a data word with an odd number of 1's will have a parity bit of 1, with this parity bit data appended to the stored memory data. If there is a single error present in the data word during a read operation, it can be detected by regenerating parity from the data and then checking to see that it matches the stored (originally generated) parity.
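The parity generation just described amounts to XOR-ing the data bits together and appending the result. A minimal Python sketch (illustrative only; the data word is hypothetical):

```python
# Even-parity generation and checking as described above.
def parity_bit(bits):
    """XOR all bits together: 0 for an even number of 1s, 1 for odd."""
    p = 0
    for b in bits:
        p ^= b
    return p

data = [1, 0, 1, 1, 0, 1, 0, 0]   # four 1s -> parity bit 0
stored_parity = parity_bit(data)

# Simulate a single-bit error during a read: regenerated parity
# no longer matches the stored parity, so the error is detected.
corrupted = data[:]
corrupted[2] ^= 1
error_detected = parity_bit(corrupted) != stored_parity
```

Note that simple parity only detects an odd number of bit flips; it cannot locate or correct the error, which motivates the stronger codes discussed next.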
More sophisticated codes allow for detection and correction of errors that can affect groups of bits rather than individual bits; Reed-Solomon codes are an example of a class of powerful and well understood codes that can be used for these types of applications.
These error detection and error correction techniques are commonly used to restore data to its original/correct form in noisy communication transmission media or for storage media where there is a finite probability of data errors due to the physical characteristics of the device. The memory devices generally store data as voltage levels representing a 1 or a 0 in RAM and are subject to both device failure and state changes due to high energy cosmic rays and alpha particles.
In the 1980's, RAM memory device sizes first reached the point where they became sensitive to alpha particle hits and cosmic rays causing memory bits to flip. These particles do not damage the device but can create memory errors. These are known as soft errors, and most often affect just a single bit. Once identified, the bit failure can be corrected by simply rewriting the memory location. The frequency of soft errors has grown to the point that it has a noticeable impact on overall system reliability.
Memory Error Correction Codes (ECC) use a combination of parity checks in various bit positions of the data word to allow detection and correction of errors. Every time data words are written into memory, these parity checks need to be generated and stored with the data. Upon retrieval of the data, a decoder can use the parity bits thus generated together with the data message in order to determine whether there was an error and to proceed with error correction if feasible.
The first ECCs were applied to RAM in computer systems in an effort to increase fault-tolerance beyond that allowed by previous means. Binary ECC codes were deployed that allowed for double-bit error detection (DED) and single-bit error correction (SEC). This SEC/DED ECC also allows for transparent recovery of single bit hard errors in RAM.
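The SEC/DED behavior described above can be illustrated with a toy extended Hamming(8,4) code: a single-bit error is located by the syndrome and corrected, while a double-bit error leaves the overall parity even but the syndrome nonzero, and so is detected but not corrected. Real memory ECC uses much wider words, but the mechanism is the same (Python sketch under that assumption):

```python
def hamming84_encode(d):
    """d: 4 data bits -> 8-bit codeword [p1, p2, d0, p3, d1, d2, d3, pov]."""
    d0, d1, d2, d3 = d
    p1 = d0 ^ d1 ^ d3          # covers codeword positions 1,3,5,7
    p2 = d0 ^ d2 ^ d3          # covers positions 2,3,6,7
    p3 = d1 ^ d2 ^ d3          # covers positions 4,5,6,7
    cw = [p1, p2, d0, p3, d1, d2, d3]
    pov = 0
    for b in cw:               # overall parity bit for double-error detection
        pov ^= b
    return cw + [pov]

def hamming84_decode(cw):
    """Return (data, status): status is 'ok', 'corrected', or 'DED'."""
    c = cw[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of a single error
    pov = 0
    for b in c:                       # parity over all 8 received bits
        pov ^= b
    if syndrome == 0 and pov == 0:
        return [c[2], c[4], c[5], c[6]], 'ok'
    if pov == 1:                      # odd-weight error: single bit, correctable
        if syndrome != 0:
            c[syndrome - 1] ^= 1
        else:
            c[7] ^= 1                 # the overall parity bit itself flipped
        return [c[2], c[4], c[5], c[6]], 'corrected'
    return None, 'DED'                # even weight, nonzero syndrome: double error
```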
Scrubbing routines were also developed to help reduce memory errors by locating soft errors through a scanning of the memory whereby memory was read, corrected if necessary and then written back to memory.
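A scrubbing pass of this kind can be sketched as a loop that reads, decodes, and rewrites any location that needed correction, so a soft error does not linger and combine with a later one. The triple-copy “code” below is only a stand-in for real ECC, and the dict-as-memory interface is an assumption for illustration:

```python
def encode(bit):
    return [bit, bit, bit]            # toy code: store three copies

def decode(cw):
    bit = 1 if sum(cw) >= 2 else 0    # majority vote corrects one flipped copy
    return bit, cw != [bit, bit, bit]

def scrub(memory):
    """Walk every location: read, correct, and write back corrected words."""
    corrected = 0
    for addr, cw in memory.items():
        bit, was_corrected = decode(cw)
        if was_corrected:
            memory[addr] = encode(bit)  # rewrite clears the soft error
            corrected += 1
    return corrected

memory = {0: [1, 1, 1], 1: [1, 0, 1], 2: [0, 0, 0]}  # addr 1 holds a soft error
fixed = scrub(memory)
```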
Some storage manufacturers have used advanced ECC techniques, such as Reed-Solomon codes, to correct for full memory chip failures. Some memory system designs also have standard reserve memory chips (e.g. “spare” chips) that can be automatically introduced in a memory system to replace a faulty chip. These advancements have greatly improved RAM reliability, but as memory size continues to grow and customers' reliability expectations increase, further enhancements are needed.
The memory controller 110 attaches to four narrow/high speed point-to-point memory busses 106, with each bus 106 connecting one of the several unique memory controller interface channels to a cascade interconnect memory subsystem 103 (or memory module, e.g., a DIMM) which includes at least a hub device 104 and one or more memory devices 109. Some systems further enable operations when a subset of the memory busses 106 are populated with memory subsystems 103. In this case, the one or more populated memory busses 108 may operate in unison to support a single access request.
The connection between a hub in a DIMM and a memory controller may have transmission errors and therefore such a connection may be protected using error detection codes. In these types of designs, the memory controller checks a detection code during a read and, if there is a mismatch, it issues a retry request for the faulty read (and possibly other read requests that happened in the near time vicinity). To support such retry mechanisms, the memory controller maintains a queue of pending requests, which is used to determine which requests need to be retried.
The evolution of the minimal burst length parameter of DRAM devices has been such that it makes it increasingly more difficult to provide for desirable error correction properties such as multiple chipkill support. The trend has been for such minimal burst length to increase as new DRAM technologies are introduced.
As an illustrative example, assume that a processor has a cache line of 128B, and that ancillary information totaling 4 additional bytes needs to be stored and protected together with the cache line. Such ancillary information will vary from processor design to processor design. Again for illustrative purposes, suppose the additional information is comprised of a flag indicating whether the data was corrupted even before reaching memory (the SUE flag), tag bits that can be used in data structures and a node bit that indicates whether a more recent copy of the cache line may exist elsewhere in the system.
In the DDR3 generation of DRAM devices, the minimal burst length on each device is equal to 8 transfers. Therefore a ×4 DRAM device (which by definition has 4 I/O pins) delivers/accepts a minimum of 32 bits (4 bytes) on each read/write access. Correspondingly, a ×8 DRAM device delivers/accepts a minimum of 64 bits (8 bytes) on each read/write access. Assuming a processor cache line of size 128 bytes, and assuming that for every 8 data chips there is an additional 9th chip that provides additional storage for error correction/detection codes, a simple calculation demonstrates that a total of 36 ×4 devices can be accessed in parallel to supply a total of 144 bytes (out of which 128 bytes are for data, and 4 bytes are for ancillary information). Similarly, a total of 18 ×8 devices can be accessed in parallel to supply a total of 144 bytes.
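The chip-count arithmetic in the preceding paragraph can be checked directly; a small Python sketch of the calculation (the variable names are ours, not the patent's):

```python
# DDR3 delivers burst_length transfers per access; each transfer moves
# one bit per I/O pin, so a device supplies io_pins * 8 / 8 bytes.
burst_length = 8

def bytes_per_access(io_pins):
    return io_pins * burst_length // 8

cache_line = 128                       # bytes of data per processor cache line
total = cache_line * 9 // 8            # a 9th chip per 8 data chips -> 144 bytes

x4_devices = total // bytes_per_access(4)   # each x4 part supplies 4 bytes
x8_devices = total // bytes_per_access(8)   # each x8 part supplies 8 bytes
```

This reproduces the figures above: 36 ×4 devices or 18 ×8 devices accessed in parallel for the 144-byte fetch, which is also why a ×4 configuration activates twice as many chips per access and thus pays roughly double the active power.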
As we stated earlier, it is highly desirable for an error correction code to provide for the ability to survive a chipkill. Unfortunately, those skilled in the art will recognize that while it is possible to allow for chipkill recovery in the setting where 2 of the 18 chips are completely devoted to redundant checks, once the additional ancillary information is introduced as a storage requirement it becomes mathematically impossible to allow for the recovery of chipkills with 100% certainty.
One alternative is to construct a memory using ×4 parts instead, since in this memory geometry a total of 32 devices may be devoted to data, the 33rd device may be devoted to the ancillary information, which would leave 3 additional chips for redundant information. Such redundancy will allow the system, as those skilled in the art will recognize, to have single chip error correct/double chip error detect capabilities.
A strong reason for not using ×4 parts nonetheless is related to power consumption. Assume that ×4 and ×8 parts have identical storage capacity. Contrasting two systems with exactly the same number of chips, but one with ×4 chips and the other one with ×8 chips, the same amount of “standby” power is incurred in both (standby power is the amount of power paid in the absence of any memory activity).
Nonetheless, every time an access is made to memory, in the ×4 memory configuration a total of 36 devices are activated simultaneously, as opposed to the ×8 situation where only 18 devices are activated simultaneously. Therefore, the “active” power (paid during memory accesses) is double in the ×4 setting than in the ×8 setting.
In view of the foregoing background, it is an object of the invention to provide a system that improves error correction using variable latency.
This and other objects, features, and advantages in accordance with the invention are provided by a system to improve error correction. The system may include a fast decoder to process data packets until the fast decoder finds an uncorrectable error in a data packet at which point a request for at least two data packets is generated. The system may also include a slow decoder to possibly correct the uncorrectable error in a data packet based upon the at least two data packets.
The system may further include a memory to provide the at least two data packets based upon the request. The memory may be partitioned into memory ranks comprised of memory chips.
The slow decoder may examine the at least two data packets for failures to identify which memory chip contains the error uncorrectable by the fast decoder. The slow decoder may use a Chien search while processing the at least two data packets.
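The Chien search mentioned above locates errors by evaluating the error-locator polynomial at every nonzero field element; the elements at which it evaluates to zero reveal the error positions. A pure-Python sketch over GF(256) follows; the primitive polynomial 0x11D and generator alpha = 2 are conventional choices, not specified by the patent, and a hardware Chien circuit would evaluate incrementally rather than recompute powers:

```python
PRIM = 0x11D                          # x^8 + x^4 + x^3 + x^2 + 1

def gf_mul(a, b):
    """Carry-less multiply in GF(256) with reduction by PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def chien_search(locator):
    """locator: coefficients [1, l1, l2, ...] of Lambda(x), low order first.
    Returns every alpha^i (i in 0..254) at which Lambda evaluates to zero."""
    roots = []
    for i in range(255):
        x = gf_pow(2, i)              # alpha = 2 generates the multiplicative group
        acc = 0
        for j, c in enumerate(locator):
            acc ^= gf_mul(c, gf_pow(x, j))
        if acc == 0:
            roots.append(x)
    return roots
```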
The system may also include a table that contains information about persistent failures. The slow decoder may search for new failures not already identified in the table that contains persistent failure information.
The table may be updated to contain information about correctable errors if the slow decoder succeeds in locating the correctable errors. The slow decoder may process each of the at least two data packets based upon computed syndromes for each of the at least two data packets.
The slow decoder may process the syndromes of the at least two data packets in parallel. Alternatively, the slow decoder may process the syndromes of the at least two data packets serially. Additionally, the slow decoder may receive one of the data packets' syndromes as all zeros.
Another aspect of the invention is a method to improve error correction using variable latency. The method may include processing data packets until a fast decoder finds an uncorrectable error in a data packet at which point a request for at least two data packets is generated. The method may also include possibly correcting the uncorrectable error in a data packet via a slow decoder using the at least two data packets.
The invention will now be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout.
As will be appreciated by one skilled in the art, the invention may be embodied as a method, system, or computer program product. Furthermore, the invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium.
Any suitable computer usable or computer readable medium may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, or a magnetic storage device.
Computer program code for carrying out operations of the invention may be written in an object oriented programming language such as Java, Smalltalk, C++ or the like. However, the computer program code for carrying out operations of the invention may also be written in conventional procedural programming languages, such as the “C” programming language or similar programming languages.
The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
The invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The system 10 also includes a slow decoder 14 to attempt to correct the uncorrectable error in the data packet based upon the at least two error packets, for instance. The system 10 further includes a memory 16 to provide the at least two error packets based upon the request, for example. The memory 16 is partitioned into memory ranks 18 comprised of memory chips 20, for instance.
In one embodiment, the slow decoder 14 examines the at least two error packets for failures to identify which memory chip 20 contains the error that the fast decoder 12 could not correct. In addition, the slow decoder 14 uses a Chien search while processing the at least two error packets.
The system 10 also includes a table 22 that contains information about persistent failures, for example. In one embodiment, the slow decoder 14 searches for new failures not already identified in the table 22 that contains persistent failure information.
In another embodiment, the table 22 updates to contain information about correctable errors if the slow decoder 14 succeeds in locating the correctable errors. The slow decoder 14 processes each of the at least two error packets based upon computed syndromes for each of the at least two error packets, for instance.
In one embodiment, the slow decoder 14 processes the syndromes of the at least two error packets in parallel. Alternatively, the slow decoder 14 processes the syndromes of the at least two error packets serially. In another embodiment, the slow decoder 14 receives one of the error packets' syndromes as all zeros.
The system 10 also includes a communications network 24, for instance. In one embodiment, the communications network 24 is a wired and/or wireless network including private and public communications infrastructure as will be appreciated by those of skill in the art. In one embodiment, the memory 16, the slow decoder 14, the fast decoder 12, and the table 22 communicate with each other over the communications network 24 using communications links 26a-26d, respectively, as will be appreciated by those of skill in the art.
In view of the foregoing, the system 10 provides for correcting errors using an error control decoder that is split into two sections, e.g. fast decoder 12 and slow decoder 14, in order to reduce the chip area that is required for implementation. In addition, such a system can improve the average latency with respect to an error control decoder with fixed decoding time.
In one embodiment, the fast decoder 12 is designed to correct the vast majority of the errors, for example all known persistent errors plus an additional symbol error. Further, the slow decoder 14 is used to locate errors that the fast decoder 12 is not able to correct, as long as the error control code allows for the correction of those errors.
In one embodiment, the information that the slow decoder 14 needs to locate new errors is the information about prior persistent fails, a description of how the error that is being searched for might appear (this may change as one uses memory chips 20 of different types), and the syndrome of the error packet being analyzed. The slow decoder 14 is capable of processing the syndromes of multiple error packets, for example. This design point is motivated by an application in which errors that the slow decoder 14 is expected to find can potentially affect multiple packets, for instance.
In one embodiment, by processing multiple packets, the slow decoder 14 can compare errors across packets and improve the chances of correcting errors in all of them. Also, the probability of miscorrecting errors can be reduced through this technique.
The system 10 may be attached to a memory 16 which is built using memory chips 20. The fast decoder 12 is unable to correct for new chip errors, but the slow decoder 14 is able to locate memory chips 20 in error with high probability in most cases. In one embodiment, by processing the syndromes of two packets simultaneously, the slow decoder 14 is able to better discriminate which memory chip 20 is actually failing during a chipkill.
Another aspect of the invention is directed to a method to improve error correction using variable latency, which is now described with reference to flowchart 40 of
A prophetic example of how the system 10 may work is now described with additional reference to
Error control codes generally operate on symbols which are comprised of one or more bits. For the purposes of this exemplary embodiment, symbols will contain 8 bits. Also illustrated in
For the purposes of this exemplary embodiment we shall assume that the error control code that is employed in this invention is a Reed-Solomon code whose symbols are comprised of 8 bits. In the 72 byte codeword, 64 bytes will be dedicated to data, one byte will be dedicated to hold ancillary information and the additional 7 bytes will contain checks coming from a Reed-Solomon code. Thus the Reed-Solomon code, in the parlance of coding theory, has parameters [n=72,k=65]. During the course of this invention we shall take advantage of the fact that error control codes (including Reed-Solomon codes) can accept information about the location of failures in order to improve their error correction and detection capacity.
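As a sketch of the first decoding step for such a code, the following computes seven syndromes of a received 72-byte packet over GF(256) using Horner's rule. The field polynomial and evaluation points alpha^0..alpha^6 are conventional assumptions, not taken from the patent; an all-zero syndrome vector means the packet lies in the code (no detectable error):

```python
PRIM = 0x11D                          # conventional GF(256) field polynomial

def gf_mul(a, b):
    """Carry-less multiply in GF(256) with reduction by PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return r

def syndromes(received, n_checks=7):
    """S_k = r(alpha^k) for k = 0..n_checks-1, via Horner's rule."""
    out = []
    alpha_k = 1                       # alpha^0
    for _ in range(n_checks):
        acc = 0
        for byte in received:
            acc = gf_mul(acc, alpha_k) ^ byte
        out.append(acc)
        alpha_k = gf_mul(alpha_k, 2)  # advance to the next power of alpha
    return out
```

The all-zero packet is a codeword of any linear code, so it yields all-zero syndromes, while any single corrupted byte produces a nonzero syndrome vector.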
The goal of this XOR mask is to ensure that if a chipkill affects two consecutive 72B data packets and if the nature of the chipkill is that the chip produces a constant data output, then the errors appear different in both packets. This brings benefits to the miscorrection rates of the decoder when it is operating in Gather mode (set by switch 717). Read requests are made to the memory by issuing the proper commands 707 to the memory 702.
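The effect of the mask can be seen with a one-symbol toy example in Python: a stuck chip corrupts both packets with the same constant, yet after the mask is removed from the second packet the two observed error patterns differ. The mask value 0xA5 and the interfaces are assumptions for illustration:

```python
MASK = 0xA5                           # arbitrary illustrative mask value

def chip_read(stored, stuck=0xFF):
    """A killed chip drives a constant no matter what was stored."""
    return [stuck for _ in stored]

data = [0x11]                         # the symbol this chip should hold
packet0 = chip_read(data)                        # first packet: unmasked
packet1 = [b ^ MASK for b in chip_read(data)]    # second packet: mask removed on read

err0 = packet0[0] ^ data[0]           # error pattern seen in packet 0
err1 = packet1[0] ^ data[0]           # error pattern seen in packet 1
# Without the mask both packets would exhibit the identical pattern err0,
# which is exactly the coincidence the mask is designed to break.
```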
The memory returns the data 708 requested in a read, which is then fed to the error control decoder 709. The received data is processed by the error control decoder 709 either through the fast decoder 714 or the slow decoder 715. The latter happens when the data coming from the memory has an error that cannot be decoded using the fast decoder 714 alone.
The decoder 709 uses information about prior failures that might have affected the memory rank from which the data is coming. Such prior failure information is stored in the Marking Store 710, which is read by the decoder prior to decoding any data coming from the memory 708. This Marking Store 710 is simply a table which has a number of bytes for every memory rank to which the memory controller can connect.
The information stored in the marking store 710 essentially contains the locations of the symbols in the error control codeword that are expected to have errors; the decoder 709 uses this information in the calculations leading to the decoding of the Reed-Solomon code to mathematically erase the contributions of the information received in these locations in order to determine whether there are any additional errors in unknown locations that might be correctable.
The symbol location in the marking store 710 can be encoded either as a number to be interpreted as a Galois Field element or as a numeric offset describing the symbol within a DIMM rank. The decoder 709 computes the locations and magnitudes of any additional errors that might be present in the codeword, as well as the error magnitudes of the errors presumed in the known locations. If there are no such errors, then the error magnitudes for the latter will be equal to zero.
Upon correcting any errors in the data received from the memory 708, the decoder 709 forwards the corrected message to a return bus buffer 711 in which data is staged for transmission across a read return data bus 712. The return bus buffer 711 also accepts a gather mode switch signal 717 which, if enabled, causes two or more packets to be buffered and their uncorrectable error flags combined using a logical OR operation in order to generate a single global uncorrectable error flag. The effect is to significantly improve miscorrection rates in the decoder, if so judged necessary.
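The gather-mode flag combination reduces, in essence, to a logical OR across the buffered packets' flags; a brief Python sketch (the packet representation is hypothetical):

```python
# Gather mode: buffer two or more packets and OR their uncorrectable-error
# flags into one global flag, so a possible miscorrection in one packet is
# still flagged if a sibling packet reported an uncorrectable error.
def gather(packets):
    """packets: list of (data, ue_flag) pairs -> (all_data, global_ue)."""
    global_ue = False
    payload = []
    for data, ue in packets:
        payload.append(data)
        global_ue = global_ue or ue   # one bad packet taints the whole group
    return payload, global_ue

data, ue = gather([(b"pkt0", False), (b"pkt1", True)])
```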
The decoder 709 is able to process data coming both from a memory built using ×8 DRAM parts as well as a memory built using ×4 DRAM parts. To this end, there is a signal 713 which may be employed to set an operation mode for the decoder. In the present exemplary embodiment, the ×8/×4 control signal affects only the slow decoder; that is, the fast decoder 714 is oblivious as to what kind of memory the data is being received from. This is possible because for either kind of memory, exactly the same [72,65] Reed-Solomon code (over GF(256)) is employed and because the role of the fast decoder 714 is to correct for any errors denoted by marking information stored in the Marking Store 710 and to correct an additional symbol error only, as opposed to a full new ×4 or ×8 chipkill (we refer the reader to
Most of the circuitry in the decoder is attributed to the fast decoder 714, and as such the present approach offers a design in which largely the same circuitry can be used to decode memories of two different types.
The slow decoder 715 has the responsibility of locating new chipkills and therefore needs to know whether it is looking for ×4 chipkills or ×8 chipkills; it uses the ×8/×4 select signal 713 to determine in which mode to operate. The operation of the slow decoder 715 requires the memory controller to retry any pending requests to memory since the slow decoder can only process one request at a time. At the end of the operation of the slow decoder, a new chipkill may have been found and if so, the Marking Store 710 is updated automatically with the information about the chipkill thus encountered. In addition, the firmware 703 is notified that a new chipkill has been encountered, so that it can note the chipkill in a logfile and start tracking further errors coming from the associated memory rank. This notification takes place through the maintenance block 716, which has the ability to communicate with the firmware 703.
Information communicated back to the firmware 703 through the maintenance block 716 is not limited to new chipkills encountered. If the fast decoder has found an additional symbol error beyond those that might be found in marked locations (given by information coming from the Marking Store 710), then a notification that a New Correctable Symbol Error (NCSE) has occurred is given to the maintenance block 716, which in turn communicates it to the firmware 703.
The firmware 703 also has the ability to affect the marking store. This is allowed because the firmware 703 has considerably more resources than the hardware to keep track of failure statistics, and as such it might decide to remove a chipkill mark that was placed automatically by the decoder 714 in the marking store 710, since that chipkill might have been a temporary failure. In addition, the firmware 703 might decide to place a symbol mark if too many New Correctable Symbol Errors are being generated at some location in memory. The firmware 703 is also able to place chipkill marks. Since the hardware can also update the marking store table 710, a method for coordinating conflicting writes in the marking store 710 is needed. In this invention, the firmware 703 may request a change to the table 710 and then a notification that the write was successful may be obtained from the hardware.
The syndromes fed to the Modified Syndrome Computation engine 804 can come from the syndrome generation circuit 806 or may come externally through an optional syndrome bypass path 807. The fast decoder 801 has a signal 808 that selects which syndrome is fed to the modified syndrome computation engine 804. The bypass path is useful in the slow decoder, where the fast decoder 801 needs to be reused on the same data but it is inconvenient to feed the original data again for syndrome generation. Instead, the previously computed syndrome of the original data can be fed through the bypass path 807. To this end, the syndrome of the data is an output 809 of the fast decoder.
The modified syndromes are fed to a circuit 810 that computes the error magnitudes of those errors that might exist in known locations, as well as the error magnitude and location of a potential new correctable symbol error. The result of this computation, along with several other partial computations useful for the generation of flags, are passed to an error correction stage 811 that combines the original potentially corrupted data (which has been stored temporarily in a channel buffer 812) with the error magnitudes and the (potential) new symbol error location computed in the earlier stage 810. In 811 we additionally compute several flags which summarize the analysis that the fast decoder has done of the data 805. These flags are: 1) the Correctable Error (CE) flag, which is true if any error (marked or not) was corrected and false otherwise; 2) the New Correctable Symbol Error (NCSE) flag, which is true if and only if the fast decoder 801 found (and corrected) a symbol in error in a location not previously marked by the marking info 802; and 3) the FastUE flag, which is true if the error present in the data cannot be corrected by the fast decoder given the current marking information 802.
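The three flags computed in stage 811 can be illustrated with a deliberately simplified model. The sketch below ignores the actual Galois-field syndrome arithmetic and represents errors as a map from symbol location to error magnitude; the function name and data representation are assumptions made for illustration.

```python
def fast_decode_flags(errors, marks):
    """Illustrative flag computation for the fast decoder (stage 811).

    `errors` maps symbol locations to error magnitudes; `marks` is the set
    of locations known bad from the marking info. The fast decoder can
    correct all errors in marked locations plus at most ONE new symbol
    error in an unmarked location.
    """
    new_errors = {loc for loc in errors if loc not in marks}
    ncse = len(new_errors) == 1        # exactly one new correctable symbol
    fast_ue = len(new_errors) > 1      # beyond the fast decoder's ability
    ce = bool(errors) and not fast_ue  # some error was actually corrected
    return {"CE": ce, "NCSE": ncse, "FastUE": fast_ue}
```

Note how FastUE is raised as soon as a second unmarked location is in error, which is what triggers the retry path described later.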
It will be appreciated by the reader that the Fast decoder does not have an ×4/×8 input to modify its behavior depending on the nature of the memory parts used to build a memory system.
The marking information 803 is fed to a module 813 that computes a marking information score which is then fed to the error correction and flag computation stage 811. The score is a measure of how much exposure the system has to unsupported errors and is directly related to how many symbols have been previously marked according to the marking info 802. This score may simply be related to a count of the number of symbols that have been marked or may be a more complex function of the location and number of marked symbols. We adopt the convention that a low score indicates a higher exposure than a higher score. The error correction and flag computation stage can take advantage of the score as follows. It is known that many hard failures in DRAM parts are concentrated in a single pin of the DRAM. As it may be appreciated from
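The simplest scoring function mentioned above, one based only on a count of marked symbols, can be sketched as follows. The default of 18 symbol positions is an assumption made for the example, not a figure taken from the design; the convention that a lower score means higher exposure follows the text.

```python
def marking_score(marks, total_symbols=18):
    """Illustrative count-based score from module 813: each previously
    marked symbol increases exposure to unsupported errors, so the score
    decreases as marks accumulate (low score = high exposure)."""
    return total_symbols - len(marks)
```

A more complex function of both the number and the locations of the marks could be substituted here without changing how stage 811 consumes the score.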
In normal circumstances, 72B worth of data are fed to the Fast decoder, and in the presence of no new errors (in addition to the marked locations) or in the presence of a new correctable symbol error (in addition to the marked locations), the error would be corrected and passed on for consumption by the system.
When the fast decoder declares a FastUE, it could be because the error stored in the DRAM is uncorrectable by the Fast decoder, or because such error would be correctable but the data received by the fast decoder suffered additional corruptions during the transmission from the DRAM storage to the input of the decoder. For example, there could be a transmission error in the bus 106 connecting the hub 104 in a DIMM 103 to a memory controller 110. To this end, the memory controller retries the 72B read request, along with other read requests.
We refer the reader to
An important element of the present design is that the retry is done for two 72B packets, as opposed to only one (a retry of more than two packets is feasible as an extension of this invention). The main reason two 72B packets are requested is so that a more thorough analysis can be made of the errors that might be present in both packets. These two 72B packets are stored in exactly the same memory rank and in fact are streamed back-to-back from the DRAM devices (since the DRAM devices have a burst length of 8 and each 72B packet is communicated in exactly 4 DRAM transfers in either ×4 or ×8 mode). Therefore, a chipkill is expected to corrupt both data packets at the same chip location, of course in general with different error magnitudes. Thus an analysis of both packets at the same time greatly increases the level of confidence that the decoder will have in its verdict on the nature of the error that occurred in both packets.
When the two 72B packets come back to the memory controller 901 after the initial retry happened, they are fed to the decoder in a special retry mode.
A diagram of the retry mode is found in
If either of the 72B packets has a FastUE as determined by the fast decoder 1003, then the decoder requests that the memory controller retry all pending requests BUT the current one. This is done to make time for a Chien search 1007 to be performed. This Chien search is implemented as a serial process during which each chip location is tested to see whether a chipkill may be taking place in that location. It is an important feature of this invention that this process is implemented serially, as that way we attain significant hardware savings. This slow process nonetheless cannot be performed concurrently with other decoding activity, and that is the reason the decoder requests a retry of all pending requests but the current one. The Chien search 1007 is enabled with a signal 1006 from the OR computation 1005. The inputs of the Chien search 1007 are the two syndromes of the 72B packets passed in retry mode, along with the marking information, which is the same for both packets. The output of the Chien search is a (potentially) new set of marking information 1008, which might describe new marks pointing to where the ×8 or ×4 chipkill has occurred. It may be appreciated that the Chien search 1007 is the only place where the ×4/×8 select control signal 1009 is employed in the entire decoder, including the fast decoder and the decoder in retry mode. The Chien search 1007, since it is implemented as a serial process, admits a very efficient implementation when compared to the implementation of the fast decoder.
In some instances it is not legal to search for new chipkills. For example, in ×4 mode at most two chipkills are supported, and therefore it does not make sense to attempt to locate a third chipkill. To this end, there is a stage 1009 to which the old and new marking info are fed, which decides whether the new marks are valid. If they are valid, it feeds them back to the fast decoder (for both 72B packets) so that the fast decoder can attempt to decode the data again. If it is not legal for the Chien search to generate new marking information, then the old marking information is passed instead to both applications of the fast decoder.
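The mark-validity decision above amounts to a bounded selection between the old and the new marking information. In the sketch below, the function name is an assumption, and the chipkill limit is passed in as a parameter because the text only specifies the limit of two for ×4 mode.

```python
def select_marks(old_marks, new_marks, max_chipkills):
    """Illustrative mark-validity stage: apply the Chien search's new
    marks only if they stay within the supported chipkill count
    (2 in x4 mode per the text); otherwise keep the old marks."""
    if len(new_marks) <= max_chipkills:
        return new_marks   # valid: fed back to the fast decoder
    return old_marks       # not legal: fall back to old marking info
```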
If valid new marking information has been generated by the Chien search, then it is expected that the fast decoder will be able to correct a ×4 or ×8 chipkill.
The Chien search is initialized in 1102 where flags chipkill_found and search_fail are set to false and a pointer i is set to the location of the first chip.
In a test 1103, both sets of modified syndromes are checked to see whether a chipkill might exist in that chip location. The way this is attained is by further modifying the modified syndromes to remove any error contributions coming from the chip currently being pointed to by the pointer i, and checking whether the resulting (twice) modified syndromes are all equal to zero. If this is the case for both of the (twice) modified syndromes, then the test 1103 results in “Yes”.
Then the flag chipkill_found is tested to see whether it is equal to “True”. If not, then the chipkill_loc pointer is made to point to the current pointer i, and the chipkill_found flag is set to “True”. If on the other hand the chipkill_found flag is already set to “True” then the search_fail flag is raised. The rationale behind this process is that only exactly one location may claim a chipkill, and if more than one location claims a chipkill, there is ambiguity and the Chien search loop fails.
The procedure described above is repeated until all chips have been examined. Then a module that generates new marks 1104 takes the chipkill_found, search_fail flags, the chipkill_loc pointer, the old marking information, the ×4/×8 select signal and the Enable Chien Search Signal to produce new marking information.
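The serial search loop described in the preceding paragraphs can be sketched as follows. This is an illustrative model, not the hardware implementation: the `remove_chip` helper stands in for the Galois-field step that strips chip i's contribution from the modified syndromes, and its interface is an assumption made for the sketch.

```python
def chien_search(syndromes_a, syndromes_b, num_chips, remove_chip):
    """Illustrative serial Chien search over chip locations.

    `remove_chip(syndromes, i)` is an assumed helper returning the
    twice-modified syndromes with chip i's error contribution removed;
    the real design performs this step in finite-field arithmetic.
    Returns (chipkill_found, search_fail, chipkill_loc).
    """
    chipkill_found = False
    search_fail = False
    chipkill_loc = None
    for i in range(num_chips):           # test each chip location in turn
        zero_a = all(s == 0 for s in remove_chip(syndromes_a, i))
        zero_b = all(s == 0 for s in remove_chip(syndromes_b, i))
        if zero_a and zero_b:            # both 72B packets blame chip i
            if not chipkill_found:
                chipkill_found = True
                chipkill_loc = i
            else:
                search_fail = True       # second claimant: ambiguous
    return chipkill_found, search_fail, chipkill_loc
```

Exactly one location may claim the chipkill; a second claimant raises `search_fail`, mirroring the ambiguity rule in the text.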
A general design philosophy employed in this invention is that an optimized circuit (the fast decoder) is designed to be able to deal with most error events (which do not affect more than one new symbol error), and that a very small circuit that takes much longer to operate is employed in very rare circumstances. This results in lower latency and smaller circuit area than if the decoder had to additionally correct for rare but catastrophic events such as chipkills. When a new chipkill is discovered, a slow procedure to determine its location is invoked (aided by the memory controller request retry functionality), but this does not result in any measurable performance degradation in the system because once the slow procedure has finished, the location of the chipkill becomes known and is stored in the marking store. Thus subsequent accesses to this memory rank no longer result in a retry.
Many modifications and other embodiments of the invention will come to the mind of one skilled in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is understood that the invention is not to be limited to the specific embodiments disclosed, and that other modifications and embodiments are intended to be included within the scope of the appended claims.