US20080307276A1 - Memory Controller with Loopback Test Interface - Google Patents

Memory Controller with Loopback Test Interface

Info

Publication number
US20080307276A1
US20080307276A1
Authority
US
United States
Prior art keywords
data
memory controller
write
read
receivers
Prior art date
Legal status
Granted
Application number
US11/760,566
Other versions
US7836372B2
Inventor
Luka Bodrozic
Sukalpa Biswas
Hao Chen
Sridhar P. Subramanian
James B. Keller
Current Assignee
Apple Inc
Original Assignee
PA Semi Inc
Priority date
Filing date
Publication date
Priority to US11/760,566 (granted as US7836372B2)
Application filed by PA Semi Inc
Assigned to P.A. SEMI, INC. Assignors: KELLER, JAMES B.; SUBRAMANIAN, SRIDHAR P.
Assigned to P.A. SEMI, INC. Assignors: KELLER, JAMES B.; SUBRAMANIAN, SRIDHAR P.; BISWAS, SUKALPA; BODROZIC, LUKA; CHEN, HAI
Assigned to P.A. SEMI, INC. Assignors: KELLER, JAMES B.; SUBRAMANIAN, SRIDHAR P.; BISWAS, SUKALPA; BODROZIC, LUKA; CHEN, HAO
Publication of US20080307276A1
Assigned to APPLE INC. Assignor: PA SEMI, INC.
Priority to US12/909,073 (published as US8086915B2)
Publication of US7836372B2
Application granted
Priority to US13/305,202 (published as US8301941B2)
Expired - Fee Related
Adjusted expiration

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28 - Testing of electronic circuits, e.g. by signal tracer
    • G01R31/317 - Testing of digital circuits
    • G01R31/31712 - Input or output aspects
    • G01R31/31716 - Testing of input or output with loop-back

Definitions

  • This invention is related to the field of memory controllers and, more particularly, to loopback test functionality for memory controllers and integrated circuits including such memory controllers.
  • a symmetrical interface is an interface that has the same protocol and physical attributes in both the transmit and receive directions.
  • For example, the Peripheral Component Interconnect Express (PCIe) interface is symmetrical.
  • One or more lanes are configured into a link, and each lane comprises a transmit serial communication and a receive serial communication.
  • a communication transmitted on the transmit link can fairly easily be returned (or “looped back”) on the receive link.
  • Other interfaces that use loopback testing include Ethernet, for example. Loopback testing allows at-speed, functional test of the interface hardware, all the way to the integrated circuit pins and back.
  • both the functional circuitry and the entire transmission path within the integrated circuit can be tested using loopback. Additionally, the test can be performed inexpensively by coupling the output to the input (possibly with delay for timing purposes and/or minor processing to meet protocol requirements) or coupling a component to the interface, rather than using an expensive at-speed tester.
  • a memory interface from a memory controller to one or more memory modules such as Dual-Inline Memory Modules (DIMMs), is not symmetrical.
  • a unidirectional address/command and address control (row address strobe (RAS), column address strobe (CAS), etc.) interface is provided from the memory controller to the memory modules.
  • a bidirectional data bus and data control (e.g. DQ signals) interface is provided, which flows from the memory controller to the memory modules for a write operation and from the memory modules to the memory controller for a read operation. Accordingly, there is not a natural way to perform loopback testing on the memory interface, to test the memory controller hardware. Typically, expensive tester equipment is used to test the memory controller, increasing the cost of the product.
  • an apparatus comprises an interconnect; at least one processor coupled to the interconnect; and at least one memory controller coupled to the interconnect.
  • the memory controller is programmable by the processor into a loopback test mode of operation. In the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect.
  • the memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules.
  • the memory controller is further configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect. In one embodiment, the read data is driven through the drivers and receivers as well, before being returned on the interconnect.
  • the memory controller comprises a write data buffer, the plurality of drivers, the plurality of receivers, and a controller.
  • the write data buffer is configured to store write data received from one or more write operations on an interconnect to which the memory controller is coupled.
  • the controller is configured, in a loopback test mode of operation, to cause first write data to be transmitted from the write data buffer, through the plurality of drivers and the plurality of receivers, to be recaptured by the memory controller in response to a first write operation.
  • a method comprises issuing a first write operation from a processor to a memory controller in a loopback test mode of operation; routing write data from the first write operation through a plurality of drivers and receivers in the memory controller, wherein the plurality of drivers and receivers are connected to a plurality of data pins that are capable of connection to one or more memory modules; issuing a first read operation from the processor to the memory controller in the loopback test mode of operation; and returning the write data as read data on the interconnect for a first read operation received from the processor on the interconnect.
  • the read data is driven through the plurality of drivers and receivers as well.
  • FIG. 1 is a block diagram of one embodiment of a system.
  • FIG. 2 is a block diagram of one embodiment of a memory controller shown in FIG. 1 .
  • FIG. 3 is a flowchart illustrating operation of a write in loopback test mode for one embodiment.
  • FIG. 4 is a flowchart illustrating operation of a read in loopback test mode for one embodiment.
  • FIG. 5 is a diagram illustrating pseudocode corresponding to code that is executed on a processor shown in FIG. 1 for one embodiment.
  • the system 10 includes a DMA controller 14 , one or more processors such as processors 18 A- 18 B, one or more memory controllers such as memory controllers 20 A- 20 B, an I/O bridge (IOB) 22 , an I/O memory (IOM) 24 , an I/O cache (IOC) 26 , a level 2 (L2) cache 28 , an interconnect 30 , a peripheral interface controller 32 , one or more media access control circuits (MACs) such as MACs 34 A- 34 B, and a physical interface layer (PHY) 36 .
  • the processors 18 A- 18 B, memory controllers 20 A- 20 B, IOB 22 , and L2 cache 28 are coupled to the interconnect 30 .
  • the IOB 22 is further coupled to the IOC 26 and the IOM 24 .
  • the DMA controller 14 is also coupled to the IOB 22 and the IOM 24 .
  • the MACs 34 A- 34 B are coupled to the DMA controller 14 and to the physical interface layer 36 .
  • the peripheral interface controller 32 is also coupled to the I/O bridge 22 and the IOM 24 and to the physical interface layer 36 .
  • the components of the system 10 may be integrated onto a single integrated circuit as a system on a chip. In other embodiments, the system 10 may be implemented as two or more integrated circuits.
  • the DMA controller 14 is configured to perform DMA transfers between the interface circuits 16 and the host address space. Additionally, the DMA controller 14 may, in some embodiments, be configured to perform DMA transfers between sets of memory locations within the address space (referred to as a “copy DMA transfer”).
  • the DMA controller 14 may also be configured to perform one or more operations (or “functions”) on the DMA data as the DMA data is being transferred, in some embodiments.
  • some of the operations that the DMA controller 14 performs are operations on packet data (e.g. encryption/decryption, cyclical redundancy check (CRC) generation or checking, checksum generation or checking, etc.).
  • the operations may also include an exclusive OR (XOR) operation, which may be used for redundant array of inexpensive disks (RAID) processing, for example.
  • the processors 18 A- 18 B comprise circuitry to execute instructions defined in an instruction set architecture implemented by the processors 18 A- 18 B. Specifically, one or more programs comprising the instructions may be executed by the processors 18 A- 18 B. Any instruction set architecture may be implemented in various embodiments. For example, the PowerPC™ instruction set architecture may be implemented. Other exemplary instruction set architectures may include the ARM™ instruction set, the MIPS™ instruction set, the SPARC™ instruction set, the x86 instruction set (also referred to as IA-32), the IA-64 instruction set, etc.
  • the memory controllers 20 A- 20 B comprise circuitry configured to interface to memory.
  • the memory controllers 20 A- 20 B may be configured to interface to dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), double data rate (DDR) SDRAM, DDR2 SDRAM, Rambus DRAM (RDRAM), etc.
  • the memory controllers 20 A- 20 B may receive read and write operations for the memory to which they are coupled from the interconnect 30 , and may perform the read/write operations to the memory.
  • the memory controllers 20 A- 20 B may be configured to interface to one or more DRAM memory modules.
  • a memory module may generally comprise two or more memory chips attached to a printed circuit board that can be inserted into a memory module slot on another printed circuit board to which the memory controllers 20 A- 20 B are coupled. Each memory controller 20 A- 20 B may be coupled to one or more memory module slots on the board, and each memory module slot may be coupled to only one memory controller 20 A- 20 B.
  • the memory module may also include other memory circuitry, such as the advanced memory buffer (AMB) included in fully buffered DIMMs.
  • Memory modules may include DIMMs, single inline memory modules (SIMMs), etc.
  • the L2 cache 28 may comprise a cache memory configured to cache copies of data corresponding to various memory locations in the memories to which the memory controllers 20 A- 20 B are coupled, for low latency access by the processors 18 A- 18 B and/or other agents on the interconnect 30 .
  • the L2 cache 28 may comprise any capacity and configuration (e.g. direct mapped, set associative, etc.).
  • the IOB 22 comprises circuitry configured to communicate transactions on the interconnect 30 on behalf of the DMA controller 14 and the peripheral interface controller 32 .
  • the interconnect 30 may support cache coherency, and the IOB 22 may participate in the coherency and ensure coherency of transactions initiated by the IOB 22 .
  • the IOB 22 employs the IOC 26 to cache recent transactions initiated by the IOB 22 .
  • the IOC 26 may have any capacity and configuration, in various embodiments, and may be coherent.
  • the IOC 26 may be used, e.g., to cache blocks of data which are only partially updated due to reads/writes generated by the DMA controller 14 and the peripheral interface controller 32 .
  • read-modify-write sequences may be avoided on the interconnect 30 , in some cases. Additionally, transactions on the interconnect 30 may be avoided for a cache hit in the IOC 26 for a read/write generated by the DMA controller 14 or the peripheral interface controller 32 if the IOC 26 has sufficient ownership of the cache block to complete the read/write. Other embodiments may not include the IOC 26 .
  • the IOM 24 may be used as a staging buffer for data being transferred between the IOB 22 and the peripheral interface controller 32 or the DMA controller 14 .
  • the data path between the IOB 22 and the DMA controller 14 /peripheral interface controller 32 may be through the IOM 24 .
  • the control path (including read/write requests, addresses in the host address space associated with the requests, etc.) may be between the IOB 22 and the DMA controller 14 /peripheral interface controller 32 directly.
  • the IOM 24 may not be included in other embodiments.
  • the interconnect 30 may comprise any communication medium for communicating among the processors 18 A- 18 B, the memory controllers 20 A- 20 B, the L2 cache 28 , and the IOB 22 .
  • the interconnect 30 may be a bus with coherency support.
  • the interconnect 30 may alternatively be a point-to-point interconnect between the above agents, a packet-based interconnect, or any other interconnect.
  • the interconnect may be coherent, and the protocol for supporting coherency may vary depending on the interconnect type.
  • the address interconnect may be a broadcast address bus (with staging to absorb one or more clock cycles of transmit latency).
  • a partial crossbar data interconnect may be implemented for data transmission.
  • the MACs 34 A- 34 B may comprise circuitry implementing the media access controller functionality defined for network interfaces.
  • one or more of the MACs 34 A- 34 B may implement the Gigabit Ethernet standard.
  • One or more of the MACs 34 A- 34 B may implement the 10 Gigabit Ethernet Attachment Unit Interface (XAUI) standard.
  • Other embodiments may implement other Ethernet standards, such as the 10 Megabit or 100 Megabit standards, or any other network standard.
  • Other embodiments may have more or fewer MACs, and any mix of MAC types.
  • the MACs 34 A- 34 B that implement Ethernet standards may strip off the inter-frame gap (IFG), the preamble, and the start of frame delimiter (SFD) from received packets and may provide the remaining packet data to the DMA controller 14 for DMA to memory.
  • the MACs 34 A- 34 B may be configured to insert the IFG, preamble, and SFD for packets received from the DMA controller 14 as a transmit DMA transfer, and may transmit the packets to the PHY 36 for transmission.
  • the peripheral interface controller 32 comprises circuitry configured to control a peripheral interface.
  • the peripheral interface controller 32 may control a peripheral component interconnect (PCI) Express interface.
  • Other embodiments may implement other peripheral interfaces (e.g. PCI, PCI-X, universal serial bus (USB), etc.) in addition to or instead of the PCI Express interface.
  • the PHY 36 may generally comprise the circuitry configured to physically communicate on the external interfaces to the system 10 under the control of the interface circuits 16 .
  • the PHY 36 may comprise a set of serializer/deserializer (SERDES) circuits that may be configured for use as PCI Express lanes or as Ethernet connections.
  • the PHY 36 may include the circuitry that performs 8b/10b encoding/decoding for transmission through the SERDES and synchronization first-in, first-out (FIFO) buffers, and also the circuitry that logically configures the SERDES links for use as PCI Express or Ethernet communication links.
  • the PHY may comprise 24 SERDES that can be configured as PCI Express lanes or Ethernet connections. Any desired number of SERDES may be configured as PCI Express and any desired number may be configured as Ethernet connections.
  • system 10 may include one or any number of any of the elements shown in FIG. 1 (e.g. processors, memory controllers, caches, I/O bridges, DMA controllers, and/or interface circuits, etc.).
  • the memory controllers 20 A- 20 B may be designed to perform loopback testing on the memory interface.
  • the memory controller functionality may be tested at speed using loopback operation, and may be performed relatively inexpensively, in some embodiments.
  • data from a write operation may be looped back into a data buffer in the memory controller 20 A- 20 B.
  • a subsequent read operation to the same address as the write operation may cause the memory controller 20 A- 20 B to return the data from the data buffer.
  • the read and write data may be compared to determine that proper operation was observed.
  • one of the processors 18 A- 18 B may execute a program including one or more instructions that generate the write operation and the read operation, when executed.
  • the program may also include instructions to compare the read data to the write data to detect an error (and thus detect that the test fails).
  • the loopback test mode may additionally include looping back the address information to a buffer or queue.
  • the write address may be captured in this fashion, and may be compared to read addresses to detect that data is to be returned. Multiple write operations may be performed before a corresponding set of read operations, if desired, using the comparison to discern which write data to return for each read operation.
  • the memory controller 20 A may be preconfigured with addresses that are to be read and written during the test. The looped-back address information may be compared to the preconfigured addresses to capture/supply data for write/read operations.
  • If a read operation occurs in loopback test mode and no data is detected for the read, the memory controller 20 A- 20 B may return error data for the read.
  • the error data may be any data that results in an error being detected.
  • the error data may simply be different from the data in the read data buffer 42 (e.g. the data may be inverted and returned). Alternatively, any random data may be returned, or all zeros may be returned.
  • Data with erroneous ECC may be returned, if the interconnect 30 supports ECC. If the interconnect 30 permits returning an error status instead of data, the error data may comprise the error status.
  • Patterns of address and data may be selected from any of a number of test patterns (e.g. an alternating ones and zeros pattern and the inverse of the pattern, etc.).
  • Turning now to FIG. 2 , a block diagram of one embodiment of the memory controller 20 A is shown.
  • the memory controller 20 B may be similar.
  • the memory controller 20 A includes an interconnect interface circuit 74 , a write data buffer 40 , a read data buffer 42 , a scheduler 44 , a transaction queue 46 , a memory interface queue 48 , a memory interface controller 50 that includes a loopback mode bit (LPM) in a control register 54 , a merge buffer 56 , muxes 52 and 64 , drivers 62 and 66 , and receivers 68 and 72 .
  • the interconnect interface circuit 74 is coupled to the interconnect 30 and to the write data buffer 40 , the read data buffer 42 , and the transaction queue 46 .
  • the scheduler 44 is coupled to the transaction queue 46 , which is coupled to the memory interface queue 48 .
  • the memory interface queue 48 is coupled to the memory interface controller 50 and to the mux 64 , the output of which is coupled to the drivers 66 .
  • the drivers 66 are coupled to a plurality of address and control pins (Addr/Ctl Pins) to which one or more memory modules are capable of being coupled.
  • the address pins are further coupled to the receivers 72 , which are coupled to receive a disable input from the memory interface controller 50 and are coupled to the merge buffer 56 .
  • the memory interface controller 50 is also coupled to the merge buffer 56 , the read data buffer 42 , the write data buffer 40 , and the muxes 52 and 64 .
  • the memory interface controller 50 is coupled to provide an input to the mux 64 as well as a selection control to the mux 64 .
  • the memory interface controller 50 is coupled to provide mux control to the mux 52 also.
  • the write data buffer 40 is coupled to the mux 52 as an input.
  • the mux 52 output is coupled to the drivers 62 , and the other input of the mux 52 is coupled to the merge buffer 56 .
  • the drivers 62 are coupled to a plurality of data and data control pins (Data/Data Ctl Pins), which are also coupled to the receivers 68 , which are coupled to the read data buffer 42 and the merge buffer 56 .
  • the interconnect interface circuit 74 includes the circuitry for interfacing to the interconnect 30 , using appropriate protocol, timing, physical signalling, etc.
  • the interconnect interface circuit 74 may further include circuitry for decoding the address of a transaction on the interconnect 30 , to detect read or write operations that are targeted at memory locations controlled by the memory controller 20 A. That is, the interconnect interface circuit 74 may be programmable with one or more memory address ranges that are mapped to the memory controller 20 A.
  • the interconnect interface circuit 74 may include buffering for timing/pipelining reasons, but may generally use the write data buffer 40 , the read data buffer 42 , and the transaction queue 46 for storage, in one embodiment.
  • the interconnect interface circuit 74 may decode the address of a transaction transmitted on the interconnect 30 , and may detect a read or write operation mapped to the memory controller 20 A.
  • the interconnect interface circuit 74 may write the address portion of the operation to the transaction queue 46 .
  • the transaction queue 46 may comprise a plurality of entries for storing address information for read/write operations.
  • an entry may include an address field (Addr), an identifier (Id) field, and a control (ctl) field.
  • the address field may store the address bits (or at least enough of the address bits to address the memory modules to which the memory controller 20 A may be coupled in normal operation).
  • the Id field may store the identifier transmitted on the interconnect 30 , which may identify the source of the operation and may also include a sequence number or other tag assigned by the source.
  • the control field may store various control information for the operation (e.g. valid bit, size of the operation, read or write, etc.).
  • the scheduler 44 may schedule operations from the transaction queue 46 according to any scheduling criteria. For example, ordering rules may be enforced by the scheduler 44 . The scheduler 44 may also attempt to opportunistically schedule operations that are to the same memory page (and thus can be performed with lower latency “page mode” accesses), if possible. Scheduled operations are read from the transaction queue 46 and written to the memory interface queue 48 , which may comprise a plurality of entries for scheduled operations.
  • the memory interface controller 50 may schedule operations from the memory interface queue 48 and may drive the address control signals on the memory interface, through the mux 64 and the drivers 66 .
  • the address bits may be read from the memory interface queue 48 and may be driven through the mux 64 and on the pins via the drivers 66 . In another embodiment, both address and control may be read from the memory interface queue 48 and provided through the mux 64 to the drivers 66 .
  • the write data may be received from the interconnect interface circuit 74 and may be written to the write data buffer 40 .
  • the memory interface controller 50 may read the write data from the write data buffer 40 for transmission to the memory modules. Data from the write data buffer 40 is driven through the mux 52 and then by the drivers 62 onto the data pins. For cache block sized writes, the complete write data may be transmitted as several “beats” over the data pins.
  • the write data buffer 40 may comprise a plurality of entries. Each entry may store data for one write operation, and thus multiple write operations may have data in the write data buffer 40 at the same time.
  • the read data buffer 42 may comprise a plurality of entries. Each entry may store data for one read operation, and thus multiple read operations may have data in the read data buffer 42 at the same time.
  • a read-modify-write operation may be performed.
  • a read-modify-write operation comprises reading the memory data, modifying the read data with the write data, and writing the modified data back to memory. If the memory includes error correction code (ECC) protection, ECC data may be generated for the modified data and the new ECC data may be written back with the modified data.
  • the merge buffer 56 may be used to support the read-modify-write operation. As part of the read, the address of the operation may be written to the merge buffer 56 .
  • the address may be provided from the transaction queue 46 in parallel with writing it to the memory interface queue 48 , or may be provided by the memory interface queue 48 (separate from the path through the drivers and receivers shown in FIG. 2 ).
  • the read data from the read portion of the read-modify-write is also provided, and is stored in the merge buffer 56 . In other embodiments, only the data may be stored in the merge buffer 56 and the address may be retained in the memory interface queue 48 . Subsequently, the write portion of the read-modify-write may be performed. During the write, data from the merge buffer 56 may be provided to the mux 52 as well as the write data from the write data buffer 40 .
  • For each byte supplied by the write operation, the mux 52 may select the byte from the write data buffer 40 . Otherwise, the read data from the merge buffer 56 is selected for that byte. If the address is also provided from the merge buffer 56 , the address may be muxed into the address path from the memory interface queue 48 to the drivers 66 , or the address may be routed through the memory interface queue 48 from the merge buffer 56 . However, in some embodiments, the memory interface queue 48 may retain the address for a read-modify-write operation and may retransmit the address for the write portion of the operation.
  • the merge buffer 56 may comprise a plurality of entries and thus may support multiple read-modify-write operations outstanding at one time.
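  • As an illustration of the byte-wise selection performed through the mux 52 for the write portion of a read-modify-write, the following C sketch merges write-supplied bytes over the data previously read into the merge buffer 56. The byte-enable mask and buffer layout are assumptions made for this sketch, not details taken from the embodiment.

    #include <stdint.h>
    #include <stddef.h>

    /* Model of the per-byte selection through mux 52 for the write portion
     * of a read-modify-write: bytes supplied by the write operation come
     * from the write data buffer 40, all other bytes keep the read data
     * held in the merge buffer 56. Assumes block_bytes <= 64 so one 64-bit
     * byte-enable mask covers the block. */
    static void merge_partial_write(uint8_t *merged,
                                    const uint8_t *merge_buf_data, /* read data  */
                                    const uint8_t *write_buf_data, /* write data */
                                    uint64_t byte_enables,
                                    size_t block_bytes)
    {
        for (size_t i = 0; i < block_bytes; i++) {
            if (byte_enables & (1ull << i))
                merged[i] = write_buf_data[i];  /* byte written by the operation */
            else
                merged[i] = merge_buf_data[i];  /* byte preserved from the read  */
        }
    }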
  • To support the loopback test mode, the receivers 72 are provided on the address and address control pins. If the loopback test mode were not implemented, the receivers 72 would not be needed, since the address and address control pins are unidirectional in normal mode, from the memory controller 20 A to the memory modules.
  • the receivers 72 may include a disable input, and the memory interface controller 50 may disable the receivers 72 during normal mode.
  • the memory interface controller 50 may be programmable into the loopback test mode (e.g. by software setting the LPM bit in the register 54 , in one embodiment).
  • the loopback test mode control logic may be implemented in the memory interface controller 50 . If loopback test mode is enabled, the memory interface controller 50 may enable the receivers 72 .
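  • A minimal sketch, in C, of how software might place the memory controller into the loopback test mode by setting the LPM bit in the control register 54 and later clearing it. The register address and the bit position are assumptions for illustration; only the existence of a software-writable LPM bit is described above.

    #include <stdint.h>

    /* Hypothetical memory-mapped view of the control register 54. The
     * address and the position of the loopback mode (LPM) bit are
     * illustrative assumptions. */
    #define MC_CTL_REG  ((volatile uint32_t *)0xF0001054u)
    #define MC_CTL_LPM  (1u << 0)

    /* Enter loopback test mode; the memory interface controller 50 is then
     * expected to enable the receivers 72 on the address/control pins. */
    static inline void mc_enter_loopback_mode(void)
    {
        *MC_CTL_REG |= MC_CTL_LPM;
    }

    /* Return to normal mode; the receivers 72 are disabled again. */
    static inline void mc_exit_loopback_mode(void)
    {
        *MC_CTL_REG &= ~MC_CTL_LPM;
    }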
  • FIG. 3 is a flowchart illustrating additional operation for one embodiment of a write operation in the loopback test mode. Blocks are shown in FIG. 3 in a particular order for ease of understanding, but other orders may be used. Blocks may be performed in parallel in combinatorial logic in the memory interface controller 50 and/or may be pipelined over multiple clock cycles as desired.
  • a write operation received in loopback test mode may initially be processed similar to a write operation in normal mode. That is, the address information received from the interconnect 30 may be written to the transaction queue 46 and the data received from the interconnect 30 may be written to the write data buffer 40 .
  • the scheduler 44 may schedule the write operation, and the scheduled write operation may be written to the memory interface queue 48 .
  • the memory interface controller 50 may schedule the write operation from the memory interface queue 48 .
  • the address and control signals may be driven to the address/control pins by the drivers 66 . Additionally, the address bits and control signals may be received by the receivers 72 , which may provide the address bits and control signals to the merge buffer 56 .
  • the memory interface controller 50 may write the address and control information to the merge buffer 56 (block 80 ).
  • The address routed to the merge buffer 56 (through the drivers 66 and receivers 72 ) may be compared to the addresses to select a merge buffer entry to store the write data. If no matching address is detected (which may indicate an error in the address transmission path), then the write data may not be stored.
  • the address may be transmitted in two transmissions over the address pins, with different controls (the row address and the column address).
  • capturing the address in the merge buffer 56 may similarly include two captures of address bits. Comparing the address may include two comparisons. It is noted that, in the loopback test mode, there may not be (and need not be) any actual memory attached to the memory controller pins.
  • the loopback test may be performed on the system 10 in isolation, on the system 10 mounted on a test board without any memory, etc.
  • the write data may be read from the write data buffer 40 and transmitted, through the drivers 62 , onto the data pins. Additionally, the data may flow through the receivers 68 , and the memory interface controller 50 may cause the merge buffer 56 to capture the data (block 82 ). As mentioned previously, the data may be driven as multiple beats over the data pins. Accordingly, the data may be captured as multiple beats as well. Such operation is similar to receiving read data in multiple beats in normal mode. Alternatively, in another embodiment, the data may be captured into the read data buffer 42 . The data may be provided from the read data buffer for a subsequent read to the same address.
  • At this point, the write address and data of the write operation are in an entry of the merge buffer 56 in the present embodiment.
  • Alternatively, the addresses may be stored in control registers that can be written with the expected addresses before the write operation(s) are issued; a sketch of this entry-matching variant follows.
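  • The following C sketch models the FIG. 3 write path in loopback test mode for the control-register variant just described: the looped-back row/column address selects a preconfigured entry, and the looped-back write data is then captured beat by beat into that entry (or dropped when no address matches, indicating an address-path error). The entry layout, buffer depth, and beat handling are assumptions.

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define MERGE_ENTRIES 4    /* assumed merge buffer 56 depth        */
    #define BLOCK_BYTES   64   /* assumed size of one write operation  */

    /* One loopback-mode entry: the expected row/column address plus the
     * write data looped back through the receivers 68. */
    struct lpm_entry {
        bool     valid;
        uint32_t row;
        uint32_t col;
        uint8_t  data[BLOCK_BYTES];
    };

    static struct lpm_entry lpm_entries[MERGE_ENTRIES];

    /* Block 80: compare the address looped back through the drivers 66 and
     * receivers 72 (two transmissions: row, then column) against the
     * entries; return the matching entry, or -1 if the address path failed. */
    static int lpm_match_write_address(uint32_t row, uint32_t col)
    {
        for (int i = 0; i < MERGE_ENTRIES; i++)
            if (lpm_entries[i].valid &&
                lpm_entries[i].row == row && lpm_entries[i].col == col)
                return i;
        return -1;   /* no match: the write data will not be stored */
    }

    /* Block 82: capture one beat of write data looped back through the
     * drivers 62 and receivers 68 into the selected entry. */
    static void lpm_capture_write_beat(int entry, unsigned beat,
                                       const uint8_t *bytes, unsigned nbytes)
    {
        if (entry < 0 || (beat + 1u) * nbytes > BLOCK_BYTES)
            return;
        memcpy(&lpm_entries[entry].data[beat * nbytes], bytes, nbytes);
    }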
  • FIG. 4 is a flowchart illustrating additional operation for one embodiment of a read operation in the loopback test mode. Blocks are shown in FIG. 4 in a particular order for ease of understanding, but other orders may be used. Blocks may be performed in parallel in combinatorial logic in the memory interface controller 50 and/or may be pipelined over multiple clock cycles as desired.
  • a read operation in the loopback test mode may initially be processed similar to a read operation in the normal mode.
  • the read operation may be received from the interconnect 30 by the interconnect interface 74 and may be written to the transaction queue 46 .
  • the scheduler 44 may schedule the read operation, and the scheduled read operation may be written to the memory interface queue 48 .
  • the memory interface controller 50 may schedule the read operation from the memory interface queue 48 .
  • the read address may be driven through the drivers 66 to the address pins, and through the receivers 72 to the merge buffer 56 .
  • the read address may be compared to the address in the merge buffer 56 (or the corresponding control registers) (block 90 ).
  • If the read address does not match a write address in the merge buffer 56 (decision block 92 , “no” leg), the memory interface controller 50 may cause error data to be supplied for the read (block 94 ). If the read address matches a write address in the merge buffer 56 (decision block 92 , “yes” leg), the memory interface controller 50 may cause the data from the associated entry of the merge buffer 56 to be supplied (block 96 ). The supplied data (error data or from the merge buffer) may then be transmitted through the mux 52 , the drivers 62 , and the receivers 68 . The data may be captured by the read data buffer 42 (block 98 ) to be supplied on the interconnect 30 (block 100 ).
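  • A companion C sketch for the FIG. 4 read path, continuing the FIG. 3 sketch above (same lpm_entries table and lpm_match_write_address helper): the looped-back read address is compared against the captured write addresses, the stored data is supplied on a match (block 96), and error data is supplied otherwise (block 94) so the software comparison fails. The all-ones error pattern is only one of the alternatives listed above (inverted data, zeros, bad ECC, or an error status).

    /* Decision block 92: look up the looped-back read address. Block 96
     * supplies the captured data; block 94 supplies error data (all ones
     * assumed here) so the compare on the processor fails. */
    static void lpm_supply_read_data(uint32_t row, uint32_t col,
                                     uint8_t read_data[BLOCK_BYTES])
    {
        int entry = lpm_match_write_address(row, col);

        if (entry >= 0)
            memcpy(read_data, lpm_entries[entry].data, BLOCK_BYTES); /* block 96 */
        else
            memset(read_data, 0xFF, BLOCK_BYTES);                    /* block 94 */

        /* The data then flows through mux 52, drivers 62, and receivers 68,
         * is captured in the read data buffer 42 (block 98), and is returned
         * on the interconnect 30 (block 100). */
    }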
  • each of drivers 62 and 66 may represent a plurality of drivers, one for each pin to which the drivers are connected. The drivers may drive different bits/signals, in parallel.
  • each of the receivers 68 and 72 may represent a plurality of receivers, each coupled to a different pin.
  • first-in, first-out buffers may be provided on the read data path and/or the write data path to cross the clock domain boundary from the interconnect 30 to the clock domain corresponding to the memory interface/memory modules.
  • FIG. 5 is a diagram illustrating pseudocode representing instructions that may be executed by a processor 18 A- 18 B to perform a loopback test of a memory controller 20 A- 20 B for one embodiment.
  • the actual instructions, and number of instructions, may differ from the pseudocode.
  • the processor 18 A- 18 B may execute instructions to write the LPM bit in the control register 54 (Write LPM, CtlReg), to write an address with test data (Write Addr, TestData), to read data from the address to a destination register (Read Addr, DestReg), to compare the test data with the destination register (Compare TestData, DestReg), to branch if the test data is not equal to the destination register to an error handling routine (BNE ErrorTag), to modify one or both of the address and the test data (Modify Addr, TestData), and to branch back to the write operation (B).
  • multiple write operations followed by multiple read operations may be performed, up to the number of operations that can be concurrently queued in the merge buffer 56 .
  • the branch at the end may be a conditional branch based on a loop count or other variable, and then the loopback test may exit (e.g. writing the control register 54 to clear the LPM bit).
  • the write of the LPM bit may be preceded by writes that preconfigure the test addresses into control registers in the memory controller 20 A that correspond to each merge buffer entry.
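  • Written out in C rather than pseudocode, the FIG. 5 sequence might look roughly as follows. The control register address, LPM bit position, test address window, pattern set, and loop bound are all assumptions for illustration, and caching or ordering effects on a real system (e.g. the need for uncached or flushed accesses so the read actually reaches the memory controller) are ignored in this sketch.

    #include <stdint.h>
    #include <stddef.h>

    #define MC_CTL_REG  ((volatile uint32_t *)0xF0001054u)  /* register 54 (assumed address) */
    #define MC_CTL_LPM  (1u << 0)                           /* LPM bit (assumed position)    */

    /* FIG. 5 loop: Write LPM; then repeatedly Write Addr/TestData, Read
     * Addr/DestReg, Compare, branch to error handling on mismatch, Modify
     * Addr/TestData, and branch back; finally clear LPM to exit. */
    int run_loopback_test(void)
    {
        static const uint64_t patterns[] = {
            0x5555555555555555ull, 0xAAAAAAAAAAAAAAAAull,  /* pattern and inverse */
            0x0000000000000000ull, 0xFFFFFFFFFFFFFFFFull,
        };
        volatile uint64_t *addr = (volatile uint64_t *)0x80000000u; /* assumed test address */
        int failures = 0;

        *MC_CTL_REG |= MC_CTL_LPM;                       /* Write LPM, CtlReg     */

        for (size_t i = 0; i < sizeof(patterns) / sizeof(patterns[0]); i++) {
            uint64_t test_data = patterns[i];

            *addr = test_data;                           /* Write Addr, TestData  */
            uint64_t dest_reg = *addr;                   /* Read Addr, DestReg    */

            if (dest_reg != test_data)                   /* Compare; BNE ErrorTag */
                failures++;                              /* error handling routine would run here */

            addr += 8;                                   /* Modify Addr (next 64-byte block) */
        }

        *MC_CTL_REG &= ~MC_CTL_LPM;                      /* exit loopback test mode */
        return failures;                                 /* 0 means the test passed */
    }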

Abstract

In one embodiment, an apparatus comprises an interconnect; at least one processor coupled to the interconnect; and at least one memory controller coupled to the interconnect. The memory controller is programmable by the processor into a loopback test mode of operation and, in the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect. The memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules. The memory controller is further configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect.

Description

    BACKGROUND
  • 1. Field of the Invention
  • This invention is related to the field of memory controllers and, more particularly, to loopback test functionality for memory controllers and integrated circuits including such memory controllers.
  • 2. Description of the Related Art
  • As integrated circuits increase in complexity and in the number of transistors included on a given instance of the circuit, the testing capabilities of the circuit increase in importance. The ability to test the circuit with a high level of test coverage (to ensure that the circuit is not defective) and inexpensively is an important component of producing a high quality, affordable, and profitable integrated circuit product.
  • One mechanism that can be useful for testing on a symmetrical interface is loopback. A symmetrical interface is an interface that has the same protocol and physical attributes in both the transmit and receive directions. For example, the Peripheral Component Interconnect (PCI) Express (PCIe) interface is symmetrical. One or more lanes are configured into a link, and each lane comprises a transmit serial communication and a receive serial communication. Thus, a communication transmitted on the transmit link can fairly easily be returned (or “looped back”) on the receive link. Other interfaces that use loopback testing include Ethernet, for example. Loopback testing allows at-speed, functional test of the interface hardware, all the way to the integrated circuit pins and back. Accordingly, both the functional circuitry and the entire transmission path within the integrated circuit can be tested using loopback. Additionally, the test can be performed inexpensively by coupling the output to the input (possibly with delay for timing purposes and/or minor processing to meet protocol requirements) or coupling a component to the interface, rather than using an expensive at-speed tester.
  • A memory interface, from a memory controller to one or more memory modules such as Dual-Inline Memory Modules (DIMMs), is not symmetrical. Typically, a unidirectional address/command and address control (row address strobe (RAS), column address strobe (CAS), etc.) interface is provided from the memory controller to the memory modules. A bidirectional data bus and data control (e.g. DQ signals) interface is provided, which flows from the memory controller to the memory modules for a write operation and from the memory modules to the memory controller for a read operation. Accordingly, there is not a natural way to perform loopback testing on the memory interface, to test the memory controller hardware. Typically, expensive tester equipment is used to test the memory controller, increasing the cost of the product.
  • SUMMARY
  • In one embodiment, an apparatus comprises an interconnect; at least one processor coupled to the interconnect; and at least one memory controller coupled to the interconnect. The memory controller is programmable by the processor into a loopback test mode of operation. In the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect. The memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules. The memory controller is further configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect. In one embodiment, the read data is driven through the drivers and receivers as well, before being returned on the interconnect.
  • In an embodiment, the memory controller comprises a write data buffer, the plurality of drivers, the plurality of receivers, and a controller. The write data buffer is configured to store write data received from one or more write operations on an interconnect to which the memory controller is coupled. The controller is configured, in a loopback test mode of operation, to cause first write data to be transmitted from the write data buffer, through the plurality of drivers and the plurality of receivers, to be recaptured by the memory controller in response to a first write operation.
  • In one embodiment, a method comprises issuing a first write operation from a processor to a memory controller in a loopback test mode of operation; routing write data from the first write operation through a plurality of drivers and receivers in the memory controller, wherein the plurality of drivers and receivers are connected to a plurality of data pins that are capable of connection to one or more memory modules; issuing a first read operation from the processor to the memory controller in the loopback test mode of operation; and returning the write data as read data on the interconnect for a first read operation received from the processor on the interconnect. In one embodiment, the read data is driven through the plurality of drivers and receivers as well.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The following detailed description makes reference to the accompanying drawings, which are now briefly described.
  • FIG. 1 is a block diagram of one embodiment of a system.
  • FIG. 2 is a block diagram of one embodiment of a memory controller shown in FIG. 1.
  • FIG. 3 is a flowchart illustrating operation of a write in loopback test mode for one embodiment.
  • FIG. 4 is a flowchart illustrating operation of a read in loopback test mode for one embodiment.
  • FIG. 5 is a diagram illustrating pseudocode corresponding to code that is executed on a processor shown in FIG. 1 for one embodiment.
  • While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
  • DETAILED DESCRIPTION OF EMBODIMENTS Overview
  • Turning now to FIG. 1, a block diagram of one embodiment of a system 10 is shown. In the illustrated embodiment, the system 10 includes a DMA controller 14, one or more processors such as processors 18A-18B, one or more memory controllers such as memory controllers 20A-20B, an I/O bridge (IOB) 22, an I/O memory (IOM) 24, an I/O cache (IOC) 26, a level 2 (L2) cache 28, an interconnect 30, a peripheral interface controller 32, one or more media access control circuits (MACs) such as MACs 34A-34B, and a physical interface layer (PHY) 36.
  • The processors 18A-18B, memory controllers 20A-20B, IOB 22, and L2 cache 28 are coupled to the interconnect 30. The IOB 22 is further coupled to the IOC 26 and the IOM 24. The DMA controller 14 is also coupled to the IOB 22 and the IOM 24. The MACs 34A-34B are coupled to the DMA controller 14 and to the physical interface layer 36. The peripheral interface controller 32 is also coupled to the I/O bridge 22 and the IOM 24 and to the physical interface layer 36. In some embodiments, the components of the system 10 may be integrated onto a single integrated circuit as a system on a chip. In other embodiments, the system 10 may be implemented as two or more integrated circuits.
  • The DMA controller 14 is configured to perform DMA transfers between the interface circuits 16 and the host address space. Additionally, the DMA controller 14 may, in some embodiments, be configured to perform DMA transfers between sets of memory locations within the address space (referred to as a “copy DMA transfer”).
  • The DMA controller 14 may also be configured to perform one or more operations (or “functions”) on the DMA data as the DMA data is being transferred, in some embodiments. In one embodiment, some of the operations that the DMA controller 14 performs are operations on packet data (e.g. encryption/decryption, cyclical redundancy check (CRC) generation or checking, checksum generation or checking, etc.). The operations may also include an exclusive OR (XOR) operation, which may be used for redundant array of inexpensive disks (RAID) processing, for example.
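  • As a concrete illustration of the XOR function mentioned for RAID processing, the parity block is the byte-wise XOR of the data blocks, as in the C sketch below. The buffer-based signature is an assumption for illustration; in the system 10 the DMA controller 14 would apply the operation to data as it streams through.

    #include <stdint.h>
    #include <stddef.h>

    /* Compute a RAID-style parity block as the byte-wise XOR of n_blocks
     * equal-length data blocks. */
    static void xor_parity(uint8_t *parity, const uint8_t *const *blocks,
                           size_t n_blocks, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t p = 0;
            for (size_t b = 0; b < n_blocks; b++)
                p ^= blocks[b][i];
            parity[i] = p;
        }
    }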
  • The processors 18A-18B comprise circuitry to execute instructions defined in an instruction set architecture implemented by the processors 18A-18B. Specifically, one or more programs comprising the instructions may be executed by the processors 18A-18B. Any instruction set architecture may be implemented in various embodiments. For example, the PowerPC™ instruction set architecture may be implemented. Other exemplary instruction set architectures may include the ARM™ instruction set, the MIPS™ instruction set, the SPARC™ instruction set, the x86 instruction set (also referred to as IA-32), the IA-64 instruction set, etc.
  • The memory controllers 20A-20B comprise circuitry configured to interface to memory. For example, the memory controllers 20A-20B may be configured to interface to dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), double data rate (DDR) SDRAM, DDR2 SDRAM, Rambus DRAM (RDRAM), etc. The memory controllers 20A-20B may receive read and write operations for the memory to which they are coupled from the interconnect 30, and may perform the read/write operations to the memory. Specifically, the memory controllers 20A-20B may be configured to interface to one or more DRAM memory modules. A memory module may generally comprise two or more memory chips attached to a printed circuit board that can be inserted into a memory module slot on another printed circuit board to which the memory controllers 20A-20B are coupled. Each memory controller 20A-20B may be coupled to one or more memory module slots on the board, and each memory module slot may be coupled to only one memory controller 20A-20B. The memory module may also include other memory circuitry, such as the advanced memory buffer (AMB) included in fully buffered DIMMs. Memory modules may include DIMMs, single inline memory modules (SIMMs), etc.
  • The L2 cache 28 may comprise a cache memory configured to cache copies of data corresponding to various memory locations in the memories to which the memory controllers 20A-20B are coupled, for low latency access by the processors 18A-18B and/or other agents on the interconnect 30. The L2 cache 28 may comprise any capacity and configuration (e.g. direct mapped, set associative, etc.).
  • The IOB 22 comprises circuitry configured to communicate transactions on the interconnect 30 on behalf of the DMA controller 14 and the peripheral interface controller 32. The interconnect 30 may support cache coherency, and the IOB 22 may participate in the coherency and ensure coherency of transactions initiated by the IOB 22. In the illustrated embodiment, the IOB 22 employs the IOC 26 to cache recent transactions initiated by the IOB 22. The IOC 26 may have any capacity and configuration, in various embodiments, and may be coherent. The IOC 26 may be used, e.g., to cache blocks of data which are only partially updated due to reads/writes generated by the DMA controller 14 and the peripheral interface controller 32. Using the IOC 26, read-modify-write sequences may be avoided on the interconnect 30, in some cases. Additionally, transactions on the interconnect 30 may be avoided for a cache hit in the IOC 26 for a read/write generated by the DMA controller 14 or the peripheral interface controller 32 if the IOC 26 has sufficient ownership of the cache block to complete the read/write. Other embodiments may not include the IOC 26.
  • The IOM 24 may be used as a staging buffer for data being transferred between the IOB 22 and the peripheral interface controller 32 or the DMA controller 14. Thus, the data path between the IOB 22 and the DMA controller 14/peripheral interface controller 32 may be through the IOM 24. The control path (including read/write requests, addresses in the host address space associated with the requests, etc.) may be between the IOB 22 and the DMA controller 14/peripheral interface controller 32 directly. The IOM 24 may not be included in other embodiments.
  • The interconnect 30 may comprise any communication medium for communicating among the processors 18A-18B, the memory controllers 20A-20B, the L2 cache 28, and the IOB 22. For example, the interconnect 30 may be a bus with coherency support. The interconnect 30 may alternatively be a point-to-point interconnect between the above agents, a packet-based interconnect, or any other interconnect. The interconnect may be coherent, and the protocol for supporting coherency may vary depending on the interconnect type. In one embodiment, the address interconnect may be a broadcast address bus (with staging to absorb one or more clock cycles of transmit latency). A partial crossbar data interconnect may be implemented for data transmission.
  • The MACs 34A-34B may comprise circuitry implementing the media access controller functionality defined for network interfaces. For example, one or more of the MACs 34A-34B may implement the Gigabit Ethernet standard. One or more of the MACs 34A-34B may implement the 10 Gigabit Ethernet Attachment Unit Interface (XAUI) standard. Other embodiments may implement other Ethernet standards, such as the 10 Megabit or 100 Megabit standards, or any other network standard. In one implementation, there are 6 MACs, 4 of which are Gigabit Ethernet MACs and 2 of which are XAUI MACs. Other embodiments may have more or fewer MACs, and any mix of MAC types.
  • Among other things, the MACs 34A-34B that implement Ethernet standards may strip off the inter-frame gap (IFG), the preamble, and the start of frame delimiter (SFD) from received packets and may provide the remaining packet data to the DMA controller 14 for DMA to memory. The MACs 34A-34B may be configured to insert the IFG, preamble, and SFD for packets received from the DMA controller 14 as a transmit DMA transfer, and may transmit the packets to the PHY 36 for transmission.
  • The peripheral interface controller 32 comprises circuitry configured to control a peripheral interface. In one embodiment, the peripheral interface controller 32 may control a peripheral component interconnect (PCI) Express interface. Other embodiments may implement other peripheral interfaces (e.g. PCI, PCI-X, universal serial bus (USB), etc.) in addition to or instead of the PCI Express interface.
  • The PHY 36 may generally comprise the circuitry configured to physically communicate on the external interfaces to the system 10 under the control of the interface circuits 16. In one particular embodiment, the PHY 36 may comprise a set of serializer/deserializer (SERDES) circuits that may be configured for use as PCI Express lanes or as Ethernet connections. The PHY 36 may include the circuitry that performs 8b/10b encoding/decoding for transmission through the SERDES and synchronization first-in, first-out (FIFO) buffers, and also the circuitry that logically configures the SERDES links for use as PCI Express or Ethernet communication links. In one implementation, the PHY may comprise 24 SERDES that can be configured as PCI Express lanes or Ethernet connections. Any desired number of SERDES may be configured as PCI Express and any desired number may be configured as Ethernet connections.
  • It is noted that, in various embodiments, the system 10 may include one or any number of any of the elements shown in FIG. 1 (e.g. processors, memory controllers, caches, I/O bridges, DMA controllers, and/or interface circuits, etc.).
  • Memory Controller with Loopback Testing
  • The memory controllers 20A-20B may be designed to perform loopback testing on the memory interface. Thus, the memory controller functionality may be tested at speed using loopback operation, and may be performed relatively inexpensively, in some embodiments.
  • In one embodiment, in a loopback mode of operation, data from a write operation may be looped back into a data buffer in the memory controller 20A-20B. A subsequent read operation to the same address as the write operation may cause the memory controller 20A-20B to return the data from the data buffer. The read and write data may be compared to determine that proper operation was observed. For example, in one embodiment, one of the processors 18A-18B may execute a program including one or more instructions that generate the write operation and the read operation, when executed. The program may also include instructions to compare the read data to the write data to detect an error (and thus detect that the test fails).
  • The loopback test mode may additionally include looping back the address information to a buffer or queue. The write address may be captured in this fashion, and may be compared to read addresses to detect that data is to be returned. Multiple write operations may be performed before a corresponding set of read operations, if desired, using the comparison to discern which write data to return for each read operation. Alternatively, the memory controller 20A may be preconfigured with addresses that are to be read and written during the test. The looped-back address information may be compared to the preconfigured addresses to capture/supply data for write/read operations.
  • If a read operation occurs in loopback test mode and no data is detected for the read (e.g. because there is an error in the address path that results in an incorrect address being captured for comparison), the memory controller 20A-20B may return error data for the read. The error data may be any data that results in an error being detected. For example, the error data may simply be different from the data in the read data buffer 42 (e.g. the data may be inverted and returned). Alternatively, any random data may be returned, or all zeros may be returned. Data with erroneous ECC may be returned, if the interconnect 30 supports ECC. If the interconnect 30 permits returning an error status instead of data, the error data may comprise the error status.
  • By exercising various combinations of binary ones and zeros for the addresses and data in the read/write operations used during loopback test mode, relatively high levels of test coverage may be achieved. Patterns of address and data may be selected from any of a number of test patterns (e.g. an alternating ones and zeros pattern and the inverse of the pattern, etc.).
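  • One way such a pattern set could be enumerated is sketched below in C: alternating ones and zeros, their inverses, all zeros, all ones, and a walking-ones sequence applied across the data (and, by extension, address) bits. This particular set is an assumption; the text only calls for exercising combinations of binary ones and zeros.

    #include <stdint.h>
    #include <stddef.h>

    /* Fill 'out' with up to 'max' test patterns: fixed patterns first, then
     * a walking-ones sequence. Returns the number of patterns produced. */
    static size_t build_test_patterns(uint64_t *out, size_t max)
    {
        static const uint64_t base[] = {
            0x5555555555555555ull,   /* alternating 0101...    */
            0xAAAAAAAAAAAAAAAAull,   /* inverse of the pattern */
            0x0000000000000000ull,
            0xFFFFFFFFFFFFFFFFull,
        };
        size_t n = 0;

        for (size_t i = 0; i < sizeof(base) / sizeof(base[0]) && n < max; i++)
            out[n++] = base[i];

        for (unsigned bit = 0; bit < 64 && n < max; bit++)
            out[n++] = 1ull << bit;  /* walking ones across the data pins */

        return n;
    }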
  • Turning now to FIG. 2, a block diagram of one embodiment of the memory controller 20A is shown. The memory controller 20B may be similar. In the illustrated embodiment, the memory controller 20A includes an interconnect interface circuit 74, a write data buffer 40, a read data buffer 42, a scheduler 44, a transaction queue 46, a memory interface queue 48, a memory interface controller 50 that includes a loopback mode bit (LPM) in a control register 54, a merge buffer 56, muxes 52 and 64, drivers 62 and 66, and receivers 68 and 72. The interconnect interface circuit 74 is coupled to the interconnect 30 and to the write data buffer 40, the read data buffer 42, and the transaction queue 46. The scheduler 44 is coupled to the transaction queue 46, which is coupled to the memory interface queue 48. The memory interface queue 48 is coupled to the memory interface controller 50 and to the mux 64, the output of which is coupled to the drivers 66. The drivers 66 are coupled to a plurality of address and control pins (Addr/Ctl Pins) to which one or more memory modules are capable of being coupled. The address pins are further coupled to the receivers 72, which are coupled to receive a disable input from the memory interface controller 50 and are coupled to the merge buffer 56. The memory interface controller 50 is also coupled to the merge buffer 56, the read data buffer 42, the write data buffer 40, and the muxes 52 and 64. Specifically, the memory interface controller 50 is coupled to provide an input to the mux 64 as well as a selection control to the mux 64. The memory interface controller 50 is coupled to provide mux control to the mux 52 also. The write data buffer 40 is coupled to the mux 52 as an input. The mux 52 output is coupled to the drivers 62, and the other input of the mux 52 is coupled to the merge buffer 56. The drivers 62 are coupled to a plurality of data and data control pins (Data/Data Ctl Pins), which are also coupled to the receivers 68, which are coupled to the read data buffer 42 and the merge buffer 56.
  • The interconnect interface circuit 74 includes the circuitry for interfacing to the interconnect 30, using appropriate protocol, timing, physical signalling, etc. The interconnect interface circuit 74 may further include circuitry for decoding the address of a transaction on the interconnect 30, to detect read or write operations that are targeted at memory locations controlled by the memory controller 20A. That is, the interconnect interface circuit 74 may be programmable with one or more memory address ranges that are mapped to the memory controller 20A. The interconnect interface circuit 74 may include buffering for timing/pipelining reasons, but may generally use the write data buffer 40, the read data buffer 42, and the transaction queue 46 for storage, in one embodiment.
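A minimal sketch (editor's illustration, with hypothetical field names) of the programmable address decode described above: the interconnect interface circuit checks whether a transaction address falls in one of the memory ranges mapped to this memory controller.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative model of programmable memory address ranges mapped to the
 * memory controller; base/limit register names are assumptions. */
struct addr_range { uint64_t base; uint64_t limit; };

static bool maps_to_this_controller(uint64_t addr,
                                    const struct addr_range *ranges, int n)
{
    for (int i = 0; i < n; i++)
        if (addr >= ranges[i].base && addr <= ranges[i].limit)
            return true;                  /* operation targets this controller */
    return false;                         /* ignore; another agent owns it */
}
```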
  • In a normal mode of operation (i.e. non-test mode), the interconnect interface circuit 74 may decode the address of a transaction transmitted on the interconnect 30, and may detect a read or write operation mapped to the memory controller 20A. The interconnect interface circuit 74 may write the address portion of the operation to the transaction queue 46. The transaction queue 46 may comprise a plurality of entries for storing address information for read/write operations. For example, an entry may include an address field (Addr), an identifier (Id) field, and a control (ctl) field. The address field may store the address bits (or at least enough of the address bits to address the memory modules to which the memory controller 20A may be coupled in normal operation). The Id field may store the identifier transmitted on the interconnect 30, which may identify the source of the operation and may also include a sequence number or other tag assigned by the source. The control field may store various control information for the operation (e.g. valid bit, size of the operation, read or write, etc.).
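The following struct is an editor's sketch of a transaction queue entry with the Addr, Id, and control fields listed above; the field widths are assumptions, not specified by the patent.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative transaction queue (46) entry layout. */
struct txq_entry {
    uint64_t addr;      /* enough address bits to address the memory modules */
    uint16_t id;        /* source identifier / sequence tag from the interconnect */
    bool     valid;     /* control: entry holds a pending operation */
    bool     is_write;  /* control: read or write */
    uint8_t  size;      /* control: size of the operation */
};
```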
  • The scheduler 44 may schedule operations from the transaction queue 46 according to any scheduling criteria. For example, ordering rules may be enforced by the scheduler 44. The scheduler 44 may also attempt to opportunistically schedule operations that are to the same memory page (and thus can be performed with lower latency “page mode” accesses), if possible. Scheduled operations are read from the transaction queue 46 and written to the memory interface queue 48, which may comprise a plurality of entries for scheduled operations. The memory interface controller 50 may schedule operations from the memory interface queue 48 and may drive the address and control signals on the memory interface, through the mux 64 and the drivers 66. The address bits may be read from the memory interface queue 48 and may be driven through the mux 64 and onto the pins via the drivers 66. In another embodiment, both address and control may be read from the memory interface queue 48 and provided through the mux 64 to the drivers 66.
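A toy C sketch (editor's illustration, assuming index 0 is the oldest queued operation and a hypothetical row-address shift) of the opportunistic page-mode preference mentioned above: pick the oldest ready operation that hits the currently open page, otherwise fall back to the oldest ready operation.

```c
#include <stdint.h>

#define ROW_SHIFT   13        /* hypothetical: bits above this select the page/row */
#define QUEUE_DEPTH 8

/* Illustrative scheduling policy with a page-hit preference. */
static int pick_next(const uint64_t addr[QUEUE_DEPTH],
                     const int ready[QUEUE_DEPTH], uint64_t open_row)
{
    int oldest = -1;
    for (int i = 0; i < QUEUE_DEPTH; i++) {   /* index 0 assumed oldest */
        if (!ready[i])
            continue;
        if ((addr[i] >> ROW_SHIFT) == open_row)
            return i;                         /* page hit: lower-latency access */
        if (oldest < 0)
            oldest = i;                       /* remember oldest ready op */
    }
    return oldest;                            /* -1 if nothing is ready */
}
```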
  • If the operation is a write, the write data may be received from the interconnect interface circuit 74 and may be written to the write data buffer 40. The memory interface controller 50 may read the write data from the write data buffer 40 for transmission to the memory modules. Data from the write data buffer 40 is driven through the mux 52 and then by the drivers 62 onto the data pins. For cache block sized writes, the complete write data may be transmitted as several “beats” over the data pins. The write data buffer 40 may comprise a plurality of entries. Each entry may store data for one write operation, and thus multiple write operations may have data in the write data buffer 40 at the same time.
  • If the operation is a read, the data is received by the receivers 68 and is captured in the read data buffer 42, and then may be transferred from the read data buffer 42 onto the interconnect 30. The read data buffer 42 may comprise a plurality of entries. Each entry may store data for one read operation, and thus multiple read operations may have data in the read data buffer 42 at the same time.
  • For non-cache block sized write operations, if the size of the operation is not a multiple of the data beat size on the data pins, a read-modify-write operation may be performed. A read-modify-write operation comprises reading the memory data, modifying the read data with the write data, and writing the modified data back to memory. If the memory includes error correction code (ECC) protection, ECC data may be generated for the modified data and the new ECC data may be written back with the modified data. The merge buffer 56 may be used to support the read-modify-write operation. As part of the read, the address of the operation may be written to the merge buffer 56. The address may be provided from the transaction queue 46 in parallel with writing it to the memory interface queue 48, or may be provided by the memory interface queue 48 (separate from the path through the drivers and receivers shown in FIG. 2). The read data from the read portion of the read-modify-write is also provided, and is stored in the merge buffer 56. In other embodiments, only the data may be stored in the merge buffer 56 and the address may be retained in the memory interface queue 48. Subsequently, the write portion of the read-modify-write may be performed. During the write, data from the merge buffer 56 may be provided to the mux 52 as well as the write data from the write data buffer 40. If a given byte is being written, the mux 52 may select the byte from the write data buffer 40. Otherwise, the read data from the merge buffer 56 is selected for that byte. If the address is also provided from the merge buffer 56, the address may be muxed into the address path from the memory interface queue 48 to the drivers 66, or the address may be routed through the memory interface queue 48 from the merge buffer 56. However, in some embodiments, the memory interface queue 48 may retain the address for a read-modify-write operation and may retransmit the address for the write portion of the operation. The merge buffer 56 may comprise a plurality of entries and thus may support multiple read-modify-write operations outstanding at one time.
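As a short editor's illustration of the byte selection performed through the mux 52 during the write half of a read-modify-write, the sketch below merges write bytes and previously read bytes under per-byte enables. The function and parameter names are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative byte merge: bytes covered by the write's byte enables come
 * from the write data buffer (40); all other bytes come from the read data
 * held in the merge buffer (56). */
static void merge_bytes(uint8_t *out, const uint8_t *write_data,
                        const uint8_t *merge_data, const bool *byte_en,
                        int nbytes)
{
    for (int i = 0; i < nbytes; i++)
        out[i] = byte_en[i] ? write_data[i] : merge_data[i];
}
```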
  • To support the loopback test mode, the receivers 72 are provided. If the loopback test mode were not implemented, the receivers 72 would not be needed, since the address and address control pins are unidirectional in normal mode, from the memory controller 20A to the memory modules. In the illustrated embodiment, the receivers 72 may include a disable input, and the memory interface controller 50 may disable the receivers 72 during normal mode.
  • The memory interface controller 50 may be programmable into the loopback test mode (e.g. by software setting the LPM bit in the register 54, in one embodiment). In this embodiment, the loopback test mode control logic may be implemented in the memory interface controller 50. If loopback test mode is enabled, the memory interface controller 50 may enable the receivers 72.
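A minimal register-model sketch (editor's illustration; the bit position and structure names are hypothetical) of the behavior just described: software sets the LPM bit in the control register 54, and the memory interface controller enables the otherwise-idle address receivers 72 only while that bit is set.

```c
#include <stdint.h>
#include <stdbool.h>

#define LPM_BIT 0x1u   /* hypothetical position of the loopback mode bit */

/* Illustrative model of the memory interface controller's mode control. */
struct mem_if_ctrl {
    uint32_t ctl_reg54;               /* stands in for control register 54 */
    bool     addr_receivers_enabled;  /* stands in for the receivers-72 disable input */
};

static void write_ctl_reg(struct mem_if_ctrl *c, uint32_t value)
{
    c->ctl_reg54 = value;
    /* Receivers 72 are disabled in normal mode and enabled in loopback mode. */
    c->addr_receivers_enabled = (value & LPM_BIT) != 0;
}
```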
  • FIG. 3 is a flowchart illustrating additional operation for one embodiment of a write operation in the loopback test mode. Blocks are shown in FIG. 3 in a particular order for ease of understanding, but other orders may be used. Blocks may be performed in parallel in combinatorial logic in the memory interface controller 50 and/or may be pipelined over multiple clock cycles as desired.
  • A write operation received in loopback test mode may initially be processed similar to a write operation in normal mode. That is, the address information received from the interconnect 30 may be written to the transaction queue 46 and the data received from the interconnect 30 may be written to the write data buffer 40. The scheduler 44 may schedule the write operation, and the scheduled write operation may be written to the memory interface queue 48. The memory interface controller 50 may schedule the write operation from the memory interface queue 48. The address and control signals may be driven to the address/control pins by the drivers 66. Additionally, the address bits and control signals may be received by the receivers 72, which may provide the address bits and control signals to the merge buffer 56. The memory interface controller 50 may write the address and control information to the merge buffer 56 (block 80). Alternatively, in embodiments in which the addresses are preconfigured in the memory controller 20A, the address routed to the merge buffer 56 (through the drivers 66 and receivers 72) may be compared to the preconfigured addresses to select a merge buffer entry to store the write data. If no matching address is detected (which may indicate an error in the address transmission path), then the write data may not be stored.
  • For DRAMs, the address may be transmitted in two transmissions over the address pins, with different controls (the row address and the column address). Thus, capturing the address in the merge buffer 56 may similarly include two captures of address bits. Comparing the address may include two comparisons. It is noted that, in the loopback test mode, there may not be (and need not be) any actual memory attached to the memory controller pins. The loopback test may be performed on the system 10 in isolation, on the system 10 mounted on a test board without any memory, etc.
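For illustration only, a tiny sketch of reassembling the two address captures described above (row phase, then column phase) into one value for comparison; the row/column widths are hypothetical DRAM geometry, not taken from the patent.

```c
#include <stdint.h>

#define ROW_BITS 14    /* hypothetical DRAM geometry */
#define COL_BITS 10

/* Illustrative: combine the two captures taken off the address pins into the
 * address compared against read addresses or preconfigured test addresses. */
static uint32_t reassemble_addr(uint32_t row_capture, uint32_t col_capture)
{
    uint32_t row = row_capture & ((1u << ROW_BITS) - 1);
    uint32_t col = col_capture & ((1u << COL_BITS) - 1);
    return (row << COL_BITS) | col;
}
```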
  • The write data may be read from the write data buffer 40 and transmitted, through the drivers 62, onto the data pins. Additionally, the data may flow through the receivers 68, and the memory interface controller 50 may cause the merge buffer 56 to capture the data (block 82). As mentioned previously, the data may be driven as multiple beats over the data pins. Accordingly, the data may be captured as multiple beats as well. Such operation is similar to receiving read data in multiple beats in normal mode. Alternatively, in another embodiment, the data may be captured into the read data buffer 42. The data may be provided from the read data buffer for a subsequent read to the same address.
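An editor's sketch of the beat-by-beat capture described above, accumulating looped-back write data into a single merge buffer entry exactly as read data from real memory modules would be accumulated in normal mode. The beat width and beat count are assumptions.

```c
#include <stdint.h>
#include <string.h>

#define BEAT_BYTES 8    /* hypothetical data-pin width per beat */
#define BEATS      8    /* e.g. a 64-byte cache block transferred in 8 beats */

/* Illustrative merge buffer entry and per-beat capture. */
struct merge_entry { uint8_t data[BEAT_BYTES * BEATS]; int beats_captured; };

static void capture_beat(struct merge_entry *e, const uint8_t *beat)
{
    if (e->beats_captured < BEATS) {
        memcpy(&e->data[e->beats_captured * BEAT_BYTES], beat, BEAT_BYTES);
        e->beats_captured++;    /* entry is complete after BEATS captures */
    }
}
```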
  • Accordingly, when the write operation completes in loopback mode, the write address and data of the write operation are in an entry of the merge buffer 56 in the present embodiment. Alternatively, in embodiments that store only data in the merge buffer 56, the addresses may be stored in control registers that can be written with the expected addresses before the write operation(s) are issued.
  • FIG. 4 is a flowchart illustrating additional operation for one embodiment of a read operation in the loopback test mode. Blocks are shown in FIG. 4 in a particular order for ease of understanding, but other orders may be used. Blocks may be performed in parallel in combinatorial logic in the memory interface controller 50 and/or may be pipelined over multiple clock cycles as desired.
  • A read operation in the loopback test mode may initially be processed similar to a read operation in the normal mode. The read operation may be received from the interconnect 30 by the interconnect interface circuit 74 and may be written to the transaction queue 46. The scheduler 44 may schedule the read operation, and the scheduled read operation may be written to the memory interface queue 48. The memory interface controller 50 may schedule the read operation from the memory interface queue 48. The read address may be driven through the drivers 66 to the address pins, and through the receivers 72 to the merge buffer 56. The read address may be compared to the address in the merge buffer 56 (or the corresponding control registers) (block 90). If the read address does not match a write address in the merge buffer 56 (decision block 92, “no” leg), the memory interface controller 50 may cause error data to be supplied for the read (block 94). If the read address matches a write address in the merge buffer 56 (decision block 92, “yes” leg), the memory interface controller 50 may cause the data from the associated entry of the merge buffer 56 to be supplied (block 96). The supplied data (error data or from the merge buffer) may then be transmitted through the mux 52, the drivers 62, and the receivers 68. The data may be captured by the read data buffer 42 (block 98) to be supplied on the interconnect 30 (block 100).
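The C sketch below is an editor's toy model of blocks 90-100 just described: compare the looped-back read address against the captured write addresses and supply either the matching data or error data (here, inverted stale data). Entry counts, sizes, and names are assumptions.

```c
#include <stdint.h>
#include <string.h>

#define ENTRIES     4
#define DATA_BYTES 64

/* Illustrative merge buffer entry holding a captured write address and data. */
struct lb_entry { uint64_t addr; int valid; uint8_t data[DATA_BYTES]; };

/* read_buf holds the read data buffer entry's stale contents on entry and the
 * data to forward to the interconnect on return. */
static void loopback_read(const struct lb_entry *mb, uint64_t read_addr,
                          uint8_t *read_buf)
{
    for (int i = 0; i < ENTRIES; i++) {
        if (mb[i].valid && mb[i].addr == read_addr) {
            memcpy(read_buf, mb[i].data, DATA_BYTES);   /* block 96: match */
            return;
        }
    }
    for (int i = 0; i < DATA_BYTES; i++)                /* block 94: error data */
        read_buf[i] = (uint8_t)~read_buf[i];
}
```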
  • It is noted that each of drivers 62 and 66 may represent a plurality of drivers, one for each pin to which the drivers are connected. The drivers may drive different bits/signals, in parallel. Similarly, each of the receivers 68 and 72 may represent a plurality of receivers, each coupled to a different pin. It is further noted that, in some embodiments, first-in, first-out buffers may be provided on the read data path and/or the write data path to cross the clock domain boundary from the interconnect 30 to the clock domain corresponding to the memory interface/memory modules.
  • FIG. 5 is a diagram illustrating pseudocode representing instructions that may be executed by a processor 18A-18B to perform a loopback test of a memory controller 20A-20B for one embodiment. The actual instructions, and number of instructions, may differ from the pseudocode. The processor 18A-18B may execute instructions to write the LPM bit in the control register 54 (Write LPM, CtlReg), to write an address with test data (Write Addr, TestData), to read data from the address to a destination register (Read Addr, DestReg), to compare the test data with the destination register (Compare TestData, DestReg), to branch if the test data is not equal to the destination register to an error handling routine (BNE ErrorTag), to modify one or both of the address and the test data (Modify Addr, TestData), and to branch back to the write operation (B). As noted previously, multiple write operations followed by multiple read operations may be performed, up to the number of operations that can be concurrently queued in the merge buffer 56. The branch at the end may be a conditional branch based on a loop count or other variable, and then the loopback test may exit (e.g. by writing the control register 54 to clear the LPM bit). In some embodiments, the write of the LPM bit may be preceded by writes that preconfigure the test addresses into control registers in the memory controller 20A that correspond to each merge buffer entry.
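Purely as an editor's illustration of the FIG. 5 pseudocode flow, the C program below performs the write/read/compare/modify loop against a toy in-memory model. On real hardware the control register and test addresses would be memory-mapped; ctl_reg54, fake_mem, and the LPM constant here are hypothetical stand-ins, not the patent's interfaces.

```c
#include <stdint.h>
#include <stdio.h>

#define LPM   0x1u
#define LOOPS 4

static uint32_t ctl_reg54;          /* stands in for control register 54 */
static uint64_t fake_mem[LOOPS];    /* stands in for looped-back storage */

static void     wr(int a, uint64_t d) { fake_mem[a] = d; }     /* Write Addr, TestData */
static uint64_t rd(int a)             { return fake_mem[a]; }  /* Read Addr, DestReg   */

int main(void)
{
    ctl_reg54 |= LPM;                                   /* Write LPM, CtlReg */
    uint64_t test = 0x5555555555555555ULL;
    for (int addr = 0; addr < LOOPS; addr++) {          /* B: loop back to the write */
        wr(addr, test);
        uint64_t dest = rd(addr);
        if (dest != test) {                             /* Compare TestData, DestReg / BNE ErrorTag */
            printf("loopback error at address %d\n", addr);
            return 1;
        }
        test = ~test;                                   /* Modify Addr, TestData */
    }
    ctl_reg54 &= ~LPM;                                  /* exit loopback test mode */
    return 0;
}
```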
  • Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims (19)

1. A memory controller comprising:
a write data buffer configured to store write data received from one or more write operations on an interconnect to which the memory controller is coupled;
a plurality of drivers configured to drive write data from the write data buffer on a plurality of pins to which a plurality of memory modules are capable of being coupled;
a plurality of receivers configured to receive data from the plurality of pins; and
a controller configured, in a loopback test mode of operation, to cause first write data to be transmitted from the write data buffer, through the plurality of drivers and the plurality of receivers, to be captured in the memory controller in response to a first write operation.
2. The memory controller as recited in claim 1 wherein, in response to a first read operation received from the interconnect, the controller is configured to cause the first write data to be transmitted on the interconnect.
3. The memory controller as recited in claim 2 further comprising a merge buffer which, in a normal mode of operation, is configured to store data of one or more read-modify-write operations, and wherein the controller is configured to compare a read address from the first read operation to addresses of data in the merge buffer to determine which data to provide in response to the first read operation.
4. The memory controller as recited in claim 3 further comprising a second plurality of drivers and a second plurality of receivers, wherein the second plurality of drivers are configured to drive address pins to which the one or more memory modules are capable of being coupled, and wherein the second plurality of receivers are configured to provide address bits from the address pins to the merge buffer for storage in the loopback test mode, whereby a write address of the first write operation is stored in the merge buffer in the loopback test mode.
5. The memory controller as recited in claim 3 wherein the merge buffer is written with one or more addresses prior to the first write operation, and wherein the memory controller further comprises a second plurality of drivers and a second plurality of receivers, wherein the second plurality of drivers are configured to drive address pins to which the one or more memory modules are capable of being coupled, and wherein the second plurality of receivers are configured to provide address bits from the address pins for comparison to the merge buffer to detect a merge buffer entry to store the write data.
6. The memory controller as recited in claim 5 wherein the second plurality of receivers comprise a disable input, and wherein the controller is configured to disable the second plurality of receivers in the normal mode.
7. The memory controller as recited in claim 4 wherein the second plurality of receivers comprise a disable input, and wherein the controller is configured to disable the second plurality of receivers in the normal mode.
8. The memory controller as recited in claim 3 wherein, if the read address does not match an address in the merge buffer, the controller is configured to cause error data to be returned in response to the first read operation.
9. The memory controller as recited in claim 8 wherein the controller is configured to cause the data from the merge buffer to be transmitted through the plurality of drivers and plurality of receivers to a read data buffer.
10. An apparatus comprising:
an interconnect;
at least one processor coupled to the interconnect; and
at least one memory controller coupled to the interconnect, wherein the memory controller is programmable by the processor into a loopback test mode of operation, and wherein, in the loopback test mode, the memory controller is configured to receive a first write operation from the processor over the interconnect, and wherein the memory controller is configured to route write data from the first write operation through a plurality of drivers and receivers connected to a plurality of data pins that are capable of connection to one or more memory modules, and wherein the memory controller is configured to return the write data as read data on the interconnect for a first read operation received from the processor on the interconnect.
11. The apparatus as recited in claim 10 wherein the processor is configured to execute a program during the loopback test mode that includes instructions which, when executed, cause the first write operation and the first read operation.
12. The apparatus as recited in claim 11 wherein the program further includes one or more instructions which check that the read data matches the write data.
13. The apparatus as recited in claim 10 wherein the memory controller further comprises a merge buffer which, in a normal mode of operation, is configured to store data of one or more read-modify-write operations, and wherein the memory controller is configured to compare a read address from the first read operation to addresses of the data in the merge buffer to determine which data to provide in response to the first read operation.
14. The apparatus as recited in claim 13 wherein the memory controller further comprises a second plurality of drivers and a second plurality of receivers, wherein the second plurality of drivers are configured to drive address pins to which the one or more memory modules are capable of being coupled, and wherein the second plurality of receivers are configured to provide address bits from the address pins to the merge buffer in the loopback test mode.
15. The apparatus as recited in claim 14 wherein the second plurality of receivers comprise a disable input, and wherein the memory controller is configured to disable the second plurality of receivers in the normal mode.
16. The apparatus as recited in claim 13 wherein, if the read address does not match an address in the merge buffer, the memory controller is configured to return error data in response to the first read operation.
17. A method comprising:
issuing a first write operation from a processor to a memory controller in a loopback test mode of operation;
routing write data from the first write operation through a plurality of drivers and receivers in the memory controller, wherein the plurality of drivers and receivers are connected to a plurality of data pins that are capable of connection to one or more memory modules;
issuing a first read operation from the processor to the memory controller in the loopback test mode of operation; and
returning the write data as read data on the interconnect for the first read operation received from the processor on the interconnect.
18. The method as recited in claim 17 further comprising checking that the read data matches the write data in the processor.
19. The method as recited in claim 17 wherein the memory controller further comprises a second plurality of drivers and a second plurality of receivers, wherein the second plurality of drivers are configured to drive address pins to which the one or more memory modules are capable of being coupled, and wherein the second plurality of receivers are configured to provide address bits from the address pins back to the memory controller, and wherein the second plurality of receivers comprise a disable input, the method further comprising disabling the second plurality of receivers in a normal mode of operation by the memory controller.
US11/760,566 2007-06-08 2007-06-08 Memory controller with loopback test interface Expired - Fee Related US7836372B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US11/760,566 US7836372B2 (en) 2007-06-08 2007-06-08 Memory controller with loopback test interface
US12/909,073 US8086915B2 (en) 2007-06-08 2010-10-21 Memory controller with loopback test interface
US13/305,202 US8301941B2 (en) 2007-06-08 2011-11-28 Memory controller with loopback test interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/760,566 US7836372B2 (en) 2007-06-08 2007-06-08 Memory controller with loopback test interface

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/909,073 Continuation US8086915B2 (en) 2007-06-08 2010-10-21 Memory controller with loopback test interface

Publications (2)

Publication Number Publication Date
US20080307276A1 true US20080307276A1 (en) 2008-12-11
US7836372B2 US7836372B2 (en) 2010-11-16

Family

ID=40096992

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/760,566 Expired - Fee Related US7836372B2 (en) 2007-06-08 2007-06-08 Memory controller with loopback test interface
US12/909,073 Active US8086915B2 (en) 2007-06-08 2010-10-21 Memory controller with loopback test interface
US13/305,202 Active US8301941B2 (en) 2007-06-08 2011-11-28 Memory controller with loopback test interface

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/909,073 Active US8086915B2 (en) 2007-06-08 2010-10-21 Memory controller with loopback test interface
US13/305,202 Active US8301941B2 (en) 2007-06-08 2011-11-28 Memory controller with loopback test interface

Country Status (1)

Country Link
US (3) US7836372B2 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010085405A1 (en) 2009-01-22 2010-07-29 Rambus Inc. Maintenance operations in a dram
JP2012027734A (en) * 2010-07-23 2012-02-09 Panasonic Corp Memory controller and memory access system
US8607104B2 (en) * 2010-12-20 2013-12-10 Advanced Micro Devices, Inc. Memory diagnostics system and method with hardware-based read/write patterns
US8839057B2 (en) * 2011-02-03 2014-09-16 Arm Limited Integrated circuit and method for testing memory on the integrated circuit
US8904248B2 (en) 2012-07-10 2014-12-02 Apple Inc. Noise rejection for built-in self-test with loopback
KR102077072B1 (en) * 2013-07-05 2020-02-14 에스케이하이닉스 주식회사 Parallel test device and method
US9304709B2 (en) 2013-09-06 2016-04-05 Western Digital Technologies, Inc. High performance system providing selective merging of dataframe segments in hardware
JP6164003B2 (en) * 2013-09-25 2017-07-19 富士通株式会社 Memory control apparatus, information processing apparatus, and information processing apparatus control method
US9576682B2 (en) 2014-03-20 2017-02-21 International Business Machines Corporation Traffic and temperature based memory testing
PT2939980T (en) 2014-04-30 2018-06-26 Omya Int Ag Production of precipitated calcium carbonate
US10825545B2 (en) * 2017-04-05 2020-11-03 Micron Technology, Inc. Memory device loopback systems and methods
US11199967B2 (en) * 2018-07-13 2021-12-14 Micron Technology, Inc. Techniques for power management using loopback
US11201811B2 (en) 2019-03-18 2021-12-14 International Business Machines Corporation Multiport network adapter loopback hardware
WO2021041445A1 (en) * 2019-08-27 2021-03-04 Rambus Inc. Joint command dynamic random access memory (dram) apparatus and methods
KR20210112845A (en) * 2020-03-06 2021-09-15 에스케이하이닉스 주식회사 Memory device and test operation thereof
EP4237377A1 (en) 2020-11-02 2023-09-06 Omya International AG Process for producing precipitated calcium carbonate in the presence of natural ground calcium carbonate

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6016525A (en) * 1997-03-17 2000-01-18 Lsi Logic Corporation Inter-bus bridge circuit with integrated loopback capability and method for use of same
KR19990024602A (en) * 1997-09-04 1999-04-06 윤종용 Parallel Port Test Method of Personal Computer Using Loopback
US7353362B2 (en) * 2003-07-25 2008-04-01 International Business Machines Corporation Multiprocessor subsystem in SoC with bridge between processor clusters interconnetion and SoC system bus

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US504976A (en) * 1893-09-12 Loading device
US3599160A (en) * 1969-03-06 1971-08-10 Interdata Inc Time division multiplexing
US4799152A (en) * 1984-10-12 1989-01-17 University Of Pittsburgh Pipeline feedback array sorter with multi-string sort array and merge tree array
US4858116A (en) * 1987-05-01 1989-08-15 Digital Equipment Corporation Method and apparatus for managing multiple lock indicators in a multiprocessor computer system
US5043976A (en) * 1989-08-31 1991-08-27 Minister Of The Post Telecommunications And Space (Centre National D'etudes Des Telecommunications) Loop-back device for half-duplex optical transmission system
US5161162A (en) * 1990-04-12 1992-11-03 Sun Microsystems, Inc. Method and apparatus for system bus testability through loopback
US5321805A (en) * 1991-02-25 1994-06-14 Westinghouse Electric Corp. Raster graphics engine for producing graphics on a display
US5392302A (en) * 1991-03-13 1995-02-21 Quantum Corp. Address error detection technique for increasing the reliability of a storage subsystem
US5701306A (en) * 1994-08-26 1997-12-23 Nec Corporation Semiconductor integrated circuit which can be tested by an LSI tester having a reduced number of pins
US5553265A (en) * 1994-10-21 1996-09-03 International Business Machines Corporation Methods and system for merging data during cache checking and write-back cycles for memory reads and writes
US5812472A (en) * 1997-07-16 1998-09-22 Tanisys Technology, Inc. Nested loop method of identifying synchronous memories
US6928593B1 (en) * 2000-09-18 2005-08-09 Intel Corporation Memory module and memory component built-in self test
US20030120989A1 (en) * 2001-12-26 2003-06-26 Zumkehr John F. Method and circuit to implement double data rate testing
US20040130344A1 (en) * 2002-04-18 2004-07-08 Rohrbaugh John G. Systems and methods for testing receiver terminations in integrated circuits
US7202545B2 (en) * 2003-08-28 2007-04-10 Infineon Technologies Ag Memory module and method for operating a memory module
US20080147969A1 (en) * 2004-11-12 2008-06-19 Melissa Ann Barnum Separate Handling of Read and Write of Read-Modify-Write
US20060282722A1 (en) * 2005-05-24 2006-12-14 Kingston Technology Corp. Loop-Back Memory-Module Extender Card for Self-Testing Fully-Buffered Memory Modules
US7197676B2 (en) * 2005-05-24 2007-03-27 Kingston Technology Corp. Loop-Back Memory-Module Extender Card for Self-Testing Fully-Buffered Memory Modules
US20070038846A1 (en) * 2005-08-10 2007-02-15 P.A. Semi, Inc. Partial load/store forward prediction
US20080181047A1 (en) * 2007-01-30 2008-07-31 Renesas Technology Corp. Semiconductor device

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090216966A1 (en) * 2008-02-25 2009-08-27 International Business Machines Corporation Method, system and computer program product for storing external device result data
US8250336B2 (en) * 2008-02-25 2012-08-21 International Business Machines Corporation Method, system and computer program product for storing external device result data
WO2010120449A2 (en) * 2009-04-16 2010-10-21 Freescale Semiconductor Inc. Memory testing with snoop capabilities in a data processing system
US20100268999A1 (en) * 2009-04-16 2010-10-21 Morrison Gary R Memory testing with snoop capabilities in a data processing system
WO2010120449A3 (en) * 2009-04-16 2011-01-20 Freescale Semiconductor Inc. Memory testing with snoop capabilities in a data processing system
US8312331B2 (en) * 2009-04-16 2012-11-13 Freescale Semiconductor, Inc. Memory testing with snoop capabilities in a data processing system
US20140052950A1 (en) * 2012-08-14 2014-02-20 Fujitsu Limited System controlling apparatus, information processing system, and controlling method of system controlling apparatus
US20140250340A1 (en) * 2013-03-01 2014-09-04 International Business Machines Corporation Self monitoring and self repairing ecc
US8996953B2 (en) * 2013-03-01 2015-03-31 International Business Machines Corporation Self monitoring and self repairing ECC
US20150178147A1 (en) * 2013-03-01 2015-06-25 International Business Machines Corporation Self monitoring and self repairing ecc
US9535784B2 (en) * 2013-03-01 2017-01-03 International Business Machines Corporation Self monitoring and self repairing ECC
US11210247B2 (en) * 2017-09-27 2021-12-28 Chengdu Starblaze Technology Co., Ltd. PCIe controller and loopback data path using PCIe controller
US10978170B2 (en) * 2018-01-26 2021-04-13 Samsung Electronics Co., Ltd. Method and system for monitoring information of a memory module in real time
CN110881009A (en) * 2018-09-06 2020-03-13 迈普通信技术股份有限公司 Method, device, communication equipment and storage medium for receiving test message

Also Published As

Publication number Publication date
US20120072787A1 (en) 2012-03-22
US7836372B2 (en) 2010-11-16
US8301941B2 (en) 2012-10-30
US20110035560A1 (en) 2011-02-10
US8086915B2 (en) 2011-12-27

Similar Documents

Publication Publication Date Title
US7836372B2 (en) Memory controller with loopback test interface
US7676728B2 (en) Apparatus and method for memory asynchronous atomic read-correct-write operation
US7698478B2 (en) Managed credit update
US7461286B2 (en) System and method for using a learning sequence to establish communications on a high-speed nonsynchronous interface in the absence of clock forwarding
KR101842568B1 (en) Early identification in transactional buffered memory
US7266633B2 (en) System and method for communicating the synchronization status of memory modules during initialization of the memory modules
US7412555B2 (en) Ordering rule and fairness implementation
US5519839A (en) Double buffering operations between the memory bus and the expansion bus of a computer system
JP7247213B2 (en) debug controller circuit
US5313627A (en) Parity error detection and recovery
US7342816B2 (en) Daisy chainable memory chip
US5966728A (en) Computer system and method for snooping date writes to cacheable memory locations in an expansion memory device
JPH032943A (en) Storage system
US6256693B1 (en) Master/slave data bus employing undirectional address and data lines and request/acknowledge signaling
KR100195856B1 (en) Data-processing system with bidirectional synchronous multidrop data bus
KR20220116033A (en) Error Recovery for Non-Volatile Memory Modules
US7345900B2 (en) Daisy chained memory system
US7673093B2 (en) Computer system having daisy chained memory chips
US7345901B2 (en) Computer system having daisy chained self timed memory chips
JPH0695981A (en) Workstation having central processing unit cpu and system bus, i.e. server
US7627711B2 (en) Memory controller for daisy chained memory chips
US20080028125A1 (en) Computer System Having an Apportionable Data Bus
US7577811B2 (en) Memory controller for daisy chained self timed memory chips
US7660942B2 (en) Daisy chainable self timed memory chip
US7711885B2 (en) Bus control apparatus and bus control method

Legal Events

Date Code Title Description
AS Assignment

Owner name: P.A. SEMI, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SUBRAMANIAN, SRIDHAR P.;KELLER, JAMES B.;REEL/FRAME:019749/0479;SIGNING DATES FROM 20070625 TO 20070821

Owner name: P.A. SEMI, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BODROZIC, LUKA;BISWAS, SUKALPA;CHEN, HAI;AND OTHERS;REEL/FRAME:019749/0540;SIGNING DATES FROM 20070608 TO 20070821

Owner name: P.A. SEMI, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SUBRAMANIAN, SRIDHAR P.;KELLER, JAMES B.;SIGNING DATES FROM 20070625 TO 20070821;REEL/FRAME:019749/0479

Owner name: P.A. SEMI, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BODROZIC, LUKA;BISWAS, SUKALPA;CHEN, HAI;AND OTHERS;SIGNING DATES FROM 20070608 TO 20070821;REEL/FRAME:019749/0540

AS Assignment

Owner name: P.A. SEMI, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BODROZIC, LUKA;BISWAS, SUKALPA;CHEN, HAO;AND OTHERS;REEL/FRAME:021025/0176;SIGNING DATES FROM 20070608 TO 20070821

Owner name: P.A. SEMI, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BODROZIC, LUKA;BISWAS, SUKALPA;CHEN, HAO;AND OTHERS;SIGNING DATES FROM 20070608 TO 20070821;REEL/FRAME:021025/0176

AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PA SEMI, INC.;REEL/FRAME:022793/0565

Effective date: 20090508

Owner name: APPLE INC.,CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PA SEMI, INC.;REEL/FRAME:022793/0565

Effective date: 20090508

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20221116