|Publication number||US20010044874 A1|
|Application number||US 08/798,950|
|Publication date||Nov 22, 2001|
|Filing date||Feb 11, 1997|
|Priority date||Apr 24, 1996|
|Also published as||US6345348|
|Inventors||Naoya Watanabe, Akira Yamazaki|
|Original Assignee||Naoya Watanabe, Akira Yamazaki|
 1. Field of the Invention
 The present invention relates to a memory system and a semiconductor memory device used therefor, and more particularly to a high speed memory system and a semiconductor memory device for such a system that achieves high speed transfer of a large amount of data.
 2. Description of the Background Art
 The performance of microprocessors has been improving, and the storage capacity of the Dynamic Random Access Memory (DRAM) used as a memory device is increasing. However, a large amount of data (including instructions) requested by the microprocessor cannot be transferred at high speed from the DRAM to the microprocessor, since the operation speed of the DRAM is slower than that of the microprocessor. Therefore, a high speed memory system has been proposed in which a memory controller/processor and a plurality of DRAMs are connected by a bus, and data are consecutively transferred in synchronization with a clock signal. As one example of such a high speed memory system, a memory system employing a high speed memory interface referred to as “Sync Link” will be described in the following.
FIG. 16 is an illustration showing a structure of a general Sync Link memory system. In FIG. 16, the memory system includes: a controller 1; a send link 10 transmitting a request packet output from controller 1; memories (RAMs) 2-0 to 2-n arranged in parallel, connected to send link 10 in parallel with each other, and each executing a designated operation according to the request packet supplied via send link 10; a sink link 20 commonly coupled to memories 2-0 to 2-n and transmitting a response packet read from a selected memory to controller 1; and a control signal bus 12 transmitting a flag flg and a strobe srb, which are operation timing signals, from controller 1.
 Strobe srb on control signal bus 12 defines an operation speed and an operation timing of controller 1 and memories 2-0 to 2-n, and flag flg indicates the start of a packet transmitted onto send link 10. Send link 10 transmits the request packet from controller 1 in one direction only, while sink link 20 transfers the response packet output from memories 2-0 to 2-n only in one direction toward controller 1. The request packet includes a slave ID (identifier) for identifying each of the memories 2-0 to 2-n, a command which instructs an operation to be executed, and information on address and write data, for example. The response packet transferred onto sink link 20 includes only read data in a normal operation.
 As for the path along which the request packet is transferred from controller 1 to the memories and the response packet is transferred from the memories via sink link 20, the length of the packet transfer path is made equal for each of memories 2-0 to 2-n. Accordingly, sink link 20 includes a portion coupled to memories 2-0 to 2-n transferring the response packet output from a selected memory in the direction away from controller 1, and a portion transferring the response packet in the direction toward controller 1. The packet transfer path of the same length allows controller 1 to take the same period of time for each of memories 2-0 to 2-n from outputting the request packet to obtaining the response packet, and synchronized packet transfer is thus easily implemented.
 It is noted that controller 1 may be a processor. In the following description, “memory controller” is used as a term referring to both of a controller controlling the access to memories 2-0 to 2-n and a processor having an operational processing function.
 Send link 10 generally has a width of 8 or 9 bits, and sink link 20 has a bit width two times larger than that of send link 10.
FIG. 17 is a timing chart at the time of data reading of the memory system. Referring to FIG. 17, a data reading operation will be described.
 At time t0, an “open•row” request is generated. Prior to the sending of the open•row packet at time t0, flag flg is raised from “0” to “1”. Transfer of the packet is instructed by the rise of flag flg. The open•row packet includes a slave ID (identifier) designating one of memories 2-0 to 2-n, a command indicating the open•row, and an address designating a row to be opened. In the case of the open•row, an addressed row in a designated memory 2-i is selected. At this time, only a row select operation is performed, and data in the memory cells connected to the selected row is not output. Therefore, no response packet is output onto sink link 20.
 At time t1, a “read•of•open” request is output. At time t1, flag flg is again raised from “0” to “1”, and transfer of a packet is instructed. The “read•of•open” request instructs selection of a necessary memory cell out of the memory cells connected to the row selected by the open•row command, and reading of its data. In other words, the read•of•open corresponds to an ordinary “page hit” state. The request packet on send link 10 is taken into the addressed memory at both the rise and fall edges of strobe srb. A time period required for the addressed row to be selected in the addressed memory (corresponding to RAS-CAS delay time tRCD of an ordinary DRAM) is needed between time t0 and time t1.
 According to the read•of•open, the data in the addressed memory cell is read from the addressed memory and is sent onto sink link 20 at time t2. The time between time t1 and time t2 is defined by information included in the request packet. The response packet (read data) on sink link 20 is taken into controller 1 at one of the rise and fall edges of strobe srb.
 The bit width of send link 10 is one half that of sink link 20, while the sampling rate on send link 10 is two times higher than that of sink link 20. The data transfer rate is accordingly the same. The request packet and the response packet are transferred respectively on send link 10 and sink link 20, so that data can be consecutively transferred between the memory controller and the memory by sending the request packet to one memory while sending the response packet to memory controller 1 from another memory.
FIG. 18 is a timing chart representing an operation at the time of data writing in the memory system shown in FIG. 16. At the time of data writing, transfer of a request packet is likewise indicated by the rise of flag flg from “0” to “1” prior to time t0. A request packet instructing the open•row is sent onto send link 10. An addressed row is selected in an addressed memory by the open•row.
 After an elapse of the row activation time (tRCD), a request packet instructing a write operation is sent at time t1. The request packet instructing the write operation includes a slave ID for identifying a memory, write data, a command indicating writing of data, and the number of write data. When the write request packet is sent at time t1, data is written to an addressed memory cell (column) on the row selected by the open•row in the addressed memory. In the case of the write request packet, no response packet is sent onto sink link 20, since only the writing of data is executed.
 At the time of data writing access, only the sending of the request packet is performed using send link 10. Therefore, a response packet can be sent using sink link 20 in parallel with the data writing operation, and high speed data transfer is thus possible.
FIGS. 19A and 19B show structures of packets transmitted and received by the controller. FIG. 19A illustrates a request packet. The request packet includes an identifier area 22 storing a slave ID (identifier) for identifying the memory, a command area 24 storing a command instructing an operation to be executed, and an information area 26 storing information such as the address, the time to start a response, the number of transfer data bytes, and write data. FIG. 19B illustrates a structure of a response packet. A response packet 28 is transmitted only in response to a request packet, and includes only read data as its information.
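The two packet formats above can be sketched as data structures; note this is an illustrative sketch only, and the field names, types, and widths are assumptions, not the patent's actual bit layout.

```python
# Illustrative sketch of the request/response packets of FIGS. 19A and 19B.
# Field names and types are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RequestPacket:
    slave_id: int          # identifier area 22: designates one memory
    command: str           # command area 24: e.g. "open_row", "read_of_open"
    info: List[int] = field(default_factory=list)  # area 26: address, response timing, byte count, write data

@dataclass
class ResponsePacket:
    read_data: List[int]   # a response packet carries only read data

req = RequestPacket(slave_id=2, command="open_row", info=[0x1A3])
resp = ResponsePacket(read_data=[0x55, 0xAA])
```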
 As described above, when information is transmitted in the form of a packet, the size of each area included in the packet is fixed. Therefore, the bit number of the address information and the like must be constant. The memory controller has no knowledge of information specific to the memories constituting the memory system (the row/column address bit numbers, the storage capacity, and the bank number), so that the memories employed in the memory system must have the same structure, and a problem of lack of flexibility in structuring the memory system occurs. In other words, if a nonvolatile memory is used in addition to the dynamic random access memory (DRAM) in the memory system, the memory system cannot be structured when their address configurations differ, and the flexibility of the system is impaired.
 Further, when the memory system is utilized in a system which processes image data while executing ordinary operational processing, a memory for storing the image data and a memory for storing the data used in the operational processing are often provided separately in the memory system. In this case, if the characteristics of the memory for storing image data and the memory for storing the data (instructions and data) used for the operational processing are different, the memory controller cannot recognize the characteristics of the individual memories constituting the memory system. As a result, a memory system cannot be structured utilizing memories of different types or characteristics. Accordingly, the use of the memory system is limited and the generality of the system is adversely affected.
 An object of the present invention is to provide a memory system and a semiconductor memory device for the memory system capable of mixedly employing memories having different characteristics.
 Another object of the present invention is to provide a semiconductor memory device which can be easily incorporated in a high speed memory system.
 A semiconductor memory device according to a first aspect of the invention is provided with a storing unit for storing specific information representing its inherent characteristics, and an output circuit for transferring the specific information stored in the storing unit onto a bus according to a transfer instruction command supplied via the bus.
 A memory system according to a second aspect of the invention includes a memory controller, and a plurality of semiconductor memory devices connected in parallel with each other to the memory controller via first and second buses. Each of the plurality of semiconductor memory devices is provided with a storing unit for storing specific information inherent to the semiconductor memory device, and an output circuit for transmitting the specific information stored in the storing unit onto the second bus according to a transfer instruction command supplied via the first bus.
 Since the information inherent to respective semiconductor memory devices is transferred to the memory controller, the memory controller can manage the specific information for respective memories (semiconductor memory devices), and achieve an efficient address mapping, so that a memory system can easily be structured under the management of the memory controller even if the memories have different characteristics.
 The foregoing and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
FIG. 1 schematically shows an entire structure of a memory according to the present invention.
FIG. 2 is a flow chart representing an operation of a memory controller for the memory shown in FIG. 1.
FIG. 3 schematically shows a structure of a memory management table of the memory controller according to the first embodiment of the invention.
FIG. 4A shows an output bit structure of an output buffer in the memory shown in FIG. 1, and FIG. 4B shows a correspondence between the output bit of the output buffer and storage capacity.
FIG. 5A schematically shows a structure of an array portion of the memory in FIG. 1, and FIG. 5B shows structures of row and column addresses for designating a memory cell.
FIG. 6 is an illustration showing the manner in which address bit number information is transmitted according to the second embodiment of the present invention.
FIG. 7 schematically shows a structure of a memory management table provided in the memory controller according to the second embodiment of the invention.
FIG. 8 schematically shows a structure of a request packet transmitted via a send link from the memory controller to the memory.
FIG. 9 is a schematic illustration of a structure of a main portion of the memory according to the second embodiment of the present invention.
FIG. 10 is an illustration showing how the memory management table in the memory controller is utilized, as one example, according to the second embodiment of the invention.
FIG. 11 schematically shows a structure of a main portion of the memory according to the third embodiment of the invention.
FIG. 12 is an illustration showing how the bank number information is transferred according to the third embodiment of the present invention.
FIG. 13 shows a correspondence between a CPU address and a memory address.
FIG. 14 schematically shows a structure of a memory management table in a memory controller according to the third embodiment of the present invention.
FIG. 15 is an illustration showing how the bank number information is utilized as one example according to the third embodiment of the invention.
FIG. 16 is a schematic representation of a structure of a high speed memory system which has been proposed conventionally.
FIG. 17 shows a sequence of request packet transfer at the time of data reading in the memory system shown in FIG. 16.
FIG. 18 is a timing chart showing the timing of transfer of a request packet for writing data in the memory system shown in FIG. 16.
FIGS. 19A and 19B respectively show structures of a request packet and a response packet.
 [First Embodiment]
FIG. 1 schematically shows an entire structure of a memory according to the present invention. In FIG. 1, a memory 2 includes: an input buffer 50 receiving a request packet from send link 10; a command decoder 52 receiving the request packet via input buffer 50, decoding a command included in the request packet, and generating a control signal according to the result of decoding; a memory portion 54 having a plurality of memory cells arrayed in rows and columns and accessing a memory cell addressed according to address information supplied via input buffer 50 under the control of command decoder 52; a ROM portion 56 storing specific information inherent to memory 2; and an output buffer 58 transmitting information from one of memory portion 54 and ROM portion 56 onto sink link 20 at a prescribed timing under the control of command decoder 52. ROM portion 56 reads the stored specific information and supplies it to output buffer 58 under the control of command decoder 52.
 Command decoder 52 is activated when a slave ID (identifier) included in the request packet supplied via send link 10 is the same as the slave ID stored in an identifier register (not shown), and decodes the command of the request packet supplied from input buffer 50. The identifier of memory 2 is allocated to each of the memories in an initialization sequence, as shown under “RAM” in FIG. 16. The slave IDs are incremented one by one starting from zero, beginning at the memory nearest to controller 1. The initialization sequence is executed through a path which is not shown in the figure, and includes the following steps.
 An initialization command is supplied via send link 10, and the slave IDs of all of the memories 2 are set to an initial value. Next, the controller transfers a slave ID setting command and the slave ID onto the send link, and outputs a slave ID input enable signal for the memory adjacent to the controller via a path not shown in FIG. 16. The memory 2-0 nearest to the controller (see FIG. 16) stores the data supplied on the send link as its own slave ID when the slave ID input enable signal is activated. After storing the slave ID, the memory transmits the slave ID input enable signal to the adjacent memory. Each memory takes the slave ID supplied on the send link and stores it as its own identifier only when the slave ID input enable signal is supplied to it. After the storing of the slave ID by the last memory is completed, the memory of the final stage transfers an identifier store completion signal (slave ID input enable signal) to the memory controller. The memory controller thus recognizes that the slave IDs of all of the memories included in the memory system have been set.
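The daisy-chain assignment described above can be sketched as follows; this is a minimal model in which the rippling enable signal is represented by the iteration order, and the `Memory` class and function names are illustrative, not from the patent.

```python
# Sketch of the slave ID initialization sequence: consecutive IDs starting
# from zero are stored memory by memory, beginning at the memory nearest the
# controller, as the input enable signal ripples down the chain.
class Memory:
    def __init__(self):
        self.slave_id = None  # initial value set by the initialization command

def assign_slave_ids(memories):
    """Assign 0, 1, 2, ... in daisy-chain order; return the number assigned."""
    next_id = 0
    for mem in memories:          # enable signal passes from memory to memory
        mem.slave_id = next_id    # memory stores the ID while enabled
        next_id += 1
    return next_id                # completion: the controller learns the count

mems = [Memory() for _ in range(4)]
count = assign_slave_ids(mems)
# IDs are now 0..3 in order of distance from the controller
```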
 After the initialization sequence, the memory controller reads the specific information included in each memory 2. The structure of the memory system is the same as that shown in FIG. 16. A sequence of reading specific information according to the first embodiment of the present invention is hereinafter described referring to the flow chart shown in FIG. 2.
 After the initialization sequence is completed and all of the memories included in the memory system store their respective slave IDs, memory controller 1 sends a load command with a slave ID onto send link 10 (Step S1). Receiving the load command, command decoder 52 in the memory designated by the slave ID decodes the load command and instructs ROM portion 56 to read the stored information. ROM portion 56 reads the stored information and supplies it to output buffer 58 under the control of command decoder 52. Output buffer 58 sends the specific information onto sink link 20 at a prescribed timing under the control of command decoder 52. The specific information sent onto sink link 20 is transmitted to the memory controller.
 The memory controller then determines whether the load command has been issued to every memory (Step S2). The determination is made within the memory controller by comparing the slave ID just issued with the maximum slave ID in the memory system. If the load command has not been issued to all of the memories, the memory controller increments the slave ID by 1 (Step S3) and returns to Step S1. When it is determined in Step S2 that issuance of the load command is completed for all of the memories, the load operation of the specific information is completed.
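The loop of steps S1 to S3 above can be sketched as follows; the dictionary of ROM contents here stands in for the sink link transfer, and its values are placeholders.

```python
# Minimal sketch of the flow of FIG. 2: issue a load command per slave ID
# until the maximum slave ID is reached, collecting each memory's specific
# information as it arrives on the sink link.
def load_specific_info(rom_by_slave_id, max_slave_id):
    table = {}
    slave_id = 0
    while True:
        # Step S1: send the load command with slave_id; the addressed memory's
        # ROM portion returns its specific information via the sink link.
        table[slave_id] = rom_by_slave_id[slave_id]
        # Step S2: has the load command been issued to every memory?
        if slave_id == max_slave_id:
            break
        slave_id += 1          # Step S3: increment the slave ID and repeat
    return table

# Placeholder ROM contents (e.g. storage capacities in M bits):
info = load_specific_info({0: 16, 1: 64, 2: 16}, max_slave_id=2)
```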
 By issuing the load command to all of the memories included in the memory system, the memory controller manages the information specific to each memory by slave ID, and can thus flexibly control the memory system according to the specific information.
FIG. 3 shows a control method in the memory controller when the specific (inherent) information indicates the storage capacity of memory 2. As shown in FIG. 3, memory controller 1 includes a control unit 60 which controls transmission and reception of the request packet and the response packet, and a memory management table 70 for managing the information specific to each of the memories included in the memory system. Memory management table 70 includes an identifier area 72 storing the slave IDs of the respective memories included in the memory system, a storage capacity area 74 storing the storage capacity of each of the memories, and an address store area 76 storing the CPU address space allocated to each of the memories. The slave ID of each of memories 2-0 to 2-n, its storage capacity, and its CPU address space are coupled with each other to constitute one entry stored in management table 70. Accordingly, memory management table 70 includes entries EN0-ENn respectively corresponding to memories 2-0 to 2-n provided in the memory system.
 Memory 2-0 identified by slave ID#0 has storage capacity M#0, and CPU address space #0 is allocated to memory 2-0. By referring to memory management table 70, control unit 60 can identify the memory corresponding to an address area to which the processor requests access. Control unit 60 can thus easily transfer a request packet to the memory storing the information to which the processor (CPU) requests access.
 By the use of memory management table 70 shown in FIG. 3 and the control of specific information for each of the memories, the CPU address space can be allocated according to each storage capacity even if memories 2-0 to 2-n have different storage capacities. A memory system can thus be structured easily when memories of different capacities are utilized.
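The table and lookup above can be sketched as follows, under the assumption (for illustration only) that CPU address spaces are allocated as contiguous ranges in slave ID order, with 16 bits of data per address.

```python
# Sketch of memory management table 70: one entry per memory couples the
# slave ID, the storage capacity, and the allocated CPU address space.
def build_management_table(capacities_mbit, bits_per_address=16):
    table, base = [], 0
    for slave_id, cap in enumerate(capacities_mbit):
        n_addresses = cap * (1 << 20) // bits_per_address
        table.append({"slave_id": slave_id,
                      "capacity_mbit": cap,
                      "cpu_space": (base, base + n_addresses - 1)})
        base += n_addresses
    return table

def memory_for_address(table, cpu_address):
    """Control unit 60's lookup: which memory holds this CPU address?"""
    for entry in table:
        lo, hi = entry["cpu_space"]
        if lo <= cpu_address <= hi:
            return entry["slave_id"]
    raise ValueError("address not mapped")

# Memories of different storage capacities (16M bits and 64M bits):
tbl = build_management_table([16, 64])
```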
 A condition required for memories 2-0 to 2-n is that each of them has an input buffer (input interface) coupled to send link 10 and an output buffer (output interface) coupled to sink link 20. As long as the interface condition is fulfilled, a memory system can be structured employing memories of optional characteristics, and the memory controller can flexibly execute allocation of addresses and so on according to the characteristic of each memory (specific information).
 Writing of the specific information, including the storage capacity information, to ROM portion 56 is completed before shipment of memory 2. ROM portion 56 may be an EPROM (erasable programmable read only memory) or an EEPROM such as a flash memory, as long as it can store the information in a nonvolatile manner. A mask ROM to which the storage capacity information is written by a specific mask may also be utilized.
FIGS. 4A and 4B show, as one example, how the storage capacity information is output. Output buffer 58 is assumed to have sixteen data output nodes (terminals) d0-d15. It is not necessary for ROM portion 56 to have a 16-bit-wide output structure; ROM portion 56 may be 8 bits wide, with the 8-bit data from ROM portion 56 converted to 16-bit data in output buffer 58 (it is sufficient for the 8-bit data to be read out successively from ROM portion 56). As shown in FIG. 4A, output buffer 58 outputs 16-bit data d0-d15 to sink link 20. In this case, suppose that a storage capacity of 1M bits is represented by setting the least significant bit d0 to “1” and the remaining bits d1-d15 to “0”, as shown in FIG. 4B; then a maximum storage capacity of 63G bits can be represented in units of 1M bits. Accordingly, if the storage capacity of memory 2 is 16M bits, for example, the information indicating the storage capacity is as follows.
 d<15:0>=(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0)
 The memory controller can thus allocate CPU address space according to the storage capacity information. The allocation of the address space is appropriately adjusted depending on the number of bits one address includes. When one address stores 16 bits of data, a 16M bit storage capacity corresponds to an address space of addresses 0 to 2^20−1.
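The encoding above, in which d<15:0> gives the capacity as a binary number in units of 1M bits, can be sketched as follows; the function names are illustrative.

```python
# Sketch of the 16-bit capacity word of FIGS. 4A/4B: d<15:0> gives the
# capacity in units of 1M bits, so a 16M-bit memory is the value 16 (bit d4).
def encode_capacity(capacity_mbit):
    assert 0 < capacity_mbit < (1 << 16)
    return [(capacity_mbit >> i) & 1 for i in range(16)]  # index i = bit d_i

def decode_capacity(bits):
    return sum(b << i for i, b in enumerate(bits))

d = encode_capacity(16)   # 16M-bit memory: only bit d4 is set,
                          # matching d<15:0> = (0,...,0,1,0,0,0,0)
```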
 As described above, even if a memory system is structured employing memories of different storage capacities, CPU address space can easily be allocated to each of the memories in a memory controller by utilizing, as specific information, information about the storage capacity of the memory. Therefore, the memory system can be structured utilizing memories having a plurality of different storage capacities.
 [Second Embodiment]
FIG. 5A schematically shows a structure of a memory array 54a included in memory portion 54 in FIG. 1. In FIG. 5A, memory array 54a includes a plurality of memory cells MC arrayed in a matrix of rows and columns. In memory array 54a, a memory cell MC is designated by a row address and a column address. The row address designates a row in memory array 54a, and the column address designates a column in memory array 54a.
 As shown in FIG. 5B, even in memories having the same storage capacity, the bit number of the row address and that of the column address may differ. Memory array 54a is usually divided into a plurality of blocks, and the arrangement of the blocks selected at the same time often differs as well. In FIG. 5B, address arrangements are shown by way of example in which one memory is provided with row address bits RA0-RA11 and column address bits CA0-CA10, while the other memory is provided with row address bits RA0-RA12 and column address bits CA0-CA9. When the bit numbers of the row and column addresses differ from memory to memory, the memory controller must be informed of the address bit numbers.
FIG. 6 illustrates a structure in which specific information inherent to the memory is given to the memory controller, according to the second embodiment of the present invention. FIG. 6 also assumes, as one example, that the output buffer sends 16 bits d15-d0 onto sink link 20 as shown in FIG. 4A. In this structure, as shown in FIG. 6, the upper byte (8 bits) d15-d8 is used as an area storing information on the bit number of the row address, and the lower byte d7-d0 is used as an area storing information on the bit number of the column address. In this case, the bit numbers of the row and column addresses can each be represented in the range of 1 to 255. For example, if the row address is 10 bits and the column address is 8 bits, the information is represented as follows.
 d<15:0>=(0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0)
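The upper-byte/lower-byte format above can be sketched as follows; the function names are illustrative.

```python
# Sketch of the FIG. 6 format: the upper byte d15-d8 holds the row address
# bit number, the lower byte d7-d0 the column address bit number (1 to 255).
def encode_addr_bits(row_bits, col_bits):
    assert 1 <= row_bits <= 255 and 1 <= col_bits <= 255
    return (row_bits << 8) | col_bits

def decode_addr_bits(word):
    return word >> 8, word & 0xFF

word = encode_addr_bits(10, 8)   # row address = 10 bits, column address = 8 bits
# word == 0x0A08, i.e. d<15:0> = (0,0,0,0,1,0,1,0, 0,0,0,0,1,0,0,0)
```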
 The information on the address bit numbers of the memories is stored in a memory management table 80 as shown in FIG. 7. Memory management table 80 includes an identifier store area 82 which stores the slave IDs identifying the respective memories, and an address bit number store area 84 which stores the information on the row/column address bit numbers of the respective memories. Entries EN0-ENn are provided corresponding to the respective memories. Accordingly, the information on the row/column address bit numbers corresponding to each identifier is stored.
FIG. 8 shows a structure of a request packet. In FIG. 8, a request packet 90 includes a field 90a storing a slave ID designating a memory included in the memory system, a field 90b storing a command indicating a process to be executed, and fields 90c-90f storing addresses add0-add3. Fields 90a-90f are transferred sequentially in synchronization with the rise and fall of strobe srb. The bit number of each of fields 90a-90f is constant (determined according to the bit width of the send link). The addresses are distributed in the memory as follows.
FIG. 9 illustrates a structure of a main portion of the memory. In FIG. 9, the memory includes: an input register 95 successively storing the request packet in synchronization with strobe srb; command decoder 52 decoding a command and generating a necessary control signal when the memory is addressed, according to the slave ID and the command supplied from input register 95; an address relocation register 96 successively storing the address supplied from input register 95 under the control of command decoder 52; a row decoder 54b decoding the row address supplied from address relocation register 96 and selecting a row in memory array 54a; and a column decoder 54c decoding the column address from address relocation register 96 and selecting a column in memory array 54a.
 Address relocation register 96 is activated by command decoder 52, and stores the addresses supplied from input register 95 successively from, for example, the upper bits. A prescribed number of bits in address relocation register 96 (the row address bit number of the memory) is provided to row decoder 54b, and the residual address bits are supplied to column decoder 54c. Row decoder 54b and column decoder 54c are activated/deactivated by the control signal from command decoder 52 (the path is not shown).
 Accordingly, the use of address relocation register 96 allows addresses included in the request packet transmitted from the memory controller to be distributed easily to suitable row/column addresses in the memory.
 At the time of the page hit (read•of•open) operation, only the column address is provided. In this case, command decoder 52 causes address relocation register 96 to store an address supplied from input register 95 only in the portion corresponding to the column address.
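The splitting performed by address relocation register 96 can be sketched as follows; treating the received address as one integer whose upper bits form the row address is an assumption for illustration, matching the "successively from upper bits" description above.

```python
# Sketch of address relocation register 96: a combined address from the
# request packet is split so that the upper bits go to row decoder 54b and
# the residual bits to column decoder 54c, per this memory's own bit numbers.
def relocate_address(address, row_bits, col_bits):
    row = (address >> col_bits) & ((1 << row_bits) - 1)   # upper bits -> row
    col = address & ((1 << col_bits) - 1)                 # residual -> column
    return row, col

# A memory with a 12-bit row address and a 10-bit column address:
row, col = relocate_address(0b110000000001_0000000011, 12, 10)
# row selects row 0b110000000001, col selects column 0b0000000011
```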
 Using the structure shown in FIG. 9, a memory system can be structured employing memories of different address bit numbers.
FIG. 10 illustrates a structure of a main portion of a memory controller. In FIG. 10, memory controller 1 includes memory management table 80, a page memory 100 storing the addresses of the pages (rows) currently selected in the respective memories of the memory system, and a page hit determination portion 110 receiving an address from a processor (CPU) and determining, by referring to memory management table 80 and page memory 100, whether the selected page is addressed or not. The memory is assumed to be a dynamic random access memory (DRAM).
 Page memory 100 includes an identifier area 102 storing slave ID specifying each of the memories, and a page address area 104 storing a page address (row address) selected in each memory. Page hit determination portion 110 receives an address (CPU address) from the processor and determines which memory has been designated according to the CPU address (the second embodiment may be used), and retrieves information about row/column address bit number of the addressed memory referring to memory management table 80.
 Page hit determination portion 110 extracts a page address (row address) from the CPU address according to the retrieved information on the row address bit number, retrieves the corresponding page address from page memory 100, determines whether or not the retrieved page address coincides with the page address included in the CPU address, and activates/deactivates a page hit instruction signal PH according to the result of the determination. When a page hit is determined, a control unit (not shown) extracts the column address from the CPU address according to the information on the column address bit number, and generates a request packet.
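The determination above can be sketched as follows; the dictionaries standing in for table 80 and page memory 100, and the function name, are illustrative assumptions.

```python
# Minimal sketch of page hit determination portion 110: extract the page
# (row) address from the CPU address using the row/column bit numbers in
# table 80, and compare it with the page open in page memory 100.
def page_hit(cpu_address, slave_id, addr_bits_table, page_memory):
    row_bits, col_bits = addr_bits_table[slave_id]
    page = (cpu_address >> col_bits) & ((1 << row_bits) - 1)
    if page_memory.get(slave_id) == page:              # signal PH active
        col = cpu_address & ((1 << col_bits) - 1)      # column for the packet
        return True, col
    return False, page                                 # miss: page to open

addr_bits = {0: (12, 10)}          # memory 0: 12-bit row, 10-bit column
open_pages = {0: 5}                # page 5 is currently open in memory 0
hit, col = page_hit((5 << 10) | 7, 0, addr_bits, open_pages)
# hit is True and col == 7: a read-of-open packet can be generated directly
```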
 By storing, in memory management table 80, the information on the row/column address bit numbers of the respective memories, a page hit/miss can easily be determined even if the address structures of the memories included in the memory system differ. Based on the result of the determination, the address necessary for generating a packet can easily be produced from the CPU address.
 Accordingly, even if the memories have different address configurations, page mode access (open-row-read/write) can be performed according to the address configurations, and the memory access can thus be achieved efficiently.
 [Third Embodiment]
FIG. 11 shows a structure of a main portion of a memory according to the third embodiment of the invention. In FIG. 11, memory 2 includes an input buffer 120 receiving a request packet on the send link, banks #B0-#Bm commonly coupled to an internal data bus 121 and independently driven to the active/inactive state, and an output buffer 122 coupled to internal data bus 121 and outputting, onto the sink link as a response packet, the information supplied via internal data bus 121.
 Banks #B0-#Bm are selectively driven to the active/inactive state independently of each other according to a bank address under the control of a command decoder (not shown). The number of banks #B0-#Bm included in memory 2 is appropriately determined. Memory 2 includes a ROM portion 123 which stores the number of banks #B0-#Bm. ROM portion 123 corresponds to ROM portion 56 in FIG. 1. The information on the bank number stored in ROM portion 123 is transferred onto the sink link via output buffer 122, and transmitted to the memory controller.
FIG. 12 shows a format of the information on the bank number output from output buffer 122. In FIG. 12, output buffer 122 transmits 16 bits d15-d0 onto the sink link. In this case, the number of the banks can be designated in the range of 1 to 65535 as shown in FIG. 12.
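A trivial sketch of packing the bank count into the 16-bit field d15-d0 described above. The list-of-bits representation and the function names are assumptions for illustration; the patent does not specify the packet framing around this field.

```python
# Illustrative packing/unpacking of the bank number into bits d15..d0
# (FIG. 12). Values follow the stated range of 1 to 65535.

def encode_bank_number(n):
    """Return the 16 bits d15..d0 (most significant first)."""
    if not 1 <= n <= 0xFFFF:
        raise ValueError("bank number out of the 16-bit range")
    return [(n >> i) & 1 for i in range(15, -1, -1)]

def decode_bank_number(bits):
    """Reassemble the bank number from bits d15..d0."""
    value = 0
    for b in bits:          # bits arrive d15 first
        value = (value << 1) | b
    return value
```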
FIG. 13 shows a correspondence between a CPU address and an address output by the memory controller to the memory. As shown in FIG. 13, the CPU address includes a memory selection address designating a memory included in the memory system, a bank address specifying a bank included in the memory, a page address specifying a page (row) included in the bank, and a column address specifying a column on the page. The bit number of the bank address differs among memories having different numbers of banks, and the bit number of the page address varies accordingly. The memory selection address corresponds to the slave ID.
FIG. 14 shows, as one example, a structure of a memory management table 130 included in the memory controller. As shown in FIG. 14, memory management table 130 included in memory controller 1 includes an identifier area 132 storing slave IDs (slave ID#0-slave ID#n) specifying each of the memories, and a bank number information store area 134 storing bank number information #0-#n for each of the memories. The slave ID specifying a memory and the bank number information indicating the number of the banks included in that memory are stored in one entry EN (EN0-ENn). The memory controller manages the bank number information of each memory by referring to memory management table 130.
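The address decomposition of FIG. 13, driven by the per-memory bank count of the table, can be sketched as follows. The fixed column and page widths, the dictionary standing in for memory management table 130, and every name here are assumptions for illustration only; in the patent the widths come from memory-specific information.

```python
# Hypothetical sketch: derive the bank address bit number from the bank
# count recorded per slave ID, then split a CPU address into
# (bank, page, column) per the FIG. 13 layout.

COL_BITS = 8    # assumed fixed column address width
PAGE_BITS = 12  # assumed page (row) address width

def bank_bits(num_banks):
    """Bit number of the bank address needed for a given bank count."""
    return (num_banks - 1).bit_length()   # e.g. 4 banks -> 2 bits

def decompose(cpu_addr, slave_id, bank_table):
    """Split a CPU address into (bank, page, column) for one memory."""
    nbits = bank_bits(bank_table[slave_id])
    column = cpu_addr & ((1 << COL_BITS) - 1)
    page = (cpu_addr >> COL_BITS) & ((1 << PAGE_BITS) - 1)
    bank = (cpu_addr >> (COL_BITS + PAGE_BITS)) & ((1 << nbits) - 1)
    return bank, page, column
```

For example, with memory #0 holding 4 banks, the address 0x234567 splits into bank 2, page 0x345, column 0x67 under these assumed widths.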
 The memory executes an operation of memory cell selection according to the bank address, the page address, and the column address supplied from the memory controller. As the structure of the memory, the structure of FIG. 9 may be utilized, and the addresses supplied from input register 95 are divided into the bank address, the page address, and the column address by address relocation register 96.
FIG. 15 shows a structure of a page memory. In FIG. 15, a page memory 140 includes an identifier area 142 storing slave IDs, a bank store area 144 specifying the banks provided for each slave ID, and a page address area 146 storing a page address indicating a selected page in the respective banks. The memory controller secures an area for storing a page address of each bank in page memory 140 according to the bank number information stored in memory management table 130. In FIG. 15, banks #0, #1 and #2 relating to slave ID #0 designating memory 2-0, as well as page addresses #0, #1 and #2 stored for the respective banks, are representatively shown.
 As a structure of the page hit determination portion, a structure similar to that shown in FIG. 10 may be utilized, with page memory 140 shown in FIG. 15 used in place of page memory 100. In this case, the operation of the page hit determination portion is slightly different. Page hit determination portion 110 identifies the bank address bits in the CPU address according to the bank number information stored in memory management table 130, reads the corresponding page address from page memory 140 using the memory selection address and the bank address as reference addresses, and extracts a page address from the CPU address. A page hit/miss is determined by deciding whether or not the page address of the CPU address coincides with the page address read from page memory 140.
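The bank-aware page hit check just described amounts to keying the page memory by the (slave ID, bank) pair. A minimal sketch, with a dictionary standing in for page memory 140 and all names assumed:

```python
# Illustrative bank-aware page hit check (FIG. 15 style page memory).
# The dict maps (slave_id, bank) -> last open page address of that bank.

def page_hit_banked(slave_id, bank, page, page_memory):
    """Compare the requested page with the open page of (slave, bank)."""
    key = (slave_id, bank)
    hit = page_memory.get(key) == page
    page_memory[key] = page   # the accessed page becomes the open page
    return hit
```

Because each bank has its own entry, an access to bank #2 does not disturb the open page recorded for bank #1 of the same memory.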
 When the request packet is sent, it is not necessary for the memory controller to distinguish the bank address from the page address. The reason is that the bank address and the page address are separated in memory 2.
 As described above, even if the memories included in the memory system have different numbers of banks, the memory controller can determine a page hit: a storage area storing information on the number of banks is provided in each memory, and this bank number information is transferred to the memory controller, so that the memory controller accurately manages the bank numbers of the respective memories. An efficient access can thus be achieved.
 If the page memories 100 and 140 are constituted by a content addressable memory (CAM), the page hit determination portion need not discriminate the bank address from the page address. It is sufficient for the page hit determination portion simply to extract the addresses including the memory selection address, the bank address, and the page address from the CPU address. However, the location of the column address should be the same even if the numbers of the banks are different.
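The CAM variant can be sketched by treating everything above the fixed-position column field as one opaque tag, so no bank/page boundary is needed. A Python set stands in for the CAM here; the tag width, names, and the omitted entry-replacement policy are all assumptions for illustration.

```python
# Sketch of the CAM-style page memory: the concatenated
# (memory selection, bank, page) fields form a single match tag, so the
# controller never needs to know where the bank field ends.
# Entry eviction/replacement is omitted for brevity.

COLUMN_BITS = 8  # the column field must sit at a fixed position

def cam_page_hit(cpu_addr, cam):
    """Hit iff the tag above the column address is already held open."""
    tag = cpu_addr >> COLUMN_BITS   # everything above the column address
    hit = tag in cam
    cam.add(tag)
    return hit
```

Two addresses that differ only in their column bits carry the same tag and therefore hit, regardless of how the tag internally divides into bank and page fields.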
 As for the memory, the memory may be any memory in which random access is possible even if page mode access cannot be executed, such as a dynamic random access memory, or a memory device of a nonvolatile type (for example, a flash memory). A necessary condition of the memory is that it has an input port (input register or input buffer) coupled to a send link and an output port (output buffer or output register) coupled to a sink link.
 As for the memory system, the present invention can be applied to any memory system in which information is transferred according to a command from the memory controller, even if the system does not employ a bus structure including a send link and a sink link.
 According to the present invention, a memory can be incorporated into a memory system as long as the memory is provided with the input/output interface of the memory system, owing to the storage means provided for storing information specific to the memory. Therefore, a memory system can be structured even if memories of different standards are employed. In other words, a memory system can be structured by selecting memories suitable for the application of the memory system.
 The memory controller can easily achieve allocation of the CPU address space, as well as page hit/miss determination according to the characteristics of the memories, by utilizing information on the address bit number, the storage capacity, and the bank number as information specific to the memories.
 Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6888843 *||Sep 17, 1999||May 3, 2005||Advanced Micro Devices, Inc.||Response virtual channel for handling all responses|
|US6938094||Sep 17, 1999||Aug 30, 2005||Advanced Micro Devices, Inc.||Virtual channels and corresponding buffer allocations for deadlock-free computer system operation|
|US6950438||Aug 17, 2000||Sep 27, 2005||Advanced Micro Devices, Inc.||System and method for implementing a separate virtual channel for posted requests in a multiprocessor computer system|
|U.S. Classification||711/105, 711/154, 711/E12.088|
|International Classification||G11C11/401, G06F12/06, G11C11/407|
|Feb 11, 1997||AS||Assignment|
Owner name: MITSUBISHI DENKI KABUSHIKI KAISHA, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WATANBE, NAOYA;YAMAZAKI, AKIRA;REEL/FRAME:012097/0713
Effective date: 19970131
|Jul 13, 2005||FPAY||Fee payment|
Year of fee payment: 4
|Sep 14, 2009||REMI||Maintenance fee reminder mailed|
|Feb 5, 2010||REIN||Reinstatement after maintenance fee payment confirmed|
|Mar 30, 2010||FP||Expired due to failure to pay maintenance fee|
Effective date: 20100205
|Nov 13, 2010||AS||Assignment|
Owner name: DRAM MEMTECH LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RENASAS ELECTRONICS CORPORATION;REEL/FRAME:025358/0749
Effective date: 20101111
|Jan 3, 2011||PRDP||Patent reinstated due to the acceptance of a late maintenance fee|
Effective date: 20110106
|Jan 6, 2011||SULP||Surcharge for late payment|
|Jan 6, 2011||FPAY||Fee payment|
Year of fee payment: 8
|Jan 31, 2011||AS||Assignment|
Owner name: DRAM MEMTECH LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RENESAS ELECTRONICS CORPORATION;REEL/FRAME:025723/0127
Effective date: 20101111
|Mar 5, 2011||AS||Assignment|
Owner name: DRAM MEMORY TECHNOLOGIES LLC, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DRAM MEMTECH LLC;REEL/FRAME:025904/0828
Effective date: 20110201
|Sep 13, 2013||REMI||Maintenance fee reminder mailed|
|Dec 27, 2013||FPAY||Fee payment|
Year of fee payment: 12
|Dec 27, 2013||SULP||Surcharge for late payment|
Year of fee payment: 11