|Publication number||US6779100 B1|
|Application number||US 09/465,900|
|Publication date||Aug 17, 2004|
|Filing date||Dec 17, 1999|
|Priority date||Dec 17, 1999|
|Inventors||Paul Stanton Keltcher, Stephen Eric Richardson|
|Original Assignee||Hewlett-Packard Development Company, L.P.|
1. Field of the Invention
This invention relates generally to computer systems that store instructions in memory in compressed and uncompressed form, and more particularly to computer systems that store instructions in compressed form in a main memory and store corresponding uncompressed instructions in an instruction cache.
2. Description of the Related Art
Computer systems store instructions in a main memory. Main memories tend to have a high capacity, but they also tend to be relatively slow to access. Also, the instructions can be stored in compressed form in order to increase the capacity of the main memory. However, the compression also slows access time because the compressed instructions must be decompressed before they can be processed.
Therefore, a faster cache memory is often employed to store certain frequently-used instructions. Instructions are stored in the cache memory in uncompressed form so that they can be accessed without being delayed by decompression pre-processing. However, cache memories generally have a more limited capacity and may be able to hold only some of the instructions of an ordered instruction set. Conventionally, when an instruction is called for that is not present in the cache, a cache miss operation (or miss to main memory operation) is executed and the instruction is accessed in compressed form from the main memory based on its expected address in the cache memory.
Instructions are typically grouped into instruction blocks and instruction pages. A program consists of an ordered list of instructions having virtual addresses 0 to n. In a typical RISC system, where each instruction is 32 bits long, a computer will group these instructions into instruction pages of 1024 instructions each. Typically, the instructions are also grouped into blocks so that there are 8 instructions per instruction block and 128 instruction blocks per page.
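The grouping described above can be sketched as a small address-decomposition routine. This is an illustrative example, not part of the patent: the function name and return convention are assumptions, but the constants (32-bit instructions, 8 instructions per block, 128 blocks per page, 1024 instructions per page) come directly from the text.

```python
# Sketch of the typical RISC grouping described above: 32-bit (4-byte)
# instructions, 8 instructions per block, 128 blocks per page.
INSTRUCTIONS_PER_BLOCK = 8
BLOCKS_PER_PAGE = 128
INSTRUCTIONS_PER_PAGE = INSTRUCTIONS_PER_BLOCK * BLOCKS_PER_PAGE  # 1024

def locate(instruction_index):
    """Map a virtual instruction index to (page, block-within-page, offset)."""
    page = instruction_index // INSTRUCTIONS_PER_PAGE
    block = (instruction_index % INSTRUCTIONS_PER_PAGE) // INSTRUCTIONS_PER_BLOCK
    offset = instruction_index % INSTRUCTIONS_PER_BLOCK
    return page, block, offset

assert locate(0) == (0, 0, 0)
assert locate(1024) == (1, 0, 0)    # first instruction of the second page
assert locate(1030) == (1, 0, 6)    # seventh instruction of that block
```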
In most computers, instruction blocks have a fixed length, regardless of whether the instructions constituting the instruction block have a fixed instruction length.
An exemplary prior art computer system 100 is shown in FIG. 1. The computer system 100 includes main memory 102, instruction cache 104, instruction decompressor 106, address translation look-up table ATLT 108, and central processing unit (CPU) 112. As shown in FIG. 1, ordered, compressed instructions CI0 to CIX are stored in main memory 102 so that they are scattered across main memory lines MA0 to MAY. As explained in detail below, ATLT 108 controls the storage of compressed instructions CI so that these instructions take up as little of the available main memory space as possible.
Because the compression is approximately 50 percent, two compressed instruction blocks CI will usually be stored on one main memory line MA. For example, instruction blocks CI0 and CI1 are stored on a main memory line MA0.
However, three or more compressed instruction blocks CI will be stored on a single main memory line MA if they will fit. For example, instruction blocks CI2, CI3 and CI4 are stored on main memory line MA1. Also, sometimes it is only possible to fit a single instruction block CI on a main memory line MA. For example, instruction block CI5 is the only instruction block stored at main memory line MA4. Note that no compressed instruction blocks CI are stored at MA2 or MA3 because these lines are already being used to store other, unrelated data.
Before the compressed instruction blocks CI0 to CIX are to be used by CPU 112, they are first translated into uncompressed form by instruction decompressor 106 and stored in instruction cache 104 as corresponding uncompressed instruction blocks having physical addresses UI0 to UIX. As shown in FIG. 1, the uncompressed instruction blocks UI0 to UIX are stored in instruction cache 104 so that each uncompressed instruction block UI is stored on a single line I$ of instruction cache 104. In order to simplify this example, UI0 to UIX are stored in order at consecutive cache lines I$0 to I$X.
When a miss to main memory 102, such as an instruction cache miss, is executed in prior art computer system 100, the missing instruction address UI is translated by address translation look-up table 108 into a main memory line address MA, so that the uncompressed instruction block UIn, which is supposed to be stored at instruction cache line address I$n, can be found in compressed form CIn at the corresponding main memory address MAnn (where n is any one of the ordered instruction blocks from 0 to X, and nn depends on how the compressed instructions CI are scattered through the main memory 102). For example, an instruction cache miss of instruction block UI5 involves sending UI5 to address translation look-up table 108. Address translation look-up table 108 translates the uncompressed address UI5 into the corresponding main memory compressed address MA4 because that is where the corresponding compressed instruction block CI5 is stored in main memory 102.
Translator 108 employs an address look-up table because there is no predictable correlation between addresses in the uncompressed space of instruction cache 104 and corresponding addresses of the compressed space in main memory 102. Address look-up table 108 takes space and time to initialize, maintain and utilize. This can result in complicated and expensive computer hardware and relatively high silicon cost.
According to the present invention, there is an “algebraic” relationship between memory line addresses for instruction blocks stored in uncompressed form (for example, in an instruction cache) and corresponding memory line addresses for corresponding instruction blocks stored in compressed form (for example, in a main memory). Preferably, some set integral number n of compressed instruction blocks is stored on each line of main memory, with the instruction blocks being placed in consecutive order on consecutive memory lines. In this way, line addresses in uncompressed memory space will be easy to translate to compressed addresses because uncompressed addresses will be proportional to the corresponding compressed memory addresses. Even more preferably, two instructions are stored on each consecutive line of compressed memory, because it is relatively easy to reliably utilize this two-instructions-per-line scheme, even when the compression is as low as 50 percent.
As stated above, there is an “algebraic” relationship between memory line addresses in uncompressed space and corresponding memory line addresses in compressed space. As used herein, “algebraic” refers to any mathematical function or combination of mathematical functions commonly implemented on computers. Such mathematical functions include but are not limited to addition, subtraction, multiplication, division, rounding to an integer value, exponents, factorials, logarithms, trigonometric functions and so on.
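The preferred layout (n consecutive compressed blocks per compressed memory line) makes the translation a one-line computation. A minimal sketch, with illustrative names; `ma0` (the base compressed line) and `n` are the parameters of the scheme:

```python
# "Algebraic" uncompressed-to-compressed line translation for the
# preferred layout: n compressed blocks per line, in consecutive order.
def translate(ui_line, ma0, n):
    """Uncompressed cache line address -> compressed main-memory line."""
    return ma0 + ui_line // n   # floor division implements round-down

# With two blocks per line starting at MA0 = 0, UI4 and UI5 both map
# to line 2, matching a pairing of CI4/CI5 on MA2:
assert translate(4, 0, 2) == 2
assert translate(5, 0, 2) == 2
assert translate(0, 0, 2) == 0
```

No per-block table is needed: the translator stores only the base address and the constant n.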
It is noted that, after compression, instruction blocks often vary in length, and the predetermined number n of instruction blocks may not fit on a single line of compressed memory space when dealing with a longer-than-expected instruction block. One solution to this problem is to set n sufficiently low so that the predetermined number of instruction blocks n will always fit in the compressed memory space. The drawback to the solution is that decreasing n will increase the amount of compressed memory space that is required. Also, n cannot be less than two.
In at least some embodiments of the present invention, a different solution is used. More particularly, a flag and a pointer are stored in compressed memory in the location which had been allocated for the longer-than-expected instruction block. The pointer points to another line address of the compressed memory where the longer-than-expected instruction block has been alternatively stored, where it is out of the way of the set of compressed addresses that follow an algebraic pattern for ease of uncompressed-to-compressed address translation.
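The flag-and-pointer fallback can be sketched as follows. Everything concrete here (16-byte slots, a 0xFF flag byte, 2-byte spill-line addresses, the function names) is an illustrative assumption, not the patent's encoding:

```python
# Hedged sketch of the flag-and-pointer storage scheme: pack ordered
# compressed blocks two per line; any block that overflows its slot is
# replaced by a flagged pointer, and the block itself is spilled to a
# dedicated line past the regular region.
SLOT_BYTES = 16
FLAG = b"\xff"   # assumed marker distinguishing pointers from code

def pack(blocks):
    regular = [blocks[i:i + 2] for i in range(0, len(blocks), 2)]
    lines, spills = [], []
    for pair in regular:
        line = []
        for block in pair:
            if len(block) <= SLOT_BYTES:
                line.append(block)
            else:
                target = len(regular) + len(spills)  # next free spill line
                line.append(FLAG + target.to_bytes(2, "big"))
                spills.append([block])
        lines.append(line)
    return lines + spills

packed = pack([b"a", b"x" * 20, b"c", b"d"])
assert packed[0] == [b"a", FLAG + (2).to_bytes(2, "big")]  # pointer slot
assert packed[2] == [b"x" * 20]                            # spilled block
```

A real implementation would need an encoding in which the flag cannot be confused with compressed instruction data; that detail is elided here.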
The objects, advantages and features of the present invention will become more readily apparent from the following detailed description, when taken together with the accompanying drawing, in which:
FIG. 1 is a block diagram of a prior art computer system having corresponding instruction blocks stored in compressed and uncompressed memory space;
FIG. 2 is a block diagram of a first embodiment of a computer system according to the present invention having corresponding instruction blocks stored in compressed and uncompressed memory space;
FIG. 3 is a block diagram of a second embodiment of a computer system according to the present invention having corresponding instruction blocks stored in compressed and uncompressed memory space;
FIG. 4 is a block diagram of a third embodiment of a computer system according to the present invention having corresponding instruction blocks stored in compressed and uncompressed memory space; and
FIG. 5 is a block diagram of a fourth embodiment of a computer system according to the present invention having corresponding instruction blocks stored in compressed and uncompressed memory space.
A first embodiment, computer system 200, according to the present invention will now be described with reference to FIG. 2. Computer system 200 includes main memory 202, instruction cache 204, instruction decompressor 206, algebraic address translator 208 and CPU 212.
Main memory 202 may be constructed as any sort of data hierarchy which is found above an instruction cache. Main memory 202 is divided into memory lines. In this example, each line of main memory 202 is 32 bytes long. The addresses of the relevant lines of main memory, where compressed instruction blocks CI0 . . . X are stored, are MA0, MA1, MA2, MA3, . . . , MA(X−1)/2. Other lines of main memory 202 may contain other unrelated data. Main memory 202 is referred to as compressed memory or compressed space, because the instruction blocks are stored here in compressed form.
Instruction cache 204 is also divided into memory lines of 32 bytes apiece. In this example, uncompressed instruction blocks UI0 . . . X are respectively stored in instruction cache 204 at line addresses I$0 . . . X in uncompressed form. It is noted that uncompressed instruction block physical addresses UI0 . . . X can be mapped to cache lines I$ in other, more complicated ways, in which case a look-up table may be used to correlate uncompressed memory space with lines I$ of the cache. Uncompressed instruction blocks UI0 . . . X can be quickly accessed by CPU 212 for use in processing instructions during computer operations. In this example, each uncompressed instruction block UI0 . . . X has a fixed length of 32 bytes, and therefore completely fills one 32-byte-long line I$ of cache memory.
As further explained below, the compressed instruction blocks CI0 . . . X are first stored in main memory 202. When it is anticipated that CPU 212 will need the instructions, compressed instructions CI are sent to instruction decompressor 206, where they are converted to uncompressed instruction blocks UI. Uncompressed instruction blocks UI are sent on to instruction cache 204.
The compressed ordered instruction blocks CI0 through CIX are stored so that two consecutive, compressed instruction blocks CI occupy each line of compressed memory space. For example, CI0 and CI1 occupy MA0, CI2 and CI3 occupy MA1, and so on down to CI(X−1) and CIX occupying MA(X−1)/2.
In this embodiment, the first instruction block on a compressed memory line MA is allocated the first 16 bytes of the 32 byte line. The following instruction is allocated the second 16 bytes of the 32 byte line. As shown in FIG. 2, compressed instruction block CI3 occupies its full 16 byte allocation. The other compressed instruction blocks do not quite fill their 16 byte allocations.
This allocation scheme makes especially efficient use of compressed memory space when compression is slightly better than 50%. This is because under 50 percent compression, an instruction block that is 32 bytes or less long will generally be compressed down to 16 bytes or less, which corresponds to one-half of a line of compressed memory space. If the compression is greater than 50 percent, then a compressed instruction block will not fit in one-half of a line of compressed memory space (some fixes for this potential problem are discussed below). On the other hand, if the compression is much less than 50 percent, then the compressed instruction blocks CI will take only a small part of their 16 byte allocations. This is not an efficient use of compressed memory space, and therefore, it might be preferable to put more than two compressed instruction blocks CI on each line of compressed memory under these conditions, as described below in more detail in connection with the embodiment of FIG. 3.
Once the compressed instruction blocks CI are stored in main memory 202, it is a straightforward process to decompress the instructions into uncompressed form UI so that the uncompressed instruction blocks are stored on consecutive lines I$0 to I$X in instruction cache 204. A large advantage of the above-described main memory 202 and instruction cache 204 is the simplification of translation of line addresses of corresponding instruction blocks from uncompressed memory space UI to compressed memory space MA. This simplified translation will be discussed below in connection with an instruction cache miss operation.
The instruction cache miss operation is initiated when instruction cache memory sends to algebraic address translator 208 the uncompressed space address UI corresponding to one of the uncompressed instruction blocks in instruction cache 204. Algebraic address translator 208 translates this address into a compressed space address of the line of main memory 202 that holds the compressed version of the corresponding instruction block. Algebraic address translator 208 performs this translation by utilizing the algebraic relationship between uncompressed space addresses and compressed space addresses.
In this example, the algebraic relationship for any uncompressed space line address UIi to its corresponding compressed space line address MAj is:

MAj = MA0 + Round(UIi/2)  (1)

where "Round" is a mathematical function that rounds down to the next lowest integer. Algebraic address translator 208 only needs to know the value of MA0, UIi and the algebraic relationship. Algebraic address translator 208 will therefore generally require much less area and time than a look-up table (for example, look-up table 108 in FIG. 1) that would store individual correlations for a large multiplicity of instructions.
FIG. 3 is a second embodiment according to the present invention. Computer system 300 includes main memory 302, instruction cache 304, instruction decompressor 306, algebraic address translator 308 and CPU 312. Instruction cache 304, instruction decompressor 306 and CPU 312 operate similarly to their corresponding elements in previously-described computer system 200 in FIG. 2.
However, in computer system 300 the instruction compression is slightly better than 25 percent. Therefore, four instruction blocks can be packed onto each 32 byte line MA of main memory 302. Also, the instruction blocks are stored in reverse order in main memory 302 in order to illustrate some of the possible variety that the algebraic relationship between uncompressed memory space and compressed memory space can have.
In computer system 300, algebraic address translator 308 controls the storage of compressed instruction blocks CI in main memory 302 so that the ordered compressed instruction blocks CI0 to CIX are stored four per line in reverse order on consecutive memory lines MA0 to MA(X−1)/4. Accordingly, each 32 byte maximum uncompressed instruction block UI is allocated 8 bytes of compressed memory space in main memory 302.
Again in this example, compressed instruction blocks CI are stored in the compressed memory space and uncompressed instruction blocks UI are stored in the uncompressed memory space so that there is an algebraic correlation between addresses of instruction cache 304 and corresponding addresses of main memory 302. For computer system 300, this correlation is:

MAj = MA0 + Round((X − UIi)/4)  (2)

Equation (2) is the correlation applied by algebraic address translator 308 during an instruction cache miss operation in order to determine an address MAj in compressed memory space based on an address UIi in uncompressed memory space.
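One hypothetical reading of this reverse-order, four-per-line layout is sketched below. The exact form of the correlation is an assumption inferred from the text: block UIi lands on line MA0 + floor((X − UIi)/4), where X (the index of the last instruction block) is taken as a parameter:

```python
# Assumed sketch of the FIG. 3 layout: four compressed blocks per line,
# stored in reverse order, so the last blocks occupy the first line.
def translate_reverse(ui_line, last_block, ma0=0):
    """Uncompressed cache line address -> compressed main-memory line."""
    return ma0 + (last_block - ui_line) // 4

# With X = 7, the last four blocks CI4..CI7 share line MA0 and the
# first four blocks CI0..CI3 share line MA1:
assert translate_reverse(7, 7) == 0
assert translate_reverse(4, 7) == 0
assert translate_reverse(0, 7) == 1
```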
FIG. 4 shows a third embodiment, computer system 400, according to the present invention. Computer system 400 includes main memory 402, instruction cache 404, instruction decompressor 406, algebraic address translator (AAT) 408 and CPU 412. Computer system 400 is similar to above-described computer system 200 (FIG. 2) in that it has approximately 50 percent compression and stores two compressed instruction blocks CI on each line of compressed memory space in main memory 402.
However, in computer system 400, AAT 408 controls storage of compressed instruction blocks CI in main memory 402 so that the second instruction block on a line of main memory 402 starts directly after and abuts the previous compressed instruction. For example, CI1 is stored immediately after CI0. This abutting storage scheme can help ensure that two compressed instruction blocks will fit on a single main memory line.
For example, as explained in connection with computer system 200, under that scheme every 32 byte uncompressed instruction block UI had to be compressed down to a 16 byte maximum in main memory 202. On the other hand, in computer system 400, there is a less rigorous requirement that each pair of two consecutive instructions be compressed down to a 32 byte maximum (corresponding to one line of main memory 402). To illustrate this added flexibility, compressed instruction block CI2 (shown in FIG. 4) takes more than 16 bytes of main memory 402 line MA1. However, because CI3 takes up less than 16 bytes, both of these instruction blocks can fit on line MA1.
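The relaxed constraint of FIG. 4 amounts to a pairwise budget check: the split between the two blocks is free because the second abuts the first, so only their combined size matters. A minimal sketch (names illustrative):

```python
# Pairwise fit test for the abutting layout of FIG. 4: a pair of
# consecutive compressed blocks need only fit a 32-byte line together;
# neither block is individually limited to 16 bytes.
LINE_BYTES = 32

def pair_fits(block_a, block_b):
    return len(block_a) + len(block_b) <= LINE_BYTES

# CI2 may exceed 16 bytes as long as CI3 is short enough:
assert pair_fits(b"\x00" * 20, b"\x00" * 12)       # 20 + 12 = 32, fits
assert not pair_fits(b"\x00" * 20, b"\x00" * 16)   # 20 + 16 = 36, too big
```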
FIG. 5 shows a fourth embodiment, computer system 500, according to the present invention. Computer system 500 includes main memory 502, instruction cache 504, instruction decompressor 506, algebraic address translator 508 and CPU 512. Computer system 500 further includes multiplexer 514 and fixup detector 516. Computer system 500 is similar to above-described computer system 200 (FIG. 2) in that it has approximately 50 percent compression. This computer system generally stores a first compressed instruction block CI within the first 16 bytes of each memory line MA of compressed memory and a following compressed instruction block within the last 16 bytes of each compressed memory line.
However, as stated above, some of the compressed instruction blocks may require more than 16 bytes of compressed storage, especially if the compression factor is only 50 percent or slightly greater. Computer system 500 handles this problem by using pointers P1, P(X−1) in place of the overly-long compressed instruction blocks CI1, CI(X−1) in compressed memory 502. The pointers P1 and P(X−1) each include some flag data to indicate that they are indeed pointers, as opposed to compressed instructions. The pointers P1 and P(X−1) further respectively include line addresses MAP and MAP+1 for other lines in compressed memory 502 where compressed instruction blocks CI1 and CI(X−1) have been alternatively stored.
Because entire lines of compressed memory MAP and MAP+1 have respectively been allocated for overly-long instruction blocks CI1 and CI(X−1), these instructions do not need to be squeezed into the 16 byte allocations set aside for the bulk of the compressed instruction blocks CI. Compressed memory addresses MAP and MAP+1 are chosen by memory controller 510 so that they are out of the way of the other compressed instructions stored on lines MA0 . . . MA(X−1)/2, and out of the way of other, unrelated data stored in compressed memory 502.
Now an instruction cache miss operation for uncompressed instruction block UI1 will be described. Uncompressed instruction block UI1 is stored at instruction cache line address I$1. Address UI1 is sent to address translator 508, which applies the algebraic correlation of equation (1) to determine corresponding compressed memory line address MA0. Address MA0 is sent through multiplexer 514, and line MA0 of compressed memory 502 is accessed to get compressed instruction block CI1.
However, pointer P1 is stored at line MA0 in the place of CI1, because CI1 is one of the overly-long instruction blocks. Compressed memory 502 sends pointer P1 out to instruction decompressor 506 and fixup detector 516. Fixup detector 516 recognizes that P1 is a pointer, rather than a compressed instruction based on the flag data present in pointer P1. Fixup detector 516 responds by sending compressed memory address MAP, which corresponds to the compressed memory address data in pointer P1, to multiplexer 514. Multiplexer 514 sends along the corrected address MAP, so that compressed memory line MAP is accessed to finally get to the overly-long compressed instruction block CI1.
Then, compressed instruction block CI1 is sent to fixup detector 516 and instruction decompressor 506. Fixup detector 516 ignores compressed instruction block CI1 because this compressed instruction block will not have any of the flag data that fixup detector 516 looks for. However, instruction decompressor 506 will be able to decompress compressed instruction block CI1 into corresponding uncompressed instruction block UI1. Uncompressed instruction block UI1 is then sent along to instruction cache 504. In this way, an efficient correction can be effected when a relatively small number of overly-long instruction lines do not fit within the algebraic scheme for packing the compressed memory space with compressed instructions.
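The miss path just described can be walked through in a short sketch. The flag byte, address width, and data structures are illustrative assumptions; the control flow (algebraic translation, flag check by the fixup detector, one level of indirection through the multiplexer) follows the description above:

```python
# Illustrative walk-through of the FIG. 5 miss path. Memory lines hold
# either two compressed slots or one spilled block; a slot beginning
# with the flag byte is treated as a pointer to a spill line.
FLAG = b"\xff"   # assumed pointer-flag byte

def fetch_compressed(memory, ui_line, ma0=0):
    """Resolve an uncompressed line address to its compressed block,
    following a pointer if the fixup-detector flag is present."""
    line = memory[ma0 + ui_line // 2]      # algebraic translation, eq. (1)
    slot = line[ui_line % 2]               # first or second 16-byte slot
    if slot.startswith(FLAG):              # fixup detector fires
        spill_line = int.from_bytes(slot[1:], "big")
        return memory[spill_line][0]       # whole spill line holds the block
    return slot

memory = {
    0: [b"ci0", FLAG + (9).to_bytes(2, "big")],  # CI1 spilled to line 9
    9: [b"ci1-long-compressed-block"],
}
assert fetch_compressed(memory, 0) == b"ci0"
assert fetch_compressed(memory, 1) == b"ci1-long-compressed-block"
```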
Although the above-described, preferred embodiments generally show uncompressed instruction blocks on consecutive lines of the instruction cache, and compressed instruction blocks on consecutive lines of the compressed memory, this consecutive allocation may not be necessary. For example, suppose the algebraic correlation between the uncompressed memory lines and the compressed memory lines is:

MAj = MA0 + (UIi)^3  (3)

In this case, the program will not be stored on consecutive compressed memory lines, but rather will be spaced out based on the cubic exponential relationship given in equation (3).
Certain embodiments have been described above. It is likely that there are modifications and improvements to these embodiments which are within the literal scope or are equivalents of the claims which follow.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US6216213 *||Jun 7, 1996||Apr 10, 2001||Motorola, Inc.||Method and apparatus for compression, decompression, and execution of program code|
|US6343354 *||Apr 19, 2000||Jan 29, 2002||Motorola Inc.||Method and apparatus for compression, decompression, and execution of program code|
|US6353871 *||Feb 22, 1999||Mar 5, 2002||International Business Machines Corporation||Directory cache for indirectly addressed main memory|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7065606 *||Sep 4, 2003||Jun 20, 2006||Lsi Logic Corporation||Controller architecture for memory mapping|
|US7069523||Dec 13, 2002||Jun 27, 2006||Lsi Logic Corporation||Automated selection and placement of memory during design of an integrated circuit|
|US7343471 *||Jan 12, 2005||Mar 11, 2008||Pts Corporation||Processor and method for generating and storing compressed instructions in a program memory and decompressed instructions in an instruction cache wherein the decompressed instructions are assigned imaginary addresses derived from information stored in the program memory with the compressed instructions|
|US7523294 *||Nov 9, 2006||Apr 21, 2009||Realtek Semiconductor Corp.||Maintaining original per-block number of instructions by inserting NOPs among compressed instructions in compressed block of length compressed by predetermined ratio|
|US7568070 *||Jul 29, 2005||Jul 28, 2009||Qualcomm Incorporated||Instruction cache having fixed number of variable length instructions|
|US8824814 *||Mar 19, 2012||Sep 2, 2014||Alpha Imaging Technology Corp.||Pixel data compression and decompression method|
|US9146933||Dec 20, 2012||Sep 29, 2015||International Business Machines Corporation||Compressed storage access system with uncompressed frequent use data|
|US9378560 *||Jun 17, 2011||Jun 28, 2016||Advanced Micro Devices, Inc.||Real time on-chip texture decompression using shader processors|
|US9720693||Jun 26, 2015||Aug 1, 2017||Microsoft Technology Licensing, Llc||Bulk allocation of instruction blocks to a processor instruction window|
|US9792252||Apr 14, 2014||Oct 17, 2017||Microsoft Technology Licensing, Llc||Incorporating a spatial array into one or more programmable processor cores|
|US20040117744 *||Dec 13, 2002||Jun 17, 2004||Lsi Logic Corporation||Automated selection and placement of memory during design of an integrated circuit|
|US20050055527 *||Sep 4, 2003||Mar 10, 2005||Andreev Alexander E.||Controller architecture for memory mapping|
|US20050125633 *||Jan 12, 2005||Jun 9, 2005||Pts Corporation||Processor and method for generating and storing compressed instructions in a program memory and decompressed instructions in an instruction cache wherein the decompressed instructions are assigned imaginary addresses derived from information stored in the program memory with the compressed instructions|
|US20070028050 *||Jul 29, 2005||Feb 1, 2007||Bridges Jeffrey T||Instruction cache having fixed number of variable length instructions|
|US20070113052 *||Nov 9, 2006||May 17, 2007||Realtek Semiconductor Corp.||Method for compressing instruction codes|
|US20120294542 *||Mar 19, 2012||Nov 22, 2012||Alpha Imaging Technology Corp.||Pixel data compression and decompression method|
|US20120320067 *||Jun 17, 2011||Dec 20, 2012||Konstantine Iourcha||Real time on-chip texture decompression using shader processors|
|U.S. Classification||711/202, 711/E12.02, 712/230, 711/220, 712/E09.055, 711/125, 712/E09.072, 711/214, 711/217|
|International Classification||G06F12/04, G06F9/38, G06F12/08|
|Cooperative Classification||G06F12/04, G06F12/0875, G06F9/3802, G06F2212/401, G06F9/30178, G06F9/3822|
|European Classification||G06F9/38B, G06F9/38C4, G06F9/30U4, G06F12/08B14, G06F12/04|
|Apr 27, 2000||AS||Assignment|
Owner name: HEWLETT-PACKARD COMPANY, COLORADO
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KELTCHER, PAUL STANTON;RICHARDSON, STEPHEN ERIC;REEL/FRAME:010768/0930
Effective date: 19991216
|Sep 30, 2003||AS||Assignment|
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY L.P., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:014061/0492
Effective date: 20030926
|Feb 19, 2008||FPAY||Fee payment|
Year of fee payment: 4
|Feb 25, 2008||REMI||Maintenance fee reminder mailed|
|Feb 17, 2012||FPAY||Fee payment|
Year of fee payment: 8
|Nov 9, 2015||AS||Assignment|
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001
Effective date: 20151027
|Mar 25, 2016||REMI||Maintenance fee reminder mailed|
|Aug 17, 2016||LAPS||Lapse for failure to pay maintenance fees|
|Oct 4, 2016||FP||Expired due to failure to pay maintenance fee|
Effective date: 20160817