WO1998027492A1 - Cache hierarchy management with locality hints for different cache levels


Info

Publication number
WO1998027492A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
data
level
temporal
processor
Application number
PCT/US1997/022659
Other languages
French (fr)
Inventor
Millind Mittal
Original Assignee
Intel Corporation
Application filed by Intel Corporation
Priority to EP97953136A (published as EP1012723A4)
Priority to AU56940/98A (published as AU5694098A)
Publication of WO1998027492A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
    • G06F 9/30003 - Arrangements for executing specific machine instructions
    • G06F 9/3004 - Arrangements for executing specific machine instructions to perform operations on memory
    • G06F 9/30047 - Prefetch instructions; cache control instructions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0888 - Addressing of a memory level using selective caching, e.g. bypass
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 - Caches characterised by their organisation or structure
    • G06F 12/0897 - Caches characterised by their organisation or structure, with two or more cache hierarchy levels

Definitions

  • Temporal locality is an attribute associated with data and determined by how soon in the future a program will access the data.
  • Spatial locality is an attribute associated with the storage of data and determined by how close the address locations of data being used are to each other.
  • Notation: T denotes data having temporal locality and nT data lacking it; S denotes data having spatial locality and nS data lacking it.
  • Cache allocation can also be based on the particular type of instruction being executed. For example, load instructions could be either T:S or nT:S with respect to L1 of the cache hierarchy, while store instructions could be either T:S with respect to L1 or nT:S with respect to all levels of the cache hierarchy. Thus, variations can be introduced based on the particular instruction being executed.
  • The scheme of the present invention can be implemented with a T:S default condition, so that the cache hierarchy management can be "shut off" when not desired, leaving only the default T:S condition for all accesses. This default condition permits instructions written with cache hierarchy management capability to operate with a computer system which does not implement the invention. This aspect of the invention is noted in the flow diagram of Figure 6.
  • The diagram of Figure 6 shows what happens when an instruction containing the locality hint(s) of the present invention is executed. If the computer system is capable of processing the locality hint(s), then cache allocation is based on the hierarchy management scheme of the system when performing the operations dictated by the instruction (block 40). If the computer system is not capable of processing the locality hint(s), the locality hints are disregarded (block 41).
  • Figure 7 shows what happens when the preferred embodiment of Figure 4 is implemented. If the computer system is capable of processing the locality hint bit(s), then cache allocation is based on the level at which data is regarded as T:S or nT:S (block 50); the application of the design rule associated with Figure 4 utilizes this implementation. If the computer system is not capable of processing the locality hint(s), the default condition of allocating caches at all levels is used (block 51).
  • Path B denotes the situation where data is nT at both levels. In this instance, data reaches the CPU 10 without being cached at all; as noted, in the preferred embodiment streaming buffers are used for the data transfer. Typical data utilizing path B are read-once unit-stride vectors and blocks of data being copied from one set of address locations to another. Thus, data that are not to be re-used are sent along path B.
  • Path C denotes the situation where data is T:S at L2 but nT:S at L1, so that the data is cached in the L2 cache but not in the L1 cache. The matrix multiplication example described previously fits this category.
  • There may be multiple caches at a particular level; for example, there may be two separate caches at L1, one for handling data and the second for handling instructions. Likewise, multiple processors may be coupled at a particular level: four processors could be coupled at a point between the L1 and L2 caches, so that each processor has its own L1 cache while the L2 (and higher level) cache is shared by all four processors (as illustrated in Figure 5).
  • Where all four classification categories are used, each cache level is configured to have four blocks, as shown in Figure 3. In that case the available combinations of paths from memory 11a to CPU 10a increase significantly, complicating the implementation, and two bits would be allocated in the instruction for identifying the particular classification at each cache level.
  • Advantages of practicing the present invention reside in the design flexibility of allocating cache memory at each level of the cache hierarchy for a particular data pattern being accessed. This flexibility allows for improved performance in deciding when cache should be allocated, based on data access latency and effective cache utilization.
  • The locality hints associated with each cache level reside within the instructions, which benefits both operating systems and application programs. The processor opcodes are written to read the bit(s) providing the locality hint and respond accordingly by allocating or not allocating the cache(s) for the data access. The performance advantage is especially noticeable with multimedia and supercomputing systems requiring substantially higher rates of processing and data transfer.

Abstract

A computer system and method in which allocation of a cache memory (21a, 22a) is managed by utilizing a locality hint value (17, 18), included within an instruction (19), which controls whether cache allocation is to be made. The locality hint value is based on spatial and/or temporal locality for a data access and may be assigned to each level of a cache hierarchy where allocation control is desired. The locality hint value may be used to identify a lowest level where management of cache allocation is desired, and cache is allocated at that level and any higher level or levels. If the locality hint identifies a particular access for data as temporal or non-temporal with respect to a particular cache level, the particular access may be determined to be temporal or non-temporal with respect to the higher and lower cache levels.

Description

CACHE HIERARCHY MANAGEMENT WITH LOCALITY HINTS FOR DIFFERENT CACHE LEVELS
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of processors and, more particularly, to a technique of providing hierarchical management of cache memories.
2. Background of the Related Art
The use of a cache memory with a processor is well known in the computer art. A primary purpose of utilizing cache memory is to bring the data closer to the processor in order for the processor to operate on that data. It is generally understood that memory devices closer to the processor operate faster than memory devices farther away on the data path from the processor. However, there is a cost trade-off in utilizing faster memory devices. The faster the data access, the higher the cost to store a bit of data. Accordingly, a cache memory tends to be much smaller in storage capacity than main memory, but is faster in accessing the data.
A computer system may utilize one or more levels of cache memory. Allocation and de-allocation schemes implemented for the cache for various known computer systems are generally similar in practice. That is, data that is required by the processor is cached in the cache memory (or memories). If a miss occurs, then an allocation is made at the entry indexed by the access. The access can be for loading data to the processor or storing data from the processor to memory. The cached information is retained by the cache memory until it is no longer needed, made invalid or replaced by other data, in which instances the cache entry is de-allocated.
General practice has been to allocate cache for all accesses required by the processor. Accordingly, system architectures specify re-use of accessed data without any notion of the relevant cache hierarchy level. That is, all accesses are allocated in cache. A disadvantage of this approach is that it does not address instances where data is only read once with respect to a cache level, but where that same data may be re-used with respect to another cache level. One solution (implemented as a cache bypass operation) provides for a load instruction to bypass the cache altogether by not allocating the cache for certain accesses. However, this technique does not provide flexibility in programming and, when implemented, applies for all applications.
It is appreciated that in some operations, system performance can be enhanced by not allocating the cache. It would also be advantageous if cache allocation can be programmed by software. Furthermore, advantages can be gained if cache memory allocation can be based on the application which is to be executed and if such allocation can be managed based on the particular level of the cache memory within the cache hierarchy.
The present invention describes a technique of providing for a hierarchical cache memory management structure in which cache allocation criteria are established at a particular cache hierarchy level.
SUMMARY OF THE INVENTION
The present invention describes a technique for providing allocation of a cache memory by utilizing a locality hint associated with an instruction. When a processor accesses a memory for transfer of data between the processor and the memory, that access can be allocated or not allocated in a cache memory. The locality hint associated with the instruction provides the programming control over whether cache allocation is to be made.
When a plurality of cache memories are present, they are arranged as a cache hierarchy, usually with the lowest level being closest to the processor. A locality hint value is assigned for one or more of the cache level(s) in order to categorize the data at each cache hierarchy level. The locality hint values identify which cache levels are to be allocated for the particular data transfer. Thus, the management of cache memory at each level is based on the locality hint value, which is provided by a bit or bits in the particular instruction. In the practice of the present invention, cache allocation management is based on the temporal (or non-temporal) and spatial (or non-spatial) nature of the data access. However, in the preferred embodiment, only temporal:spatial and non-temporal:spatial categories are used with the locality hints. Thus, each cache level can have either of two states for the data. Accordingly, the locality hints associated with the particular instruction determine if the data is to be categorized as temporal:spatial data (cached) or non-temporal:spatial data (not cached).
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows a circuit block diagram of a prior art computer system, in which a cache memory is used for data accesses between a main memory and processor of the computer system.
Figure 2 shows a circuit block diagram of an exemplary prior art computer system, in which two cache memories are arranged into cache memory levels for accessing of data between a main memory and a processor(s) of the computer system.
Figure 3 shows a circuit block diagram of a computer system having two hierarchical levels of cache memories and utilizing the present invention to specify four data access attributes based on temporal and spatial parameters for the cache memories.
Figure 4 shows a circuit block diagram of a computer system having two hierarchical levels of cache memories for implementing the preferred embodiment, in which only two of the four data access attributes noted in Figure 3 are used for each level of the cache hierarchy.
Figure 5 shows a circuit block diagram of a computer system implementing the present invention, in which hierarchical cache memory architecture is used and in which cache allocation control is provided by locality hint values present in an instruction that accesses data.
Figure 6 is a flow diagram showing a general case in implementing the present invention when the instruction of Figure 5 is executed.
Figure 7 is a flow diagram showing the implementation of the preferred embodiment in the specific case illustrated in Figure 4 when the instruction of Figure 5 is executed.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A technique is described for providing hierarchical management of cache memories, in which cache allocation is determined by data utilization. In the following description, numerous specific details are set forth, such as specific memory devices, circuit diagrams, processor instructions, etc., in order to provide a thorough understanding of the present invention. However, it will be appreciated by one skilled in the art that the present invention may be practiced without these specific details. In other instances, well known techniques and structures have not been described in detail in order not to obscure the present invention. It is to be noted that a particular implementation is described as a preferred embodiment of the present invention; however, it is readily understood that other embodiments can be designed and implemented without departing from the spirit and scope of the present invention. Furthermore, it is appreciated that the present invention is described in reference to a serially arranged cache hierarchy system, but it need not be limited strictly to such a hierarchy.
Referring to Figure 1, a typical computer system is shown, wherein a processor 10, which forms the central processing unit (CPU) of the computer system, is coupled to a main memory 11 by a bus 14. The main memory 11 is typically comprised of random-access memory and is usually referred to as RAM. The main memory 11 is in turn generally coupled to a mass storage device 12, such as a magnetic or optical memory device, for mass storage (or saving) of information. A cache memory 13 (hereinafter also referred to simply as the cache) is coupled to the bus 14 as well. The cache 13 is shown located between the CPU 10 and the main memory 11 in order to exemplify the functional utilization and transfer of data associated with the cache 13. It is appreciated that the actual physical placement of the cache 13 can vary depending on the system and the processor architecture. Furthermore, a cache controller 15 is shown coupled to the cache 13 and the bus 14 for controlling the operation of the cache 13. The operation of a cache controller, such as the controller 15, is known in the art and, accordingly, cache controllers are not illustrated in the subsequent Figures. It is presumed that some controller(s) is/are present under control of the CPU 10 to control the operation of the cache(s) shown.
In operation, information transfer between the memory 11 and the CPU 10 is achieved by memory accesses from the CPU 10. When data is currently, or shortly to be, accessed by the CPU 10, that data is first allocated in the cache 13. That is, when the CPU 10 accesses given information from the memory 11, it seeks the information from the cache 13 first. If the accessed data is in the cache 13, a "hit" occurs. Otherwise, a "miss" results and cache allocation for the data is sought. As currently practiced, all accesses (whether load or store) require the allocation of the cache 13 (except for the limited exception, noted in the Background section above, when the cache is bypassed).
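By way of illustration (a minimal C sketch; the direct-mapped structure, sizes and names here are invented for the example and form no part of the patent), this allocate-on-miss behavior can be modeled as:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_LINES  256   /* illustrative: 256 lines */
    #define LINE_SHIFT 6     /* illustrative: 64-byte lines */

    typedef struct {
        bool     valid;
        uint32_t tag;
    } cache_line;

    static cache_line cache[NUM_LINES];

    /* Returns true on a "hit"; on a "miss", allocates the entry
     * indexed by the access, evicting whatever occupied it. */
    bool lookup_and_allocate(uint32_t addr)
    {
        uint32_t index = (addr >> LINE_SHIFT) % NUM_LINES;
        uint32_t tag   = addr >> LINE_SHIFT;

        if (cache[index].valid && cache[index].tag == tag)
            return true;               /* hit */

        cache[index].valid = true;     /* miss: de-allocate and re-allocate */
        cache[index].tag   = tag;
        return false;
    }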
Referring to Figure 2, a computer system implementing a multiple cache arrangement is shown. The CPU 10 is still coupled to the main memory 11 by the bus 14 and the memory 11 is then coupled to the mass storage device 12. However, in the example of Figure 2, two separate cache memories 21 and 22 are shown. The caches 21-22 are shown arranged serially and each is representative of a cache level, referred to as Level 1 (L1) cache and Level 2 (L2) cache, respectively. Furthermore, the L1 cache 21 is shown as part of the CPU 10, while the L2 cache 22 is shown external to the CPU 10. This structure exemplifies the current practice of placing the L1 cache on the processor chip while higher level caches are placed external to it. The actual placement of the various cache memories is a design choice or dictated by the processor architecture. Thus, it is appreciated that the L1 cache could be placed external to the CPU 10.
Generally, CPU 10 includes an execution unit 23, register file 24 and prefetch/decoder unit 25. The execution unit 23 is the processing core of the CPU 10 for executing the various processor instructions. The register file 24 is a set of general purpose registers for storing (or saving) various information required by the execution unit 23. There may be more than one register file in more advanced systems. The prefetch/decoder unit 25 fetches instructions from a storage location (such as the main memory 11) holding the instructions of a program that will be executed and decodes these instructions for execution by the execution unit 23. In more advanced processors utilizing pipelined architecture, future instructions are prefetched and decoded before the instructions are actually needed so that the processor is not idle waiting for the instructions to be fetched when needed.
The various units 23-25 of the CPU 10 are coupled to an internal bus structure 27. A bus interface unit (BIU) 26 provides an interface for coupling the various units of CPU 10 to the bus 14. As shown in Figure 2, the L1 cache is coupled to the internal bus 27 and functions as an internal cache for the CPU 10. However, again it is to be emphasized that the L1 cache could reside outside of the CPU 10 and be coupled to the bus 14. The caches can be used to cache data, instructions or both. In some systems, the L1 cache is actually split into two sections, one section for caching data and one section for caching instructions. However, for simplicity of explanation, the various caches described in the Figures are shown as single caches, with data, instructions and other information all referenced herein as data. It is appreciated that the operations of the units shown in Figure 2 are known. Furthermore, it is appreciated that the CPU 10 actually includes many more components than just the components shown. Thus, only those structures pertinent to the understanding of the present invention are shown in Figure 2. It is also to be noted that the computer system may be comprised of more than one CPU (as shown by the dotted line in Figure 2). In such a system, it is typical for multiple CPUs to share the main memory 11 and/or mass storage unit 12. Accordingly, some or all of the caches associated with the computer system may be shared by the various processors of the computer system. For example, with the system of Figure 2, the L1 cache 21 of each processor would be utilized by its processor only, but the external L2 cache 22 would be shared by some or all of the CPUs of the system. The present invention can be practiced in a single CPU computer system or in a multiple CPU computer system. It is further noted that other types of units (other than processors) which access memory can function equivalently to the CPUs described herein and, therefore, are capable of performing memory accessing functions similar to the described CPUs. For example, direct memory accessing (DMA) devices can readily access memory in the manner of the processors described herein. Thus, a computer system having one processor (CPU) but one or more such memory accessing units would function equivalently to the multiple processor system described herein.
As noted, only two caches 21-22 are shown. However, the computer system need not be limited to only two levels of cache. It is now common practice to utilize a third level (L3) cache in more advanced systems. It is also the practice to have a serial arrangement of cache memories, so that data cached in the L1 cache is also cached in the L2 cache. If there happens to be an L3 cache, then data cached in the L2 cache is typically cached in the L3 cache as well. Thus, data cached at a particular cache level is also cached at all higher levels of the cache hierarchy.
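This serial (inclusive) property can be stated compactly. In the sketch below (a hypothetical helper and an assumed three-level hierarchy, neither taken from the patent), a fill at level i also installs the line at every higher level:

    #include <stdint.h>

    #define MAX_LEVEL 3   /* assumed three-level hierarchy for the sketch */

    /* Hypothetical stub: allocate 'line' in the cache at 'level'. */
    static void install(int level, uint32_t line)
    {
        (void)level;
        (void)line;
    }

    /* Inclusive fill: data cached at a level is also cached at all
     * higher levels of the hierarchy. */
    void fill(int level, uint32_t line)
    {
        for (int l = level; l <= MAX_LEVEL; l++)
            install(l, line);
    }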
The currently practiced method in which the cache is allocated is based primarily on the spatial closeness of data in reference to the currently executed instruction in the CPU. That is, for a given memory location accessed by the CPU 10, data at that location, as well as data within a specified range of adjacent locations (the stride), is cached. This is due to the current practice of using cache systems that fetch complete cache lines. Currently there is no way to distinguish which accesses should be cached (or not cached) based on the use or re-use of data at a particular cache hierarchy level. The present invention provides a way to manage the cache hierarchy to decide what information should be cached and, if cached, at which levels (if there are more than one). By permitting program instructions to control the caching decision, the cache allocation can be programmed based on the particular pattern of data usage. Thus, data pattern accesses requiring multiple re-use can be cached, while data pattern accesses requiring single use need not be allocated in the cache, and such control can be made with respect to each level of the cache hierarchy.
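As a concrete (invented) illustration of the re-use distinction, the translation table below is consulted for every input byte and so is worth caching, while the source and destination streams are each touched exactly once:

    #include <stddef.h>
    #include <stdint.h>

    /* 'table' is hit repeatedly across the whole input: multiple
     * re-use, so its lines should be allocated in cache.  'src' and
     * 'dst' are streamed through a single time: allocating cache for
     * them at every level only evicts data, such as 'table', that
     * will be used again. */
    void translate(uint8_t *dst, const uint8_t *src, size_t n,
                   const uint8_t table[256])
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = table[src[i]];
    }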
Referring to Figure 3, a hierarchical cache management structure of the present invention is shown. However, in order to understand the present invention, certain terminology must be understood in reference to the cache management. The present invention operates within a framework wherein the particular data being accessed will have (or not have) temporal locality and spatial locality. Temporal locality is an attribute associated with data and determined by how soon in the future a program will access the data. Spatial locality is an attribute associated with the storage of data and determined by how close the address locations of data being used are to each other. Thus, for each data pattern, the data may have temporal locality (T) or not have temporal locality (nT) with respect to a cache level in the hierarchy. Similarly, that same data pattern may have spatial locality (S) or not have spatial locality (nS) with respect to a cache level in the hierarchy.
In Figure 3, two levels of cache hierarchy are shown (representing levels L1 and L2), wherein at each level there are four possibilities for classifying the data pattern being accessed. The classifications are noted as 1) temporal and spatial (T:S); 2) non-temporal, but spatial (nT:S); 3) temporal, but non-spatial (T:nS); and 4) non-temporal and non-spatial (nT:nS). These four classification categories are represented by the four appropriately labeled blocks 34-37 at L1 and four similarly labeled blocks 30-33 at L2. The classifications are based on the attributes associated with the data access for a computer system. It is appreciated that two levels of cache hierarchy are shown in Figure 3, but there could be additional cache levels. Furthermore, the present invention could be practiced where there is only one cache level (L1 only). For each level of the cache hierarchy, there would be four blocks representing the four classification categories.
In the practice of the present invention, the temporal property is associated with how close to the CPU 10 the data is stored or saved. Accordingly, temporal is associated with the use or re-use of data at a given level. For example, if a particular data pattern in the program is identified to be T with respect to L1, but nT with respect to L2, then this data will be used in the near future in the L1 cache, but not so near in the future in the L2 cache. The temporal distance of how soon the data will be used or re-used is application dependent for a particular computer system and software. When data access is regarded as T at a given cache level, it will be re-used within a certain time frame (for example, within x number of instructions) in the near future. Where data access is regarded as nT at a given level, it will not be re-used within the specified time frame.
The spatial property is associated with the stride (memory address range or distance) of a data pattern being accessed and can be designated S or nS at each of the levels in the cache hierarchy. That is, a program will utilize various data for a particular operation and the required data for this operation will reside at various memory locations. The address locations for the data can be close together or far apart. The stride range or distance for determining how close the data locations must be for the data to be regarded as within the spatial category is a design parameter of the computer system. The spatial requirement will depend on factors such as cache size, minimum cache-line size and set associativity for the cache. For example, where the L2 cache is larger than the L1 cache, the stride range for the L2 cache can be made larger than that associated with the L1 cache.
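A trivial form of the spatial test, assuming purely for illustration that the stride range is a single 64-byte cache line, might look like:

    #include <stdbool.h>
    #include <stdint.h>

    /* Two accesses are treated as spatially local (S) here when they
     * fall within the same 64-byte line; a real design would choose
     * the stride range per level, e.g. wider for a larger L2. */
    bool spatially_local(uintptr_t a, uintptr_t b)
    {
        return (a >> 6) == (b >> 6);
    }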
Accordingly, as shown in Figure 3, a data pattern access by the CPU 10 from memory 11 will have temporal and spatial attributes at each level of the cache hierarchy. Thus, data can be categorized as T:S (block 30), nT:S (block 31), T:nS (block 32) or nT:nS (block 33) at L2. Since there are also four data classification categories at L1, each data block 30-33 can take one of four similar attributes (as shown by corresponding blocks 34-37 at L1). The four paths from each block 30-33 of L2 to L1 are noted in Figure 3. Again, if there are additional cache levels in the hierarchy, the interconnection between the cache levels would have similar paths as shown between L2 and L1 in Figure 3. Finally, each of the blocks 34-37 of L1 is coupled to the CPU 10.
It is appreciated that the currently implemented cache management systems are equivalent to providing the path 38 (shown by a dotted line), in that when a data access is made by the CPU 10, that data and its adjacent data having spatial locality are allocated at all levels of the cache hierarchy. Thus, the current practice is to treat accesses as having a T:S attribute at each level. The T:S data is cached at all levels of the hierarchy; the one exception is the condition noted earlier when the cache is bypassed altogether. The present invention, on the other hand, can provide multiple classifications of data access at each level of the cache hierarchy, and caching or non-caching can be controlled at each level by the attributes associated with the data at a given level.
Since the T:S condition reflects general access patterns for scalar data, and since most data accesses will fall in this category, the practice of treating all accesses as T:S may be adequate in many instances. However, in other situations performance can be lost when adhering to this rule. There will be instances when data patterns do not fall within this general access (T:S) category. For example, the multiplying of two matrices ([A] x [B]) requires repeated use of the column [B] data with the values of [A]. The prior art technique would not differentiate these operations from others when allocating the cache(s). However, by employing the present invention, the matrix [A] values could be designated as T:S at L2, but nT:S at L1. The column [B] data can still be regarded as T:S at both L1 and L2. Thus, the block of data for [A] can be cached in the large L2 cache, but not in the L1 cache. In another example, where data will be used only once, such as for read-once data or for block copying from one address to another, such data accesses can be regarded as nT and need not be cached at all at any level of the cache hierarchy.
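Later instruction sets exposed exactly this kind of hint. As a hedged sketch using the x86 SSE prefetch intrinsic (which postdates this filing, and whose mapping to the patent's categories is only approximate), _MM_HINT_T1 roughly plays the role of T:S at L2 but nT:S at L1 for the [A] values, while _MM_HINT_T0 keeps the re-used [B] data in all levels:

    #include <xmmintrin.h>   /* _mm_prefetch, _MM_HINT_T0/_MM_HINT_T1 */

    /* Naive C = A x B for n x n row-major matrices; C is assumed
     * pre-zeroed.  Prefetch hints are advisory and never fault, so
     * prefetching one element or row past the end is harmless. */
    void matmul(int n, const float *A, const float *B, float *C)
    {
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++) {
                float a = A[i * n + k];
                /* [A] values: bring toward L2 but keep them out of L1. */
                _mm_prefetch((const char *)&A[i * n + k + 1], _MM_HINT_T1);
                for (int j = 0; j < n; j++)
                    C[i * n + j] += a * B[k * n + j];
                /* Row k+1 of [B] is re-used for every row of [A]:
                 * cache it at all levels. */
                _mm_prefetch((const char *)&B[(k + 1) * n], _MM_HINT_T0);
            }
        }
    }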
Thus, it is appreciated that the present invention provides for a scheme in which data can be categorized at each cache hierarchy level depending on the attributes associated with the data at a particular level. Accordingly, the noted temporal and spatial criteria can be set based on the action required of the data. Once categorized, mechanisms (rules) can be put into place for how each of the categories is to be processed at each level. Thus, cache allocation can be based on none, one, or more than one of the categories available at each of the levels. Further, the allocation rules can be changed at each of the levels, so that at one level a particular category is cached, but at another level the same category may not be cached.
In Figure 3, four categories (T:S, nT:S, T:nS and nT:nS) are noted and can be readily implemented at each of the cache hierarchy levels. However, it is appreciated that data accesses can be categorized into more or fewer classification categories. Accordingly, in the preferred embodiment, a simpler design is implemented, as shown in Figure 4. The embodiment shown in Figure 4 is preferred since most data access patterns can still be made to fit within the reduced number of categories shown. In Figure 4, a cache hierarchy management scheme of the preferred embodiment is shown having only T:S and nT:S categories at each level of the cache hierarchy. Only the L1 and L2 levels are shown, but it is appreciated that additional levels can be readily implemented. The preferred embodiment can be practiced utilizing only one cache level as well. The preferred embodiment shown in Figure 4 is a subset of the invention shown in Figure 3.
As illustrated in Figure 4, data accesses can be classified as T:S or nT:S at L2 and the same at L1. With the preferred embodiment, only spatially close data are employed. The nS categories have been disregarded in the cache allocation scheme of the preferred embodiment since current cache systems are generally based on obtaining a complete cache line; accordingly, it is the temporal aspect (T or nT) which determines if a particular data access will be cached or not cached. Thus, with only two classifications at each cache hierarchy level, the inter-level pathways are simplified. Since there are only two categories at each level (as shown by blocks 30-31 at L2 and blocks 34-35 at L1), data can reach the CPU 10 from the memory 11 by four potential paths, designated A, B, C and D. Data fitting the T:S condition at a given cache hierarchy level is designated to have cache allocated at that level. Data fitting the nT:S condition at a given level is designated to not have any cache allocation at that level. In the preferred embodiment, nT:S data are placed in a high-speed buffer(s) for the data transfer.
Although there are four potential paths in Figure 4, only three are actually implemented due to a design rule imposed on the cache hierarchy management scheme of the preferred embodiment. This rule specifies locality attributes based on the following two requirements:
1. If an access is specified T:S with respect to a level Li, then that access is to exhibit T:S locality for Lj for all j > i; and
2. If an access is specified nT:S with respect to a level Li, then that access exhibits nT:S locality for Lj for all j < i, and it is T:S for all j > i. The above requirements presume that for a given Li, the smaller the value of i, the closer that Li is to the processor in the cache hierarchy.
Thus, because of the above rule, path D is not permitted and, therefore, is shown by a dotted line. The above rule is implemented effectively with the current practice of having the same or larger size caches at each higher level of the cache hierarchy.
Accordingly, in Figure 4, if data is specified as T:S at L1, then it is to be T:S at L2 and higher. An example of this requirement is shown as path A. If data is specified as nT:S at L1, then it is to be T:S at L2 and higher. An example of this requirement is shown as path C. If data is specified as nT:S at L2, then it is to be nT:S at L1 and T:S at L3 and higher (if there were an L3). An example of this requirement is shown as path B for the two levels L1 and L2. By adhering to the above rule of management in implementing the cache hierarchy management scheme of the preferred embodiment, it is only necessary to identify whether a particular access is T:S or nT:S at one particular level of the cache hierarchy.
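As a hedged illustration (again not taken from the patent text), the rule can be captured in a few lines of C: a single number, the highest level at which the access is declared nT:S (zero if none), determines the behavior at every level, which is why path D cannot arise:

    #include <stdbool.h>

    /* Sketch of the Figure 4 design rule.  Levels are numbered from 1
     * (closest to the processor) upward.  highest_nt_level is the highest
     * level Li at which the access is specified nT:S, or 0 if the access
     * is T:S everywhere. */
    static bool allocate_at_level(int level, int highest_nt_level)
    {
        if (level <= highest_nt_level)
            return false;   /* nT:S here: bypass, use a streaming buffer */
        return true;        /* T:S here: allocate a line in this cache   */
    }

    /* Path A: allocate_at_level(1, 0) and allocate_at_level(2, 0) are true.
     * Path C: allocate_at_level(1, 1) is false; allocate_at_level(2, 1) is true.
     * Path B: both calls with highest_nt_level == 2 are false.
     * Path D (allocate at L1 but not at L2) cannot be expressed. */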
Although there are a variety of ways to specify the particular condition for a cache level, in the preferred embodiment computer instructions are used to designate how the data access is to be classified. The T:S or nT:S classification at each cache hierarchy level is specified by a locality "hint" associated with each level for instructions that access the memory 11. For example, when load, store and/or prefetch instructions are executed, the locality hint(s) is/are carried as part of the instruction to designate the status of the data associated with the access.
An implementation of the preferred embodiment is shown in Figure 5. The computer system of Figure 5 is equivalent to the computer system of Figure 2 (accordingly, the letter "a" has been appended to like reference numerals of Figure 2), but now has a processor 10a which includes an execution unit 23a for operating on instructions that include the locality hint(s). An instruction 19 is shown having locality hint(s) as part of the instruction. A particular bit or bits in the instruction is associated with the caching of data at each of the cache levels where cache memory allocation is to be designated. In the example, two bits 17 and 18 are shown: the first bit 17 provides the locality hint value for the L1 cache 21a and the second bit 18 provides the locality hint value for the L2 cache 22a.
The bit state identifies the attribute assigned for the particular access being attempted by the instruction 19. For example, a "1" bit state for bits 17 and/or 18 designates a T:S condition for a cache level, while a "0" bit state designates an nT:S condition. Where additional cache levels are present, a bit would be required for each of the cache levels, provided only two attributes are utilized. Where four categories are to be used (such as when the hierarchy structure of Figure 3 is being implemented), two bits are required for each level. It is to be appreciated that not all caches need utilize the cache hierarchy control provided by the locality hint bit(s). For example, only L1, or only L1 and L2 (in a three-level cache system), may opt to utilize the invention. The other cache(s) would then treat accesses based on the hierarchy rule or on a default condition (such as treating the accesses as T:S at that level). Thus, it is appreciated that numerous variations are available.
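A sketch of how such hint bits might be decoded follows. The field position is an assumption made purely for illustration: 17 and 18 above are figure reference numerals, and the patent fixes only the meaning of the bits, not their location in the instruction word:

    #include <stdbool.h>
    #include <stdint.h>

    #define HINT_SHIFT 6u   /* assumed position of the hint field         */

    /* One hint bit per cache level: 1 = T:S (allocate at that level),
     * 0 = nT:S (bypass that level).  cache_level is 0 for L1, 1 for L2. */
    static bool hint_is_temporal(uint32_t insn, unsigned cache_level)
    {
        return (insn >> (HINT_SHIFT + cache_level)) & 1u;
    }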
The instructions which will typically incorporate the locality hint bit(s) are load, store and prefetch instructions, with primary use attributable to the prefetch instruction. However, it is appreciated that other memory-accessing instructions can readily incorporate the present invention. The prefetch instruction prefetches the data (including other instructions) for execution by the processor, and it is this prefetching operation that determines how the caches should be allocated. It should be noted that in some instances the prefetched data may never be used, whether because a branch is not taken or because an exception is generated.
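For comparison, the prefetch hints that Intel later exposed through SSE intrinsics carry per-level temporal information in much the same spirit; the following uses that real API as an analogy, not the instruction encoding claimed here:

    #include <xmmintrin.h>

    void warm_caches(const float *reused, const float *stream, int n)
    {
        for (int i = 0; i < n; i += 16) {   /* one 64-byte line per step */
            /* re-used soon: temporal with respect to all cache levels   */
            _mm_prefetch((const char *)&reused[i], _MM_HINT_T0);
            /* read-once data: non-temporal with respect to all levels   */
            _mm_prefetch((const char *)&stream[i], _MM_HINT_NTA);
        }
    }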
With the present invention, cache allocation can be based on the particular type of instruction being executed. For example, load instructions could be either T:S or nT:S with respect to L1 of the cache hierarchy, while store instructions could be either T:S with respect to L1 or nT:S with respect to all levels of the cache hierarchy. Thus, variations can be introduced based on the particular instruction being executed. Additionally, the scheme of the present invention can be implemented with a T:S default condition, so that the cache hierarchy management can be "shut off" when not desired (leaving only the default T:S condition for all accesses). This default condition permits instructions written with cache hierarchy management capability to operate with a computer system which does not implement the invention. This aspect of the invention is noted in the flow diagram of Figure 6.
The diagram of Figure 6 shows what happens when an instruction containing the locality hint(s) of the present invention is executed. If the computer system is capable of processing the locality hint(s), then the cache allocation is based on the hierarchy management scheme of the system when performing the operations dictated by the instruction (as shown in block 40). However, if the computer system is not capable of processing the locality hint(s), the locality hints are disregarded (as shown in block 41).
The diagram of Figure 7 shows what happens in the instance the preferred embodiment of Figure 4 is implemented. If the computer system is capable of processing the locality hint bit(s), then the cache allocation is based on the level at which data is regarded as T:S or nT:S (as shown in block 50). The application of the design rule associated with Figure 4 would utilize this implementation. However, if the computer system is not capable of processing the locality hint(s), the default condition of allocating caches at all levels is used (as shown in block 51).
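A compact way to read Figures 6 and 7 together is sketched below; the capability query is hypothetical and stands in for whatever mechanism a given implementation uses to know whether the hints are supported:

    #include <stdbool.h>

    /* Hypothetical capability query, stubbed out for illustration. */
    static bool supports_locality_hints(void) { return false; }

    /* If the system understands locality hints, honour the per-level
     * hint; otherwise fall back to the default of allocating at every
     * level (treating every access as T:S), so hinted code still runs
     * correctly on hardware that ignores the hints. */
    static bool allocate_line(bool hint_is_temporal)
    {
        if (!supports_locality_hints())
            return true;
        return hint_is_temporal;
    }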
Again, it is appreciated that the manner of configuring the classification of data for each cache level is a design choice, and various configurations are available without departing from the spirit and scope of the present invention. Examples follow of the types of data accesses configured using the paths shown in Figure 4. Examples of data utilizing path A are scalar accesses and blocks of unit-stride vectors that have re-use and are small enough in size to fit in the L1 cache. The size of the memory device utilized for the L1 cache determines how much data can be cached in the L1 cache. It is presumed that the L2 cache is at least as large as (and typically larger than) the L1 cache in storage size.
A second path B denotes the situation in which data is nT at both levels. In this instance, data reaches the CPU 10 without being cached at all; as noted, in the preferred embodiment streaming buffers are used for the data transfer. Examples of data utilizing path B are read-once unit-stride vectors and block copying of data from one set of address locations to another. Thus, data that are not to be re-used are sent along path B.
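By analogy (again using later, real SSE2 intrinsics rather than the mechanism claimed here), a path-B style block copy can move data through non-temporal stores so that no useful cache lines are displaced:

    #include <emmintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Block copy with non-temporal stores, assuming dst and src are
     * 16-byte aligned and n is a multiple of 16. */
    void stream_copy(uint8_t *dst, const uint8_t *src, size_t n)
    {
        for (size_t i = 0; i < n; i += 16) {
            __m128i v = _mm_load_si128((const __m128i *)(src + i));
            _mm_stream_si128((__m128i *)(dst + i), v);
        }
        _mm_sfence();   /* order the streamed stores before later use */
    }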
A third path C denotes the situation in which data is T:S at L2 but nT:S at L1. In this case, data is cached in the L2 cache, but not in the L1 cache. Such a condition exists when data is re-used, but not soon enough to warrant allocating the L1 cache, while the size of the L2 cache justifies an allocation at that level. The previously described matrix multiplication example fits this category.
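Continuing the analogy with the later SSE hints, a computation whose operands are re-used but not immediately can request them at L2 only (the T1 hint), which corresponds to the path C placement; the loop below is illustrative, not tuned:

    #include <xmmintrin.h>

    /* y = A * x.  Each element of x is re-used on every row, but a full
     * row of A passes between re-uses, so hint x toward L2 (T1) rather
     * than letting it contend for L1.  Prefetching slightly past the end
     * of x is architecturally harmless. */
    void matvec(const float *a, const float *x, float *y, int n)
    {
        for (int i = 0; i < n; i++) {
            float sum = 0.0f;
            for (int j = 0; j < n; j++) {
                if ((j & 15) == 0)          /* once per 64-byte line */
                    _mm_prefetch((const char *)&x[j + 16], _MM_HINT_T1);
                sum += a[i * n + j] * x[j];
            }
            y[i] = sum;
        }
    }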
It is to be noted that although only two hierarchical levels have been illustrated, the present invention is applicable with any number of cache levels. In some instances there may be multiple caches at a particular level. For example, there may be two separate caches at L1, one for handling data and the second for handling instructions. Also, as noted earlier, multiple processors (or other memory accessing devices) may be coupled at a particular level. For example, four processors could be coupled at a point between the L1 and L2 caches, so that each processor would have its own L1 cache, but the L2 (and higher level) cache is shared by all four processors (as illustrated in Figure 5).
Finally, it is to be noted that in the implementation of the preferred embodiment, all data is assumed to have spatial locality. This simplifies the design rule for implementation. However, it is appreciated that the other two discussed categories of T:nS and nT:nS can be made part of the design equation. In this instance, each cache level will be configured to have four blocks, as shown in Figure 3. The available combination of paths from memory 11a to CPU 10a will increase significantly, thereby complicating the implementation. However, if so desired, such an undertaking can be achieved without departing from the spirit and scope of the present invention. In such a system, two bits would be allocated in the instructions for identifying the particular classification at each cache level.
Advantages of practicing the present invention reside in the design flexibility of allocating cache memory at each level of the cache hierarchy for a particular data access pattern. This design flexibility allows for improved performance in deciding when cache should be allocated, based on data access latency and effective cache utilization. The locality hints associated with each cache level reside within the instructions, which benefits both operating systems and application programs. The processor opcodes are written to read the bit(s) providing the locality hint and to respond accordingly by allocating or not allocating the cache(s) for the data access. The performance advantage is especially noticeable with multimedia and supercomputing systems requiring substantially higher rates of processing and data transfer.
Thus, a technique for providing cache hierarchy management is described.

Claims

I Claim:
1. A computer system for providing cache memory management comprising: a first memory; a processor coupled to said first memory for accessing locations of said first memory for data transfer between said processor and said first memory; at least one cache memory coupled to said processor and said first memory for caching of said data; said processor for receiving an instruction in which a locality hint value is included therein for determining whether cache allocation is to be made to said at least one cache memory.
2. The computer system of claim 1 wherein said locality hint value is provided by a bit-state of a designated bit or bits within said instruction.
3. The computer system of claim 2 wherein said locality hint value is provided by said bit or bits for each of said cache memories when more than one cache memory is present.
4. The computer system of claim 1 wherein said locality hint value identifies if said data is regarded temporal with respect to one or more of said cache memories.
5. The computer system of claim 1 wherein said locality hint value identifies if said data is regarded temporal and spatial, temporal and non-spatial, non-temporal and spatial, or non-temporal and non-spatial with respect to one or more of said cache memories.
6. A computer system for providing cache memory management comprising: a main memory; a processor coupled to said main memory for accessing locations of said main memory for data transfer between said processor and said main memory; a plurality of cache memories coupled to said processor and said main memory and arranged so that a cache memory closest to said processor with respect to data transfer is at a lowest level of a cache hierarchy and other cache memory or memories are arranged at a higher level or levels in said cache hierarchy; said processor for receiving an instruction in which a locality hint value is included therein for determining at which of said cache levels cache allocation is to be made.
7. The computer system of claim 6 wherein said locality hint value identifies if a particular access for data is temporal or non-temporal at a given one of said cache levels and if said particular access is specified as temporal with respect to a cache level Li, then said particular access is determined to be temporal for cache level Lj for all j > i; but if said particular access is specified as non-temporal with respect to said level Li, then said particular access is determined to be non-temporal for cache level Lj for all j < i and is temporal for cache level Lj for all j > i, wherein the smaller the value of i, the closer said level Li is to said processor in said cache hierarchy.
8. The computer system of claim 6 wherein said locality hint value is provided by a bit-state of a designated bit or bits within said instruction.
9. The computer system of claim 8 wherein said locality hint value is provided by said bit or bits for each level of said cache hierarchy where said cache memory management is desired.
10. The computer system of claim 8 wherein said locality hint value identifies a lowest level where caching is desired and cache is allocated at that level and all higher level or levels.
11. The computer system of claim 8 wherein said locality hint value identifies a lowest level where caching is not desired and cache is not allocated at that level and all lower level or levels.
12. The computer system of claim 11 wherein cache is allocated at all levels higher than said lowest level where caching is not desired.
13. The computer system of claim 8 further including an additional processor, wherein said processors share one or more levels of said cache memories.
14. In a computer system, a method of allocating cache memories based on a pattern of accesses for data utilized by a processor, comprising the steps of: implementing a first memory where data is to be stored; implementing said cache memories for caching of said data; implementing a processor in which said processor is coupled to said first memory and said cache memories; said processor for executing an instruction in which a locality hint value is included therein for determining whether cache allocation is to be made for at least one of said cache memories.
15. The method of claim 14 wherein said locality hint value is provided by a bit-state of a designated bit or bits within said instruction.
16. The method of claim 15 wherein said locality hint value is provided by said bit or bits for each of said cache memories.
17. The method of claim 14 wherein said locality hint value identifies if said data is regarded temporal with respect to one or more of said cache memories.
18. The method of claim 14 wherein said locality hint value identifies if said data is regarded temporal and spatial, temporal and non-spatial, non-temporal and spatial, or non-temporal and non-spatial with respect to one or more of said cache memories.
19. In a computer system having a processor which is coupled to a first memory for accessing locations in said first memory for transferring data between said processor and said first memory, and in which a plurality of cache memories are coupled to provide caching of said data, a method of allocating said cache memories based on a pattern of accesses for said data, comprising the steps of: arranging said cache memories in a cache hierarchy arrangement for transfer of said data between said processor and said first memory, each of said cache memories being assigned to a cache level within said cache hierarchy; implementing said processor in which said processor allocates or does not allocate said cache memories at each cache hierarchy level for said data transfer based on a locality hint value included with an instruction which transfers said data between said processor and said first memory when said instruction is executed.
20. The method of claim 19 wherein said locality hint value is provided to identify if said pattern of accesses for said data is temporal or non-temporal at a given one of said hierarchy levels.
21. The method of claim 20 wherein said step of arranging said cache memories includes identifying which one or more of said cache memory or memories will be utilized by more than one processor, when multiple processors are present in said computer system.
22. The method of claim 20 wherein said locality hint value is provided by a bit-state of a designated bit or bits within said instruction.
23. The method of claim 20 wherein said locality hint value is provided by a bit-state of a designated bit or bits within said instruction for each level of said cache hierarchy where management of cache allocation is to be made.
24. A method of providing an instruction in a computer system having a processor, a first memory and a plurality of cache memories arranged into a cache hierarchy for accessing locations in said first memory by said processor and for transferring data between said processor and said first memory, comprising the steps of: providing said instruction to be executed by said processor; providing a locality hint value in said instruction for identifying at which level or levels of said cache hierarchy cache allocation is to be made when said instruction is executed.
25. The method of claim 24 wherein said locality hint value is provided by a bit-state of a designated bit or bits within said instruction.
26. The method of claim 24 wherein said locality hint value identifies if a particular access for data is temporal or non-temporal at a given level of said cache hierarchy and if said particular access is specified as temporal with respect to a cache level Li, then said particular access is determined to be temporal for cache level Lj for all j > i; but if said particular access is specified as non-temporal with respect to said level Li, then said particular access is determined to be non-temporal for cache level Lj for all j < i and is temporal for cache level Lj for all j > i, wherein the smaller the value of i, the closer said level Li is to said processor in said cache hierarchy.
PCT/US1997/022659 1996-12-17 1997-12-12 Cache hierarchy management with locality hints for different cache levels WO1998027492A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP97953136A EP1012723A4 (en) 1996-12-17 1997-12-12 Cache hierarchy management with locality hints for different cache levels
AU56940/98A AU5694098A (en) 1996-12-17 1997-12-12 Cache hierarchy management with locality hints for different cache levels

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/767,950 1996-12-17
US08/767,950 US5829025A (en) 1996-12-17 1996-12-17 Computer system and method of allocating cache memories in a multilevel cache hierarchy utilizing a locality hint within an instruction

Publications (1)

Publication Number Publication Date
WO1998027492A1 (en) 1998-06-25

Family

ID=25081069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/022659 WO1998027492A1 (en) 1996-12-17 1997-12-12 Cache hierarchy management with locality hints for different cache levels

Country Status (4)

Country Link
US (1) US5829025A (en)
EP (1) EP1012723A4 (en)
AU (1) AU5694098A (en)
WO (1) WO1998027492A1 (en)

Families Citing this family (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5978379A (en) 1997-01-23 1999-11-02 Gadzoox Networks, Inc. Fiber channel learning bridge, learning half bridge, and protocol
US6026470A (en) * 1997-04-14 2000-02-15 International Business Machines Corporation Software-managed programmable associativity caching mechanism monitoring cache misses to selectively implement multiple associativity levels
US6130680A (en) * 1997-12-01 2000-10-10 Intel Corporation Method and apparatus for multi-level demand caching of textures in a graphics display device
US6134636A (en) * 1997-12-31 2000-10-17 Intel Corporation Method and apparatus for storing data in a memory array
US6643745B1 (en) * 1998-03-31 2003-11-04 Intel Corporation Method and apparatus for prefetching data into cache
US6205520B1 (en) * 1998-03-31 2001-03-20 Intel Corporation Method and apparatus for implementing non-temporal stores
US6202129B1 (en) * 1998-03-31 2001-03-13 Intel Corporation Shared cache structure for temporal and non-temporal information using indicative bits
US6421690B1 (en) 1998-04-30 2002-07-16 Honeywell International Inc. Computer memory management system
US6088789A (en) * 1998-05-13 2000-07-11 Advanced Micro Devices, Inc. Prefetch instruction specifying destination functional unit and read/write access mode
US7430171B2 (en) 1998-11-19 2008-09-30 Broadcom Corporation Fibre channel arbitrated loop bufferless switch circuitry to increase bandwidth without significant increase in cost
US6526481B1 (en) 1998-12-17 2003-02-25 Massachusetts Institute Of Technology Adaptive cache coherence protocols
US6636950B1 (en) 1998-12-17 2003-10-21 Massachusetts Institute Of Technology Computer architecture for shared memory access
JP3512678B2 (en) * 1999-05-27 2004-03-31 富士通株式会社 Cache memory control device and computer system
US6574712B1 (en) * 1999-11-08 2003-06-03 International Business Machines Corporation Software prefetch system and method for predetermining amount of streamed data
US6460115B1 (en) * 1999-11-08 2002-10-01 International Business Machines Corporation System and method for prefetching data to multiple levels of cache including selectively using a software hint to override a hardware prefetch mechanism
US6507895B1 (en) * 2000-03-30 2003-01-14 Intel Corporation Method and apparatus for access demarcation
US6728835B1 (en) * 2000-08-30 2004-04-27 Unisys Corporation Leaky cache mechanism
US6668307B1 (en) * 2000-09-29 2003-12-23 Sun Microsystems, Inc. System and method for a software controlled cache
US6578111B1 (en) * 2000-09-29 2003-06-10 Sun Microsystems, Inc. Cache memory system and method for managing streaming-data
US6598124B1 (en) * 2000-09-29 2003-07-22 Sun Microsystems, Inc. System and method for identifying streaming-data
US7287649B2 (en) * 2001-05-18 2007-10-30 Broadcom Corporation System on a chip for packet processing
US6574708B2 (en) 2001-05-18 2003-06-03 Broadcom Corporation Source controlled cache allocation
US6766389B2 (en) * 2001-05-18 2004-07-20 Broadcom Corporation System on a chip for networking
US7239636B2 (en) 2001-07-23 2007-07-03 Broadcom Corporation Multiple virtual channels for use in network devices
US7328328B2 (en) * 2002-02-19 2008-02-05 Ip-First, Llc Non-temporal memory reference control mechanism
US7295555B2 (en) 2002-03-08 2007-11-13 Broadcom Corporation System and method for identifying upper layer protocol message boundaries
US7114043B2 (en) * 2002-05-15 2006-09-26 Broadcom Corporation Ambiguous virtual channels
US7266587B2 (en) 2002-05-15 2007-09-04 Broadcom Corporation System having interfaces, switch, and memory bridge for CC-NUMA operation
US7269709B2 (en) 2002-05-15 2007-09-11 Broadcom Corporation Memory controller configurable to allow bandwidth/latency tradeoff
US6944715B2 (en) * 2002-08-13 2005-09-13 International Business Machines Corporation Value based caching
US7411959B2 (en) 2002-08-30 2008-08-12 Broadcom Corporation System and method for handling out-of-order frames
US7934021B2 (en) 2002-08-29 2011-04-26 Broadcom Corporation System and method for network interfacing
US7346701B2 (en) 2002-08-30 2008-03-18 Broadcom Corporation System and method for TCP offload
US7313623B2 (en) 2002-08-30 2007-12-25 Broadcom Corporation System and method for TCP/IP offload independent of bandwidth delay product
US8180928B2 (en) 2002-08-30 2012-05-15 Broadcom Corporation Method and system for supporting read operations with CRC for iSCSI and iSCSI chimney
US7512498B2 (en) * 2002-12-31 2009-03-31 Intel Corporation Streaming processing of biological sequence matching
US7155572B2 (en) * 2003-01-27 2006-12-26 Advanced Micro Devices, Inc. Method and apparatus for injecting write data into a cache
US7334102B1 (en) 2003-05-09 2008-02-19 Advanced Micro Devices, Inc. Apparatus and method for balanced spinlock support in NUMA systems
US7634566B2 (en) * 2004-06-03 2009-12-15 Cisco Technology, Inc. Arrangement in a network for passing control of distributed data between network nodes for optimized client access based on locality
WO2006029508A1 (en) * 2004-09-13 2006-03-23 Solace Systems Inc. Highly scalable subscription matching for a content routing network
US20060179240A1 (en) * 2005-02-09 2006-08-10 International Business Machines Corporation System and method for algorithmic cache-bypass
US7581065B2 (en) * 2005-04-07 2009-08-25 O'connor Dennis M Low locality-of-reference support in a multi-level cache hierachy
US7356650B1 (en) * 2005-06-17 2008-04-08 Unisys Corporation Cache apparatus and method for accesses lacking locality
JP4563314B2 (en) * 2005-12-14 2010-10-13 富士通株式会社 Storage system control device, storage system control program, and storage system control method
US7698505B2 (en) * 2006-07-14 2010-04-13 International Business Machines Corporation Method, system and computer program product for data caching in a distributed coherent cache system
CN101589373A (en) * 2007-01-25 2009-11-25 Nxp股份有限公司 Hardware triggered data cache line pre-allocation
US9311085B2 (en) * 2007-12-30 2016-04-12 Intel Corporation Compiler assisted low power and high performance load handling based on load types
US7543109B1 (en) 2008-05-16 2009-06-02 International Business Machines Corporation System and method for caching data in a blade server complex
US8131951B2 (en) * 2008-05-30 2012-03-06 Freescale Semiconductor, Inc. Utilization of a store buffer for error recovery on a store allocation cache miss
US9323527B2 (en) * 2010-10-15 2016-04-26 International Business Machines Corporation Performance of emerging applications in a virtualized environment using transient instruction streams
US9035961B2 (en) * 2012-09-11 2015-05-19 Apple Inc. Display pipe alternate cache hint
CN103678405B (en) * 2012-09-21 2016-12-21 阿里巴巴集团控股有限公司 Mail index establishing method and system, e-mail search method and system
US9652233B2 (en) * 2013-08-20 2017-05-16 Apple Inc. Hint values for use with an operand cache
US9471955B2 (en) 2014-06-19 2016-10-18 Apple Inc. Multiple display pipelines driving a divided display
US9600442B2 (en) 2014-07-18 2017-03-21 Intel Corporation No-locality hint vector memory access processors, methods, systems, and instructions
US10261901B2 (en) * 2015-09-25 2019-04-16 Intel Corporation Method and apparatus for unneeded block prediction in a computing system having a last level cache and a multi-level system memory
US10503656B2 (en) * 2017-09-20 2019-12-10 Qualcomm Incorporated Performance by retaining high locality data in higher level cache memory

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4323929A1 (en) * 1992-10-13 1994-04-14 Hewlett Packard Co Software-managed, multi-level cache storage system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4928239A (en) * 1986-06-27 1990-05-22 Hewlett-Packard Company Cache memory with variable fetch and replacement schemes
US5689679A (en) * 1993-04-28 1997-11-18 Digital Equipment Corporation Memory system and method for selective multi-level caching using a cache level code
US5537573A (en) * 1993-05-28 1996-07-16 Rambus, Inc. Cache system and method for prefetching of data
US5652858A (en) * 1994-06-06 1997-07-29 Hitachi, Ltd. Method for prefetching pointer-type data structure and information processing apparatus therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1012723A4 *

Also Published As

Publication number Publication date
US5829025A (en) 1998-10-27
EP1012723A4 (en) 2002-11-20
EP1012723A1 (en) 2000-06-28
AU5694098A (en) 1998-07-15

Similar Documents

Publication Publication Date Title
US5829025A (en) Computer system and method of allocating cache memories in a multilevel cache hierarchy utilizing a locality hint within an instruction
US6105111A (en) Method and apparatus for providing a cache management technique
US6584547B2 (en) Shared cache structure for temporal and non-temporal instructions
US6356980B1 (en) Method and system for bypassing cache levels when casting out from an upper level cache
US6058456A (en) Software-managed programmable unified/split caching mechanism for instructions and data
US6219760B1 (en) Cache including a prefetch way for storing cache lines and configured to move a prefetched cache line to a non-prefetch way upon access to the prefetched cache line
US6470422B2 (en) Buffer memory management in a system having multiple execution entities
US7676632B2 (en) Partial cache way locking
US5974507A (en) Optimizing a cache eviction mechanism by selectively introducing different levels of randomness into a replacement algorithm
KR101441019B1 (en) Configurable cache for a microprocessor
US20020116584A1 (en) Runahead allocation protection (rap)
US8250307B2 (en) Sourcing differing amounts of prefetch data in response to data prefetch requests
US6385695B1 (en) Method and system for maintaining allocation information on data castout from an upper level cache
US20090177842A1 (en) Data processing system and method for prefetching data and/or instructions
KR101462220B1 (en) Configurable cache for a microprocessor
US6370618B1 (en) Method and system for allocating lower level cache entries for data castout from an upper level cache
US6026470A (en) Software-managed programmable associativity caching mechanism monitoring cache misses to selectively implement multiple associativity levels
US7293141B1 (en) Cache word of interest latency organization
US6801982B2 (en) Read prediction algorithm to provide low latency reads with SDRAM cache
JPH06202951A (en) Cash memory system
US5983322A (en) Hardware-managed programmable congruence class caching mechanism
US6000014A (en) Software-managed programmable congruence class caching mechanism
US20230236973A1 (en) Cat aware loads and software prefetches
KR100302928B1 (en) Hardware-managed programmable unified/split caching mechanism for instructions and data

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ CZ DE DE DK DK EE EE ES FI FI GB GE GH HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SK SL TJ TM TR TT UA UG UZ VN YU ZW AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AT BE CH DE DK ES FI

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
CFP Corrected version of a pamphlet front page
CR1 Correction of entry in section i

Free format text: PAT. BUL. 25/98 UNDER (81) ADD "GM, GW"

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1997953136

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWP Wipo information: published in national office

Ref document number: 1997953136

Country of ref document: EP