|Publication number||US7769983 B2|
|Application number||US 11/132,748|
|Publication date||Aug 3, 2010|
|Filing date||May 18, 2005|
|Priority date||May 18, 2005|
|Also published as||CN101223504A, CN101223504B, EP1886217A2, US20060265573, WO2006125219A2, WO2006125219A3, WO2006125219A9|
|Inventors||Rodney Wayne Smith, Brian Michael Stempel|
|Original Assignee||Qualcomm Incorporated|
The present disclosure relates generally to processing systems, and more specifically, to caching instructions for a multiple-state processor.
Computers typically employ a processor supported by memory. Memory is a storage medium that holds the programs and data the processor needs to perform its functions. With the advent of increasingly powerful software, the demands on memory have grown at an astounding rate. As a result, modern processors require large amounts of memory, and large memory is inherently slower than small memory. Large memories fast enough to keep pace with today's processors are simply too expensive for large-scale commercial applications.
Computer designers have addressed this problem by organizing memory into several hierarchal components. The largest component, in terms of capacity, is commonly a hard drive. The hard drive provides large quantities of inexpensive permanent storage. The basic input/output system (BIOS) and the operating system are just a few examples of programs that are typically stored on the hard drive. These programs may be loaded into Random Access Memory (RAM) when the computer is operational. Software applications that are launched by a user may also be loaded into RAM from the hard drive. RAM is a temporary storage area that allows the processor to access the information more readily.
The computer's RAM is still not fast enough to keep up with the processor. This means that processors may have to wait for program instructions and data to be written to and read from the RAM. Caches are used to increase the speed of memory access by making the information most often used by the processor readily available. This is accomplished by integrating a small amount of memory, known as a primary or Level 1 (L1) cache, into the processor. A secondary or Level 2 (L2) cache between the RAM and L1 cache may also be used in some computer applications.
The speed of the computer may be further improved by partially decoding instructions before they are placed into the cache. This process, often referred to as “pre-decoding,” entails generating “pre-decode information” that is stored along with the instruction in the cache. The pre-decode information indicates some basic aspects of the instruction, such as whether it is an arithmetic or storage instruction, whether it is a branch instruction, whether it will make a memory reference, or any other information the processor can use to reduce the complexity of the decode logic. Pre-decoding instructions improves processor performance by reducing the length of the machine's pipeline without reducing the frequency at which it operates.
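As an illustration of the idea, the sketch below derives a few pre-decode bits from a raw instruction word and stores them alongside it. The bit assignments and the opcode field are invented for this example; a real pre-decoder inspects the actual instruction-set encoding.

```python
# Hypothetical pre-decode bit assignments (not from the patent).
IS_BRANCH = 1 << 0     # instruction redirects control flow
IS_LOADSTORE = 1 << 1  # instruction references memory
IS_ARITH = 1 << 2      # instruction is arithmetic or logical

def predecode(instruction: int) -> int:
    """Derive pre-decode bits from a raw 32-bit instruction word.

    The opcode values below are invented for illustration only.
    """
    bits = 0
    opcode = (instruction >> 28) & 0xF  # hypothetical opcode field
    if opcode == 0xA:
        bits |= IS_BRANCH
    elif opcode in (0x4, 0x5):
        bits |= IS_LOADSTORE
    else:
        bits |= IS_ARITH
    return bits

# The pre-decode bits are stored alongside the instruction in the cache line:
cached_entry = (predecode(0xA0000010), 0xA0000010)
```

At execution time the decoder can then test a single bit instead of re-examining the whole instruction encoding.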
Processors capable of operating in multiple states are becoming commonplace with today's emerging technology. A “multiple-state processor” is a processor that can support two or more different instruction sets. The ARM (Advanced RISC Machine) processor, sold by ARM Limited, is just one example. The ARM processor is an efficient, low-power RISC processor commonly used in mobile applications such as mobile telephones, personal digital assistants (PDAs), digital cameras, and game consoles, just to name a few. ARM processors have historically supported two instruction sets: the ARM instruction set, in which all instructions are 32 bits long, and the Thumb instruction set, which compresses the most commonly used instructions into a 16-bit format. A third instruction set recently added to some ARM processors is the “THUMB-2 Execution Environment” (T2EE). T2EE is an instruction set, similar to THUMB, that is optimized as a dynamic (JIT) compilation target for bytecode languages such as Java and .NET.
These multiple-state processors have significantly increased the capability of modern computing systems, but they can pose unique challenges to the computer designer. By way of example, if a block of instructions the size of one line in the L1 instruction cache contains instructions from multiple instruction sets, pre-decode information calculated on the assumption that the entire cache line is in one state cannot be used for the instructions that are actually in a different state. The solution described in this disclosure is not limited to ARM processors with THUMB and/or T2EE capability, but may be applied to any system that pre-decodes instructions for multiple instruction sets with overlapping instruction encodings before placing them into cache.
One aspect of the present invention is directed to a method of operating a processor. The processor is capable of operating in different states, with each state supporting a different instruction set. The method includes retrieving a block of instructions from memory while the processor is operating in one of the states, pre-decoding the instructions in accordance with said one of the states, loading the pre-decoded instructions into cache, and determining whether the current state of the processor is the same as said one of the states used to pre-decode the instructions when one of the pre-decoded instructions in the cache is needed by the processor.
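The steps of this method can be sketched as follows. The two-state encoding, the class and function names, and the stub pre-decoder are illustrative assumptions, not the patented implementation.

```python
# Sketch of the claimed method: pre-decode a fetched block in the current
# processor state, cache it with that state recorded, and on a later access
# check whether the recorded state still matches. All names are illustrative.

ARM, THUMB = 0, 1  # hypothetical state encoding

class Cache:
    def __init__(self):
        # address -> (state used for pre-decoding, pre-decoded block)
        self.lines = {}

def fill(cache, memory, address, state):
    block = memory[address]                          # retrieve a block of instructions
    predecoded = [(insn, state) for insn in block]   # pre-decode per current state (stub)
    cache.lines[address] = (state, predecoded)       # load into cache with state recorded
    return predecoded

def access(cache, memory, address, current_state):
    line = cache.lines.get(address)
    if line is not None and line[0] == current_state:
        return line[1]   # states match: the cached pre-decode information is valid
    # Miss, or state mismatch: re-fetch and re-pre-decode in the current state.
    return fill(cache, memory, address, current_state)
```

A state mismatch is handled exactly like a miss, so no separate invalidation machinery is needed in this sketch.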
Another aspect of the present invention is directed to a processing system. The processing system includes memory, cache, a processor capable of operating in different states, each of the states supporting a different instruction set, the processor being further configured to retrieve a block of instructions from the memory while operating in one of the states, and a pre-decoder configured to pre-decode the instructions retrieved from the memory in accordance with said one of the states, wherein the processor is further configured to load the pre-decoded instructions into the cache, and, when one of the pre-decoded instructions in the cache is needed by the processor, determine whether its current state is the same as said one of the states used to pre-decode the instructions.
It is understood that other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein various embodiments of the invention are shown and described by way of illustration. As will be realized, the invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
Aspects of the present invention are illustrated by way of example, and not by way of limitation, in the accompanying drawings.
The detailed description set forth below in connection with the appended drawings is intended as a description of various embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the present invention.
The computer 100 may also include memory 104, which holds the program instructions and data needed by the processor 102 to perform its functions. The memory 104 may be implemented with RAM or other suitable memory, and may comprise the computer's main memory and, optionally, an L2 cache. An instruction cache 106 may be used between the processor 102 and the memory 104. The instruction cache 106 is a relatively small, high-speed L1 cache used for the temporary storage of program instructions from the memory 104 to be executed by the processor 102. In one embodiment of the computer 100, the instruction cache 106 is a high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) that may be used for the memory 104. The instruction cache 106 provides a mechanism for increasing processor access speed because most programs repeatedly access the same instructions. By keeping as much of this information as possible in the instruction cache 106, the processor 102 avoids having to access the slower memory 104. The computer 100 may also include a data cache (not shown) for the storage of data used in the execution of the instructions.
The instruction cache 106 provides storage for the most recently accessed instructions by the processor 102 from the memory 104. When the processor 102 needs instructions from the memory 104, it first checks the instruction cache 106 to see if the instruction is there. When an instruction required by the processor 102 is found in the instruction cache 106, the lookup is called a “cache hit.” On a cache hit, the instruction may be retrieved directly from the instruction cache 106, thus drastically increasing the rate at which instructions may be processed. An instruction required by the processor 102 that is not found in the instruction cache 106 results in a “cache miss.” On a cache miss, the processor 102 must fetch the required instruction from the memory 104, which takes considerably longer than fetching an instruction from the instruction cache 106. Usually the processor 102 fetches a “cache line” from memory 104. The cache line from the memory 104 may be stored in the instruction cache 106 for future access.
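The hit/miss behavior described above can be modeled with a minimal direct-mapped cache. The line size, number of sets, and function names below are arbitrary assumptions for illustration only.

```python
# Minimal direct-mapped instruction-cache lookup illustrating the hit/miss
# behavior described above. Sizes are arbitrary illustrative choices.

LINE_WORDS = 4   # instructions per cache line
NUM_SETS = 8     # number of lines in the cache

def lookup(cache, memory, pc):
    """Return (instruction, hit) for program counter `pc`."""
    line_addr = pc // LINE_WORDS * LINE_WORDS    # address of the line's first word
    index = (line_addr // LINE_WORDS) % NUM_SETS # which cache slot this line maps to
    entry = cache.get(index)
    if entry is not None and entry[0] == line_addr:
        return entry[1][pc - line_addr], True    # cache hit: fast path
    # Cache miss: fetch the whole cache line from slower memory, then store it
    # in the cache for future accesses.
    line = [memory[line_addr + i] for i in range(LINE_WORDS)]
    cache[index] = (line_addr, line)
    return line[pc - line_addr], False
```

Note that a miss fills the entire line, so a subsequent access to a neighboring instruction in the same line hits.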
Computer performance may be further enhanced by pre-decoding the instructions from the memory 104 prior to being placed in the instruction cache 106. A pre-decoder 108 takes the instructions as they are fetched from the memory 104, pre-decodes them in accordance with the operating state of the processor 102, and stores the pre-decode information in the instruction cache 106. Signaling from the processor 102 may be used to indicate the current operating state of the processor 102 for the pre-decoding operation.
The instruction cache 106 maintains a cache directory (not shown). The cache directory contains one entry, or “tag,” for each cache line. A one-to-one mapping exists between a cache directory tag and its associated cache line in the cache storage array. The cache directory tag contains the memory address of the first instruction in the cache line.
Several techniques may be employed by the processing system 100 to ensure that each instruction retrieved from the instruction cache 106 is not executed by the processor 102 with incorrect pre-decode information. One possible solution is to store “state information” with the pre-decoded instruction in each cache line. “State information” is defined as one or more bits that indicate which state the processor 102 was in when the associated cache line was pre-decoded.
Another possible solution is to include state information in each cache directory tag.
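A sketch of this second solution follows: the state bit is held in the cache directory entry itself, so a single compare validates the pre-decode information for the whole line. The field layout is an illustrative assumption.

```python
# Sketch of storing state information in the cache directory tag. A lookup
# only "hits" if both the address and the pre-decode state match; a state
# mismatch is treated like a miss, forcing re-fetch and re-pre-decoding.
# Field names are illustrative assumptions.

class DirectoryEntry:
    def __init__(self, address, state):
        self.address = address  # memory address of the line's first instruction
        self.state = state      # processor state when the line was pre-decoded

def tag_matches(entry, address, current_state):
    return entry.address == address and entry.state == current_state
```

Compared with storing a state bit in every cached instruction, this keeps the state check in the same fast path as the address-tag compare.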
The various illustrative logical blocks, modules, circuits, elements, and/or components described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing components, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The methods or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the full scope consistent with the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” All structural and functional equivalents to the elements of the various embodiments described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. ARM and THUMB are registered trademarks of ARM Limited. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5335331||Jul 12, 1991||Aug 2, 1994||Kabushiki Kaisha Toshiba||Microcomputer using specific instruction bit and mode switch signal for distinguishing and executing different groups of instructions in plural operating modes|
|US5638525 *||Feb 10, 1995||Jun 10, 1997||Intel Corporation||Processor capable of executing programs that contain RISC and CISC instructions|
|US5819056||May 16, 1996||Oct 6, 1998||Advanced Micro Devices, Inc.||Instruction buffer organization method and system|
|US5822559||Jan 2, 1996||Oct 13, 1998||Advanced Micro Devices, Inc.||Apparatus and method for aligning variable byte-length instructions to a plurality of issue positions|
|US6002876 *||Sep 26, 1997||Dec 14, 1999||Texas Instruments Incorporated||Maintaining code consistency among plural instruction sets via function naming convention|
|US6021265 *||Apr 14, 1997||Feb 1, 2000||Arm Limited||Interoperability with multiple instruction sets|
|US6081884 *||Jan 5, 1998||Jun 27, 2000||Advanced Micro Devices, Inc.||Embedding two different instruction sets within a single long instruction word using predecode bits|
|US6356997||Oct 4, 2000||Mar 12, 2002||Hitachi, Ltd.||Emulating branch instruction of different instruction set in a mixed instruction stream in a dual mode system|
|US6374348||Oct 17, 2000||Apr 16, 2002||Hitachi, Ltd.||Prioritized pre-fetch/preload mechanism for loading and speculative preloading of candidate branch target instruction|
|US6430674 *||Dec 30, 1998||Aug 6, 2002||Intel Corporation||Processor executing plural instruction sets (ISA's) with ability to have plural ISA's in different pipeline stages at same time|
|US6438700 *||May 18, 1999||Aug 20, 2002||Koninklijke Philips Electronics N.V.||System and method to reduce power consumption in advanced RISC machine (ARM) based systems|
|US6654871 *||Nov 9, 1999||Nov 25, 2003||Motorola, Inc.||Device and a method for performing stack operations in a processing system|
|US6654873||Jan 7, 2000||Nov 25, 2003||Sony Corporation||Processor apparatus and integrated circuit employing prefetching and predecoding|
|US6816962||Feb 25, 2002||Nov 9, 2004||International Business Machines Corporation||Re-encoding illegal OP codes into a single illegal OP code to accommodate the extra bits associated with pre-decoded instructions|
|US6907515 *||May 22, 2002||Jun 14, 2005||Arm Limited||Configuration control within data processing systems|
|US6910206 *||Nov 7, 2000||Jun 21, 2005||Arm Limited||Data processing with native and interpreted program instruction words|
|US6952754 *||Jan 3, 2003||Oct 4, 2005||Intel Corporation||Predecode apparatus, systems, and methods|
|US6965984 *||Apr 30, 2002||Nov 15, 2005||Arm Limited||Data processing using multiple instruction sets|
|US6978358 *||Apr 2, 2002||Dec 20, 2005||Arm Limited||Executing stack-based instructions within a data processing apparatus arranged to apply operations to data items stored in registers|
|US7003652 *||Jun 25, 2001||Feb 21, 2006||Arm Limited||Restarting translated instructions|
|US7017030 *||Feb 20, 2002||Mar 21, 2006||Arm Limited||Prediction of instructions in a data processing apparatus|
|US7080242 *||Dec 19, 2002||Jul 18, 2006||Hewlett-Packard Development Company, L.P.||Instruction set reconciliation for heterogeneous symmetric-multiprocessor systems|
|US7093108||Jun 8, 2001||Aug 15, 2006||Arm Limited||Apparatus and method for efficiently incorporating instruction set information with instruction addresses|
|US7120779 *||Jan 28, 2004||Oct 10, 2006||Arm Limited||Address offset generation within a data processing system|
|US7134003 *||Oct 6, 2003||Nov 7, 2006||Arm Limited||Variable cycle instruction execution in variable or maximum fixed cycle mode to disguise execution path|
|US7234043 *||Mar 7, 2005||Jun 19, 2007||Arm Limited||Decoding predication instructions within a superscaler data processing system|
|US7353363||Mar 28, 2006||Apr 1, 2008||Sun Microsystems, Inc.||Patchable and/or programmable decode using predecode selection|
|US7356673 *||Apr 30, 2001||Apr 8, 2008||International Business Machines Corporation||System and method including distributed instruction buffers for storing frequently executed instructions in predecoded form|
|US7360060||Jul 31, 2003||Apr 15, 2008||Texas Instruments Incorporated||Using IMPDEP2 for system commands related to Java accelerator hardware|
|US7406585 *||Jul 1, 2003||Jul 29, 2008||Arm Limited||Data processing system having an external instruction set and an internal instruction set|
|US7421568||Mar 4, 2005||Sep 2, 2008||Qualcomm Incorporated||Power saving methods and apparatus to selectively enable cache bits based on known processor state|
|US7493479 *||Jun 11, 2003||Feb 17, 2009||Renesas Technology Corp.||Method and apparatus for event detection for multiple instruction-set processor|
|US20020004897 *||Dec 27, 2000||Jan 10, 2002||Min-Cheng Kao||Data processing apparatus for executing multiple instruction sets|
|US20020083301||Dec 22, 2000||Jun 27, 2002||Jourdan Stephan J.||Front end system having multiple decoding modes|
|US20020083302 *||Jun 25, 2001||Jun 27, 2002||Nevill Edward Colles||Hardware instruction translation within a processor pipeline|
|US20040024990 *||Jul 31, 2003||Feb 5, 2004||Texas Instruments Incorporated||Processor that accommodates multiple instruction sets and multiple decode modes|
|US20040133764 *||Jan 3, 2003||Jul 8, 2004||Intel Corporation||Predecode apparatus, systems, and methods|
|US20050262329 *||Aug 19, 2003||Nov 24, 2005||Hitachi, Ltd.||Processor architecture for executing two different fixed-length instruction sets|
|US20060125219||Nov 17, 2005||Jun 15, 2006||Takata Corporation||Passenger protection device|
|US20060149927 *||Nov 24, 2003||Jul 6, 2006||Eran Dagan||Processor capable of multi-threaded execution of a plurality of instruction-sets|
|US20060265573 *||May 18, 2005||Nov 23, 2006||Smith Rodney W||Caching instructions for a multiple-state processor|
|US20070260854||May 4, 2006||Nov 8, 2007||Smith Rodney W||Pre-decoding variable length instructions|
|US20080229069||Mar 14, 2007||Sep 18, 2008||Qualcomm Incorporated||System, Method And Software To Preload Instructions From An Instruction Set Other Than One Currently Executing|
|US20090249033 *||Dec 3, 2008||Oct 1, 2009||Arm Limited||Data processing apparatus and method for handling instructions to be executed by processing circuitry|
|EP0411747A2||May 25, 1990||Feb 6, 1991||Advanced Micro Devices, Inc.||Multiple instruction decoder|
|WO2006125219A2||May 18, 2006||Nov 23, 2006||Qualcomm Inc||Caching instructions for a multiple-state processor|
|1||International Search Report-PCT/US06/019788, International Search Authority-European Patent Office-Feb. 14, 2007.|
|2||Written Opinion-PCT/US06/019788, International Search Authority-European Patent Office-Feb. 14, 2007.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8347067 *||Jan 23, 2008||Jan 1, 2013||Arm Limited||Instruction pre-decoding of multiple instruction sets|
|US20150186149 *||Apr 11, 2014||Jul 2, 2015||Lite-On It Corporation||Processing system and operating method thereof|
|U.S. Classification||712/213, 710/14|
|Cooperative Classification||G06F9/30189, G06F9/30181, G06F9/3802, G06F9/382|
|European Classification||G06F9/38C2, G06F9/38B, G06F9/30X|
|Jul 19, 2005||AS||Assignment|
Owner name: QUALCOMM INCORPORATED, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SMITH, RODNEY WAYNE;STEMPEL, BRIAN MICHAEL;REEL/FRAME:016545/0992
Effective date: 20050518
|Jan 28, 2014||FPAY||Fee payment|
Year of fee payment: 4