|Publication number||US7836281 B1|
|Application number||US 11/245,774|
|Publication date||Nov 16, 2010|
|Filing date||Oct 6, 2005|
|Priority date||Oct 14, 2003|
|Inventors||Marc Tremblay, Shailender Chaudhry|
|Original Assignee||Oracle America, Inc.|
This application is a continuation-in-part of a pending U.S. patent application, entitled “Selectively Deferring the Execution Of Instructions With Unresolved Data Dependencies As They Are Issued In Program Order,” by inventors Shailender Chaudhry and Marc Tremblay, having Ser. No. 10/686,061 and a filing date of 14 Oct. 2003. This application hereby claims priority under 35 U.S.C. §120 to the above-listed patent application. Moreover, the above-listed application is hereby incorporated by reference.
1. Field of the Invention
The present invention relates to techniques for improving the performance of computer systems. More specifically, the present invention relates to a method and an apparatus for speeding up program execution by continuing to speculatively execute instructions in scout mode using a parallel speculative thread after a stall condition has cleared and the main thread resumes normal execution.
2. Related Art
Advances in semiconductor fabrication technology have given rise to dramatic increases in microprocessor clock speeds. This increase in microprocessor clock speeds has not been matched by a corresponding increase in memory access speeds. Hence, the disparity between microprocessor clock speeds and memory access speeds continues to grow, and is beginning to create significant performance problems. Execution profiles for fast microprocessor systems show that a large fraction of execution time is spent not within the microprocessor core, but within memory structures outside of the microprocessor core. This means that the microprocessor systems spend a large fraction of time waiting for memory references to complete instead of performing computational operations.
Efficient caching schemes can help reduce the number of memory accesses that are performed. However, when a memory reference, such as a load operation, generates a cache miss, the subsequent access to level-two (L2) cache or memory can require dozens or hundreds of clock cycles to complete, during which time the processor is typically idle, performing no useful work.
A number of techniques are presently used (or have been proposed) to hide this cache-miss latency. Some processors support out-of-order execution, in which instructions are kept in an issue queue, and are issued “out-of-order” when operands become available. Unfortunately, existing out-of-order designs have a hardware complexity that grows quadratically with the size of the issue queue. Practically speaking, this constraint limits the number of entries in the issue queue to one or two hundred, which is not sufficient to hide memory latencies as processors continue to get faster. Moreover, constraints on the number of physical registers that are available for register renaming purposes during out-of-order execution also limit the effective size of the issue queue.
Some designers have proposed entering a scout mode when a stall condition is encountered. During scout mode, the processor speculatively executes instructions to prefetch future loads, but the processor does not commit the results to the architectural state of the processor. For example, see U.S. patent application Ser. No. 10/741,944, entitled “Generating Prefetches by Speculatively Executing Code Through Hardware Scout Threading,” by inventors Shailender Chaudhry and Marc Tremblay. This solution to the latency problem eliminates the complexity of the issue queue and the rename unit, and also achieves memory-level parallelism.
Note that after the stall condition is cleared, the processor leaves scout mode and returns to normal-execution mode. However, performance can be lost if the processor leaves scout mode just before executing an instruction which would have generated a useful prefetch. For example, suppose a processor executing in normal-execution mode encounters a stall condition which causes the processor to enter scout mode, wherein instructions are speculatively executed by a speculative thread to prefetch future loads, but results are not committed to the architectural state of the processor. At some time in the future, when the stall condition clears, the processor will return to normal-execution mode. If the processor returns to normal-execution mode just before it would have generated a useful prefetch, it loses the opportunity to hide the memory latency for a cache line that it will eventually need.
Hence, what is needed is a method and an apparatus that facilitates prefetching cache lines during stall conditions without the above-described drawbacks of existing processor designs that support scout mode.
One embodiment of the present invention provides a system that facilitates improving performance of a processor during scout mode. During a normal-execution mode, the system executes instructions using a main thread. Upon encountering a stall condition during execution of the main thread, the system generates a checkpoint. The system then enters a scout mode, wherein instructions are speculatively executed by a speculative thread to prefetch future memory references, but results are not committed to the architectural state of the processor. (Note that both the main thread and the speculative thread are associated with a single software thread.) Upon encountering a memory reference during scout mode, the system issues a prefetch for the memory reference. If the stall condition that caused the processor to enter scout mode is resolved, the system uses the checkpoint to resume execution of the main thread from the instruction that caused the stall condition, and simultaneously continues executing instructions in scout mode using the speculative thread from the point where the speculative thread left off.
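Purely as an illustration of this flow, the following minimal sketch models a single software thread with two program counters: the main thread's counter is checkpointed when a stall occurs, the speculative counter runs ahead issuing prefetches, and when the stall clears the main thread resumes from the checkpoint while the speculative counter continues from where it left off. The `Processor` class, its fields, and the toy instruction format are assumptions introduced only for this example; they are not the hardware described in the embodiment.

```python
class Processor:
    def __init__(self, program):
        self.program = program          # list of (opcode, operand) tuples
        self.main_pc = 0                # architectural program counter
        self.scout_pc = None            # speculative program counter
        self.checkpoint = None
        self.prefetches = []            # addresses prefetched in scout mode

    def encounter_stall(self):
        """Generate a checkpoint and launch the speculative thread."""
        self.checkpoint = self.main_pc
        self.scout_pc = self.main_pc

    def scout_step(self):
        """Speculatively execute one instruction; prefetch memory references."""
        if self.scout_pc is None or self.scout_pc >= len(self.program):
            return
        opcode, operand = self.program[self.scout_pc]
        if opcode == "LD":              # memory reference: issue a prefetch
            self.prefetches.append(operand)
        self.scout_pc += 1              # results are never committed

    def stall_resolved(self):
        """Resume the main thread from the checkpoint; the scout keeps going."""
        self.main_pc = self.checkpoint  # main thread restarts at the launch point
        self.checkpoint = None          # scout_pc is deliberately left untouched


p = Processor([("LD", 0x100), ("ADD", None), ("LD", 0x200), ("LD", 0x300)])
p.encounter_stall()                     # stall on the first load
p.scout_step(); p.scout_step()          # scout runs ahead during the stall
p.stall_resolved()                      # main thread resumes at instruction 0
p.scout_step()                          # scout continues past where it left off
print(p.main_pc, p.scout_pc, p.prefetches)   # 0 3 [256, 512]
```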
In a variation on this embodiment, when the main thread encounters a second stall condition, the system generates a checkpoint and re-launches the speculative thread in scout mode from the instruction that caused the second stall condition.
In a variation on this embodiment, upon encountering an unresolved data dependency during execution of the main thread, the system generates a checkpoint. The system then executes subsequent instructions in an execute-ahead mode, wherein instructions that cannot be executed because of an unresolved data dependency are deferred, and wherein other non-deferred instructions are executed in program order. When the unresolved data dependency is resolved during execute-ahead mode, the system executes deferred instructions in a deferred-execution mode. The system also simultaneously executes instructions in scout mode using the speculative thread from the point where the execute-ahead mode left off. If all deferred instructions are eventually executed, the system returns to the normal-execution mode to resume normal program execution from the point where the execute-ahead mode left off.
In a variation on this embodiment, the system interleaves execution of instructions between the speculative thread and the main thread.
In a variation on this embodiment, the system maintains a first program counter for the main thread and a second program counter for the speculative thread.
In a variation on this embodiment, the system supports simultaneous multithreading or vertical multithreading.
The following description is presented to enable any person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
During operation, fetch unit 104 retrieves instructions to be executed from instruction cache 102, and feeds these instructions into decode unit 106. Decode unit 106 forwards the instructions to be executed into instruction buffer 108, which is organized as a FIFO buffer. Instruction buffer 108 feeds instructions in program order into grouping logic 110, which groups instructions together and sends them to execution units, including memory pipe 122 (for accessing memory 124), ALU 114, ALU 116, branch pipe 118 (which resolves conditional branch computations), and floating point unit 120.
If an instruction cannot be executed due to an unresolved data dependency, such as an operand that has not returned from a load operation, the system defers execution of the instruction and moves the instruction into deferred buffer 112. Note that like instruction buffer 108, deferred buffer 112 is also organized as a FIFO buffer.
When the data dependency is eventually resolved, instructions from deferred buffer 112 are executed in program order with respect to other deferred instructions, but not with respect to other previously executed non-deferred instructions. This process is described in more detail below.
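A rough sketch of how instructions might flow from the instruction buffer into either execution or the deferred buffer is shown below. The tuple-based instruction format, the `pending` register set, and the buffer variables are invented for this illustration only; both buffers are modeled as FIFOs, mirroring the description above.

```python
from collections import deque

# Illustrative front end: instructions leave the instruction buffer in program
# order; any instruction with an unresolved source operand is deferred.
instruction_buffer = deque([("ADD", ["r1"]), ("SUB", ["r2"]), ("MUL", ["r1", "r2"])])
deferred_buffer = deque()
pending = {"r1"}                          # r1 has not yet returned from a load

while instruction_buffer:
    op, sources = instruction_buffer.popleft()
    if any(r in pending for r in sources):
        deferred_buffer.append((op, sources))   # defer: operand not yet available
    else:
        pass                                    # execute normally (not modeled)

print(list(deferred_buffer))              # [('ADD', ['r1']), ('MUL', ['r1', 'r2'])]
```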
In one embodiment of the present invention, the processor supports simultaneous multithreading or vertical multithreading.
Keeping Track of Dependencies
The present invention keeps track of data dependencies in order to determine if an instruction is subject to an unresolved data dependency. In one embodiment of the present invention, this can involve maintaining state information for each register, which indicates whether or not a value in the register depends on an unresolved data dependency.
Next, if an unresolved data dependency arises during execution of an instruction, the system moves to execute-ahead mode 204. An unresolved data dependency can include: a use of an operand that has not returned from a preceding load miss; a use of an operand that has not returned from a preceding translation lookaside buffer (TLB) miss; a use of an operand that has not returned from a preceding full or partial read-after-write (RAW) from store buffer operation; and a use of an operand that depends on another operand that is subject to an unresolved data dependency.
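One way to picture this bookkeeping is a per-register "unresolved" flag that is set when the register's producer has not yet returned and that propagates to any result computed from an unresolved source. The dictionary and function below are a minimal sketch assuming that instruction format; they are not the patent's hardware mechanism.

```python
# Per-register dependency tracking: a register is "unresolved" if its value
# depends, directly or transitively, on an outstanding miss or other
# unresolved data dependency.
unresolved = {"r1": True, "r2": False, "r3": False}   # r1 awaits a load miss

def execute_or_defer(dest, sources):
    """Defer the instruction if any source is unresolved; an unresolved
    source also marks the destination register as unresolved."""
    if any(unresolved.get(r, False) for r in sources):
        unresolved[dest] = True          # the dependency propagates to the result
        return "deferred"
    unresolved[dest] = False             # result is available immediately
    return "executed"

print(execute_or_defer("r4", ["r2", "r3"]))   # executed
print(execute_or_defer("r5", ["r1"]))         # deferred (r1 not yet returned)
print(execute_or_defer("r6", ["r5"]))         # deferred (transitive dependency)
```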
While moving to execute-ahead mode 204, the system performs a checkpointing operation to generate a checkpoint that can be used, if necessary, to return execution of the process to the point where the unresolved data dependency was encountered; this point is referred to as the “launch point.” (Note that generating the checkpoint can involve saving the precise architectural state of the processor to facilitate subsequent recovery from exceptions that arise during execute-ahead mode or deferred mode.) The system also “defers” execution of the instruction that encountered the unresolved data dependency, and stores the deferred instruction in deferred buffer 112.
Within execute-ahead mode 204, the system continues to execute instructions in program order as they are received from instruction buffer 108, and any instructions that cannot execute because of an unresolved data dependency are stored in deferred buffer 112.
When the system is in execute-ahead mode 204, if an unresolved data dependency is finally resolved, the system moves into deferred-execution mode 206, wherein instructions are executed in program order from deferred buffer 112. During deferred-execution mode 206, the system attempts to execute deferred instructions from deferred buffer 112. Note that the system attempts to execute these instructions in program order with respect to other deferred instructions in deferred buffer 112, but not with respect to other previously executed non-deferred instructions (and not with respect to deferred instructions executed in previous passes through deferred buffer 112). During this process, the system defers execution of deferred instructions that still cannot be executed because of unresolved data dependencies and places these again-deferred instructions back into deferred buffer 112. The system executes the other instructions, which can be executed in program order with respect to each other.
After the system completes a pass through deferred buffer 112, if deferred buffer 112 is empty, the system moves back into normal-execution mode 202. This may involve committing changes made during execute-ahead mode 204 and deferred-execution mode 206 to the architectural state of the processor, if such changes have not already been committed. It can also involve discarding the checkpoint generated when the system moved into execute-ahead mode 204.
On the other hand, if deferred buffer 112 is not empty after the system completes a pass through deferred buffer 112, the system returns to execute-ahead mode 204 to execute instructions from instruction buffer 108 from the point where execute-ahead mode 204 left off.
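A single pass through the deferred buffer, together with the mode decision that follows it, might be sketched as follows. The `pending` register set and the returned mode strings are placeholders chosen for this illustration, not hardware signals from the patent.

```python
from collections import deque

def deferred_mode_pass(deferred_buffer, pending):
    """One pass through the deferred buffer: execute instructions whose sources
    are now available, re-defer the rest in order, then pick the next mode."""
    for _ in range(len(deferred_buffer)):
        dest, sources = deferred_buffer.popleft()
        if any(r in pending for r in sources):
            deferred_buffer.append((dest, sources))   # still blocked: re-defer
        else:
            pending.discard(dest)                     # executed: result now available
    # Empty buffer: commit results and return to normal-execution mode.
    # Non-empty buffer: resume execute-ahead mode where it left off.
    return "normal-execution" if not deferred_buffer else "execute-ahead"

pending = {"r1"}                                      # r1 is still waiting on memory
buf = deque([("r4", ["r2", "r3"]), ("r5", ["r1"])])
print(deferred_mode_pass(buf, pending))               # execute-ahead
print(list(buf))                                      # [('r5', ['r1'])]
```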
If a non-data-dependent stall condition arises while the system is in normal-execution mode 202 or in execute-ahead mode 204, the system moves into scout mode 208. (This non-data-dependent stall condition can include: a memory barrier operation; a load buffer full condition; a store buffer full condition; or a deferred buffer full condition.) In scout mode 208, instructions are speculatively executed to prefetch future loads, but results are not committed to the architectural state of the processor.
Scout mode is described in more detail in U.S. patent application Ser. No. 10/741,944, entitled “Generating Prefetches by Speculatively Executing Code Through Hardware Scout Threading,” by inventors Shailender Chaudhry and Marc Tremblay (filed 19 Dec. 2003). This patent application is hereby incorporated by reference herein to provide details on how scout mode operates.
Unfortunately, computational operations performed during scout mode must later be recomputed, which can require a large amount of computational work.
When the original “launch point” stall condition is finally resolved, the system moves back into normal-execution mode 202, and, in doing so, uses the previously generated checkpoint to resume execution from the launch point instruction (the instruction that initially encountered the stall condition).
Note that the launch point stall condition is the stall condition that originally caused the system to move out of normal-execution mode 202. For example, the launch point stall condition can be the data-dependent stall condition that caused the system to move from normal-execution mode 202 to execute-ahead mode 204, before moving to scout mode 208. Alternatively, the launch point stall condition can be the non-data-dependent stall condition that caused the system to move directly from normal-execution mode 202 to scout mode 208.
Normal-Execution Mode and Scout Mode
In step 612, if the present instruction references memory, the processor issues a prefetch for the memory reference (step 616). The processor then determines if the memory reference causes a load miss (step 618). If not, the processor returns to step 614 and determines if there is still a stall condition. Otherwise, the processor determines if the prior memory reference targets the same cache line as the current memory reference (step 620). If so, the processor returns to step 614 and determines if there is still a stall condition. Otherwise, the processor issues the memory instruction (step 622) and the processor returns to step 614 to determine if there is still a stall condition. If there is no longer a stall condition at step 614, the processor restarts execution of the main thread from the checkpoint (step 624) and the processor halts execution of the scout thread (step 626).
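The loop in steps 612 through 626 could be sketched roughly as follows. The helper callables (`stall_pending`, `causes_load_miss`, `issue`) and the 64-byte cache-line size are assumptions made for illustration; the step numbers in the comments map back to the description above.

```python
LINE_SIZE = 64                                        # assumed cache-line size

def scout_loop(instructions, stall_pending, causes_load_miss, issue):
    """Speculatively walk the instruction stream, prefetching memory
    references, until the stall condition clears (steps 612-626)."""
    prev_addr = None
    for instr in instructions:
        if not stall_pending():                       # step 614: stall cleared?
            return "restart main thread from checkpoint; halt scout"  # 624, 626
        addr = instr.get("addr")
        if addr is not None:                          # step 612: memory reference
            issue("prefetch", addr)                   # step 616
            if causes_load_miss(addr):                # step 618
                same_line = (prev_addr is not None and
                             prev_addr // LINE_SIZE == addr // LINE_SIZE)
                if not same_line:                     # step 620
                    issue("load", addr)               # step 622
            prev_addr = addr
    return "scout thread ran out of instructions"

log = []
scout_loop([{"addr": 0x1000}, {"addr": 0x1040}, {"op": "ADD"}],
           stall_pending=lambda: True,
           causes_load_miss=lambda a: True,
           issue=lambda kind, a: log.append((kind, hex(a))))
print(log)   # prefetch and load for both addresses (they hit different lines)
```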
Note that after the processor halts execution of the scout thread, the processor returns to normal-execution mode at step 602. Unfortunately, by returning to normal-execution mode, the system may miss an opportunity to perform a prefetch that could potentially improve performance.
After the processor speculatively executes instructions in scout mode using PC 304, the stall condition clears. In this example, the stall condition clears when PC 304 reaches the SUB instruction.
After the stall condition is cleared, the processor uses PC 302 to return to normal-execution mode.
After the stall condition clears and after the prefetch operation returns the cache line for the future load, the processor returns to normal-execution mode. However, the processor loses an opportunity to prefetch cache lines for a future load instruction past the point where the scout thread stopped. For instance, if the LD instruction after the SUB instruction, pointed to by PC 304, would eventually miss in the cache, the opportunity to prefetch that cache line is lost.
Normal-Execution Mode Plus Scout Mode
After the processor speculatively executes instructions in scout mode using PC 404, the stall condition clears.
After the stall condition clears, the processor uses PC 402 to return to normal-execution mode, while the speculative thread continues executing instructions in scout mode using PC 404.
Note that the present invention improves upon the process described in the preceding section, because the speculative thread continues to generate useful prefetches past the point where it would otherwise have stopped.
Note that the system interleaves execution of instructions between the speculative thread and the main thread. In one embodiment of the present invention, the processor uses a round-robin thread scheduling scheme to interleave instructions between the speculative thread and the main thread.
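The round-robin interleaving mentioned here can be pictured with the small sketch below; the instruction streams and their contents are made up purely for illustration, and the alternation stands in for whatever thread-selection logic the hardware actually uses.

```python
from itertools import cycle

# Hypothetical round-robin issue: the main thread and the speculative (scout)
# thread alternate, one instruction per turn.
main_stream  = iter(["ADD", "SUB", "ST"])        # main thread (results commit)
scout_stream = iter(["LD", "LD", "CMP"])         # scout thread (prefetch only)

issued = []
for name, stream in cycle([("main", main_stream), ("scout", scout_stream)]):
    instr = next(stream, None)
    if instr is None:
        break                                    # stop once a stream runs dry
    issued.append((name, instr))

print(issued)
# [('main', 'ADD'), ('scout', 'LD'), ('main', 'SUB'),
#  ('scout', 'LD'), ('main', 'ST'), ('scout', 'CMP')]
```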
In one embodiment of the present invention, if the processor encounters a second stall condition while both the main thread and the speculative thread are executing, the processor generates a checkpoint and re-launches the speculative thread in scout mode from the instruction that caused the second stall condition; the main thread is terminated.
Deferred-Execution Mode Plus Scout Mode
The processor uses PC 502, which is the program counter for the main thread, to execute instructions in execute-ahead mode. The processor also places instructions which depend on the unresolved data dependency into deferred buffer 506.
At this point, the processor leaves execute-ahead mode and enters deferred-execution mode, where it uses PC 502 to execute instructions from deferred buffer 506.
Note that if all deferred instructions are eventually executed, the processor returns to the normal-execution mode to resume normal program execution from the point where the execute-ahead mode left off.
Note that there are two sources for the instructions when the system concurrently executes the scout thread and the deferred thread: (1) the instruction cache and (2) the deferred buffer. Hence, the scout thread executes instructions from the instruction cache while the deferred thread executes instructions from the deferred buffer.
Note that the instructions in the deferred buffer depend on unresolved data dependencies, and these dependent instructions tend to cause pipeline bubbles when they are executed. Hence, if the processor executes instructions only from the deferred buffer, clock cycles are wasted on these pipeline bubbles. However, because the system interleaves execution of instructions between the scout thread and the deferred thread, the processor can execute instructions from the scout thread during the cycles that the deferred thread would otherwise lose to pipeline bubbles.
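The bubble-filling idea can be illustrated as follows: whenever the deferred thread's next instruction would leave a bubble in the pipeline, that cycle is given to the scout thread instead. The boolean bubble flags and the instruction lists are assumptions invented for this sketch.

```python
# Cycle-by-cycle issue sketch: a deferred-thread instruction that would cause
# a pipeline bubble yields one cycle to the scout thread before it issues.
deferred_thread = [("ADD", False), ("MUL", True), ("SUB", True), ("ST", False)]
scout_thread = ["LD", "LD", "CMP", "LD"]

schedule, s = [], 0
for instr, would_bubble in deferred_thread:
    if would_bubble and s < len(scout_thread):
        schedule.append(("scout", scout_thread[s]))   # fill the bubble cycle
        s += 1
    schedule.append(("deferred", instr))

print(schedule)
# [('deferred', 'ADD'), ('scout', 'LD'), ('deferred', 'MUL'),
#  ('scout', 'LD'), ('deferred', 'SUB'), ('deferred', 'ST')]
```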
The foregoing descriptions of embodiments of the present invention have been presented only for purposes of illustration and description. They are not intended to be exhaustive or to limit the present invention to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the present invention. The scope of the present invention is defined by the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5920710 *||Nov 18, 1996||Jul 6, 1999||Advanced Micro Devices, Inc.||Apparatus and method for modifying status bits in a reorder buffer with a large speculative state|
|US5933627 *||Jul 1, 1996||Aug 3, 1999||Sun Microsystems||Thread switch on blocked load or store using instruction thread field|
|US6880073 *||Dec 28, 2000||Apr 12, 2005||International Business Machines Corporation||Speculative execution of instructions and processes before completion of preceding barrier operations|
|US7444544 *||Jul 14, 2006||Oct 28, 2008||International Business Machines Corporation||Write filter cache method and apparatus for protecting the microprocessor core from soft errors|
|US20010042187 *||Dec 3, 1998||Nov 15, 2001||Marc Tremblay||Variable issue-width vliw processor|
|US20020144083 *||Mar 30, 2001||Oct 3, 2002||Hong Wang||Software-based speculative pre-computation and multithreading|
|US20060136915 *||Dec 17, 2004||Jun 22, 2006||Sun Microsystems, Inc.||Method and apparatus for scheduling multiple threads for execution in a shared microprocessor pipeline|
|1||*||Balasubramonian et al., Dynamically Allocating Processor Resources between Nearby and Distant ILP, May 2001, pp. 1-6.|
|2||*||Hirata et al., An Elementary Processor Architecture with Simultaneous Instruction Issuing from Multiple Threads, Apr. 1992, pp. 136-145.|
|3||*||Lebeck et al., A Large, Fast Instruction Window for Tolerating Cache Misses, May 2002, pp. 59-69.|
|4||*||Mutlu et al., Runahead Execution: An Alternative to Very Large Instruction Windows for Out-of-order Processors, Feb. 2003, p. 1.|
|5||*||Sohi et al., Speculative Multithreaded Processors, Apr. 2001, pp. 71-72.|
|6||*||Srinivasan et al., A Minimal Dual-Core Speculative Multi-Threading Architecture, Oct. 2004, pp. 1-2.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US9086889 *||Apr 27, 2010||Jul 21, 2015||Oracle International Corporation||Reducing pipeline restart penalty|
|US20110264862 *||Apr 27, 2010||Oct 27, 2011||Martin Karlsson||Reducing pipeline restart penalty|
|U.S. Classification||712/220, 712/207, 712/228, 712/229|
|International Classification||G06F9/44, G06F9/40, G06F7/38, G06F9/00, G06F9/30|
|Cooperative Classification||G06F9/3842, G06F9/3838, G06F9/3859, G06F9/3836, G06F9/3851, G06F9/3865, G06F9/30105, G06F9/3863|
|European Classification||G06F9/38E2, G06F9/38E, G06F9/38H2, G06F9/38H3, G06F9/38E1|
|Oct 6, 2005||AS||Assignment|
Owner name: SUN MICROSYSTEMS, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TREMBLAY, MARC;CHAUDHRY, SHAILENDER;REEL/FRAME:017076/0110
Effective date: 20050920
|Apr 16, 2014||FPAY||Fee payment|
Year of fee payment: 4
|Aug 31, 2016||AS||Assignment|
Owner name: ORACLE AMERICA, INC., CALIFORNIA
Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:ORACLE USA, INC.;SUN MICROSYSTEMS, INC.;ORACLE AMERICA, INC.;REEL/FRAME:039604/0471
Effective date: 20100212