US20010054139A1 - Mechanism for power efficient processing in a pipeline processor - Google Patents


Info

Publication number
US20010054139A1
Authority
US
United States
Prior art keywords
pipefile
stage
instruction
result
execution pipeline
Prior art date
Legal status
Granted
Application number
US09/410,929
Other versions
US6351803B2 (en)
Inventor
Chih-Jui Peng
Lew Chua-Eoan
Current Assignee
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd
Priority to US09/410,929
Assigned to HITACHI, LTD. Assignors: PENG, CHIH-JUI
Priority to JP2000286978A (published as JP2001142700A)
Assigned to HITACHI LTD. Assignors: CHUA-EOAN, LEW
Publication of US20010054139A1
Application granted
Publication of US6351803B2
Anticipated expiration
Current status: Expired - Fee Related


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline, look ahead
    • G06F9/3824 Operand accessing
    • G06F9/3836 Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
    • G06F9/3838 Dependency mechanisms, e.g. register scoreboarding
    • G06F9/384 Register renaming
    • G06F9/3854 Instruction completion, e.g. retiring, committing or graduating
    • G06F9/3856 Reordering of instructions, e.g. using queues or age tags
    • G06F9/3858 Result writeback, i.e. updating the architectural state or memory
    • G06F9/3867 Concurrent instruction execution using instruction pipelines

Definitions

  • Once instructions are issued to the execution units, pipe control unit 401 monitors their execution through the remaining pipe stages.
  • the main function of pipe control unit 401 is to ensure that instructions are executed smoothly and correctly: (i) instructions are held in the decode stage until the source operands are ready or can be ready when needed; (ii) synchronization and serialization requirements imposed by the instruction as well as internal/external events are observed; and (iii) data operands/temporary results are forwarded correctly.
  • the operand file unit implements the architecturally defined general purpose register file 407 . In addition, it also implements a limited version of a reorder buffer called “pipe file” 409 for storing and forwarding temporary results that are yet to be committed to architectural registers. Because CPU core 201 is principally directed at in-order execution, there is only a small window of time that execution results may be produced out-of-order. The present invention takes advantage of this property and implements a simplified version of the reorder buffer that allows temporary results to be forwarded as soon as they are produced, while avoiding the expensive tag passing/matching mechanism usually associated with a reorder buffer.
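  • By way of illustration only, the following C++ sketch models such a simplified pipefile. The entry count, field names, and round-robin allocation policy are assumptions made for the sketch; the patent does not specify an implementation.

    #include <array>
    #include <cstdint>

    // Hypothetical sketch of a "pipefile": a small, simplified reorder buffer.
    // One entry is allocated per instruction at decode and stays assigned until
    // writeback, so no tag passing/matching logic is required.
    struct PipefileEntry {
        uint8_t  dest_reg = 0;        // architectural destination register
        bool     dest_valid = false;  // instruction actually produces a result
        bool     result_ready = false;
        uint64_t result = 0;          // interim (uncommitted) result
    };

    class Pipefile {
    public:
        // Allocate the next entry round-robin at decode; the returned index
        // travels with the instruction (e.g., in its snapshot entry).
        int allocate(uint8_t dest_reg, bool dest_valid) {
            int idx = next_;
            next_ = (next_ + 1) % kEntries;
            entries_[idx] = PipefileEntry{dest_reg, dest_valid, false, 0};
            return idx;
        }
        void capture(int idx, uint64_t result) {  // result bus -> pipefile
            entries_[idx].result = result;
            entries_[idx].result_ready = true;
        }
        const PipefileEntry& entry(int idx) const { return entries_[idx]; }
    private:
        static constexpr int kEntries = 4;  // at least one per pipeline stage
        std::array<PipefileEntry, kEntries> entries_{};
        int next_ = 0;
    };

    int main() {
        Pipefile pf;
        int e = pf.allocate(/*dest_reg=*/5, /*dest_valid=*/true);  // at decode
        pf.capture(e, 42);  // producing stage drives the result bus exactly once
        return pf.entry(e).result_ready ? 0 : 1;
    }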
  • the operand file implements the data path portion of this pipe file.
  • the control is implemented in the pipe control unit 401 .
  • Pipe file 409 operates to collect results from the execution units, and writes them back to the register file 407 during the writeback stage.
  • Pipe file 409 is an important component of the present invention.
  • One option for using pipe file 409 is to have a pipe file entry associated with each execution stage. This requires that interim results determined at an early execution stage be copied from entry-to-entry within pipefile 409 so that the interim result follows the instruction through the pipeline.
  • the present invention involves a mechanism and method of operation that avoids this entry-to-entry data shifting.
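  • The difference between the two organizations can be sketched in a few lines of C++ (assumed 64-bit results and a four-entry file; all names are illustrative): shifting moves the full-width result every cycle, while the pointer scheme moves only a two-bit index.

    #include <cstdint>

    // Option 1: shift the wide result from entry to entry each cycle so it
    // tracks its instruction; every move toggles up to 64 data bits.
    void shifting_pipefile(uint64_t file[4]) {
        for (int i = 3; i > 0; --i) file[i] = file[i - 1];  // wide data moves
    }

    // Option 2 (the approach described above): the result stays put; only a
    // small per-stage pointer (2 bits for 4 entries) advances with the
    // instruction, so far less capacitance is switched.
    struct StagePointers { uint8_t exe1, exe2, exe3, wb; };
    void pointer_pipefile(StagePointers& p) {
        p.wb = p.exe3; p.exe3 = p.exe2; p.exe2 = p.exe1;    // 2-bit moves only
    }

    int main() {
        uint64_t file[4] = {1, 2, 3, 4};
        shifting_pipefile(file);
        StagePointers p{0, 1, 2, 3};
        pointer_pipefile(p);
        return 0;
    }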
  • FIG. 5 and FIG. 6 illustrate an example execution pipeline in accordance with the present invention.
  • the particular example is a scalar (i.e., single pipeline), single issue machine.
  • the implementation in FIG. 5 and FIG. 6 includes three execution stages. Many instructions, however, execute in a single cycle.
  • the present invention implements features to enable comprehensive forwarding within the pipeline to achieve a high instruction throughput. Although illustrated in terms of a single pipeline (i.e., scalar) machine, the teachings of the present invention are adapted to multiple pipeline machines in a straightforward manner.
  • In stage 503, the instruction cache access that was initiated in the previous cycle is completed and the instruction is returned to IFU 303, where it can be latched by mid-cycle.
  • An instruction may spend from 1 to n cycles in stage 503, depending on the instructions downstream in the pipeline.
  • In stage 505, some pre-decoding of the instruction is carried out.
  • Decode stage 505 handles the full instruction decode, operand dependency checks and register file read and instruction issue to the execution units.
  • the first execution stage 507 implements the execution of all single cycle integer instructions as well as the address calculation for memory and branch instructions.
  • the second execution stage 509 implements the second cycle of execution for all multicycle integer/multimedia instructions. Additionally it corresponds to the second cycle for load instructions.
  • the third execution stage 511 implements the third cycle of execution for all multicycle integer/multimedia instructions and corresponds to the completion cycle for load instructions.
  • Write back stage 513 is where all architectural state modified by an instruction (e.g., general purpose registers, the program counter, etc.) is updated. The exception status of the instruction arriving in this stage, or any external exception, can prevent the update in this stage.
  • the pipe control unit 401 performs a number of operations in handling the instruction flow.
  • An important feature of the pipe control unit 401 is the pipeline snapshot file 415 (shown in FIG. 4) implemented within pipe control unit 401 .
  • Snapshot file 415 may be implemented as a lookup table having a table entry 701 (shown in FIG. 7) corresponding to each execution stage in the pipeline.
  • the snapshot file 415 provides a central resource for all pipeline control operations such as dependency checks, operand forwarding, exception handling, and the like.
  • snapshot file 415 includes four entries corresponding to the three execution pipeline stages and the writeback pipeline stage.
  • FIG. 7A and FIG. 7B show exemplary snapshot files 701 and 702 indicating entries holding metadata describing the instruction execution state at the corresponding pipe stage. As instructions move from one stage to another, their associated snapshot entry moves to the corresponding snapshot entry 701 or 702 .
  • the contents of each snapshot entry 701 may be varied to meet the needs of a particular application.
  • the specific examples shown in FIG. 7 correspond to pipeline control operations described hereinbelow.
  • the essential functionality of examples 701 and 702 is similar, although the implementation of that essential functionality differs between the examples.
  • snapshot file 701 does not include a "STAGE" entry, as that is implied by the index of the entry, whereas example 702 includes an explicit STAGE entry.
  • the fields have the functions generally described in the figures, and additional or fewer fields may be provided to meet the needs of a particular application.
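  • For concreteness, one plausible rendering of a snapshot entry as a C++ struct is shown below. The field set is inferred from the FIG. 7A/FIG. 7B discussion; widths and any additional fields are assumptions.

    #include <cstdint>

    // Hypothetical layout of one snapshot-file entry, inferred from the fields
    // named in the FIG. 7A/7B discussion; the actual design's widths and any
    // extra fields are not specified by the text.
    struct SnapshotEntry {
        bool    valid = false;         // entry tracks a live instruction
        uint8_t pipefile_entry = 0;    // PIPE_FILE_ENTRY: pointer into pipefile 409
        uint8_t rdest = 0;             // architectural destination register
        bool    rdest_valid = false;   // instruction writes an architectural reg
        // FIG. 7A style: one flag per execution stage that can produce the result
        bool    e1_result = false;     // result ready after EXE_1
        bool    e2_result = false;     // result ready after EXE_2
        bool    e3_result = false;     // result ready after EXE_3
        bool    exception = false;     // instruction has raised an exception
        // A FIG. 7B style entry would instead carry an explicit STAGE field
        // plus a "stage_rdy" value naming the producing stage.
    };

    int main() {
        SnapshotEntry s;
        s.valid = true; s.pipefile_entry = 2; s.rdest = 7; s.rdest_valid = true;
        s.e1_result = true;  // single-cycle ALU op: result ready after EXE_1
        return 0;
    }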
  • snapshot entry 701 includes a pointer to the pipefile entry corresponding to that instruction.
  • An instruction is assigned an entry in pipefile 409 at decode, and the assigned value is recorded in the instruction's snapshot entry 701.
  • Other execution stages or hardware resources that need to know which pipefile entry is being used by the instruction can look to the snapshot entry 701 for that information.
  • the snapshot register may be used by the pipe control unit 401 to perform a number of parallel checks to classify the instruction currently being processed by the decoder 405 .
  • the three potential operand register fields of the instruction word are checked against the existing pipe snapshot to detect data dependency, forwarding dependence, write after write hazard, and write after write for an accumulating-type instruction.
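  • The sketch below illustrates how such parallel checks might classify a decoding instruction against the live snapshot entries. The classification rules shown are a plausible reading of the text, not the patent's exact logic.

    #include <array>
    #include <cstdint>

    // Sketch of the parallel decode-stage checks: the (up to three) operand
    // register fields of the decoding instruction are compared against every
    // live snapshot entry. All names are illustrative.
    struct Snap { bool valid, rdest_valid, result_ready; uint8_t rdest; };

    struct DecodeChecks { bool raw_stall, can_forward, waw; };

    DecodeChecks check(const std::array<Snap, 4>& pipe,
                       const std::array<uint8_t, 3>& src, uint8_t dest) {
        DecodeChecks c{false, false, false};
        for (const Snap& s : pipe) {
            if (!s.valid || !s.rdest_valid) continue;
            for (uint8_t r : src) {
                if (r != s.rdest) continue;
                if (s.result_ready) c.can_forward = true;  // forwardable from pipefile
                else                c.raw_stall  = true;   // true data hazard: hold in decode
            }
            if (dest == s.rdest) c.waw = true;             // write-after-write hazard
        }
        return c;
    }

    int main() {
        std::array<Snap, 4> pipe{{{true, true, true, 5}, {}, {}, {}}};
        DecodeChecks c = check(pipe, {5, 2, 3}, 5);
        return (c.can_forward && c.waw && !c.raw_stall) ? 0 : 1;
    }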
  • the result from the EXE_2 and EXE_3 stages will be written into the pipefile 409 when the E2_RESULT and E3_RESULT fields of the corresponding snapshot file entries are set.
  • the write into pipefile 409 will occur even if the EXCEPTION field in snapshot file 702 is set. This allows data for exceptions to be transported back to the branch unit.
  • the rdest_valid field also determines whether the contents of the pipefile are written back to the architectural register file. Once in write-back, if no exception has occurred, the instruction has completed.
  • the snapshot register plays a role in managing pipefile 409 and operand file 407 updates in the event of exceptions. Even though an exception has been detected, the pipefile 409 will continue to be updated with data according to the "stage_rdy" field of the snapshot file. While an excepting instruction is executing through the pipe, the result data associated with the excepting instruction is in certain cases of interest. A key point is that these results are written to pipefile 409 in the normal stage_rdy stage of the excepting instruction. As long as this rule is honored, exception data is transported through the pipefile 409 as normal, and the snapshot register will indicate to the branch unit 411 at write-back that exception data of interest is on the write-back bus.
  • Because snapshot register 701 indicates which pipe stage will produce a result to the pipefile 409, subsequent instructions can reliably use the interim result from the pipefile 409 before the interim result is committed to architectural state. This process is called internal operand forwarding.
  • the present invention supports internal operand forwarding by providing a pipefile entry from which the interim result can be readily forwarded.
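  • An operand read under this scheme might look as follows; this sketch reuses the hypothetical structures above, and the youngest-producer-wins search order is an assumption.

    #include <array>
    #include <cstdint>

    // Sketch of internal operand forwarding: each snapshot entry records which
    // pipefile entry its instruction owns and whether the result is ready, so
    // an operand read can be satisfied from the pipefile before the result is
    // committed. Names are illustrative.
    struct Snap { bool valid, rdest_valid, result_ready; uint8_t rdest, pf_entry; };

    uint64_t read_operand(uint8_t src_reg,
                          const std::array<Snap, 4>& pipe,       // youngest first
                          const std::array<uint64_t, 4>& pipefile,
                          const std::array<uint64_t, 32>& regfile) {
        for (const Snap& s : pipe) {                  // youngest producer wins
            if (s.valid && s.rdest_valid && s.rdest == src_reg && s.result_ready)
                return pipefile[s.pf_entry];          // forward uncommitted result
        }
        return regfile[src_reg];                      // fall back to architectural state
    }

    int main() {
        std::array<Snap, 4> pipe{{{true, true, true, 9, 1}, {}, {}, {}}};
        std::array<uint64_t, 4> pf{0, 123, 0, 0};
        std::array<uint64_t, 32> rf{};
        return read_operand(9, pipe, pf, rf) == 123 ? 0 : 1;
    }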
  • the pipe control block determines from the instruction code the source of the operands for the instruction.
  • the operand can be sourced from, for example: the register file 407, an entry of pipefile 409, a constant decoded from the instruction, the program counter or a control register, or one of the execution stage results (see FIG. 8).
  • FIG. 8 illustrates an exemplary operand multiplexing (“muxing”) mechanism that enables rich sharing of operands within the pipeline.
  • the mechanism shown in FIG. 8 is distributed throughout pipe control unit 401 as described below.
  • the operand multiplexer mechanism of FIG. 8 produces three choices (e.g., IFU_SRC1, IFU_SRC2, IFU_SRC3) for the source operands provided to the first execution stage 507.
  • Each execution stage produces a result (labeled EXE_1, EXE_2, and EXE_3 in FIG. 8) that may be used as a source operand input to the first execution stage 507.
  • Execution stage 507 is associated with multiplexors 809a-809c for selecting up to three source operands from those available.
  • the specific examples given herein are for purposes of explanation and understanding, and are not a limitation on the actual implementation.
  • execution stages 507, 509 and 511 shown in FIG. 8 are representative of all of the hardware resources used in those execution stages as defined by the processor microarchitecture.
  • An execution stage is physically implemented using the hardware resources such as those shown in FIG. 3.
  • the outputs of multiplexors 809 are physically coupled to each of the hardware resources that will use the source operands during its operation.
  • the program counter (PC), instruction address registers, and control register contents are pre-muxed in the branch unit using multiplexors 801 and 803. All these inputs are available at the start of the cycle.
  • the decode constant extracted from the instruction, and possibly tied-high zeroes, are pre-muxed in the decode stage using multiplexor 811.
  • the outputs of the pipefile 409 are muxed with the program counter data and decode constant data respectively in multiplexors 805 and 813 .
  • the register file contents are muxed with the pipefile outputs using multiplexors 807, 815, and 821 to produce source operands which are distributed down the execution datapath (IFU_SRC1, IFU_SRC2, IFU_SRC3 in FIG. 8).
  • the LSU EXE_3 result is muxed with the IMU EXE_3 result (from the multiplier). This muxing is also controlled by the pipe control unit 401.
  • pipe control unit 401 generates the control signals for multiplexors and execution stage resources. This enables the source operand inputs used by each execution stage to be selected from among a plurality of possible inputs. Of particular significance is that each source operand can be forwarded from the interim results stored in the pipefile if valid results are available in the pipefile. This is useful in handling data hazards in a manner that limits the need to stall the pipeline or fill the pipeline with bubbles while data dependencies resolve.
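  • A simplified software model of this selection for one source operand is sketched below. The select encoding is invented for the sketch; in hardware the selection is a set of multiplexor control lines computed by pipe control unit 401.

    #include <cstdint>

    // Simplified model of the FIG. 8 mux tree for one of the three source
    // operands (IFU_SRC1..3). The nesting mirrors the pre-mux in the branch
    // unit (801/803), the pipefile/constant muxes (805/813) and the
    // register-file muxes (807/815/821). Names are illustrative.
    enum class OperandSel { RegFile, Pipefile, DecodeConst, PcOrControl };

    uint64_t ifu_src(OperandSel sel, uint64_t regfile_out, uint64_t pipefile_out,
                     uint64_t dec_const, uint64_t pc_or_ctrl) {
        switch (sel) {
            case OperandSel::PcOrControl: return pc_or_ctrl;    // muxes 801/803
            case OperandSel::DecodeConst: return dec_const;     // mux 811
            case OperandSel::Pipefile:    return pipefile_out;  // forwarded interim result
            case OperandSel::RegFile:     return regfile_out;   // architectural value
        }
        return regfile_out;
    }

    int main() {
        // Forward an uncommitted result from the pipefile onto IFU_SRC1.
        return ifu_src(OperandSel::Pipefile, 0, 77, 0, 0) == 77 ? 0 : 1;
    }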
  • the particular choice and distribution of operand sources can include more or fewer sources to meet the needs of a particular application and unless specified otherwise herein the examples are provided for example purposes only.
  • each source operand is desirably allowed to be taken from the execution unit's own result output. This is particularly useful for accumulate-type operations where the destination register is used in a series of instructions to hold an accumulating result. Without this feature, pipeline bubbles would likely be inserted between accumulate instructions thereby reducing throughput significantly.
  • the decoder can issue accumulating type instructions one-after-another.
  • FIG. 9 schematically illustrates the execution stages of a pipeline and the operand sources for each stage.
  • Each execution stage (EXE_1, EXE_2 and EXE_3) may generate a result.
  • the specific stage that generates a result for any given instruction will vary from instruction to instruction, but is preferably indicated in the "stage_rdy" field of the snapshot file entry 702 or the E1_RESULT, E2_RESULT and E3_RESULT fields described hereinbefore.
  • Each source operand can be taken from the execution unit's own result output.
  • FIG. 9 shows an operand bus comprising IFU_SRC1, IFU_SRC2 and IFU_SRC3 (determined as shown in FIG. 8) and a results bus comprising EXE_1_RESULT, EXE_2_RESULT and EXE_3_RESULT.
  • the results bus carries results to appropriate entries in pipefile 409 .
  • each execution stage corresponds to a specific entry in the pipe file 409 (e.g., EXE_2 corresponds to pipefile entry 409A, EXE_3 corresponds to entry 409B).
  • Results are written from the result bus into pipefile 409 according to the "stage_rdy" value in the snapshot register (FIG. 7A) or the E1_RESULT through E3_RESULT entries (FIG. 7B) as described hereinbefore.
  • Pipefile entry 409A takes the EXE_1 result and can forward its contents when the instruction that produced the result is in the EXE_2 stage.
  • pipefile entry 409B takes the EXE_2 result and 409C takes the EXE_3 result, respectively. Otherwise, results are moved sequentially from entry 409A to 409B to 409C. Entry 409C corresponds to the write back pipe stage. Assuming the snapshot register entry 701 corresponding to the instruction in the write back stage is valid and does not indicate an exception, the value stored in pipefile entry 409C is copied to the appropriate register in register file 407.
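  • The capture-and-commit behavior just described can be sketched as follows, with illustrative names; the exception gating follows the snapshot-entry discussion above.

    #include <array>
    #include <cstdint>

    // Sketch of result capture and writeback: the producing stage drives the
    // result bus once, the pipefile entry selected by that stage's pointer
    // latches it, and at writeback the entry is committed to the register file
    // only if no exception is flagged.
    struct WbInfo { bool valid, exception, rdest_valid; uint8_t rdest, pf_entry; };

    void capture(std::array<uint64_t, 4>& pipefile, uint8_t stage_pointer,
                 uint64_t result_bus) {
        pipefile[stage_pointer] = result_bus;  // single drive of the result bus
    }

    void writeback(const std::array<uint64_t, 4>& pipefile, const WbInfo& wb,
                   std::array<uint64_t, 32>& regfile) {
        if (wb.valid && wb.rdest_valid && !wb.exception)
            regfile[wb.rdest] = pipefile[wb.pf_entry];  // commit architectural state
    }

    int main() {
        std::array<uint64_t, 4> pf{};
        std::array<uint64_t, 32> rf{};
        capture(pf, 2, 99);                         // an EXE stage produced 99
        writeback(pf, {true, false, true, 7, 2}, rf);
        return rf[7] == 99 ? 0 : 1;
    }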
  • the operands for each execution stage can be selected from either the operand bus or the results bus.
  • a result that is ready in EXE_1 will be driven onto the EXE_1_RESULT line and can be used as an operand on the following cycle in the second and third execution stages before being written to either register file 407 or the pipefile 409.
  • a result determined in EXE_3 can be used on the next clock cycle as an operand for an instruction executing in the first execution stage (EXE_1). This enables the instruction to be issued to EXE_1 without the delays or pipeline bubbles normally associated with waiting for the EXE_3_RESULT to be written out to a register or rename register.
  • execution stage 507 can use its own output as well as the outputs of stages 509 and 511 as an operand for the next cycle. This is done, for example, by selecting EXE_1_RESULT, EXE_2_RESULT or EXE_3_RESULT as one of its operand inputs. This is particularly useful for accumulate-type operations where the destination register is used in a series of instructions to hold an accumulating result. Without this feature, pipeline bubbles would likely be inserted between accumulate instructions, thereby reducing throughput significantly. Using this feature, the decoder can issue accumulating-type instructions one after another.
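  • The accumulate case can be illustrated with a toy loop in which each step consumes the previous step's EXE_1 result directly off the result bus, so one accumulate issues per cycle with no interposed bubbles. The instruction semantics here are purely illustrative.

    #include <cstdint>

    // Toy model of back-to-back accumulates: each step's operand is selected
    // from the EXE_1_RESULT line (own-result bypass) rather than from the
    // register file, so there is no wait for writeback.
    int main() {
        uint64_t exe1_result_bus = 0;   // EXE_1_RESULT line
        const uint64_t addends[4] = {3, 5, 7, 11};
        for (uint64_t a : addends) {
            uint64_t acc_in = exe1_result_bus;  // bypassed previous result
            exe1_result_bus = acc_in + a;       // one accumulate per cycle
        }
        return exe1_result_bus == 26 ? 0 : 1;
    }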
  • each result is coupled to a corresponding selector unit 901.
  • Each selector selectively couples the result to one of the result bus lines.
  • Each selector is controlled by, for example, the pointer value (labeled POINTER_1, POINTER_2 and POINTER_3 in FIG. 9) corresponding to that pipe stage.
  • the pointer values are determined from the PIPE_FILE_ENTRY and E1_RESULT, E2_RESULT and E3_RESULT fields of snapshot entry 701.
  • the pointer value may be stored in the snapshot file entry 701 as described hereinbefore, or may be stored in a separate register that operates in a manner such that the pointer value remains associated with a particular instruction as the instruction moves through the pipeline. The result is written to the specified pipefile entry 409a-409c.
  • Pipefile 409 preferably comprises a dual-ported memory structure so that the contents of any entry 409a-409c can be written to and/or read out at any time.
  • the memory within pipefile 409 is typically implemented using CMOS or BiCMOS static random access memory (SRAM) technology using four or more transistors per stored bit.
  • a multiplexor set 903 selectively couples the data stored in pipefile entries 409a-409c to appropriate lines on a pipefile bus 904.
  • the pipefile bus 904 provides values to the multiplexing mechanism shown in FIG. 8, for example.
  • Multiplexor set 903 is controlled by pipe control unit 401 to couple appropriate bus lines to corresponding entries 409a-409c in pipefile 409.
  • multiplexor 903 may be implemented in a variety of ways depending on the level of operand forwarding needed in a particular implementation. For example, if operand forwarding from the pipefile is not needed, there would be no corresponding need to generate the PIPEFILE_SRC1, PIPEFILE_SRC2 and PIPEFILE_SRC3 lines.
  • the writeback line is controlled by the writeback stage pointer and selects one of the pipefile entries for writeback to an architectural register in register file 407 .

Abstract

A processor including a plurality of execution pipeline stages where each stage accepts a plurality of operand inputs and generates a result. A pipefile having at least the same number of entries as the number of execution pipeline stages is included in the processor. A pointer register is associated with each execution pipeline stage. A value is stored in at least one of the pointer registers, the value indicating a particular one of the entries in the pipefile.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates in general to microprocessors and, more particularly, to a system, method, and mechanism providing power efficient operation in a pipeline processor. [0002]
  • 2. Relevant Background [0003]
  • Computer programs comprise a series of instructions that direct a data processing mechanism to perform specific operations on data. These operations include loading data from memory, storing data to memory, adding, multiplying, and the like. Data processors, including microprocessors, microcontrollers, and the like, include a central processing unit (CPU) comprising one or more functional units that perform various tasks. Typical functional units include a decoder, an instruction cache, a data cache, an integer execution unit, a floating point execution unit, a load/store unit, and the like. A given program may run on a variety of data processing hardware. [0004]
  • Early data processors executed only one instruction at a time. Each instruction was executed to completion before execution of a subsequent instruction was begun. Each instruction typically requires a number of data processing operations and involves multiple functional units within the processor. Hence, an instruction may consume several clock cycles to complete. In serially executed processors each functional unit may be busy during only one step, and idle during the other steps. The serial execution of instructions results in the completion of less than one instruction per clock cycle. [0005]
  • As used herein the term "data processor" includes complex instruction set computers (CISC), reduced instruction set computers (RISC) and hybrids. A data processor may be a stand alone central processing unit (CPU) or an embedded system comprising a processor core integrated with other components to form a special purpose data processing machine. The term "data" refers to digital or binary information that may represent memory addresses, data, instructions, or the like. [0006]
  • In response to the need for improved performance several techniques have been used to extend the capabilities of these early processors including pipelining, superpipelining, and superscaling. Pipelined architectures attempt to keep all the functional units of a processor busy at all times by overlapping execution of several instructions. Pipelined designs increase the rate at which instructions can be executed by allowing a new instruction to begin execution before a previous instruction is finished executing. A simple pipeline may have only five stages whereas an extended pipeline may have ten or more stages. In this manner, the pipeline hides the latency associated with the execution of any particular instruction. [0007]
  • The goal of pipeline processors is to execute multiple instructions per cycle (IPC). Due to pipeline hazards, actual throughput is reduced. Pipeline hazards include structural hazards, data hazards, and control hazards. Structural hazards arise when more than one instruction in the pipeline requires a particular hardware resource at the same time (e.g., two execution units requiring access to a single ALU resource in the same clock cycle). Data hazards arise when an instruction needs as input the output of an instruction that has not yet produced that output. Control hazards arise when an instruction changes the program counter (PC) because execution cannot continue until the target instruction from the new PC is fetched. [0008]
  • When hazards occur, the processor must stall or place “bubbles” (e.g., NOPs) in the pipeline until the hazard condition is resolved. This increases latency and decreases instruction throughput. As pipelines become longer, the likelihood of hazards increases. Hence, an effective mechanism for handling hazard conditions is important to achieving the benefits of deeper pipelines. [0009]
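  • As a toy illustration (the timing model and stage numbers are assumptions, not taken from the patent), the number of bubbles forced by a read-after-write hazard can be estimated from when the producer's result becomes readable and when the consumer needs it:

    #include <cstdio>

    // Toy model of a read-after-write hazard. The producer's result becomes
    // readable at the end of cycle `ready` (counted from its issue); a
    // consumer issued `gap` cycles later needs the value at the start of its
    // stage `needed`. Any shortfall must be filled with bubbles.
    int bubbles(int ready, int needed, int gap) {
        int stall = (ready + 1) - (needed + gap);
        return stall > 0 ? stall : 0;
    }

    int main() {
        // Single-cycle producer, dependent instruction issued the next cycle:
        printf("wait for writeback: %d bubbles\n", bubbles(/*ready=*/4, 1, 1));  // 3
        printf("with forwarding:    %d bubbles\n", bubbles(/*ready=*/1, 1, 1));  // 0
        return 0;
    }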
  • Another goal of many processors is to control the power used by the processor. Many applications, particularly those directed at mobile or battery operated environments, require low power usage. The execution pipelines of a computer consume a significant amount of power. Power consumption is largely caused by moving data between registers, files, and execution units. As data paths become wider, the power consumed to move the data increases. [0010]
  • Hence, in order to execute instructions efficiently at a high throughput within a pipeline it is important to coordinate and control the flow of instructions, operations, and data within the execution pipeline. The order and manner in which the operands and results of these instructions are made available to each other within the execution pipeline is of critical importance to the throughput of the pipeline. [0011]
  • SUMMARY OF THE INVENTION
  • The present invention involves a processor including a plurality of execution pipeline stages where each stage accepts a plurality of operand inputs and generates a result. A pipefile having at least the same number of entries as the number of execution pipeline stages is included in the processor. A pointer register is associated with each execution pipeline stage. A value is stored in at least one of the pointer registers, the value indicating a particular one of the entries in the pipefile. [0012]
  • The present invention involves a method, system and apparatus for forwarding data within a pipeline of a pipelined data processor having a plurality of execution pipeline stages where each stage accepts a plurality of operand inputs and generates a result. A pipefile is implemented having at least the same number of entries as the number of execution pipeline stages. Each new instruction is assigned to one of the entries in the pipefile before the new instruction is executed. The pipefile entry assignment remains valid while the instruction remains in any of the execution pipeline stages. The new instruction is passed through the execution pipeline stages to generate a result. Upon successful completion of executing the new instruction, the result is written back from the assigned pipefile entry to an architectural register. [0013]
  • The foregoing and other features, utilities and advantages of the invention will be apparent from the following more particular description of a preferred embodiment of the invention as illustrated in the accompanying drawings.[0014]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows in block diagram form a computer system incorporating an apparatus and system in accordance with the present invention; [0015]
  • FIG. 2 shows a processor in block diagram form incorporating the apparatus and method in accordance with the present invention; [0016]
  • FIG. 3 illustrates a CPU core useful in the implementation of the processor and system shown in FIG. 1 and FIG. 2 in accordance with the present invention; [0017]
  • FIG. 4 shows an instruction fetch unit in which features of the present invention are embodied in a particular implementation; [0018]
  • FIG. 5 illustrates an exemplary execution pipeline in accordance with a specific embodiment of the present invention; [0019]
  • FIG. 6 illustrates comparative pipeline timing for the execution pipeline shown in FIG. 5; [0020]
  • FIG. 7A and FIG. 7B show exemplary snapshot register entries in accordance with embodiments of the present invention; [0021]
  • FIG. 8 shows an operand multiplexing mechanism in accordance with an embodiment of the present invention; and [0022]
  • FIG. 9 schematically illustrates internal operand forwarding mechanism in accordance with the present invention. [0023]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Power efficient operation is an important feature for many data processors. This is particularly true for embedded processors so that they do not place undue demands on the power supply requirements for the system in which they are embedded. The present invention is illustrated in terms of a particular embedded processor system using a multi-stage pipeline for processing instructions. The present invention particularly involves a structure for efficiently forwarding data within the pipefile mechanism so that, for example, operands that are determined by a first instruction within the pipeline can be used by subsequent instructions before the first instruction has completed to write back. [0024]
  • Operand forwarding is important in avoiding pipeline stalls, but can lead to a significant amount of power loss as data is copied and moved between registers to make the data available throughout the pipeline. The power required is more significant when wide data words (e.g., 64-bit, 128-bit, or larger) are used. The present invention provides a mechanism that limits the need to copy data between registers within the pipeline. [0025]
  • The present invention implements a mechanism called a “pipefile” to improve power performance. Results from execution units are written on the results busses only once. They are captured by the pipefile which acts as a sort of cache. The results are forwarded as needed from the pipefile. A less efficient pipeline processor implementation simply moves the results from stage-to-stage through the execution pipeline without using a pipefile. However, since the results busses are heavily loaded due to the loads they are driving and parasitic impedance, driving the busses multiple times for the same interim result can be very power inefficient. Using the pipefile in accordance with the present invention, results from the execution stages need only be driven onto the results bus once. [0026]
  • In one implementation the pipefile mimics the pipeline and shifts the result from entry-to-entry as its producing instruction moves through the pipeline. This offers some improvement as the data is moved without the penalty of the heavily loaded results bus. In an improved implementation, once into the pipefile, a result stays in the entry until the instruction has completed and the result has been committed to the register file. The improved implementation avoids power loss associated with switching the transistor in the pipefile. [0027]
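  • A back-of-envelope comparison makes the point. The three-stage count matches the pipeline described later in this document; the energy-per-drive figure is a made-up placeholder:

    #include <cstdio>

    // If an interim result produced in the first of three execution stages is
    // shifted stage to stage, the heavily loaded result path is driven once
    // per remaining stage; with the stationary pipefile it is driven exactly
    // once and then forwarded from the file.
    int main() {
        const int stages_after_produce = 3;   // EXE_1 result travels to writeback
        const double pj_per_bus_drive = 5.0;  // placeholder energy figure
        double shifting = stages_after_produce * pj_per_bus_drive;
        double pipefile = 1 * pj_per_bus_drive;
        printf("shifting: %.1f pJ, pipefile: %.1f pJ\n", shifting, pipefile);
        return 0;
    }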
  • Any system is usefully described as a collection of processes or modules communicating via data objects or messages as shown in FIG. 1. The modules may be large collections of circuitry whose properties are somewhat loosely defined, and may vary in size or composition significantly. The data object or message is a communication between modules that make up the system. To actually connect a module within the system it is necessary to define an interface between the system and the component module. [0028]
  • The present invention is illustrated in terms of a media system 100 shown in FIG. 1. Media processor 100 comprises, for example, a "set-top box" for video processing, a video game controller, a digital video disk (DVD) player, and the like. Essentially, system 100 is a special purpose data processing system targeted at high throughput multimedia applications. Features of the present invention are embodied in processor 101 that operates to communicate and process data received through a high speed bus 102, peripheral bus 104, and memory bus 106. [0029]
  • Video controller 105 receives digital data from system bus 102 and generates video signals to display information on an external video monitor, television set, and the like. The generated video signals may be analog or digital. Optionally, video controller 105 may receive analog and/or digital video signals from external devices as well. Audio controller 107 operates in a manner akin to video controller 105, but differs in that it controls audio information rather than video. Network I/O controller 109 may be a conventional network card, ISDN connection, modem, and the like for communicating digital information. Mass storage device 111 coupled to high speed bus 102 may comprise magnetic disks, tape drives, CDROM, DVD, banks of random access memory, and the like. A wide variety of random access and read only memory technologies are available and are equivalent for purposes of the present invention. Mass storage 111 may include computer programs and data stored therein. In a particular example, high speed bus 102 is implemented as a peripheral component interconnect (PCI) industry standard bus. An advantage of using an industry standard bus is that a wide variety of expansion units such as controllers 105, 107, 109 and 111 are readily available. [0030]
  • Peripherals 113 include a variety of general purpose I/O devices that may require lower bandwidth communication than provided by high speed bus 102. Typical I/O devices include read only memory (ROM) devices such as game program cartridges, serial input devices such as a mouse or joystick, keyboards, and the like. Processor 101 includes corresponding serial port(s), parallel port(s), printer ports, and external timer ports to communicate with peripherals 113. Additionally, ports may be included to support communication with on-board ROM, such as a BIOS ROM, integrated with processor 101. External memory 103 is typically required to provide working storage for processor 101 and may be implemented using dynamic or static RAM, ROM, synchronous DRAM, or any of a wide variety of equivalent devices capable of storing digital data in a manner accessible to processor 101. [0031]
  • Processor 101 is illustrated in greater detail in the functional diagram of FIG. 2. One module in a data processing system is a central processor unit (CPU) core 201. The CPU core 201 includes, among other components, execution resources (e.g., arithmetic logic units, registers, control logic) and cache memory. These functional units, discussed in greater detail below, perform the functions of fetching instructions and data from memory, preprocessing fetched instructions, scheduling instructions to be executed, executing the instructions, managing memory transactions, and interfacing with external circuitry and devices. [0032]
  • CPU core 201 communicates with other components shown in FIG. 2 through a system bus 202. In the preferred implementation, system bus 202 is a high-speed network bus using packet technology and is referred to herein as a "super highway". Bus 202 couples to a variety of system components. Of particular importance are components that implement interfaces with external hardware such as external memory interface unit 203, PCI bridge 207, and peripheral bus 204. [0033]
  • The organization of interconnects in the system illustrated in FIG. 2 is guided by the principle of optimizing each interconnect for its specific purpose. The bus system 202 interconnect facilitates the integration of several different types of sub-systems. It is used for closely coupled subsystems which have stringent memory latency/bandwidth requirements. The peripheral subsystem bus 204 supports bus standards which allow easy integration of hardware of types indicated in reference to FIG. 1 through interface ports 213. PCI bridge 207 provides a standard interface that supports expansion using a variety of PCI standard devices that demand higher performance than is available through peripheral port 204. The system bus 202 may be outfitted with an expansion port which supports the rapid integration of application modules without changing the other components of system 101. External memory interface 203 provides an interface between the system bus 202 and the external main memory subsystem 103 (shown in FIG. 1). The external memory interface comprises a port to system bus 202 and a DRAM controller. [0034]
  • The [0035] CPU core 201 can be represented as a collection of interacting functional units as shown in FIG. 3. These functional units, discussed in greater detail below, perform the functions of fetching instructions and data from memory, preprocessing fetched instructions, scheduling instructions to be executed, executing the instructions, managing memory transactions, and interfacing with external circuitry and devices.
• [0036] A bus interface unit (BIU) 301 handles all requests to and from the system bus 202 and external memory. An instruction flow unit (IFU) 303 is the front end of the CPU pipe and controls fetch, predecode, decode, issue and branch operations in the preferred embodiment. In accordance with the preferred embodiment, IFU 303 includes a pipe control unit 401 (shown in FIG. 4) that implements features of the present invention. However, it is contemplated that the inventive features of the present invention may be usefully embodied in a number of alternative processor architectures that will benefit from the performance features of the present invention. Accordingly, these alternative embodiments are equivalent to the particular embodiments shown and described herein.
• [0037] An execution unit (IEU) 305 handles all integer and multimedia instructions. The main CPU datapath includes an instruction cache unit (ICU) 307 that implements an instruction cache (Icache, not shown) and an instruction translation lookaside buffer (ITLB, not shown). Load store unit (LSU) 309 handles all memory instructions. A data cache control unit (DCU) 311 includes a data cache (Dcache, not shown) and a data translation lookaside buffer (DTLB, not shown). Although the present invention preferably uses separate data and instruction caches, it is contemplated that a unified cache can be used with some decrease in performance. In a typical embodiment, the functional units shown in FIG. 2, and some or all of cache memory 105, may be integrated in a single integrated circuit, although the specific components and integration density are a matter of design choice selected to meet the needs of a particular application.
• [0038] FIG. 4 shows hardware resources within IFU 303, including a pipe control unit 401 in accordance with the present invention. FIG. 4 shows a simplified IFU block diagram with the internal blocks as well as the external interfacing units. As shown in FIG. 4, IFU 303 can be divided into the following functional blocks according to their functions: the Instruction Cache Control Unit (ICC) 413, the Fetch Unit (FE) 403, the Branch Unit (BR) 411, the Decode Unit 405, the Pipe Control Unit 401, and the Operand File Unit comprising register file 407 and pipe file 409.
• [0039] IFU 303 functions as the sequencer of the CPU core 201 in accordance with the present invention. It coordinates the flow of instructions and data within the core 201 and merges external events with the core's internal activities. Its main functions are to fetch instructions from ICU 307 using fetch unit 403 and to decode the instructions in decoder 405. IFU 303 checks for instruction inter-dependency, reads the operands from the register file 407, and sends the decoded instructions and the operands to the execution units (e.g., IEU 305 and LSU 309). In addition, IFU 303 couples to BIU 301 on instruction cache misses to fill the instruction cache within ICU 307 with the missing instructions from external memory.
• [0040] Because of its sequencing role within the CPU core 201, IFU 303 interfaces with almost every other functional unit. The interface between IFU 303 and BIU 301 initiates the loading of instructions into the instruction cache. The interface between IFU 303 and ICU 307 provides the flow of instructions for execution. The interfaces between IFU 303 and IEU 305 and LSU 309 provide the paths for sending/receiving instructions, operands, and results, as well as the control signals to enable the execution of instructions. In addition to these interfaces, IFU 303 may also receive external interrupt signals from an external interrupt controller (shown in FIG. 2), which samples and arbitrates external interrupts. IFU 303 then arbitrates the external interrupts with internal exceptions and activates the appropriate handler to take care of the asynchronous events.
• [0041] Once instructions are decoded, pipe control unit 401 monitors their execution through the remaining pipe stages. The main function of pipe control unit 401 is to ensure that instructions are executed smoothly and correctly: (i) instructions are held in the decode stage until the source operands are ready or can be ready when needed, (ii) synchronization and serialization requirements imposed by the instruction as well as internal/external events are observed, and (iii) data operands/temporary results are forwarded correctly.
• [0042] The operand file unit implements the architecturally defined general purpose register file 407. In addition, it also implements a limited version of a reorder buffer called the “pipe file” 409 for storing and forwarding temporary results that are yet to be committed to architectural registers. Because CPU core 201 is principally directed at in-order execution, there is only a small window of time in which execution results may be produced out-of-order. The present invention takes advantage of this property and implements a simplified version of the reorder buffer that allows temporary results to be forwarded as soon as they are produced, while avoiding the expensive tag passing/matching mechanism usually associated with a reorder buffer. The operand file implements the data path portion of this pipe file. The control is implemented in the pipe control unit 401.
  • [0043] Pipe file 409 operates to collect results from the execution units, and writes them back to the register file 407 during the writeback stage. Pipe file 409 is an important component of the present invention. One option for using pipe file 409 is to have a pipe file entry associated with each execution stage. This requires that interim results determined at an early execution stage be copied from entry-to-entry within pipefile 409 so that the interim result follows the instruction through the pipeline. The present invention involves a mechanism and method of operation that avoids this entry-to-entry data shifting. These features are described in greater detail hereinafter with respect to FIG. 9.
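For illustration only, the following minimal Python sketch (not part of the original disclosure; the class names are hypothetical) contrasts the entry-to-entry copying that the invention avoids with the write-once pipefile described above:

    # Hypothetical model: a pipefile that shifts the wide result word every
    # cycle versus one that writes the result once and reuses the entry.
    class ShiftingPipefile:
        """Result follows its instruction by being recopied each cycle."""
        def __init__(self, stages=3):
            self.entries = [None] * stages

        def advance(self):
            # The full-width result word is rewritten into the next entry.
            self.entries = [None] + self.entries[:-1]

    class PointerPipefile:
        """Result is written once; only a 2-bit pointer travels."""
        def __init__(self, stages=3):
            self.entries = [None] * stages
            self.free = list(range(stages))   # entries handed out at decode

        def assign_at_decode(self):
            return self.free.pop(0)           # pointer kept in the snapshot file

        def write_result(self, pointer, value):
            self.entries[pointer] = value     # single write, never recopied

        def writeback(self, pointer):
            value = self.entries[pointer]
            self.free.append(pointer)         # entry reusable after writeback
            return value

In the pointer-based model, advancing the pipeline touches only the pointer; the stored result word is left undisturbed until writeback, which is the behavior described for pipefile 409 below.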
• [0044] FIG. 5 and FIG. 6 illustrate an example execution pipeline in accordance with the present invention. The particular example is a scalar (i.e., single pipeline), single issue machine. The implementation in FIG. 5 and FIG. 6 includes three execution stages. Many instructions, however, execute in a single cycle. The present invention implements features to enable comprehensive forwarding within the pipeline to achieve a high instruction throughput. Although illustrated in terms of a single pipeline (i.e., scalar) machine, the teachings of the present invention are adapted to multiple pipeline machines in a straightforward manner.
• [0045] In the pre-decode stage 503, the instruction cache access which was initiated in the previous cycle is completed and the instruction is returned to IFU 303, where it can be latched by mid-cycle. An instruction may spend from 1 to n cycles in stage 503 depending on downstream pipeline instructions. In the second half of stage 503, some pre-decoding of the instruction will be carried out. Decode stage 505 handles the full instruction decode, operand dependency checks, register file read, and instruction issue to the execution units.
• [0046] The first execution stage 507 implements the execution of all single cycle integer instructions as well as the address calculation for memory and branch instructions. The second execution stage 509 implements the second cycle of execution for all multicycle integer/multimedia instructions. Additionally, it corresponds to the second cycle for load instructions. The third execution stage 511 implements the third cycle of execution for all multicycle integer/multimedia instructions and corresponds to the completion cycle for load instructions. Write back stage 513 is where all architectural state modified by an instruction (e.g., general purpose registers, program counter, etc.) is updated. The exception status of the instruction arriving in this stage, or any external exception, can prevent the update in this stage.
• [0047] The pipe control unit 401 performs a number of operations in handling the instruction flow. An important feature of the pipe control unit 401 is the pipeline snapshot file 415 (shown in FIG. 4) implemented within pipe control unit 401. Snapshot file 415 may be implemented as a lookup table having a table entry 701 (shown in FIG. 7) corresponding to each execution stage in the pipeline. The snapshot file 415 provides a central resource for all pipeline control operations such as dependency checks, operand forwarding, exception handling, and the like. In a particular implementation, snapshot file 415 includes four entries corresponding to the three execution pipeline stages and the writeback pipeline stage.
• [0048] FIG. 7A and FIG. 7B show exemplary snapshot files 701 and 702, whose entries hold metadata describing the instruction execution state at the corresponding pipe stage. As instructions move from one stage to another, their associated snapshot entry moves to the corresponding snapshot entry 701 or 702. The contents of each snapshot entry 701 may be varied to meet the needs of a particular application. The specific examples shown in FIG. 7 correspond to pipeline control operations described hereinbelow. The essential functionality of examples 701 and 702 is similar, although the implementation of that functionality differs between the examples. In comparing the examples, snapshot file 701 does not include a “STAGE” entry, as that is implied by the index of the entry, whereas example 702 includes an explicit STAGE entry. The single STAGE_RDY entry of FIG. 7B is implemented using three separate entries (E1_RESULT, E2_RESULT and E3_RESULT) in the example of FIG. 7A. The fields have the functions generally described in the figures, and additional or fewer fields may be provided to meet the needs of a particular application.
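A snapshot entry of the kind shown in FIG. 7A can be sketched as a simple record. The following Python fragment is illustrative only; the field defaults are assumptions rather than part of the disclosure:

    # Illustrative snapshot entry in the style of FIG. 7A; FIG. 7B would
    # replace the three E*_RESULT flags with a single STAGE_RDY field.
    from dataclasses import dataclass

    @dataclass
    class SnapshotEntry:
        pipe_file_entry: int       # 2-bit pointer into the three-entry pipefile
        e1_result: bool = False    # result produced in the EXE_1 stage
        e2_result: bool = False    # result produced in the EXE_2 stage
        e3_result: bool = False    # result produced in the EXE_3 stage
        exception: bool = False    # instruction has raised an exception
        rdest_valid: bool = False  # result should reach the register file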
• [0049] In particular, snapshot entry 701 includes a pointer to the pipefile entry corresponding to that instruction. An instruction is assigned an entry in pipefile 409 at decode, and the assigned value is indicated in the instruction's snapshot entry 701. Other execution stages or hardware resources that need to know which pipefile entry is being used by the instruction can look to the snapshot entry 701 for that information. In the particular example there are three pipefile entries corresponding to the three execution stages of the pipeline. Hence, only two bits of information are needed to point to the correct pipefile entry.
• [0050] As an instruction moves through the pipeline and results become available, the results are written to the specified pipefile entry in the execution stage indicated by the “stage_rdy” field in snapshot entry 702. Subsequently, the result remains in the same pipefile entry while the instruction moves through the pipeline. In this manner the present invention avoids the power usage normally required to move the result from entry to entry within pipefile 409. Instead, only the two-bit pointer needs to be moved from entry to entry within snapshot entries 701 and 702. This can translate to hundreds or thousands of fewer transistor switching operations per clock cycle for a wide data word.
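The magnitude of the saving can be estimated with simple arithmetic; the word width and copy count below are assumptions chosen purely for illustration:

    # Rough estimate: bit-writes saved per instruction when only the 2-bit
    # pointer, rather than the full result word, follows the instruction.
    WORD_BITS = 64        # assumed result width
    POINTER_BITS = 2      # enough to index three pipefile entries
    COPIES = 2            # e.g., entry A -> B -> C over two cycles

    saved = (WORD_BITS - POINTER_BITS) * COPIES
    print(f"bit-writes avoided per instruction: {saved}")   # 124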
• [0051] In operation, the snapshot register may be used by the pipe control unit 401 to perform a number of parallel checks to classify the instruction currently being processed by decoder 405. For example, the three potential operand register fields of the instruction word are checked against the existing pipe snapshot to detect data dependency, forwarding dependence, write-after-write hazards, and write-after-write for an accumulating-type instruction.
• [0052] Under normal conditions, once an instruction has been issued to an execution unit, its entry will progress through each stage of the snapshot file on each clock edge. At the beginning of each execution stage the control for writing the result to the pipefile is generated. This is determined by checking the E1_RESULT, E2_RESULT, and E3_RESULT fields of the current execution stage. For example, if the E1_RESULT field is set for the instruction executing in the EXE_1 stage 507, the result from EXE_1 stage 507 will be written into the pipefile entry indexed by the PIPE_FILE_ENTRY field. Similarly, the results from the EXE_2 and EXE_3 stages will be written into the pipefile 409 when the E2_RESULT and E3_RESULT fields of the corresponding snapshot file entries are set. The write into pipefile 409 will occur even if the EXCEPTION field in snapshot file 702 is set. This allows transportation of exception data back to the branch unit. Once an instruction reaches write-back, the rdest_valid field also determines whether the contents of the pipefile are written back to the architectural register file. Once in write-back, if no exception has occurred, the instruction has completed.
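The per-stage write control just described can be summarized in a short sketch, reusing the illustrative SnapshotEntry record from above (again, an assumption for exposition, not the patented circuit):

    # Generate (pipefile entry, value) writes for the current cycle.
    # snapshot: entries for EXE_1..EXE_3 (index 0..2); results: stage outputs.
    def pipefile_write_controls(snapshot, results):
        writes = []
        for stage, entry in enumerate(snapshot):
            if entry is None:
                continue
            ready = (entry.e1_result, entry.e2_result, entry.e3_result)[stage]
            if ready:
                # Written even when entry.exception is set, so exception
                # data is transported to the branch unit at write-back.
                writes.append((entry.pipe_file_entry, results[stage]))
        return writes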
• [0053] The snapshot register plays a role in managing pipefile 409 and operand file 407 updates in the event of exceptions. Even though an exception has been detected, the pipefile 409 will continue to be updated with data according to the “stage_rdy” field of the snapshot file. While an excepting instruction is executing through the pipe, in certain cases the result data associated with the excepting instruction is of interest. A key point is that these results are written to pipefile 409 in the normal stage_rdy stage of the excepting instruction. As long as this rule is honored, exception data is transported through the pipefile 409 as normal and will indicate to the branch unit 411 at write-back that exception data of interest is on the write-back bus.
• [0054] Another general utility of the snapshot register is in handling internal operand forwarding within the pipeline. Because the snapshot entry 701 indicates which pipestage will produce a result to the pipefile 409, subsequent instructions can reliably use the interim result from the pipefile 409 before the interim result is committed to architectural state. This process is called internal operand forwarding. The present invention supports internal operand forwarding by providing a pipefile entry from which the interim result can be readily forwarded.
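A decode-time forwarding check along these lines might look as follows; the producer map and the scan order are assumptions made for the sketch:

    # Decide where a source register's value should come from: an interim
    # result already sitting in the pipefile, or the architectural registers.
    def locate_operand(src_reg, snapshot, dest_regs):
        """dest_regs: destination register written by each in-flight entry."""
        for idx, entry in enumerate(snapshot):   # EXE_1 first: youngest producer wins
            if entry is not None and dest_regs[idx] == src_reg:
                if not entry.exception:
                    return ("pipefile", entry.pipe_file_entry)
                break
        return ("register_file", src_reg)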
• [0055] When decode indicates that it has a valid instruction, the pipe control block determines from the instruction code the source of the operands for the instruction. The operand can be sourced from, for example:
• [0056] Register operands;
• [0057] Indirectly forwarded operands through the three pipefile entries;
• [0058] Directly forwarded operands from the result busses;
• [0059] The extended immediate field from the instruction;
• [0060] The program counter;
• [0061] The contents of an instruction address register (IAR);
• [0062] The contents of a control register; and
• [0063] A tied low constant field.
• [0064] The above gives up to 12 possible sources of input for each operand. FIG. 8 illustrates an exemplary operand multiplexing (“muxing”) mechanism that enables rich sharing of operands within the pipeline. The mechanism shown in FIG. 8 is distributed throughout pipe control unit 401 as described below. The operand multiplexer mechanism of FIG. 8 produces three choices (e.g., IFU_SRC1, IFU_SRC2, IFU_SRC3) for the source operands provided to the first execution stage 507. Each execution stage produces a result (labeled EXE_1, EXE_2, and EXE_3 in FIG. 8) that may be used as a source operand input to the first execution stage 507. Execution stage 507 is associated with multiplexors 809a-809c for selecting up to three source operands from those available. The specific examples given herein are for purposes of explanation and understanding, and are not a limitation on the actual implementation.
• [0065] It should also be understood that execution stages 507, 509 and 511 shown in FIG. 8 are representative of all of the hardware resources used in each execution stage as defined by the processor microarchitecture. An execution stage is physically implemented using hardware resources such as those shown in FIG. 3. The outputs of multiplexors 809 are physically coupled to each of the hardware resources that will use the source operands during its operation.
• [0066] The multiplexing of these operand sources in the particular example is distributed in the following way:
• [0067] The program counter (PC), instruction address registers, and control register contents are pre-muxed in the branch unit using multiplexors 801 and 803. All these inputs are available at the start of the cycle.
• [0068] The decode constant extracted from the instruction, and possibly tied high zeroes, are pre-muxed in the decode stage using multiplexor 811.
• [0069] The outputs of the pipefile 409 are muxed with the program counter data and decode constant data, respectively, in multiplexors 805 and 813.
• [0070] The register file contents are muxed with the pipefile outputs using multiplexors 807, 815, and 821 to produce source operands which are distributed down the execution datapath (IFU_SRC1, IFU_SRC2, IFU_SRC3 in FIG. 8).
• [0071] Forwarding of completing results is done locally within the execution datapath, as suggested by the connection from the output of the EXE_3 stage to the input of multiplexor 809. As the result is being driven back up the datapath from the various stages of execution (imu_result_ex1, _ex2 and _ex3), the result taps back into the multiplexor 809 latch at the input to the execution sub-units. The result is also driven back up to the pipefile for ultimate storage in the register file. Pipe control unit 401 controls the selection of the multiplexor 809 latches.
• [0072] The LSU ex3 result is muxed with the IMU ex3 result (from the multiplier). This is also controlled by the pipe control unit 401.
• [0073] In this manner, pipe control unit 401 generates the control signals for the multiplexors and execution stage resources. This enables the source operand inputs used by each execution stage to be selected from among a plurality of possible inputs. Of particular significance is that each source operand can be forwarded from the interim results stored in the pipefile whenever valid results are available there. This is useful in handling data hazards in a manner that limits the need to stall the pipeline or fill the pipeline with bubbles while data dependencies resolve. The particular choice and distribution of operand sources can include more or fewer sources to meet the needs of a particular application, and unless specified otherwise herein the examples are provided for purposes of illustration only.
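The selection itself reduces to a wide multiplexor per source operand. The following sketch (with an invented encoding for the select control) illustrates choosing one IFU_SRC value from the sources enumerated above:

    # One operand multiplexor: 'select' names which of the possible sources
    # feeds an IFU_SRC line this cycle. The encoding is hypothetical.
    def select_operand(select, ctx):
        sources = {
            "register":   lambda: ctx["registers"][ctx["reg_num"]],
            "pipefile":   lambda: ctx["pipefile"][ctx["pipe_ptr"]],  # indirect forward
            "result_bus": lambda: ctx["results"][ctx["stage"]],      # direct forward
            "immediate":  lambda: ctx["imm"],
            "pc":         lambda: ctx["pc"],
            "iar":        lambda: ctx["iar"],
            "control":    lambda: ctx["control_reg"],
            "zero":       lambda: 0,                                 # tied-low constant
        }
        return sources[select]()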
• [0074] Moreover, each source operand is desirably allowed to be taken from the execution unit's own result output. This is particularly useful for accumulate-type operations where the destination register is used in a series of instructions to hold an accumulating result. Without this feature, pipeline bubbles would likely be inserted between accumulate instructions, thereby reducing throughput significantly. Using this feature, the decoder can issue accumulating-type instructions one after another.
• [0075] FIG. 9 schematically illustrates the execution stages of a pipeline and the operand sources for each stage. Each execution stage (EXE_1, EXE_2 and EXE_3) may generate a result. The specific stage that generates a result for any given instruction will vary from instruction to instruction, but is preferably indicated in the “stage_rdy” field of the snapshot file entry 702 or the E1_RESULT, E2_RESULT and E3_RESULT fields described hereinbefore. Each source operand can be taken from the execution unit's own result output. FIG. 9 shows an operand bus comprising IFU_SRC1, IFU_SRC2 and IFU_SRC3 (determined as shown in FIG. 8) and a results bus comprising EXE_1_RESULT, EXE_2_RESULT and EXE_3_RESULT. The results bus carries results to appropriate entries in pipefile 409.
• [0076] In the embodiment shown in FIG. 9, each execution stage corresponds to a specific entry in the pipe file 409 (e.g., EXE_2 corresponds to pipefile entry 409A, EXE_3 corresponds to entry 409B). Results are written from the result bus into pipefile 409 according to the “stage_rdy” value in the snapshot register (FIG. 7B) or the E1_RESULT through E3_RESULT entries (FIG. 7A), as described hereinbefore. Pipefile entry 409A takes the EXE_1 result and can forward its contents when the instruction that produced the result is in the EXE_2 stage. Similarly, pipefile entry 409B takes the EXE_2 result and entry 409C takes the EXE_3 result. Otherwise, results are moved sequentially from entry 409A to 409B to 409C. Entry 409C corresponds to the write back pipe stage. Assuming the snapshot register entry 701 corresponding to the instruction in the write back stage is valid and does not indicate an exception, the value stored in pipefile entry 409C is copied to the appropriate register in register file 407.
• [0077] Significantly, the operands for each execution stage can be selected from either the operand bus or the results bus. Hence, a result that is ready in EXE_1 will be driven onto the EXE_1_RESULT line and can be used as an operand on the following cycle in the second and third execution stages, before being written to either register file 407 or the pipefile 409. Similarly, a result determined in EXE_3 can be used on the next clock cycle as an operand for an instruction executing in the first execution stage (EXE_1). This enables the instruction to be issued to EXE_1 without the delays or pipeline bubbles normally associated with waiting for the EXE_3_RESULT to be written out to a register or rename register.
• [0078] Furthermore, execution stage 507 can use its own output, as well as the outputs of stages 509 and 511, as an operand for the next cycle. This is done, for example, by selecting EXE_1_RESULT, EXE_2_RESULT or EXE_3_RESULT as one of its operand inputs. This is particularly useful for accumulate-type operations where the destination register is used in a series of instructions to hold an accumulating result. Without this feature, pipeline bubbles would likely be inserted between accumulate instructions, thereby reducing throughput significantly. Using this feature, the decoder can issue accumulating-type instructions one after another.
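A short trace makes the benefit visible; the values below are invented for the example:

    # Back-to-back accumulates: each cycle's operand is taken from the
    # execution stage's own result line, so no bubble separates them.
    acc = 0
    for cycle, addend in enumerate([3, 5, 7], start=1):
        acc += addend          # operand forwarded from EXE_1_RESULT
        print(f"cycle {cycle}: EXE_1_RESULT = {acc}")
    # prints 3, 8 and 15 on consecutive cycles: one accumulate per cycle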
• [0079] The results are coupled to a corresponding selector unit 901. Each selector selectively couples the result to one of the result bus lines. Each selector is controlled by, for example, the pointer value (labeled POINTER_1, POINTER_2 and POINTER_3 in FIG. 9) corresponding to that pipe stage. The pointer values are determined from the PIPE_FILE_ENTRY and E1_RESULT, E2_RESULT and E3_RESULT fields of snapshot entry 701. Alternatively, the pointer value may be stored in the snapshot file entry 701 as described hereinbefore, or may be stored in a separate register that operates in a manner such that the pointer value remains associated with a particular instruction as the instruction moves through the pipeline. The result is written to the specified pipefile entry 409a-409c.
• [0080] Pipefile 409 preferably comprises a dual-ported memory structure so that the contents of any entry 409a-409c can be written and/or read out at any time. The memory within pipefile 409 is typically implemented using CMOS or BiCMOS static random access memory (SRAM) technology using four or more transistors per stored bit. A multiplexor set 903 selectively couples the data stored in pipefile entries 409a-409c to appropriate lines on a pipefile bus 904. The pipefile bus 904 provides values to the multiplexing mechanism shown in FIG. 8, for example. Multiplexor set 903 is controlled by pipe control unit 401 to couple appropriate bus lines to corresponding entries 409a-409c in pipefile 409.
• [0081] As a particular example, assume an instruction that generates its result in EXE_1, with the pointer values set such that the EXE_1 result is written to pipefile entry 409b. From pipefile entry 409b the result can be multiplexed onto any of the IFU_SRC lines by appropriate settings in multiplexor set 903. On the next pipe cycle, the example instruction will move to pipe stage EXE_2, while pipefile entry 409b remains unchanged. In this manner, a result needs to be written to the results bus only one time while remaining continuously available for forwarding while the instruction remains in the pipeline. The hundreds of transistors used to store the value in entry 409b do not have to be switched until after the value is written back and the pipefile entry is reassigned to an instruction in the decoder.
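The example can be traced in a few lines of Python; the entry numbering and the result value are assumptions:

    # Result written once to entry 409b; the entry is untouched while the
    # instruction advances, yet stays available for forwarding.
    pipefile = [None, None, None]          # entries 409a, 409b, 409c
    ptr = 1                                # decode assigned entry 409b
    pipefile[ptr] = 0xCAFE                 # EXE_1 result, written once
    for stage in ("EXE_2", "EXE_3", "write-back"):
        assert pipefile[ptr] == 0xCAFE     # never recopied between entries
    print(hex(pipefile[ptr]))              # value written back from entry 409b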
• [0082] It is contemplated that the functionality of multiplexor 903 may be implemented in a variety of ways depending on the level of operand forwarding needed in a particular implementation. For example, if operand forwarding from the pipefile is not needed, there would be no corresponding need to generate the PIPEFILE_SRC1, PIPEFILE_SRC2 and PIPEFILE_SRC3 lines. The writeback line is controlled by the writeback stage pointer and selects one of the pipefile entries for writeback to an architectural register in register file 407.
• [0083] While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various other changes in form and details may be made without departing from the spirit and scope of the invention. The various embodiments have been described using hardware examples, but the present invention can be readily implemented in software. For example, it is contemplated that a programmable logic device, hardware emulator, software simulator, or the like of sufficient complexity could implement the present invention as a computer program product including a computer usable medium having computer readable code embodied therein to perform precise architectural update in an emulated or simulated out-of-order machine. Accordingly, these and other variations are equivalent to the specific implementations and embodiments described herein.

Claims (14)

What is claimed is:
1. A method for forwarding data within a pipeline of a pipelined data processor comprising the steps of:
providing a plurality of execution pipeline stages where each stage accepts a plurality of operand inputs and generates a result;
providing a pipefile comprising at least the same number of entries as the number of execution pipeline stages;
assigning each new instruction to one of the entries in the pipefile before the new instruction is executed, wherein the pipefile entry assignment remains valid while the instruction remains in any of the execution pipeline stages;
passing the new instruction through the execution pipeline stages to generate a result;
storing the result in the assigned pipefile entry; and
upon successful completion of executing the new instruction, writing back the result from the assigned pipefile entry to an architectural register.
2. The method of
claim 1
further comprising selectively coupling the result generated by each execution pipeline stage to an operand input of one of the execution pipeline stages.
3. The method of
claim 1
wherein the step of assigning is performed at a decode stage before the new instruction is passed through any of the execution pipeline stages.
4. The method of
claim 1
wherein the assigning is performed so that each instruction in each execution pipeline stage is assigned to a unique one of the pipefile entries.
5. The method of
claim 1
wherein the execution pipeline stages comprise a write back pipeline stage, the write back pipeline stage having an instruction therein assigned to one of the pipefile entries, and the assigning further comprises assigning the new instruction to the pipefile entry currently being used by the write back pipeline stage.
6. The method of
claim 1
further comprising:
providing a pointer register associated with each execution pipeline stage; and
storing in each pointer register a value indicating the pipefile entry assigned to the instruction currently in the associated execution pipeline stage.
7. The method of
claim 6
further comprising moving the value stored in each pointer register to another pointer register at each cycle of the pipeline so that the value is always stored in a pointer register associated with the instruction to which the pipefile entry identified by the value is assigned.
8. A data processor comprising:
a plurality of execution pipeline stages where each stage accepts a plurality of operand inputs and generates a result;
a pipefile comprising at least the same number of entries as the number of execution pipeline stages;
a pointer register associated with each execution pipeline stage; and
a value stored in at least one of the pointer registers, the value indicating a particular one of the entries in the pipefile.
9. The data processor of
claim 8
further comprising:
a selector coupled to each execution pipeline stage that produces a result, the selector coupled to selectively route the result to one of the pipefile entries identified by the value stored in the pointer register associated with that execution pipeline stage.
10. The data processor of
claim 8
further comprising:
a pipefile bus communicating data stored in the pipefile, the pipefile bus comprising a plurality of lines where each line is associated with a particular pipeline execution stage;
a selector controlled by the values stored in the pointer registers, the selector coupled to each entry of the pipefile and the selector coupled to selectively route the data stored in each entry of the pipefile to a particular line of the pipefile bus.
11. The data processor of
claim 8
further comprising a decoder pipeline stage operative to receive a new instruction before the new instruction is passed to the execution pipeline stages, the decoder including logic for assigning the value to be stored in the pointer registers.
12. The data processor of
claim 11
wherein the execution pipeline stages include a write back pipeline stage, wherein the logic for assigning operates to assign values in a round-robin fashion such that the new instruction is assigned a value currently in the pointer register associated with the write back stage.
13. A method for forwarding data within a pipeline of a pipelined data processor comprising the steps of:
providing a plurality of execution pipeline stages where each stage includes logic for producing an instruction result;
providing a pipefile comprising a plurality of entries where each entry is associated with an execution pipeline stage;
when a result is generated by an execution pipeline stage, capturing the result in the associated entry in the pipefile; and
forwarding the captured result from the pipefile upon demand from an execution pipeline stage.
14. The method of
claim 13
further comprising:
passing the instruction through the execution pipeline stages;
shifting the captured results in the pipefile so that the captured result remains in a pipefile entry associated with an execution stage in which its producing instruction resides.

