|Publication number||US7475393 B2|
|Application number||US 10/993,971|
|Publication date||Jan 6, 2009|
|Filing date||Nov 19, 2004|
|Priority date||Aug 29, 2003|
|Also published as||US20050071835|
|Inventors||Raymond Brooke Essick, IV, Brian Geoffrey Lucas|
|Original Assignee||Motorola, Inc.|
This application claims priority to U.S. patent application Ser. No. 10/652,135, titled “Method and Apparatus for Elimination of Prolog and Epilog Instructions in a Vector Processor”, filed Aug. 29, 2003.
This invention relates generally to the field of Vector Processors. More particularly, certain embodiments consistent with this invention relate to a method and apparatus for unrolling cross-iteration computations in a vector processor.
Software pipelining for programmable, very long instruction word (VLIW) computers is a technique for introducing parallelism into machine computation of software loops. If different parts of the software loop use different hardware resources, the computation of one iteration of the loop may be started before the prior iteration has finished, thus reducing the total computation time. In this way several iterations of the loop may be in progress at any one time. In machines controlled by VLIW instructions, the instructions in the middle of the loop (where the pipeline is full) are different from the instructions at the start of the loop (the prolog) and the instructions at the end of the loop (the epilog). If a computation requires a number of different loops, a relatively large amount of memory is required to store instructions for the epilog and prolog portions of the loops.
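As a hedged illustration (not part of the patent text; the function name and stage breakdown are invented for exposition), the following sketch simulates a three-stage software pipeline (load, multiply, store) for a scale-by-constant loop. Only the kernel steps, where the pipeline is full, execute all three stages; the fill steps (prolog) and drain steps (epilog) each execute a different subset, which is why separate prolog and epilog instructions are normally needed.

```python
# Illustrative only: a 3-stage software pipeline (load -> multiply -> store)
# for a scale-by-constant loop.  The kernel (pipeline full) executes all
# three stages; the prolog and epilog steps execute only a subset.

def pipelined_scale(src, factor):
    n = len(src)
    dst = [None] * n
    loaded = multiplied = None
    for i in range(n + 2):              # n iterations plus 2 drain steps
        if i >= 2:                      # stage 3 (store): valid once full
            dst[i - 2] = multiplied
        if 1 <= i <= n:                 # stage 2 (multiply)
            multiplied = loaded * factor
        if i < n:                       # stage 1 (load)
            loaded = src[i]
    return dst
```

Running `pipelined_scale([1, 2, 3, 4], 10)` yields `[10, 20, 30, 40]`; steps 0 and 1 are the fill (prolog) and the last two steps are the drain (epilog).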
Software pipelining for programmable VLIW machines, such as the IA-64, is accomplished by predicating instructions and executing them conditionally as the software pipeline fills and drains. The predication mechanism tags instructions with a predicate that conditions execution and committing of results to the register file in a general-purpose processor. This approach is generalized in these processors because the predication mechanism is also used for general conditional execution. A disadvantage of this technique is the requirement for a centralized predicate register file.
Loop-unrolling is a common technique to improve the throughput of inner loops. This unrolling increases efficiency in processors having multiple functional units and also allows overlapping of various operational latencies. However, loop-unrolling has shortcomings when the number of input data items is not a multiple of the unrolling factor. This is exacerbated if one iteration of the calculation uses a value calculated in a different iteration. Such a dependency is called a cross-iteration dependency.
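To make this concrete, here is an illustrative sketch (function name invented, not from the patent) of a summation loop unrolled by a factor of two. The running total is the cross-iteration dependency, and a residual scalar step is needed whenever the input length is not a multiple of the unrolling factor:

```python
# Illustrative only: a summation loop unrolled by a factor of two.  The
# running total is a cross-iteration dependency, and the final scalar step
# is the residual work needed when len(xs) is odd.

def sum_unrolled_by_2(xs):
    total = 0
    i = 0
    while i + 1 < len(xs):       # unrolled body: two elements per trip
        total += xs[i] + xs[i + 1]
        i += 2
    if i < len(xs):              # residual element for odd-length input
        total += xs[i]
    return total
```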
Loops that accumulate (e.g. dot products) cannot be unrolled for higher concurrency unless the multiple inputs to the accumulator can be managed. For example, if the calculation of a dot product is broken into two parts, the code will not work when an odd number of iterations is required. Previous loop-unrolling techniques use epilog instructions to deal with the residual computations, as described above.
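A hedged sketch of this conventional approach (not code from the patent): the dot product is split into two partial accumulators, and an explicit epilog step handles the leftover element for odd-length vectors. This residual code is what the invention seeks to eliminate.

```python
# Illustrative only: a conventional two-way unrolled dot product.  Two
# partial accumulators run concurrently, and an explicit epilog step
# handles the leftover element of odd-length vectors.

def dot_unrolled(a, b):
    acc0 = acc1 = 0
    i = 0
    while i + 1 < len(a):
        acc0 += a[i] * b[i]              # even-indexed partial products
        acc1 += a[i + 1] * b[i + 1]      # odd-indexed partial products
        i += 2
    if i < len(a):                       # epilog: residual product
        acc0 += a[i] * b[i]
    return acc0 + acc1
```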
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as the preferred mode of use, and further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawing(s), wherein:
While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail one or more specific embodiments, with the understanding that the present disclosure is to be considered as exemplary of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described. In the description below, like reference numerals are used to describe the same, similar or corresponding parts in the several views of the drawings.
The Reconfigurable Streaming Vector Processor (RSVP) is a statically scheduled VLIW machine that executes dataflow graphs on vector data (data streams) in a highly pipelined fashion. The pipelining of calculations for the RSVP and other VLIW vector processors requires a significant amount of storage (memory) for prolog and epilog instructions. One aspect of the present invention is a method and apparatus that utilizes data validity tags for unrolling cross-iteration computations without the need to store epilog instructions.
The RSVP architecture provides resources for deeply pipelining computations, limited only by the true data dependencies and the resource limitations of the hardware. These deep computational pipelines would require many VLIW control words in the general case both for priming the pipeline (prolog) and draining the pipeline (epilog). In one example, the memory requirement for an MPEG4 encoder was reduced by 74% by eliminating prolog and epilog VLIW control words.
The prolog and epilog instructions are generally a subset of the instructions in the body of the loop. According to the present invention, mechanisms are provided to guard the execution (committing of results) to the correct subset of the loop-body. Consequently, the data path can repeatedly execute the loop-body, thereby eliminating the prolog and epilog code.
By way of example, a simple RSVP routine for performing a quantization operation will be described. The routine is coded in linearized flow form as:
This exemplary linearized flow form uses the functional operations:
A second aspect of the present invention is the elimination of the epilog instructions. In one embodiment of this aspect, illustrated in
In addition, the input vector stream sources (input VSUs) may have private iteration counts to maintain the correct architectural state at graph completion (so that, architecturally, input addresses are left pointing to the next valid element in the array upon graph completion).
In a further embodiment of the present invention, source iteration counters are used for inputs to eliminate epilog instructions. This approach is illustrated in
An exemplary embodiment of RSVP hardware 100 is shown in
An example of an output stream unit is shown in
An example of an input stream unit is shown in
One embodiment of the method of present invention is illustrated in the flow chart in
The flow chart in
The flow chart in
The flow chart in
An example of the computation of the dot product of two vectors is shown in
An alternative computation of a dot product is shown in
In accordance with a further aspect of the present invention, a functional unit is provided that is sensitive to partially valid inputs. In the example of the ADDX element described above, when only one input is tagged as invalid, that input is treated as being zero and the output is tagged as valid. When both inputs are tagged as invalid, the output is tagged as invalid. Of course, when both inputs are tagged as valid, the output is tagged as valid. Equivalently, when only one input is tagged as invalid, the other input is passed through the functional unit and tagged as valid.
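As a minimal behavioral model of the ADDX semantics just described (the patent describes a hardware element; this code is assumed for illustration, with data modeled as (value, valid) pairs):

```python
# Behavioral model only: ADDX on tagged operands, each a (value, valid)
# pair.  A single invalid operand is treated as zero, so the valid operand
# passes through tagged valid; two invalid operands yield an invalid output.

def addx(x, y):
    (xv, x_ok), (yv, y_ok) = x, y
    if x_ok and y_ok:
        return (xv + yv, True)
    if x_ok:
        return (xv, True)       # y invalid: pass x through, tagged valid
    if y_ok:
        return (yv, True)       # x invalid: pass y through, tagged valid
    return (0, False)           # both invalid: output tagged invalid
```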
The ADDX functional element, and other functional elements described below, may be implemented as a specific hardware element that always treats a mixture of valid and invalid inputs as being partially valid, or it may be implemented as a functional element in which the response to a mixture of valid and invalid inputs is controlled by an operation code or instruction. For example, if the programmer or scheduler determines that the result of an operation on a mixture of valid and invalid inputs should be valid, the functional unit is operated in a first mode. Whereas, if the programmer or scheduler determines that the result of an operation on a mixture of valid and invalid inputs should be invalid, the functional unit is operated in a second mode. The mode of operation may be controlled by an operation code or instruction.
It will be apparent to those of ordinary skill in the art that other functional units may be introduced that are sensitive to partially valid inputs. Examples include MINX (which outputs the minimum of the inputs) and MAXX (which outputs the maximum of the inputs). In these examples, when only one input is tagged as invalid, the other input is passed through the functional unit and tagged as valid. Further examples include a functional unit that performs a multiply/accumulate operation and functional units that perform logic operations such as AND, OR and XOR.
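Continuing the behavioral modeling above (the MINX and MAXX names come from the text, but the wrapper below is an assumed illustration), any binary functional element can be made sensitive to partially valid inputs by passing the single valid operand through unchanged:

```python
# Illustrative generalization: wrap any binary operation so that a single
# invalid (value, valid) operand is passed through tagged valid, and two
# invalid operands yield an invalid output.

def tagged_op(op, x, y):
    (xv, x_ok), (yv, y_ok) = x, y
    if x_ok and y_ok:
        return (op(xv, yv), True)
    if x_ok or y_ok:
        return ((xv if x_ok else yv), True)   # pass valid operand through
    return (0, False)                         # both invalid

def minx(x, y):
    return tagged_op(min, x, y)   # minimum of the valid inputs

def maxx(x, y):
    return tagged_op(max, x, y)   # maximum of the valid inputs
```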
Referring again to
A further representation of the dot product computation is shown in
At step time 0, the accumulator (ACC) is initialized to zero and the two elements of each vector are loaded. At time step 3, the first partial product is added into the accumulator and all of the invalid data has been purged from the functional units. Steps 0-3 form the effective prolog of the computation, but no special code is required. Step 3 forms the loop body of the computation and may be repeated for longer vectors. At time step 4, the last elements of the vectors are loaded. Since there are an odd number of elements, the outputs of LD 3 and LD 4 are tagged as invalid. Consequently, the output of multiplier MUL 2 is invalid in time step 5. Node E7 has one valid input and one invalid input, and so passes the valid input as its valid output to accumulator node E8. The computation ends at time step 7 (this may be indicated by the expiration of a sink counter, for example).
In this example, the accumulator (ACC) functional unit is configured to ignore (or treat as zero) any invalid input. This prevents the initial zero value from being tagged as invalid during the prolog, and allows the accumulated value to persist after the end of the computation (as in time step 8, for example).
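Putting these pieces together, a behavioral sketch of the tagged dot product (structure and names assumed from the description above, not taken from the patent's hardware): the two-way unrolled loop body runs with no epilog code, lanes past the end of an odd-length vector produce operands tagged invalid, and the ADDX-style combine plus the invalid-ignoring accumulator absorb the invalid partial product automatically.

```python
# Behavioral sketch only: a two-way unrolled dot product with NO epilog.
# Lanes that run past the end of an odd-length vector are tagged invalid;
# the combine treats invalid operands as zero, and the accumulator ignores
# a fully invalid input.

def tagged_dot(a, b):
    acc = 0                                   # accumulator initialized to zero
    n = len(a)
    for i in range(0, n, 2):                  # loop body only; no epilog
        lanes = []
        for j in (i, i + 1):
            if j < n:
                lanes.append((a[j] * b[j], True))
            else:
                lanes.append((0, False))      # past the end: tagged invalid
        s = sum(v for v, ok in lanes if ok)   # ADDX-style combine
        if any(ok for _, ok in lanes):        # accumulator ignores invalid
            acc += s
    return acc
```

For the odd-length vectors `[1, 2, 3]` and `[4, 5, 6]`, the second trip has one valid lane (18) and one invalid lane, and the result is 32 with no residual code.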
It will be apparent to those of ordinary skill in the art that counters, such as the source iteration counter and the sink iteration counter, may be incremented or decremented. The term ‘adjusting’ will be used in the sequel to mean either incrementing or decrementing. In addition, the counters need not be initialized. Instead, the starting value may be noted and the difference between the starting value and current value used to determine if the counter has expired. Hence, counter initialization is taken to mean setting the counter to a specified value or noting the current value of the counter.
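A small illustrative model of this counter scheme (the class name is invented): "initialization" notes the counter's current value rather than resetting it, and expiry is judged from the magnitude of the difference, so the underlying hardware counter may be either incremented or decremented.

```python
# Illustrative model only: counter "initialization" by noting the starting
# value.  Expiry is detected from the difference between the start and
# current values, so the counter may be adjusted in either direction.

class IterCounter:
    def __init__(self, current_value, count):
        self.start = current_value    # note the starting value
        self.count = count            # iterations until expiry

    def expired(self, current_value):
        return abs(current_value - self.start) >= self.count
```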
Those of ordinary skill in the art will recognize that the present invention has been described in terms of exemplary embodiments based upon use of the RSVP hardware architecture. However, the invention should not be so limited, since the present invention could be implemented using other processors that are equivalent to the invention as described and claimed. Similarly, general purpose computers, microprocessor-based computers, digital signal processors, microcontrollers, dedicated processors, custom circuits and/or ASICs may be used to construct alternative equivalent embodiments of the present invention. In addition, those skilled in the art will appreciate that the processes described above can be implemented in any number of variations and in many suitable programming languages without departing from the present invention. For example, the order of certain operations carried out can often be varied, additional operations can be added, or operations can be deleted without departing from the invention. Such variations are contemplated and considered equivalent.
While the invention has been described in conjunction with specific embodiments, it is evident that many alternatives, modifications, permutations and variations will become apparent to those of ordinary skill in the art in light of the foregoing description. Accordingly, it is intended that the present invention embrace all such alternatives, modifications and variations as fall within the scope of the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5021945 *||Jun 26, 1989||Jun 4, 1991||Mcc Development, Ltd.||Parallel processor system for processing natural concurrencies and method therefor|
|US5542084||Nov 22, 1994||Jul 30, 1996||Wang Laboratories, Inc.||Method and apparatus for executing an atomic read-modify-write instruction|
|US5790880 *||Jan 4, 1996||Aug 4, 1998||Advanced Micro Devices||Microprocessor configured to dynamically connect processing elements according to data dependencies|
|US5852729||Jun 29, 1993||Dec 22, 1998||Korg, Inc.||Code segment replacement apparatus and real time signal processor using same|
|US6289443||Jan 27, 1999||Sep 11, 2001||Texas Instruments Incorporated||Self-priming loop execution for loop prolog instruction|
|US6539541||Aug 20, 1999||Mar 25, 2003||Intel Corporation||Method of constructing and unrolling speculatively counted loops|
|US6912709 *||Dec 29, 2000||Jun 28, 2005||Intel Corporation||Mechanism to avoid explicit prologs in software pipelined do-while loops|
|US7200738 *||Apr 18, 2002||Apr 3, 2007||Micron Technology, Inc.||Reducing data hazards in pipelined processors to provide high processor utilization|
|US7272704 *||May 13, 2004||Sep 18, 2007||Verisilicon Holdings (Cayman Islands) Co. Ltd.||Hardware looping mechanism and method for efficient execution of discontinuity instructions|
|US20020112228||Dec 7, 2000||Aug 15, 2002||Granston Elana D.||Method for collapsing the prolog and epilog of software pipelined loops|
|US20030154469||Aug 21, 2002||Aug 14, 2003||Timothy Anderson||Apparatus and method for improved execution of a software pipeline loop procedure in a digital signal processor|
|US20040064682 *||Sep 27, 2002||Apr 1, 2004||Hung Nguyen||System and method for simultaneously executing multiple conditional execution instruction groups|
|US20060101251 *||Nov 14, 2005||May 11, 2006||Lsi Logic Corporation||System and method for simultaneously executing multiple conditional execution instruction groups|
|US20060190706 *||Apr 18, 2006||Aug 24, 2006||Baxter Jeffery J||Processor utilizing novel architectural ordering scheme|
|1||Hennessy and Patterson, Computer Architecture A Quantitative Approach, 1996, Morgan Kaufman Publishers, Inc., Second Edition, pp. 239-247.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7873947 *||Jan 18, 2011||Arun Lakhotia||Phylogeny generation|
|US8010954 *||Aug 30, 2011||The Mathworks, Inc.||Parallel programming interface to dynamically allocate program portions|
|US8239844||Aug 7, 2012||The Mathworks, Inc.||Method of using parallel processing constructs and dynamically allocating program portions|
|US8239845||Oct 20, 2008||Aug 7, 2012||The Mathworks, Inc.||Media for using parallel processing constructs|
|US8239846||Oct 20, 2008||Aug 7, 2012||The Mathworks, Inc.||Device for performing parallel processing of distributed arrays|
|US8250550||Oct 20, 2008||Aug 21, 2012||The Mathworks, Inc.||Parallel processing of distributed arrays and optimum data distribution|
|US8255889||Oct 20, 2008||Aug 28, 2012||The Mathworks, Inc.||Method of using parallel processing constructs and dynamically allocating program portions|
|US8255890||Aug 28, 2012||The Mathworks, Inc.||Media for performing parallel processing of distributed arrays|
|US8302076 *||Oct 30, 2012||Landmark Graphics Corporation||Systems and methods for improved parallel ILU factorization in distributed sparse linear systems|
|US8527973||Aug 22, 2011||Sep 3, 2013||The Mathworks, Inc.||Parallel programming interface to dynamically allocate program portions|
|US8707280||Jun 29, 2012||Apr 22, 2014||The Mathworks, Inc.||Using parallel processing constructs and dynamically allocating program portions|
|US8707281||Jul 23, 2012||Apr 22, 2014||The Mathworks, Inc.||Performing parallel processing of distributed arrays|
|US8726251 *||Mar 29, 2011||May 13, 2014||Oracle International Corporation||Pipelined loop parallelization with pre-computations|
|US8813053||Sep 25, 2012||Aug 19, 2014||Landmark Graphics Corporation||Systems and methods for improved parallel ILU factorization in distributed sparse linear systems|
|US20080201721 *||May 15, 2007||Aug 21, 2008||The Mathworks, Inc.||Parallel programming interface|
|US20090044179 *||Oct 20, 2008||Feb 12, 2009||The Mathworks, Inc.||Media for performing parallel processing of distributed arrays|
|US20090044180 *||Oct 20, 2008||Feb 12, 2009||The Mathworks, Inc.||Device for performing parallel processing of distributed arrays|
|US20090044196 *||Oct 20, 2008||Feb 12, 2009||The Mathworks, Inc.||Method of using parallel processing constructs|
|US20090044197 *||Oct 20, 2008||Feb 12, 2009||The Mathworks, Inc.||Device for using parallel processing constructs|
|US20090049435 *||Oct 20, 2008||Feb 19, 2009||The Mathworks, Inc.||Parallel processing of distributed arrays|
|US20090132867 *||Oct 20, 2008||May 21, 2009||The Mathworks, Inc.||Media for using parallel processing constructs|
|US20100122237 *||Nov 12, 2008||May 13, 2010||Landmark Graphics Corporation, A Halliburton Company||Systems and Methods For Improved Parallel ILU Factorization in Distributed Sparse Linear Systems|
|US20120254888 *||Mar 29, 2011||Oct 4, 2012||Oracle International Corporation||Pipelined loop parallelization with pre-computations|
|U.S. Classification||717/151, 712/E09.017, 717/150, 712/229, 712/241, 712/E09.05|
|Cooperative Classification||G06F9/3842, G06F9/3001|
|European Classification||G06F9/30A1A, G06F9/38E2|
|Nov 19, 2004||AS||Assignment|
Owner name: MOTOROLA, INC., ILLINOIS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ESSICK, IV, RAYMOND BROOKE;LUCAS, BRIAN GEOFFREY;REEL/FRAME:016017/0995
Effective date: 20041116
|Dec 13, 2010||AS||Assignment|
Owner name: MOTOROLA MOBILITY, INC, ILLINOIS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA, INC;REEL/FRAME:025673/0558
Effective date: 20100731
|Jun 25, 2012||FPAY||Fee payment|
Year of fee payment: 4
|Oct 2, 2012||AS||Assignment|
Owner name: MOTOROLA MOBILITY LLC, ILLINOIS
Free format text: CHANGE OF NAME;ASSIGNOR:MOTOROLA MOBILITY, INC.;REEL/FRAME:029216/0282
Effective date: 20120622
|Nov 24, 2014||AS||Assignment|
Owner name: GOOGLE TECHNOLOGY HOLDINGS LLC, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA MOBILITY LLC;REEL/FRAME:034419/0001
Effective date: 20141028
|Jul 6, 2016||FPAY||Fee payment|
Year of fee payment: 8