WO2004042623A2 - Method and system for the design of pipelines of processors - Google Patents


Publication number
WO2004042623A2
PCT/US2003/032340 (US0332340W)
Authority
WO
WIPO (PCT)
Prior art keywords
pipeline
schedule
hardware
processor
steps
Prior art date
Application number
PCT/US2003/032340
Other languages
French (fr)
Other versions
WO2004042623A3 (en)
Inventor
Robert S. Schreiber
Shail A. Gupta
Bantwal R. Rau
Vinod K. Kathail
Santosh G. Abraham
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to JP2004550023A priority Critical patent/JP2006505061A/en
Priority to EP03774793A priority patent/EP1559041A2/en
Priority to AU2003282604A priority patent/AU2003282604A1/en
Publication of WO2004042623A2 publication Critical patent/WO2004042623A2/en
Publication of WO2004042623A3 publication Critical patent/WO2004042623A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design

Definitions

  • the present invention is related to computer hardware design and, more particularly, to methods, software and systems for the design of pipelines of processors.
  • a large class of modern embedded-system computations can be expressed as a sequence of transformations on one or more streams of data.
  • the corresponding architecture of such systems is typically organized as a pipeline of processors wherein each stage accepts data from an initial input or from an output of a previous stage, performs a specific task or transformation on the data, and passes the resulting data along to the next stage (if any) or an output of the pipeline.
  • the term "processor" is used here to encompass a broad class of computing devices, both programmable and nonprogrammable, both statically and dynamically scheduled.
  • the data passed between stages may be "fine-grained" (e.g., word-level) or coarse-grained (e.g., a block or a stripe of data elements).
  • the data exchange between stages may be synchronized using a handshake mechanism between stages (e.g., asynchronous operation) or timed by a system-wide clock and a fixed delay between stages (e.g., synchronous operation).
  • the design of such a pipeline of processors involves the design of each stage performing a specific task, including its initialization, finalization and control, the design of the buffering mechanism between each pair of stages, and the synchronization mechanism used between every pair of producer (i.e., source or transmitting) and consumer (i.e., destination or receiving) processor stages.
  • Such pipeline architectures are designed manually by looking at a functional specification of the system (i.e., a design document and/or a reference implementation of functionality) and carefully identifying all the components and parameters of the design and optimizing them for minimal cost and maximal performance.
  • a method of designing a pipeline comprises the steps of: accepting a task procedure expressed in a standard programming language, the task procedure including a collection of computational steps, and serving to define the computational function that is to be performed by the pipeline; accepting a performance requirement of the pipeline; and automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of the processor stages for performing a respective one of the computational steps, the pipeline having characteristics consistent with the performance requirement of the pipeline.
  • a method of designing a pipeline comprises the steps of: reading a task procedure and a desired throughput of the pipeline, the task procedure including one or more statements; identifying iteration spaces, input, output, and internal (i.e., local to the task procedure) data structures; analyzing dependencies between statements; finding at least one dependence relation between the statements; calculating a valid and desirable multi-schedule, this being a scheduled start time (relative to the start time of said task procedure) for each point in each of said iteration spaces, as well as a scheduled time for each operation relative to the start time of the iteration in which the operation resides; optimizing access of at least one internal data structure using the multi-schedule to minimize a size of a hardware buffer; producing a hardware processor for each loop nest and straight-line segment; and producing optimized hardware buffers to contain values of the internal data structures.
  • a system for designing a pipeline comprises: a memory storing a set of program instructions; and a processor connected to the memory and responsive to the set of program instructions for: (i) accepting a task procedure expressed in a standard programming language, the task procedure including a sequence of computational steps, (ii) accepting a performance requirement of the pipeline, and (iii) automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of the processor stages for performing a respective one of the computational steps, the pipeline having characteristics consistent with the performance requirement of the pipeline.
  • a program of computer instructions stored on a computer readable medium includes computer code for performing the steps of: accepting a task procedure expressed in a standard programming language, the task procedure including a sequence of computational steps; accepting a performance requirement of the pipeline; and automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of the processor stages for performing a respective one of the computational steps, the pipeline having characteristics consistent with the performance requirement of the pipeline.
  • FIGURE 1 is a block diagram of a generic prior art pipeline configuration of processors.
  • FIGURE 2 is a block diagram of a computer system consistent with one embodiment of the present invention.
  • FIGURE 3 is a portion of code defining a series of steps to be implemented in hardware in the form of a pipeline of processors according to one embodiment of the present invention.
  • FIGURE 4 is a flow chart of the steps performed when synthesizing a processor pipeline according to one embodiment of the present invention.
  • FIGURES 5A and 5B illustrate a flow chart of the steps performed when synthesizing an asynchronous complex pipeline composed of synchronous processor sub- pipelines according to another embodiment of the present invention.
  • a large class of modern embedded system computations can be expressed as a sequence of transformations on a stream of data.
  • the sequence of transformations may be performed by an acyclic network of processing stages, referred to herein as a "general pipeline" (or simply "pipeline"), with at least one start stage that accepts input data from external sources, at least one end stage which outputs data to external destinations, and some number of intermediate stages, each of which accepts data from at least one preceding stage, performs a specific computation or transformation, and forwards its results to at least one subsequent stage.
  • a simple example of a "general pipeline” is the common notion of a linear pipeline consisting of a linear sequence of processing stages, where the first stage of the pipeline accepts input data, and each subsequent stage of the pipeline may accept data from the previous stage, may perform a specific computation or transformation on the data, and may pass the result along to the next stage of the pipeline or, in the case of the last stage of a pipeline, output the data.
  • the entire sequence of computations on a given set of input data is called a "task”, and the computation within each stage of the pipeline for that input data is called a "step”.
  • control information may also be necessary to ensure that the various stages of the pipeline perform their function at the appropriate time.
  • Pipeline stages may be separated by buffers, e.g., registers, fifos, or random-access memory, that may be used to store data between the various stages of the pipeline.
  • the present invention includes embodiments for synthesizing a hardware pipeline or a pipeline hardware processor.
  • a hardware pipeline processor implements a sequence of steps applied to a sequence of data packets. Each item in the sequence of data packets is processed by the series of steps in succession, different items in the sequence of data being processed in parallel by each of the stages of the pipeline.
  • Such pipeline configurations are used throughout the electronics industry to provide enhanced computational capabilities and, in particular, where substantial "number crunching" and real-time throughput are required, such as for image processing performed by photo-quality color printers, digital cameras, etc.
  • a pipeline may be configured such that data is input to step 1 (e.g., a first functional unit or system such as a processor) of an n-step process, an output being provided from the final step n.
  • Other input streams may enter into other steps of the computation, and other output streams may emerge from other steps of the computation.
  • Data progresses along each of the steps 1 through n such that all n steps are performed at least partially in parallel, but operating on different data.
  • such architectures are used, for example, in digital cameras and printers to implement a color pipeline and otherwise perform image processing.
  • Embodiments of the present invention include mechanisms to design and synthesize the architecture of some pipelines automatically.
  • interstage buffer may take the form of a register, set or file of registers, shift register, hardware FIFO, random-access memory (RAM), or any other hardware structure suitable for storing data values.
  • the type and kind of such buffer storage may be chosen by a design system so as to minimize cost.
  • Internal data structures that are referenced in only one loop nest may be stored in optimized intra-stage buffer storage, the type of which and related parameters may also be chosen by a design system to minimize cost.
  • a mechanism may make all the decisions concerning hardware resources to be used, mapping and scheduling the tasks to be performed onto these resources, and generating control logic for step initialization, finalization, and the start/stop of the various stages automatically.
  • Embodiments of the invention may include steps and mechanisms to carefully balance several objectives in synthesis of the pipeline including, for example, obtaining high throughput in the pipeline, matching the input/output (I/O) rates of various stages, matching the order in which the elements of a data block are produced with the order in which the elements of the data block are consumed, and minimizing the size of inter-stage buffers.
  • Each of these steps, to the extent implemented by the various embodiments of the invention, may include further advantageous methods and structures.
  • minimization of inter- or intra-stage buffer resource requirements may implement a buffer reuse procedure wherein buffer resources are utilized in a time-shared fashion.
  • features of the various embodiments of the invention may include choosing a synchronization mechanism and granularity appropriate to the pipeline.
  • a designer may provide a description of the functions of the pipeline by providing a segment of program code describing the sequence of computational steps, preferably in the form of a sequence of loop nests.
  • Each of the loop nests describes one of the steps to be implemented by a respective stage of the pipeline.
  • some portions of straight-line code may be included between loop nests.
  • the segments of straight-line code may be handled as if they were loop nests of depth zero.
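FIGURE 3 itself is not reproduced in this text. As a purely illustrative sketch (the names, bounds, and the doubling/increment operations are invented for this example and are not taken from the patent), a task procedure in the preferred form, a sequence of loop nests communicating through an internal data structure, might look like:

```c
#include <assert.h>

#define N 8  /* illustrative problem size */

/* Hypothetical task procedure: two loop nests (two pipeline stages)
   communicating through the internal array tmp, which a design system
   would realize as an inter-stage buffer. */
void task_proc(const int in[N], int out[N])
{
    int tmp[N];                        /* internal data structure */

    for (int i1 = 0; i1 < N; i1++)     /* loop nest 1 -> stage 1 */
        tmp[i1] = in[i1] * 2;

    for (int i2 = 0; i2 < N; i2++)     /* loop nest 2 -> stage 2 */
        out[i2] = tmp[i2] + 1;
}
```

Each loop nest here is a candidate for its own synchronously scheduled processor stage, with `tmp` becoming an optimized hardware buffer between them.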
  • a further aspect of the invention allows a user to describe the performance desired by specifying criteria such as, for example, how often a task is to be submitted to and processed by the pipeline. This may require a certain minimum interval between tasks, as specified by the user; this interval is called the minimum inter-task [initiation] interval (MITI).
  • the MITI specifies the fastest rate at which tasks may be "pushed through” (i.e., processed by) the pipeline.
  • a design may also be required to adhere to constraints on the order of reading the input and order of producing the output.
  • FIGURE 2 depicts a detailed view of computer 200 compatible with an embodiment of the invention, including main memory 201, Cache 203, secondary storage device 204, Central Processing Unit (CPU) 206, video display 207 and input/output device 208 connected by system bus 209.
  • Main memory 201 stores compiler 202.
  • Compiler 202 may operate on source code 205 stored in secondary storage device 204 to produce executable object code. For simplicity it is assumed that related functions including, for example, linking and loading, are performed as necessary to provide an executable object module.
  • memory allocation, including for Cache 203 may be performed by a combination of software and other systems including, but not limited to, a compiler, linker, loader, specialized utility, etc.
  • an example of a procedure is provided to be implemented by a pipeline of processors.
  • the procedure should be well defined with respect to specified internal data and with specified input and output data clearly shown.
  • Another desirable feature of the procedure is that it be well structured, consisting of a sequence of loop nests and intervening well-structured code. That is, among other things, it is preferable that the procedure be expressed without "go to" statements.
  • the desired performance of the pipeline may be provided; one way of doing so is by specification of the MITI.
  • the MITI describes the minimum time between the arrivals of input data packets. For example, if the MITI is 100 machine cycles, then the hardware pipeline must be able to accept a new data packet every 100 cycles and, therefore, sustain a task-handling rate of up to one task every 100 cycles.
  • embodiments of the invention incorporate the use of static scheduling.
  • in statically scheduled hardware, the time at which every operation in the task procedure executes is fixed in advance, relative to the task's start time (e.g., synchronous operation), rather than being dynamically determined as the result of some event or events occurring during a computation (e.g., asynchronous operation).
  • Static scheduling may be illustrated in connection with the procedure of FIGURE 3. Therein, five assignment statements S1-S5 are found within "task_proc". To each of these assignment statements is associated a space of loop iterations and associated values of the enclosing loop index variables. For example, the iteration space for S1 is { i1 | 1 ≤ i1 ≤ 100 }, and the space for S5 is defined analogously by the bounds of its enclosing loops.
  • a start time may be associated with each iteration of every iteration space. These start times comprise an iteration schedule.
  • the operations found in each iteration space may have a start time relative to the start of the iteration in which they reside. These relative start times, one for each operation in the task procedure, comprise an operation schedule. The combination of an iteration schedule and an operation schedule precisely determines the start time of every operation that is or may be performed during the execution on one task, relative to the start time of the task.
  • affine functions of iteration index vectors may be employed.
  • An integer vector k is associated with each iteration space.
  • iteration i starts at time k_0 plus the dot product of i and (k_1, k_2, ..., k_d). We denote this by <k,i>.
  • a different vector k is chosen for each iteration space.
  • an operation (such as the add operation in statement S5 of the task procedure shown in FIGURE 3) that occurs in some iteration space whose index vector is denoted by i may have a start time, denoted by t(op), given by an operation schedule, relative to the start time of iteration i.
  • given an affine iteration schedule such as is described in the previous paragraph, the instance of the operation that occurs in iteration i is scheduled to start at time <k,i> + t(op).
  • Such a schedule consisting of the affine iteration schedule and the one-per- operation relative start times, is called a shifted-affine schedule.
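the shifted-affine start-time arithmetic above can be sketched directly (the function names and the layout of k as a flat array (k_0, k_1, ..., k_d) are assumptions of this example, not the patent's):

```c
#include <assert.h>

/* Start time of iteration i under an affine schedule k = (k_0, k_1, ..., k_d):
   <k,i> = k_0 + k_1*i_1 + ... + k_d*i_d. */
int iteration_start(const int *k, const int *i, int d)
{
    int t = k[0];                  /* constant term k_0 */
    for (int j = 1; j <= d; j++)   /* dot product of (k_1..k_d) with i */
        t += k[j] * i[j - 1];
    return t;
}

/* Shifted-affine start time of an operation: <k,i> + t(op). */
int operation_start(const int *k, const int *i, int d, int t_op)
{
    return iteration_start(k, i, d) + t_op;
}
```

For example, with k = (5, 2, 3) and i = (4, 1), iteration i starts at 5 + 2*4 + 3*1 = 16, and an operation with t(op) = 2 starts at cycle 18.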
  • an affine schedule is provided organizing the computation of a loop nest.
  • a suitable mechanism for providing for creation of a hardware processor from a single loop nest is provided by the HP Labs PICO-NPA system.
  • This system uses an affine schedule to achieve a specified compute time for a loop nest implemented by a hardware processor. Using such a system, it is possible to treat the task procedure that specifies a function as a multi-loop.
  • the collection of affine schedules (expressed as the vectors k), one for each iteration space, provides an affine iteration multi-schedule, referred to herein simply as a multi-schedule in this application.
  • embodiments of the invention may include a process for determining a shifted multi-schedule that satisfies both certain required and certain desirable criteria.
  • required criteria may include a temporal ordering and a required throughput.
  • desired criteria may include a reduced total cost of the hardware used to realize the pipeline.
  • it must be possible to carry out the required computations in the temporal order specified by the shifted multi-schedule without violating the semantics of the original program. This means, among other things, that an operation O2 that requires a particular value computed by an operation O1 must not execute before O1 has executed, so that all required values are available when needed by O2.
  • a shifted multi-schedule is considered to be "valid" if it meets this requirement. It is mathematically possible to determine whether, for a given affine iteration schedule, there exists an operation schedule that, together with it, comprises a valid shifted multi-schedule. Accordingly, an affine iteration schedule is itself considered valid if such an operation schedule can be found.
  • the throughput implied by the multi-schedule must be at least as great as that required by the user. For example, if the user has specified a MITI, then the total length of the schedule for the iteration space, i.e., the difference between the maximum of <k,i> (the latest start time of any iteration in the space) and the minimum of <k,i> (the earliest start time of any iteration), must be less than the specified MITI.
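the schedule-length test above can be sketched for a rectangular iteration space with bounds 0 <= i_j < n_j (the rectangular-bounds assumption and the function names are this example's, not the patent's); over such a box, each component contributes |k_j| * (n_j - 1) to the spread of <k,i>:

```c
#include <assert.h>

/* Total schedule length (max <k,i> - min <k,i>) over the rectangular
   iteration space 0 <= i_j < n_j, given linear terms k_1..k_d. */
int schedule_length(const int *k, const int *n, int d)
{
    int len = 0;
    for (int j = 1; j <= d; j++) {
        int c = k[j] * (n[j - 1] - 1);  /* spread contributed by axis j */
        len += c < 0 ? -c : c;          /* |k_j| * (n_j - 1) */
    }
    return len;
}

/* Throughput criterion: the schedule must fit within one MITI. */
int meets_miti(const int *k, const int *n, int d, int miti)
{
    return schedule_length(k, n, d) < miti;
}
```

For a 10x10 space with linear terms (1, 10), the length is 9 + 90 = 99, which satisfies a MITI of 100 cycles but not one of 99.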
  • Embodiments of the invention may consider and minimize the total cost of the hardware. In general, the total hardware cost is dominated by the hardware required to realize the specified operations and storage required for internal data structures.
  • Embodiments of the invention include steps that lead to the design of hardware pipelines in which each loop nest of a task procedure is implemented by a pipeline stage in which an array of processing elements is deployed, and each such processing element is itself pipelined and has the capacity to process iterations of the loop nest periodically, at a rate determined by an initiation interval (II).
  • Such a pipelined processor starts execution and finishes execution of one iteration of the loop nest every II machine cycles.
  • Embodiments of the invention include steps for determining, for each loop nest in the task procedure, both the number of processing elements to be deployed in the pipeline stage that implements the loop nest and the initiation interval of the processing elements, thereby determining the throughput of the pipeline stage, in a manner consistent with the criterion of achieving the required MITI.
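one plausible sizing rule consistent with the criterion above is the following (the patent does not spell out an exact formula here; this ceiling computation is an assumption of the sketch): with `iters` iterations per task and each processing element accepting one iteration every II cycles, enough PEs are needed that all iterations complete within one MITI.

```c
#include <assert.h>

/* Illustrative sizing rule: smallest PE count such that
   iters * ii / num_pe <= miti, i.e. ceil(iters * ii / miti). */
int num_processing_elements(int iters, int ii, int miti)
{
    return (iters * ii + miti - 1) / miti;  /* integer ceiling division */
}
```

For example, a loop nest of 1000 iterations with II = 1 and a MITI of 100 cycles would need 10 processing elements under this rule.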
  • embodiments of the invention include steps, processing and/or structure to identify data dependencies within each iteration space and across iteration spaces.
  • the MITI is used to decide the number of processors and the capacity or performance of each processor needed to obtain the desired throughput for each loop nest.
  • the dependencies and the throughput information are used to schedule the iterations of each loop nest (e.g., to determine a mapping from iteration space index vector to the start time of the iteration, such as the affine mapping with parameters k_1, ..., k_n) and to map the iterations to space (e.g., to assign each iteration to a processing element).
  • the data produced by one loop nest and consumed by another is passed via an inter-stage buffer whose type, size, and data mapping (the location of a given data item in the buffer is determined by the data mapping) are generated by the pipeline design process.
  • the schedule of a pair of loop nests is selected such that the size of such inter-stage buffers is minimized.
  • Each loop nest is then transformed, according to the identified iteration schedules, into a time loop and a space loop, and the addressing of each data structure is transformed to address the appropriate kind of inter-stage buffer.
  • each loop may be synthesized into an array of processors according to its iteration schedule and throughput requirement, and then multiple arrays of processors are "strung" together in a pipeline communicating via inter-stage buffers.
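a minimal sketch of the time-and-space loop transformation for a one-dimensional nest follows (the cyclic iteration-to-PE mapping i = t*P + pe, the names, and the body are illustrative assumptions, not the patent's exact transformation):

```c
#include <assert.h>

#define N 12  /* iterations in the original loop (illustrative) */
#define P 3   /* processing elements deployed for this stage */

/* Original loop: for (i = 0; i < N; i++) out[i] = in[i] + 1;
   rewritten as an outer time loop and an inner space loop, where the
   space loop models P processing elements operating in parallel. */
void time_space_loop(const int in[N], int out[N])
{
    for (int t = 0; t < N / P; t++)       /* time loop: one wave per step */
        for (int pe = 0; pe < P; pe++) {  /* space loop: runs on P PEs */
            int i = t * P + pe;           /* recover original index */
            out[i] = in[i] + 1;
        }
}
```

In hardware, the space loop disappears: its iterations execute concurrently on the P processing elements, while the time loop advances once per initiation interval.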
  • a back-end hardware synthesis step may arrive at a detailed hardware synthesis of processor arrays and a determination of the schedule of each operation of each loop body relative to the start time for the corresponding iteration of the loop.
  • the mechanism to transform a single loop nest into an array of processors is extended to ensure that, after the detailed synthesis and scheduling of operations, the addressing and the size of the inter-stage buffers are maintained constant.
  • inter-stage buffer hardware may be adjusted to compensate for perturbations of the schedule.
  • Hardware to initialize and finalize each stage of the pipeline may also be generated based on an analysis of the "live-in" and "live-out" values from each loop nest (i.e., the first and last uses of the data).
  • a pipelined timing controller that is generated automatically using the time schedule of the previously identified loop nests may control each stage of the pipeline. The entire pipeline architecture is then provided in a standard description language at the register-transfer level.
  • the operations of iteration i are scheduled at a time equal to the sum of <k,i> and a relative schedule time for the operation.
  • the affine multi-schedule (the vectors k, one per iteration space) is determined first according to the methods of the invention.
  • the operation schedule is determined afterwards.
  • the parameters of data buffers may be finalized after the operation schedule is chosen.
  • parameters of data buffers are chosen before an operation schedule, and an operation schedule may be chosen afterwards with constraints derived from the use of data buffers whose parameters have been determined.
  • an embodiment of a method according to the invention may begin at step 401.
  • a task procedure is read together with desired throughput to be provided by a pipeline.
  • the task procedure may be provided as a set of instructions such as depicted and described in connection with FIGURE 3.
  • the task procedure may be expressed in a standard programming language, for example the C programming language, as indicated in step 412.
  • Throughput may be provided in the form of MITI, as indicated in step 413.
  • the sequence of steps 403 through 410, with the ancillary steps 414 through 418, is executed. Together they comprise a macro-step of creating a hardware description of a synchronous hardware pipeline as interconnected, synchronously scheduled processor stages.
  • in step 403, iteration spaces, input, output, and internal data structures to be used by the pipeline are identified.
  • in step 404, an analysis is performed of all the dependencies between the statements, and all the dependence relationships are found.
  • in step 405, user constraints on the schedule, in the form of a specification by the user of some components of the affine multi-schedule, can be accepted. These allow the user to specify, among other things, a required order for reading the input or writing the output from the pipeline.
  • in step 406, a calculation may be performed to provide a valid and desirable multi-schedule. Such calculation may include a step (step 416 in the figure) of determining the number of processing elements and their initiation interval for each loop nest.
  • Optimization of the ways in which internal data structures are accessed is performed at step 407, making use of knowledge of the multi-schedule to reduce the size of the hardware buffers used to store internal data structures.
  • optimization may include a step (e.g., step 417) of choosing a type of buffer storage for implementing an internal data structure or data structures.
  • such optimization may include determining a folded mapping of the array to memory locations so as to minimize the amount of memory required for storage of the array, as indicated in step 415.
  • Techniques for reducing or minimizing the size of the internal data structures may include time-sharing of memory through folded mappings of data arrays to memory, as described in co-pending U.S. Application Serial No. (attorney docket no. 100110564-1) entitled OPTIMIZING MEMORY USAGE WITH DATA LIFETIMES, and/or by the optimized management of memory as described, for example, in co-pending U.S. Application Serial No. (attorney docket no. 100110565-1) entitled METHOD OF AND SYSTEM FOR MEMORY MANAGEMENT OPTIMIZATION, both of which are incorporated herein in their entireties by reference.
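a folded mapping can be sketched as a modular address computation (the window size W = 4, the names, and the squaring producer are assumptions of this example): if at most W elements of an array are simultaneously live, element a[i] can be stored at buf[i % W], time-sharing W locations instead of allocating the full array.

```c
#include <assert.h>

#define W 4  /* assumed maximum number of simultaneously live values */

/* Folded (modular) mapping: location of a[i] in the reduced buffer. */
int folded_addr(int i)
{
    return i % W;
}

/* Demonstration: produce a[i] = i*i and consume a[i - (W-1)] while it is
   still live; the folded buffer of W words suffices for 16 iterations. */
int folded_demo(void)
{
    int buf[W];
    int sum = 0;
    for (int i = 0; i < 16; i++) {
        buf[folded_addr(i)] = i * i;                  /* write a[i] */
        if (i >= W - 1)
            sum += buf[folded_addr(i - (W - 1))];     /* read a[i-W+1] */
    }
    return sum;
}
```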
  • a hardware processor is produced for each loop nest and straight-line segment in the task procedure; the hardware processor may take the form of a cost-reduced synchronously scheduled processor or array of processing elements, as indicated in step 418.
  • an optimized hardware buffer configuration is produced that contains the values of the internal data structures.
  • a pipeline controller is produced as well; it signals each pipeline segment to start at the appropriate clock cycles for a particular task.
  • a pipelined architecture is provided that may be used to generate a register- transfer level (RTL) description of one pipeline.
  • an embodiment of another method according to the invention may begin at step 501.
  • a task procedure is read together with a desired throughput to be provided by a pipeline.
  • the task procedure may be provided as a set of instructions such as depicted and described in connection with FIGURE 3.
  • the task procedure may be expressed in a standard programming language, for example the C programming language, as indicated in step 515.
  • Throughput may be provided in the form of MITI, as indicated in step 516.
  • iteration spaces, input, output, and internal data structures to be used by the pipeline are identified.
  • an analysis is performed of all the dependencies between the statements and all the dependence relationships are found.
  • a data-flow graph representation of the task procedure is constructed.
  • the data-flow graph is segmented into connected subgraphs; this segmentation may be derived automatically using heuristic algorithms or it may be provided by a user and accepted by the automatic design procedure.
  • in step 507, user constraints on the schedule, in the form of a specification by the user of some components of the affine multi-schedule, can be accepted. These allow the user to specify, among other things, a required order for reading the input or writing the output from the pipeline.
  • Steps 508 through 512, together with their ancillary steps 517 through 521 are then repeated, once for each of the subgraphs determined in step 506.
  • These steps constitute the macro-step of generating a hardware description of a synchronous hardware sub-pipeline and control unit for each subgraph of the segmented data-flow graph. A description of these steps follows.
  • a calculation may be performed to provide a valid and desirable multi-schedule.
  • Such calculation may include a step 519 of determining the number of processing elements and their initiation interval for each loop nest. User-specified constraints, such as a partial specification of the multi-schedule, are honored at step 517. Further details of step 508 are discussed below.
  • Optimization of the ways in which internal data structures are accessed is performed at step 509, making use of knowledge of the multi-schedule to reduce the size of the hardware buffers used to store internal data structures.
  • Such optimization may include a step (e.g., step 520) of choosing a type of buffer storage for implementing an internal data structure or data structures.
  • such optimization may include determining a folded mapping of the array to memory locations so as to minimize the amount of memory required for storage of the array, as indicated in step 518.
  • Techniques for reducing or minimizing the size of the internal data structures may include time-sharing of memory through folded mappings of data arrays to memory, as described in co-pending U.S. Application Serial No. (attorney docket no. 100110564-1).
  • a hardware processor is produced for each loop nest and straight-line segment in the task procedure; said hardware processor may take the form of a cost-reduced synchronously scheduled processor or array of processing elements, as indicated in step 521.
  • an optimized hardware buffer configuration is produced that contains the values of the internal data structures.
  • a pipeline controller is produced as well; it signals each pipeline segment to start at the appropriate clock cycles for a particular task.
  • a pipelined architecture is provided that may be used to generate a register-transfer level (RTL) description of one synchronous sub-pipeline in an asynchronous complex pipeline.
  • in step 513, hardware descriptions are produced for synchronization hardware and expandable data buffers connecting the synchronous sub-pipelines previously created (one per subgraph). As a result of this processing, an RTL hardware description of an asynchronous complex pipeline for implementing the task procedure is created.
  • a feature of the various embodiments of the invention may include use of an optimized, statically chosen, affine multi-schedule.
  • Use of an affine multi-schedule supports the building of efficient hardware that does not waste time and resources doing run time synchronization.
  • the multi-schedule is optimized in the sense that, among all possible, legal affine multi-schedules, the selected schedule achieves required performance criteria while minimizing hardware implementation costs. This multi-schedule may be identified automatically.
  • the multi-schedule may consist of an affine schedule k for each iteration space.
  • the affine schedule has a constant term k_0 and a linear term (k_1, ..., k_d). All of these terms represent integers.
  • the total schedule length for an iteration space may be computed given its affine schedule: the total schedule length equals the difference between the latest and the earliest scheduled iteration start times, i.e., the maximum of <k,i> minus the minimum of <k,i> over the iteration space.
  • Finding a multi-schedule also may take into consideration dependencies between iterations of a single loop nest, and dependencies between iterations of different loop nests. These dependencies determine whether or not a multi-schedule is valid. For dependencies between iterations of a single loop nest, it is required that the linear portions of the schedule vectors k satisfy certain linear inequalities, determined by analysis of the dependence relation. The schedule vectors for other iteration spaces may not be a concern, and constant terms need not be considered. For dependencies between statements in different loop nests, one of which is necessarily a predecessor of the other in the sequential control flow, the situation is somewhat different.
  • one way to find a valid multi-schedule includes the steps of:
  • Step 406 further may include estimating a hardware cost for a selected multischedule. This may be performed by modeling the hardware cost, measuring the cost of the implementation of each step as a statically scheduled, dedicated, special purpose processor array, as well as calculating costs of inter-stage and intra-stage buffers that hold values of the internal data structures as needed. Estimators are used to determine these costs, the estimators driven by the user program and by a proposed multi-schedule. The resultant estimations support selection from among proposed, valid multi-schedules, using the estimated hardware cost as a selection criterion so as to minimize such hardware costs.
  • One platform for performing the processing specified by step 408 includes the "program in, chip out" (PICO) technology developed by Hewlett-Packard Laboratories (HPL) and described in U.S. Patent 6,298,071 issued October 2, 2001 and U.S. Patent Application Nos. 09/378,289 and 09/378,431, all of which are herein incorporated in their entireties by reference.
  • a task procedure may be automatically represented as a dataflow graph in which each graph vertex represents one of the iteration spaces of the task procedure, and graph edges represent a flow of data between the iteration spaces.
  • the dataflow graph may be segmented into connected subgraphs such that each subgraph is suitable for implementation as a synchronous statically scheduled pipeline, using the methods of the present invention.
  • hardware may be synthesized for an ensemble of hardware pipelines, one for each subgraph of the dataflow graph, and each in a separate synchronous hardware clock domain.
  • Asynchronous protocols and expandable storage elements such as FIFO or RAM storage may be used to buffer data and synchronize transactions that span multiple clock domains.
  • One possible implementation for a subgraph is compilation onto a programmable processor whose throughput is known but whose schedule is dynamically determined.
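The schedule-length and throughput conditions listed above can be illustrated concretely. The sketch below (a hedged illustration in Python; the function names and the rectangular-bounds representation of an iteration space are expository assumptions, not part of the specification) computes the total schedule length of an affine schedule as the difference between the latest and earliest iteration start times, and checks it against a specified MITI:

```python
from itertools import product

def schedule_extremes(k, bounds):
    """Earliest and latest iteration start times under an affine schedule.
    k = (k_0, k_1, ..., k_d); bounds = [(lo_1, hi_1), ...] with hi exclusive.
    An affine function over a rectangular box attains its extremes at corners."""
    k0, lin = k[0], k[1:]
    times = [k0 + sum(c * w for c, w in zip(corner, lin))
             for corner in product(*[(lo, hi - 1) for lo, hi in bounds])]
    return min(times), max(times)

def total_schedule_length(k, bounds):
    earliest, latest = schedule_extremes(k, bounds)
    return latest - earliest

def meets_miti(k, bounds, miti):
    """Throughput requirement: the total schedule length must be < MITI."""
    return total_schedule_length(k, bounds) < miti

# A 100 x 5 iteration space with schedule vector k = (0, 5, 1):
print(total_schedule_length((0, 5, 1), [(0, 100), (0, 5)]))  # 499
print(meets_miti((0, 5, 1), [(0, 100), (0, 5)], 500))        # True
```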

Abstract

A method of designing a pipeline comprises the steps of: accepting a task procedure expressed in a standard programming language, the task procedure including a sequence of computational steps; accepting a performance requirement of the pipeline; and automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of the processor stages for performing a respective one of the computational steps, the pipeline having characteristics consistent with the performance requirement of the pipeline.

Description

METHOD AND SYSTEM FOR THE DESIGN OF PIPELINES OF PROCESSORS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application is related to U.S. Patent Application Serial No. (Attorney Docket No. 100110564-1) entitled SYSTEM AND METHOD OF OPTIMIZING MEMORY USAGE WITH DATA LIFETIMES and to U.S. Patent Application No. (Attorney Docket No. 100110565-1) entitled METHOD AND SYSTEM FOR MEMORY MANAGEMENT OPTIMIZATION, and to (Attorney Docket No. 100110558-1) entitled SYSTEM FOR AND A METHOD OF CONTROLLING PIPELINE PROCESS STAGES, all of which are incorporated herein in their entireties by reference.
FIELD OF THE INVENTION
[0002] The present invention is related to computer hardware design and, more particularly, to methods, software and systems for the design of pipelines of processors.
DESCRIPTION OF RELATED ART
[0003] A large class of modern embedded-system computations can be expressed as a sequence of transformations on one or more streams of data. The corresponding architecture of such systems is typically organized as a pipeline of processors wherein each stage accepts data from an initial input or from an output of a previous stage, performs a specific task or transformation on the data, and passes the resulting data along to the next stage (if any) or an output of the pipeline. The term processor is used here to encompass a broad class of computing devices, both programmable and nonprogrammable, both statically and dynamically scheduled. The data passed between stages may be "fine-grained" (e.g., word-level) or coarse-grained (e.g., a block or a stripe of data elements). The data exchange between stages may be synchronized using a handshake mechanism between stages (e.g., asynchronous) or timed by a system-wide clock and a fixed delay between stages (e.g., synchronous operation). The design of such a pipeline of processors involves the design of each stage performing a specific task, including its initialization, finalization and control, the design of the buffering mechanism between each pair of stages, and the synchronization mechanism used between every pair of producer (i.e., source or transmitting) and consumer (i.e., destination or receiving) processor stages. Such pipeline architectures are designed manually by looking at a functional specification of the system (i.e., a design document and/or a reference implementation of functionality) and carefully identifying all the components and parameters of the design and optimizing them for minimal cost and maximal performance.
[0004] Various publications have addressed the design of such parallel systems. K. Danckaert, K. Masselos, F. Catthoor and H. De Man, "Strategy for Power-Efficient Design of Parallel Systems," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 7, No. 2, June 1999, describes a system-level storage organization for multidimensional signals used as a first step prior to formulation of parallelization or partitioning decisions. F. Vermeulen, F. Catthoor, D. Verkest and H. De Man, "Extended Design Reuse Trade-Offs in Hardware-Software Architecture Mapping," CODES, 2000, proposes a switching protocol providing fine-grain control using a control-flow inspection mechanism and an interrupt mechanism where all necessary data is available in a shared memory. P. Panda, F. Catthoor, N. Dutt, K. Danckaert, E. Brockmeyer, C. Kulkarni, A. Vandercappelle and P. Kjeldsberg, "Data and Memory Optimization Techniques for Embedded Systems," ACM Transactions on Design Automation of Electronic Systems, Vol. 6, No. 2, April 2001, pp. 149-206, includes a survey of various techniques used for data and memory optimization in embedded systems. These and all other publications and patents mentioned herein and throughout this specification are indicative of the technology of the present invention and are incorporated in their entirety by reference.
[0005] While ultimately providing a pipeline design and architecture, such manual design methodologies are slow, error-prone, and may not achieve optimal design results within the constraints of time and human resources available.
BRIEF SUMMARY OF THE INVENTION
[0006] According to one aspect of the invention, a method of designing a pipeline comprises the steps of: accepting a task procedure expressed in a standard programming language, the task procedure including a collection of computational steps, and serving to define the computational function that is to be performed by the pipeline; accepting a performance requirement of the pipeline; and automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of the processor stages for performing a respective one of the computational steps, the pipeline having characteristics consistent with the performance requirement of the pipeline.
[0007] According to another aspect of the invention, a method of designing a pipeline comprises the steps of: reading a task procedure and a desired throughput of the pipeline, the task procedure including one or more statements; identifying iteration spaces, input, output, and internal (i.e., local to the task procedure) data structures; analyzing dependencies between statements; finding at least one dependence relation between the statements; calculating a valid and desirable multi-schedule, this being a scheduled start time (relative to the start time of said task procedure) for each point in each of said iteration spaces, as well as a scheduled time for each operation relative to the start time of the iteration in which the operation resides; optimizing access of at least one internal data structure using the multi-schedule to minimize a size of a hardware buffer; producing a hardware processor for each loop nest and straight-line segment; and producing optimized hardware buffers to contain values of the internal data structures.
[0008] According to another aspect of the invention, a system for designing a pipeline comprises: a memory storing a set of program instructions; and a processor connected to the memory and responsive to the set of program instructions for: (i) accepting a task procedure expressed in a standard programming language, the task procedure including a sequence of computational steps, (ii) accepting a performance requirement of the pipeline, and (iii) automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of the processor stages for performing a respective one of the computational steps, the pipeline having characteristics consistent with the performance requirement of the pipeline.
[0009] According to another aspect of the invention, a program of computer instructions stored on a computer readable medium includes computer code for performing the steps of: accepting a task procedure expressed in a standard programming language, the task procedure including a sequence of computational steps; accepting a performance requirement of the pipeline; and automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of the processor stages for performing a respective one of the computational steps, the pipeline having characteristics consistent with the performance requirement of the pipeline.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] FIGURE 1 is a block diagram of a generic prior art pipeline configuration of processors;
[0011] FIGURE 2 is a block diagram of a computer system consistent with one embodiment of the present invention;
[0012] FIGURE 3 is a portion of code defining a series of steps to be implemented in hardware in the form of a pipeline of processors according to one embodiment of the present invention;
[0013] FIGURE 4 is a flow chart of the steps performed when synthesizing a processor pipeline according to one embodiment of the present invention; and
[0014] FIGURES 5A and 5B illustrate a flow chart of the steps performed when synthesizing an asynchronous complex pipeline composed of synchronous processor sub- pipelines according to another embodiment of the present invention.
DETAILED DESCRIPTION
[0015] A large class of modern embedded system computations can be expressed as a sequence of transformations on a stream of data. The sequence of transformations may be performed by an acyclic network of process stages hereby known as a "general pipeline" (or simply "pipeline") with at least one start stage that accepts input data from external sources, at least one end stage which outputs data to external destinations, and some number of intermediate stages, each of which accepts data from at least one preceding stage, performs a specific computation or transformation and forwards its results to at least one subsequent stage. A simple example of a "general pipeline" is the common notion of a linear pipeline consisting of a linear sequence of processing stages, where the first stage of the pipeline accepts input data, and each subsequent stage of the pipeline may accept data from the previous stage, may perform a specific computation or transformation on the data, and may pass the result along to the next stage of the pipeline or, in the case of the last stage of a pipeline, output the data. The entire sequence of computations on a given set of input data is called a "task", and the computation within each stage of the pipeline for that input data is called a "step". In addition to the data that is passed between stages of the pipeline, control information may also be necessary to ensure that the various stages of the pipeline perform their function at the appropriate time. Pipeline stages may be separated by buffers, e.g., registers, fifos, or random-access memory, that may be used to store data between the various stages of the pipeline.
[0016] The present invention includes embodiments for synthesizing a hardware pipeline or a pipeline hardware processor. Such a hardware pipeline processor implements a sequence of steps applied to a sequence of data packets. Each item in the sequence of data packets is processed by the series of steps in succession, different items in the sequence of data being processed in parallel by each of the stages of the pipeline. Such pipeline configurations are used throughout the electronics industry to provide enhanced computational capabilities and, in particular, where substantial "number crunching" and real-time throughput are required, such as for image processing performed by photo-quality color printers, digital cameras, etc.
[0017] Referring to FIGURE 1 of the drawings, a pipeline may be configured such that data is input to step 1 (e.g., a first functional unit or system such as a processor) of an n-step process, an output being provided from the final step n. Other input streams may enter into other steps of the computation, and other output streams may emerge from other steps of the computation. Data progresses along each of the steps 1 through n such that all n steps are performed at least partially in parallel, but operating on different data. As mentioned, such architectures are used, for example, in digital cameras and printers to implement a color pipeline and otherwise perform image processing. Embodiments of the present invention include mechanisms to design and synthesize the architecture of some pipelines automatically.
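The overlap just described, in which all n steps are active at once but each on a different task, can be visualized with a short sketch (an illustration only; the function name and diagram format are assumptions, not part of the specification):

```python
def pipeline_occupancy(n_stages, n_tasks):
    """Timing diagram for a linear pipeline: at time step t (1-based),
    stage s holds task t - s + 1 if that task exists, else None.
    In steady state every stage is busy simultaneously, each on a
    different task."""
    diagram = []
    for t in range(1, n_stages + n_tasks):
        diagram.append([t - s + 1 if 1 <= t - s + 1 <= n_tasks else None
                        for s in range(1, n_stages + 1)])
    return diagram

# Three stages, four tasks: at t = 3 the pipeline is full, with stage 1
# working on task 3, stage 2 on task 2, and stage 3 on task 1.
for row in pipeline_occupancy(3, 4):
    print(row)
```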
[0018] Although not shown in FIGURE 1, between any two stages of the pipeline there may be a buffer of some kind for storing data values that are in transit from an earlier stage to a later stage. In this application, these hardware structures are called inter-stage buffers. An inter-stage buffer may take the form of a register, set or file of registers, shift register, hardware FIFO, random-access memory (RAM), or any other hardware structure suitable for storing data values. The type and kind of such buffer storage may be chosen by a design system so as to minimize cost. Internal data structures that are referenced in only one loop nest may be stored in optimized intra-stage buffer storage, the type of which and related parameters may also be chosen by a design system to minimize cost.
[0019] A mechanism according to an embodiment of the invention may make all the decisions concerning the hardware resources to be used, mapping and scheduling the tasks to be performed onto these resources, and generating control logic for step initialization, finalization, and the start/stop of the various stages automatically. Embodiments of the invention may include steps and mechanisms to carefully balance several objectives in the synthesis of the pipeline including, for example, obtaining high throughput in the pipeline, matching the input/output (I/O) rates of the various stages, matching the order in which the elements of a data block are produced with the order in which the elements of the data block are consumed, and minimizing the size of inter-stage buffers. Each of these steps, to the extent implemented by the various embodiments of the invention, may include further advantageous methods and structures. For example, minimization of inter- or intra-stage buffer resource requirements may implement a buffer reuse procedure wherein buffer resources are utilized in a time-shared fashion. Further, features of the various embodiments of the invention may include choosing a synchronization mechanism and granularity appropriate to the pipeline.
[0020] According to one embodiment of the invention, a designer may provide a description of the functions of the pipeline by providing a segment of program code describing the sequence of computational steps, preferably in the form of a sequence of loop nests. Each of the loop nests describes one of the steps to be implemented by a respective stage of the pipeline. In addition, some portions of straight-line code may be included between loop nests. Conceptually, the segments of straight-line code may be handled as if they were loop nests of depth zero.
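The kind of input a designer might supply can be sketched as follows. This is a hypothetical task procedure only (the names, bounds, and statement bodies are illustrative assumptions, loosely modeled on the iteration spaces discussed for FIGURE 3, which is not reproduced here); the patent names C as an example input language, but the sketch is in Python for brevity:

```python
# A sequence of loop nests with a straight-line segment between them,
# which a design system would treat as a loop nest of depth zero.
def task_proc(inp):
    a = [0] * 100
    for i1 in range(100):              # first loop nest (cf. statement S1)
        a[i1] = inp[i1] + 1
    s = a[0]                           # straight-line code between nests
    t = s * 2                          #   (a "loop nest of depth zero")
    out = [[0] * 5 for _ in range(100)]
    for i2 in range(100):              # second loop nest (cf. statement S5)
        for i3 in range(5):
            out[i2][i3] = a[i2] + t
    return out

result = task_proc(list(range(100)))
print(result[0][0], result[99][4])  # 3 102
```

Each loop nest would become one pipeline stage, and the internal array `a` would live in an inter-stage buffer sized by the design process.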
[0021] A further aspect of the invention allows a user to describe the performance desired by specifying criteria such as, for example, how often a task is to be submitted to and processed by the pipeline. This may require a certain minimum interval between tasks as specified by the user, this interval called the minimum inter-task [initiation] interval (MITI). The MITI specifies the fastest rate at which tasks may be "pushed through" (i.e., processed by) the pipeline. A design may also be required to adhere to constraints on the order of reading the input and order of producing the output.
[0022] FIGURE 2 depicts a detailed view of computer 200 compatible with an embodiment of the invention, including main memory 201, Cache 203, secondary storage device 204, Central Processing Unit (CPU) 206, video display 207 and input/output device 208 connected by system bus 209. Main memory 201 stores compiler 202. Compiler 202 may operate on source code 205 stored in secondary storage device 204 to produce executable object code. For simplicity it is assumed that related functions including, for example, linking and loading, are performed as necessary to provide an executable object module. Thus, memory allocation, including for Cache 203, may be performed by a combination of software and other systems including, but not limited to, a compiler, linker, loader, specialized utility, etc.
[0023] Using a system such as depicted in FIGURE 2, appropriate program code may be stored in secondary storage device 204 to implement a method and system according to embodiments of the invention.
[0024] Referring to FIGURE 3, an example of a procedure to be implemented by a pipeline of processors is provided. The procedure should be well defined with respect to specified internal data and with specified input and output data clearly shown. Another desirable feature of the procedure is that it be well structured, consisting of a sequence of loop nests and intervening well-structured code. That is, among other things, it is preferable that the procedure be expressed without "go to" statements. In addition to the specification of the procedure itself, the desired performance of the pipeline may be provided; one way of doing so is by specification of the MITI. As previously described, the MITI describes the minimum time between the arrivals of input data packets. For example, if the MITI is 100 machine cycles, then the hardware pipeline must be able to accept a new data packet every 100 cycles and, therefore, maintain a rate of task handling of up to one task every 100 cycles.
[0025] Preferably, embodiments of the invention incorporate the use of static scheduling. In statically scheduled hardware, the time at which every operation in the task procedure executes is fixed in advance, relative to the task's start time (e.g., synchronous operation), rather than being dynamically determined as the result of some event or events occurring during a computation (e.g., asynchronous operation).
[0026] Static scheduling may be illustrated in connection with the procedure of FIGURE 3. Therein, five assignment statements S1-S5 are found within "task_proc". To each of these assignment statements is associated a space of loop iterations and associated values of the enclosing loop index variables. For example, the iteration space for S1 is: {i1 | 0 <= i1 < 100} and, for S5, the space is:
{(i2, i3) | 0 <= i2 < 100 & 0 <= i3 < 5}. For a sequence of statements such as S2 and S3 that occur in a straight-line code segment, i.e., between nests, there is a single iteration space consisting of one "point", corresponding to the one execution of each of the statements in the execution of the task procedure.
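These iteration spaces can be enumerated with a small sketch (the function name and the rectangular-bounds representation are expository assumptions):

```python
from itertools import product

def iteration_space(bounds):
    """Enumerate the points of a rectangular iteration space.
    bounds is a list of (lo, hi) pairs, one per loop dimension (hi
    exclusive). An empty bounds list models a straight-line segment:
    a single "point", the empty tuple."""
    return list(product(*[range(lo, hi) for lo, hi in bounds]))

s1_space = iteration_space([(0, 100)])           # S1: the 100 points (i1,)
s5_space = iteration_space([(0, 100), (0, 5)])   # S5: the 500 points (i2, i3)
straight = iteration_space([])                   # S2/S3: one point, ()
print(len(s1_space), len(s5_space), straight == [()])  # 100 500 True
```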
[0027] To provide a static schedule, a start time may be associated with each iteration of every iteration space. These start times comprise an iteration schedule. In addition, the operations found in each iteration space may have a start time relative to the start of the iteration in which they reside. These relative start times, one for each operation in the task procedure, comprise an operation schedule. The combination of an iteration schedule and an operation schedule precisely determines the start time of every operation that is or may be performed during the execution of one task, relative to the start time of the task.
[0028] To provide a useable iteration schedule, affine functions of iteration index vectors may be employed. An integer vector k is associated with each iteration space. The start time of the operations at an iteration whose index vector is i = (i_1, i_2, ..., i_d) is given by iteration_start_time(i) = k_0 + i_1 k_1 + ... + i_d k_d.
That is, as expressed in the formula above, iteration i starts at time k_0 plus the dot product of i and (k_1, k_2, ..., k_d). We denote this by <k,i>. A different vector k is chosen for each iteration space.
[0029] Thus, an operation (such as the add operation in statement S5 of the task procedure shown in Figure 3) that occurs in some iteration space whose index vector is denoted by i, may have a start time, denoted by t(op), given by an operation schedule, relative to the start time of iteration i. When an affine iteration schedule such as is described in the previous paragraph is used, the instance of the operation that occurs in iteration i is scheduled to start at time <k,i> + t(op). Such a schedule, consisting of the affine iteration schedule and the one-per-operation relative start times, is called a shifted-affine schedule.
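The shifted-affine start-time computation above is direct to sketch (function names are expository assumptions; the arithmetic follows the formulas in the text):

```python
def iteration_start_time(k, i):
    """<k, i> plus the constant term: k = (k_0, k_1, ..., k_d) and
    i = (i_1, ..., i_d)."""
    return k[0] + sum(kj * ij for kj, ij in zip(k[1:], i))

def operation_start_time(k, i, t_op):
    """Shifted-affine schedule: the instance of an operation with relative
    start time t(op) in iteration i begins at <k, i> + t(op), measured
    from the task's start time."""
    return iteration_start_time(k, i) + t_op

# With k = (0, 5, 1) for a two-deep nest, an operation with t(op) = 3 in
# iteration i = (2, 4) starts at 0 + 5*2 + 1*4 + 3 = 17.
print(operation_start_time((0, 5, 1), (2, 4), 3))  # 17
```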
[0030] Considering each of the index spaces, an affine schedule is provided organizing the computation of a loop nest. A suitable mechanism for providing for the creation of a hardware processor from a single loop nest is provided by the HP Labs PICO-NPA system. This system uses an affine schedule to achieve a specified compute time for a loop nest implemented by a hardware processor. Using such a system, it is possible to treat the task procedure that specifies a function as a multi-loop. The collection of affine schedules (expressed as the vectors k), one for each iteration space, provides an affine iteration multi-schedule, referred to simply as a multi-schedule in this application. The combination of an operation schedule and an affine multi-schedule comprises a shifted multi-schedule. Thus, embodiments of the invention may include a process for determining a shifted multi-schedule that satisfies both certain required and certain desirable criteria. Such required criteria may include a temporal ordering and a required throughput, while desirable criteria may include a reduced total cost of the hardware used to realize the pipeline. In particular, it must be possible to carry out the required computations in the temporal order specified by the shifted multi-schedule without violating the semantics of the original program. This means, among other things, that an operation O2 that requires a particular value computed by an operation O1 should not execute before the execution of O1, such that all required values are available when needed by operation O2. A shifted multi-schedule is considered to be "valid" if it meets this requirement. It is mathematically possible to determine whether a given affine iteration schedule has the property that it is possible to find an operation schedule that, together with the given affine iteration schedule, comprises a valid shifted multi-schedule.
Thus, an affine iteration schedule is considered valid if it is possible to find an operation schedule that, together with the affine iteration schedule, comprises a valid shifted multi-schedule.
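One simple sufficient test of this kind of validity can be sketched for the special case of uniform (constant-distance) dependences. This is an expository assumption: the text only requires that the linear parts of the schedule vectors satisfy linear inequalities derived from the dependence relation, and the constant-distance form below is one illustrative instance of such inequalities:

```python
def linear_part_valid(k_lin, distances, min_delay=1):
    """For uniform dependences with constant distance vectors d, require
    <k_lin, d> >= min_delay so every producer iteration is scheduled at
    least min_delay cycles before its consumer."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return all(dot(k_lin, d) >= min_delay for d in distances)

# A dependence carried by the outer loop (distance (1, 0)) and one carried
# by the inner loop (distance (0, 1)):
print(linear_part_valid((5, 1), [(1, 0), (0, 1)]))  # True
print(linear_part_valid((0, -1), [(0, 1)]))         # False
```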
[0031] Another requirement is that the throughput implied by the multi-schedule must be at least as great as that required by the user. For example, if the user has specified a MITI, then the total length of the schedule for the iteration space (the difference between the max of <k,i>, the latest start time of any iteration in the space, and the min of <k,i>, the earliest start time of any iteration) must be less than the specified MITI.
[0032] Among desirable criteria, embodiments of the invention may consider and minimize the total cost of the hardware. In general, the total hardware cost is dominated by the hardware required to realize the specified operations and the storage required for internal data structures.
[0033] Embodiments of the invention include steps that lead to the design of hardware pipelines in which each loop nest of a task procedure is implemented by a pipeline stage in which an array of processing elements is deployed, and each such processing element is itself pipelined and has the capacity to process iterations of the loop nest periodically, at a rate determined by an initiation interval (II). Such a pipelined processor starts execution and finishes execution of one iteration of the loop nest every II machine cycles. Embodiments of the invention include steps for determining, for each loop nest in the task procedure, both the number of processing elements to be deployed in the pipeline stage that implements the loop nest and the initiation interval of the processing elements, thereby determining the throughput of the pipeline stage, in a manner consistent with the criterion of achieving the required MITI.
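The relationship between iteration count, initiation interval, and the MITI can be sketched with a simplified capacity estimate (an expository assumption, not the patent's exact sizing procedure):

```python
import math

def processing_elements_needed(n_iterations, ii, miti):
    """One processing element initiates an iteration every ii cycles, so a
    stage with p PEs needs about ceil(n_iterations / p) * ii cycles per
    task; the smallest p meeting the MITI is ceil(n_iterations * ii / miti)."""
    return math.ceil(n_iterations * ii / miti)

# A 500-iteration loop nest with II = 1 and a MITI of 100 cycles needs 5 PEs;
# with II = 2 it needs 10.
print(processing_elements_needed(500, 1, 100))  # 5
print(processing_elements_needed(500, 2, 100))  # 10
```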
[0034] Broadly, embodiments of the invention include steps, processing and/or structure to identify data dependencies within each iteration space and across iteration spaces. The MITI is used to decide the number of processors and the capacity or performance of each processor needed to obtain the desired throughput for each loop nest. The dependencies and the throughput information are used to schedule the iterations of each loop nest (e.g., to determine a mapping from iteration space index vector to the start time of the iteration, such as the affine mapping with parameters k_1, ..., k_n), to map the iterations to space (e.g., to determine a mapping from iteration index vector to processor element in an array of multiple processor elements), and to identify the start of each loop nest relative to the start of the previous loop nest (e.g., k_0 in the affine mapping above). The data produced by one loop nest and consumed by another is passed via an inter-stage buffer whose type, size, and data mapping (the location of a given data item in the buffer is determined by a data mapping) are generated by the pipeline design process. The schedule of a pair of loop nests is selected such that the size of such inter-stage buffers is minimized. Each loop nest is then transformed according to the iteration schedules identified into a time and space loop so as to transform the addressing of each data structure to the appropriate kind of inter-stage buffer. This code is then converted into an assembly-level representation to enable further optimizations geared towards hardware synthesis. In particular, first, each loop may be synthesized into an array of processors according to its iteration schedule and throughput requirement, and then multiple arrays of processors are "strung" together in a pipeline communicating via inter-stage buffers.
A back-end hardware synthesis step may arrive at a detailed hardware synthesis of processor arrays and a determination of the schedule of each operation of each loop body relative to the start time for the corresponding iteration of the loop. Preferably, the mechanism to transform a single loop nest into an array of processors is extended to ensure that, after the detailed synthesis and scheduling of operations, the addressing and the size of the inter-stage buffers are maintained constant. Alternatively, after such a detailed operation schedule has been obtained, the type and the size of inter-stage buffer hardware and the addressing mechanisms used to access data in such buffers may be adjusted to compensate for perturbations of the schedule. Hardware to initialize and finalize each stage of the pipeline may also be generated based on an analysis of the "live-in" and "live-out" values of each loop nest (i.e., the first and last uses of the data). Finally, a pipelined timing controller that is generated automatically using the time schedule of the previously identified loop nests may control each stage of the pipeline. The entire pipeline architecture is then provided in a standard description language at the register-transfer level.
[0035] In one embodiment of the invention, the operations of iteration i are scheduled at a time equal to the sum of <k,i> and a relative schedule time for the operation. In a preferred embodiment of the invention, the affine multi-schedule (the vectors k, one per iteration space) is determined first according to the methods of the invention. The operation schedule is determined afterwards. The parameters of data buffers may be finalized after the operation schedule is chosen. In another embodiment, parameters of data buffers are chosen before an operation schedule, and an operation schedule may be chosen afterwards with constraints derived from the use of data buffers whose parameters have been determined.
[0036] Referring to FIGURE 4, an embodiment of a method according to the invention may begin at step 401. At step 402, a task procedure is read together with the desired throughput to be provided by a pipeline. As previously described, the task procedure may be provided as a set of instructions such as depicted and described in connection with FIGURE 3. The task procedure may be expressed in a standard programming language, for example the C programming language, as indicated in step 412. Throughput may be provided in the form of the MITI, as indicated in step 413. In response to these inputs, the sequence of steps 403 through 410, with the ancillary steps 414 through 418, is executed. Together these steps comprise a macro-step of creating a hardware description of a synchronous hardware pipeline as interconnected, synchronously scheduled processor stages.
[0037] At step 403, iteration spaces, input, output, and internal data structures to be used by the pipeline are identified. At step 404 an analysis is performed of all the dependencies between the statements and all the dependence relationships are found. At step 405, user constraints on the schedule, in the form of a specification by the user of some components of the affine multi-schedule, can be accepted. These allow the user to specify, among other things, a required order for reading the input or writing the output of the pipeline. At step 406 a calculation may be performed to provide a valid and desirable multi-schedule. Such calculation may include a step (step 416 in the figure) of determining the number of processing elements and their initiation interval for each loop nest. User-specified constraints, such as a partial specification of the multi-schedule, are honored in this step as indicated (step 414). Further details of step 406 are discussed below. Optimization of the ways in which internal data structures are accessed is performed at step 407, making use of knowledge of the multi-schedule to reduce the size of the hardware buffers used to store internal data structures. Such optimization may include a step (e.g., step 417) of choosing a type of buffer storage for implementing an internal data structure or data structures. In the case of internal arrays, such optimization may include determining a folded mapping of the array to memory locations so as to minimize the amount of memory required for storage of the array, as indicated in step 415. Techniques for reducing or minimizing the size of the internal data structures may include time-sharing of memory through folded mappings of data arrays to memory, as described in co-pending U.S. Application Serial No. (attorney docket no. 100110564-1) entitled OPTIMIZING MEMORY USAGE WITH DATA LIFETIMES and/or by the optimized management of memory as described, for example, in co-pending U.S.
Application Serial No. (attorney docket no. 100110565-1) entitled METHOD OF AND SYSTEM FOR MEMORY MANAGEMENT OPTIMIZATION, both of which are incorporated herein in their entireties by reference.
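The effect of a folded mapping can be illustrated with a small sketch (the function and data names are illustrative, not drawn from the referenced applications): once the schedule fixes the clock cycle at which each array element is produced and the cycle of its last use, the peak number of simultaneously live elements bounds the storage required, and the array can be mapped to memory modulo that bound.

```python
def folded_buffer_size(lifetimes):
    """Given (write_cycle, last_read_cycle) per array element, return
    the maximum number of simultaneously live values -- the number of
    memory locations a folded (modular) mapping needs."""
    events = []
    for write, last_read in lifetimes:
        events.append((write, 1))           # value becomes live
        events.append((last_read + 1, -1))  # value dies after its last read
    live = peak = 0
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return peak

# A 100-element array whose element i is written at cycle i and last
# read at cycle i + 3 needs only 4 locations under the folded mapping
# a[i mod 4], instead of 100 under a direct mapping.
lifetimes = [(i, i + 3) for i in range(100)]
assert folded_buffer_size(lifetimes) == 4
```

The sweep over sorted events processes a death before a birth that falls on the same cycle, so a location freed by a dying value can be reused immediately, matching the time-sharing intent of the folded mapping.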
[0038] At step 408 a hardware processor is produced for each loop nest and straight-line segment in the task procedure; the hardware processor may take the form of a cost-reduced synchronously scheduled processor or array of processing elements, as indicated in step 418. At step 409, an optimized hardware buffer configuration is produced that contains the values of the internal data structures. Finally, at step 410, a pipeline controller is produced; it signals each pipeline segment to start at the appropriate clock cycles for a particular task. As a result of this processing, a pipelined architecture is provided that may be used to generate a register-transfer level (RTL) description of one pipeline.
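The controller of step 410 can be sketched under the simplifying assumption that each stage's start is fully determined by a fixed per-stage offset from the start of the task (the names below are illustrative):

```python
def controller_start_signals(stage_offsets, task_start_cycle):
    """Sketch of the pipeline controller of step 410: given each
    stage's fixed schedule offset (clock cycles relative to the start
    of a task), return the absolute cycle at which the controller
    asserts that stage's start signal."""
    return {stage: task_start_cycle + offset
            for stage, offset in stage_offsets.items()}

# Three stages whose schedules begin 0, 12, and 40 cycles into a task:
offsets = {"nest_A": 0, "nest_B": 12, "nest_C": 40}
starts = controller_start_signals(offsets, task_start_cycle=1000)
assert starts["nest_B"] == 1012  # nest_B's start signal fires at cycle 1012
```

In hardware, this reduces to a counter per task and a comparator per stage; the static schedule means no run-time handshaking is needed between stages within one synchronous pipeline.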
[0039] Referring to FIGURE 5, an embodiment of another method according to the invention may begin at step 501. At step 502, a task procedure is read together with a desired throughput to be provided by a pipeline. As previously described, the task procedure may be provided as a set of instructions such as depicted and described in connection with FIGURE 3. The task procedure may be expressed in a standard programming language, for example the C programming language, as indicated in step 515. Throughput may be provided in the form of MITI, as indicated in step 516. As a result of these inputs, at step 503, iteration spaces, input, output, and internal data structures to be used by the pipeline are identified. At step 504 an analysis is performed of all the dependencies between the statements and all the dependence relationships are found. At step 505, a data-flow graph representation of the task procedure is constructed. At step 506, the data-flow graph is segmented into connected subgraphs; this segmentation may be derived automatically using heuristic algorithms or it may be provided by a user and accepted by the automatic design procedure.
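The segmentation of step 506 can be sketched minimally as connected-component discovery on the data-flow graph (a real segmentation may additionally cut edges heuristically or follow user guidance, as the paragraph above notes):

```python
from collections import defaultdict

def connected_subgraphs(vertices, edges):
    """Minimal sketch of step 506: split a data-flow graph into its
    connected subgraphs. Edge direction is ignored, since any data
    flow ties two iteration spaces into the same sub-pipeline."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()   # iterative depth-first search
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            stack.extend(adj[w] - comp)
        seen |= comp
        components.append(comp)
    return components

# Two independent chains become two synchronous sub-pipelines:
comps = connected_subgraphs("ABCD", [("A", "B"), ("C", "D")])
assert sorted(map(sorted, comps)) == [["A", "B"], ["C", "D"]]
```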
[0040] At step 507, user constraints on the schedule, in the form of a specification by the user of some components of the affine multi-schedule, can be accepted. These allow the user to specify, among other things, a required order for reading the input or writing the output from the pipeline.
[0041] Steps 508 through 512, together with their ancillary steps 517 through 521, are then repeated, once for each of the subgraphs determined in step 506. These steps constitute the macro-step of generating a hardware description of a synchronous hardware sub-pipeline and control unit for each subgraph of the segmented data-flow graph. A description of these steps follows.
[0042] At step 508 a calculation may be performed to provide a valid and desirable multi-schedule. Such calculation may include a step 519 of determining the number of processing elements and their initiation interval for each loop nest. User specified constraints, such as a partial specification of the multi-schedule, are honored at step 517. Further details of step 508 are discussed below. Optimization of the ways in which internal data structures are accessed is performed at step 509, making use of knowledge of the multi-schedule to reduce the size of the hardware buffers used to store internal data structures. Such optimization may include a step (e.g., step 520) of choosing a type of buffer storage for implementing an internal data structure or data structures. In the case of internal arrays, such optimization may include determining a folded mapping of the array to memory locations so as to minimize the amount of memory required for storage of the array, as indicated in step 518. Techniques for reducing or minimizing the size of the internal data structures may include time-sharing of memory through folded mappings of data arrays to memory, as described in co-pending U.S. Application Serial No. (attorney docket no. 100110564-1).
[0043] At step 510 a hardware processor is produced for each loop nest and straight-line segment in the task procedure; said hardware processor may take the form of a cost-reduced synchronously scheduled processor or array of processing elements, as indicated in step 521. At step 511, an optimized hardware buffer configuration is produced that contains the values of the internal data structures. At step 512, a pipeline controller is produced as well; it signals each pipeline segment to start at the appropriate clock cycles for a particular task. As a result of this processing, a pipelined architecture is provided that may be used to generate a register-transfer level (RTL) description of one synchronous sub-pipeline in an asynchronous complex pipeline.
[0044] Finally, at step 513, hardware descriptions are produced for synchronization hardware and expandable data buffers connecting the synchronous sub-pipelines previously created (one per subgraph). As a result of this processing, an RTL hardware description of an asynchronous complex pipeline for implementing the task procedure is created.
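The expandable data buffers of step 513 can be sketched in software as a simple unbounded FIFO (a deque stands in for the FIFO or RAM storage; actual hardware would add clock-domain-crossing synchronizers, which this sketch omits):

```python
from collections import deque

class ExpandableBuffer:
    """Software sketch of an expandable data buffer between two
    sub-pipelines in different clock domains (step 513)."""
    def __init__(self):
        self._q = deque()

    def push(self, value):
        """Called from the producer sub-pipeline's clock domain."""
        self._q.append(value)

    def pop(self):
        """Called from the consumer sub-pipeline's clock domain;
        returns None when no data is available yet."""
        return self._q.popleft() if self._q else None

    def empty(self):
        return not self._q

buf = ExpandableBuffer()
for v in (1, 2, 3):
    buf.push(v)
assert [buf.pop(), buf.pop(), buf.pop()] == [1, 2, 3]
assert buf.empty()
```

Because the sub-pipelines are only loosely coupled through such buffers, each can run in its own synchronous clock domain at its own statically scheduled rate.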
[0045] A feature of the various embodiments of the invention may include use of an optimized, statically chosen, affine multi-schedule. Use of an affine multi-schedule supports the building of efficient hardware that does not waste time and resources doing run-time synchronization. The multi-schedule is optimized in the sense that, among all possible, legal affine multi-schedules, the selected schedule achieves required performance criteria while minimizing hardware implementation costs. This multi-schedule may be identified automatically. [0046] The multi-schedule may consist of an affine schedule k for each iteration space. The affine schedule has a constant term k_0 and a linear term (k_1, ..., k_d). All of these terms represent integers. The total schedule length for an iteration space may be computed given its affine schedule such that the total schedule length equals:
(max iteration_start_time(i) - min iteration_start_time(i)) where i ranges over the iteration space and iteration_start_time(i) may be given in terms of clock cycles or other units used by the user to specify the MITI. Thus, the multi-schedule is fast enough if the total schedule length for each iteration space is less than the MITI.
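The total-schedule-length computation just described can be sketched as follows (illustrative names; an affine schedule is represented as its constant term plus the linear term applied to the iteration vector):

```python
def iteration_start_time(schedule, i):
    """Start time of iteration vector i under an affine schedule:
    constant term k_0 plus the linear term (k_1, ..., k_d) dotted
    with i."""
    k0, linear = schedule
    return k0 + sum(k * x for k, x in zip(linear, i))

def total_schedule_length(schedule, iteration_space):
    """Spread between the latest and earliest iteration start times."""
    starts = [iteration_start_time(schedule, i) for i in iteration_space]
    return max(starts) - min(starts)

# A 10x10 loop nest scheduled at 10*i + j, one iteration per cycle:
space = [(i, j) for i in range(10) for j in range(10)]
schedule = (0, (10, 1))
length = total_schedule_length(schedule, space)
assert length == 99   # start times span cycles 0 through 99
assert length < 120   # fast enough if the MITI is 120 cycles
```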
[0047] Finding a multi-schedule also may take into consideration dependencies between iterations of a single loop nest, and dependencies between iterations of different loop nests. These dependencies determine whether or not a multi-schedule is valid. For dependencies between iterations of a single loop nest, it is required that the linear portions of the schedule vectors k satisfy certain linear inequalities, determined by analysis of the dependence relation. The schedule vectors for other iteration spaces need not be a concern, and constant terms need not be considered. For dependencies between statements in different loop nests, one of which is necessarily a predecessor of the other in the sequential control flow, the situation is somewhat different. In this case, whatever the linear terms in the multi-schedule happen to be, it will be possible to select a sufficiently large constant term k_0 in the schedule of the successor nest to delay starting the nest under consideration until the necessary data is available. Thus, one way to find a valid multi-schedule according to an embodiment of the invention includes the steps of:
— finding a valid linear schedule for the iteration spaces of each loop nest separately; and
— selecting constant terms for the loop nests in sequence, making each constant term as small as possible but without sacrificing validity.
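The second of these steps can be sketched as follows, under an assumed dependence model in which each cross-nest dependence imposes a minimum delay between the constant terms of the predecessor and successor nests (the delay values themselves would come from the dependence analysis, which this sketch treats as given):

```python
def select_constant_terms(nests, cross_nest_deps, min_k0=None):
    """Greedy selection of constant terms k_0, one nest at a time in
    sequential control-flow order. Each k_0 is made as small as
    possible while every cross-nest dependence still sees its source
    produced before it is consumed.

    cross_nest_deps maps a nest to (predecessor, delay) pairs: the
    nest may start no earlier than the predecessor's k_0 + delay.
    min_k0 optionally gives a nest's earliest allowed k_0 in isolation."""
    min_k0 = min_k0 or {}
    k0 = {}
    for nest in nests:  # nests listed in sequential control-flow order
        lower = min_k0.get(nest, 0)
        for pred, delay in cross_nest_deps.get(nest, []):
            lower = max(lower, k0[pred] + delay)
        k0[nest] = lower  # smallest value that preserves validity
    return k0

# B needs A's data 50 cycles in; C needs both A's and B's:
deps = {"B": [("A", 50)], "C": [("A", 10), ("B", 30)]}
assert select_constant_terms(["A", "B", "C"], deps) == \
       {"A": 0, "B": 50, "C": 80}
```

Because each nest's predecessors appear earlier in control-flow order, a single forward pass suffices, mirroring the "in sequence" wording above.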
[0048] Step 406 further may include estimating a hardware cost for a selected multi-schedule. This may be performed by modeling the hardware cost, measuring the cost of the implementation of each step as a statically scheduled, dedicated, special-purpose processor array, as well as calculating costs of inter-stage and intra-stage buffers that hold values of the internal data structures as needed. Estimators are used to determine these costs, the estimators driven by the user program and by a proposed multi-schedule. The resultant estimations support selection from among proposed, valid multi-schedules, using the estimated hardware cost as a selection criterion so as to minimize such hardware costs.
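The selection criterion can be sketched as follows, with the estimators abstracted into precomputed numbers (in practice they would model the processor arrays and the inter- and intra-stage buffers driven by the user program and the proposed multi-schedule):

```python
def pick_cheapest_schedule(candidates, miti):
    """Sketch of cost-driven selection: among candidate multi-schedules
    that are fast enough (every iteration space's total schedule length
    is below the MITI), keep the one with the smallest estimated
    hardware cost. Each candidate is (schedule_lengths, estimated_cost);
    returns None if no candidate meets the MITI."""
    feasible = [c for c in candidates
                if all(length < miti for length in c[0])]
    return min(feasible, key=lambda c: c[1]) if feasible else None

candidates = [
    ([90, 110], 5000),  # fast enough, but expensive
    ([90, 95],  3200),  # fast enough and cheapest -> selected
    ([90, 130], 1000),  # cheap, but too slow for MITI = 120
]
assert pick_cheapest_schedule(candidates, miti=120) == ([90, 95], 3200)
```

The point of the sketch is the shape of the search, not the cost model: validity and throughput act as hard constraints, and estimated hardware cost breaks ties among the survivors.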
[0049] One platform for performing the processing specified by step 408 includes the "program in, chip out" (PICO) technology developed by Hewlett-Packard Laboratories (HPL) and described in U.S. Patent 6,298,071 issued October 2, 2001 and U.S. Patent Application Nos. 09/378,289 and 09/378,431, all of which are herein incorporated in their entireties by reference.
[0050] In another embodiment of the invention, a task procedure may be automatically represented as a dataflow graph in which each graph vertex represents one of the iteration spaces of the task procedure, and graph edges represent a flow of data between the iteration spaces. The dataflow graph may be segmented into connected subgraphs such that each subgraph is suitable for implementation as a synchronous statically scheduled pipeline, using the methods of the present invention. Then hardware may be synthesized for an ensemble of hardware pipelines, one for each subgraph of the dataflow graph, and each in a separate synchronous hardware clock domain. Asynchronous protocols and expandable storage elements such as FIFO or RAM storage may be used to buffer data and synchronize transactions that span multiple clock domains. One possible implementation for a subgraph is compilation onto a programmable processor whose throughput is known but whose schedule is dynamically determined.

Claims

What is claimed is:
1. A method of designing a pipeline comprising the steps of: accepting a task procedure expressed in a standard programming language, said task procedure including a sequence of computational steps; accepting a performance requirement of the pipeline; and automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of said processor stages for performing a respective one of said computational steps, said pipeline having characteristics consistent with said performance requirement of the pipeline.
2. The method according to claim 1 wherein said performance requirement includes definition of a minimum intertask interval (MITI) parameter value.
3. The method according to claim 1 wherein said step of automatically creating a hardware description of the pipeline further comprises the steps of: determining a set of iteration spaces consistent with said task procedure; determining a valid and desirable affine multi-schedule; and producing a hardware pipeline and associated control mechanism description providing a functionality consistent with said affine multi-schedule.
4. The method of claim 3 wherein the step of determining a valid and desirable affine multi-schedule further includes honoring a designer-specified constraint such as a specification of some part of the multi-schedule.
5. The method of claim 3 wherein the step of determining a valid and desirable multi-schedule further includes a step of determining a processor count and an initiation interval for each of a plurality of iteration spaces.
6. The method according to claim 1 wherein said step of automatically creating a hardware description of the pipeline further comprises a step of creating a pipeline control mechanism for starting an operation of each of said processor stages.
7. The method according to claim 1 wherein said step of automatically creating a hardware description of the pipeline further comprises the steps of: segmenting a data-flow graph of a task procedure; determining a valid and desirable multi-schedule for each segment of said segmented data-flow graph; automatically producing a hardware description of a synchronous hardware sub-pipeline and control unit for each segment of said segmented data-flow graph; and automatically producing a hardware description of asynchronous and expandable data and control interfaces between said synchronous hardware sub-pipelines.
8. The method according to claim 3 further comprising the steps of: identifying internal array data structures; and determining a type of buffer storage for implementing each of said array data structures.
9. The method according to claim 8 further comprising the steps of: determining a folding of said arrays; and determining an implementation of RAM buffer storage, said RAM buffer storage being reduced in size consistent with the folding of said internal arrays.
10. The method according to claim 3 in which said computational stages comprise pipeline stages implemented as cost-reduced synchronously scheduled processors.
11. The method according to claim 10 in which said cost-reduced synchronously scheduled processor comprises an array of processing elements.
12. The method according to claim 10 in which said cost-reduced synchronously scheduled processor is not programmable.
13. A method of designing a pipeline comprising the steps of: reading a task procedure and a desired throughput of the pipeline, said task procedure including one or more statements; identifying iteration spaces, input, output, and internal data structures; analyzing dependencies between statements; finding at least one dependence relation between said statements; calculating a valid and desirable multi-schedule; optimizing access of at least one internal data structure using said multi-schedule to minimize a size of a hardware buffer; producing a hardware processor for each loop nest and straight-line segment; and producing optimized hardware buffers to contain values of said internal data structures.
14. A system for designing a pipeline, said system comprising: a memory storing a set of program instructions; and a processor connected to said memory and responsive to said set of program instructions for:
(i) accepting a task procedure expressed in a standard programming language, said task procedure including a sequence of computational steps,
(ii) accepting a performance requirement of the pipeline, and
(iii) automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of said processor stages for performing a respective one of said computational steps, said pipeline having characteristics consistent with said performance requirement of the pipeline.
15. The system according to claim 14 wherein said performance requirement includes definition of a minimum intertask interval (MITI) parameter value, said processor responsive to said MITI for creating said hardware description.
16. The system according to claim 14 wherein said processor is further responsive to said set of program instructions for: determining a valid and desirable affine multi-schedule; and producing a hardware pipeline and associated control mechanism description providing a functionality consistent with said affine multi-schedule.
17. The system according to claim 16 wherein said processor is further responsive to said set of program instructions for: determining a folding of internal array data structures; and determining an implementation of RAM buffer storage; said RAM buffer storage being reduced in size consistent with the folding of said internal arrays.
18. The system according to claim 16 in which said computational stages comprise pipeline stages implemented as cost-reduced synchronously scheduled processors.
19. A program of computer instructions stored on a computer readable medium, said program comprising computer code for performing the steps of: accepting a task procedure expressed in a standard programming language, said task procedure including a sequence of computational steps; accepting a performance requirement of the pipeline; and automatically creating a hardware description of the pipeline, the pipeline comprising a plurality of interconnected processor stages, each of said processor stages for performing a respective one of said computational steps, said pipeline having characteristics consistent with said performance requirement of the pipeline.
20. The program of computer instructions according to claim 19 wherein said performance requirement includes definition of a minimum intertask interval (MITI) parameter value.
21. The program of computer instructions according to claim 19 wherein said program further comprises computer code for performing the steps of: determining a valid and desirable affine multi-schedule; and producing a hardware pipeline and associated control mechanism description providing a functionality consistent with said affine multi-schedule.
22. The program of computer instructions according to claim 21 further comprising computer code for performing the steps of: determining a folding of internal arrays; and determining an implementation of a RAM buffer storage; said RAM buffer storage being reduced in size consistent with the folding of said internal arrays.
23. The program of computer instructions according to claim 21 in which said computational stages comprise pipeline stages implemented as cost-reduced synchronously scheduled processors.