WO2006039711A1 - Service layer architecture for memory access system and method - Google Patents

Service layer architecture for memory access system and method

Info

Publication number
WO2006039711A1
WO2006039711A1 (PCT/US2005/035814)
Authority
WO
WIPO (PCT)
Prior art keywords
memory
data
behaviors
pipeline
controller
Application number
PCT/US2005/035814
Other languages
French (fr)
Inventor
Brent I. Gouldey
Joel J. Fuster
John Rapp
Mark Jones
Original Assignee
Lockheed Martin Corporation
Application filed by Lockheed Martin Corporation filed Critical Lockheed Martin Corporation
Publication of WO2006039711A1 publication Critical patent/WO2006039711A1/en

Classifications

    • G06F 9/54: Interprogram communication
    • G06F 30/343: Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]; logical level
    • G06F 11/1407: Checkpointing the instruction stream
    • G06F 11/1417: Boot up procedures
    • G06F 11/142: Reconfiguring to eliminate the error
    • G06F 11/2025: Failover techniques using centralised failover control functionality
    • G06F 11/2028: Failover techniques eliminating a faulty processor or activating a spare
    • G06F 11/2051: Error detection or correction by redundancy in hardware using active fault-masking, where processing functionality is redundant in regular structures
    • G06F 13/1694: Configuration of memory controller to different memory types
    • G06F 15/7867: Architectures of general purpose stored program computers comprising a single central processing unit with reconfigurable architecture
    • G06F 15/8053: Vector processors
    • G06F 30/327: Logic synthesis; behaviour synthesis, e.g. mapping logic, HDL to netlist, high-level language to RTL or netlist
    • G06F 30/34: Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]
    • H04Q 9/00: Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom
    • G06F 11/2035: Error detection or correction by redundancy in hardware using active fault-masking, where processing functionality is redundant, without idle spare hardware
    • G06F 11/2038: Error detection or correction by redundancy in hardware using active fault-masking, where processing functionality is redundant, with a single idle spare processing component

Definitions

  • BACKGROUND: During the operation of a computer system, programs executing on the system access memory in the computer system to store data generated by a program and to retrieve data being processed by that program.
  • To access data stored in memory, a memory controller generates the appropriate signals to access the desired data stored in memory. For example, data is typically physically stored in memory in an array of rows and columns of memory storage locations, each memory location having a corresponding address. To access data stored in a particular location, the memory controller must apply a read or write command to the memory along with the address of the desired data. In response to the command and address from the controller, the memory accesses the corresponding storage location and either writes data to or reads data from that location.
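  • As a rough illustration of the row/column addressing described above, the following behavioral sketch models a memory accessed with a command, a row address, and a column address. It is illustrative only; the array size and the helper names are assumptions, not taken from the patent.

        # Hypothetical behavioral model of a row/column-addressed memory: the
        # controller supplies a command, a row address, and a column address.

        ROWS, COLS = 1, 80                     # one row of 80 (0x50) locations, as in Figure 1
        cells = [[0] * COLS for _ in range(ROWS)]

        def access(command, row, col, data=None):
            """Perform a 'write' or 'read' at the addressed storage location."""
            if command == "write":
                cells[row][col] = data
                return None
            if command == "read":
                return cells[row][col]
            raise ValueError("unknown command")

        access("write", row=0, col=0x13, data=42)
        print(access("read", row=0, col=0x13))   # 42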
  • A two-dimensional array, for example, consists of a plurality of data elements arranged in rows and columns.
  • The memory controller simply stores these elements one after another in consecutive storage locations in the memory. While the data elements are stored in this manner, operations performed on the individual elements of the array often necessitate that elements stored in nonconsecutive memory locations be accessed.
  • Figure 1 shows on the left a 10x8 matrix 100 consisting of 10 rows and 8 columns of data elements DE11-DE108, each data element being represented as a circle.
  • The data elements DE11-DE108 may be referred to generally as DE when not referring to a specific one or ones of the elements, while the subscripts will be included only when referring to a specific one or ones of the elements.
  • the data elements DE of the matrix 100 are stored in the storage locations of a memory 102, as indicated by arrow 104.
  • The data elements DE11-DE108 are stored in consecutive storage locations within a given row of storage locations in the memory 102.
  • The row in memory 102 is designated as having an address 0 and the data elements DE11-DE108 are stored in consecutive columns within the row, with the columns being designated 0-4F hexadecimal.
  • The data element DE11 is stored in a storage location having row address 0 and column address 0.
  • Data element DE21 is stored in row address 0 and column address 1, and so on.
  • The storage locations in the memory 102 having row address 0 and column addresses 00-4F containing the data elements DE11-DE108 are shown in four separate columns merely for ease of illustration.
  • The first column of data elements DE11-DE101 and the second column of data elements DE12-DE102 are stored in storage locations 0-13 in the memory 102, which are shown in the first column of storage locations.
  • The data elements DE13-DE103 and DE14-DE104 in the third and fourth columns of the matrix 100 are stored in storage locations 14-27, respectively, in the memory 102.
  • The data elements DE15-DE105 and DE16-DE106 are stored in storage locations 28-3B and the data elements DE17-DE107 and DE18-DE108 are stored in storage locations 3C-4F.
  • the data elements DE contained in respective rows of the matrix 100 may correspond to vectors being processed by a program executing on a computer system (not shown) containing the memory 102.
  • To process a given vector, the data elements DE of the desired row in the matrix 100 must be accessed. From the above description of the storage of the data elements DE in the memory 102, retrieving such a vector requires accessing data elements stored in nonconsecutive storage locations.
  • A stride value S, which equals 10 in the example of Figure 1, corresponds to the difference between the addresses of consecutive data elements being accessed. As seen in the example for the vector corresponding to row 3 of the matrix 100, the stride value S between the consecutive data elements DE31 and DE32 equals 10, as is true for each pair of consecutive data elements in this example.
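  • The access pattern described above can be summarized with a short sketch: a 10x8 matrix stored column by column in a flat memory, where reading back one row requires stepping through the flat memory with a stride equal to the number of rows (10 here). The helper below is a hypothetical model, not the patent's implementation.

        # Behavioral sketch (not from the patent): a 10x8 matrix stored column-major
        # in a flat memory, and the strided addresses needed to read back one row.

        ROWS, COLS = 10, 8

        # Store element DE[r][c] (1-indexed in the patent's notation) column by column.
        memory = [None] * (ROWS * COLS)
        for c in range(COLS):
            for r in range(ROWS):
                memory[c * ROWS + r] = (r + 1, c + 1)   # element DE_{r+1, c+1}

        def row_addresses(row, base=0, stride=ROWS, count=COLS):
            """Addresses of one matrix row: base + row index, then step by the stride."""
            start = base + (row - 1)
            return [start + k * stride for k in range(count)]

        addrs = row_addresses(row=3)            # vector for row 3 of the matrix
        print(addrs)                            # [2, 12, 22, ..., 72]; stride S = 10
        print([memory[a] for a in addrs])       # [(3, 1), (3, 2), ..., (3, 8)]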
  • For masked accesses, a mask array is generated that effectively blocks out all of the data elements of each matrix except the data element that is desired. This mask array is then converted into read instructions that are applied to the memory 102 so that only the unmasked data element DE in each matrix is retrieved.
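  • A minimal sketch of the mask-to-read-instruction idea follows, assuming a simple boolean mask over the matrix elements and the column-major layout of Figure 1; the function and its name are illustrative assumptions.

        # Hypothetical sketch: convert a boolean mask over matrix elements into the
        # list of flat-memory read addresses for the unmasked elements only.

        def mask_to_reads(mask, rows):
            """mask[r][c] is True when element (r, c) should be read (column-major layout)."""
            reads = []
            for c, column in enumerate(zip(*mask)):      # walk columns of the mask
                for r, keep in enumerate(column):
                    if keep:
                        reads.append(c * rows + r)        # flat address of DE_{r+1, c+1}
            return reads

        # Example: read only element DE_{3,2} of the 10x8 matrix from the figure.
        mask = [[False] * 8 for _ in range(10)]
        mask[2][1] = True
        print(mask_to_reads(mask, rows=10))               # [12]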
  • Existing memory controllers may include circuitry that allows segmenting and striding of memory to improve performance by implementing some of the functionality for generating nonsequential addresses in the controller instead of in software. Segmentation of memory divides memory into a number of segments or partitions, such as dividing a 256 megabyte static random access memory (SRAM) into 256 one-megabyte partitions. Partitioning the memory allows instructions applied to the controller to include smaller addresses, with a controller state machine altering the addresses by adding an offset to access the proper address.
  • The offset is determined based upon a segment address provided to the controller. Striding involves the nonsequential generation of addresses separated by a defined stride value S, as previously discussed. While some controllers may include circuitry to stride through memory, in such controllers the stride value S is set prior to operation of the associated memory and typically cannot be changed while a program is executing on the computer system containing the memory controller and memory. Moreover, in such memory controllers the stride value S is typically limited to a constant value.
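  • The segmentation scheme described above can be modeled with a few lines, assuming fixed-size partitions and a base-plus-offset calculation; the partition size and names below are illustrative assumptions, not values from the patent.

        # Hypothetical sketch of memory segmentation: a segment address selects a
        # partition, and the controller adds that partition's base to a small offset.

        PARTITION_SIZE = 1 << 20          # e.g. 1 MB partitions of a 256 MB SRAM

        def physical_address(segment, offset):
            """Translate a (segment, offset) pair into a full physical address."""
            assert 0 <= offset < PARTITION_SIZE, "offset must fit within one partition"
            base = segment * PARTITION_SIZE      # offset determined by the segment address
            return base + offset

        print(hex(physical_address(segment=3, offset=0x10)))   # 0x300010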
  • A memory system includes a memory controller operable to generate first control signals according to a standard interface.
  • a memory interface adapter is coupled to the memory controller and is operable responsive to the first control signals to develop second control signals adapted to be applied to a memory subsystem to access desired storage locations within the memory subsystem.
  • FIG. 1 is a diagram illustrating the storage of data elements of a matrix in a conventional memory system.
  • FIG. 2 is a functional block diagram of a computer system having a peer vector machine (PVM) architecture including a hardware implemented memory service layer for generating desired memory addresses to implement desired memory behaviors according to one embodiment of the present invention.
  • FIG. 3 is a functional block diagram illustrating in more detail the memory controller, memory service layer, and memory subsystem of FIG. 2 according to one embodiment of the present invention.
  • FIG. 4 is a functional block diagram illustrating in more detail an example of the attachable behaviors circuitry or hardware implemented address-generation circuitry contained in the memory controller of FIG. 3 according to one embodiment of the present invention.
  • FIG. 5 is a functional block diagram illustrating in more detail another example of the attachable behaviors or hardware implemented address-generation circuitry contained in the memory controller of FIG. 3 according to another embodiment of the present invention.
  • FIG. 6 is a more detailed functional block diagram of one embodiment of the host processor and pipeline accelerator of the peer vector machine (PVM) of FIG. 2.
  • FIG. 7 is a more detailed block diagram of the pipeline accelerator of FIG. 6 according to one embodiment of the present invention.
  • FIG. 8 is an even more detailed block diagram of the hardwired pipeline circuit and the data memory of FIG. 7 according to one embodiment of the present invention.
  • FIG. 9 is a block diagram of the interface 142 of FIG. 8 according to an embodiment of the invention.
  • FIG. 10 is a block diagram of the interface 140 of FIG. 8 according to an embodiment of the invention.
  • FIG. 2 is a functional block diagram of a computer system 200 having a peer vector machine (PVM) architecture that includes a hardware implemented memory service layer 202 for generating memory addresses to implement desired memory behaviors according to one embodiment of the present invention.
  • the peer vector machine architecture is a new computing architecture that includes a host processor 204 that controls the overall operation and decision making operations of the system 200 and a pipeline accelerator 206 that includes programmable hardware circuitry for performing mathematically intensive operations on data, as will be described in more detail below.
  • The pipeline accelerator 206 and host processor 204 are termed "peers" that communicate with each other through data vectors transferred over a communications channel referred to as a pipeline bus 208.
  • a memory controller 210 in the pipeline accelerator 206 contains the memory service layer 202 and communicates through this service layer to a memory subsystem 212 coupled to the controller.
  • the peer vector machine architecture divides the processing power of the system into two primary components, the pipeline accelerator 206 and host processor 204 that together form a peer vector machine.
  • the host processor 204 performs a portion of the overall computing burden of the system 200 and primarily handles all decision making operations of the system.
  • the pipeline accelerator 206 on the other hand does not execute any programming instructions and handles the remaining portion of the processing burden, primarily performing mathematically intensive or "number crunching" types of operations.
  • the use of the peer vector machine enables the system 200 to process data faster than conventional computing architectures such as multiprocessor architectures.
  • the pipeline accelerator 206 may be implemented through an application specific integrated circuit (ASIC) or through programmable logic integrated circuits (PLICs) such as a field programmable gate array (FPGA).
  • The pipeline accelerator 206 communicates with the host processor 204 over the pipeline bus 208 typically through an industry standard communications interface (not shown), such as an interface implementing the Rapid I/O or TCP/IP communications protocols.
  • the use of such a standard communications interface simplifies the design and modification of the pipeline accelerator 206 as well as the modification of the memory service layer 202 to adaptively perform different required memory behaviors, as will be discussed in more detail below.
  • The host processor 204 determines which data is to be processed by the pipeline accelerator 206, and transfers such data in the form of data vectors over the pipeline bus 208 to the pipeline accelerator.
  • The host processor 204 can also communicate configuration commands to the pipeline accelerator 206 over the pipeline bus 208 to configure the hardware circuitry of the pipeline accelerator to perform desired tasks.
  • Use of an industry standard interface or bus protocol on the bus 208 enables circuitry on both sides of the bus to be more easily modified, for example.
  • Although the host processor 204 typically transfers desired data over the pipeline bus 208 to the pipeline accelerator 206 for processing, the pipeline accelerator may also directly receive data, process the data, and then communicate this processed data back to the host processor 204 via the pipeline bus.
  • the memory controller 210 stores the received data in the memory subsystem 212 during processing of the data by the pipeline accelerator 206.
  • the memory service layer 202 in the memory controller 210 has attachable behaviors, meaning the memory service layer may be configured or programmed to perform desired memory behaviors.
  • To configure the memory service layer 202 to perform a desired memory behavior, the host processor 204 communicates the appropriate commands over the pipeline bus 208 to the pipeline accelerator 206. It should be noted that the circuitry within the memory service layer 202 for performing various memory behaviors will be different, with some circuitry possibly requiring no configuration and the configuration of other types of circuitry differing depending on the specifics of the circuitry.
  • the memory service layer 202 operates in combination with the other circuitry in the memory controller 210 to access data elements stored in the memory subsystem 212 according to the desired memory behavior such as accessing elements in sliced arrays, masked arrays, or sliced and masked arrays, for example.
  • FIG. 3 is a functional block diagram illustrating in more detail the memory controller 210, memory service layer 202, and memory subsystem 212 of FIG. 2 according to one embodiment of the present invention.
  • An input first-in-first-out (FIFO) buffer 300 receives data to be written into the memory subsystem 212, which in the example of FIG. 3 is a ZBT SRAM memory, from the pipeline accelerator 206.
  • A FIFO buffer 302 receives, from the memory controller 210, data being read from the memory subsystem 212.
  • Although the FIFO buffers 300 and 302 are shown separate from the memory controller 210, these buffers may be considered part of the memory controller in the embodiment of FIG. 2.
  • the memory controller 210 applies appropriate control signals 304 to a memory interface adapter 306.
  • the memory interface adapter 306 applies suitable control signals 308 to a physical control layer 310.
  • the physical control layer 310 develops control signals 312 in response to the control signals 308 from the memory interface adapter 306, with the control signals 312 being applied to the memory subsystem 212 to read data from or write data into the desired storage locations within the memory subsystem.
  • the memory interface adapter 306 decouples the memory controller 210 and the memory subsystem 212, which allows the same controller to be utilized with different types of memory subsystems.
  • The term "control signals" as used herein includes all required signals to perform the described function, and thus, for example, the control signals 304, 308, and 312 include all required control, address, and data signals to perform the desired access of the memory subsystem 212.
  • the memory service layer 202 within the memory controller 210 includes a write index register 314 that stores a write index value which the memory service layer utilizes to select specific parameters to be utilized for a particular memory behavior during write operations (i.e., during the writing of data into the memory subsystem 212).
  • a write offset register 316 stores a write offset value that is added to a write base address received by the memory controller 210, with the base address being typically supplied from either the host processor 204 via the bus 208 and pipeline accelerator 206 or from one of a plurality of pipeline units (not shown) contained in the pipeline accelerator, as will be explained in more detail below.
  • a read index register 318 stores a read index value which the memory service layer 202 utilizes to select specific parameters to be utilized for a particular memory behavior during read operations (i.e., during the reading of data from the memory subsystem 212).
  • A read offset register 320 stores a read offset value that is added to a read base address received by the memory controller 210. In the example of FIG. 3, these index and offset values are shown as being provided by a program "myApplication" which corresponds to a hardware pipeline (not shown) in the pipeline accelerator 206.
  • The memory service layer 202 further includes attachable behaviors circuitry 322 that utilizes the values stored in the registers 314-320 along with parameters loaded into the circuitry from the host processor 204 through attachable ports 324 to generate memory addresses to implement desired memory behaviors.
  • the specific circuitry contained within the attachable behaviors circuitry 322 depends upon the desired address patterns that the circuitry is designed to perform, with each address pattern corresponding to a respective memory behavior. Two sample embodiments of the attachable behaviors circuitry 322 will now be described in more detail with reference to FIGS. 4 and 5.
  • FIG. 4 is a more detailed functional block diagram of one embodiment of the memory controller 210 of FIG. 3.
  • the controller 210 includes attachable behaviors circuitry 400, which as previously described is the hardware implemented address-generation circuitry that enables the controller to perform desired memory behaviors.
  • the attachable behaviors circuitry 400 thus corresponds to one embodiment of the attachable behaviors circuitry 322 previously discussed with reference to FIG. 3.
  • FIG. 4 shows only components associated with writing data to the memory subsystem 212 (FIG. 3), with the components for reading data from the memory subsystem being analogous and understood by those skilled in the art. All components in the memory controller 210 other than the attachable behaviors circuitry 400 are conventional and will therefore be described only briefly to provide a sufficient basis for understanding the operation of the attachable behaviors circuitry.
  • the controller 210 includes a controller state machine 402 which controls the overall operation of the controller and handles such functions as ensuring proper time division multiplexing of data on a data bus of the controller between read and write operations.
  • The memory controller 210 further includes a segment table 404 that provides for partitioning of the storage capacity of the memory subsystem 212 into a number of different logical blocks or memory partitions.
  • The segment table 404 includes a plurality of segment index values, base address values, and full and empty flag values. Each memory partition is assigned an associated segment index value, and thus when a write command is applied to the memory controller, that write command includes a segment index value corresponding to the memory partition to which data is to be written.
  • each memory partition is assigned a base address corresponding to the address of the first storage location in the partition.
  • Each memory partition has a known size, and thus by knowing the base address each storage location within a given memory partition can be accessed.
  • The full flag indicates whether a given memory partition is full of data, while the empty flag indicates that no data is stored in the associated memory partition.
  • Each row of the segment table defines these values for a corresponding memory partition. For example, assume the first row in the segment table 404 contains the segment index value corresponding to a first memory partition.
  • The base address and the full and empty flags in this first row then correspond to the base address value for the first memory partition, and the flags indicate the status of data stored within that partition.
  • For each memory partition, the segment table 404 includes a corresponding row of values.
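  • A compact sketch of how such a segment table lookup might behave is shown below; the field names and the lookup helper are assumptions made for illustration, not the patent's implementation.

        # Hypothetical model of the segment table: each row holds a base address and
        # full/empty flags for one memory partition, selected by a segment index.

        from dataclasses import dataclass

        @dataclass
        class SegmentEntry:
            base_address: int     # address of the first storage location in the partition
            full: bool = False    # partition has no more room for data
            empty: bool = True    # partition currently holds no data

        segment_table = {
            0: SegmentEntry(base_address=0x0000),
            1: SegmentEntry(base_address=0x1000),
        }

        def start_of_write(segment_index):
            """Return the base address BA for a write command's segment index."""
            entry = segment_table[segment_index]
            if entry.full:
                raise RuntimeError("partition is full")
            return entry.base_address

        print(hex(start_of_write(1)))    # 0x1000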
  • The controller state machine 402 provides the base address, designated BA, of the memory partition to which data is to be written to a write state machine 404, as represented by the write base address box 406.
  • the write state machine 404 triggers the controller state machine 402 to start generating memory addresses once the base address BA is applied, as represented by the box 408.
  • the controller state machine 402 also determines whether the base address BA is valid for the given memory partition to which data is being written, as represented by the box 410.
  • the write state machine 404 provides an input read request to the input FIFO 300 (FIG. 3) as represented by box 412.
  • the input FIFO 300 provides write data to be written to the memory subsystem 212 to the controller 210, as represented by box 414.
  • the controller 210 generates a write command or request 416.
  • The write data 414 and the write request 416 define two of the three components that must be supplied to the memory subsystem 212 to access the correct storage locations, with the third component being the current write address CWA, as represented by box 418.
  • The write state machine 404 generates a write address WA that equals the applied base address BA plus a write offset value WOV stored in a write offset register 420.
  • the write offset register 420 is one of the components in the attachable behaviors circuitry 400 that enables the circuitry to generate the desired pattern of memory addresses to achieve the desired memory behavior.
  • The attachable behaviors circuitry 400 further includes a stride value register 422 for storing a stride value S1, where the stride value is a number to be added to a previous memory address to obtain the current memory address, as previously described with reference to FIG. 1.
  • a register 424 stores a number of times value N1 indicating the number of times the stride value S1 is to be added to the current address.
  • A register 426 stores a total count value TC indicating the total number of times the stride value S1 is to be applied for the given number of times parameter N1.
  • The attachable behaviors circuitry 400 further includes a write IOCB RAM 428 that stores an array of values for the stride S1, number of times N1, and total count TC parameters.
  • Each row of the array stored in the write IOCB RAM 428 stores a respective stride S1, number of times N1, and total count TC value, with the values for these parameters from one of the rows being stored in the registers 422-426 during operation of the memory controller 210.
  • A write index register 430 stores an index value which determines which row of stride S1, number of times N1, and total count TC values is stored in the registers 422-426. In this way, during operation of the memory controller 210 the host processor 204 (FIG. 2) can change the index value stored in the register 430 to thereby change the values S1, N1, TC stored in the registers 422-426.
  • Note that the write offset value WOV stored in the register 420 could also be stored in the write IOCB RAM 428 and loaded into this register as are the S1, N1, and TC values. Also note that the values S1, N1, TC could be loaded directly into the registers 422-426 in other embodiments of the present invention, and thus the write IOCB RAM 428 is not required in all embodiments.
  • a summing circuit 432 sums the stride value S1 with the current write address CWA and this sum is applied to a multiplexer 434.
  • Initially, the multiplexer 434 outputs the write address WA from the write state machine 404 as the current write address CWA. Thereafter, the multiplexer 434 outputs the sum of the current write address CWA plus the stride value S1 from the summation circuit 432 as the new current write address.
  • The memory controller 210 applies the current write address CWA along with the write request 416 and the write data 414 to the memory interface adapter 306, which, as previously described, generates control signals 308 that are applied through the physical control layer 310 (FIG. 3) to access the proper storage locations in the memory subsystem 212 (FIG. 3).
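  • The write-address pattern just described can be summarized with a short behavioral sketch. It is a simplified model under assumed semantics for the S1, N1, and TC parameters, not the patent's circuitry.

        # Hypothetical behavioral model of the striding write-address generator:
        # start at BA + WOV, then repeatedly add the stride S1, as selected by the
        # write index from an IOCB-like table of (S1, N1, TC) parameter rows.

        def striding_addresses(base_address, write_offset, stride, times, total_count):
            """Yield the current write address CWA sequence for one write behavior."""
            cwa = base_address + write_offset          # WA = BA + WOV (first address)
            generated = 0
            while generated < total_count:
                for _ in range(times):                 # add stride S1, N1 times
                    if generated == total_count:
                        break
                    yield cwa
                    cwa += stride
                    generated += 1

        # IOCB-style parameter rows; the write index register selects one row.
        iocb = [
            {"stride": 10, "times": 8, "total_count": 8},    # walk one row of the 10x8 matrix
            {"stride": 1,  "times": 10, "total_count": 10},  # walk one column instead
        ]
        write_index = 0
        params = iocb[write_index]
        print(list(striding_addresses(base_address=0, write_offset=2, **params)))
        # [2, 12, 22, 32, 42, 52, 62, 72]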
  • the embodiment of the attachable behaviors circuit 400 shown in FIG. 4 is merely an example that provides the memory controller 210 with a striding memory behavior.
  • In other embodiments, the attachable behaviors circuit 400 will include different or additional components to provide the memory controller 210 with all desired memory behaviors.
  • The number and type of registers along with the particular parameters stored in the write IOCB RAM 428 will of course vary depending upon the type of memory behavior.
  • FIG. 5 is a more detailed functional block diagram of another embodiment of the memory controller 210 of FIG. 3.
  • the controller 210 includes attachable behaviors circuitry 500, which once again is hardware implemented address-generation circuitry that enables the controller to perform desired memory behaviors. All components in the embodiment of FIG. 5 that are the same as those previously described with reference to FIG. 4 have been given the same reference numbers, and for the sake of brevity will not again be described in detail.
  • the attachable behaviors circuitry 500 works in combination with an existing general write state machine 502 contained in the memory controller 210, in contrast to the embodiment of FIG. 4 in which the write state machine 404 is modified to perform regular memory accesses along with the desired memory behaviors.
  • the attachable behaviors circuitry 500 includes an attachable state machine 504 that works in combination with the general write state machine 502 to provide the desired memory accesses.
  • The embodiment of FIG. 5 would typically be easier to implement since the presumably known and operable general write state machine 502 already exists and no modifications to this known functional component are made.
  • In the embodiment of FIG. 4, by contrast, the write state machine 404 is a modified version of a general write state machine.
  • One example memory pattern is for the case of a triangular matrix, where the matrix is stored with no wasted memory space.
  • The write IOCB RAM 428 stores values S1-Sn, N1-Nn, PC1-PCn, and TC that are loaded through a configuration adapter 514.
  • the configuration adapter 514 loads these values into the write IOCB RAM 428 from values supplied either from software running on the host processor 204 (FIG. 2) or from a pipeline unit contained in the pipeline accelerator 206 (FIG. 2).
  • The values S1-Sn, N1-Nn, PC1-PCn, and TC and the components in the attachable behaviors circuitry 500 operate in combination to provide a set of irregular memory accesses, such as may occur where the data structure being accessed in the memory subsystem 212 is a sparse array.
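  • A sketch of how a chained set of stride parameters could generate such an irregular pattern, for example for a lower-triangular matrix stored densely, is given below. Pairing each stride Si with a count Ni is an assumption made for illustration, and the PC1-PCn parameters are omitted.

        # Hypothetical multi-stride address generator: apply stride S1 N1 times, then
        # S2 N2 times, and so on, which can walk densely packed triangular storage.

        def chained_strides(base, pairs):
            """Yield addresses starting at base, applying each (stride, times) pair in turn."""
            addr = base
            for stride, times in pairs:
                for _ in range(times):
                    yield addr
                    addr += stride

        # Lower-triangular 4x4 matrix packed row by row: row r holds r+1 elements,
        # so stepping down one column uses strides 1, 2, 3 between successive rows.
        pairs = [(1, 1), (2, 1), (3, 1), (0, 1)]            # addresses of column 0: 0, 1, 3, 6
        print(list(chained_strides(base=0, pairs=pairs)))   # [0, 1, 3, 6]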
  • FIG. 4 and FIG. 5 illustrate the need for being able to insert new or different circuits exhibiting different behaviors into the persistent function of the memory controller through a standard, well-established, well-defined interface. New implementations of memory behavior can be achieved by the designer as long as they comply with the standard attachable behavior interface.
  • FIG. 6 is a more detailed functional block diagram of a peer vector machine 40 that may be included in the system 200 of FIG. 2 according to one embodiment of the present invention.
  • the peer vector machine 40 includes a host processor 42 corresponding to the host processor 204 of FIG. 2 and a pipeline accelerator 44 corresponding to the pipeline accelerator 206 of FIG. 2.
  • the host processor 42 communicates with the pipeline accelerator 44 through a pipeline bus 50 that corresponds to the communications channel 208 of FIG. 2.
  • Data is communicated over the pipeline bus 50 according to an industry standard interface in one embodiment of the present invention, which facilitates the design and modification of the machine 40.
  • the peer vector machine 40 generally and the host processor 42 and pipeline accelerator 44 more specifically are described in more detail in U.S. Patent Appln. No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), Appln. No. 10/684,053 entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-12-3), Appln. No. 10/683,929 entitled
  • PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-13-3), Appln. No. 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-14-3), and 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3), all of which have a common filing date of October 9, 2003 and a common owner and which are incorporated herein by reference.
  • the peer vector computing machine 40 includes a processor memory 46, an interface memory 48, a bus 50, a firmware memory 52, an optional raw-data input port 54, a processed-data output port 58, and an optional router 61.
  • the host processor 42 includes a processing unit 62 and a message handler 64
  • the processor memory 46 includes a processing-unit memory 66 and a handler memory 68, which respectively serve as both program and working memories for the processor unit and the message handler.
  • the processor memory 46 also includes an accelerator-configuration registry 70 and a message-configuration registry 72, which store respective configuration data that allow the host processor 42 to configure the functioning of the accelerator 44 and the format of the messages that the message handler 64 sends and receives.
  • The pipeline accelerator 44 is disposed on at least one programmable logic integrated circuit (PLIC) (not shown) and includes hardwired pipelines 74_1-74_n, which process respective data without executing program instructions.
  • The firmware memory 52 stores the configuration firmware for the accelerator 44. If the accelerator 44 is disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed in multiple pipeline units (FIG. 6). The accelerator 44 and pipeline units are discussed further below and in previously cited U.S. Patent Appln. No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3). Alternatively, the accelerator 44 may be disposed on at least one application specific integrated circuit (ASIC), and thus may have internal interconnections that are not configurable. In this alternative, the machine 40 may omit the firmware memory 52. Furthermore, although the accelerator 44 is shown including multiple pipelines 74, it may include only a single pipeline. In addition, although not shown, the accelerator 44 may include one or more processors such as a digital-signal processor (DSP).
  • FIG. 7 is a more detailed block diagram of the pipeline accelerator 44 of FIG. 6 according to one embodiment of the present invention.
  • the accelerator 44 includes one or more pipeline units 78, one of which is shown in FIG. 7.
  • Each pipeline unit 78 includes a pipeline circuit 80, such as a PLIC or an ASIC.
  • each pipeline unit 78 is a "peer" of the host processor 42 and of the other pipeline units of the accelerator 44.
  • each pipeline unit 78 can communicate directly with the host processor 42 or with any other pipeline unit.
  • this peer-vector architecture prevents data "bottlenecks" that otherwise might occur if all of the pipeline units 78 communicated through a central location such as a master pipeline unit (not shown) or the host processor 42. Furthermore, it allows one to add or remove peers from the peer-vector machine 40 (FIG. 6) without significant modifications to the machine.
  • The pipeline circuit 80 includes a communication interface 82, which transfers data between a peer, such as the host processor 42 (FIG. 6), and the following other components of the pipeline circuit: the hardwired pipelines 74_1-74_n (FIG. 6) via a communication shell 84, a controller 86, an exception manager 88, and a configuration manager 90.
  • the pipeline circuit 80 may also include an industry-standard bus interface 91. Alternatively, the functionality of the interface 91 may be included within the communication interface 82. Where a bandwidth- enhancement technique such as xDSL is utilized to increase the effective bandwidth of the pipeline bus 50, the communication interface 82 and bus interface 91 are modified as necessary to implement the bandwidth-enhancement technique, as will be appreciated by those skilled in the art.
  • the communication interface 82 sends and receives data in a format recognized by the message handler 64 (FIG. 6), and thus typically facilitates the design and modification of the peer-vector machine 40 (FIG. 6). For example, if the data format is an industry standard such as the Rapid I/O format, then one need not design a custom interface between the host processor 42 and the accelerator 44. Furthermore, by allowing the pipeline circuit 80 to communicate with other peers, such as the host processor 42 (FIG. 6), via the pipeline bus 50 instead of via a non-bus interface, one can change the number of pipeline units 78 by merely connecting or disconnecting them (or the circuit cards that hold them) to the pipeline bus instead of redesigning a non-bus interface from scratch each time a pipeline unit is added or removed.
  • The controller 86 synchronizes the hardwired pipelines 74_1-74_n and monitors and controls the sequence in which they perform the respective data operations in response to communications, i.e., "events," from other peers.
  • A peer such as the host processor 42 may send an event to the pipeline unit 78 via the pipeline bus 50 to indicate that the peer has finished sending a block of data to the pipeline unit and to cause the hardwired pipelines 74_1-74_n to begin processing this data.
  • An event that includes data is typically called a "message," and an event that does not include data is typically called a "doorbell."
  • The pipeline unit 78 may also synchronize the pipelines 74_1-74_n in response to a synchronization signal.
  • The exception manager 88 monitors the status of the hardwired pipelines 74_1-74_n, the communication interface 82, the communication shell 84, the controller 86, and the bus interface 91, and reports exceptions to the host processor 42 (FIG. 6). For example, if a buffer in the communication interface 82 overflows, then the exception manager 88 reports this to the host processor 42.
  • the exception manager may also correct, or attempt to correct, the problem giving rise to the exception. For example, for an overflowing buffer, the exception manager 88 may increase the size of the buffer, either directly or via the configuration manager 90 as discussed below.
  • The configuration manager 90 sets the soft configuration of the hardwired pipelines 74_1-74_n, the communication interface 82, the communication shell 84, the controller 86, the exception manager 88, and the interface 91 in response to soft-configuration data from the host processor 42 (FIG. 6). As discussed in previously cited U.S. Patent App. Serial No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), the hard configuration denotes the actual topology, on the transistor and circuit-block level, of the pipeline circuit 80, and the soft configuration denotes the physical parameters (e.g., data width, table size) of the hard-configured components.
  • soft configuration data is similar to the data that can be loaded into a register of a processor (not shown in FIG. 4) to set the operating mode (e.g., burst-memory mode) of the processor.
  • the host processor 42 may send soft-configuration data that causes the configuration manager 90 to set the number and respective priority levels of queues in the communication interface 82.
  • the exception manager 88 may also send soft-configuration data that causes the configuration manager 90 to, e.g., increase the size of an overflowing buffer in the communication interface 82.
  • the pipeline unit 78 of the accelerator 44 includes the data memory 92, an optional communication bus 94, and, if the pipeline circuit is a PLIC, the firmware memory 52 (FIG. 4).
  • The data memory 92 buffers data as it flows between another peer, such as the host processor 42 (FIG. 6), and the hardwired pipelines 74_1-74_n, and is also a working memory for the hardwired pipelines.
  • the data memory 92 corresponds to the memory subsystem 212 of FIG. 2.
  • The communication interface 82 interfaces the data memory 92 to the pipeline bus 50 (via the communication bus 94 and industry-standard interface 91, if present), and the communication shell 84 interfaces the data memory to the hardwired pipelines 74_1-74_n.
  • The industry-standard interface 91 is a conventional bus-interface circuit that reduces the size and complexity of the communication interface 82 by effectively offloading some of the interface circuitry from the communication interface. Therefore, if one wishes to change the parameters of the pipeline bus 50 or router 61 (FIG. 6), then he need only modify the interface 91 and not the communication interface 82. Alternatively, one may dispose the interface 91 in an IC (not shown) that is external to the pipeline circuit 80. Offloading the interface 91 from the pipeline circuit 80 frees up resources on the pipeline circuit for use in, e.g., the hardwired pipelines 74_1-74_n and the controller 86. Or, as discussed above, the bus interface 91 may be part of the communication interface 82.
  • the firmware memory 52 stores the firmware that sets the hard configuration of the pipeline circuit.
  • the memory 52 loads the firmware into the pipeline circuit 80 during the configuration of the accelerator 44, and may receive modified firmware from the host processor 42 (FIG. 6) via the communication interface 82 during or after the configuration of the accelerator.
  • the loading and receiving of firmware is further discussed in previously cited U.S. Patent App. Serial No. 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934- 14-3).
  • the pipeline circuit 80, data memory 92, and firmware memory 52 may be disposed on a circuit board or card 98, which may be plugged into a pipeline-bus connector (not shown) much like a daughter card can be plugged into a slot of a mother board in a personal computer (not shown).
  • conventional ICs and components such as a power regulator and a power sequencer may also be disposed on the card 98 as is known.
  • The sensors 36 could also include suitable cards that plug into slots and include wiring or other required components for coupling such a card to the actual transducer portion of each sensor.
  • One such card could be associated with each sensor 36 or each sensor could include a respective card.
  • FIG. 8 is a block diagram of the pipeline unit 78 of FIG. 6 according to an embodiment of the invention.
  • the pipeline circuit 80 receives a master CLOCK signal, which drives the below-described components of the pipeline circuit either directly or indirectly.
  • the pipeline circuit 80 may generate one or more slave clock signals (not shown) from the master CLOCK signal in a conventional manner.
  • The pipeline circuit 80 may also receive a synchronization signal SYNC as discussed below.
  • The data memory 92 includes an input dual-port static random access memory (DPSRAM) 100, an output DPSRAM 102, and an optional working DPSRAM 104.
  • The input DPSRAM 100 includes an input port 106 for receiving data from a peer, such as the host processor 42 (FIG. 6), via the communication interface 82, and includes an output port 108 for providing this data to the hardwired pipelines 74_1-74_n via the communication shell 84.
  • Having two ports, one for data input and one for data output, increases the speed and efficiency of data transfer to/from the DPSRAM 100 because the communication interface 82 can write data to the DPSRAM while the pipelines 74_1-74_n read data from the DPSRAM.
  • Using the DPSRAM 100 to buffer data from a peer such as the host processor 42 allows the peer and the pipelines 74_1-74_n to operate asynchronously relative to one another.
  • The peer can send data to the pipelines 74_1-74_n without "waiting" for the pipelines to complete a current operation.
  • The pipelines 74_1-74_n can retrieve data without "waiting" for the peer to complete a data-sending operation.
  • The output DPSRAM 102 includes an input port 110 for receiving data from the hardwired pipelines 74_1-74_n via the communication shell 84, and includes an output port 112 for providing this data to a peer, such as the host processor 42 (FIG. 6), via the communication interface 82.
  • The two data ports 110 (input) and 112 (output) increase the speed and efficiency of data transfer to/from the DPSRAM 102, and using the DPSRAM 102 to buffer data from the pipelines 74_1-74_n allows the peer and the pipelines to operate asynchronously relative to one another.
  • The pipelines 74_1-74_n can publish data to the peer without "waiting" for the output-data handler 126 to complete a data transfer to the peer or to another peer.
  • The output-data handler 126 can transfer data to a peer without "waiting" for the pipelines 74_1-74_n to complete a data-publishing operation.
  • The working DPSRAM 104 includes an input port 114 for receiving data from the hardwired pipelines 74_1-74_n via the communication shell 84, and includes an output port 116 for returning this data back to the pipelines via the communication shell.
  • The pipelines 74_1-74_n may need to temporarily store partially processed, i.e., intermediate, data before continuing the processing of this data.
  • For example, a first pipeline, such as the pipeline 74_1, may generate intermediate data that is to be further processed by a second pipeline, such as the pipeline 74_2.
  • The first pipeline may therefore need to temporarily store the intermediate data until the second pipeline retrieves it.
  • The working DPSRAM 104 provides this temporary storage.
  • The two data ports 114 (input) and 116 (output) increase the speed and efficiency of data transfer between the pipelines 74_1-74_n and the DPSRAM 104.
  • Including a separate working DPSRAM 104 typically increases the speed and efficiency of the pipeline circuit 80 by allowing the DPSRAMs 100 and 102 to function exclusively as data-input and data-output buffers, respectively.
  • Either or both of the DPSRAMs 100 and 102 can also be a working memory for the pipelines 74_1-74_n when the DPSRAM 104 is omitted, and even when it is present.
  • Although the DPSRAMs 100, 102, and 104 are described as being external to the pipeline circuit 80, one or more of these DPSRAMs, or equivalents thereto, may be internal to the pipeline circuit.
  • the communication interface 82 includes an industry-standard bus adapter 118, an input-data handler 120, input-data and input-event queues 122 and 124, an output-data handler 126, and output-data and output-event queues 128 and 130.
  • Although the queues 122, 124, 128, and 130 are shown as single queues, one or more of these queues may include sub-queues (not shown) that allow segregation by, e.g., priority, of the values stored in the queues or of the respective data that these values represent.
  • the industry-standard bus adapter 118 includes the physical layer that allows the transfer of data between the pipeline circuit 80 and the pipeline bus 50 (FIG. 6) via the communication bus 94 (FIG. 7). Therefore, if one wishes to change the parameters of the bus 94, then he need only modify the adapter 118 and not the entire communication interface 82. Where the industry-standard bus interface 91 is omitted from the pipeline unit 78, then the adapter 118 may be modified to allow the transfer of data directly between the pipeline bus 50 and the pipeline circuit 80. In this latter implementation, the modified adapter 118 includes the functionality of the bus interface 91, and one need only modify the adapter 118 if he/she wishes to change the parameters of the bus 50. For example, where a bandwidth-enhancement technique such as ADSL is utilized to communicate data over the bus 50 the adapter 118 is modified accordingly to implement the bandwidth-enhancement technique.
  • The input-data handler 120 receives data from the industry-standard adapter 118, loads the data into the DPSRAM 100 via the input port 106, and generates and stores a pointer to the data and a corresponding data identifier in the input-data queue 122. If the data is the payload of a message from a peer, such as the host processor 42 (FIG. 3), then the input-data handler 120 extracts the data from the message before loading the data into the DPSRAM 100.
  • the input-data handler 120 includes an interface 132, which writes the data to the input port 106 of the DPSRAM 100 and which is further discussed below in conjunction with FIG. 6.
  • the input-data handler 120 can omit the extraction step and load the entire message into the DPSRAM 100.
  • the input-data handler 120 also receives events from the industry-standard bus adapter 118, and loads the events into the input-event queue 124.
  • the input-data handler 120 includes a validation manager 134, which determines whether received data or events are intended for the pipeline circuit 80.
  • The validation manager 134 may make this determination by analyzing the header (or a portion thereof) of the message that contains the data or the event, by analyzing the type of data or event, or by analyzing the instance identification (i.e., the hardwired pipeline 74 for which the data/event is intended) of the data or event. If the input-data handler 120 receives data or an event that is not intended for the pipeline circuit 80, then the validation manager 134 prohibits the input-data handler from loading the received data/event.
  • Where the peer-vector machine 40 includes the router 61 (FIG. 6), the validation manager 134 may also cause the input-data handler 120 to send to the host processor 42 (FIG. 3) an exception message that identifies the exception (erroneously received data/event) and the peer that caused the exception.
  • the output-data handler 126 retrieves processed data from locations of the DPSRAM 102 pointed to by the output-data queue 128, and sends the processed data to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 118.
  • the output-data handler 126 includes an interface 136, which reads the processed data from the DPSRAM 102 via the port 112.
  • the interface 136 is further discussed below in conjunction with FIG. 10.
  • the output-data handler 126 also retrieves from the output-event queue 130 events generated by the pipelines 74₁-74ₙ, and sends the retrieved events to one or more peers, such as the host processor 42 (FIG. 6), via the industry-standard bus adapter 118.
  • the output-data handler 126 includes a subscription manager 138, which includes a list of peers, such as the host processor 42 (FIG. 6), that subscribe to the processed data and to the events; the output-data handler uses this list to send the data/events to the correct peers. If a peer prefers the data/event to be the payload of a message, then the output-data handler 126 retrieves the network or bus-port address of the peer from the subscription manager 138, generates a header that includes the address, and generates the message from the data/event and the header.
  • the communication shell 84 includes a physical layer that interfaces the hardwired pipelines 74₁-74ₙ to the output-data queue 128, the controller 86, and the DPSRAMs 100, 102, and 104.
  • the shell 84 includes interfaces 140 and 142, and optional interfaces 144 and 146.
  • the interfaces 140 and 146 may be similar to the interface 136; the interface 140 reads input data from the DPSRAM 100 via the port 108, and the interface 146 reads intermediate data from the DPSRAM 104.
  • the interfaces 142 and 144 may be similar to the interface 132; the interface 142 writes processed data to the DPSRAM 102 via the port 110, and the interface 144 writes intermediate data to the DPSRAM 104 via the port 114.
  • the controller 86 includes a sequence manager 148 and a synchronization interface 150, which receives one or more synchronization signals SYNC.
  • a peer, such as the host processor 42 (FIG. 6), or a device (not shown) external to the peer-vector machine 40 (FIG. 6) may generate the SYNC signal, which triggers the sequence manager 148 to activate the hardwired pipelines 74₁-74ₙ as discussed below and in previously cited U.S. Patent App. Serial No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3).
  • the synchronization interface 150 may also generate a SYNC signal to trigger the pipeline circuit 80 or to trigger another peer.
  • the events from the input-event queue 124 also trigger the sequence manager 148 to activate the hardwired pipelines 74₁-74ₙ as discussed below. [75] The sequence manager 148 sequences the hardwired pipelines 74₁-74ₙ through their operating states.
  • each pipeline 74 has at least three operating states: preprocessing, processing, and post-processing.
  • during preprocessing, the pipeline 74, e.g., initializes its registers and retrieves input data from the DPSRAM 100.
  • during processing, the pipeline 74, e.g., operates on the retrieved data, temporarily stores intermediate data in the DPSRAM 104, retrieves the intermediate data from the DPSRAM 104, and operates on the intermediate data to generate result data.
  • during post-processing, the pipeline 74, e.g., loads the result data into the DPSRAM 102.
  • the sequence manager 148 monitors the operation of the pipelines 74₁-74ₙ and instructs each pipeline when to begin each of its operating states. And one may distribute the pipeline tasks among the operating states differently than described above. For example, the pipeline 74 may retrieve input data from the DPSRAM 100 during the processing state instead of during the preprocessing state. [76] Furthermore, the sequence manager 148 maintains a predetermined internal operating synchronization among the hardwired pipelines 74₁-74ₙ.
  • it may be desired to synchronize the pipelines 74₁-74ₙ such that while the first pipeline 74₁ is in a preprocessing state, the second pipeline 74₂ is in a processing state and the third pipeline 74₃ is in a post-processing state. Because a state of one pipeline 74 may require a different number of clock cycles than a concurrently performed state of another pipeline, the pipelines 74₁-74ₙ may lose synchronization if allowed to run freely. Consequently, at certain times there may be a "bottleneck" as, for example, multiple pipelines 74 simultaneously attempt to retrieve data from the DPSRAM 100.
  • the sequence manager 148 allows all of the pipelines 74 to complete a current operating state before allowing any of the pipelines to proceed to a next operating state. Therefore, the time that the sequence manager 148 allots for a current operating state is long enough to allow the slowest pipeline 74 to complete that state.
  • circuitry (not shown) for maintaining a predetermined operating synchronization among the hardwired pipelines 74₁-74ₙ may be included within the pipelines themselves.
  • the sequence manager 148 synchronizes the operation of the pipelines to the operation of other peers, such as the host processor 42 (FIG. 6).
  • a SYNC signal triggers a time-critical function but requires significant hardware resources; comparatively, an event typically triggers a non-time-critical function but requires significantly fewer hardware resources.
  • because a SYNC signal is routed directly from peer to peer, it can trigger a function more quickly than an event, which must make its way through, e.g., the pipeline bus 50 (FIG. 6), the input-data handler 120, and the input-event queue 124. But because they are separately routed, the SYNC signals require dedicated circuitry, such as routing lines, buffers, and the SYNC interface 150, of the pipeline circuit 80. Conversely, because they use the existing data-transfer infrastructure (e.g., the pipeline bus 50 and the input-data handler 120), the events require only the dedicated input-event queue 124. Consequently, designers tend to use events to trigger all but the most time-critical functions.
  • FIG. 9 is a block diagram of the interface 142 of FIG. 8 according to an embodiment of the invention.
  • a memory controller 152 corresponds to the memory controller 210 of FIG. 2 that is contained within the memory service layer 202 according to embodiments of the present invention.
  • the interface 142 writes processed data from the hardwired pipelines 74i-74 n to the DPSRAM 102.
  • the structure of the interface 142 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 8) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
  • the interface 142 includes write channels 150₁-150ₙ, one channel for each hardwired pipeline 74₁-74ₙ (FIG. 8), and includes the controller 152.
  • the channel 150₁ is discussed below, it being understood that the operation and structure of the other channels 150₂-150ₙ are similar unless stated otherwise.
  • the channel 150₁ includes a write-address/data FIFO 154₁ and an address/data register 156₁.
  • the FIFO 154₁ stores the data that the pipeline 74₁ writes to the DPSRAM 102, along with the address of the location to which the data is to be written.
  • the FIFO 154₁ reduces or eliminates the data bottleneck that may occur if the pipeline 74₁ had to "wait" to write data to the channel 150₁ until the controller 152 finished writing previous data. [84] The FIFO 154₁ receives the data from the pipeline 74₁ via a bus 158₁, receives the address of the location to which the data is to be written via a bus 160₁, and provides the data and address to the register 156₁ via busses 162₁ and 164₁, respectively. Furthermore, the FIFO 154₁ receives a WRITE FIFO signal from the pipeline 74₁ on a line 166₁, receives a CLOCK signal via a line 168₁, and provides a FIFO FULL signal to the pipeline 74₁ on a line 170₁. In addition, the FIFO 154₁ receives a READ FIFO signal from the controller 152 via a line 172₁, and provides a FIFO EMPTY signal to the controller via a line 174₁. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the busses 158₁, 160₁, 162₁, and 164₁ and the lines 166₁, 168₁, 170₁, 172₁, and 174₁ are preferably formed using local routing resources.
  • local routing resources are preferred to global routing resources because the signal-path lengths are generally shorter and the routing is easier to implement.
  • the register 156₁ receives the data to be written and the address of the write location from the FIFO 154₁ via the busses 162₁ and 164₁, respectively, and provides the data and address to the port 110 of the DPSRAM 102 (FIG. 8) via an address/data bus 176. Furthermore, the register 156₁ also receives the data and address from the registers 156₂-156ₙ via an address/data bus 178₁ as discussed below. In addition, the register 156₁ receives a SHIFT/LOAD signal from the controller 152 via a line 180. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the bus 176 is typically formed using global routing resources, and the busses 178₁-178ₙ₋₁ and the line 180 are preferably formed using local routing resources.
  • the controller 152 provides a WRITE DPSRAM signal to the port 110 of the DPSRAM 102 (FIG. 8) via a line 182.
  • the FIFO 154₁ drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO.
  • the pipeline drives the data and corresponding address onto the busses 158₁ and 160₁, respectively, and asserts the WRITE FIFO signal, thus loading the data and address into the FIFO. If the FIFO 154₁ is full, however, the pipeline 74₁ waits until the FIFO is not full before loading the data. [90] Then, the FIFO 154₁ drives the FIFO EMPTY signal to the logic level corresponding to the current state ("empty" or "not empty") of the FIFO.
  • if the FIFO 154₁ is not empty, the controller 152 asserts the READ FIFO signal and drives the SHIFT/LOAD signal to the load logic level, thus loading the first-loaded data and address from the FIFO into the register 156₁. If the FIFO 154₁ is empty, the controller 152 does not assert READ FIFO, but does drive SHIFT/LOAD to the load logic level if any of the other FIFOs 154₂-154ₙ are not empty.
  • the channels 150₂-150ₙ operate in a similar manner such that first-loaded data in the FIFOs 154₂-154ₙ are respectively loaded into the registers 156₂-156ₙ.
  • the controller 152 drives the SHIFT/LOAD signal to the shift logic level and asserts the WRITE DPSRAM signal, thus serially shifting the data and addresses from the registers 156₁-156ₙ onto the address/data bus 176 and loading the data into the corresponding locations of the DPSRAM 102 (a behavioral sketch of this load-and-shift scheme follows this list).
  • the data and address from the register 156₁ are shifted onto the bus 176 such that the data from the FIFO 154₁ is loaded into the addressed location of the DPSRAM 102.
  • the data and address from the register 156₂ are shifted into the register 156₁.
  • the data and address from the register 156₃ are shifted into the register 156₂, and so on.
  • then, during the next shift cycle, the data and address from the register 156₁ are shifted onto the bus 176 such that the data from the FIFO 154₂ is loaded into the addressed location of the DPSRAM 102.
  • the data and address from the register 156₂ are shifted into the register 156₁, the data and address from the register 156₃ (not shown) are shifted into the register 156₂, and so on.
  • the controller 152 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 156₁-156ₙ.
  • if one of the registers 156₁-156ₙ is empty during a particular shift operation because its corresponding FIFO 154₁-154ₙ is empty, then the controller may bypass the empty register, and thus shorten the shift operation by avoiding shifting null data and a null address onto the bus 176.
  • the interface 144 is similar to the interface 142, and the interface 132 is also similar to the interface 142 except that the interface 132 includes only one write channel 150.
  • FIG. 10 is a block diagram of the interface 140 of FIG. 8 according to an embodiment of the invention.
  • a memory controller 192 corresponds to the memory controller 210 of FIG. 2 that is contained in the memory service layer 202 according to embodiments of the present invention.
  • the interface 140 reads input data from the DPSRAM 100 and transfers this data to the hardwired pipelines 74₁-74ₙ.
  • the structure of the interface 140 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 8) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
  • the interface 140 includes read channels 190₁-190ₙ, one channel for each hardwired pipeline 74₁-74ₙ (FIG. 8), and the controller 192.
  • the read channel 190₁ is discussed below, it being understood that the operation and structure of the other read channels 190₂-190ₙ are similar unless stated otherwise.
  • the channel 190₁ includes a FIFO 194₁ and an address/identifier (ID) register 196₁.
  • the FIFO 194₁ includes two sub-FIFOs (not shown), one for storing the address of the location within the DPSRAM 100 from which the pipeline 74₁ wishes to read the input data, and the other for storing the data read from the DPSRAM 100.
  • the FIFO 194₁ reduces or eliminates the bottleneck that may occur if the pipeline 74₁ had to "wait" to provide the read address to the channel 190₁ until the controller 192 finished reading previous data, or if the controller had to wait until the pipeline 74₁ retrieved the read data before the controller could read subsequent data.
  • the FIFO 194₁ receives the read address from the pipeline 74₁ via a bus 198₁ and provides the address and ID to the register 196₁ via a bus 200₁. Since the ID corresponds to the pipeline 74₁ and typically does not change, the FIFO 194₁ may store the ID and concatenate the ID with the address.
  • the pipeline 74₁ may provide the ID to the FIFO 194₁ via the bus 198₁.
  • the FIFO 194₁ receives a READ/WRITE FIFO signal from the pipeline 74₁ via a line 202₁, receives a CLOCK signal via a line 204₁, and provides a FIFO FULL (of read addresses) signal to the pipeline via a line 206₁.
  • the FIFO 194₁ receives a WRITE/READ FIFO signal from the controller 192 via a line 208₁, and provides a FIFO EMPTY signal to the controller via a line 210₁.
  • the FIFO 194₁ receives the read data and the corresponding ID from the controller 192 via a bus 212, and provides this data to the pipeline 74₁ via a bus 214₁.
  • where the pipeline circuit 80 (FIG. 8) is a PLIC, the busses 198₁, 200₁, and 214₁ and the lines 202₁, 204₁, 206₁, 208₁, and 210₁ are preferably formed using local routing resources, and the bus 212 is typically formed using global routing resources.
  • the register 196₁ receives the address of the location to be read and the corresponding ID from the FIFO 194₁ via the bus 200₁, provides the address to the port 108 of the DPSRAM 100 (FIG. 8) via an address bus 216, and provides the ID to the controller 192 via a bus 218. Furthermore, the register 196₁ also receives the addresses and IDs from the registers 196₂-196ₙ via an address/ID bus 220₁ as discussed below. In addition, the register 196₁ receives a SHIFT/LOAD signal from the controller 192 via a line 222. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the bus 216 is typically formed using global routing resources, and the busses 220₁-220ₙ₋₁ and the line 222 are preferably formed using local routing resources.
  • the controller 192 receives the data read from the port 108 of the DPSRAM 100 (FIG. 8) via a bus 224 and generates a READ DPSRAM signal on a line 226, which couples this signal to the port 108.
  • where the pipeline circuit 80 (FIG. 8) is a PLIC, the bus 224 and the line 226 are typically formed using global routing resources.
  • the FIFO 194₁ drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO relative to the read addresses. That is, if the FIFO 194₁ is full of addresses to be read, then it drives the logic level of FIFO FULL to one level, and if the FIFO is not full of read addresses, it drives the logic level of FIFO FULL to another level.
  • the pipeline drives the address of the data to be read onto the bus 198₁ and asserts the READ/WRITE FIFO signal to a write level, thus loading the address into the FIFO.
  • the pipeline 74₁ gets the address from the input-data queue 122 via the sequence manager 148. If, however, the FIFO 194₁ is full of read addresses, the pipeline 74₁ waits until the FIFO is not full before loading the read address.
  • the FIFO 194₁ drives the FIFO EMPTY signal to the logic level corresponding to the current state ("empty" or "not empty") of the FIFO relative to the read addresses. That is, if the FIFO 194₁ is loaded with at least one read address, it drives the logic level of FIFO EMPTY to one level, and if the FIFO is loaded with no read addresses, it drives the logic level of FIFO EMPTY to another level.
  • the channels 190₁-190ₙ operate in a similar manner such that the controller 192 respectively loads the first-loaded addresses and IDs from the FIFOs 194₂-194ₙ into the registers 196₂-196ₙ. If all of the FIFOs 194₁-194ₙ are empty, then the controller 192 waits for at least one of the FIFOs to receive an address before proceeding.
  • the controller 192 drives the SHIFT/LOAD signal to the shift logic level and asserts the READ DPSRAM signal to serially shift the addresses and IDs from the registers 196₁-196ₙ onto the address and ID busses 216 and 218 and to serially read the data from the corresponding locations of the DPSRAM 100 via the bus 224.
  • the controller 192 drives the received data and corresponding ID (the ID allows each of the FIFOs 194₁-194ₙ to determine whether it is an intended recipient of the data) onto the bus 212, and drives the WRITE/READ FIFO signal to a write level, thus serially writing the data to the respective FIFOs 194₁-194ₙ.
  • the hardwired pipelines 74₁-74ₙ sequentially assert their READ/WRITE FIFO signals to a read level and sequentially read the data via the busses 214₁-214ₙ.
  • the controller 192 shifts the address and ID from the register 196₁ onto the busses 216 and 218, respectively, asserts READ DPSRAM, and thus reads the data from the corresponding location of the DPSRAM 100 via the bus 224 and reads the ID from the bus 218.
  • the controller 192 drives the WRITE/READ FIFO signal on the line 208₁ to a write level and drives the received data and the ID onto the bus 212. Because the ID is the ID from the FIFO 194₁, the FIFO 194₁ recognizes the ID and thus loads the data from the bus 212 in response to the write level of the WRITE/READ FIFO signal.
  • the remaining FIFOs 194₂-194ₙ do not load the data because the ID on the bus 212 does not correspond to their IDs. Then, the pipeline 74₁ asserts the READ/WRITE FIFO signal on the line 202₁ to the read level and retrieves the read data via the bus 214₁. Also during the first shift cycle, the address and ID from the register 196₂ are shifted into the register 196₁, the address and ID from the register 196₃ (not shown) are shifted into the register 196₂, and so on. Alternatively, the controller 192 may recognize the ID and drive only the WRITE/READ FIFO signal on the line 208₁ to the write level.
  • the WRITE/READ FIFO signal may be only a read signal, and the FIFO 194₁ (as well as the other FIFOs 194₂-194ₙ) may load the data on the bus 212 when the ID on the bus 212 matches the ID of the FIFO 194₁. This eliminates the need for the controller 192 to generate a write signal.
  • the controller 192 reads data from the location of the DPSRAM 100 specified by the FIFO 194₂.
  • the controller 192 drives the WRITE/READ FIFO signal to a write level and drives the received data and the ID onto the bus 212. Because the ID is the ID from the FIFO 194₂, the FIFO 194₂ recognizes the ID and thus loads the data from the bus 212. The remaining FIFOs 194₁ and 194₃-194ₙ do not load the data because the ID on the bus 212 does not correspond to their IDs.
  • the pipeline 74₂ asserts its READ/WRITE FIFO signal to the read level and retrieves the read data via the bus 214₂. Also during the second shift cycle, the address and ID from the register 196₂ are shifted into the register 196₁, the address and ID from the register 196₃ (not shown) are shifted into the register 196₂, and so on. [114] This continues for n shift cycles, i.e., until the address and ID from the register 196ₙ (which are the address and ID from the FIFO 194ₙ) are respectively shifted onto the busses 216 and 218.
  • the controller 192 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 196₁-196ₙ. Furthermore, if one of the registers 196₁-196ₙ is empty during a particular shift operation because its corresponding FIFO 194₁-194ₙ is empty, then the controller 192 may bypass the empty register, and thus shorten the shift operation by avoiding shifting a null address onto the bus 216.
  • the interface 146 is similar to the interface 140, and the interface 136 is also similar to the interface 140 except that the interface 136 includes only one read channel 190, and thus includes no ID circuitry.
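The load-and-shift write path summarized in the bullets above can be modeled with a short behavioral sketch; Python is used only for illustration and is not part of the application, and the class names are assumptions. Each write channel pairs a FIFO written by its hardwired pipeline with a one-deep register emptied by the controller, and the controller round-robins the loaded registers onto the single DPSRAM write port, bypassing empty channels. The read path of the interface 140 is analogous, with the ID matching described above deciding which FIFO accepts the returned data.

```python
from collections import deque

class WriteChannel:
    """One write channel 150: a FIFO of (address, data) pairs filled by a
    hardwired pipeline, plus a one-deep register loaded by the controller."""
    def __init__(self):
        self.fifo = deque()      # stands in for FIFO 154
        self.register = None     # stands in for register 156

    def pipeline_write(self, address, data):
        self.fifo.append((address, data))

class WriteInterface:
    """Rough stand-in for controller 152: load the first-loaded entry of each
    non-empty FIFO into its register, then empty the registers one at a time
    into the dual-port memory, bypassing channels whose register is empty."""
    def __init__(self, n_channels, dpsram):
        self.channels = [WriteChannel() for _ in range(n_channels)]
        self.dpsram = dpsram     # dict standing in for DPSRAM 102

    def cycle(self):
        for ch in self.channels:                  # load phase (READ FIFO / LOAD)
            if ch.register is None and ch.fifo:
                ch.register = ch.fifo.popleft()
        for ch in self.channels:                  # shift phase (SHIFT + WRITE DPSRAM)
            if ch.register is not None:
                address, data = ch.register
                self.dpsram[address] = data
                ch.register = None

dpsram_102 = {}
iface = WriteInterface(n_channels=3, dpsram=dpsram_102)
iface.channels[0].pipeline_write(0x00, "A")       # pipeline 74-1
iface.channels[2].pipeline_write(0x10, "C")       # pipeline 74-3 (74-2 idle)
iface.cycle()
print(dpsram_102)                                 # {0: 'A', 16: 'C'}
```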

Abstract

A memory subsystem includes a memory controller operable to generate first control signals according to a standard interface. A memory interface adapter is coupled to the memory controller and is operable responsive to the first control signals to develop second control signals adapted to be applied to a memory subsystem to access desired storage locations within the memory subsystem.

Description

SERVICE LAYER ARCHITECTURE FOR MEMORY ACCESS SYSTEM AND METHOD
CLAIM OF PRIORITY
[1] This application claims priority to U.S. Provisional Application Serial No. 60/615,050, filed on 1 October 2004, which is incorporated by reference.
CROSS REFERENCE TO RELATED APPLICATIONS
[2] This application is related to U.S. Patent App. Serial Nos.
10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-011-03), 10/684,053 entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-012-03), 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-014-03), and 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934- 015-03), which have a common filing date and owner and which are incorporated by reference.
BACKGROUND
[3] During the operation of a computer system, programs executing on the system access memory in the computer system to store data generated by the program and retrieve data being processed by the program. To access data stored in memory, a memory controller generates the appropriate signals to access the desired data stored in memory. For example, data is typically physically stored in memory in an array of rows and columns of memory storage locations, each memory location having a corresponding address. To access data stored in a particular location, the memory controller must apply a read or write command to the memory along with the address of the desired data. In response to the command and address from the controller, the memory accesses the corresponding storage location and either writes data to or reads data from that location. [4] Depending on the type of data being stored and processed, the accessing of the required data may be relatively complicated and thus inefficient. This is true because programs executing on the computer system must store and retrieve data for various types of more complicated data structures, such as vectors and arrays. A two-dimensional array, for example, consists of a plurality of data elements arranged in rows and columns. To store the data elements of the array in memory, the memory controller simply stores these elements one after another in consecutive storage locations in the memory. While the data elements are stored in this manner, operations performed on the individual elements of the array often necessitate that elements stored in nonconsecutive memory locations be accessed.
[5] An example of the storage and access issues presented by a two-dimensional matrix stored in memory will now be described in more detail with reference to Figure 1. Figure 1 shows on the left a 10x8 matrix 100 consisting of 10 rows and 8 columns of data elements DE11-DE108, each data element being represented as a circle. In the following description, note that the data elements DE11-DE108 may be referred to generally as DE when not referring to a specific one or ones of the elements, while the subscripts will be included only when referring to a specific one or ones of the elements. The data elements DE of the matrix 100 are stored in the storage locations of a memory 102, as indicated by arrow 104. The data elements DE11-DE108 are stored in consecutive storage locations within a given row of storage locations in the memory 102. In the example of Figure 1, the row in memory 102 is designated as having an address 0 and the data elements DE11-DE108 are stored in consecutive columns within the row, with the columns being designated 0-4F hexadecimal. Thus, the data element DE11 is stored in the storage location having row address 0 and column address 0, data element DE21 is stored in row address 0 and column address 1, and so on. In Figure 1, the storage locations in the memory 102 having row address 0 and column addresses 00-4F containing the data elements DE11-DE108 are shown in four separate columns merely for ease of illustration.
[6] For the matrix 100, the first column of data elements DE11-DE101 and second column of data elements DE12-DE102 are stored in storage locations 0-13 in the memory 102, which are shown in the first column of storage locations. The data elements DE13-DE103 and DE14-DE104 in the third and fourth columns of the matrix 100 are stored in storage locations 14-27, respectively, in the memory 102. Finally, the data elements DE15-DE105 and DE16-DE106 are stored in storage locations 28-3B and data elements DE17-DE107 and DE18-DE108 are stored in storage locations 3C-4F.
[7] When accessing the stored data elements DE, common mathematical manipulations of these elements may result in relatively complicated memory accesses or "memory behaviors". For example, the data elements DE contained in respective rows of the matrix 100 may correspond to vectors being processed by a program executing on a computer system (not shown) containing the memory 102. In this situation, the data elements DE of a desired row in the matrix 100 must be accessed to retrieve the desired vector. From the above description of the storage of the data elements DE in the memory 102, the retrieval of desired data elements in this situation is seen as requiring data elements stored in nonconsecutive storage locations to be accessed. For example, if the third row of data elements DE31-DE38 is to be retrieved, the data element DE31 stored in location 2 in the memory 102 must be accessed, then the data element DE32 stored in location C, and so on. The data elements DE31 and DE32 are illustrated in the storage locations 2 and C within the memory 102. [8] A stride value S, which equals 10 in the example of Figure 1, corresponds to the difference between addresses of consecutive data elements being accessed. As seen in the example for the vector corresponding to row 3 in the matrix 100, the stride value S between consecutive data elements DE31 and DE32 equals 10, as is true for each pair of consecutive data elements in this example. Such a stride value S can be utilized to generate addresses for the desired data elements DE in this and other memory behaviors requiring nonsequential access of storage locations. For example, to generate addresses to access all data elements DE in row 3 of the matrix 100, all that is required is a base address corresponding to the address of the first data element (DE31 in this example), the stride value S, and a total number N of times (7 in this example) to add the stride value to the immediately prior address. Using these parameters, each address equals the base address (BA) plus n times the stride value S, where n varies from 0 to N (address = BA + n×S for n = 0-7). [9] Many different types of memory behaviors which involve the nonsequential access of storage locations are common and complicate the retrieval of the desired data elements DE in the memory 102. Examples of different types of memory behaviors that include such nonsequential accessing of data elements include accessing simple and complex vectors, simple indexed arrays, sliced arrays, masked arrays, sliced and masked arrays, vectors and arrays of user defined data structures, and sliced and masked arrays of user defined structures. For example, a mask array is commonly utilized to extract the desired data elements DE while leaving the other data elements alone. If it were desired to extract just one data element DE contained in the same position in a number of different matrices 100 stored in the memory 102, then a mask array is generated that would effectively block out all of the data elements of each matrix except the data element that is desired. This mask array is then converted into read instructions that are applied to the memory 102 so that only the unmasked data element DE in each matrix is retrieved.
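The address arithmetic of paragraph [8] can be illustrated with a short behavioral sketch; Python is used here purely for illustration and is not part of the application. The base address, stride, and repeat count below are taken directly from the row-3 example of FIG. 1.

```python
def strided_addresses(base_address, stride, count):
    """Return the addresses BA + n*S for n = 0..count (count stride additions)."""
    return [base_address + n * stride for n in range(count + 1)]

# FIG. 1 example: row 3 of the 10x8 matrix 100, stored column by column in
# memory 102.  DE31 sits at address 0x02, the stride S is 10 (decimal), and
# the stride is added N = 7 times to cover DE31 through DE38.
row3 = strided_addresses(base_address=0x02, stride=10, count=7)
print([hex(a) for a in row3])
# ['0x2', '0xc', '0x16', '0x20', '0x2a', '0x34', '0x3e', '0x48']
```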
[10] While a formula analogous to that developed above for the vector example can be developed for these types of memory behaviors, for a number of reasons these types of memory behaviors can adversely affect the operation of the memory 102, as will be appreciated by those skilled in the art. Typically, such complicated memory behaviors are handled in software, which slows the access of the desired data elements DE. The programming language C++, for example, has a valarray data structure that will take a mask and then generate the proper memory addresses to apply to memory 102 to retrieve the desired data elements DE. The translation and processing of the defined mask to generate the required addresses to access the corresponding data elements DE in memory 102 is done in software. Once the mask is converted into addresses, these addresses are applied to the memory 102, typically via a memory controller (not shown), to retrieve the desired data elements. [11] One drawback to this approach is that the translation of the mask array into corresponding addresses is performed in software. The software translates elements in the mask array into corresponding physical addresses that are then applied to the memory 102. While performing these translations in software provides flexibility, the execution of the required programming instructions to perform the conversions is not trivial and thus may take a relatively long time. For example, even where the mask array only includes values such that only one data element DE is to be selected from the data elements of the matrix 100, the software translation algorithm still has to go through and determine the address of that single unmasked data element. The time required to perform such translations, particularly where a large number of accesses to arrays stored in memory 102 are involved, may be long enough to slow down the overall operation of the computer system containing the memory. [12] Existing memory controllers may include circuitry that allows segmenting and striding of memory to improve performance by implementing some of the functionality for generating nonsequential addresses in the controller instead of in software. Segmentation of memory divides memory into a number of segments or partitions, such as dividing a 256 megabyte static random access memory (SRAM) into 256 one-megabyte partitions. Partitioning the memory allows instructions applied to the controller to include smaller addresses, with a controller state machine altering the addresses by adding an offset to access the proper address. The offset is determined based upon a segment address provided to the controller. Striding involves the nonsequential generation of addresses separated by a value defined as the stride value S, as previously discussed. While some controllers may include circuitry to stride through memory, in such controllers the stride value S is set prior to operation of the associated memory and typically cannot be changed while a program is executing on the computer system containing the memory controller and memory. Moreover, in such memory controllers the stride value S is typically limited to a constant value.
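For contrast, the software-only translation criticized in paragraphs [10] and [11] amounts to scanning the mask and emitting an address for every unmasked element; the sketch below is only an illustration of that idea (the function name and flat mask layout are assumptions, not the C++ valarray implementation).

```python
def mask_to_addresses(mask, base_address=0):
    """Translate a mask array into physical addresses in software by scanning
    every element and keeping the addresses of the unmasked ones.  Even when
    a single element is selected, the scan still visits every mask entry,
    which is the overhead the text attributes to software-only masking."""
    return [base_address + i for i, selected in enumerate(mask) if selected]

# Select only the element stored at offset 0x16 of an 80-element matrix image.
mask = [False] * 80
mask[0x16] = True
print([hex(a) for a in mask_to_addresses(mask)])   # ['0x16']
```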
[13] Although existing memory controllers may provide segmentation and striding functionality, this functionality is limited and not easily changed. Moreover, this functionality does not enable many more complicated memory behaviors to be implemented in hardware, meaning such behaviors must be done through software with the attendant decrease in performance. There is a need for a system and method for implementing complex memory behaviors in hardware to allow for high-speed access of memory.
SUMMARY
[14] According to one aspect of the present invention, a memory subsystem includes a memory controller operable to generate first control signals according to a standard interface. A memory interface adapter is coupled to the memory controller and is operable responsive to the first control signals to develop second control signals adapted to be applied to a memory subsystem to access desired storage locations within the memory subsystem.
BRIEF DESCRIPTION OF THE DRAWINGS
[15] FIG. 1 is a diagram illustrating the storage of data elements of a matrix in a conventional memory system.
[16] FIG. 2 is a functional block diagram of a computer system having a peer vector machine (PVM) architecture including a hardware implemented memory service layer for generating desired memory addresses to implement desired memory behaviors according to one embodiment of the present invention. [17] FIG. 3 is a functional block diagram illustrating in more detail the memory controller, memory service layer, and memory subsystem of FIG. 2 according to one embodiment of the present invention.
[18] FIG. 4 is a functional block diagram illustrating in more detail an example of the attachable behaviors circuitry or hardware implemented address- generation circuitry contained in the memory controller of FIG. 3 according to one embodiment of the present invention.
[19] FIG. 5 is a functional block diagram illustrating in more detail another example of the attachable behaviors or hardware implemented address- generation circuitry contained in the memory controller of FIG. 3 according to another embodiment of the present invention.
[20] FIG. 6 is a more detailed functional block diagram of one embodiment of the host processor and pipeline accelerator of the peer vector machine (PVM) of FIG. 2.
[21] FIG. 7 is a more detailed block diagram of the pipeline accelerator of FIG. 6 according to one embodiment of the present invention. [22] FIG. 8 is an even more detailed block diagram of the hardwired pipeline circuit and the data memory of FIG. 7 according to one embodiment of the present invention.
[23] FIG. 9 is a block diagram of the interface 142 of FIG. 8 according to an embodiment of the invention.
[24] FIG. 10 is a block diagram of the interface 140 of FIG. 8 according to an embodiment of the invention.
DETAILED DESCRIPTION
[25] FIG. 2 is a functional block diagram of a computer system 200 having a peer vector machine (PVM) architecture that includes a hardware implemented memory service layer 202 for generating memory addresses to implement desired memory behaviors according to one embodiment of the present invention. The peer vector machine architecture is a new computing architecture that includes a host processor 204 that controls the overall operation and decision making operations of the system 200 and a pipeline accelerator 206 that includes programmable hardware circuitry for performing mathematically intensive operations on data, as will be described in more detail below. The pipeline accelerator 206 and host processor 204 are termed "peers" that communicate with each other through data vectors transferred over a communications channel referred to as a pipeline bus 208. A memory controller 210 in the pipeline accelerator 206 contains the memory service layer 202 and communicates through this service layer to a memory subsystem 212 coupled to the controller.
[26] In the system 200, the peer vector machine architecture divides the processing power of the system into two primary components, the pipeline accelerator 206 and host processor 204 that together form a peer vector machine. The host processor 204 performs a portion of the overall computing burden of the system 200 and primarily handles all decision making operations of the system. The pipeline accelerator 206 on the other hand does not execute any programming instructions and handles the remaining portion of the processing burden, primarily performing mathematically intensive or "number crunching" types of operations. By combining the decision-making functionality of the host processor 204 and the number-crunching functionality of the pipeline accelerator 206, the use of the peer vector machine enables the system 200 to process data faster than conventional computing architectures such as multiprocessor architectures.
[27] With the peer vector machine architecture, the pipeline accelerator 206 may be implemented through an application specific integrated circuit (ASIC) or through programmable logic integrated circuits (PLICs) such as a field programmable gate array (FPGA). The pipeline accelerator 206 communicates with the host processor 204 over the pipeline bus 208 typically through an industry standard communications interface (not shown), such as an interface implementing the Rapid I/O or TCP/IP communications protocols. The use of such a standard communications interface simplifies the design and modification of the pipeline accelerator 206 as well as the modification of the memory service layer 202 to adaptively perform different required memory behaviors, as will be discussed in more detail below. [28] In operation, the host processor 204 determines which data is to be processed by the pipeline accelerator 206, and transfers such data in the form of data vectors over the pipeline bus 208 to the pipeline accelerator. The host processor 204 can also communicate configuration commands to the pipeline accelerator 206 over the pipeline bus 208 to configure the hardware circuitry of the pipeline accelerator to perform desired tasks. Use of an industry standard interface or bus protocol on the bus 208 enables circuitry on both sides of the bus to be more easily modified, for example. Although the host processor 204 typically transfers desired data over the pipeline bus 208 to the pipeline accelerator 206 for processing, the pipeline accelerator may also directly receive data, process the data, and then communicate this processed data back to the host processor 204 via the pipeline bus.
[29] Regardless of how the pipeline accelerator 206 receives data, the memory controller 210 stores the received data in the memory subsystem 212 during processing of the data by the pipeline accelerator 206. As will be explained in more detail below, the memory service layer 202 in the memory controller 210 has attachable behaviors, meaning the memory service layer may be configured or programmed to perform desired memory behaviors. To configure the memory service layer 202 to execute desired memory behaviors, the host processor 204 communicates the appropriate commands over the pipeline bus 208 to the pipeline accelerator 206. It should be noted that the circuitry within the memory service layer 202 for performing various memory behaviors will be different, with some circuitry possibly requiring no configuration and the configuration of other types of circuitry differing depending on the specifics of the circuitry. For more details on such configuration and different types of such circuitry, see U.S. Patent Appln. No. XXX entitled COMPUTER-BASED TOOL AND METHOD FOR DESIGNING AN ELECTRONIC CIRCUIT AND RELATED SYSTEM, and U.S. Patent Appln. No. YYY entitled LIBRARY FOR COMPUTER-BASED TOOL AND RELATED SYSTEM AND METHOD, which were filed on October 3, 2005 and which are incorporated herein by reference. In response to the commands, the pipeline accelerator 206 applies suitable control signals to the memory controller 210 which, in turn, configures the memory service layer 202 to execute the corresponding memory behaviors. Once configured, the memory service layer 202 operates in combination with the other circuitry in the memory controller 210 to access data elements stored in the memory subsystem 212 according to the desired memory behavior such as accessing elements in sliced arrays, masked arrays, or sliced and masked arrays, for example.
[30] FIG. 3 is a functional block diagram illustrating in more detail the memory controller 210, memory service layer 202, and memory subsystem 212 of FIG. 2 according to one embodiment of the present invention. An input first-in-first-out (FIFO) buffer 300 receives data to be written into the memory subsystem 212, which in the example of FIG. 3 is a ZBT SRAM memory, from the pipeline accelerator 206. Similarly, a FIFO buffer 302 receives data being read from the memory subsystem 212 from the memory controller 210. Although the FIFO buffers 300 and 302 are shown separate from the memory controller 210, these buffers may be considered part of the memory controller in the embodiment of FIG. 2. To read data from or write data into the memory subsystem 212, the memory controller 210 applies appropriate control signals 304 to a memory interface adapter 306. In response to the control signals 304 from the memory controller 210, the memory interface adapter 306 applies suitable control signals 308 to a physical control layer 310. The physical control layer 310 develops control signals 312 in response to the control signals 308 from the memory interface adapter 306, with the control signals 312 being applied to the memory subsystem 212 to read data from or write data into the desired storage locations within the memory subsystem. The memory interface adapter 306 decouples the memory controller 210 and the memory subsystem 212, which allows the same controller to be utilized with different types of memory subsystems. Only the adapter 306 need be modified to interface the controller 210 to different types of memory subsystems 212, saving design time and effort by utilizing a known operational controller. It should be noted that as used herein, the term control signals includes all required signals to perform the described function, and thus, for example, the control signals 304, 308, and 312 include all required control, address, and data signals to perform the desired access of the memory subsystem 212.
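The decoupling described above can be sketched as follows, assuming a minimal read/write signal set; the class names and the dictionary standing in for the ZBT SRAM are illustrative assumptions rather than the application's actual interfaces.

```python
class MemoryInterfaceAdapter:
    """Translates the controller's standard-interface request (address, data,
    write flag) into an access to one specific memory type.  Supporting a
    different memory type means swapping this adapter, not the controller."""
    def __init__(self, memory):
        self.memory = memory                 # e.g. a dict modeling a ZBT SRAM

    def access(self, address, data=None, write=False):
        if write:
            self.memory[address] = data
            return None
        return self.memory.get(address)

class MemoryController:
    """Issues standard first control signals; knows nothing about the memory."""
    def __init__(self, adapter):
        self.adapter = adapter

    def write(self, address, data):
        self.adapter.access(address, data, write=True)

    def read(self, address):
        return self.adapter.access(address)

sram = {}
controller = MemoryController(MemoryInterfaceAdapter(sram))
controller.write(0x10, 0xABCD)
print(hex(controller.read(0x10)))            # 0xabcd
```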
[31] In the embodiment of FIG. 3, the memory service layer 202 within the memory controller 210 includes a write index register 314 that stores a write index value which the memory service layer utilizes to select specific parameters to be utilized for a particular memory behavior during write operations (i.e., during the writing of data into the memory subsystem 212). A write offset register 316 stores a write offset value that is added to a write base address received by the memory controller 210, with the base address being typically supplied from either the host processor 204 via the bus 208 and pipeline accelerator 206 or from one of a plurality of pipeline units (not shown) contained in the pipeline accelerator, as will be explained in more detail below. A read index register 318 stores a read index value which the memory service layer 202 utilizes to select specific parameters to be utilized for a particular memory behavior during read operations (i.e., during the reading of data from the memory subsystem 212). A read offset register 320 stores a read offset value that is added to a read base address received by the memory controller 210. In the example of FIG. 3, these index and offset values are shown as being provided by a program "myApplication" which corresponds to a hardware pipeline (not shown) in the pipeline accelerator 206. [32] The memory service layer 202 further includes attachable behaviors circuitry 322 that utilizes the values stored in the registers 314-320 along with parameters loaded into the circuitry from the host processor 204 through attachable ports 324 to generate memory addresses to implement desired memory behaviors. The specific circuitry contained within the attachable behaviors circuitry 322 depends upon the desired address patterns that the circuitry is designed to perform, with each address pattern corresponding to a respective memory behavior. Two sample embodiments of the attachable behaviors circuitry 322 will now be described in more detail with reference to FIGS. 4 and 5.
[33] FIG. 4 is a more detailed functional block diagram of one embodiment of the memory controller 210 of FIG. 3. The controller 210 includes attachable behaviors circuitry 400, which as previously described is the hardware implemented address-generation circuitry that enables the controller to perform desired memory behaviors. The attachable behaviors circuitry 400 thus corresponds to one embodiment of the attachable behaviors circuitry 322 previously discussed with reference to FIG. 3. Note that the embodiment of FIG. 4 shows only components associated with writing data to the memory subsystem 212 (FIG. 3), with the components for reading data from the memory subsystem being analogous and understood by those skilled in the art. All components in the memory controller 210 other than the attachable behaviors circuitry 400 are conventional and will therefore be described only briefly to provide a sufficient basis for understanding the operation of the attachable behaviors circuitry. [34] The controller 210 includes a controller state machine 402 which controls the overall operation of the controller and handles such functions as ensuring proper time division multiplexing of data on a data bus of the controller between read and write operations. The memory controller 210 further includes a segment table 404 that provides for partitioning of the storage capacity of the memory subsystem 212 into a number of different logical blocks or memory partitions. The segment table 404 includes a plurality of segment index values, base address values, and full and empty flag values. Each memory partition is assigned an associated segment index value, and thus when a write command is applied to the memory controller, that write command includes a segment index value corresponding to the memory partition to which data is to be written.
Similarly, each memory partition is assigned a base address corresponding to the address of the first storage location in the partition. [35] Each memory partition has a known size, and thus by knowing the base address each storage location within a given memory partition can be accessed. The full flag indicates whether a given memory partition is full of data while the empty flag indicates no data is stored in the associated memory partition. In the segment table 404, each row defines these values for a corresponding memory partition. For example, assume the first row in the segment table 404 contains the segment index value corresponding to a first memory partition. The base address and full and empty flags in this first row corresponding to the base address value for the first memory partition and the flags indicate the status of data stored within that partition. Thus, for each memory partition the segment table 404 includes a corresponding row of values.
[36] The controller state machine 402 provides the base address (designated BA) of the memory partition to which data is to be written to a write state machine 404, as represented by a write base address box 406. The write state machine 404 triggers the controller state machine 402 to start generating memory addresses once the base address BA is applied, as represented by the box 408. The controller state machine 402 also determines whether the base address BA is valid for the given memory partition to which data is being written, as represented by the box 410. [37] During a write operation, the write state machine 404 provides an input read request to the input FIFO 300 (FIG. 3) as represented by box 412. In response to this input read request, the input FIFO 300 provides write data to be written to the memory subsystem 212 to the controller 210, as represented by box 414. Along with the write data 414, the controller 210 generates a write command or request 416. The write data 414 and request 416 define two of the three components that must be supplied to the memory subsystem 212 to access the correct storage locations, with the third component being the current write address CWA as represented by box 418.
[38] The write state machine 404 generates a write address WA that equals the applied base address BA plus a write offset value WOV stored in a write offset register 420 (WA = BA + WOV). The write offset register 420 is one of the components in the attachable behaviors circuitry 400 that enables the circuitry to generate the desired pattern of memory addresses to achieve the desired memory behavior.
[39] The attachable behaviors circuitry 400 further includes a stride value register 422 for storing a stride value S1, where the stride value is a number to be added to a previous memory address to obtain the current memory address, as previously described with reference to FIG. 1. A register 424 stores a number of times value N1 indicating the number of times the stride value S1 is to be added to the current address. A register 426 stores a total count value TC indicating the total number of times to perform the stride value S1 for the given number of times parameter N1. The attachable behaviors circuitry 400 further includes a write IOCB RAM 428 that stores an array of values for the stride S1, number of times N1, and total count TC values. Each row of the array stored in the write IOCB RAM 428 stores a respective stride S1, number of times N1, and total count TC value, with the values for these parameters from one of the rows being stored in the registers 422-426 during operation of the memory controller 210. A write index register 430 stores an index value which determines which row of stride S1, number of times N1, and total count TC values are stored in the registers 422-426. In this way, during operation of the memory controller 210 the host processor 204 (FIG. 2) can change the index value stored in the register 430 to thereby change the values S1, N1, TC stored in the registers 422-426. Note that the write offset value WOV stored in the register 420 could also be stored in the write IOCB RAM 428 and loaded into this register as are the S1, N1, and TC values. Also note that the values S1, N1, TC could be loaded directly into the registers 422-426 in other embodiments of the present invention, and thus the write IOCB RAM 428 is not required in all embodiments.
[40] A summing circuit 432 sums the stride value S1 with the current write address CWA, and this sum is applied to a multiplexer 434. During the first access of the memory subsystem 212, the multiplexer 434 outputs the write address WA from the write state machine 404 as the current write address CWA. Thereafter, the multiplexer 434 outputs this sum of the current write address CWA plus the stride value S1 from the summing circuit 432 as the new current write address. The memory controller 210 applies the current write address CWA along with the write request 416 and write data 414 to the memory interface adapter 306 which, as previously described, generates control signals 308 that are applied through a physical control layer 310 (FIG. 3) to access the proper storage locations in the memory subsystem 212 (FIG. 3).
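The write-address path of FIG. 4 can be modeled behaviorally as follows; Python is used only for illustration. Treating the total count TC as an outer repeat of the N1-long stride run, and carrying the address from one run into the next, are assumptions, since the text does not pin those details down.

```python
def write_addresses(base_address, write_offset, stride, n_times, total_count):
    """Behavioral model of the FIG. 4 write-address generation.

    On the first access the multiplexer passes WA = BA + WOV as the current
    write address CWA; afterwards the summing circuit adds the stride S1 to
    CWA.  S1 is added N1 times per run, and the run is performed TC times
    (how the address carries between runs is an assumption)."""
    cwa = base_address + write_offset            # WA = BA + WOV
    yield cwa
    for _ in range(total_count):                 # TC runs
        for _ in range(n_times):                 # S1 added N1 times per run
            cwa += stride                        # summing circuit 432
            yield cwa

# IOCB-style parameter row selected by the write index (example values only):
iocb_row = {"S1": 10, "N1": 7, "TC": 1}
addresses = list(write_addresses(base_address=0x00, write_offset=0x02,
                                 stride=iocb_row["S1"],
                                 n_times=iocb_row["N1"],
                                 total_count=iocb_row["TC"]))
print([hex(a) for a in addresses])
# ['0x2', '0xc', '0x16', '0x20', '0x2a', '0x34', '0x3e', '0x48'], i.e. row 3 of FIG. 1
```

Changing the write index register 430 while a program runs would simply select a different (S1, N1, TC) row, which is the run-time flexibility the text contrasts with conventional controllers whose stride is fixed before operation.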
[41] The embodiment of the attachable behaviors circuit 400 shown in FIG. 4 is merely an example that provides the memory controller 210 with a striding memory behavior. In other embodiments, the attachable behaviors circuit 400 will include different or additional components to provide the memory controller 210 with all desired memory behaviors. The number and type of registers along with the particular parameters stored in the write IOCB RAM 428 will of course vary depending upon the type of memory behavior.
[42] FIG. 5 is a more detailed functional block diagram of another embodiment of the memory controller 210 of FIG. 3. In the embodiment of FIG. 5, the controller 210 includes attachable behaviors circuitry 500, which once again is hardware implemented address-generation circuitry that enables the controller to perform desired memory behaviors. All components in the embodiment of FIG. 5 that are the same as those previously described with reference to FIG. 4 have been given the same reference numbers, and for the sake of brevity will not again be described in detail. The attachable behaviors circuitry 500 works in combination with an existing general write state machine 502 contained in the memory controller 210, in contrast to the embodiment of FIG. 4 in which the write state machine 404 is modified to perform regular memory accesses along with the desired memory behaviors. Thus, the attachable behaviors circuitry 500 includes an attachable state machine 504 that works in combination with the general write state machine 502 to provide the desired memory accesses. The embodiment of FIG. 5 would typically be an easier design to implement since the presumably known operable general write state machine 502 would already exist and no modifications to this known functional component are made. In contrast, in the embodiment of FIG. 4 the write state machine 404 is a modified version of a general write state machine. [43] One example memory pattern is for the case of a triangular matrix where the matrix is stored with no wasted memory space. The example embodiment of the attachable behaviors circuitry 500 in FIG. 5 includes a number of stride value registers 506a-n, a plurality of number of times registers 508a-n, a total count register 510, and a plurality of partial count registers 512a-n. The write IOCB RAM 428 stores values S1-Sn, N1-Nn, PC1-PCn, and TC that are loaded through a configuration adapter 514. The configuration adapter 514 loads these values into the write IOCB RAM 428 from values supplied either from software running on the host processor 204 (FIG. 2) or from a pipeline unit contained in the pipeline accelerator 206 (FIG. 2). The values S1-Sn, N1-Nn, PC1-PCn, and TC and components in the attachable behaviors circuitry 500 operate in combination to provide a set of regular memory accesses, such as may occur where the data structure being accessed in the memory subsystem 212 is a sparse array. [44] Contrasting FIG. 4 and FIG. 5 illustrates the need for being able to insert new or different circuits exhibiting different behaviors into the persistent function of the memory controller through a standard, well-established, well-defined interface. New implementations of memory behavior can be achieved by the designer as long as they comply with the standard attachable behavior interface.
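One way the multiple stride and count registers of FIG. 5 could combine is sketched below for the packed triangular-matrix case mentioned in paragraph [43]; the exact interplay of S1-Sn, N1-Nn, PC1-PCn, and TC is not spelled out above, so this nested-stride reading is an assumption rather than the circuit's defined behavior.

```python
def multi_stride_addresses(base_address, stride_plan, total_count=None):
    """Apply a sequence of (stride, times) pairs, i.e. S1/N1, S2/N2, ...,
    optionally capped by a total count TC.  This is one plausible reading of
    the FIG. 5 registers, not the application's specified behavior."""
    address = base_address
    emitted = [address]
    for stride, times in stride_plan:
        for _ in range(times):
            address += stride
            emitted.append(address)
            if total_count is not None and len(emitted) >= total_count:
                return emitted
    return emitted

# A 5x5 lower-triangular matrix stored packed, row after row, with no wasted
# space puts row r at offset r*(r-1)/2.  Walking down the first column visits
# offsets 0, 1, 3, 6, 10, i.e. strides of 1, 2, 3, 4: a varying pattern that a
# single constant stride register cannot express.
plan = [(1, 1), (2, 1), (3, 1), (4, 1)]
print(multi_stride_addresses(0, plan))       # [0, 1, 3, 6, 10]
```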
[45] FIG. 6 is a more detailed functional block diagram of a peer vector machine 40 that may be included in the system 200 of FIG. 2 according to one embodiment of the present invention. The peer vector machine 40 includes a host processor 42 corresponding to the host processor 204 of FIG. 2 and a pipeline accelerator 44 corresponding to the pipeline accelerator 206 of FIG. 2. The host processor 42 communicates with the pipeline accelerator 44 through a pipeline bus 50 that corresponds to the communications channel 208 of FIG. 2. Data is communicated over the pipeline bus 50 according to an industry standard interface in one embodiment of the present invention, which facilitates the design and modification of the machine 40.
[46] The peer vector machine 40 generally and the host processor 42 and pipeline accelerator 44 more specifically are described in more detail in U.S. Patent Appln. No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), Appln. No. 10/684,053 entitled COMPUTING MACHINE HAVING IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-12-3), Appln. No. 10/683,929 entitled
PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-13-3), Appln. No. 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-14-3), and 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3), all of which have a common filing date of October 9, 2003 and a common owner and which are incorporated herein by reference. [47] In addition to the host processor 42 and the pipeline accelerator 44, the peer vector computing machine 40 includes a processor memory 46, an interface memory 48, a bus 50, a firmware memory 52, an optional raw-data input port 54, a processed-data output port 58, and an optional router 61.
[48] The host processor 42 includes a processing unit 62 and a message handler 64, and the processor memory 46 includes a processing-unit memory 66 and a handler memory 68, which respectively serve as both program and working memories for the processor unit and the message handler. The processor memory 46 also includes an accelerator-configuration registry 70 and a message-configuration registry 72, which store respective configuration data that allow the host processor 42 to configure the functioning of the accelerator 44 and the format of the messages that the message handler 64 sends and receives. [49] The pipeline accelerator 44 is disposed on at least one programmable logic integrated circuit (PLIC) (not shown) and includes hardwired pipelines 74i - 74n, which process respective data without executing program instructions. The firmware memory 52 stores the configuration firmware for the accelerator 44. If the accelerator 44 is disposed on multiple PLICs, these PLICs and their respective firmware memories may be disposed in multiple pipeline units (FIG. 6). The accelerator 44 and pipeline units are discussed further below and in previously cited U.S. Patent Appln. No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3). Alternatively, the accelerator 44 may be disposed on at least one application specific integrated circuit (ASIC), and thus may have internal interconnections that are not configurable. In this alternative, the machine 40 may omit the firmware memory 52. Furthermore, although the accelerator 44 is shown including multiple pipelines 74, it may include only a single pipeline. In addition, although not shown, the accelerator 44 may include one or more processors such as a digital- signal processor (DSP).
[50] FIG. 7 is a more detailed block diagram of the pipeline accelerator 44 of FIG. 4 according to one embodiment of the present invention. The accelerator 44 includes one or more pipeline units 78, one of which is shown in FIG. 7. Each pipeline unit 78 includes a pipeline circuit 80, such as a PLIC or an ASIC. As discussed further below and in previously cited U.S. Patent App. Serial No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3), each pipeline unit 78 is a "peer" of the host processor 42 and of the other pipeline units of the accelerator 44. That is, each pipeline unit 78 can communicate directly with the host processor 42 or with any other pipeline unit. Thus, this peer-vector architecture prevents data "bottlenecks" that otherwise might occur if all of the pipeline units 78 communicated through a central location such as a master pipeline unit (not shown) or the host processor 42. Furthermore, it allows one to add or remove peers from the peer-vector machine 40 (FIG. 6) without significant modifications to the machine.
[51] The pipeline circuit 80 includes a communication interface 82, which transfers data between a peer, such as the host processor 42 (FIG. 6), and the following other components of the pipeline circuit: the hardwired pipelines 74i-74n (FIG. 6) via a communication shell 84, a controller 86, an exception manager 88, and a configuration manager 90. The pipeline circuit 80 may also include an industry-standard bus interface 91. Alternatively, the functionality of the interface 91 may be included within the communication interface 82. Where a bandwidth- enhancement technique such as xDSL is utilized to increase the effective bandwidth of the pipeline bus 50, the communication interface 82 and bus interface 91 are modified as necessary to implement the bandwidth-enhancement technique, as will be appreciated by those skilled in the art. [52] The communication interface 82 sends and receives data in a format recognized by the message handler 64 (FIG. 6), and thus typically facilitates the design and modification of the peer-vector machine 40 (FIG. 6). For example, if the data format is an industry standard such as the Rapid I/O format, then one need not design a custom interface between the host processor 42 and the accelerator 44. Furthermore, by allowing the pipeline circuit 80 to communicate with other peers, such as the host processor 42 (FIG. 6), via the pipeline bus 50 instead of via a non-bus interface, one can change the number of pipeline units 78 by merely connecting or disconnecting them (or the circuit cards that hold them) to the pipeline bus instead of redesigning a non-bus interface from scratch each time a pipeline unit is added or removed.
[53] The hardwired pipelines 74i-74n perform respective operations on data as discussed above in conjunction with FIG. 6 and in previously cited U.S. Patent App. Serial No. 10/684,102 entitled IMPROVED COMPUTING
ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), and the communication shell 84 interfaces the pipelines to the other components of the pipeline circuit 80 and to circuits (such as a data memory 92 discussed below) external to the pipeline circuit. [54] The controller 86 synchronizes the hardwired pipelines 74i-74n and monitors and controls the sequence in which they perform the respective data operations in response to communications, i.e., "events," from other peers. For example, a peer such as the host processor 42 may send an event to the pipeline unit 78 via the pipeline bus 50 to indicate that the peer has finished sending a block of data to the pipeline unit and to cause the hardwired pipelines 74i-74n to begin processing this data. An event that includes data is typically called a message, and an event that does not include data is typically called a "door bell." Furthermore, as discussed below in conjunction with FIG. 8, the pipeline unit 78 may also synchronize the pipelines 74i-74n in response to a synchronization signal.
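The message/door-bell distinction drawn in paragraph [54] can be captured with a simple data type. This is only an illustrative sketch; the field names (instance_id, payload) are assumptions rather than names used in the disclosure.

```python
# Minimal sketch of the event taxonomy described above: a "door bell" is an
# event with no payload, and a message is an event that carries data.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    instance_id: int            # which hardwired pipeline the event targets (assumed field)
    payload: Optional[bytes]    # None for a door bell, data bytes for a message

    @property
    def is_doorbell(self) -> bool:
        return self.payload is None

done_signal = Event(instance_id=1, payload=None)          # door bell
data_block = Event(instance_id=1, payload=b"\x00" * 64)   # message
print(done_signal.is_doorbell, data_block.is_doorbell)    # True False
```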
[55] The exception manager 88 monitors the status of the hardwired pipelines 74i-74n, the communication interface 82, the communication shell 84, the controller 86, and the bus interface 91, and reports exceptions to the host processor 42 (FIG. 6). For example, if a buffer in the communication interface 82 overflows, then the exception manager 88 reports this to the host processor 42. The exception manager may also correct, or attempt to correct, the problem giving rise to the exception. For example, for an overflowing buffer, the exception manager 88 may increase the size of the buffer, either directly or via the configuration manager 90 as discussed below.
[56] The configuration manager 90 sets the soft configuration of the hardwired pipelines 74i-74n, the communication interface 82, the communication shell 84, the controller 86, the exception manager 88, and the interface 91 in response to soft-configuration data from the host processor 42 (FIG. 6). As discussed in previously cited U.S. Patent App. Serial No. 10/684,102 entitled IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-11-3), the hard configuration denotes the actual topology, on the transistor and circuit-block level, of the pipeline circuit 80, and the soft configuration denotes the physical parameters (e.g., data width, table size) of the hard-configured components. That is, soft-configuration data is similar to the data that can be loaded into a register of a processor (not shown in FIG. 4) to set the operating mode (e.g., burst-memory mode) of the processor. For example, the host processor 42 may send soft-configuration data that causes the configuration manager 90 to set the number and respective priority levels of queues in the communication interface 82. The exception manager 88 may also send soft-configuration data that causes the configuration manager 90 to, e.g., increase the size of an overflowing buffer in the communication interface 82. [57] Still referring to FIG. 7, in addition to the pipeline circuit 80, the pipeline unit 78 of the accelerator 44 includes the data memory 92, an optional communication bus 94, and, if the pipeline circuit is a PLIC, the firmware memory 52 (FIG. 4). The data memory 92 buffers data as it flows between another peer, such as the host processor 42 (FIG. 6), and the hardwired pipelines 74i-74n, and is also a working memory for the hardwired pipelines. The data memory 92 corresponds to the memory subsystem 212 of FIG. 2. The communication interface 82 interfaces the data memory 92 to the pipeline bus 50 (via the communication bus 94 and industry-standard interface 91 if present), and the communication shell 84 interfaces the data memory to the hardwired pipelines 74i-74n.
[58] The industry-standard interface 91 is a conventional bus-interface circuit that reduces the size and complexity of the communication interface 82 by effectively offloading some of the interface circuitry from the communication interface. Therefore, if one wishes to change the parameters of the pipeline bus 50 or router 61 (FIG. 6), then he need only modify the interface 91 and not the communication interface 82. Alternatively, one may dispose the interface 91 in an IC (not shown) that is external to the pipeline circuit 80. Offloading the interface 91 from the pipeline circuit 80 frees up resources on the pipeline circuit for use in, e.g., the hardwired pipelines 74i-74n and the controller 86. Or, as discussed above, the bus interface 91 may be part of the communication interface 82.
[59] As discussed above in conjunction with FIG. 6, where the pipeline circuit 80 is a PLIC, the firmware memory 52 stores the firmware that sets the hard configuration of the pipeline circuit. The memory 52 loads the firmware into the pipeline circuit 80 during the configuration of the accelerator 44, and may receive modified firmware from the host processor 42 (FIG. 6) via the communication interface 82 during or after the configuration of the accelerator. The loading and receiving of firmware is further discussed in previously cited U.S. Patent App. Serial No. 10/684,057 entitled PROGRAMMABLE CIRCUIT AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934- 14-3).
[60] Still referring to FIG. 7, the pipeline circuit 80, data memory 92, and firmware memory 52 may be disposed on a circuit board or card 98, which may be plugged into a pipeline-bus connector (not shown) much like a daughter card can be plugged into a slot of a mother board in a personal computer (not shown). Although not shown, conventional ICs and components such as a power regulator and a power sequencer may also be disposed on the card 98 as is known. The sensors 36 could also include suitable cards that plug into slots and include wiring or other required components for coupling such a card to the actual transducer portion of each sensor. One such card could be associated with each sensor 36, or each sensor could include a respective card.
[61] Further details of the structure and operation of the pipeline unit 78 will now be discussed in conjunction with FIG. 8. FIG. 8 is a block diagram of the pipeline unit 78 of FIG. 6 according to an embodiment of the invention. For clarity, the firmware memory 52 is omitted from FIG. 8. The pipeline circuit 80 receives a master CLOCK signal, which drives the below-described components of the pipeline circuit either directly or indirectly. The pipeline circuit 80 may generate one or more slave clock signals (not shown) from the master CLOCK signal in a conventional manner. The pipeline circuit 80 may also receive a synchronization signal SYNC as discussed below. The data memory 92 includes an input dual-port-static-random-access memory (DPSRAM) 100, an output DPSRAM 102, and an optional working DPSRAM 104.
[62] The input DPSRAM 100 includes an input port 106 for receiving data from a peer, such as the host processor 42 (FIG. 6), via the communication interface 82, and includes an output port 108 for providing this data to the hardwired pipelines 74i-74n via the communication shell 84. Having two ports, one for data input and one for data output, increases the speed and efficiency of data transfer to/from the DPSRAM 100 because the communication interface 82 can write data to the DPSRAM while the pipelines 74i-74n read data from the DPSRAM. Furthermore, as discussed above, using the DPSRAM 100 to buffer data from a peer such as the host processor 42 allows the peer and the pipelines 74i-74n to operate asynchronously relative to one another. That is, the peer can send data to the pipelines 74i-74n without "waiting" for the pipelines to complete a current operation. Likewise, the pipelines 74i-74n can retrieve data without "waiting" for the peer to complete a data-sending operation.
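A minimal behavioral model of the dual-port decoupling described in paragraph [62] is sketched below. It models only the two independent ports, not DPSRAM timing, arbitration, or word widths; the class and method names are illustrative assumptions.

```python
# Behavioral sketch of a dual-port SRAM used as an input buffer: one port is
# written by the communication interface while the other is read by the
# pipelines, so neither side waits on the other. Only the decoupling is
# illustrated here; electrical and timing details are not modeled.

class DualPortRAM:
    def __init__(self, depth):
        self.cells = [None] * depth

    # Port A: used by the writer (e.g., the communication interface).
    def write(self, addr, value):
        self.cells[addr] = value

    # Port B: used by the readers (e.g., the hardwired pipelines).
    def read(self, addr):
        return self.cells[addr]

buf = DualPortRAM(depth=1024)
buf.write(0x10, 0xDEADBEEF)     # writer side
value = buf.read(0x10)          # reader side, independent of the writer
```

Because the write port and the read port are independent, the producer and consumer in the sketch never block each other, which mirrors the asynchronous operation described above.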
[63] Similarly, the output DPSRAM 102 includes an input port 110 for receiving data from the hardwired pipelines 74i-74n via the communication shell 84, and includes an output port 112 for providing this data to a peer, such as the host processor 42 (FIG. 6), via the communication interface 82. As discussed above, the two data ports 110 (input) and 112 (output) increase the speed and efficiency of data transfer to/from the DPSRAM 102, and using the DPSRAM 102 to buffer data from the pipelines 74i-74n allows the peer and the pipelines to operate asynchronously relative to one another. That is, the pipelines 74i-74n can publish data to the peer without "waiting" for the output-data handler 126 to complete a data transfer to the peer or to another peer. Likewise, the output-data handler 126 can transfer data to a peer without "waiting" for the pipelines 74i-74n to complete a data-publishing operation.
[64] The working DPSRAM 104 includes an input port 114 for receiving data from the hardwired pipelines 74i-74n via the communication shell 84, and includes an output port 116 for returning this data back to the pipelines via the communication shell. While processing input data received from the DPSRAM 100, the pipelines 74i-74n may need to temporarily store partially processed, i.e., intermediate, data before continuing the processing of this data. For example, a first pipeline, such as the pipeline 74i, may generate intermediate data for further processing by a second pipeline, such as the pipeline 742; thus, the first pipeline may need to temporarily store the intermediate data until the second pipeline retrieves it. The working DPSRAM 104 provides this temporary storage. As discussed above, the two data ports 114 (input) and 116 (output) increase the speed and efficiency of data transfer between the pipelines 74i-74n and the DPSRAM 104. Furthermore, including a separate working DPSRAM 104 typically increases the speed and efficiency of the pipeline circuit 80 by allowing the DPSRAMs 100 and 102 to function exclusively as data-input and data-output buffers, respectively. But, with slight modification to the pipeline circuit 80, either or both of the DPSRAMs 100 and 102 can also be a working memory for the pipelines 74i-74n when the DPSRAM 104 is omitted, and even when it is present.
[65] Although the DPSRAMS 100, 102, and 104 are described as being external to the pipeline circuit 80, one or more of these DPSRAMS, or equivalents thereto, may be internal to the pipeline circuit.
[66] Still referring to FIG. 8, the communication interface 82 includes an industry-standard bus adapter 118, an input-data handler 120, input-data and input-event queues 122 and 124, an output-data handler 126, and output-data and output-event queues 128 and 130. Although the queues 122, 124, 128, and 130 are shown as single queues, one or more of these queues may include sub-queues (not shown) that allow segregation by, e.g., priority, of the values stored in the queues or of the respective data that these values represent.
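The sub-queue idea noted in paragraph [66] can be sketched as a logical queue split into per-priority sub-queues drained highest priority first. The number of priority levels and the draining policy are illustrative assumptions, not part of the disclosure.

```python
# Sketch of a queue segregated into priority sub-queues (assumed: two levels,
# strict priority, level 0 highest).
from collections import deque

class PriorityQueueSet:
    def __init__(self, levels=2):
        self.sub_queues = [deque() for _ in range(levels)]  # index 0 = highest

    def push(self, item, priority=0):
        self.sub_queues[priority].append(item)

    def pop(self):
        for q in self.sub_queues:      # scan from highest to lowest priority
            if q:
                return q.popleft()
        return None                    # every sub-queue is empty

q = PriorityQueueSet(levels=2)
q.push("routine pointer", priority=1)
q.push("urgent pointer", priority=0)
print(q.pop())    # 'urgent pointer'
```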
[67] The industry-standard bus adapter 118 includes the physical layer that allows the transfer of data between the pipeline circuit 80 and the pipeline bus 50 (FIG. 6) via the communication bus 94 (FIG. 7). Therefore, if one wishes to change the parameters of the bus 94, then he need only modify the adapter 118 and not the entire communication interface 82. Where the industry-standard bus interface 91 is omitted from the pipeline unit 78, then the adapter 118 may be modified to allow the transfer of data directly between the pipeline bus 50 and the pipeline circuit 80. In this latter implementation, the modified adapter 118 includes the functionality of the bus interface 91, and one need only modify the adapter 118 if he/she wishes to change the parameters of the bus 50. For example, where a bandwidth-enhancement technique such as ADSL is utilized to communicate data over the bus 50 the adapter 118 is modified accordingly to implement the bandwidth-enhancement technique.
[68] The input-data handler 120 receives data from the industry-standard adapter 118, loads the data into the DPSRAM 100 via the input port 106, and generates and stores a pointer to the data and a corresponding data identifier in the input-data queue 122. If the data is the payload of a message from a peer, such as the host processor 42 (FIG. 3), then the input-data handler 120 extracts the data from the message before loading the data into the DPSRAM 100. The input-data handler 120 includes an interface 132, which writes the data to the input port 106 of the DPSRAM 100 and which is further discussed below in conjunction with FIG. 6. Alternatively, the input-data handler 120 can omit the extraction step and load the entire message into the DPSRAM 100. The input-data handler 120 also receives events from the industry-standard bus adapter 118, and loads the events into the input-event queue 124.
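The pointer-plus-identifier bookkeeping described in paragraph [68] can be sketched as follows. The simple bump allocator for free DPSRAM space and the (pointer, identifier) tuple format are assumptions made for the example, not the disclosed implementation.

```python
# Sketch of the input-data path: the handler writes the payload into the
# input buffer and records a (pointer, identifier) pair in the input-data
# queue for the pipelines to consume later.
from collections import deque

class InputDataHandler:
    def __init__(self, dpsram_depth):
        self.dpsram = [None] * dpsram_depth   # stands in for DPSRAM 100
        self.next_free = 0                    # trivial bump allocator (assumed)
        self.input_data_queue = deque()       # stands in for queue 122

    def load(self, payload, data_id):
        pointer = self.next_free
        for offset, word in enumerate(payload):
            self.dpsram[pointer + offset] = word
        self.next_free += len(payload)
        self.input_data_queue.append((pointer, data_id))

handler = InputDataHandler(dpsram_depth=4096)
handler.load(payload=[1, 2, 3, 4], data_id=7)
print(handler.input_data_queue)   # deque([(0, 7)])
```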
[69] Furthermore, the input-data handler 120 includes a validation manager 134, which determines whether received data or events are intended for the pipeline circuit 80. The validation manager 134 may make this determination by analyzing the header (or a portion thereof) of the message that contains the data or the event, by analyzing the type of data or event, or by analyzing the instance identification (i.e., the hardwired pipeline 74 for which the data/event is intended) of the data or event. If the input-data handler 120 receives data or an event that is not intended for the pipeline circuit 80, then the validation manager 134 prohibits the input-data handler from loading the received data/event. Where the peer-vector machine 40 includes the router 61 (FIG. 3) such that the pipeline unit 78 should receive only data/events that are intended for the pipeline unit, the validation manager 134 may also cause the input-data handler 120 to send to the host processor 42 (FIG. 3) an exception message that identifies the exception (erroneously received data/event) and the peer that caused the exception. [70] The output-data handler 126 retrieves processed data from locations of the DPSRAM 102 pointed to by the output-data queue 128, and sends the processed data to one or more peers, such as the host processor 42 (FIG. 3), via the industry-standard bus adapter 118. The output-data handler 126 includes an interface 136, which reads the processed data from the DPSRAM 102 via the port 112. The interface 136 is further discussed below in conjunction with FIG. 10. The output-data handler 126 also retrieves from the output-event queue 130 events generated by the pipelines 74i-74n, and sends the retrieved events to one or more peers, such as the host processor 42 (FIG. 6), via the industry-standard bus adapter 118.
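The validation check described in paragraph [69] can be sketched as a simple membership test on the instance identification, with rejected items logged as exception records for the host. The structure of the received item and of the exception record is assumed for illustration.

```python
# Sketch of the validation step: an item is accepted only if its instance
# identification matches a pipeline hosted by this pipeline circuit;
# otherwise it is dropped and an exception record is produced for the host.

HOSTED_PIPELINES = {1, 2, 3}      # instance IDs served by this circuit (assumed)

def validate(item, exceptions):
    """Return True if the item is intended for this circuit, else log it."""
    if item["instance_id"] in HOSTED_PIPELINES:
        return True
    exceptions.append({
        "kind": "erroneously_received",
        "instance_id": item["instance_id"],
        "source_peer": item.get("source_peer"),
    })
    return False

exc_log = []
validate({"instance_id": 9, "source_peer": "peer-4"}, exc_log)
print(exc_log)    # one exception record identifying the offending peer
```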
[71] Furthermore, the output-data handler 126 includes a subscription manager 138, which includes a list of peers, such as the host processor 42 (FIG. 6), that subscribe to the processed data and to the events; the output-data handler uses this list to send the data/events to the correct peers. If a peer prefers the data/event to be the payload of a message, then the output-data handler 126 retrieves the network or bus-port address of the peer from the subscription manager 138, generates a header that includes the address, and generates the message from the data/event and the header.
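The subscription-driven fan-out described in paragraph [71] can be sketched as a lookup from a data topic to its subscribing peers, with a header generated for peers that expect a message. The topic name, peer records, and header fields below are illustrative assumptions.

```python
# Sketch of subscriber fan-out: the output-data handler looks up the peers
# subscribed to an item and, for peers that want a message, prepends a header
# holding the peer's bus/network address.

subscriptions = {
    "range_bins": [
        {"address": 0x10, "wants_message": True},
        {"address": 0x22, "wants_message": False},
    ],
}

def publish(topic, payload, send):
    for peer in subscriptions.get(topic, []):
        if peer["wants_message"]:
            header = {"dest": peer["address"], "length": len(payload)}
            send(peer["address"], {"header": header, "payload": payload})
        else:
            send(peer["address"], payload)

publish("range_bins", b"\x01\x02", send=lambda addr, item: print(addr, item))
```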
[72] Although the technique for storing and retrieving data stored in the DPSRAMS 100 and 102 involves the use of pointers and data identifiers, one may modify the input- and output-data handlers 120 and 126 to implement other data-management techniques. Conventional examples of such data-management techniques include pointers using keys or tokens, input/output control (IOC) block, and spooling. [73] The communication shell 84 includes a physical layer that interfaces the hardwired pipelines 74i-74n to the output-data queue 128, the controller 86, and the DPSRAMs 100, 102, and 104. The shell 84 includes interfaces 140 and 142, and optional interfaces 144 and 146. The interfaces 140 and 146 may be similar to the interface 136; the interface 140 reads input data from the DPSRAM 100 via the port 108, and the interface 146 reads intermediate data from the
DPSRAM 104 via the port 116. The interfaces 142 and 144 may be similar to the interface 132; the interface 142 writes processed data to the DPSRAM 102 via the port 110, and the interface 144 writes intermediate data to the DPSRAM 104 via the port 114.
[74] The controller 86 includes a sequence manager 148 and a synchronization interface 150, which receives one or more synchronization signals SYNC. A peer, such as the host processor 42 (FIG. 6), or a device (not shown) external to the peer-vector machine 40 (FIG. 6) may generate the SYNC signal, which triggers the sequence manager 148 to activate the hardwired pipelines 74i-74n as discussed below and in previously cited U.S. Patent App. Serial No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3). The synchronization interface 150 may also generate a SYNC signal to trigger the pipeline circuit 80 or to trigger another peer. In addition, the events from the input-event queue 124 also trigger the sequence manager 148 to activate the hardwired pipelines 74i-74n as discussed below. [75] The sequence manager 148 sequences the hardwired pipelines
74i-74n through their respective operations via the communication shell 84. Typically, each pipeline 74 has at least three operating states: preprocessing, processing, and post-processing. During preprocessing, the pipeline 74, e.g., initializes its registers and retrieves input data from the DPSRAM 100. During processing, the pipeline 74, e.g., operates on the retrieved data, temporarily stores intermediate data in the DPSRAM 104, retrieves the intermediate data from the DPSRAM 104, and operates on the intermediate data to generate result data. During post-processing, the pipeline 74, e.g., loads the result data into the DPSRAM 102. Therefore, the sequence manager 148 monitors the operation of the pipelines 74i-74n and instructs each pipeline when to begin each of its operating states. And one may distribute the pipeline tasks among the operating states differently than described above. For example, the pipeline 74 may retrieve input data from the DPSRAM 100 during the processing state instead of during the preprocessing state. [76] Furthermore, the sequence manager 148 maintains a predetermined internal operating synchronization among the hardwired pipelines 74i-74n. For example, to avoid all of the pipelines 74i-74n simultaneously retrieving data from the DPSRAM 100, it may be desired to synchronize the pipelines such that while the first pipeline 74i is in a preprocessing state, the second pipeline 742 is in a processing state and the third pipeline 743 is in a post-processing state. Because a state of one pipeline 74 may require a different number of clock cycles than a concurrently performed state of another pipeline, the pipelines 74i-74n may lose synchronization if allowed to run freely. Consequently, at certain times there may be a "bottle neck," as, for example, multiple pipelines 74 simultaneously attempt to retrieve data from the DPSRAM 100. To prevent the loss of synchronization and its undesirable consequences, the sequence manager 148 allows all of the pipelines 74 to complete a current operating state before allowing any of the pipelines to proceed to a next operating state. Therefore, the time that the sequence manager 148 allots for a current operating state is long enough to allow the slowest pipeline 74 to complete that state. Alternatively, circuitry (not shown) for maintaining a predetermined operating synchronization among the hardwired pipelines 74i-74n may be included within the pipelines themselves. [77] In addition to sequencing and internally synchronizing the hardwired pipelines 74i-74n, the sequence manager 148 synchronizes the operation of the pipelines to the operation of other peers, such as the host processor 42 (FIG. 6), and to the operation of other external devices in response to one or more SYNC signals or to an event in the input-event queue 124. [78] Typically, a SYNC signal triggers a time-critical function but requires significant hardware resources; comparatively, an event typically triggers a non-time-critical function but requires significantly fewer hardware resources. As discussed in previously cited U.S. Patent App. Serial No. 10/683,932 entitled PIPELINE ACCELERATOR HAVING MULTIPLE PIPELINE UNITS AND RELATED COMPUTING MACHINE AND METHOD (Attorney Docket No. 1934-15-3), because a SYNC signal is routed directly from peer to peer, it can trigger a function more quickly than an event, which must make its way through, e.g., the pipeline bus 50 (FIG. 6), the input-data handler 120, and the input-event queue 124.
But because they are separately routed, the SYNC signals require dedicated circuitry of the pipeline circuit 80, such as routing lines, buffers, and the SYNC interface 150. Conversely, because they use the existing data-transfer infrastructure (e.g., the pipeline bus 50 and the input-data handler 120), the events require only the dedicated input-event queue 124. Consequently, designers tend to use events to trigger all but the most time-critical functions.
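The lock-step sequencing described in paragraph [76], in which no pipeline advances to its next operating state until the slowest pipeline has finished the current one, can be modeled behaviorally. The state names and the per-pipeline cycle counts below are assumptions made for the example.

```python
# Behavioral sketch of lock-step sequencing: the time held in each operating
# state is set by the slowest pipeline, so the pipelines never drift apart.

STATES = ["preprocessing", "processing", "postprocessing"]

def run_lockstep(cycles_per_state):
    """cycles_per_state[p][s] = cycles pipeline p needs for state s."""
    total = 0
    for s, state in enumerate(STATES):
        slowest = max(p[s] for p in cycles_per_state)
        total += slowest           # everyone waits for the slowest pipeline
        print(f"{state}: held for {slowest} cycles")
    return total

# Three pipelines with unequal per-state timing.
print(run_lockstep([[4, 10, 3], [6, 8, 2], [5, 12, 4]]))   # 6 + 12 + 4 = 22
```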
[79] For some examples of function triggering and generally a more detailed description of function triggering, see Appln. No. 10/683,929 entitled PIPELINE ACCELERATOR FOR IMPROVED COMPUTING ARCHITECTURE AND RELATED SYSTEM AND METHOD (Attorney Docket No. 1934-13-3).
[80] FIG. 9 is a block diagram of the interface 142 of FIG. 8 according to an embodiment of the invention. In FIG. 9, a memory controller 152 corresponds to the memory controller 210 of FIG. 2 that is contained within the memory service layer 202 according to embodiments of the present invention. As discussed above in conjunction with FIG. 8, the interface 142 writes processed data from the hardwired pipelines 74i-74n to the DPSRAM 102. As discussed below, the structure of the interface 142 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 8) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
[81] The interface 142 includes write channels 150i-150n, one channel for each hardwired pipeline 74i-74n (FIG. 5), and includes the controller 152. For purposes of illustration, the channel 150i is discussed below, it being understood that the operation and structure of the other channels 1502-150n are similar unless stated otherwise.
[82] The channel 150i includes a write-address/data FIFO 154i and an address/data register 156i.
[83] The FIFO 154i stores the data that the pipeline 74i writes to the
DPSRAM 102, and stores the address of the location within the DPSRAM 102 to which the pipeline writes the data, until the controller 152 can actually write the data to the DPSRAM 102 via the register 156i. Therefore, the FIFO 154i reduces or eliminates the data bottleneck that may occur if the pipeline 74i had to "wait" to write data to the channel 150i until the controller 152 finished writing previous data. [84] The FIFO 154i receives the data from the pipeline 74i via a bus
158i, receives the address of the location to which the data is to be written via a bus 160i, and provides the data and address to the register 156i via busses 162i and 164i, respectively. Furthermore, the FIFO 154i receives a WRITE FIFO signal from the pipeline 74i on a line 166i, receives a CLOCK signal via a line 168i, and provides a FIFO FULL signal to the pipeline 74i on a line 170i. In addition, the FIFO 154i receives a READ FIFO signal from the controller 152 via a line 172i, and provides a FIFO EMPTY signal to the controller via a line 174i. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the busses 158i, 160i, 162i, and 164i and the lines 166i, 168i, 170i, 172i, and 174i are preferably formed using local routing resources. Typically, local routing resources are preferred to global routing resources because the signal-path lengths are generally shorter and the routing is easier to implement.
[85] The register 156i receives the data to be written and the address of the write location from the FIFO 154i via the busses 162i and 164i, respectively, and provides the data and address to the port 110 of the DPSRAM 102 (FIG. 8) via an address/data bus 176. Furthermore, the register 156i also receives the data and address from the registers 1562-156n via an address/data bus 178i as discussed below. In addition, the register 156i receives a SHIFT/LOAD signal from the controller 152 via a line 180. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the bus 176 is typically formed using global routing resources, and the busses 178i-178n-1 and the line 180 are preferably formed using local routing resources.
[86] In addition to receiving the FIFO EMPTY signal and generating the
READ FIFO and SHIFT/LOAD signals, the controller 152 provides a WRITE DPSRAM signal to the port 110 of the DPSRAM 102 (FIG. 8) via a line 182.
[87] Still referring to FIG. 9, the operation of the interface 142 is discussed.
[88] First, the FIFO 154i drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO.
[89] Next, if the FIFO 154i is not full and the pipeline 74i has processed data to write, the pipeline drives the data and corresponding address onto the busses 158i and 160i, respectively, and asserts the WRITE FIFO signal, thus loading the data and address into the FIFO. If the FIFO 154i is full, however, the pipeline 74i waits until the FIFO is not full before loading the data. [90] Then, the FIFO 154i drives the FIFO EMPTY signal to the logic level corresponding to the current state ("empty" or "not empty") of the FIFO.
[91] Next, if the FIFO 154i is not empty, the controller 152 asserts the
READ FIFO signal and drives the SHIFT/LOAD signal to the load logic level, thus loading the first-loaded data and address from the FIFO into the register 156i. If the FIFO 154i is empty, the controller 152 does not assert READ FIFO, but does drive SHIFT/LOAD to the load logic level if any of the other FIFOs 1542-154n are not empty.
[92] The channels 1502-150n operate in a similar manner such that first-loaded data in the FIFOs 1542-154n are respectively loaded into the registers 1562-156n.
[93] Then, the controller 152 drives the SHIFT/LOAD signal to the shift logic level and asserts the WRITE DPSRAM signal, thus serially shifting the data and addresses from the registers 156i - 156n onto the address/data bus 176 and loading the data into the corresponding locations of the DPSRAM 102.
Specifically, during a first shift cycle, the data and address from the register 156i are shifted onto the bus 176 such that the data from the FIFO 154i is loaded into the addressed location of the DPSRAM 102. Also during the first shift cycle, the data and address from the register 1562 are shifted into the register 156i, the data and address from the register 1563 (not shown) are shifted into the register 1562, and so on. During a second shift cycle, the data and address from the register 156i are shifted onto the bus 176 such that the data from the FIFO 1542 is loaded into the addressed location of the DPSRAM 102. Also during the second shift cycle, the data and address from the register 1562 are shifted into the register 156i, the data and address from the register 1563 (not shown) are shifted into the register 1562, and so on. There are n shift cycles, and during the nth shift cycle the data and address from the register 156n (which are the data and address from the FIFO 154n) are shifted onto the bus 176. The controller 152 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 156i-156n. Furthermore, if one of the registers 156i-156n is empty during a particular shift operation because its corresponding FIFO 154i-154n was empty when the controller 152 loaded the register, then the controller may bypass the empty register, and thus shorten the shift operation by avoiding shifting null data and a null address onto the bus 176.
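The load-then-shift write path described in paragraphs [88]-[93] can be modeled behaviorally as follows. The model captures only the ordering (one FIFO entry per channel loaded into a register, then one DPSRAM write per shift cycle with empty registers bypassed), not the SHIFT/LOAD signalling or timing; the data structures are illustrative assumptions.

```python
# Behavioral sketch of the write interface: each channel's FIFO feeds an
# address/data register, and the controller serially drains the registers
# onto a single address/data bus, writing one DPSRAM location per shift cycle
# and skipping registers whose FIFO was empty.
from collections import deque

def write_cycle(channel_fifos, dpsram):
    """One load-then-shift pass over all write channels."""
    # Load phase: the first-loaded entry of each non-empty FIFO moves into
    # its register (None marks a register left empty).
    registers = [fifo.popleft() if fifo else None for fifo in channel_fifos]
    # Shift phase: registers drain serially onto the shared bus; empty
    # registers are bypassed so no null data is written.
    for entry in registers:
        if entry is None:
            continue
        addr, data = entry
        dpsram[addr] = data          # one DPSRAM write per shift cycle

dpsram = {}
fifos = [deque([(0x100, "p1-result")]), deque(), deque([(0x104, "p3-result")])]
write_cycle(fifos, dpsram)
print(dpsram)    # addresses 0x100 and 0x104 now hold the two results
```

Each call to write_cycle corresponds to one complete load-and-shift pass over the write channels; repeated calls drain the FIFOs as the pipelines keep producing data.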
[94] Referring to FIGS. 8 and 9, according to an embodiment of the invention, the interface 144 is similar to the interface 142, and the interface 132 is also similar to the interface 142 except that the interface 132 includes only one write channel 150.
[95] FIG. 10 is a block diagram of the interface 140 of FIG. 8 according to an embodiment of the invention. In FIG. 10, a memory controller 192 corresponds to the memory controller 210 of FIG. 2 that is contained in the memory service layer 202 according to embodiments of the present invention. As discussed above in conjunction with FIG. 8, the interface 140 reads input data from the DPSRAM 100 and transfers this data to the hardwired pipelines 74i-74n. As discussed below, the structure of the interface 140 reduces or eliminates data "bottlenecks" and, where the pipeline circuit 80 (FIG. 8) is a PLIC, makes efficient use of the PLIC's local and global routing resources.
[96] The interface 140 includes read channels 190i-190n, one channel for each hardwired pipeline 74i-74n (FIG. 8), and the controller 192. For purposes of illustration, the read channel 190i is discussed below, it being understood that the operation and structure of the other read channels 1902-190n are similar unless stated otherwise.
[97] The channel 190i includes a FIFO 194i and an address/identifier
(ID) register 196i. As discussed below, the identifier identifies which of the pipelines 74i-74n made the request to read data from a particular location of the DPSRAM 100 and is therefore to receive the returned data. [98] The FIFO 194i includes two sub-FIFOs (not shown), one for storing the address of the location within the DPSRAM 100 from which the pipeline 74i wishes to read the input data, and the other for storing the data read from the DPSRAM 100. Therefore, the FIFO 194i reduces or eliminates the bottleneck that may occur if the pipeline 74i had to "wait" to provide the read address to the channel 190i until the controller 192 finished reading previous data, or if the controller had to wait until the pipeline 74i retrieved the read data before the controller could read subsequent data. [99] The FIFO 194i receives the read address from the pipeline 74i via a bus 198i and provides the address and ID to the register 196i via a bus 200i. Since the ID corresponds to the pipeline 74i and typically does not change, the FIFO 194i may store the ID and concatenate the ID with the address. Alternatively, the pipeline 74i may provide the ID to the FIFO 194i via the bus 198i. Furthermore, the FIFO 194i receives a READ/WRITE FIFO signal from the pipeline 74i via a line 202i, receives a CLOCK signal via a line 204i, and provides a FIFO FULL (of read addresses) signal to the pipeline via a line 206i. In addition, the FIFO 194i receives a WRITE/READ FIFO signal from the controller 192 via a line 208i, and provides a FIFO EMPTY signal to the controller via a line 210i. Moreover, the FIFO 194i receives the read data and the corresponding ID from the controller 192 via a bus 212, and provides this data to the pipeline 74i via a bus 214i. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the busses 198i, 200i, and 214i and the lines 202i, 204i, 206i, 208i, and 210i are preferably formed using local routing resources, and the bus 212 is typically formed using global routing resources.
[100] The register 196i receives the address of the location to be read and the corresponding ID from the FIFO 194i via the bus 200i, provides the address to the port 108 of the DPSRAM 100 (FIG. 8) via an address bus 216, and provides the ID to the controller 192 via a bus 218. Furthermore, the register 196i also receives the addresses and IDs from the registers 1962-196n via an address/ID bus 220i as discussed below. In addition, the register 196i receives a SHIFT/LOAD signal from the controller 192 via a line 222. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the bus 216 is typically formed using global routing resources, and the busses 220i-220n-1 and the line 222 are preferably formed using local routing resources.
[101] In addition to receiving the FIFO EMPTY signal, generating the
WRITE/READ FIFO and SHIFT/LOAD signals, and providing the read data and corresponding ID, the controller 192 receives the data read from the port 108 of the DPSRAM 100 (FIG. 8) via a bus 224 and generates a READ DPSRAM signal on a line 226, which couples this signal to the port 108. Where the pipeline circuit 80 (FIG. 8) is a PLIC, the bus 224 and the line 226 are typically formed using global routing resources. [102] Still referring to FIG. 10, the operation of the interface 140 is discussed.
[103] First, the FIFO 194i drives the FIFO FULL signal to the logic level corresponding to the current state ("full" or "not full") of the FIFO relative to the read addresses. That is, if the FIFO 194i is full of addresses to be read, then it drives the logic level of FIFO FULL to one level, and if the FIFO is not full of read addresses, it drives the logic level of FIFO FULL to another level.
[104] Next, if the FIFO 194i is not full of read addresses and the pipeline
74i is ready for more input data to process, the pipeline drives the address of the data to be read onto the bus 198i and asserts the READ/WRITE FIFO signal to a write level, thus loading the address into the FIFO. As discussed above in conjunction with FIG. 8, the pipeline 74i gets the address from the input-data queue 122 via the sequence manager 148. If, however, the FIFO 194i is full of read addresses, the pipeline 74i waits until the FIFO is not full before loading the read address.
[105] Then, the FIFO 194i drives the FIFO EMPTY signal to the logic level corresponding to the current state ("empty" or "not empty") of the FIFO relative to the read addresses. That is, if the FIFO 194i is loaded with at least one read address, it drives the logic level of FIFO EMPTY to one level, and if the FIFO is loaded with no read addresses, it drives the logic level of FIFO EMPTY to another level.
[106] Next, if the FIFO 194i is not empty, the controller 192 asserts the
WRITE/READ FIFO signal to the read logic level and drives the SHIFT/LOAD signal to the load logic level, thus loading the first-loaded address and the ID from the FIFO into the register 196i.
[107] The channels 190i-190n operate in a similar manner such that the controller 192 respectively loads the first-loaded addresses and IDs from the FIFOs 1942-194n into the registers 1962-196n. If all of the FIFOs 194i-194n are empty, then the controller 192 waits for at least one of the FIFOs to receive an address before proceeding.
[108] Then, the controller 192 drives the SHIFT/LOAD signal to the shift logic level and asserts the READ DPSRAM signal to serially shift the addresses and IDs from the registers 196i - 196n onto the address and ID busses 216 and 218 and to serially read the data from the corresponding locations of the DPSRAM 100 via the bus 224.
[109] Next, the controller 192 drives the received data and corresponding ID — the ID allows each of the FIFOs 194i - 194n to determine whether it is an intended recipient of the data — onto the bus 212, and drives the WRITE/READ FIFO signal to a write level, thus serially writing the data to the respective FIFO, 194i-194n.
[110] Then, the hardwired pipelines 74i-74n sequentially assert their READ/WRITE FIFO signals to a read level and sequentially read the data via the busses 214i-214n.
[111] Still referring to FIG. 10, a more detailed discussion of the data-read operation is presented.
[112] During a first shift cycle, the controller 192 shifts the address and ID from the register 196i onto the busses 216 and 218, respectively, asserts READ DPSRAM, and thus reads the data from the corresponding location of the DPSRAM 100 via the bus 224 and reads the ID from the bus 218. Next, the controller 192 drives the WRITE/READ FIFO signal on the line 208i to a write level and drives the received data and the ID onto the bus 212. Because the ID is the ID from the FIFO 194i, the FIFO 194i recognizes the ID and thus loads the data from the bus 212 in response to the write level of the WRITE/READ FIFO signal. The remaining FIFOs 1942-194n do not load the data because the ID on the bus 212 does not correspond to their IDs. Then, the pipeline 74i asserts the READ/WRITE FIFO signal on the line 202i to the read level and retrieves the read data via the bus 214i. Also during the first shift cycle, the address and ID from the register 1962 are shifted into the register 196i, the address and ID from the register 1963 (not shown) are shifted into the register 1962, and so on. Alternatively, the controller 192 may recognize the ID and drive only the WRITE/READ FIFO signal on the line 208i to the write level. This eliminates the need for the controller 192 to send the ID to the FIFOs 194i-194n. In another alternative, the WRITE/READ FIFO signal may be only a read signal, and the FIFO 194i (as well as the other FIFOs 1942-194n) may load the data on the bus 212 when the ID on the bus 212 matches the ID of the FIFO 194i. This eliminates the need for the controller 192 to generate a write signal.
[113] During a second shift cycle, the address and ID from the register
196i is shifted onto the busses 216 and 218 such that the controller 192 reads data from the location of the DPSRAM 100 specified by the FIFO 1942. Next, the controller 192 drives the WRITE/READ FIFO signal to a write level and drives the received data and the ID onto the bus 212. Because the ID is the ID from the FIFO 1942, the FIFO 1942 recognizes the ID and thus loads the data from the bus 212. The remaining FIFOs 194i and 1943-194n do not load the data because the ID on the bus 212 does not correspond to their IDs. Then, the pipeline 742 asserts its READ/WRITE FIFO signal to the read level and retrieves the read data via the bus 2142. Also during the second shift cycle, the address and ID from the register 1962 are shifted into the register 196i, the address and ID from the register 1963 (not shown) are shifted into the register 1962, and so on. [114] This continues for n shift cycles, i.e., until the address and ID from the register 196n (which are the address and ID from the FIFO 194n) are respectively shifted onto the busses 216 and 218. The controller 192 may implement these shift cycles by pulsing the SHIFT/LOAD signal, or by generating a shift clock signal (not shown) that is coupled to the registers 196i-196n. Furthermore, if one of the registers 196i-196n is empty during a particular shift operation because its corresponding FIFO 194i-194n is empty, then the controller 192 may bypass the empty register, and thus shorten the shift operation by avoiding shifting a null address onto the bus 216.
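The read path of paragraphs [112]-[114] can be modeled the same way as the write path. The sketch below captures only the ordering (one address/ID pair per channel, one DPSRAM read per shift cycle, data steered back by ID), not the WRITE/READ FIFO signalling or timing; the data structures are illustrative assumptions.

```python
# Behavioral sketch of the read interface: each channel's address FIFO
# supplies a (read address, ID) pair, the controller reads the addressed
# DPSRAM location one channel per shift cycle, and the returned data is
# steered back to the requesting channel by matching the ID.
from collections import deque

def read_cycle(addr_fifos, data_fifos, dpsram):
    """One load-then-shift pass over all read channels."""
    # Load phase: first-loaded (address, id) from each non-empty address FIFO.
    registers = [fifo.popleft() if fifo else None for fifo in addr_fifos]
    # Shift phase: one DPSRAM read per occupied register; the ID selects
    # which channel's data FIFO accepts the word on the shared return bus.
    for entry in registers:
        if entry is None:
            continue                     # bypass empty registers
        addr, channel_id = entry
        word = dpsram[addr]              # one DPSRAM read per shift cycle
        data_fifos[channel_id].append(word)

dpsram = {0x10: "blockA", 0x14: "blockB"}
addr_fifos = [deque([(0x10, 0)]), deque([(0x14, 1)]), deque()]
data_fifos = [deque(), deque(), deque()]
read_cycle(addr_fifos, data_fifos, dpsram)
print(data_fifos[0], data_fifos[1])      # deque(['blockA']) deque(['blockB'])
```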
[115] Referring to FIGS. 8 and 10, according to an embodiment of the invention, the interface 146 is similar to the interface 140, and the interface 136 is also similar to the interface 140 except that the interface 136 includes only one read channel 190, and thus includes no ID circuitry.
[116] The preceding discussion is presented to enable a person skilled in the art to make and use the invention. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments and applications without departing from the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.

Claims

WHAT IS CLAIMED IS:
1. A memory subsystem, comprising: a memory controller operable to generate first control signals according to a standard interface; and a memory interface adapter coupled to the memory controller, the memory interface adapter operable responsive to the first control signals to develop second control signals adapted to be applied to a memory subsystem to access desired storage locations.
2. The memory subsystem of claim 1 further comprising a physical control layer and wherein the second control signals from the memory interface adapter are applied through the physical control layer to access desired storage locations in the memory subsystem.
3. The memory subsystem of claim 1 wherein the memory controller further includes attachable behaviors circuitry adapted to be configured to operate or operable to execute corresponding memory behaviors without executing programming instructions.
4. The memory subsystem of claim 3 wherein the memory behaviors executed by the attachable behaviors circuitry include generating memory addresses to stride through a portion of memory.
5. The memory subsystem of claim 3 wherein the memory behaviors circuitry is formed in a field programmable gate array (FPGA).
6. A memory service layer component including a memory controller and an attachable behaviors circuit, the attachable behaviors circuit adapted to receive configuration data to implement corresponding memory behaviors, the attachable behaviors circuit operable in combination with the memory controller to generate memory addresses using the configuration data, the generated memory addresses defining the corresponding memory behaviors.
7. The memory service layer component of claim 6 wherein the memory controller and attachable behaviors circuit include the same read and write state machine circuitry.
8. The memory service layer component of claim 6 wherein the memory controller includes a general read and general write state machine circuitry and wherein the attachable behaviors circuit includes separate attachable read and write state machines.
9. The memory service layer component of claim 6 further including a storage memory for storing configuration parameters and a plurality of registers for storing selected ones of the stored configuration parameters, wherein the parameters stored in the registers define a particular memory behavior implemented by the memory service layer.
10. A peer vector machine, comprising: a host processor; and a pipeline accelerator coupled to the host processor, the pipeline accelerator including at least one hardwired pipeline unit operable to process data without executing programming instructions, and the accelerator further including a memory service layer component including a memory controller and an attachable behaviors circuit, the attachable behaviors circuit adapted to receive configuration data from the hardwired pipeline units, the configuration data causing the attachable behaviors circuit to implement corresponding memory behaviors, and the attachable behaviors circuit operable in combination with the memory controller to generate memory addresses using the configuration data, the generated memory addresses defining the corresponding memory behaviors.
PCT/US2005/035814 2004-10-01 2005-10-03 Service layer architecture for memory access system and method WO2006039711A1 (en)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US61505004P 2004-10-01 2004-10-01
US61515804P 2004-10-01 2004-10-01
US61517004P 2004-10-01 2004-10-01
US61515704P 2004-10-01 2004-10-01
US61519204P 2004-10-01 2004-10-01
US61519304P 2004-10-01 2004-10-01
US60/615,158 2004-10-01
US60/615,170 2004-10-01
US60/615,157 2004-10-01
US60/615,050 2004-10-01
US60/615,193 2004-10-01
US60/615,192 2004-10-01

Publications (1)

Publication Number Publication Date
WO2006039711A1 true WO2006039711A1 (en) 2006-04-13


Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2005/035814 WO2006039711A1 (en) 2004-10-01 2005-10-03 Service layer architecture for memory access system and method
PCT/US2005/035813 WO2006039710A2 (en) 2004-10-01 2005-10-03 Computer-based tool and method for designing an electronic circuit and related system and library for same

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/US2005/035813 WO2006039710A2 (en) 2004-10-01 2005-10-03 Computer-based tool and method for designing an electronic circuit and related system and library for same



Families Citing this family (66)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7124318B2 (en) * 2003-09-18 2006-10-17 International Business Machines Corporation Multiple parallel pipeline processor having self-repairing capability
US7263672B2 (en) * 2004-09-03 2007-08-28 Abb Research Ltd. Methods, systems, and data models for describing an electrical device
US7797661B2 (en) * 2004-09-03 2010-09-14 Abb Research Ag Method and apparatus for describing and managing properties of a transformer coil
KR100633099B1 (en) * 2004-10-15 2006-10-11 삼성전자주식회사 System using data bus and method for operation controlling thereof
US7409475B2 (en) * 2004-10-20 2008-08-05 Kabushiki Kaisha Toshiba System and method for a high-speed shift-type buffer
US7814696B2 (en) * 2004-10-29 2010-10-19 Lockheed Martin Corporation Projectile accelerator and related vehicle and method
US7984581B2 (en) 2004-10-29 2011-07-26 Lockheed Martin Corporation Projectile accelerator and related vehicle and method
US7331031B2 (en) * 2005-03-03 2008-02-12 Lsi Logic Corporation Method for describing and deploying design platform sets
US7721241B2 (en) * 2005-07-29 2010-05-18 Abb Research Ltd. Automated method and tool for documenting a transformer design
US7571395B1 (en) * 2005-08-03 2009-08-04 Xilinx, Inc. Generation of a circuit design from a command language specification of blocks in matrix form
US7574683B2 (en) * 2005-08-05 2009-08-11 John Wilson Automating power domains in electronic design automation
US7366998B1 (en) * 2005-11-08 2008-04-29 Xilinx, Inc. Efficient communication of data between blocks in a high level modeling system
US7913223B2 (en) * 2005-12-16 2011-03-22 Dialogic Corporation Method and system for development and use of a user-interface for operations, administration, maintenance and provisioning of a telecommunications system
US7577870B2 (en) * 2005-12-21 2009-08-18 The Boeing Company Method and system for controlling command execution
US8055444B2 (en) * 2006-04-04 2011-11-08 Yahoo! Inc. Content display and navigation interface
JP5437556B2 (en) * 2006-07-12 2014-03-12 日本電気株式会社 Information processing apparatus and processor function changing method
WO2008014493A2 (en) * 2006-07-28 2008-01-31 Drc Computer Corporation Configurable processor module accelerator using a progrmmable logic device
CN100428174C (en) * 2006-10-31 2008-10-22 哈尔滨工业大学 Embedded fault injection system and its method
US8706987B1 (en) 2006-12-01 2014-04-22 Synopsys, Inc. Structured block transfer module, system architecture, and method for transferring
US8289966B1 (en) 2006-12-01 2012-10-16 Synopsys, Inc. Packet ingress/egress block and system and method for receiving, transmitting, and managing packetized data
US8127113B1 (en) 2006-12-01 2012-02-28 Synopsys, Inc. Generating hardware accelerators and processor offloads
US8614633B1 (en) * 2007-01-08 2013-12-24 Lockheed Martin Corporation Integrated smart hazard assessment and response planning (SHARP) system and method for a vessel
EP1986369B1 (en) * 2007-04-27 2012-03-07 Accenture Global Services Limited End user control configuration system with dynamic user interface
US7900168B2 (en) * 2007-07-12 2011-03-01 The Mathworks, Inc. Customizable synthesis of tunable parameters for code generation
US8145894B1 (en) * 2008-02-25 2012-03-27 Drc Computer Corporation Reconfiguration of an accelerator module having a programmable logic device
EP2294570B1 (en) 2008-05-30 2014-07-30 Advanced Micro Devices, Inc. Redundancy methods and apparatus for shader column repair
US8195882B2 (en) 2008-05-30 2012-06-05 Advanced Micro Devices, Inc. Shader complex with distributed level one cache system and centralized level two cache
US8502832B2 (en) * 2008-05-30 2013-08-06 Advanced Micro Devices, Inc. Floating point texture filtering using unsigned linear interpolators and block normalizations
US8773864B2 (en) * 2008-06-18 2014-07-08 Lockheed Martin Corporation Enclosure assembly housing at least one electronic board assembly and systems using same
US8189345B2 (en) * 2008-06-18 2012-05-29 Lockheed Martin Corporation Electronics module, enclosure assembly housing same, and related systems and methods
US20100106668A1 (en) * 2008-10-17 2010-04-29 Louis Hawthorne System and method for providing community wisdom based on user profile
US8856463B2 (en) * 2008-12-16 2014-10-07 Frank Rau System and method for high performance synchronous DRAM memory controller
US8127262B1 (en) * 2008-12-18 2012-02-28 Xilinx, Inc. Communicating state data between stages of pipelined packet processor
US8156264B2 (en) * 2009-04-03 2012-04-10 Analog Devices, Inc. Digital output sensor FIFO buffer with single port memory
US8276159B2 (en) * 2009-09-23 2012-09-25 Microsoft Corporation Message communication of sensor and other data
US8225269B2 (en) * 2009-10-30 2012-07-17 Synopsys, Inc. Technique for generating an analysis equation
US8914672B2 (en) * 2009-12-28 2014-12-16 Intel Corporation General purpose hardware to replace faulty core components that may also provide additional processor functionality
US8705052B2 (en) * 2010-04-14 2014-04-22 Hewlett-Packard Development Company, L.P. Communicating state data to a network service
US8887054B2 (en) 2010-04-15 2014-11-11 Hewlett-Packard Development Company, L.P. Application selection user interface
US8370784B2 (en) * 2010-07-13 2013-02-05 Algotochip Corporation Automatic optimal integrated circuit generator from algorithms and specification
JP5555116B2 (en) * 2010-09-29 2014-07-23 キヤノン株式会社 Information processing apparatus and inter-processor communication control method
DE102010062191B4 (en) * 2010-11-30 2012-06-28 Siemens Aktiengesellschaft Pipeline system and method for operating a pipeline system
US8943113B2 (en) * 2011-07-21 2015-01-27 Xiaohua Yi Methods and systems for parsing and interpretation of mathematical statements
US8898516B2 (en) * 2011-12-09 2014-11-25 Toyota Jidosha Kabushiki Kaisha Fault-tolerant computer system
US9015289B2 (en) * 2012-04-12 2015-04-21 Netflix, Inc. Method and system for evaluating the resiliency of a distributed computing service by inducing a latency
US9716802B2 (en) 2012-04-12 2017-07-25 Hewlett-Packard Development Company, L.P. Content model for a printer interface
US10270709B2 (en) 2015-06-26 2019-04-23 Microsoft Technology Licensing, Llc Allocating acceleration component functionality for supporting services
US8635567B1 (en) * 2012-10-11 2014-01-21 Xilinx, Inc. Electronic design automation tool for guided connection assistance
US10140129B2 (en) 2012-12-28 2018-11-27 Intel Corporation Processing core having shared front end unit
US9417873B2 (en) 2012-12-28 2016-08-16 Intel Corporation Apparatus and method for a hybrid latency-throughput processor
US9361116B2 (en) 2012-12-28 2016-06-07 Intel Corporation Apparatus and method for low-latency invocation of accelerators
US10346195B2 (en) 2012-12-29 2019-07-09 Intel Corporation Apparatus and method for invocation of a multi threaded accelerator
US9990212B2 (en) * 2013-02-19 2018-06-05 Empire Technology Development Llc Testing and repair of a hardware accelerator image in a programmable logic circuit
US8739103B1 (en) * 2013-03-04 2014-05-27 Cypress Semiconductor Corporation Techniques for placement in highly constrained architectures
US9898339B2 (en) * 2013-03-12 2018-02-20 Itron, Inc. Meter reading data validation
JP6455132B2 (en) * 2014-12-22 2019-01-23 富士通株式会社 Information processing apparatus, processing method, and program
US9792154B2 (en) 2015-04-17 2017-10-17 Microsoft Technology Licensing, Llc Data processing system having a hardware acceleration plane and a software plane
US10198294B2 (en) 2015-04-17 2019-02-05 Microsoft Technology Licensing, LLC Handling tenant requests in a system that uses hardware acceleration components
US20160308649A1 (en) * 2015-04-17 2016-10-20 Microsoft Technology Licensing, Llc Providing Services in a System having a Hardware Acceleration Plane and a Software Plane
KR20160148952A (en) * 2015-06-17 2016-12-27 에스케이하이닉스 주식회사 Memory system and operating method of memory system
US10216555B2 (en) 2015-06-26 2019-02-26 Microsoft Technology Licensing, Llc Partially reconfiguring acceleration components
US20170046466A1 (en) * 2015-08-10 2017-02-16 International Business Machines Corporation Logic structure aware circuit routing
US10545770B2 (en) * 2016-11-14 2020-01-28 Intel Corporation Configurable client hardware
US10545925B2 (en) 2018-06-06 2020-01-28 Intel Corporation Storage appliance for processing of functions as a service (FaaS)
US10922463B1 (en) * 2019-10-20 2021-02-16 Xilinx, Inc. User dialog-based automated system design for programmable integrated circuits
CN112631168A (en) * 2020-12-09 2021-04-09 广东电网有限责任公司 FPGA-based deformation detector control circuit design method

Family Cites Families (163)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2006114A (en) * 1931-04-27 1935-06-25 Rosenmund Karl Wilhelm Aliphatic-aromatic amine and process of making same
US3665173A (en) 1968-09-03 1972-05-23 Ibm Triple modular redundancy/sparing
CH658567GA3 (en) 1984-03-28 1986-11-28
US4782461A (en) 1984-06-21 1988-11-01 Step Engineering Logical grouping of facilities within a computer development system
US4703475A (en) * 1985-12-04 1987-10-27 American Telephone And Telegraph Company At&T Bell Laboratories Data communication method and apparatus using multiple physical data links
US4985832A (en) * 1986-09-18 1991-01-15 Digital Equipment Corporation SIMD array processing system with routing networks having plurality of switching stages to transfer messages among processors
US4873626A (en) * 1986-12-17 1989-10-10 Massachusetts Institute Of Technology Parallel processing system with processor array having memory system included in system memory
US4914653A (en) * 1986-12-22 1990-04-03 American Telephone And Telegraph Company Inter-processor communication protocol
US4774574A (en) * 1987-06-02 1988-09-27 Eastman Kodak Company Adaptive block transform image coding method and apparatus
US4862407A (en) * 1987-10-05 1989-08-29 Motorola, Inc. Digital signal processing apparatus
US4956771A (en) * 1988-05-24 1990-09-11 Prime Computer, Inc. Method for inter-processor data transfer
US4915653A (en) * 1988-12-16 1990-04-10 Amp Incorporated Electrical connector
US5212777A (en) 1989-11-17 1993-05-18 Texas Instruments Incorporated Multi-processor reconfigurable in single instruction multiple data (SIMD) and multiple instruction multiple data (MIMD) modes and method of operation
US5317752A (en) 1989-12-22 1994-05-31 Tandem Computers Incorporated Fault-tolerant computer system with auto-restart after power-fall
US5185871A (en) 1989-12-26 1993-02-09 International Business Machines Corporation Coordination of out-of-sequence fetching between multiple processors using re-execution of instructions
US5867399A (en) * 1990-04-06 1999-02-02 Lsi Logic Corporation System and method for creating and validating structural description of electronic system from higher-level and behavior-oriented description
US5553002A (en) 1990-04-06 1996-09-03 Lsi Logic Corporation Method and system for creating and validating low level description of electronic design from higher level, behavior-oriented description, using milestone matrix incorporated into user-interface
US5555201A (en) * 1990-04-06 1996-09-10 Lsi Logic Corporation Method and system for creating and validating low level description of electronic design from higher level, behavior-oriented description, including interactive system for hierarchical display of control and dataflow information
US5572437A (en) * 1990-04-06 1996-11-05 Lsi Logic Corporation Method and system for creating and verifying structural logic model of electronic design from behavioral description, including generation of logic and timing models
US5623418A (en) 1990-04-06 1997-04-22 Lsi Logic Corporation System and method for creating and validating structural description of electronic system
US5544067A (en) * 1990-04-06 1996-08-06 Lsi Logic Corporation Method and system for creating, deriving and validating structural description of electronic system from higher level, behavior-oriented description, including interactive schematic design and simulation
US5598344A (en) * 1990-04-06 1997-01-28 Lsi Logic Corporation Method and system for creating, validating, and scaling structural description of electronic device
US5421028A (en) * 1991-03-15 1995-05-30 Hewlett-Packard Company Processing commands and data in a common pipeline path in a high-speed computer graphics system
JPH0581216A (en) 1991-09-20 1993-04-02 Hitachi Ltd Parallel processor
US5283883A (en) * 1991-10-17 1994-02-01 Sun Microsystems, Inc. Method and direct memory access controller for asynchronously reading/writing data from/to a memory with improved throughput
US5159449A (en) * 1991-12-26 1992-10-27 Workstation Technologies, Inc. Method and apparatus for data reduction in a video image data reduction system
JP2500038B2 (en) 1992-03-04 1996-05-29 インターナショナル・ビジネス・マシーンズ・コーポレイション Multiprocessor computer system, fault tolerant processing method and data processing system
EP0566015A3 (en) 1992-04-14 1994-07-06 Eastman Kodak Co Neural network optical character recognition system and method for classifying characters in a moving web
US5339413A (en) * 1992-08-21 1994-08-16 International Business Machines Corporation Data stream protocol for multimedia data streaming data processing system
US5383187A (en) * 1992-09-18 1995-01-17 Hughes Aircraft Company Adaptive protocol for packet communications network and method
US5603043A (en) * 1992-11-05 1997-02-11 Giga Operations Corporation System for compiling algorithmic language source code for implementation in programmable hardware
US5361373A (en) 1992-12-11 1994-11-01 Gilson Kent L Integrated circuit computing device comprising a dynamically configurable gate array having a microprocessor and reconfigurable instruction execution means and method therefor
EP0626661A1 (en) * 1993-05-24 1994-11-30 Societe D'applications Generales D'electricite Et De Mecanique Sagem Digital image processing circuitry
US5392393A (en) * 1993-06-04 1995-02-21 Sun Microsystems, Inc. Architecture for a high performance three dimensional graphics accelerator
US5490088A (en) * 1994-02-28 1996-02-06 Motorola, Inc. Method of handling data retrieval requests
US5583964A (en) 1994-05-02 1996-12-10 Motorola, Inc. Computer utilizing neural network and method of using same
US5493508A (en) * 1994-06-01 1996-02-20 Lsi Logic Corporation Specification and design of complex digital systems
US5568614A (en) 1994-07-29 1996-10-22 International Business Machines Corporation Data streaming between peer subsystems of a computer system
JP3365581B2 (en) 1994-07-29 2003-01-14 富士通株式会社 Information processing device with self-healing function
US5710910A (en) 1994-09-30 1998-01-20 University Of Washington Asynchronous self-tuning clock domains and method for transferring data among domains
US5649135A (en) * 1995-01-17 1997-07-15 International Business Machines Corporation Parallel processing system and method using surrogate instructions
US5692183A (en) 1995-03-31 1997-11-25 Sun Microsystems, Inc. Methods and apparatus for providing transparent persistence in a distributed object operating environment
JP2987308B2 (en) * 1995-04-28 1999-12-06 松下電器産業株式会社 Information processing device
US6282578B1 (en) 1995-06-26 2001-08-28 Hitachi, Ltd. Execution management method of program on reception side of message in distributed processing system
US5752071A (en) * 1995-07-17 1998-05-12 Intel Corporation Function coprocessor
US5649176A (en) 1995-08-10 1997-07-15 Virtual Machine Works, Inc. Transition analysis and circuit resynthesis method and device for digital circuit modeling
US5732107A (en) * 1995-08-31 1998-03-24 Northrop Grumman Corporation FIR interpolator with zero order hold and FIR-spline interpolation combination
US5648732A (en) 1995-10-04 1997-07-15 Xilinx, Inc. Field programmable pipeline array
KR100460398B1 (en) * 1995-10-09 2005-01-17 마쯔시다덴기산교 가부시키가이샤 Optical disk, optical recorder, optical reproducing device, encrypted communication system, and authorizing system for use of program
JPH09106407A (en) 1995-10-12 1997-04-22 Toshiba Corp Design supporting system
US5640107A (en) * 1995-10-24 1997-06-17 Northrop Grumman Corporation Method for in-circuit programming of a field-programmable gate array configuration memory
JPH09148907A (en) 1995-11-22 1997-06-06 Nec Corp Synchronous semiconductor logic device
US5784636A (en) * 1996-05-28 1998-07-21 National Semiconductor Corporation Reconfigurable computer architecture for use in signal processing applications
US5916307A (en) * 1996-06-05 1999-06-29 New Era Of Networks, Inc. Method and structure for balanced queue communication between nodes in a distributed computing application
US6115047A (en) * 1996-07-01 2000-09-05 Sun Microsystems, Inc. Method and apparatus for implementing efficient floating point Z-buffering
US6023742A (en) * 1996-07-18 2000-02-08 University Of Washington Reconfigurable computing architecture for providing pipelined data paths
US5963454A (en) * 1996-09-25 1999-10-05 Vlsi Technology, Inc. Method and apparatus for efficiently implementing complex function blocks in integrated circuit designs
US5892962A (en) 1996-11-12 1999-04-06 Lucent Technologies Inc. FPGA-based processor
JP3406790B2 (en) 1996-11-25 2003-05-12 株式会社東芝 Data transfer system and data transfer method
US6028939A (en) 1997-01-03 2000-02-22 Redcreek Communications, Inc. Data security system and method
US5978578A (en) * 1997-01-30 1999-11-02 Azarya; Arnon Openbus system for control automation networks
US5941999A (en) * 1997-03-31 1999-08-24 Sun Microsystems Method and system for achieving high availability in networked computer systems
US5931959A (en) * 1997-05-21 1999-08-03 The United States Of America As Represented By The Secretary Of The Air Force Dynamically reconfigurable FPGA apparatus and method for multiprocessing and fault tolerance
US5996059A (en) 1997-07-01 1999-11-30 National Semiconductor Corporation System for monitoring an execution pipeline utilizing an address pipeline in parallel with the execution pipeline
US6784903B2 (en) * 1997-08-18 2004-08-31 National Instruments Corporation System and method for configuring an instrument to perform measurement functions utilizing conversion of graphical programs into hardware implementations
US5987620A (en) 1997-09-19 1999-11-16 Thang Tran Method and apparatus for a self-timed and self-enabled distributed clock
US6216191B1 (en) * 1997-10-15 2001-04-10 Lucent Technologies Inc. Field programmable gate array having a dedicated processor interface
JPH11120156A (en) * 1997-10-17 1999-04-30 Nec Corp Data communication system in multiprocessor system
US6076152A (en) * 1997-12-17 2000-06-13 Src Computers, Inc. Multiprocessor computer architecture incorporating a plurality of memory algorithm processors in the memory subsystem
US6049222A (en) * 1997-12-30 2000-04-11 Xilinx, Inc. Configuring an FPGA using embedded memory
DE69919059T2 (en) 1998-02-04 2005-01-27 Texas Instruments Inc., Dallas Data processing system with a digital signal processor and a coprocessor and data processing method
US7152027B2 (en) * 1998-02-17 2006-12-19 National Instruments Corporation Reconfigurable test system
US6096091A (en) * 1998-02-24 2000-08-01 Advanced Micro Devices, Inc. Dynamically reconfigurable logic networks interconnected by fall-through FIFOs for flexible pipeline processing in a system-on-a-chip
US6230253B1 (en) 1998-03-31 2001-05-08 Intel Corporation Executing partial-width packed data instructions
US6112288A (en) * 1998-05-19 2000-08-29 Paracel, Inc. Dynamic configurable system of parallel modules comprising chain of chips comprising parallel pipeline chain of processors with master controller feeding command and data
US6247118B1 (en) * 1998-06-05 2001-06-12 Mcdonnell Douglas Corporation Systems and methods for transient error recovery in reduced instruction set computer processors via instruction retry
US6202139B1 (en) 1998-06-19 2001-03-13 Advanced Micro Devices, Inc. Pipelined data cache with multiple ports and processor with load/store unit selecting only load or store operations for concurrent processing
US6282627B1 (en) * 1998-06-29 2001-08-28 Chameleon Systems, Inc. Integrated processor and programmable data path chip for reconfigurable computing
US6253276B1 (en) * 1998-06-30 2001-06-26 Micron Technology, Inc. Apparatus for adaptive decoding of memory addresses
US5916037A (en) 1998-09-11 1999-06-29 Hill; Gaius Golf swing training device and method
US6237054B1 (en) 1998-09-14 2001-05-22 Advanced Micro Devices, Inc. Network interface unit including a microcontroller having multiple configurable logic blocks, with a test/program bus for performing a plurality of selected functions
US6192384B1 (en) 1998-09-14 2001-02-20 The Board Of Trustees Of The Leland Stanford Junior University System and method for performing compound vector operations
US6862563B1 (en) * 1998-10-14 2005-03-01 Arc International Method and apparatus for managing the configuration and functionality of a semiconductor design
US6405266B1 (en) * 1998-11-30 2002-06-11 Hewlett-Packard Company Unified publish and subscribe paradigm for local and remote publishing destinations
US6247134B1 (en) * 1999-03-31 2001-06-12 Synopsys, Inc. Method and system for pipe stage gating within an operating pipelined circuit for power savings
US6308311B1 (en) * 1999-05-14 2001-10-23 Xilinx, Inc. Method for reconfiguring a field programmable gate array from a host
US6477170B1 (en) 1999-05-21 2002-11-05 Advanced Micro Devices, Inc. Method and apparatus for interfacing between systems operating under different clock regimes with interlocking to prevent overwriting of data
EP1061439A1 (en) 1999-06-15 2000-12-20 Hewlett-Packard Company Memory and instructions in computer architecture containing processor and coprocessor
EP1061438A1 (en) 1999-06-15 2000-12-20 Hewlett-Packard Company Computer architecture containing processor and coprocessor
US20030014627A1 (en) 1999-07-08 2003-01-16 Broadcom Corporation Distributed processing in a cryptography acceleration chip
GB2352548B (en) * 1999-07-26 2001-06-06 Sun Microsystems Inc Method and apparatus for executing standard functions in a computer system
JP3892998B2 (en) * 1999-09-14 2007-03-14 富士通株式会社 Distributed processing device
JP2001142695A (en) 1999-10-01 2001-05-25 Hitachi Ltd Loading method of constant to storage place, loading method of constant to address storage place, loading method of constant to register, deciding method of number of code bit, normalizing method of binary number and instruction in computer system
US6526430B1 (en) 1999-10-04 2003-02-25 Texas Instruments Incorporated Reconfigurable SIMD coprocessor architecture for sum of absolute differences and symmetric filtering (scalable MAC engine for image processing)
US6516420B1 (en) 1999-10-25 2003-02-04 Motorola, Inc. Data synchronizer using a parallel handshaking pipeline wherein validity indicators generate and send acknowledgement signals to a different clock domain
US6625749B1 (en) * 1999-12-21 2003-09-23 Intel Corporation Firmware mechanism for correcting soft errors
US6606360B1 (en) * 1999-12-30 2003-08-12 Intel Corporation Method and apparatus for receiving data
US6611920B1 (en) 2000-01-21 2003-08-26 Intel Corporation Clock distribution system for selectively enabling clock signals to portions of a pipelined circuit
US6326806B1 (en) * 2000-03-29 2001-12-04 Xilinx, Inc. FPGA-based communications access point and system for reconfiguration
US6624819B1 (en) * 2000-05-01 2003-09-23 Broadcom Corporation Method and system for providing a flexible and efficient processor for use in a graphics processing system
US6532009B1 (en) * 2000-05-18 2003-03-11 International Business Machines Corporation Programmable hardwired geometry pipeline
US6817005B2 (en) 2000-05-25 2004-11-09 Xilinx, Inc. Modular design method and system for programmable logic devices
DE10026118C2 (en) * 2000-05-26 2002-11-28 Ronald Neuendorf Device for moistening liquid-absorbing agents, such as toilet paper
US6839873B1 (en) * 2000-06-23 2005-01-04 Cypress Semiconductor Corporation Method and apparatus for programmable logic device (PLD) built-in-self-test (BIST)
US6982976B2 (en) * 2000-08-11 2006-01-03 Texas Instruments Incorporated Datapipe routing bridge
US7196710B1 (en) 2000-08-23 2007-03-27 Nintendo Co., Ltd. Method and apparatus for buffering graphics data in a graphics system
GB0023409D0 (en) 2000-09-22 2000-11-08 Integrated Silicon Systems Ltd Data encryption apparatus
JP3880310B2 (en) * 2000-12-01 2007-02-14 シャープ株式会社 Semiconductor integrated circuit
US6708239B1 (en) * 2000-12-08 2004-03-16 The Boeing Company Network device interface for digitally interfacing data channels to a controller via a network
US6785841B2 (en) 2000-12-14 2004-08-31 International Business Machines Corporation Processor with redundant logic
US6925549B2 (en) * 2000-12-21 2005-08-02 International Business Machines Corporation Asynchronous pipeline control interface using tag values to control passing data through successive pipeline stages
US20020087829A1 (en) * 2000-12-29 2002-07-04 Snyder Walter L. Re-targetable communication system
WO2002063421A2 (en) * 2001-01-03 2002-08-15 University Of Southern California System level applications of adaptive computing (SLAAC) technology
JPWO2002057921A1 (en) * 2001-01-19 2004-07-22 株式会社日立製作所 Electronic circuit device
US7000213B2 (en) * 2001-01-26 2006-02-14 Northwestern University Method and apparatus for automatically generating hardware from algorithms described in MATLAB
US7036059B1 (en) * 2001-02-14 2006-04-25 Xilinx, Inc. Techniques for mitigating, detecting and correcting single event upset effects in systems using SRAM-based field programmable gate arrays
US20030061409A1 (en) * 2001-02-23 2003-03-27 Rudusky Daryl System, method and article of manufacture for dynamic, automated product fulfillment for configuring a remotely located device
US6848060B2 (en) * 2001-02-27 2005-01-25 International Business Machines Corporation Synchronous to asynchronous to synchronous interface
JP2002269063A (en) 2001-03-07 2002-09-20 Toshiba Corp Messaging program, messaging method of distributed system, and messaging system
JP3873639B2 (en) * 2001-03-12 2007-01-24 株式会社日立製作所 Network connection device
JP2002281079A (en) 2001-03-21 2002-09-27 Victor Co Of Japan Ltd Image data transmitting device
US7065672B2 (en) 2001-03-28 2006-06-20 Stratus Technologies Bermuda Ltd. Apparatus and methods for fault-tolerant computing using a switching fabric
US6530073B2 (en) 2001-04-30 2003-03-04 Lsi Logic Corporation RTL annotation tool for layout induced netlist changes
US7219309B2 (en) 2001-05-02 2007-05-15 Bitstream Inc. Innovations for the display of web pages
US7210022B2 (en) * 2001-05-15 2007-04-24 Cloudshield Technologies, Inc. Apparatus and method for interconnecting a processor to co-processors using a shared memory as the communication interface
US7076595B1 (en) * 2001-05-18 2006-07-11 Xilinx, Inc. Programmable logic device including programmable interface core and central processing unit
US6985975B1 (en) 2001-06-29 2006-01-10 Sanera Systems, Inc. Packet lockstep system and method
US20030086595A1 (en) * 2001-11-07 2003-05-08 Hui Hu Display parameter-dependent pre-transmission processing of image data
US7106715B1 (en) * 2001-11-16 2006-09-12 Vixs Systems, Inc. System for providing data to multiple devices and method thereof
US7143418B1 (en) 2001-12-10 2006-11-28 Xilinx, Inc. Core template package for creating run-time reconfigurable cores
US7065560B2 (en) 2002-03-12 2006-06-20 Hewlett-Packard Development Company, L.P. Verification of computer program versions based on a selected recipe from a recipe table
US20040013258A1 (en) * 2002-07-22 2004-01-22 Web. De Ag Communications environment having a connection device
US7550870B2 (en) 2002-05-06 2009-06-23 Cyber Switching, Inc. Method and apparatus for remote power management and monitoring
US7073158B2 (en) 2002-05-17 2006-07-04 Pixel Velocity, Inc. Automated system for designing and developing field programmable gate arrays
US7137020B2 (en) * 2002-05-17 2006-11-14 Sun Microsystems, Inc. Method and apparatus for disabling defective components in a computer system
US7117390B1 (en) * 2002-05-20 2006-10-03 Sandia Corporation Practical, redundant, failure-tolerant, self-reconfiguring embedded system architecture
US7024654B2 (en) * 2002-06-11 2006-04-04 Anadigm, Inc. System and method for configuring analog elements in a configurable hardware device
US20030231649A1 (en) 2002-06-13 2003-12-18 Awoseyi Paul A. Dual purpose method and apparatus for performing network interface and security transactions
US7076681B2 (en) 2002-07-02 2006-07-11 International Business Machines Corporation Processor with demand-driven clock throttling power reduction
EP1383042B1 (en) 2002-07-19 2007-03-28 STMicroelectronics S.r.l. A multiphase synchronous pipeline structure
US7017140B2 (en) 2002-08-29 2006-03-21 Bae Systems Information And Electronic Systems Integration Inc. Common components in interface framework for developing field programmable based applications independent of target circuit board
WO2004042562A2 (en) 2002-10-31 2004-05-21 Lockheed Martin Corporation Pipeline accelerator and related system and method
CA2503620A1 (en) 2002-10-31 2004-05-21 Lockheed Martin Corporation Programmable circuit and related computing machine and method
US7418574B2 (en) 2002-10-31 2008-08-26 Lockheed Martin Corporation Configuring a portion of a pipeline accelerator to generate pipeline data without a program instruction
US7200114B1 (en) * 2002-11-18 2007-04-03 At&T Corp. Method for reconfiguring a router
US7185225B2 (en) * 2002-12-02 2007-02-27 Marvell World Trade Ltd. Self-reparable semiconductor and method thereof
US7260794B2 (en) * 2002-12-20 2007-08-21 Quickturn Design Systems, Inc. Logic multiprocessor for FPGA implementation
US20040203383A1 (en) * 2002-12-31 2004-10-14 Kelton James Robert System for providing data to multiple devices and method thereof
US7178112B1 (en) * 2003-04-16 2007-02-13 The Mathworks, Inc. Management of functions for block diagrams
US7096444B2 (en) * 2003-06-09 2006-08-22 Kuoching Lin Representing device layout using tree structure
CN1802622A (en) 2003-06-10 2006-07-12 皇家飞利浦电子股份有限公司 Embedded computing system with reconfigurable power supply and/or clock frequency domains
US7230541B2 (en) * 2003-11-19 2007-06-12 Baker Hughes Incorporated High speed communication for measurement while drilling
US7228520B1 (en) 2004-01-30 2007-06-05 Xilinx, Inc. Method and apparatus for a programmable interface of a soft platform on a programmable logic device
US7284225B1 (en) 2004-05-20 2007-10-16 Xilinx, Inc. Embedding a hardware object in an application system
US7360196B1 (en) * 2004-06-02 2008-04-15 Altera Corporation Technology mapping for programming and design of a programmable logic device by equating logic expressions
US7143368B1 (en) * 2004-06-10 2006-11-28 Altera Corporation DSP design system level power estimation
ITPD20040058U1 (en) * 2004-07-06 2004-10-06 Marchioro Spa MODULAR CAGE STRUCTURE
WO2006039713A2 (en) 2004-10-01 2006-04-13 Lockheed Martin Corporation Configurable computing machine and related systems and methods
US7676649B2 (en) * 2004-10-01 2010-03-09 Lockheed Martin Corporation Computing machine with redundancy and related systems and methods
TWM279747U (en) * 2004-11-24 2005-11-01 Jing-Jung Chen Improved structure of a turbine blade
US20070030816A1 (en) * 2005-08-08 2007-02-08 Honeywell International Inc. Data compression and abnormal situation detection in a wireless sensor network
WO2007137266A2 (en) 2006-05-22 2007-11-29 Coherent Logix Incorporated Designing an asic based on execution of a software program on a processing system
KR20080086423A (en) 2008-09-01 2008-09-25 김태진 Secondary comb for cutting hair

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6018793A (en) * 1997-10-24 2000-01-25 Cirrus Logic, Inc. Single chip controller-memory device including feature-selectable bank I/O and architecture and methods suitable for implementing the same
US6205516B1 (en) * 1997-10-31 2001-03-20 Brother Kogyo Kabushiki Kaisha Device and method for controlling data storage device in data processing system
US6684314B1 (en) * 2000-07-14 2004-01-27 Agilent Technologies, Inc. Memory controller with programmable address configuration
JP2002149424A (en) * 2000-09-06 2002-05-24 Internatl Business Mach Corp <Ibm> A plurality of logical interfaces to shared coprocessor resource
US6829697B1 (en) * 2000-09-06 2004-12-07 International Business Machines Corporation Multiple logical interfaces to a shared coprocessor resource
US6662285B1 (en) * 2001-01-09 2003-12-09 Xilinx, Inc. User configurable memory system having local and global memory blocks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 2002, no. 09 4 September 2002 (2002-09-04) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7987341B2 (en) 2002-10-31 2011-07-26 Lockheed Martin Corporation Computing machine using software objects for transferring data that includes no destination information
US8250341B2 (en) 2002-10-31 2012-08-21 Lockheed Martin Corporation Pipeline accelerator having multiple pipeline units and related computing machine and method
US7809982B2 (en) 2004-10-01 2010-10-05 Lockheed Martin Corporation Reconfigurable computing machine and related systems and methods
US7676649B2 (en) 2004-10-01 2010-03-09 Lockheed Martin Corporation Computing machine with redundancy and related systems and methods
WO2016125202A1 (en) * 2015-02-04 2016-08-11 Renesas Electronics Corporation Data transfer apparatus
US10275160B2 (en) 2015-12-21 2019-04-30 Intel Corporation Method and apparatus to enable individual non volatile memory express (NVME) input/output (IO) Queues on differing network addresses of an NVME controller
US11385795B2 (en) 2015-12-21 2022-07-12 Intel Corporation Method and apparatus to enable individual non volatile memory express (NVMe) input/output (IO) queues on differing network addresses of an NVMe controller
US10013168B2 (en) 2015-12-24 2018-07-03 Intel Corporation Disaggregating block storage controller stacks
US10649660B2 (en) 2015-12-24 2020-05-12 Intel Corporation Disaggregating block storage controller stacks
WO2017112285A1 (en) * 2015-12-24 2017-06-29 Intel Corporation Disaggregating block storage controller stacks
US10893050B2 (en) 2016-08-24 2021-01-12 Intel Corporation Computer product, method, and system to dynamically provide discovery services for host nodes of target systems and storage resources in a network
US10970231B2 (en) 2016-09-28 2021-04-06 Intel Corporation Management of virtual target storage resources by use of an access control list and input/output queues
US11630783B2 (en) 2016-09-28 2023-04-18 Intel Corporation Management of accesses to target storage resources

Also Published As

Publication number Publication date
US20060101307A1 (en) 2006-05-11
US20060087450A1 (en) 2006-04-27
US20060230377A1 (en) 2006-10-12
US7809982B2 (en) 2010-10-05
US20060085781A1 (en) 2006-04-20
US8073974B2 (en) 2011-12-06
WO2006039710A9 (en) 2006-07-20
US7619541B2 (en) 2009-11-17
US20060123282A1 (en) 2006-06-08
US20060149920A1 (en) 2006-07-06
US7487302B2 (en) 2009-02-03
WO2006039710A2 (en) 2006-04-13
US7676649B2 (en) 2010-03-09
WO2006039710A3 (en) 2006-06-01
US20060101253A1 (en) 2006-05-11
US20060101250A1 (en) 2006-05-11

Similar Documents

Publication Publication Date Title
US7487302B2 (en) Service layer architecture for memory access system and method
US20040136241A1 (en) Pipeline accelerator for improved computing architecture and related system and method
AU2003287320B2 (en) Pipeline accelerator and related system and method
WO2004042562A2 (en) Pipeline accelerator and related system and method
US7047370B1 (en) Full access to memory interfaces via remote request
US7421524B2 (en) Switch/network adapter port for clustered computers employing a chain of multi-adaptive processors in a dual in-line memory module format
EP2116938B1 (en) Operation apparatus and control method
US8990456B2 (en) Method and apparatus for memory write performance optimization in architectures with out-of-order read/request-for-ownership response
CN101111828B (en) System and method for a memory with combined line and word access
US9304775B1 (en) Dispatching of instructions for execution by heterogeneous processing engines
CN114827048A (en) Dynamic configurable high-performance queue scheduling method, system, processor and protocol
JP3952856B2 (en) Caching method
US6253272B1 (en) Execution suspension and resumption in multi-tasking host adapters
US6604163B1 (en) Interconnection of digital signal processor with program memory and external devices using a shared bus interface
US6199143B1 (en) Computing system with fast data transfer of CPU state related information
US6401151B1 (en) Method for configuring bus architecture through software control
US20040128475A1 (en) Widely accessible processor register file and method for use
US6425029B1 (en) Apparatus for configuring bus architecture through software control
US20240086201A1 (en) Input channel processing for triggered-instruction processing element
CN111788553A (en) Packing and unpacking network and method for variable bit width data formats
EP1193606A2 (en) Apparatus and method for a host port interface unit in a digital signal processing unit
AU2002356010A1 (en) Switch/network adapter port for clustered computers employing a chain of multi-adaptive processors in a dual in-line memory module format

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: The EPO has been informed by WIPO that EP was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase