EP0293700B1 - Linear chain of parallel processors and method of using same


Info

Publication number
EP0293700B1
Authority
EP
European Patent Office
Prior art keywords
data
accumulator
processor
group
memory means
Prior art date
Legal status
Expired - Lifetime
Application number
EP88108175A
Other languages
German (de)
French (fr)
Other versions
EP0293700A3 (en)
EP0293700A2 (en)
Inventor
Stephen S. Wilson
Current Assignee
Applied Intelligent Systems Inc
Original Assignee
Applied Intelligent Systems Inc
Priority date
Filing date
Publication date
Application filed by Applied Intelligent Systems Inc
Publication of EP0293700A2
Publication of EP0293700A3
Application granted
Publication of EP0293700B1

Classifications

    • G06F9/3885: Concurrent instruction execution, e.g. pipeline, look ahead, using a plurality of independent parallel functional units
    • G06F9/3887: Concurrent instruction execution using a plurality of independent parallel functional units controlled by a single instruction for multiple data lanes [SIMD]
    • G06F15/8015: Architectures of general purpose stored program computers; single instruction multiple data [SIMD] multiprocessors; one-dimensional arrays, e.g. rings, linear arrays, buses
    • G06T1/20: General purpose image data processing; processor architectures; processor configuration, e.g. pipelining

Definitions

  • SIMD: Single Instruction Multiple Data
  • a parallel processing system 9 of the present invention comprises an array 10 of identical individual neighborhood processing units 10a-10n, and an associated array 13 of single-bit-wide memories 13a-13n.
  • Each individual processing unit is respectively associated with an individual single-bit-wide column by multiple row memory, e.g., processing unit 10i is associated with memory 13i.
  • the processor units are shown in groups of eight, for example 10a-10h.
  • the memories 13 associated with such groups of eight processor units are preferably constructed from byte-wide memories, and these memories 13 are also shown in groups of eight, for example 13a-13h.
  • Neighborhood processing units 10b-10n receive neighboring data from adjacent processing elements on their immediate left or right via lines 11i-11n, for example.
  • Each neighborhood processing unit 10a-10n also connects to associated memories 13a-13n by means of bidirectional data transfer lines 12a-12n.
  • Data input device 20 provides a stream of data to first processing unit 10a via line 21a. Data are held in shift registers within the processing units with outputs passing to subsequent processing units via data shift lines 21i-21n, for example. Data is shifted through a chain of shift registers within processor units 10a-10n and is output via data line 21p to output device 22.
  • A host computer 25 sends controlling signals via lines 26 to controller 27. Both host 25 and controller 27 will send or receive data from the groups 10a-10n of eight processor units via lines 15.
  • Host 25 is coupled to address select unit 18 via control line 17 wherein instructions derived from the signal on line 17 will cause selector 18 to pass either address signals from controller 27 via sixteen parallel bit lines 21 or address signals from host 25 via sixteen bit lines 22 through to sixteen bit output lines 19.
  • Sixteen-bit address lines 19 are shown split into two eight-bit lines: low address byte lines 14, and high address byte lines 23.
  • Low address byte lines 14 are coupled to groups of eight processor units 10a-10n, where units 10a-10h are one example thereof. Each group of eight processors 10a-10h, for example, connects to associated memories 13a-13h via eight bit lines 28a-28n whose bits serve as the low address byte to memories 13a-13h therein.
  • High address byte lines 23 are coupled to groups of eight memories 13a-13n. All processor unit groups 10a-10n receive clock and control signals from controller 27 via control lines 29.
  • FIG. 2 shows a block diagram of a single processor unit 30, representative of any one of the processor units 10a-10n, which includes several external connections to identical adjacent processor units to the immediate left or right.
  • Connections 36-38 and 41-44 on the right side of processor unit 30 correspond to right-side connections such as connections 11c, for example, associated with any one of the processing units 10a-10n shown in FIG. 1.
  • Connections 34-36, and 41, 42, 45 and 47 on the left side of processor unit 30 correspond to left side connections such as connections 11a, for example, associated with any one of said processing units in FIG. 1.
  • I/O data connections 21e and 21f in FIG. 2 correspond to associated pairs of left and right data shift lines 21a-21p in FIG. 1; memory data connection 12e in FIG. 1 corresponds to an associated data transfer line 12a-12n; and host data connection 15e corresponds to one of the eight bit lines which constitute the data byte lines 15 in FIG. 1.
  • Connections to the processor unit 30 on its left side are carry in line 34, west input neighbor line 35, and middle cell output line 36, which acts as an east neighbor input to the typical adjacent processor immediately to the left.
  • Connections to the processor unit on the right are carry out line 37, east neighbor input line 38, and middle cell output line 36, which acts as a west neighbor input to the typical adjacent processor immediately to the right.
  • a sixteen-bit accumulator 51 is composed of two identical sections, namely an accumulator high byte register 54 and an accumulator low byte register 55.
  • Accumulator 51 has four different functions which include: sixteen bit bidirectional parallel in; sixteen bit bidirectional parallel out; sixteen bit shift register with a unidirectional serial input via line 40; and sixteen bit shift register with unidirectional serial input via line 63.
  • Sixteen input connections are provided by eight bit lines 45 and 47, and sixteen output connections are provided by eight bit lines 41 and 43, which two pairs of lines respectively serve as the parallel in and parallel out ports of the combined sixteen stage shift register of accumulator 51 for shifting data therein to the east (via lines 41 and 43) and for receiving data therein from the west (via lines 45 and 47).
  • Sixteen input connections are provided by eight bit lines 42 and 44, and sixteen output connections are provided by eight bit lines 41 and 43, which two pairs of lines respectively serve as the parallel in and parallel out ports of the combined shift register of accumulator 51 for shifting data therein to the west (via lines 41 and 43) and receiving data therein from the east (via lines 42 and 44).
  • Lines 41, 42, 43 and 44 connect to a similar accumulator in an adjacent (e.g., nearest neighbor) processor unit 30 to the immediate east.
  • Lines 45, 41, 47 and 43 connect to a similar accumulator in an adjacent processor unit to the immediate west.
  • Accumulator low byte register 55 also is connected to memory data line 12e which is provided as an input thereto, and can serve to increment the value of the data stored by register 55 therein.
  • A carry-out signal of register 55 on line 53 carries the incrementing process into high-byte accumulator register 54 when register 55 overflows.
  • Line 62 is a serial shift output line from accumulator high byte register 54.
  • selector unit 60 is instructed by signals derived from control line CON1 to pass the logic state of either line 62 or line 12e to selector output line 63, which is coupled to the serial input of accumulator low byte register 55. It is thus apparent that during serial shift operations, serial input to the accumulator low byte register 55 can be derived from either the memory data line 12e or from the serial output 62 of accumulator high byte register 54.
  • Any of the four above-noted functions of accumulator registers 54 and 55 is selected by instructions derived on command lines CON2 and activated upon receiving a clock signal via respective lines CLK1 and CLK2. A more detailed description of the accumulator and its functions is provided later.
  • Any one of the sixteen accumulator output lines 41 and 43 can be selected by selector 50 with instructions derived from control lines CON3.
  • the logical state (0 or 1) of the one accumulator line selected by selector 50 is provided on line 52, which is input to the processing cell 31 and to an output selector unit 33.
  • Coupled to output selector 33 are seven input signals 15e, 56, 52, 71, 72, 73 and 74. Based upon instructions received on control lines CON5, selector 33 will select and transfer the logical state of one of these seven input signals to output line 70.
  • Line 70 is coupled to a three-state gate 76 controlled by control line CON6.
  • the logic signal on output line 70 is transferred to memory data line 12e and can be written into memory 13e if the output of gate 76 is enabled by an appropriate instruction provided on control line CON6. If the instruction on line CON6 commands that gate 76 assume an inactive state, the gate's output will switch to a high impedance state, thus allowing memory 13e to access line 12e and write data therein if so instructed.
  • data from seven different sources of data connected to each processor unit 10a-10n can be written into its respective memory of the plurality of memories 13a-13n.
  • These seven sources include data from (1) the host data bus via line 15e; (2) the I/O unit 32 via line 56; (3) any selected output from accumulator 31 by means of accumulator output selector 50, via line 52; (4) the "condition” signal via line 71; (5) the "function” signal via line 72; (6) the “carry register” signal via line 73; and (7) the "transpose” signal via line 74.
  • I/O unit 32 is an eight-bit, unidirectional, parallel-in, parallel-out, serial-in, serial-out shift register.
  • the parallel inputs are received from eight input lines 21e; and parallel outputs are transferred to eight lines 21f.
  • the lines 21e and 21f are typical examples of lines 21a-21n shown in FIG. 1, and are connected respectively to adjacent processor units 10 on the immediate east and west.
  • the serial-in signal to I/O unit 32 is obtained from memory data line 12e.
  • the serial-out signal from I/O unit 32 is output on line 56 to output selector 33. Either a parallel or serial shift function is selected by an instruction received on control line CON4, which is clocked into I/O unit 32 upon receipt of a clock signal on line CLK3.
  • Input data to be processed by the system 9 of this invention will come from data source 20 (see FIG. 1) preferably in a raster scan format, that is the input data will be provided in the form of a stream of H successive rows of data, with each row having a length of n data bytes.
  • the data to be processed constitutes a data matrix having a height of H rows and a width W of n bytes.
  • the system 9 accepts the incoming data row by row, for example, from the output buffers of a solid-state imaging device such as a CCD scanning device. This data stream is delivered via lines 21a to the parallel-in input of I/O shift register 32. Data in a first row of bytes is input to the system in two steps, as follows.
  • Step 1: Controller 27 clocks all the I/O shift registers 32 to the east synchronously n times until the data stream of the first row is completely stored in all I/O shift registers 32 in all processor units 10a-10n.
  • Step 2: A first row of bits from the first row of bytes is read out of all of the I/O shift registers 32 via their respective lines 56 and transferred to memory data lines 12a-12n via output selectors 33 and gates 76. As part of this read-out operation, controller 27 supplies the desired addresses for memories 13a-13n via lines 21, address selector 18, and address lines 19 (see FIG. 1).
  • controller 27 causes the other seven rows of bits from the first row of bytes to be stored by successively supplying further addresses to memories 13a-13n while synchronously clocking the I/O shift registers 32 serially via lines CLK3 and CON4 so that successive rows of bits are read out to the lines 56.
  • the above-mentioned two step process is repeated until all successive rows of the data matrix are transmitted from data input device 20 to memories 13a-13n.
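  • As a minimal software sketch of the two-step input process just described (the container names such as n_columns and column_memory are illustrative, and the bit ordering within each byte is assumed), one row of bytes can be modeled as being unpacked into eight rows of bits, one bit per column memory:

```python
# Illustrative model: one column memory (13a-13n) per processing unit,
# each holding a growing column of single bits.
n_columns = 16
column_memory = [[] for _ in range(n_columns)]

def input_row(row_bytes):
    """Step 1: the row of bytes is shifted east through the I/O registers 32;
    Step 2: eight successive memory writes store each byte as eight rows of bits."""
    assert len(row_bytes) == n_columns
    io_registers = list(row_bytes)            # contents of I/O shift registers after n shifts
    for bit in range(8):                      # eight successive memory addresses
        for col in range(n_columns):
            column_memory[col].append((io_registers[col] >> bit) & 1)

input_row([i % 256 for i in range(n_columns)])
# each column memory now holds the eight bits of its byte, one bit per row
```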
  • Data is output from the system to output device 22 in a similar two-step process. Step 1: First, controller 27 (FIG. 1) supplies the address of a desired first row of data bits to be output to memories 13a-13n, via lines 21, selector 18, and address lines 19, and causes the rows of bits of the specified memory addresses to be clocked up into the serial inputs of I/O shift registers 32.
  • Controller 27 proceeds in a similar manner by causing different addresses (e.g., addresses of adjacent, successive rows of data bits) to be supplied to memories 13a-13n until all eight bits of the desired first row of data bytes are output from memories 13a-13n into shift registers 32 via lines 12a-12n.
  • Step 2: Controller 27 clocks I/O shift registers 32 a total of n times, so as to cause the eight bits of data to be shifted to the east via lines 21b-21p, so that the entire first row of data bytes consisting of eight rows of data bits enters output device 22 via lines 21p.
  • the above two-step process is repeated until all desired rows of the data matrix are transmitted from memories 13a-13n to data output device 22.
  • Data from memory 13e can be delivered to the host data bus line 15e by activating or enabling three-state gate 78 with a control signal received by gate 78 from line CON7. In like manner, data can be read or written directly between the host computer 25 and any of the memories 13a-13n.
  • In FIG. 3, a detailed schematic diagram of a processor cell 31, which has a number of logic gates, flip-flops, selectors, and multiplexers, is shown. External command signals are depicted by and received on the lines labeled with the prefix "CMD"; clock signals are similarly depicted and received on the lines labeled with the prefix "CLK." (The other external signal lines are designated with reference numerals consistent with those used in the other Figures.) Processor cell 31 can be placed in various functional states depending upon the combination of command signals it receives. This enables processor cell 31 to perform a wide variety of processing functions, each of which will now be explained in detail.
  • neighborhood processing is the transformation of an entire matrix of numbers or elements, wherein the transformation of each element of the matrix involves a function which uses the nearby neighbors of the element as independent variables.
  • Three steps are required, as follows. First, data must be read from the associated memories 13a-13n into processing units 10a-10n, which units are each provided with enough on-board storage to hold nearest neighbor data in the horizontal and vertical directions for the element to be transformed that is currently associated with each processing unit. Secondly, the processing unit computes a transformation of the neighboring data according to some specific instruction, thereby modifying the data. Thirdly, the modified data must be written back into associated memories 13a-13n. These three steps are respectively called the read subcycle, the modify subcycle, and the write subcycle. This sequence of three steps is called a read-modify-write cycle, and may be repeated many times in order to completely process all the data according to some specified algorithm.
  • a first read operation causes a complete line of single bit data to be read from memories 13a-13n via lines 12a-12n into processing units 10a-10n depicted in FIG. 1.
  • the data being read in is loaded into flip-flops in the processing units for temporary storage, under the control of clock signal CLK4 applied thereto, of which south flip-flop 81 depicted in FIG. 3 is typical.
  • the data therein each correspond to a single bit in a first row of bits of the matrix stored in the memories 13a-13n.
  • a second read operation of an adjacent row of bits which operation includes another clock signal on line CLK4, causes the first bit stored in flip-flop 81 to shift up into middle flip-flop 82, while the second bit of data now occupies flip-flop 81. Any read operations thereafter are called read subcycles.
  • the initial read operations are often referred to as "filling the pipeline".
  • a third read operation which accesses the next adjacent row of data, in a like manner causes a further shifting of data so that the group of flip-flops 81, 82 and 83 contain bits from three adjacent rows of single bit data. Further read subcycles will cause the next set of three adjacent bits of data to occupy this group of flip-flops. Accordingly, the flip-flops 81-83 will contain a set of nearest neighbor data for a specific row in the north and south directions.
  • Neighboring processing cells 31 on the immediate left and right of the processing cell 31 depicted in FIG. 3 contain data bits which correspond to the east and west neighbors of the FIG. 3 processor cell, and output line 36 from the middle flip-flop 82 provides the east and west neighbor states for the processor cells adjacent to the FIG. 3 cell on the right and left respectively.
  • selector 85 is instructed by signals received on command lines CMD1 to pass the west neighbor signal on line 35 to output line 86, which delivers the signal to a first address input of carry (c) and sum (s) multiplexers 87 and 88.
  • selector 89 is instructed by signals received on command line CMD2 to pass the east neighbor signal on line 39 to output line 90, which delivers the signal to a second address input of multiplexers 87 and 88.
  • the logic state of the south neighbor is output by flip-flop 81 on line 91 and is delivered thereby to a third address input of multiplexers 87 and 88.
  • Command line CMD3 is set to a logic "1" so that due to well-known properties of AND gates, AND gate 92 effectively transfers onto line 94 the signal on line 93 which is the logic state of the north neighbor output by north flip-flop 83, where it is delivered to address input of multiplexer 95. If line 94 connected to the address input of multiplexer 95 is a logic "1" level, multiplexer 95 passes the signal on line 96 (which is the output by multiplexer 87) to its output port and line 98. If this address input is logic "0", multiplexer 95 passes the signal on line 97 (which is the output by multiplexer 88) to its output port and line 98.
  • The combination of multiplexers 87, 88 and 95 illustrated in FIG. 3 forms a two-level multiplexer, with multiplexers 87 and 88 being the first level and multiplexer 95 being the second level.
  • the collective action of this two-level multiplexer is that of a "truth table" having sixteen possible states.
  • the logic values of this truth table are derived from the states of command line inputs CMD4 and CMD5, which each contain eight lines.
  • the particular command line input which is chosen as the output in multiplexer 87 and in multiplexer 88 is determined by the state of the address inputs thereof.
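  • A minimal sketch of the two-level multiplexer viewed as a sixteen-entry truth table is given below; the mapping of the three first-level address inputs onto bit positions of the CMD4 and CMD5 command bytes is an assumption made purely for illustration:

```python
def two_level_mux(cmd4, cmd5, west, east, south, north):
    """cmd4/cmd5: eight-bit command words driving multiplexers 87 ("C") and 88 ("S");
    west/east/south/north: single neighbor bits forming the truth-table address."""
    addr = (south << 2) | (east << 1) | west            # assumed address-bit ordering
    first_level_c = (cmd4 >> addr) & 1                  # output of multiplexer 87 (line 96)
    first_level_s = (cmd5 >> addr) & 1                  # output of multiplexer 88 (line 97)
    return first_level_c if north else first_level_s    # multiplexer 95 selects (line 98)
```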
  • the transformation signal 98 and middle cell output 36 are coupled to two address inputs of multiplexer 100.
  • the logic states of these two address inputs determine which one of four input signals received by multiplexer 100 on command lines CMD6 is selected, and provided by multiplexer 100 as an output on line 101.
  • Multiplexer 100 thus acts as a truth table transformation of the middle cell 36 and the neighborhood transformation result on line 98.
  • Line 101 and middle cell output 36 are input to selector 102, whose selection operation is controlled by the state of the input signal provided on line 103. A logic "0" on line 103 will cause selection and passing of the signal on line 101, while a logic "1" thereon will cause selection of the signal from the middle cell on line 36.
  • the output of selector 102 is called the function output, and its logic value is transferred to or imposed on line 104, for delivery to "function" flip-flop 105, where it is latched therein upon activation of clock signal CLK5.
  • a signal on memory data line 12e is latched into condition flip-flop 80 upon activation of clock signal CLK6.
  • Output line 106 connected to and bearing the output or state of flip-flop 80 is coupled to AND gate 107.
  • Command line CMD7 is also coupled to AND gate 107.
  • the condition flip-flop 80, AND gate 107 and selector 102 collectively form a conditional enable circuit, where the state of said condition flip-flop 80 controls whether the function flip-flop 105 will latch the state of the function output of selector 102 as determined by line 101 or use the state of the middle cell as received via line 36, which represents the untransformed state.
  • This conditional enable operation thus provides a means for selectively allowing some processing cells 31 to obey neighborhood transformation instructions received on command lines CMD1-CMD6 while allowing other cells 31 to effectively ignore such transformation instructions.
  • the above conditional operation of the processing cells 31 can be deactivated by a logic "0" command on command line CMD7.
  • the function output state on line 72 is selected by output selector 33 according to instructions on command line CON5, and passed thereby through gate 76 to memory data line 12e where it is then written into memory 13e.
  • Boolean operations are functionally similar to neighborhood operations, with the major difference being that any (arbitrary) lines of data bits may be written to flip-flops 81, 82, 83 and 84 of cell 31 illustrated in FIG. 3, and not just consecutive data bits from adjacent rows of bits in the data matrix, as is required to perform neighborhood operations. Boolean operations are performed by the system 9 in the following manner. According to some specified algorithm, controller 27 of FIG. 1 addresses memories 13a-13n and causes four rows of bit data to be read successively while clocking lines CLK4 so that flip-flops 81-84 (which are connected as a four-stage shift register) of the processor cells 31 receive and hold the four rows of data.
  • Each of the cells 31 is configured and operates in the following manner.
  • Selector 85 is instructed by signals on command lines CMD1 to pass the middle cell state on line 36 to output line 86 and first address input of multiplexers 87 and 88
  • selector 89 is instructed by signals on command lines CMD2 to pass the output of the X flip-flop signal on line 109 to its output and line 90, which leads to the second address input of multiplexers 87 and 88.
  • CMD3 is set to a logic "1". In a manner analogous to but different from the neighborhood operations, this new configuration of multiplexers 87, 88, and 95 collectively forms a general truth table transformation of the four states in flip-flops 81-84.
  • command signals CMD6 set the M multiplexer 100 so that it will pass only the state of input 98 to output line 101.
  • the condition flip-flop 80, AND gate 107 and selector 102 collectively form a conditional enable in a similar manner to that in the neighborhood transformations.
  • the resulting Boolean function output state of selector 102 is latched into flip-flop 105 and thereafter written back to memory 13e in the manner described with respect to the neighborhood operations.
  • These control and command operations allow the processor cells 31 to perform arbitrary truth table transformations on a set of four arbitrary rows of data bits, based upon a truth table established by the state of command signals CMD3, CMD4 and CMD5, as sketched below.
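  • The same truth-table mechanism applied to four arbitrary rows of bits can be modeled as follows; the packing of the sixteen-entry truth table into an integer and the address-bit ordering are illustrative assumptions:

```python
def boolean_rows(rows, truth_table):
    """rows: four lists of bits, one bit per column, as held in flip-flops 81-84;
    truth_table: integer whose bit k gives the result for truth-table address k."""
    r81, r82, r83, r84 = rows
    out = []
    for a, b, c, d in zip(r81, r82, r83, r84):
        addr = (d << 3) | (c << 2) | (b << 1) | a       # assumed address-bit ordering
        out.append((truth_table >> addr) & 1)
    return out
```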
  • To begin a bit serial arithmetic operation, the CLK7 command coupled to the reset input of carry flip-flop 114 is momentarily activated (i.e., pulsed) so that the logic state therein is set to zero.
  • a line of data corresponding to the least significant bit of a first data word is read from memories 13a-13n and clocked into south flip-flop 81 in a manner similar to that for neighborhood or Boolean operations.
  • a least significant bit of a second data word is read from the memories, with the result that middle flip-flop 82 contains the state of the first bit and south flip-flop 81 contains the state of the second bit.
  • Selector 85 is instructed by signals on command lines CMD1 to pass the state of middle flip-flop 82 output on line 36 to output line 86 and first address inputs of multiplexers 87 and 88, while selector 89 is instructed by signals on command lines CMD2 to pass the carry signal on line 115 (from the output of the carry flip-flop 114) to line 90 and the second address inputs of multiplexers 87 and 88.
  • Output line 91 of south flip-flop 81 is the third address input to multiplexers 87 and 88.
  • the states of command lines CMD5 are set in such a manner that "S" multiplexer 88 gives the truth table for a one bit sum or addition of the input values stored in flip-flops 81 and 82 and carry input value stored in flip-flop 114.
  • the states of command lines CMD4 are set in such a manner that "C” multiplexer 87 acts as a truth table for a carry propagate value for the three input values onto line 96.
  • the resultant carry propagate value and addition value are respectively stored in carry flip-flop 114 and function flip-flop 105 by activation of respective clock signals CLK5 and CLK8.
  • the state of function flip-flop 105 is output on line 72 and can be read back to memory 13 as is done in neighborhood operations.
  • next least significant bits from the same first data word and second data word are read into flip-flops 81 and 82 and are processed in a manner identical to the above.
  • New arithmetic sum values are generated and written to memories 13a-13n, and new carry propagate values are generated and stored in flip-flop 114.
  • two (or more) data words having any arbitrary number of bits can be added together.
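  • A compact software analogue of this bit-serial addition, with data words stored least-significant bit plane first and one carry bit per column (mirroring carry flip-flop 114), might look as follows; the data layout is assumed for illustration:

```python
def bit_serial_add(word_a_planes, word_b_planes):
    """word_a_planes / word_b_planes: lists of bit planes (LSB plane first),
    each plane holding one bit per column."""
    n_cols = len(word_a_planes[0])
    carry = [0] * n_cols                                  # carry flip-flops 114, cleared by CLK7
    sum_planes = []
    for plane_a, plane_b in zip(word_a_planes, word_b_planes):
        plane_sum = []
        for col in range(n_cols):
            a, b, c = plane_a[col], plane_b[col], carry[col]
            plane_sum.append(a ^ b ^ c)                   # "S" multiplexer 88 truth table
            carry[col] = (a & b) | (a & c) | (b & c)      # "C" multiplexer 87 truth table
        sum_planes.append(plane_sum)                      # latched in flip-flop 105, written back
    return sum_planes
```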
  • conditional arithmetic operations can be readily provided through use of conditional flip-flop 80, as described earlier, in conjunction with the aforementioned serial arithmetic procedures.
  • Selector 85 may be instructed by signals on command lines CMD1 to pass the logic state of accumulator output line 52 therein to output line 86 of selector 85, and then the resulting arithmetic operation will involve the addition of bits of data words written into flip-flop 81 from memory, with words previously stored in the accumulator 51 and received via line 52.
  • selector 89 may be instructed by signals on command lines CMD2 to pass the logic state of carry-in signal on line 34 therein to output line 90 of selector 89, then the carry input for processor cell 31 will be obtained from the processing cell 31 to the immediate left via line 34 and the carry output will propagate to adjacent processing cell to the immediate right via line 37.
  • For parallel arithmetic operations, the data words have to be arranged in the memories and processing units such that successively significant bits are contiguous in the horizontal direction, where the most significant bits are toward the right.
  • A first row of data bits is read from the memories, with each bit being clocked into south flip-flop 81.
  • A second row of data bits is read from the memories, with each bit being clocked into flip-flop 81, whereupon the first data bits are shifted into middle flip-flop 82.
  • As in bit serial arithmetic, sum and carry signals are computed; however, the carry signals will propagate to the right, and the sum of the data bits comprising the data word will be stable shortly after the second row of data bits is read from memory 13e.
  • As in bit serial arithmetic, the resultant sum in each cell 31 is then clocked into its function flip-flop 105 and can be written to memory.
  • conditional parallel arithmetic operations and parallel arithmetic operations involving data from accumulators 51 are provided in a manner similar to that for bit serial arithmetic.
  • care must be used in writing algorithms in order to avoid overflow, so that carry signals will not accidentally propagate from one data word to the next, since many data words are on the same line of bits.
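  • An illustrative model of this parallel (horizontal) arithmetic mode is sketched below; the word width and the resetting of the carry at each word boundary are assumptions that stand in for the overflow care mentioned above:

```python
def horizontal_add(row_a, row_b, word_width=8):
    """row_a / row_b: one bit per column, with data words packed side by side,
    least significant bit of each word on the left."""
    result, carry = [], 0
    for i, (a, b) in enumerate(zip(row_a, row_b)):
        if i % word_width == 0:
            carry = 0                            # keep carries from spilling into the next word
        result.append(a ^ b ^ carry)             # sum settles as the carry ripples east
        carry = (a & b) | (a & carry) | (b & carry)
    return result
```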
  • The accumulator 51 receives serial input signals from processor cell 31 via line 40, and from memory 13e via memory data line 12e.
  • the serial input on line 40 is provided, as shown in FIG. 3, from accumulator input selector 116, which is instructed by signals on the command lines CMD8 to select either the memory signal on line 12e, or the function signal on line 104, or the accumulator output signal on line 52.
  • The state of the selected input signal is passed through selector 116 to its output line 40, which is coupled to accumulator register 54 shown in FIG. 2.
  • In FIG. 4, a diagram of accumulator high byte register 54 is shown within the large dashed rectangle, and eight one-bit accumulator units 120a-120h are provided therein.
  • the other one bit accumulators 120b-120h are identical in construction to accumulator 120a, and thus need not be shown.
  • the lower section of accumulator 51 namely the accumulator low byte register 55 shown in FIG. 2 is identical in internal construction to the register 54.
  • The flip-flop 122 therein stores the value of one bit of the eight-bit word stored in accumulator section 54.
  • One function performed by accumulator section 54 is the incrementing of the value of the word stored therein.
  • Output 123 from exclusive OR gate 124 contains the value of the incremented bit, whereas the inputs thereof are line 41a (which provides the value of the bit stored in flip-flop 122) and the first carry input connected to line 53 (which is the last carry line from the lower byte accumulator section 55).
  • A carry propagate function is formed by AND gate 125, and its output on line 126 contains the carry-out signal which is provided as the carry-in signal to the next one bit accumulator 120b.
  • Selector 127 is instructed by signals on control lines CON2 to pass as its output 129 the value of a selected one of four input signals, which are: (1) the value of the bit in accumulator unit 120b provided via line 128, (2) the value of a corresponding accumulator bit to the west provided via line 45, (3) the incremented value provided via line 123, and (4) the value on line 42a from the corresponding accumulator bit in an adjacent accumulator section 54 located to the east.
  • the selected input signal is output to the accumulator flip-flop 122 via line 129, so that the selected value will be stored therein upon activation of clock signal CLK1.
  • accumulator section 54 can, upon receipt of the appropriate control signals, perform the following four functions: (1) increment the eight bit value stored therein by one or zero, depending on the state of its first carry input; (2) parallel shift all eight bit values stored therein east; (3) parallel shift all eight bit values stored therein west; (4) serial shift the eight bits stored therein down, with the serial shift input value provided to high-order one bit accumulator 120h by input line 40 being selected from various sources in the processor cell 31 (see FIG. 3).
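  • The four accumulator functions can be summarized by the following behavioural sketch of one eight-bit section (register 54 or 55); the class, the bit ordering (index 0 taken as the least significant bit) and the representation of the shifts as list operations are illustrative assumptions:

```python
class AccumulatorSection:
    """Simplified model of an eight-bit accumulator section (units 120a-120h)."""
    def __init__(self):
        self.bits = [0] * 8                      # flip-flops 122, LSB first (assumed)

    def increment(self, carry_in=1):
        for i in range(8):
            old = self.bits[i]
            self.bits[i] = old ^ carry_in        # exclusive OR gate 124
            carry_in = old & carry_in            # AND gate 125, carry propagate
        return carry_in                          # carry passed on (e.g. line 53 toward the high byte)

    def shift_parallel(self, byte_from_neighbor):
        out = self.bits[:]                       # byte driven toward one neighbor
        self.bits = list(byte_from_neighbor)     # byte arriving from the opposite side
        return out                               # the same method serves east or west shifts

    def shift_down(self, serial_in):
        out = self.bits[0]                       # bit shifted out of the section
        self.bits = self.bits[1:] + [serial_in]  # serial input (line 40 or 63) enters at the top
        return out
```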
  • processor units 10a-10n are logically arranged in groups of eight units, as further illustrated by representative group within the dashed rectangle 130, for three reasons: (1) the host computer data lines 15 are capable of reading eight bits at a time; (2) the memory associated with the groups 130 is provided most economically in the form of a byte wide memory 132; and (3) because further internal functions are best handled in eight bit sizes.
  • Eight processor units 30a-30h are shown with accumulator inputs and outputs 45a, 41a, 47a and 43a on the left and 41h-44h on the right. Processor units 30a-30h are coupled to the memory 132 via eight data lines 12a-12h.
  • Memory data lines 12a-12h are connected as a first input to accumulator left input selector 135, and the low byte input 134 of an adjacent accumulator is connected to a second input of selector 135.
  • Instructions received on control line CON8 coupled to accumulator left input selector 135 will select either the first or second input, and transfer selected signals therein along input lines 47a to the accumulator low byte register 55 (see FIG.2).
  • the eight output bits of accumulator low byte register 55h within processor unit 30h, which are output on lines 43h on the east side of group 130, are also respectively connected to the transpose inputs 74a-74h of the processors 30a-30h.
  • Each such transpose input is coupled to its output selector 33 (see FIG.2) so that if its selector 33 is instructed by signals on lines CON5 to pass the transpose signal 74, then memory 13e will store the accumulator bits from accumulator register 55h therein.
  • the grid 14 represents a small subarray of vertical bytes as they might be stored in memory 13, where A0-A7, B0-B7, C0-C7, etc., each represent a typical byte.
  • For a transpose-in operation, the command signal on line CON8 (see FIG. 5) is set to cause left input selector 135 to select the output from memory data lines 12a-12h for delivery via lines 47a to processor unit 30a.
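  • Functionally, the transpose path turns eight vertically stored bytes into eight horizontal bytes (and back); a trivial software equivalent of the 8x8 transpose is:

```python
def transpose_8x8(block):
    """block: 8x8 list of bits, block[row][col], as held in memories 13a-13h."""
    return [[block[col][row] for col in range(8)] for row in range(8)]
```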
  • Eight parallel outputs of the accumulator low byte register 55h on the extreme right of FIG. 5 are also connected, via eight lines 43h, as a first set of inputs to low address selector 131.
  • the eight low address byte lines 14 (see FIG. 1) are connected as a second set of inputs to selector 131.
  • Instructions received on control line CON9 can set selector 131 to pass either accumulator output signals from the lines 43h, or the low address byte signals on lines 14, to lines 28, which are the eight least significant bits of an address delivered to memories 13a-13h located in byte-wide memory 132.
  • Controller 27 can address memories 13a-13n, load numbers into the eight accumulator sections of any desired group 130 of eight processor units within processor units 30a-30n, and then use the loaded numbers in the east accumulator section 55h of the selected group 130 to address the memories 13a-13n again.
  • This kind of function is commonly known as indirect addressing.
  • the data in the accumulator sections of group 130 can be shifted east and the indirect addressing can be repeated for the newly shifted values until all eight accumulators in the group have furnished indirect addresses to perform desired processing.
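  • A minimal sketch of one round of this indirect addressing for a group of eight processors follows; the memory is modeled as a simple list and the shifting is simplified, so the names and data layout are illustrative only:

```python
def indirect_read(group_memory, accumulator_bytes):
    """group_memory: byte-wide memory 132, modeled as a list indexed by address;
    accumulator_bytes: eight bytes held in the group's low-byte accumulators."""
    reads = []
    acc = list(accumulator_bytes)
    for _ in range(8):
        address = acc[-1]                   # east-most accumulator 55h supplies the low address byte
        reads.append(group_memory[address])
        acc = [0] + acc[:-1]                # shift east so the next byte reaches 55h
    return reads
```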
  • Command input selector 133 in FIG. 5 can be instructed by signals received on control line CON10 to pass the states of either the low address byte 14 or the byte from memories 13a-13n received via lines 12a-12h to command storage registers 137 via lines 138.
  • Upon activation of a clock signal on line CLK9, data on input lines 138 will be latched into command storage registers 137.
  • the command storage module 137 preferably is a four stage shift register which can store four bytes therein. Four clock cycles on line CLK9 are thus required to completely transfer a new set of four bytes of commands into the command storage register 137.
  • the four bytes of data stored within command storage register 137 are output via command lines CMD1-CMD8 and serve as the multitude of the command signals shown in and used to operate the processor cell 31 of FIG. 3.
  • Data words for command storage module 137 for processing units 30a-30h of group 130 can be obtained either from the low address byte lines 14 directly, or from memories 13a-13h, in which case the low address byte lines furnish an address to memories 13a-13h via low address selector 131 and lines 28.
  • different groups 130 of processing units can simultaneously access different locations within memory 13, i.e., their respective assigned portions of memory 13, and receive different commands previously stored in memory 13.
  • A look-up table is commonly used where each data element in a data array or matrix is to be transformed according to a very complex rule. Ordinarily it would be very time consuming to make a computation in accordance with such a rule for every data element. But if the computation were made once, off line, for each possible data value of the combination of independent variables or inputs, and the results stored as horizontal bytes in the memories 13a-13n, then the processor units 10a-10n need only look up that value from the stored LUT array for each data point.
  • Referring to FIGS. 7A-7C, assume a desired LUT is stored in memory 13 in a horizontal format.
  • The first step is to read eight such vertically stored data bytes, represented by the eight bit x eight bit segment 148 of the memories 13a-13h, into an 8x8 group 150 of low byte accumulators 55 while clocking the accumulators 55 "down", as depicted in FIG. 7A by signal flow lines 152 and arrow 153.
  • The accumulator input selectors 116 of the processor cells 31a-31h need to pass the data from memory segment 148 of memories 13a-13h on lines 12a-12h to lines 40a-40h (see FIGS. 1 and 2). Eight clock cycles are required.
  • Data bytes stored in group 150 are used as indirect addresses which address the memory space 154 in memories 13a-13h where the LUT is stored, as depicted in FIG. 7B by signal flow lines 156.
  • the number of rows in memory space 154 equals the number of rows in the LUT.
  • the data byte transformed according to the LUT in space 154 is read out of memory as indicated by signal flow lines 158 and shifted east as indicated by arrow 159 into the accumulator group 150.
  • the extreme right data byte is lost during the shift east and the next right-most data element occupies the extreme right position in group 150.
  • Eight clock cycles are required to process all eight bytes during this second step.
  • the data transformed by referencing entries in the LUT 154 is clocked east into the extreme left accumulator 55a of group 150 via input lines 47a, and in the process the data are transposed to the vertical format and held in group 150.
  • the new transformed values held in group 150 are written back into memory 13a-13h in the vertical format.
  • Eight more clock cycles are required in order to have the accumulators 55 of group 150 shift down fully, thus storing all eight transformed values.
  • The transformed values are stored back into memory segment 148, but if desired, can be stored at a different segment within the memories 13a-13h.
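  • The three-step look-up-table procedure of FIGS. 7A-7C can be paraphrased by the following sketch, which assumes a 256-entry table and an eight-byte block held in accumulator group 150 (the container names are illustrative):

```python
def lut_transform_block(vertical_block, lut):
    """vertical_block: eight data bytes shifted down into accumulator group 150 (step 1);
    lut: the precomputed table held in memory space 154."""
    acc = list(vertical_block)
    for _ in range(8):                      # step 2: eight clock cycles
        transformed = lut[acc[-1]]          # east-most byte is the indirect address into the LUT
        acc = [transformed] + acc[:-1]      # result enters at accumulator 55a while the group shifts east
    return acc                              # step 3: written back to memory in vertical format
```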
  • A histogram is a count of the number of times that each possible value of a group of data values occurs in an entire data array, and is another operation which can be advantageously implemented by using the transpose and indirect addressing features of my invention.
  • A preferred technique for generating a histogram using the system 9 of my invention is as follows. First, the area in memories 13a-13n where the various counts associated with the histogram are to be accumulated is zeroed out. Assume for the sake of this example that the data is in the vertical format in an array 170 in memories 13a-13h shown in FIG. 8A and the histogram count values will be accumulated in the horizontal format. A group of eight data bytes are loaded (serially downshifted) into the low byte accumulator group 150 as shown in FIG. 8A.
  • Processing cell 175 may be composed of the processor cells 31a-31h in processor units 10a-10h, for example. The data value in the accumulator group is used as an indirect address to read the corresponding count row from memory into processing cell 175; the count in the processing cell 175 is then incremented using the horizontal arithmetic mode of the processing cell, as indicated in FIG. 8B.
  • the incremented value is returned to the same memory location (row) from which it came, as depicted in FIG. 8C by signal flow lines 176 and 178.
  • the accumulator 150 is shifted east to get ready for the next count of the next data value.
  • the incrementing process illustrated in FIG. 8B and FIG. 8C occurs a total of eight times to count all the data loaded into the group 150 of eight accumulators during the step of FIG. 8A. All rows in the data matrix which spans, for example, several sets of eight-bit-wide columns in the memories 13a-13n are processed concurrently in a similar manner. Finally, after all rows within each set of columns have been processed, the several histograms, one for each group of eight columns, can be consolidated. Moreover, if a vertical format is needed, they can be transposed.
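  • For one group of eight columns, the histogram procedure of FIGS. 8A-8C reduces to the following sketch (the counts are modeled as an ordinary list; in the system they live in the zeroed region of memories 13a-13h and are incremented in processing cell 175):

```python
def histogram_block(data_bytes, counts):
    """data_bytes: eight values loaded into accumulator group 150;
    counts: per-value count array, zeroed beforehand."""
    acc = list(data_bytes)
    for _ in range(8):
        value = acc[-1]                     # east-most accumulator supplies the indirect address
        counts[value] += 1                  # read the count, increment it, write it back
        acc = [0] + acc[:-1]                # shift east for the next data value
    return counts
```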
  • FIGS. 9A-9C depict data flow paths for an example of the accumulators being used in their accumulating role, where sixteen-bit numbers are added to values already within the accumulators.
  • The data are 8 sixteen-bit values in vertical format.
  • the sixteen-bit accumulator 190 is clocked as indicated by arrows 194 down and also read into the processing cell 175 as indicated by upward signal flow path 195, wherein the two data inputs are added.
  • Accumulator 190 is formed from an eight-bit-wide by eight-bit-high accumulator section 192 (which may be the accumulator high-byte registers 54a-54h of eight adjacent processor units 30a-30h), and a corresponding 8x8 low byte accumulator group or section 150.
  • the sum produced in cell 175 is read back into the accumulator 190 while it is shifting down.
  • FIG. 9B illustrates that the accumulator 190 may be optionally shifted east or west if the next data value to be summed is in a different column of memory segment 188.
  • the two phases or sequence of steps depicted in FIGS. 9A and 9B are repeated as many times as are needed to complete the desired summation of various data from near (or distant) neighbors.
  • the shifts in either the east or west direction may be used to carry values or partial sums any arbitrary distance along the array of processing units 10a-10n; thus, the accumulation function is not confined to being performed within a given memory segment such as segment 188.
  • This accumulation technique illustrated by FIGS. 9A and 9B will handle convolutions, or sums with various multiplication factors by using the familiar multiplication technique of "shift and add" well-known to those skilled in the art.
  • the contents of the accumulator 190 are stored in memories 13a-13n by shifting the accumulator down as depicted in FIG. 9C by arrows 196. If desired, zeros may be shifted into accumulator 190 behind the outgoing data from processor cell 175. By shifting in zeros, the accumulator 190 is now ready to process another row of the data matrix.
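  • The accumulate-and-shift usage of FIGS. 9A-9C can be modeled roughly as below; the per-row shift directions are supplied by the caller, and the 16-bit wrap-around is an illustrative simplification:

```python
def accumulate(rows_of_values, shifts):
    """rows_of_values: successive rows of values, one value per column, to be summed;
    shifts: per-row accumulator shift (+1 = east, -1 = west, 0 = none)."""
    acc = [0] * len(rows_of_values[0])           # the sixteen-bit accumulators 190
    for row, shift in zip(rows_of_values, shifts):
        if shift > 0:
            acc = [0] + acc[:-1]                 # optional shift east (FIG. 9B)
        elif shift < 0:
            acc = acc[1:] + [0]                  # optional shift west
        acc = [(a + v) & 0xFFFF for a, v in zip(acc, row)]   # add while shifting down (FIG. 9A)
    return acc                                   # finally shifted down into memory (FIG. 9C)
```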
  • FIGS. 10A and 10B depict a method of counting the number of bits in selected columns of a data matrix stored in memory using the system 9 of the present invention.
  • the combined low byte and high byte accumulator group 190 is placed in the increment mode, as suggested in FIG. 10A.
  • the accumulator group 190 is clocked while a byte-wide memory segment 200 of memories 13a-13h is read out, and a logic one bit in the data therein will increment the accumulator 190, whereas a logic zero bit therein will not alter the contents of the accumulator 190.
  • the segment 200 may contain an arbitrary number of rows.
  • Each one-bit-wide accumulator 51a-51h within accumulator group 190 contains the sum of all logic "1" data bits in its associated data matrix column within memory segment 200.
  • The sums within the one-bit-wide accumulators of group 190 are shifted down and written into another byte-wide memory segment 202 of memories 13a-13h as depicted in FIG. 10B. Note that the sums thus stored in memory segment 202 are in a vertical data format of the type as illustrated in FIG. 6A, but will be sixteen bits high instead of eight bits high.
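  • The column bit count of FIGS. 10A and 10B amounts to the following (each logic one read from a row of segment 200 increments that column's accumulator; the names are illustrative):

```python
def count_column_bits(bit_rows):
    """bit_rows: rows of 0/1 values from memory segment 200, one bit per column."""
    counts = [0] * len(bit_rows[0])              # accumulators of group 190 in increment mode
    for row in bit_rows:
        counts = [c + b for c, b in zip(counts, row)]   # a 1 increments, a 0 leaves the count alone
    return counts                                # later shifted down into memory segment 202
```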
  • Sixteen-bit constant numbers can be added to each element of a data matrix by first loading the numbers stored in vertical format within a 16 row x 8-bit-wide memory segment 188 into the accumulator 190 as depicted in FIG. 11A by signal flow lines 210.
  • the various one-bit-wide, 16-bit high accumulators 54a-54h within the accumulator 190 can each contain a different number.
  • a bit serial addition occurs in the individual cells 31a-31h of processor cell group 175 in a sequence of two cycles.
  • FIG. 11B depicts the first cycle where a row of least significant bits of the data matrix is read (as illustrated by signal flow 212) into the processing cell 175, while at the same time the accumulator 190 is shifted down once, and a row of eight bits is loaded into the processing cell 175 as indicated by signal flow 214. The row of accumulator output bits from the bottom 216 of accumulator 190 is recycled back to the inputs of the top 218 thereof.
  • In the second cycle, each resultant sum bit now in the individual cells 31a-31h of processing cell 175 is written into memory segment 188 as indicated by signal flow line 220.
  • the cycles depicted in FIG. 11B and 11C are repeated for all remaining more significant bits in the data words stored in segment 188.
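  • The bit-serial addition of a per-column constant (FIGS. 11A-11C) can be sketched as follows, with the constant bits standing in for the accumulator output that is recycled from bottom 216 to top 218; the data layout is assumed for illustration:

```python
def add_constant_bit_serial(data_planes, constant_bits):
    """data_planes: data words as bit planes, least significant plane first, one bit per column;
    constant_bits: per-column constant, as a list of bits (LSB first) for each column."""
    n_cols = len(data_planes[0])
    carry = [0] * n_cols
    out_planes = []
    for k, plane in enumerate(data_planes):
        new_plane = []
        for col in range(n_cols):
            a = plane[col]                       # data bit read into processing cell 175
            b = constant_bits[col][k]            # constant bit shifted out of accumulator 190
            c = carry[col]
            new_plane.append(a ^ b ^ c)
            carry[col] = (a & b) | (a & c) | (b & c)
        out_planes.append(new_plane)             # written back to memory (FIG. 11C)
    return out_planes
```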

Description

    BACKGROUND OF THE INVENTION
  • This invention relates to systems and methods for the processing and analysis of spatially related data arrays such as images, by means of a large array of programmable computing elements.
  • A number of systems have been developed which employ a large array of simple bit serial processors, each receiving the same instruction at any given time from a central controller. These types of systems are called "Single Instruction Multiple Data" (SIMD) parallel processors. There are various methods for communicating data from one processor to another. For example, the massively parallel processor described in K.E. Batcher, "Design of a Massively Parallel Processor," IEEE Transactions on Computers, Sept. 1980, pp. 836-840, contains an array of 128x128 processors where image processing is an important application. Data is communicated between neighboring processing elements when an instruction that requires a neighborhood operation is performed. Image data arrays with dimensions larger than 1024x1024 are not uncommon. Since processor arrays this large are not economically feasible, the array must be broken into smaller data array sizes with dimensions equivalent to the size of the processor array. There are other types of SIMD processors, but they also generally experience the problem of data arrays larger than the processor array. Generally, for all these systems, all the memory associated with the processors is not large enough to hold the entire image along with extra memory capacity for intermediate computational results.
  • Thus, a large external memory is necessary, and mechanisms must be able to handle the input and output of small subarray segments at high speed to preserve computing efficiency. Even if enough memory were supplied to each processor, so that the total memory associated with the ensemble of processors could contain the entire large array of image data, there would still remain the problem of communicating data between the various subarrays when neighborhood operations are performed. During an instruction clock cycle, every processor receives the output of its associated memory, so that processors on the edges of the array cannot receive data from neighboring subarrays because all memories are already engaged in reading an entire subarray. Thus, multiple clock cycles would be needed in reading data when subarray and neighboring subarray data are both needed in a computation. Generally, SIMD processors are less efficient in handling global processes where large areas of the data matrix must be analyzed, such as in histograms, feature extraction, and spatial transforms, such as the Hough transform, and Fourier analysis.
  • Indirect addressing is an important processing concept, but the difficulties with implementing it in a parallel processing environment have been recognized in the literature. See for example: A.L. Fisher & P.T. Highnam, "Real Time Image Processing on Scan Line Array Processors," IEEE Workshop on Pattern Analysis and Image Database Management, Nov. 18-20, 1985, pp. 484-489; and P.E. Danielson & T.S. Ericsson, "LIPP-Proposals for the Design of an Image Processor Array", chap. 11, pp. 157-178, COMPUTING STRUCTURES FOR IMAGE PROCESSING (Ed. M.J.B. Duff, Academic Press 1983). Large amounts of memory are required for indirect addressing to be useful because applications such as look-up-tables or histograms which can benefit from indirect addressing also require a large amount of memory. In SIMD processors the memory is generally integrated on the same chip as the processor, but technology limits the integration of both processors and memory on one chip so that the memory is too small for indirect addressing to solve any useful problems using these technologies. However, if the memory is outside the chip, then for a large number of integrated processors on a chip, there are too many address lines that the processors must handle, so that the number of signal paths is a strong limiting factor.
  • Reference is further made to the IEEE Conference Proceedings of the 13th Annual International Symposium on Computer Architecture, 2-5 June 1986, pages 338-345; A.L. Fisher: "Scan line array processors for image computation".
  • This article discloses an arrangement in which a commercial memory is used with a processor array through the use of interface chips. Fisher recognizes that indirect addressing (referred to as "independent addressing" by Fisher) of the memories would require a large number of pins in the chip, and suggests that the interface chip could be incorporated within the custom memory. The arrangement in Fisher assumes that there would exist a group of several address pins associated with each single data pin that is connected to each single processing element.
  • In addition, this article discloses a method and system based on an array of individual processor units which includes an array of processor cells linked together by a plurality of connections and an array of memory means, with each of the memory means being associated with a respective one of the processor units and each of the memory means being arranged for storing one column of data.
  • Since all processors simultaneously perform the same instructions in SIMD processor arrays, it has been recognized that a method is required to prevent some selected processors from performing the instructions, according to data values within the associated memory. Usually, a memory write inhibit function is used where a programmable flip-flop controls the write function for each memory in the array. However, the write inhibit function requires an extra line from the processor chip to the associated memory chip. Because of output pin limitations on the circuit chips, not too many processors can be integrated on a single chip. Also, cost effective byte-wide memories could not be utilized because the eight separate data lines cannot be separately inhibited.
  • Therefore, a primary object of the present invention is to provide a simple method to allow a fixed array of processors to handle a large array of data while performing operations which require neighborhood and global processing of data.
  • Another object of the invention is to provide an effective method of indirect addressing of memory which operates independently for each SIMD processor in the array.
  • A further object of the invention is to provide a means of handling large arrays of data without resorting to memories and associated input and output mechanisms remote from the processing array.
  • SUMMARY OF THE INVENTION
  • This invention relates to a method of handling the processing of rectangular arrays of data where the entire data array is held in memory associated with an array of processing elements. In the method of this invention, a plurality of identical individual processing units are connected in a linear chain, where there is one processing element per column of the data matrix, and each processing unit is coupled to a large enough memory to contain the entire height of the data matrix. The identical processor units have connections to adjacent neighboring units and communicate neighborhood data therewith.
  • The processor unit employs a highly flexible accumulator therein which is used as a means for incrementing data, a wideband data communication means, a means for transposing 8x8 data subarrays, and a register for indirect addressing. The combination of accumulator functions allows external byte-wide memories to be used and still provide operations which would otherwise be impossible. The transpose means allows data, which is generally treated as bit serial, to be converted to parallel byte-wide data. Furthermore, the processor unit has a novel mode of operation where parallel arithmetic functions can be performed. Also, a two-level multiplexer within each processor unit is capable of performing both look-up-table functions and bit serial arithmetic functions.
  • In light of the foregoing problems and objects, there is provided, according to one aspect of my invention, a processing system and method as set out in claims 1 and 9, respectively.
  • These and other aspects, objects and advantages of the present invention will be more fully understood by reference to the following detailed description taken in conjunction with the various figures and appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The drawings form an integral part of the description of the preferred embodiments and are to be read in conjunction therewith. Like reference numerals designate identical components in the different Figures, where:
    • FIG. 1 is a block diagram of a parallel processing overall system of the present invention which employs a linear string of processor units;
    • FIG. 2 is a block diagram of a typical single processor unit;
    • FIG. 3 is a detailed schematic of the processor cell of the FIG. 2 processor unit;
    • FIG. 4 is a detailed diagram illustrating the construction of eight accumulator cells which constitute the accumulator high byte register of the FIG. 2 processor unit;
    • FIG. 5 is a block diagram of interconnections between a group of eight processor units and a byte-wide memory associated therewith;
    • FIGS. 6A and 6B illustrate a transpose operation;
    • FIGS. 7A-7C depict signal flow for a look-up-table computation;
    • FIGS. 8A-8C depict signal flow for a histogram computation;
    • FIGS. 9A-9C depict signal flow for a data accumulation application;
    • FIGS. 10A and 10B depict signal flow for a counting application; and
    • FIGS. 11A-11C depict signal flow for adding constants to a data matrix in memory.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Referring to FIG. 1, a parallel processing system 9 of the present invention comprises an array 10 of identical individual neighborhood processing units 10a-10n, and an associated array 13 of single-bit-wide memories 13a-13n. Each individual processing unit is respectively associated with an individual single-bit-wide column by multiple row memory, e.g., processing unit 10i is associated with memory 13i. The processor units are shown in groups of eight, for example 10a-10h. Likewise, the memories 13 associated with such groups of eight processor units are preferably constructed from byte-wide memories, and these memories 13 are also shown in groups of eight, for example 13a-13h. Neighborhood processing units 10b-10n receive neighboring data from adjacent processing elements on their immediate left or right via lines 11i-11n, for example. Each neighborhood processing unit 10a-10n also connects to associated memories 13a-13n by means of bidirectional data transfer lines 12a-12n. Data input device 20 provides a stream of data to first processing unit 10a via line 21a. Data are held in shift registers within the processing units with outputs passing to subsequent processing units via data shift lines 21i-21n, for example. Data is shifted through a chain of shift registers within processor units 10a-10n and is output via data line 21p to output device 22. A host computer 25 sends controlling signals via lines 26 to controller 27. Both host 25 and controller 27 will send or receive data from the groups 10a-10n of eight processor units via lines 15. Host 25 is coupled to address select unit 18 via control line 17 wherein instructions derived from the signal on line 17 will cause selector 18 to pass either address signals from controller 27 via sixteen parallel bit lines 21 or address signals from host 25 via sixteen bit lines 22 through to sixteen bit output lines 19. Sixteen bit address lines 19 are shown split into two eight bit lines: low address byte lines 14, and high address byte lines 23. Low address byte lines 14 are coupled to groups of eight processor units 10a-10n, where units 10a-10h are one example thereof. Each group of eight processors 10a-10h, for example, connects to associated memories 13a-13h via eight bit lines 28a-28n whose bits serve as the low address byte to memories 13a-13h therein. High address byte lines 23 are coupled to groups of eight memories 13a-13n. All processor unit groups 10a-10n receive clock and control signals from controller 27 via control lines 29.
  • FIG. 2 shows a block diagram of a single processor unit 30, representative of any one of the processor units 10a-10n, which includes several external connections to identical adjacent processor units to the immediate left or right. Connections 36-38 and 41-44 on the right side of processor unit 30 correspond to right-side connections such as connections 11c, for example, associated with any one of the processing units 10a-10n shown in FIG. 1. Similarly, connections 34-36, and 41, 42, 45 and 47 on the left side of processor unit 30 correspond to left side connections such as connections 11a, for example, associated with any one of said processing units in FIG. 1. Also, I/O data connections 21e and 21f in FIG. 2 correspond to associated pairs of left and right data shift lines 21a-21p in FIG. 1; memory data connection 12e in FIG. 2 corresponds to an associated data transfer line 12a-12n; and host data connection 15e corresponds to one of the eight bit lines which constitute the data byte lines 15 in FIG. 1.
  • Connections to the processor cell 30 on its left side are carry in line 34, west input neighbor line 35, and middle cell output line 36, which acts as an east neighbor input to the typical adjacent processor immediately to the left. Connections to the processor cell on the right are carry out line 37, east neighbor input line 38, and middle cell output line 36, which acts as a west neighbor input to the typical adjacent processor immediately to the right. The functions of the foregoing connections to and from processor cell 30 and the purposes thereof will be made clear in a later detailed discussion of the processor cell.
  • A sixteen-bit accumulator 51 is composed of two identical sections, namely an accumulator high byte register 54 and an accumulator low byte register 55. Accumulator 51 has four different functions which include: sixteen bit bidirectional parallel in; sixteen bit bidirectional parallel out; sixteen bit shift register with a unidirectional serial input via line 40; and sixteen bit shift register with unidirectional serial input via line 63. Sixteen input connections are provided by eight bit lines 45 and 47, and sixteen output connections are provided by eight bit lines 41 and 43, which two pairs of lines respectively service the parallel in and parallel out ports of the combined sixteen stage shift register of accumulator 51 for shifting data therein to the east (via lines 41 and 43) and for receiving data therein from the west (via lines 45 and 47). Similarly, sixteen input connections are provided by eight bit lines 42 and 44, and sixteen output connections are provided by eight bit lines 41 and 43, which two pairs of lines respectively serve as the parallel in and parallel out ports of the combined shift register of accumulator 51 for shifting data therein to the west (via lines 41 and 43) and receiving data therein from the east (via lines 42 and 44). Lines 41, 42, 43 and 44 connect to a similar accumulator in an adjacent (e.g., nearest neighbor) processor unit 30 to the immediate east. Lines 45, 41, 47 and 43 connect to a similar accumulator in an adjacent processor unit to the immediate west. Accumulator low byte register 55 also is connected to memory data line 12e which is provided as an input thereto, and can serve to increment the value of the data stored by register 55 therein. A carry out signal of register 55 on line 53 serves to carry the incrementing process into high-byte accumulator register 54 when register 55 overflows. Line 62 is a serial shift output line from accumulator high byte register 54. During the aforementioned serial shift operations, selector unit 60 is instructed by signals derived from control line CON1 to pass the logic state of either line 62 or line 12e to selector output line 63, which is coupled to the serial input of accumulator low byte register 55. It is thus apparent that during serial shift operations, serial input to the accumulator low byte register 55 can be derived from either the memory data line 12e or from the serial output 62 of accumulator high byte register 54. Any of the four above-noted functions of accumulator registers 54 and 55 is selected by instructions derived on command lines CON2 and is activated upon receiving a clock signal via respective lines CLK1 and CLK2. A more detailed description of the accumulator and its functions is provided later.
  • Any one of the sixteen accumulator output lines 41 and 43 can be selected by selector 50 with instructions derived from control lines CON3. The logical state (0 or 1) of the one accumulator line selected by selector 50 is provided on line 52, which is input to the processing cell 31 and to an output selector unit 33.
  • Coupled to output selector 33 are seven input signals 15e, 56, 52, 71, 72, 73 and 74. Based upon instructions received on control lines CON5, selector 33 will select and transfer the logical state of one of these seven input signals to output line 70. Line 70 is coupled to a three-state gate 76 controlled by control line CON6. The logic signal on output line 70 is transferred to memory data line 12e and can be written into memory 13e if the output of gate 76 is enabled by an appropriate instruction provided on control line CON6. If the instruction on line CON6 commands that gate 76 assume an inactive state, the gate's output will switch to a high impedance state, thus allowing memory 13e to access line 12e and write data therein if so instructed. In the same basic manner, data from seven different sources of data connected to each processor unit 10a-10n can be written into its respective memory of the plurality of memories 13a-13n. These seven sources include data from (1) the host data bus via line 15e; (2) the I/O unit 32 via line 56; (3) any selected output from accumulator 51 by means of accumulator output selector 50, via line 52; (4) the "condition" signal via line 71; (5) the "function" signal via line 72; (6) the "carry register" signal via line 73; and (7) the "transpose" signal via line 74.
  • I/O unit 32 is an eight-bit, unidirectional, parallel-in, parallel-out, serial-in, serial-out shift register. The parallel inputs are received from eight input lines 21e; and parallel outputs are transferred to eight lines 21f. The lines 21e and 21f are typical examples of lines 21a-21n shown in FIG. 1, and are connected respectively to adjacent processor units 10 on the immediate east and west. The serial-in signal to I/O unit 32 is obtained from memory data line 12e. The serial-out signal from I/O unit 32 is output on line 56 to output selector 33. Either a parallel or serial shift function is selected by an instruction received on control line CON4, which is clocked into I/O unit 32 upon receipt of a clock signal on line CLK3.
  • Data Input.
  • Input data to be processed by the system 9 of this invention will come from data source 20 (see FIG. 1) preferably in a raster scan format, that is the input data will be provided in the form of a stream of H successive rows of data, with each row having a length of n data bytes. Thus, it may be appreciated that the data to be processed constitutes a data matrix having a height of H rows and a width W of n bytes. The system 9 accepts the incoming data row by row, for example, from the output buffers of a solid-state imaging device such as a CCD scanning device. This data stream is delivered via lines 21a to the parallel-in input of I/O shift register 32. Data in a first row of bytes is input to the system in two steps, as follows. Step 1: Controller 27 clocks all the I/O shift registers 32 to the east synchronously n times until the data stream of the first row is completely stored in all I/O shift registers 32 in all processor units 10a-10n. Step 2: A first row of bits from the first row of bytes is read out of all of the I/O shift registers 32 via their respective lines 56 and transferred to memory data lines 12a-12n via output selectors 33 and gates 76. As part of this read-out operation, controller 27 supplies the desired addresses for memories 13a-13n via lines 21, address selector 18, and address lines 19 (see FIG. 1). In a similar manner, controller 27 causes the other seven rows of bits from the first row of bytes to be stored by successively supplying further addresses to memories 13a-13n while synchronously clocking the I/O shift registers 32 serially via lines CLK3 and CON4 so that successive rows of bits are read out to the lines 56. The above-mentioned two step process is repeated until all successive rows of the data matrix are transmitted from data input device 20 to memories 13a-13n.
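  • By way of illustration, the raster-to-bit-plane storage pattern just described can be modelled in a few lines of software. The following Python fragment is only a minimal sketch of the data layout, not of the hardware; the memory model (one list of bits per column) and the name store_byte_row are assumptions made for this example, as is the least-significant-bit-first ordering.

        # Model: one single-bit-wide column memory per processor unit.
        W = 8                                  # number of columns in this example
        memories = [[] for _ in range(W)]      # memories[c][a] = bit at address a of column c

        def store_byte_row(byte_row, memories):
            """Store one raster row of bytes as eight successive rows of bits,
            one bit per column memory (the vertical byte format of FIG. 6A)."""
            for bit in range(8):                       # eight write sub-steps per byte row
                for col, value in enumerate(byte_row):
                    memories[col].append((value >> bit) & 1)

        store_byte_row([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0], memories)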
  • Data Output.
  • The results after the processing of a data matrix by the system 9 can be output through I/O shift registers 32 by means of a two-step process which is a reverse of the process mentioned above for data input, and is as follows. Step 1: First, controller 27 (FIG. 1) supplies the address of a desired first row of data bits to be output to memories 13a-13n, via lines 21, selector 18, and address lines 19, and causes the rows of bits of the specified memory addresses to be clocked up into the serial inputs of I/O shift registers 32. Controller 27 proceeds in a similar manner by causing different addresses (e.g., addresses of adjacent, successive rows of data bits) to be supplied to memories 13a-13n until all eight bits of the desired first row of data bytes are output from memories 13a-13n into shift registers 32 via lines 12a-12n. Step 2: Next, controller 27 clocks I/O shift registers 32 a total of n times, so as to cause the eight bits of data to be shifted to the east via lines 21b-21p, so that the entire first row of data bytes consisting of eight rows of data bits enters output device 22 via lines 21p. The above two-step process is repeated until all desired rows of the data matrix are transmitted from memories 13a-13n to data output device 22.
  • Data from memory 13e can be delivered to the host data bus line 15e by activating or enabling three-state gate 78 with a control signal received by gate 78 from line CON7. In like manner, data can be read or written directly between the host computer 25 and any of the memories 13a-13n.
  • Now referring to FIG. 3, a detailed schematic diagram of a processor cell 31, which has a number of logic gates, flip-flops, selectors, and multiplexers, is shown. External command signals are depicted by and received on the lines labeled with the prefix "CMD"; clock signals are similarly depicted and received on the lines labeled with the prefix "CLK." (The other external signal lines are designated with reference numerals consistent with those used in the other Figures.) Processor cell 31 can be placed in various functional states depending upon the combination of command signals it receives. This enables processor cell 31 to perform a wide variety of processing functions, each of which will now be explained in detail.
  • NEIGHBORHOOD OPERATIONS
  • It is commonly known that neighborhood processing is the transformation of an entire matrix of numbers or elements, wherein the transformation of each element of the matrix involves a function which uses the nearby neighbors of the element as independent variables. In order to perform neighborhood processing operations using the system 9, three steps are required, as follows. First, data must be read from the associated memories 13a-13n into processing units 10a-10n, which units are each provided with enough on-board storage to hold nearest neighbor data in the horizontal and vertical directions for the element to be transformed that is currently associated with each processing unit. Secondly, the processing unit computes a transformation of the neighboring data according to some specific instruction, thereby modifying the data. Thirdly, the modified data must be written back into associated memories 13a-13n. These three steps are respectively called the read subcycle, the modify subcycle, and the write subcycle. This sequence of three steps is called a read-modify-write cycle, and may be repeated many times in order to completely process all the data according to some specified algorithm.
  • Initial Read Operations.
  • During a computation cycle involving nearest neighbor operations, a first read operation causes a complete line of single bit data to be read from memories 13a-13n via lines 12a-12n into processing units 10a-10n depicted in FIG. 1. The data being read in is loaded into flip-flops in the processing units for temporary storage, under the control of clock signal CLK4 applied thereto, of which south flip-flop 81 depicted in FIG. 3 is typical. The data therein each correspond to a single bit in a first row of bits of the matrix stored in the memories 13a-13n. A second read operation of an adjacent row of bits, which operation includes another clock signal on line CLK4, causes the first bit stored in flip-flop 81 to shift up into middle flip-flop 82, while the second bit of data now occupies flip-flop 81. Any read operations thereafter are called read subcycles. The initial read operations are often referred to as "filling the pipeline".
  • Read Subcycle.
  • A third read operation which accesses the next adjacent row of data, in a like manner causes a further shifting of data so that the group of flip-flops 81, 82 and 83 contain bits from three adjacent rows of single bit data. Further read subcycles will cause the next set of three adjacent bits of data to occupy this group of flip-flops. Accordingly, the flip-flops 81-83 will contain a set of nearest neighbor data for a specific row in the north and south directions. As should be apparent to those in the art, neighboring processing cells 31 on the immediate left and right of the processing cell 31 depicted in FIG. 3 contain data bits which correspond to the east and west neighbors of the FIG. 3 processor cell, and output line 36 from the middle flip-flop 82 provides the east and west neighbor states for the processor cells adjacent to the FIG. 3 cell on the right and left respectively.
  • Modify Subcycle.
  • For neighborhood operations, selector 85 is instructed by signals received on command lines CMD1 to pass the west neighbor signal on line 35 to output line 86, which delivers the signal to a first address input of carry (c) and sum (s) multiplexers 87 and 88. At the same time, selector 89 is instructed by signals received on command line CMD2 to pass the east neighbor signal on line 39 to output line 90, which delivers the signal to a second address input of multiplexers 87 and 88. The logic state of the south neighbor is output by flip-flop 81 on line 91 and is delivered thereby to a third address input of multiplexers 87 and 88. Command line CMD3 is set to a logic "1" so that, due to well-known properties of AND gates, AND gate 92 effectively transfers onto line 94 the signal on line 93, which is the logic state of the north neighbor output by north flip-flop 83, where it is delivered to the address input of multiplexer 95. If line 94 connected to the address input of multiplexer 95 is a logic "1" level, multiplexer 95 passes the signal on line 96 (which is the output of multiplexer 87) to its output port and line 98. If this address input is logic "0", multiplexer 95 passes the signal on line 97 (which is the output of multiplexer 88) to its output port and line 98. Those skilled in the art will recognize that the arrangement of multiplexers 87, 88 and 95 illustrated in FIG. 3 forms a two-level multiplexer, with multiplexers 87 and 88 being the first level and multiplexer 95 being the second level. The collective action of this two-level multiplexer is that of a "truth table" having sixteen possible states. The logic values of this truth table are derived from the states of command line inputs CMD4 and CMD5, which each contain eight lines. The particular command line input which is chosen as the output in multiplexer 87 and in multiplexer 88 is determined by the state of the address inputs thereof. Since the addresses provided to the address inputs of multiplexers 87, 88 and 95 are derived from signals depicting the states of the north, south, east and west neighbors relative to the state of middle flip-flop 82, it is apparent that the output signal on line 98 represents a general truth table transformation of the foregoing neighborhood of logic states.
  • The transformation signal 98 and middle cell output 36 are coupled to two address inputs of multiplexer 100. The logic states of these two address inputs determine which one of four input signals received by multiplexer 100 on command lines CMD6 is selected, and provided by multiplexer 100 as an output on line 101. Multiplexer 100 thus acts as a truth table transformation of the middle cell 36 and the neighborhood transformation result on line 98. Line 101 and middle cell output 36 are input to selector 102, whose selection operation is controlled by the state of the input signal provided on line 103. A logic "0" on line 103 will cause selection and passing of the signal on line 101, while a logic "1" thereon will cause selection of the signal from the middle cell on line 36. The output of selector 102 is called the function output, and its logic value is transferred to or imposed on line 104, for delivery to "function" flip-flop 105, where it is latched therein upon activation of clock signal CLK5.
  • A signal on memory data line 12e is latched into condition flip-flop 80 upon activation of clock signal CLK6. Output line 106 connected to and bearing the output or state of flip-flop 80 is coupled to AND gate 107. Command line CMD7 is also coupled to AND gate 107. The condition flip-flop 80, AND gate 107 and selector 102 collectively form a conditional enable circuit, where the state of said condition flip-flop 80 controls whether the function flip-flop 105 will latch the state of the function output of selector 102 as determined by line 101 or use the state of the middle cell as received via line 36, which represents the untransformed state. This conditional enable operation thus provides a means for selectively allowing some processing cells 31 to obey neighborhood transformation instructions received on command lines CMD1-CMD6 while allowing other cells 31 to effectively ignore such transformation instructions. The above conditional operation of the processing cells 31 can be deactivated by a logic "0" command on command line CMD7.
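  • The modify subcycle described above can be summarised by a short functional model. The Python sketch below is illustrative only; the ordering of the address bits applied to multiplexers 87, 88 and 100 is an assumption, and the function name modify_subcycle is hypothetical.

        def modify_subcycle(n, s, e, w, mid, cond, cmd4, cmd5, cmd6, cmd7=1):
            """Functional model of one processor cell during a modify subcycle (FIG. 3).
            cmd4 and cmd5 are the 8-bit truth tables fed to multiplexers 87 and 88,
            cmd6 is the 4-bit table fed to multiplexer 100, cond is condition flip-flop 80."""
            idx = (s << 2) | (e << 1) | w        # assumed ordering of the three address bits
            level1 = (cmd4 >> idx) & 1 if n else (cmd5 >> idx) & 1   # mux 95 selects on north
            m_idx = (mid << 1) | level1          # assumed ordering of the mux 100 address bits
            function = (cmd6 >> m_idx) & 1       # signal on line 101
            # Selector 102: when the condition flip-flop is 1 (and CMD7 is 1) the cell
            # keeps its untransformed middle value instead of the transformation result.
            if cmd7 and cond:
                function = mid
            return function                      # value latched into function flip-flop 105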
  • Write Subcycle.
  • Referring to FIG. 2, the function output state on line 72 is selected by output selector 33 according to instructions on command line CON5, and passed thereby through gate 76 to memory data line 12e where it is then written into memory 13e.
  • BOOLEAN OPERATIONS
  • Boolean operations (e.g., combinational logic operations) are functionally similar to neighborhood operations, with the major difference being that any (arbitrary) lines of data bits may be written to flip-flops 81, 82, 83 and 84 of cell 31 illustrated in FIG. 3, and not just consecutive data bits from adjacent rows of bits in the data matrix, as is required to perform neighborhood operations. Boolean operations are performed by the system 9 in the following manner. According to some specified algorithm, controller 27 of FIG. 1 addresses memories 13a-13n and causes four rows of bit data to be read successively while clocking lines CLK4 so that flip-flops 81-84 (which are connected as a four-stage shift register) of the processor cells 31 receive and hold the four rows of data. Thereafter, each of the cells 31 is configured and operates in the following manner. Selector 85 is instructed by signals on command lines CMD1 to pass the middle cell state on line 36 to output line 86 and first address input of multiplexers 87 and 88, while selector 89 is instructed by signals on command lines CMD2 to pass the output of the X flip-flop signal on line 109 to its output and line 90, which leads to the second address input of multiplexers 87 and 88. CMD3 is set to a logic "1". In a manner analogous to but different from the neighborhood operations, this new configuration of multiplexers 87, 88, and 95 collectively forms a general truth table transformation of the four states in flip-flops 81-84. Instructions from command signals CMD6 set the M multiplexer 100 so that it will pass only the state of input 98 to output line 101. The condition flip-flop 80, AND gate 107 and selector 102 collectively form a conditional enable in a similar manner to that in the neighborhood transformations. The resulting Boolean function output state of selector 102 is latched into flip-flop 105 and thereafter written back to memory 13e in the manner described with respect to the neighborhood operations.
  • Those in the art will readily appreciate from the foregoing that the above-mentioned control and command operations allow the processor cells 31 to perform arbitrary truth table transformations on a set of four arbitrary rows of data bits, based upon a truth table established by the state of command signals CMD3, CMD4 and CMD5.
  • Boolean Operations with Accumulator.
  • If selector 85 is instructed by command signals CMD1 to pass the logic state of accumulator output 52 to output line 86, then the Boolean transformation will involve the accumulator data in the operation. This configuration results in more processor flexibility and higher speed in some types of operations.
  • ARITHMETIC OPERATIONS
  • Bit Serial Arithmetic.
  • To perform a computation cycle involving bit serial arithmetic, the CLK7 command coupled to the reset input of carry flip-flop 114 is momentarily activated (i.e., pulsed) so that the logic state therein is set to zero. Next, a line of data corresponding to the least significant bit of a first data word is read from memories 13a-13n and clocked into south flip-flop 81 in a manner similar to that for neighborhood or Boolean operations. Next, a least significant bit of a second data word is read from the memories, with the result that middle flip-flop 82 contains the state of the first bit and south flip-flop 81 contains the state of the second bit. Selector 85 is instructed by signals on command lines CMD1 to pass the state of middle flip-flop 82 output on line 36 to output line 86 and first address inputs of multiplexers 87 and 88, while selector 89 is instructed by signals on command lines CMD2 to pass the carry signal on line 115 (from the output of the carry flip-flop 114) to line 90 and the second address inputs of multiplexers 87 and 88. Output line 91 of south flip-flop 81 is the third address input to multiplexers 87 and 88. The states of command lines CMD5 are set in such a manner that "S" multiplexer 88 gives the truth table for a one bit sum or addition of the input values stored in flip-flops 81 and 82 and carry input value stored in flip-flop 114. Similarly, the states of command lines CMD4 are set in such a manner that "C" multiplexer 87 acts as a truth table for a carry propagate value for the three input values onto line 96. The resultant carry propagate value and addition value are respectively stored in carry flip-flop 114 and function flip-flop 105 by activation of respective clock signals CLK5 and CLK8. The state of function flip-flop 105 is output on line 72 and can be read back to memory 13 as is done in neighborhood operations.
  • Next, the next least significant bits from the same first data word and second data word are read into flip-flops 81 and 82 and are processed in a manner identical to the above. New arithmetic sum values are generated and written to memories 13a-13n, and new carry propagate values are generated and stored in flip-flop 114. By repeating the foregoing process steps, two (or more) data words having any arbitrary number of bits can be added together. Also, conditional arithmetic operations can be readily provided through use of conditional flip-flop 80, as described earlier, in conjunction with the aforementioned serial arithmetic procedures.
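  • For reference, the bit-serial procedure above behaves like the following Python sketch, which adds two words supplied least-significant-bit first; it is a software paraphrase only, and the variable names are illustrative.

        def bit_serial_add(word_a_bits, word_b_bits):
            """Add two words given as lists of 0/1 bits, least significant bit first,
            the way flip-flops 81/82 and carry flip-flop 114 process them."""
            carry = 0                             # CLK7 reset of carry flip-flop 114
            sum_bits = []
            for a, b in zip(word_a_bits, word_b_bits):
                sum_bits.append(a ^ b ^ carry)    # "S" multiplexer 88 truth table
                carry = (a & b) | (a & carry) | (b & carry)   # "C" multiplexer 87 truth table
            sum_bits.append(carry)                # keep the final carry as an extra bit plane
            return sum_bits

        print(bit_serial_add([1, 0, 1], [1, 1, 0]))   # 5 + 3 -> [0, 0, 0, 1], i.e. 8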
  • Bit Serial Arithmetic with Accumulator.
  • Selector 85 may be instructed by signals on command lines CMD1 to pass the logic state of accumulator output line 52 therein to output line 86 of selector 85, and then the resulting arithmetic operation will involve the addition of bits of data words written into flip-flop 81 from memory, with words previously stored in the accumulator 51 and received via line 52.
  • Parallel Arithmetic.
  • If, during arithmetic operations, selector 89 is instructed by signals on command lines CMD2 to pass the logic state of the carry-in signal on line 34 to output line 90 of selector 89, then the carry input for processor cell 31 will be obtained from the processing cell 31 to the immediate left via line 34 and the carry output will propagate to the adjacent processing cell to the immediate right via line 37. Thus, in order to perform correct parallel arithmetic operations, the data words have to be arranged in the memories and processing units such that successively significant bits are contiguous in the horizontal direction, where the most significant bits are toward the right. To perform a computation cycle involving parallel arithmetic, a first row of data bits is read from memories 13a-13n with each bit being clocked into south flip-flops 81. Next, a second row of data bits is read from memories 13a-13n, with each bit being clocked into flip-flops 81, and wherein the first data bits are clocked into middle flip-flop 82. As in bit serial arithmetic, sum and carry signals are computed; however, the carry signals will propagate to the right, and the sum of the data bits comprising the data word will be stable shortly after the second row of data bits is read from memory. As in bit serial arithmetic, the resultant sum in each cell 31 is then clocked into its function flip-flop 105 and can be written to memory. It is also apparent that conditional parallel arithmetic operations and parallel arithmetic operations involving data from accumulators 51 are provided in a manner similar to that for bit serial arithmetic. When performing parallel arithmetic operations, care must be used in writing algorithms in order to avoid overflow, so that carry signals will not accidentally propagate from one data word to the next, since many data words are on the same line of bits.
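  • The parallel mode can be modelled in the same spirit. In the sketch below each list element is the bit held by one cell, with a word's least significant bit to the west (lower index); as in the hardware, nothing stops a carry from rippling from one word into the next, which is why the paragraph above cautions against overflow. The function name is illustrative only.

        def parallel_add(row_a, row_b):
            """Horizontal ripple-carry addition across a row of adjacent cells."""
            out, carry = [], 0                    # westmost cell receives a carry-in of 0
            for a, b in zip(row_a, row_b):
                out.append(a ^ b ^ carry)         # sum latched in function flip-flop 105
                carry = (a & b) | (a & carry) | (b & carry)   # carry-out line 37, to the east
            return out

        # 3 + 5 laid out LSB-west: [1,1,0,0] + [1,0,1,0] -> [0,0,0,1], i.e. 8
        print(parallel_add([1, 1, 0, 0], [1, 0, 1, 0]))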
  • ACCUMULATOR
  • As best shown in FIG. 2, the accumulator 51 receives serial input signals from processor cell 31 via line 40, and from memory 13e via memory data line 12e. The serial input on line 40 is provided, as shown in FIG. 3, from accumulator input selector 116, which is instructed by signals on the command lines CMD8 to select either the memory signal on line 12e, or the function signal on line 104, or the accumulator output signal on line 52. The state of the selected input signal is passed through selector 116 to its output and line 40, which is coupled to accumulator 54 shown in FIG. 2.
  • Now referring to FIG. 4, a diagram of accumulator high byte register 54 is shown within the large dashed rectangle, and eight one bit accumulator units 120a-120h are provided therein. A detailed schematic of one of the units, namely one bit accumulator 120a, is depicted in the smaller dotted rectangle. The other one bit accumulators 120b-120h are identical in construction to accumulator 120a, and thus need not be shown. Also, it should be understood that the lower section of accumulator 51, namely the accumulator low byte register 55 shown in FIG. 2, is identical in internal construction to the register 54. Returning to FIG. 4, the flip-flop 122 therein stores the value of one bit of the eight bit word stored in accumulator section 54. One function performed by accumulator section 54 is the incrementing of the value of the word stored therein. Using the well-understood properties of exclusive OR gates, output 123 from exclusive OR gate 124 contains the value of the incremented bit therein, whereas the inputs thereof are line 41a (which provides the value of the bit stored in flip-flop 122), and the first carry input connected to line 53 (which is the last carry line from the lower byte accumulator section 55). Using a commonly known method, a carry propagate function is formed by AND gate 125, and its output on line 126 contains the carry-out signal which is provided as the carry-in signal to the next one bit accumulator 120b. Selector 127 is instructed by signals on control lines CON2 to pass as its output 129 the value of a selected one of four input signals, which are: (1) the value of the bit in accumulator unit 120b provided via line 128, (2) the value of a corresponding accumulator bit to the west provided via line 45, (3) the incremented value provided via line 123, and (4) the value on line 42a from the corresponding accumulator bit in an adjacent accumulator 54 located to the east. The selected input signal is output to the accumulator flip-flop 122 via line 129, so that the selected value will be stored therein upon activation of clock signal CLK1.
  • From the above description, it should be readily apparent that accumulator section 54 can, upon receipt of the appropriate control signals, perform the following four functions: (1) increment the eight bit value stored therein by one or zero, depending on the state of its first carry input; (2) parallel shift all eight bit values stored therein east; (3) parallel shift all eight bit values stored therein west; (4) serial shift the eight bits stored therein down, with the serial shift input value provided to high-order one bit accumulator 120h by input line 40 being selected from various sources in the processor cell 31 (see FIG. 3).
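  • Two of these functions, incrementing and the serial down-shift, are captured in the small Python model below; the parallel east and west shifts simply exchange whole bytes with the neighbouring units and are omitted here. The class and method names are illustrative only.

        class AccumByte:
            """Functional model of one eight-bit accumulator section (FIG. 4)."""
            def __init__(self, value=0):
                self.load(value)

            def load(self, value):
                self.bits = [(value >> i) & 1 for i in range(8)]   # bits[0] is unit 120a

            def increment(self, carry_in):
                """Function (1): add the carry-in bit (line 53) to the stored byte."""
                total = sum(b << i for i, b in enumerate(self.bits)) + carry_in
                self.load(total & 0xFF)
                return total >> 8                                  # carry out of unit 120h

            def shift_serial(self, bit_in):
                """Function (4): shift down one place; line 40 feeds the high-order stage."""
                bit_out = self.bits.pop(0)
                self.bits.append(bit_in)
                return bit_out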
  • Referring to FIG. 5, processor units 10a-10n are logically arranged in groups of eight units, as further illustrated by the representative group within the dashed rectangle 130, for three reasons: (1) the host computer data lines 15 are capable of reading eight bits at a time; (2) the memory associated with the groups 130 is provided most economically in the form of a byte wide memory 132; and (3) further internal functions are best handled in eight bit sizes. Eight processor units 30a-30h are shown with accumulator inputs and outputs 45a, 41a, 47a and 43a on the left and 41h-44h on the right. Processor units 30a-30h are coupled to the memory 132 via eight data lines 12a-12h. Memory data lines 12a-12h are connected as a first input to accumulator left input selector 135, and the low byte input 134 of an adjacent accumulator is connected to a second input of selector 135. Instructions received on control line CON8 coupled to accumulator left input selector 135 will select either the first or second input, and transfer the selected signals along input lines 47a to the accumulator low byte register 55 (see FIG. 2). The eight output bits of accumulator low byte register 55h within processor unit 30h, which are output on lines 43h on the east side of group 130, are also respectively connected to the transpose inputs 74a-74h of the processors 30a-30h. Each such transpose input is coupled to its output selector 33 (see FIG. 2) so that if its selector 33 is instructed by signals on lines CON5 to pass the transpose signal 74, then memory 13e will store the accumulator bits from accumulator register 55h therein.
  • Referring to FIG. 6A, an 8x8 grid 140 of bit values is shown. The grid 140 represents a small subarray of vertical bytes as they might be stored in memory 13, where A0-A7, B0-B7, C0-C7, etc., each represent a typical byte. There are two means of transposing data between the memory and accumulator, respectively called transpose in and transpose out, which will now be explained. Transpose in: The command signal on line CON8 (see FIG. 5) is set to cause left input selector 135 to select the output from memory data lines 12a-12h for delivery via lines 47a to processor unit 30a. If the eight (horizontal) rows of the grid 140 of values are read from the memory 13 with most significant bits read first, and the eight adjacent accumulator sections 55a-55h within processing units 30a-30h are commanded to clock east in synchronization with the memory read instructions, then the data from grid 140 will be stored horizontally in these adjacent accumulator sections in the transposed form as shown in the grid 142 of FIG. 6B. Transpose out: If the data in the eight adjacent accumulator sections 55a-55h are stored as represented in grid 140 of FIG. 6A, and if output selector 33 (see FIG. 2) of each processor unit 30a-30h is commanded to select transpose inputs 74, then the data array of grid 140 will be stored in memory as shown in grid 142 of FIG. 6B, after eight memory writes synchronized with eight accumulator east shifts have been performed.
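  • Functionally, the transpose-in and transpose-out operations amount to an ordinary matrix transposition of an 8x8 block of bits, as the short Python check below illustrates; the row/column orientation chosen here is an assumption made only for the illustration.

        def transpose8x8(block):
            """block[r][c] is the bit in row r, column c of grid 140 (FIG. 6A);
            the result corresponds to grid 142 of FIG. 6B."""
            return [[block[c][r] for c in range(8)] for r in range(8)]

        # A quick check: element (r, c) of the original ends up at (c, r).
        grid_140 = [[1 if c > r else 0 for c in range(8)] for r in range(8)]
        grid_142 = transpose8x8(grid_140)
        assert grid_142[5][2] == grid_140[2][5]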
  • Eight parallel outputs of the accumulator low byte register 55h on the extreme right of FIG. 5 are also connected, via eight lines 43h, as a first set of inputs to low address selector 131. The eight low address byte lines 14 (see FIG. 1) are connected as a second set of inputs to selector 131. Instructions received on control line CON9 can set selector 131 to pass either accumulator output signals from the lines 43h, or the low address byte signals on lines 14, to lines 28, which are the eight least significant bits of an address delivered to memories 13a-13h located in byte-wide memory 132.
  • In light of the foregoing, it should be appreciated by those skilled in the art that controller 27 (see FIG. 1) can address memories 13a-13n, and load numbers into the eight accumulator sections of any desired group 130 of eight processor units within processor units 30a-30n, and then use the loaded numbers in the east accumulator section 55h of the selected eight-processor-unit group 130 to address the memories 13a-13n again. This kind of function is commonly known as indirect addressing. The data in the accumulator sections of group 130 can be shifted east and the indirect addressing can be repeated for the newly shifted values until all eight accumulators in the group have furnished indirect addresses to perform desired processing. Applications of this technique will be described in later sections.
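  • A minimal software model of this indirect addressing loop is given below. The representation of the memories as Python lists, and the helper name indirect_read, are assumptions made for illustration; the essential point is that the eastmost accumulator byte supplies the low address byte for all eight memories of the group on each pass.

        def indirect_read(memories, accumulators, high_addr):
            """memories: eight column memories, each a list of bits indexed by address;
            accumulators: eight bytes loaded into low byte accumulators 55a-55h
            (index -1 is the eastmost, 55h)."""
            results = []
            for _ in range(8):                          # one pass per loaded byte
                low_addr = accumulators[-1]             # lines 43h feed low address selector 131
                addr = (high_addr << 8) | low_addr
                results.append([mem[addr] for mem in memories])
                accumulators = [0] + accumulators[:-1]  # shift the accumulator group east
            return results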
  • Command input selector 133 in FIG. 5 can be instructed by signals received on control line CON10 to pass the states of either the low address byte 14 or the byte from memories 13a-13h received via lines 12a-12h to command storage registers 137 via lines 138. Upon activation of a clock signal on line CLK9, data on input lines 138 will be latched into command storage registers 137. The command storage module 137 preferably is a four stage shift register which can store four bytes therein. Four clock cycles on line CLK9 are thus required to completely transfer a new set of four bytes of commands into the command storage register 137. The four bytes of data stored within command storage register 137 are output via command lines CMD1-CMD8 and serve as the multitude of command signals shown in and used to operate the processor cell 31 of FIG. 3.
  • From the foregoing, it should be clear that data words for command storage module 137 for processing units 30a-30h of group 130 can be obtained either from the low address byte lines 14 directly, or from memories 13a-13h, in which case the low address byte lines furnish an address to memories 13a-13h via low address selector 131 and line 28. By means of the latter method, different groups 130 of processing units can simultaneously access different locations within memory 13, i.e., their respective assigned portions of memory 13, and receive different commands previously stored in memory 13.
  • APPLICATIONS
  • Applications of the novel features of this invention, including the group of eight interconnected accumulators 55a-55h of group 130, transpose operations, and indirect addressing, will now be given. In the following discussion of these three applications, a detailed description of the precise commands and signal flows need not be presented since these rudimentary details have already been covered above, or may be very readily understood by referencing the above presentation.
  • Look-Up-Table.
  • A look-up table (LUT) is commonly used where each data element in a data array or matrix is to be transformed according to a very complex rule. Ordinarily it would be very time consuming to make a computation in accordance with such a rule for every data element. But if the computation was made once, off line, for each possible data value of the combination of independent variables or inputs, and the results stored as horizontal bytes in the memories 13a-13n, then the processor units 10a-10n only need look up that value from the stored LUT array for each data point. As an example of one implementation of the foregoing technique, consider the following method illustrated in FIGS. 7A-7C. Assume a desired LUT is stored in memory 13 in a horizontal format as in FIG. 6B, and the data to be transformed in accordance with the entries in the LUT are stored in memories 13a-13h in vertical format as in FIG. 6A. The first step is to read eight such vertically stored data bytes, represented by the eight bit x eight bit segment 148 of the memories 13a-13h, into an 8x8 group 150 of low byte accumulators 55 while clocking the accumulators 55 "down", as depicted in FIG. 7A by signal flow lines 152 and arrow 153. To do this, the accumulator input selectors 116 of the processor cells 31 within processor units 30a-30h need to pass the data from memory segment 148 of memories 13a-13h on lines 12a-12h to lines 40a-40h (see FIGS. 1 and 2). Eight clock cycles are required. Thereafter, during the second step, the data bytes stored in group 150 are used as indirect addresses which address the memory space 154 in memories 13a-13h where the LUT is stored, as depicted in FIG. 7B by signal flow lines 156. The number of rows in memory space 154 equals the number of rows in the LUT. The data byte transformed according to the LUT in space 154 is read out of memory as indicated by signal flow lines 158 and shifted east as indicated by arrow 159 into the accumulator group 150. The extreme right data byte is lost during the shift east and the next right-most data element occupies the extreme right position in group 150. Eight clock cycles are required to process all eight bytes during this second step. During these eight clock cycles, the data transformed by referencing entries in the LUT 154 is clocked east into the extreme left accumulator 55a of group 150 via input lines 47a, and in the process the data are transposed to the vertical format and held in group 150. Finally, as depicted in FIG. 7C by signal flow lines 160, the new transformed values held in group 150 are written back into memories 13a-13h in the vertical format. Eight more clock cycles are required in order to have the accumulators 55 of group 150 shift down fully, thus storing all eight transformed values. Typically, the transformed values are stored back into memory segment 148, but if desired, can be stored at a different segment within the memories 13a-13h.
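  • The three LUT steps of FIGS. 7A-7C reduce, in software terms, to the sketch below, which works on whole integer bytes instead of bit-serial hardware; the function name lut_transform and the byte-wise memory model are assumptions for illustration only.

        def lut_transform(data_bytes, lut):
            """Step 1: eight data bytes are assumed already loaded into accumulator group 150.
            Step 2: each byte, eastmost first, indirectly addresses the LUT segment 154.
            Step 3: the transformed bytes are returned for writing back in vertical format."""
            group = list(data_bytes)                    # west-to-east contents of group 150
            out = []
            for _ in range(8):
                value = group[-1]                       # eastmost byte supplies the address
                out.insert(0, lut[value])               # transformed byte enters from the west
                group = [0] + group[:-1]                # accumulator group shifts east
            return out

        invert = [255 - v for v in range(256)]          # example LUT
        print(lut_transform([0, 1, 2, 3, 250, 251, 252, 253], invert))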
  • Histogram.
  • A histogram is a count of the number of times that each possible value of a group of data values occurs in an entire data array, and is another operation which can be advantageously implemented by using the transpose and indirect addressing features of my invention. A preferred technique for generating a histogram using the system 9 of my invention is as follows. First, the area in memories 13a-13n where the various counts associated with the histogram are to be accumulated is zeroed out. Assume for the sake of this example that the data is in the vertical format in an array 170 in memories 13a-13h shown in FIG. 8A and the histogram count values will be accumulated in the horizontal format. A group of eight data bytes are loaded (serially downshifted) into the low byte accumulator group 150 as shown in FIG. 8A by arrow 171 and signal flow 172. The data value in the extreme right accumulator 55h of group 150 serves as an indirect address, as indicated in FIG. 8B by signal flow 173, to the memory location (e.g., row) in memory segment 177 of memory 13 containing the count value for that particular data value. The count value is to be incremented, and is thus loaded into the eight-bit-wide processing cell 175, as depicted in FIG. 8B by signal flow 174. Processing cell 175 may be composed of the processor cells 31a-31h in processor units 10a-10h, for example. The count in the processing cell 175 is then incremented using the horizontal arithmetic mode of the processing cell, as indicated in FIG. 8B. Using indirect addressing again, the incremented value is returned to the same memory location (row) from which it came, as depicted in FIG. 8C by signal flow lines 176 and 178. At the same time the accumulator 150 is shifted east to get ready for the next count of the next data value. The incrementing process illustrated in FIG. 8B and FIG. 8C occurs a total of eight times to count all the data loaded into the group 150 of eight accumulators during the step of FIG. 8A. All rows in the data matrix, which spans, for example, several sets of eight-bit-wide columns in the memories 13a-13n, are processed concurrently in a similar manner. Finally, after all rows within each set of columns have been processed, the several histograms, one for each group of eight columns, can be consolidated. Moreover, if a vertical format is needed, they can be transposed.
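  • Numerically, one pass of FIGS. 8A-8C is equivalent to the small Python sketch below, where the count table stands in for memory segment 177 and the loop body stands in for the indirect read, horizontal increment and indirect write; the names are illustrative only.

        def histogram_pass(counts, data_bytes):
            """Count one group of eight data bytes into the histogram table."""
            for value in reversed(data_bytes):   # eastmost accumulator value is used first
                counts[value] += 1               # indirect read, increment in cell 175, write back
            return counts

        counts = [0] * 256                       # histogram segment, zeroed beforehand
        histogram_pass(counts, [7, 7, 3, 0, 7, 3, 1, 7])
        print(counts[:8])                        # -> [1, 1, 0, 2, 0, 0, 0, 4]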
  • Accumulation.
  • FIGS. 9A-9C depict data flow paths for an example of accumulators being used as accumulators, where sixteen bit numbers are added to values already within the accumulators. In FIG. 9A, data, namely 8 sixteen-bit values in vertical format, is read from eight-bit-wide, sixteen row memory segment 188 of memories 13a-13h in bit serial form to the processor cell 175. At the same time the sixteen-bit accumulator 190 is clocked as indicated by arrows 194 down and also read into the processing cell 175 as indicated by upward signal flow path 195, wherein the two data inputs are added. Accumulator 190 is formed from an eight-bit-wide by eight-bit-high accumulator section 192 (which may be the accumulator high-byte registers 54a-54h of eight adjacent processor units 30a-30h), and a corresponding 8x8 low byte accumulator group or section 150. The sum produced in cell 175 is read back into the accumulator 190 while it is shifting down. FIG. 9B illustrates that the accumulator 190 may be optionally shifted east or west if the next data value to be summed is in a different column of memory segment 188. The two phases or sequence of steps depicted in FIGS. 9A and 9B are repeated as many times as are needed to complete the desired summation of various data from near (or distant) neighbors. The shifts in either the east or west direction may be used to carry values or partial sums any arbitrary distance along the array of processing units 10a-10n; thus, the accumulation function is not confined to being performed within a given memory segment such as segment 188. This accumulation technique illustrated by FIGS. 9A and 9B will handle convolutions, or sums with various multiplication factors by using the familiar multiplication technique of "shift and add" well-known to those skilled in the art. When all desired data addition cycles are completed, the contents of the accumulator 190 are stored in memories 13a-13n by shifting the accumulator down as depicted in FIG. 9C by arrows 196. If desired, zeros may be shifted into accumulator 190 behind the outgoing data from processor cell 175. By shifting in zeros, the accumulator 190 is now ready to process another row of the data matrix.
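  • The arithmetic performed by the cycles of FIGS. 9A and 9B is, in effect, a weighted sum accumulated sixteen bits wide, with each multiplication carried out by shift-and-add; the Python sketch below shows that arithmetic only, with illustrative names, and ignores the east/west shifting that selects which column contributes next.

        def accumulate(values, weights):
            """Sum value*weight products into one sixteen-bit accumulator using shift-and-add."""
            acc = 0
            for value, weight in zip(values, weights):
                shift = 0
                while weight:                    # classic shift-and-add multiplication
                    if weight & 1:
                        acc += value << shift
                    weight >>= 1
                    shift += 1
            return acc & 0xFFFF                  # accumulator 190 holds sixteen bits

        print(accumulate([10, 20, 30], [1, 2, 1]))   # 10 + 40 + 30 = 80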
  • Counting.
  • FIGS. 10A and 10B depict a method of counting the number of bits in selected columns of a data matrix stored in memory using the system 9 of the present invention. The combined low byte and high byte accumulator group 190 is placed in the increment mode, as suggested in FIG. 10A. The accumulator group 190 is clocked while a byte-wide memory segment 200 of memories 13a-13h is read out, and a logic one bit in the data therein will increment the accumulator 190, whereas a logic zero bit therein will not alter the contents of the accumulator 190. Note that the segment 200 may contain an arbitrary number of rows. After all rows of data are processed, each one bit accumulator 51a-51h within accumulator group 190 contains the sum of all logic "1" data bits in its associated data matrix column within memory segment 200. The sums within the one bit accumulators of group 190 are shifted down and written into another byte-wide memory segment 202 of memories 13a-13h as depicted in FIG. 10B. Note that the sums thus stored in memory segment 202 are in a vertical data format of the type as illustrated in FIG. 6A, but will be sixteen bits high instead of eight bits high.
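  • The counting mode is equivalent to summing the 1 bits in each column of the processed memory segment, as in the following Python sketch (illustrative names; each count stands for one sixteen-bit accumulator in group 190).

        def count_column_bits(rows):
            """rows[r][c] is the bit read from column c during memory row r."""
            counts = [0] * len(rows[0])          # one accumulator in increment mode per column
            for row in rows:
                for col, bit in enumerate(row):
                    if bit:                      # a 1 bit increments, a 0 bit leaves it alone
                        counts[col] += 1
            return counts                        # later shifted down into memory segment 202

        print(count_column_bits([[1, 0, 1], [1, 1, 0], [0, 1, 1], [1, 0, 0]]))   # -> [3, 2, 2]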
  • Add Constant.
  • Sixteen-bit constant numbers can be added to each element of a data matrix by first loading the numbers stored in vertical format within a 16 row x 8-bit-wide memory segment 188 into the accumulator 190 as depicted in FIG. 11A by signal flow lines 210. In general the various one-bit-wide, 16-bit high accumulators 54a-54h within the accumulator 190 can each contain a different number. Next, a bit serial addition occurs in the individual cells 31a-31h of processor cell group 175 in a sequence of two cycles. FIG. 11B depicts the first cycle where a row of least significant bits of the data matrix is read (as illustrated by signal flow 212) into the processing cell 175, while at the same time the accumulator 190 is shifted down once, and a row of eight bits is loaded into the processing cell 175 as indicated by signal flow 214. The row of accumulator output bits from the bottom 216 of accumulator 190 are recycled back to the inputs of the top 218 thereof. In the second cycle depicted in FIG. 11C, each resultant sum bit now in the individual cells 31a-31h of processing cell 175 is read into memory segment 188 as indicated by signal flow line 220. The cycles depicted in FIG. 11B and 11C are repeated for all remaining more significant bits in the data words stored in segment 188. After the sum is thereby completed, the bits in the accumulator 190 will have been completely recycled so that the numbers are registered as they were when they were originally loaded in FIG. 11A. The process can continue if desired in a like manner until all rows in the data matrix found in memory segments of memories 13a-13h other than segment 188 are processed.
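  • The net arithmetic effect of FIGS. 11A-11C is simply the addition of a fixed sixteen-bit constant to every word of its column, with the constants preserved for reuse on the next row; a plain Python statement of that effect is given below, with illustrative names only.

        def add_constants(matrix_rows, constants, width=16):
            """Add constants[c] to every data word in column c, modulo 2**width."""
            mask = (1 << width) - 1
            return [[(value + constants[c]) & mask for c, value in enumerate(row)]
                    for row in matrix_rows]

        print(add_constants([[1, 2, 3], [10, 20, 30]], [100, 200, 300]))
        # -> [[101, 202, 303], [110, 220, 330]]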
  • From the foregoing, it can be appreciated that the system and method of my invention, which employs a linear chain of selectively interconnectable parallel processing units described herein, not only provides for the reliable accomplishment of the objects of the invention, but does so in a particularly effective and economical manner. It is recognized, of course, that those skilled in the art may make various modifications or additions to the preferred embodiment chosen to illustrate the invention without departing from the spirit and scope of the present contribution to the art. Also, the correlative terms "row" and "column," "vertical" and "horizontal," "left" and "right," "east" and "west," "up" and "down," and the like are used herein to make the description and claims more readily understandable and are not meant to limit the scope of the invention. In this regard, those skilled in the art will readily appreciate such terms are often merely a matter of perspective and are interchangeable merely by altering one's perspective, e.g., rows become columns and vice-versa when one's view is rotated 90 degrees. Also, although the architecture of the preferred embodiments disclosed herein is based primarily upon data words having eight bits, and processing data in arrays of 8x8 bits or 16x8 bits, it should be appreciated that my invention described herein can be readily adapted to data words of other sizes, such as from data words as small as 2 bits to data words as large as 32 (or more) bits, and process data in groups of correspondingly smaller or larger arrays.

Claims (9)

  1. A processing system (9) for performing parallel processing operations upon data from a large array of data bits having L rows and at least M * h columns of data bits where L, M and h are integers greater than 1, and wherein the large array is subdivided into M subarrays for processing, where each subarray has L rows and only h columns, the system comprising:
    M groups (10a-10h, 10a-10h; 130, 130) of h individual processor units each, each processor unit including a processor cell (31) wherein all the processor cells within a single group of processor units are linked together by a plurality of connections (34, 35, 36, 37, 38), and wherein the M groups of processor units are linked together by a plurality of connections (11; 21) for transferring data between the M groups of processor units;
    M groups (13a-13h, 13a-13h; 132, 132) of h memory means (13a-13h), each memory means (13e; 13i) in the M groups of h memory means for storing L rows of one of the M * h columns of the data bits in the large array of data bits, each group of memory means in the M groups of memory means being respectively associated with a group of processor units in the M groups of processor units, each one of the memory means within a single group of memory means being respectively connected to a processor unit within the respectively associated group of processor units;
    each of the processor units further including -
    (1) an accumulator (51) for accumulating h data bits, wherein each accumulator comprises as data bit transfer lines a bit-serial input/output connected to the corresponding memory means, h parallel inputs and h parallel outputs connected to the corresponding outputs of the accumulator of the neighboring west and to the corresponding inputs of the accumulator of the neighboring east processing units, respectively, the data bit transfer lines allowing data bits to be shifted bi-directionally between the accumulator and the memory means and between the accumulator and either of the neighboring east and west processing units, and wherein the h parallel inputs of the accumulator of the westmost processor unit and the h parallel outputs of the accumulator of the eastmost processor unit within a single group of the h processor units are connected directly to the associated group of h memory means;
    (2) line selector means (33, 50, 76) coupled with the accumulator and memory means, and responsive to control signals for selecting the data bit transfer lines over which the data bits are to be transferred, such that the processor cell selectively operates as a bit-serial processor or as a part of an h-bit wide word parallel processor;
    whereby each accumulator may function as a bit-serial shift register and also perform a parallel shift in and shift out function of all h bits simultaneously, and whereby the format of a group of data bits can be transposed,
    the accumulators and line selector means cooperating for selectively allowing a group of processor units to operate in parallel upon data bits from the L rows of the M groups of h memory means and to operate in parallel upon data bits from any one of the h memory means of the M groups of h memory means.
  2. A system as in claim 1, wherein each accumulator includes h one-bit accumulators (120a, 120h) each for temporarily holding one bit of data therein, and wherein the h one-bit accumulators are serially linked together whereby the accumulator receives and temporarily holds h bits of data from the group of h memory means associated with the respective group of h processing units including this accumulator.
  3. A system as in claim 1, wherein each accumulator includes 2h one-bit accumulators serially linked together, whereby each accumulator receives and temporarily holds 2h bits of data from the group of h memory means associated with the respective group of h processing units including this accumulator.
  4. A system as in claim 1, 2 or 3, wherein: each group of h processing units includes input selector means (135) for selectively coupling the data connections between the processor units of its group and the group of memory means associated with its group of processor units to the accumulator of the westmost processor unit within its group, such that h bits of data may be transferred in parallel from the memory means (132) or processor units of its group to the parallel inputs of the accumulator of the westmost processor unit of the respective group, wherein h bits of data transferred from the memory means to the accumulator of the westmost processor unit of the respective group are stored in a row format that spans h columns of the array of memory means.
  5. A system as in any one of the foregoing claims, further characterized by:
       each of the processor units further including arithmetic means (31, 37, 81, 82, 85-91, 96, 105, 114, 115) for performing arithmetic operations upon data provided thereto, the arithmetic means including carry means (34, 37, 96, 114, 115) for transferring data corresponding to carries resulting from the arithmetic operations performed within the processor unit to an adjacent processor unit if any located in a common first direction along the group, such that the group of h processing units may be operated in parallel to perform arithmetic operations upon data that is provided in parallel to the h processor units.
  6. A system as in claim 5, wherein:
       in each processor unit (30), the arithmetic means form part of the processor cell (31) of the unit, are arranged to perform a one-bit addition operation, and include a pair of one-bit storage means (105, 114) for temporarily holding a one-bit sum and a carry bit if any resulting from the addition operation most recently performed,
       each processor unit (30) includes a carry-out connection (37, 34) between the one-bit storage means for holding the carry bit and the arithmetic means of the adjacent processor unit if any located in the first direction, and
       in each processor unit (30), the processor cell (31) includes a pair of one-bit storage means (81, 82) for temporarily holding a pair of bits successively transferred thereto from the memory means associated with the processor unit for processing by the arithmetic means.
  7. A system as in any one of the foregoing claims, further characterized by:
       means (43h, 55h, 130, 131) for indirectly addressing data within one group (13a-13h; 132) of the memory means.
  8. A system as in claim 7, wherein:
       the group of memory means includes address inputs (23; 28a-28n);
       the means for indirect addressing includes address selection means (131), coupled to the accumulator (55h) within a processor unit (30h) of the associated group of processor units, for selectively directing data in parallel from this accumulator to at least some of the address inputs of the group of memory means, and
       the group of processor units includes means (45, 127, 129, 122; 47, 43) for shifting data within the accumulator of each processor unit in one direction to the accumulator of an adjacent processor unit, if any.
  9. A method of transferring data between processor units and memory means in a processing system (9) for performing parallel processing operations upon data from a large array of data bits having L rows and at least M * h columns of data bits where L, M and h are integers greater than 1, and wherein the large array is subdivided into M subarrays for processing, where each subarray has L rows and only h columns, the method comprising the steps of:
    a) providing M groups (10a-10h, 10a-10h; 130, 130) of h individual processor units each, each processor unit including a processor cell (31) wherein all the processor cells within a single group of processor units are linked together by a plurality of connections (34, 35, 36, 37, 38), and wherein the M groups of processor units are linked together by a plurality of connections (11; 21);
    b) providing M groups (13a-13h, 13a-13h; 132, 132) of h memory means (13e; 13i), each memory means (13e; 13i) in the M groups of h memory means for storing L rows of one of the M * h columns of the data bits in the large array of data bits, each group of memory means in the M groups of memory means being respectively associated with a group of processor units in the M groups of processor units, each one of the memory means within a single group of memory means being respectively connected to a processor unit within the respectively associated group of processor units;
    c) providing an accumulator (51) in each of the h processor units for accumulating h data bits, wherein each accumulator comprises as data bit transfer lines a bit-serial input/output connected to the corresponding memory means, h parallel inputs and h parallel outputs connected to the corresponding outputs of the accumulator of the neighboring west and to the corresponding inputs of the accumulator of the neighboring east processing units, respectively, and wherein the h parallel inputs of the accumulator of the westmost processor unit and the h parallel outputs of the accumulator of the eastmost processor unit within a single group of the h processor units are connected directly to the associated group of h memory means;
    d) temporarily holding h-bits of data to be processed in each of the accumulators;
    e) storing the data in one format in the M groups of h memory means (13i);
    f) selecting data bit transfer lines such that the data bits may be transferred bi-directionally between each accumulator and the associated memory means or between adjacent accumulators, the selecting being performed so that each processor cell of the processor units of each of the M groups of h processor units operates as a bit-serial processor or as a part of an h-bit wide word parallel processor;
    g) selectively transferring a first group of data bits stored in a first format in one group of h memory means to the corresponding group of h processor units, wherein the transferring is performed
       serially in that each accumulator of this group of h processing units functions as a bit-serial shift register; or
       in parallel in that each accumulator of this group of h processing units performs a parallel shift in and shift out function of all h bits simultaneously, whereby the format of a group of data bits is transposed to a second format orthogonal to the first format,
       so that the accumulators and line selector means cooperate for selectively allowing a group of processor units to operate in parallel upon data bits from the L rows of the M groups of h memory means and to operate in parallel upon data bits from any one of the h memory means of the M groups of h memory means.
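
As an informal illustration of claim 1 and of step g) of claim 9, the sketch below (Python, behavioural only; the class name AccumulatorGroup and its methods serial_load and parallel_unload are hypothetical and merely model the data movement, not the disclosed circuitry) shows a group of h accumulators first loading bit-serially from their respective memory columns and then shifting their contents in parallel toward the east end, so that the words delivered at the eastmost unit's h parallel outputs appear in a format orthogonal to the one in which the bits were stored.

    # Behavioural Python sketch only; class and method names are hypothetical.
    class AccumulatorGroup:
        def __init__(self, h: int):
            self.h = h
            self.acc = [[0] * h for _ in range(h)]      # one h-bit accumulator per unit

        def serial_load(self, memory_columns):
            """Bit-serial mode: each accumulator shifts in one bit per cycle from
            its own memory column, for h cycles."""
            for cycle in range(self.h):
                for k in range(self.h):
                    self.acc[k] = self.acc[k][1:] + [memory_columns[k][cycle]]

        def parallel_unload(self):
            """Word-parallel mode: shift whole accumulators eastward; the h
            parallel outputs of the eastmost unit deliver one h-bit word per
            cycle, directly to the group of memory means."""
            words = []
            for _ in range(self.h):
                words.append(self.acc[-1])               # eastmost unit's outputs
                self.acc = [[0] * self.h] + self.acc[:-1]
            return words

    columns = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]          # h = 3 memory columns
    group = AccumulatorGroup(h=3)
    group.serial_load(columns)
    print(group.parallel_unload())  # the same bits delivered as h-bit words
                                    # (east-to-west order), i.e. the transposed format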
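
Similarly, the carry-propagation arrangement of claims 5 and 6 amounts, behaviourally, to a ripple-carry addition performed one bit per processor unit. The following sketch (the function name ripple_add is hypothetical) adds two h-bit words held with one bit in each of the h units, each unit's latched carry feeding its neighbour in the common first direction.

    # Behavioural Python sketch only; ripple_add is a hypothetical name.
    def ripple_add(a_bits, b_bits):
        """Add two h-bit words held one bit per processor unit, bits given
        least-significant first; each unit's carry feeds the next unit."""
        assert len(a_bits) == len(b_bits)
        sum_bits, carry = [], 0
        for a, b in zip(a_bits, b_bits):
            sum_bits.append(a ^ b ^ carry)                   # one-bit sum latched in the cell
            carry = (a & b) | (a & carry) | (b & carry)      # carry-out to the neighbouring unit
        return sum_bits, carry                               # carry leaving the end of the group

    # 5 + 3 with h = 4 processor units (bits least-significant first):
    print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))   # -> ([0, 0, 0, 1], 0), i.e. 8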
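
Finally, the indirect-addressing feature of claims 7 and 8 can be pictured as routing one designated accumulator's contents to the address inputs of the group's memory means. The sketch below (the function indirect_read and the flag use_indirect are hypothetical abstractions of the address selection means) shows how this permits data-dependent accesses such as a table look-up.

    # Behavioural Python sketch only; indirect_read and use_indirect are hypothetical.
    def indirect_read(memory, accumulator_bits, use_indirect=True, direct_address=0):
        """Return the word at either a controller-supplied address or an address
        formed from the designated accumulator's bits (least-significant first)."""
        if use_indirect:
            address = sum(bit << i for i, bit in enumerate(accumulator_bits))
        else:
            address = direct_address
        return memory[address]

    lookup_table = [v * v for v in range(16)]        # a small table held in the memory means
    print(indirect_read(lookup_table, [1, 0, 1, 0])) # accumulator holds 5 -> prints 25
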
EP88108175A 1987-06-01 1988-05-20 Linear chain of parallel processors and method of using same Expired - Lifetime EP0293700B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/057,128 US5129092A (en) 1987-06-01 1987-06-01 Linear chain of parallel processors and method of using same
US57128 1987-06-01

Publications (3)

Publication Number Publication Date
EP0293700A2 EP0293700A2 (en) 1988-12-07
EP0293700A3 EP0293700A3 (en) 1989-10-18
EP0293700B1 true EP0293700B1 (en) 1995-02-01

Family

ID=22008678

Family Applications (1)

Application Number Title Priority Date Filing Date
EP88108175A Expired - Lifetime EP0293700B1 (en) 1987-06-01 1988-05-20 Linear chain of parallel processors and method of using same

Country Status (4)

Country Link
US (1) US5129092A (en)
EP (1) EP0293700B1 (en)
JP (1) JP2756257B2 (en)
DE (2) DE3852909T2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303976B2 (en) 2015-07-23 2019-05-28 Mireplica Technology, Llc Performance enhancement for two-dimensional array processor

Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0360527B1 (en) * 1988-09-19 1995-01-04 Fujitsu Limited Parallel computer system using a SIMD method
KR930002316B1 (en) * 1989-05-10 1993-03-29 미쯔비시덴끼 가부시끼가이샤 Multiprocessor type time varying image encoding system and image processor
EP0421639B1 (en) * 1989-09-20 1998-04-22 Fujitsu Limited Parallel data processing system
US5287416A (en) * 1989-10-10 1994-02-15 Unisys Corporation Parallel pipelined image processor
EP0444368B1 (en) * 1990-02-28 1997-12-29 Texas Instruments France Digital Filtering with SIMD-processor
WO1991019256A1 (en) * 1990-05-30 1991-12-12 Adaptive Solutions, Inc. Mechanism providing concurrent computational/communications in simd architecture
JP2959104B2 (en) * 1990-10-31 1999-10-06 日本電気株式会社 Signal processor
US5325500A (en) * 1990-12-14 1994-06-28 Xerox Corporation Parallel processing units on a substrate, each including a column of memory
JPH04290155A (en) * 1991-03-19 1992-10-14 Fujitsu Ltd Parallel data processing system
US5732164A (en) * 1991-05-23 1998-03-24 Fujitsu Limited Parallel video processor apparatus
US5241632A (en) * 1992-01-30 1993-08-31 Digital Equipment Corporation Programmable priority arbiter
US5655131A (en) * 1992-12-18 1997-08-05 Xerox Corporation SIMD architecture for connection to host processor's bus
US5375080A (en) * 1992-12-18 1994-12-20 Xerox Corporation Performing arithmetic on composite operands to obtain a binary outcome for each multi-bit component
US5428804A (en) * 1992-12-18 1995-06-27 Xerox Corporation Edge crossing circuitry for SIMD architecture
US5408670A (en) * 1992-12-18 1995-04-18 Xerox Corporation Performing arithmetic in parallel on composite operands with packed multi-bit components
US5651121A (en) * 1992-12-18 1997-07-22 Xerox Corporation Using mask operand obtained from composite operand to perform logic operation in parallel with composite operand
US5450603A (en) * 1992-12-18 1995-09-12 Xerox Corporation SIMD architecture with transfer register or value source circuitry connected to bus
US5450604A (en) * 1992-12-18 1995-09-12 Xerox Corporation Data rotation using parallel to serial units that receive data from memory units and rotation buffer that provides rotated data to memory units
US5526501A (en) * 1993-08-12 1996-06-11 Hughes Aircraft Company Variable accuracy indirect addressing scheme for SIMD multi-processors and apparatus implementing same
US5434629A (en) * 1993-12-20 1995-07-18 Focus Automation Systems Inc. Real-time line scan processor
US5557734A (en) * 1994-06-17 1996-09-17 Applied Intelligent Systems, Inc. Cache burst architecture for parallel processing, such as for image processing
US5630161A (en) * 1995-04-24 1997-05-13 Martin Marietta Corp. Serial-parallel digital signal processor
US6188381B1 (en) * 1997-09-08 2001-02-13 Sarnoff Corporation Modular parallel-pipelined vision system for real-time video processing
US6208772B1 (en) 1997-10-17 2001-03-27 Acuity Imaging, Llc Data processing system for logically adjacent data samples such as image data in a machine vision system
FR2793088B1 (en) * 1999-04-30 2001-06-22 St Microelectronics Sa METHOD AND DEVICE FOR COLLECTING LOGIC VALUES OUTPUT OF A LOGIC UNIT IN AN ELECTRONIC CIRCUIT
US6598146B1 (en) * 1999-06-15 2003-07-22 Koninklijke Philips Electronics N.V. Data-processing arrangement comprising a plurality of processing and memory circuits
EP1122688A1 (en) * 2000-02-04 2001-08-08 Texas Instruments Incorporated Data processing apparatus and method
AU2002238325A1 (en) * 2001-03-02 2002-09-19 Atsana Semiconductor Corp. Data processing apparatus and system and method for controlling memory access
CN1301491C (en) * 2001-03-13 2007-02-21 伊强德斯股份有限公司 Visual device, interlocking counter, and image sensor
HUP0102356A2 (en) * 2001-06-06 2003-02-28 Afca-System Kft. Method and circuit arrangement for parallel mode executing of cyclic repeated data processing jobs, as well as program system for producing and simulating operation codes for carrying out the method
US7054897B2 (en) * 2001-10-03 2006-05-30 Dsp Group, Ltd. Transposable register file
US20100274988A1 (en) * 2002-02-04 2010-10-28 Mimar Tibet Flexible vector modes of operation for SIMD processor
DE10206830B4 (en) * 2002-02-18 2004-10-14 Systemonic Ag Method and arrangement for merging data from parallel data paths
US7506135B1 (en) * 2002-06-03 2009-03-17 Mimar Tibet Histogram generation with vector operations in SIMD and VLIW processor by consolidating LUTs storing parallel update incremented count values for vector data elements
US7266255B1 (en) * 2003-09-26 2007-09-04 Sun Microsystems, Inc. Distributed multi-sample convolution
US7737994B1 (en) * 2003-09-26 2010-06-15 Oracle America, Inc. Large-kernel convolution using multiple industry-standard graphics accelerators
JP2006099232A (en) * 2004-09-28 2006-04-13 Renesas Technology Corp Semiconductor signal processor
US20060156316A1 (en) * 2004-12-18 2006-07-13 Gray Area Technologies System and method for application specific array processing
US20060190517A1 (en) * 2005-02-02 2006-08-24 Guerrero Miguel A Techniques for transposition of a matrix arranged in a memory as multiple items per word
KR101031680B1 (en) * 2006-03-03 2011-04-29 닛본 덴끼 가부시끼가이샤 Processor array system having function for data reallocation between high-speed pe
GB2436377B (en) * 2006-03-23 2011-02-23 Cambridge Display Tech Ltd Data processing hardware
KR100834412B1 (en) * 2007-05-23 2008-06-04 한국전자통신연구원 A parallel processor for efficient processing of mobile multimedia
GB0809192D0 (en) * 2008-05-20 2008-06-25 Aspex Semiconductor Ltd Improvements to data compression engines
JP5601817B2 (en) * 2009-10-28 2014-10-08 三菱電機株式会社 Parallel processing unit
JP5528976B2 (en) * 2010-09-30 2014-06-25 株式会社メガチップス Image processing device
JP2011192305A (en) * 2011-06-01 2011-09-29 Renesas Electronics Corp Semiconductor signal processor
US9183614B2 (en) 2011-09-03 2015-11-10 Mireplica Technology, Llc Processor, system, and method for efficient, high-throughput processing of two-dimensional, interrelated data sets
US9680916B2 (en) 2013-08-01 2017-06-13 Flowtraq, Inc. Methods and systems for distribution and retrieval of network traffic records
FR3015068B1 (en) * 2013-12-18 2016-01-01 Commissariat Energie Atomique SIGNAL PROCESSING MODULE, IN PARTICULAR FOR NEURONAL NETWORK AND NEURONAL CIRCUIT
US11249767B2 (en) * 2019-02-05 2022-02-15 Dell Products L.P. Boot assist zero overhead flash extended file system
US11042372B2 (en) * 2019-05-24 2021-06-22 Texas Instruments Incorporated Vector bit transpose

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3537074A (en) * 1967-12-20 1970-10-27 Burroughs Corp Parallel operating array computer
US3582899A (en) * 1968-03-21 1971-06-01 Burroughs Corp Method and apparatus for routing data among processing elements of an array computer
US3970993A (en) * 1974-01-02 1976-07-20 Hughes Aircraft Company Cooperative-word linear array parallel processor
US4174514A (en) * 1976-11-15 1979-11-13 Environmental Research Institute Of Michigan Parallel partitioned serial neighborhood processors
DE2963153D1 (en) * 1978-06-26 1982-08-12 Environmental Res Inst Apparatus and method for generating a transformation of a first data matrix to form a second data matrix
US4215401A (en) * 1978-09-28 1980-07-29 Environmental Research Institute Of Michigan Cellular digital array processor
US4314349A (en) * 1979-12-31 1982-02-02 Goodyear Aerospace Corporation Processing element for parallel array processors
US4525797A (en) * 1983-01-03 1985-06-25 Motorola, Inc. N-bit carry select adder circuit having only one full adder per bit
US4739474A (en) * 1983-03-10 1988-04-19 Martin Marietta Corporation Geometric-arithmetic parallel processor
US4621339A (en) * 1983-06-13 1986-11-04 Duke University SIMD machine using cube connected cycles network architecture for vector processing
JPH0658631B2 (en) * 1983-12-19 1994-08-03 株式会社日立製作所 Data processing device
FR2573888B1 (en) * 1984-11-23 1987-01-16 Sintra SYSTEM FOR THE SIMULTANEOUS TRANSMISSION OF DATA BLOCKS OR VECTORS BETWEEN A MEMORY AND ONE OR MORE DATA PROCESSING UNITS
US4787057A (en) * 1986-06-04 1988-11-22 General Electric Company Finite element analysis method using multiprocessor for matrix manipulations with special handling of diagonal elements
US4829585A (en) * 1987-05-04 1989-05-09 Polaroid Corporation Electronic image processing circuit

Also Published As

Publication number Publication date
EP0293700A3 (en) 1989-10-18
EP0293700A2 (en) 1988-12-07
DE3852909T2 (en) 1995-10-12
DE3852909D1 (en) 1995-03-16
US5129092A (en) 1992-07-07
JPS63316167A (en) 1988-12-23
DE293700T1 (en) 1990-04-12
JP2756257B2 (en) 1998-05-25

Similar Documents

Publication Publication Date Title
EP0293700B1 (en) Linear chain of parallel processors and method of using same
EP0390907B1 (en) Parallel data processor
US4622632A (en) Data processing system having a pyramidal array of processors
EP0293701B1 (en) Parallel neighborhood processing system and method
EP0085520B1 (en) An array processor architecture utilizing modular elemental processors
US5557734A (en) Cache burst architecture for parallel processing, such as for image processing
CA2215598C (en) Fpga-based processor
US4215401A (en) Cellular digital array processor
US5247613A (en) Massively parallel processor including transpose arrangement for serially transmitting bits of data words stored in parallel
EP0122048B1 (en) Data processing cells and parallel data processors incorporating such cells
US4697247A (en) Method of performing matrix by matrix multiplication
EP0676764A2 (en) A semiconductor integrated circuit
US4745546A (en) Column shorted and full array shorted functional plane for use in a modular array processor and method for using same
US4524428A (en) Modular input-programmable logic circuits for use in a modular array processor
US4543642A (en) Data Exchange Subsystem for use in a modular array processor
US20040215927A1 (en) Method for manipulating data in a group of processing elements
US5345563A (en) Input/output interface for data in parallel processors
JPH04295953A (en) Parallel data processor with built-in two-dimensional array of element processor and sub-array unit of element processor
Wilson Chapter Nine One Dimensional SIMD Architectures—The AIS-5000
JPH1074141A (en) Signal processor
US8856493B2 (en) System of rotating data in a plurality of processing elements
US5928350A (en) Wide memory architecture vector processor using nxP bits wide memory bus for transferring P n-bit vector operands in one cycle
Gilmore The massively parallel processor (MPP): A large scale SIMD processor
JP2515724B2 (en) Image processing device
US7676648B2 (en) Method for manipulating data in a group of processing elements to perform a reflection of the data

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH DE ES FR GB GR IT LI LU NL SE

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: APPLIED INTELLIGENT SYSTEMS, INC.

RBV Designated contracting states (corrected)

Designated state(s): DE FR GB NL

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): DE FR GB NL

EL Fr: translation of claims filed
17P Request for examination filed

Effective date: 19891220

TCNL Nl: translation of patent claims filed
DET De: translation of patent claims
17Q First examination report despatched

Effective date: 19910717

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB NL

REF Corresponds to:

Ref document number: 3852909

Country of ref document: DE

Date of ref document: 19950316

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20020508

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20020515

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20020529

Year of fee payment: 15

Ref country code: DE

Payment date: 20020529

Year of fee payment: 15

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20030520

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20031201

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20031202

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20030520

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20040130

NLV4 Nl: lapsed or annulled due to non-payment of the annual fee

Effective date: 20031201

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST