CA2147363A1 - Improved configurable cellular array - Google Patents

Improved configurable cellular array

Info

Publication number
CA2147363A1
Authority
CA
Canada
Prior art keywords
switch
cell
input
cells
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002147363A
Other languages
French (fr)
Inventor
Thomas A. Kean
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xilinx Inc
Original Assignee
Thomas A. Kean
Xilinx, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed. "Global patent litigation dataset" by Darts-ip (https://patents.darts-ip.com/?family=10724616) is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Thomas A. Kean and Xilinx, Inc.
Publication of CA2147363A1
Legal status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/34Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/1733Controllable logic circuits
    • H03K19/1737Controllable logic circuits using multiplexers
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/177Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form
    • H03K19/17704Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form the logic functions being realised by the interconnection of rows and columns
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/177Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form
    • H03K19/17748Structural details of configuration resources
    • H03K19/17752Structural details of configuration resources for hot reconfiguration
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/177Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form
    • H03K19/17748Structural details of configuration resources
    • H03K19/17756Structural details of configuration resources for partial configuration or partial reconfiguration
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03KPULSE TECHNIQUE
    • H03K19/00Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
    • H03K19/02Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components
    • H03K19/173Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components
    • H03K19/177Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits using specified components using elementary logic circuits as components arranged in matrix form
    • H03K19/17748Structural details of configuration resources
    • H03K19/17758Structural details of configuration resources for speeding up configuration or reconfiguration

Abstract

A field programmable gate array (FPGA) of cells (12) arranged in rows and columns is interconnected by a hierarchical routing structure. Switches (18, 20) separate the cells (12) into blocks and blocks of blocks with routing lines (26, 28, 30, 32, 34, 36, 38, 40) interconnecting the switches (18, 20) to form the hierarchy.

Description

IMPROVED CONFIGURABLE CELLULAR ARRAY

The present invention relates to a configurable cellular array of dynamically configurable logic elements, such arrays being generally known as Field Programmable Gate Arrays (FPGAs).

Reprogrammable FPGAs have been available commercially for several years. The best known commercial family of FPGAs are those from Xilinx Inc. One class of these devices uses Static Random Access Memory (SRAM) to hold control bits which control their configurations. Most FPGA devices replace traditional mask programmed Application Specific Integrated Circuit (ASIC) parts which have a fixed configuration. The configuration of the FPGA is static and is loaded from a non-volatile memory when power is applied to the system. Nearly all commercially available FPGAs have a stream-based interface to the control store. (The control store contains the set of bits which determine what functions the FPGA will implement.) In a stream-based interface to the control store, a sequence of data is applied to a port in the FPGA to provide a complete configuration for the whole device or for a fixed (normally large) sub-section of the FPGA. This stream-based interface, when combined with an address counter implemented on the FPGA itself, provides an efficient method of loading the complete device configuration from an adjacent EPROM or other non-volatile memory at power up without any additional overhead circuits. A stream-based interface with an address counter is a suitable programming interface for an FPGA which is used as a replacement for a standard ASIC. Some FPGAs can be partly or totally reconfigured using one of a set of static configurations stored at different addresses in an EPROM, and can trigger the reconfiguration from within the design implemented on the FPGA.

Published International Application WO 90/11648, corresponding to U.S. Patent 5,243,238, discloses an architecture hereafter referred to as CAL I, which has been implemented in an Algotronix product designated CAL 1024. CAL I is different from other commercially available FPGAs in that its control store appears as a standard SRAM to the systems designer, and can be accessed using address bus, data bus, chip enable, chip select and read/write signals. Addressing the control store as an SRAM allows a user program running on the host processor to map the FPGA control store (configuration memory) into the memory or address space of the host processor, so that the processor can configure the FPGA to implement a user-defined circuit. This arrangement, which is implemented in the CAL 1024 FPGA, allows the user to partition an application between the processor and the FPGA, with appropriate sections being implemented by each device. The control store interface provides an important input/output (I/O) channel between the FPGA and the processor, although the I/O can also take place using more traditional techniques via, for example, a shared data memory area. This latter type of FPGA provides a passive control store interface because an external agent is required to initiate configuration or reconfiguration of the device, as required.
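As an illustration of this passive, memory-mapped model, the following C sketch shows how a host program might drive such a control store. It is only a sketch: the base address, offsets and field layout are hypothetical, not taken from the patent.

    #include <stdint.h>

    /* Hypothetical address at which the FPGA control store is mapped
     * into the host processor's address space. */
    #define FPGA_CTRL_BASE ((volatile uint8_t *)0x40000000u)

    /* Configure part of the array by writing a control word directly,
     * exactly as if the device were a static RAM. */
    static void fpga_write_config(uint32_t offset, uint8_t value)
    {
        FPGA_CTRL_BASE[offset] = value;
    }

    /* Read back an internal value through the same interface, so the
     * control store doubles as an I/O channel to the processor. */
    static uint8_t fpga_read_state(uint32_t offset)
    {
        return FPGA_CTRL_BASE[offset];
    }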
Experience with the CAL I architecture and trends within the electronics industry have made this second, passive form of control store interface increasingly attractive for many applications. Microprocessors or microcontrollers are now pervasive components of computer systems, and most board level systems contain one. The major benefit of the stream-based "active" FPGA programming approach is that no overhead circuits are required to initiate reconfiguration. In systems where a microprocessor or microcontroller is present, the "passive" RAM-emulating FPGA interface is preferable for several reasons:

(1) the FPGA configuration can be stored in the microprocessor's program and data memory (reducing the number of parts by removing the need for a separate memory chip);

(2) the existing data and address buses on the board can be used to control the FPGA (saving printed circuit board area by removing dedicated wires between the configuration EPROM and the FPGA);

(3) the FPGA control store can be written to and read from by the microprocessor, and thereby used as an I/O channel between the FPGA and the microprocessor, potentially saving additional wiring between the FPGA and the processor buses and freeing the FPGA programmable I/O pins for communication with external devices; and

(4) the intelligence of the microprocessor can be used to support compression schemes for the configuration data and other techniques, which allows more flexibility in reprogramming the FPGA.

In addition, the difference in cost between an "active" FPGA with an associated EPROM holding its configuration and a passive FPGA with an active microcontroller chip containing an EPROM and a simple processor is minimal. The easy reprogrammability makes the passive FPGA attractive, even if the microcontroller has no other function apart from reprogramming the FPGA.
Another trend within the electronics industry has been the provision of "support chips" for microprocessors which provide an interface between I/O devices and a particular microprocessor. Examples of these devices include Universal Asynchronous Receiver Transmitters (UARTs) for low bandwidth serial I/O, Programmable Peripheral Interfaces (PPIs) for low bandwidth parallel I/O, and various specialised chips for higher bandwidth connections to networks and disk drives. These support chips appear to the processor as locations in the I/O or memory address space to and from which data are transferred. Some support chips can interrupt the processor via interrupt lines or take over the bus for Direct Memory Access (DMA) operations. In many ways a passive FPGA chip can be viewed as a successor to a support chip, providing an interface to the processor via its control store on the one hand, and an interface to the external world via a large number of flexible I/O lines on the other, for example the 128 programmable I/O lines on the Algotronix CAL 1024 device.
A passive FPGA chip has a number of advantages. For example, it is cost-effective to provide a single FPGA with a library of configurations instead of providing a number of support chips. In addition, providing a single FPGA for several functions reduces the number of devices in the processor manufacturer's catalogue. Also, reconfigurable FPGAs can support changeable I/O functions, such as when a single external connector can be used as either a serial or a parallel port. With a passive RAM control interface, the FPGA is able to support other functions as well.
Each time an FPGA is reconfigured to implement a different set of functions, the microprocessor must access the configuration memory. One reconfiguration typically requires many control store accesses, one access for each word of configuration memory to be changed. Several important classes of reconfiguration have been identified.
(1) Application swapping occurs when one application terminates and a completely different application wishes to make use of the FPGA. In this case the FPGA chip is completely reconfigured, usually from a static configuration.

(2) Task swapping occurs when the application must configure relatively large sections of the FPGA to implement a new phase in the computation. For example, a sorting application might first sort small batches of data completely using configuration A and then merge those sorts into a completely sorted stream of data using configuration B. In this case, the application has knowledge of both configurations and need only change those resources which are different in configuration B. At a later point, configuration A may itself be restored.

(3) Data dependent reconfiguration occurs when the configurations of some cells are computed dynamically based on input data by the application program, rather than being loaded from a static configuration file. Often a static configuration is first loaded, then a relatively small sub-set of cells is reconfigured dynamically (that is, reconfigured while the chip is operating). An important example of this class of reconfiguration is where an operand (such as a constant multiplier or a search string) is folded directly into the logic used to implement the multiply or sort unit rather than being stored in a register. This technique is advantageous as it frequently results in smaller and faster operation units.

(4) Access to gate outputs occurs for debugging. The outputs of all the logic cells on the CAL I FPGA are mapped to bits of the control store. Debugging programs are available which read back this information and display it on the design layout to show the logic levels on internal wires.

(5) Access to gate outputs for I/O is similar to the previous access to gate outputs for debugging, but in this particular case only a small fraction of the logic nodes, namely those which correspond to input and output registers, will be accessed repeatedly. The ability to rapidly assemble a word representing input to or the result of a computation from several bits at different locations in the control store is critical to the effectiveness of this technique.
It is desirable to reduce the number of accesses required and hence the time to wholly or partially reconfigure the device. Several systems other than CAL I have been proposed which allow direct access to internal signals in an FPGA or an FPGA-like device, for example, as disclosed in "Cellular Logic-in-Memory Arrays", William H. Kautz, IEEE Transactions on Computers, Vol. C-18, No. 8, August 1969; "A Logic in Memory Computer", Harold S. Stone, IEEE Transactions on Computers, Vol. C-19, No. 1, January 1970; and Xilinx U.S. Patent No. 4,758,985, "Microprocessor Oriented Configurable Logic Element", although all these proposals suffered from major drawbacks and were not made available commercially.
It is also desirable to improve the means of accessing state information in designs implemented on FPGAs so that an external processor can perform word-wide read or write operations on the registers of the user's design with a single access to the control store. Thus the control store interface allows high bandwidth communication between the processor and the FPGA. It is also desirable to provide mechanisms for synchronising computations between the FPGA and the processor, and to provide a mechanism for extending design configuration files to support dynamic reconfiguration while allowing use of conventional tools for static designs to create FPGA configurations.
The architecture of the CAL 1024 was based on 1.5 micrometre technology available in 1989. One problem with the CAL I architecture, in which cells are connected only to their nearest neighbours, was that cells in the middle of the array became less useful with increasing array size, as the distance and hence delay to the edge of the chip increased. This problem became more serious as improvements in processing technology meant that the number of cells implementable per chip increased from 1024 to about 16,384. This resulted in a scalability problem because of increased delays, and reduced the performance below the desired criteria. Thus, although scalability of chips using the CAL I architecture can be achieved, it is at the expense of performance. The limited number of cells available on a single chip with 1.5 µm technology meant that it was desirable to ensure scalability over chip boundaries so that large designs typical of many computational applications could be realised using multiple chips. The limitations of the then processing technology also made it essential to optimise the architecture for silicon area, and sometimes this optimisation was at the expense of speed. The original Algotronix CAL 1024 chips were designed to bring out peripheral array signals to pads on the edges of the cellular array so that they could be cascaded into larger cellular arrays on a printed circuit board. Packaging technology has not evolved as rapidly as chip technology, and limitations on the number of package I/O pins make it uneconomic to produce fully cascadable versions of the higher cell density chips.
The CAL I architecture suffered from a number of other disadvantages. For example, in order to access a cell in the existing CAL I FPGA, five to six processor instructions are needed to calculate the address of the cell; this again takes time and slows operation. With the existing CAL I cell array, the routing architecture used meant that with an increased number of cells per chip, routing via intermediate cells added considerably to the delays involved. In addition, in the CAL 1024 device, global signals are coupled to all the cells in the array so that the cells can be signalled simultaneously. It follows that at high clock frequencies, global signals could consume high power.

An object of the present invention is to provide an improved field programmable gate array which obviates or mitigates at least one of the aforementioned disadvantages.

A further object of the present invention is to reduce the number of control store accesses required and the time to wholly or partially reconfigure the device from one configuration to another.

A further object of the invention is to enable an external processor to perform word-wide read or write operations on registers of a user's design with a single access to the control store.

A yet further object of the present invention is to provide a mechanism for extending design configuration files to support dynamic reconfiguration while allowing the use of conventional tools for static designs to create FPGA configurations.

A further object of the invention is to provide mechanisms for the synchronisation of computations between the FPGA and an external processor.

A yet further object of the invention is to provide a novel routing architecture which can be scaled up to operate on arrays having different numbers of cells, to reduce delays involved in routing between cells in large cell arrays.

Array of Cells with Hierarchical Routing Structure

In accordance with the present invention, a 2-dimensional field programmable gate array (FPGA) of cells arranged in rows and columns is provided. Each cell in the array has at least one input and one output connection at least one bit wide to each of its neighbouring cells. Each cell also has a programmable routing unit and a programmable function unit to permit intercellular connections to be made. The programmable function unit can select one of a plurality of functions of several input signals for generating a function unit output signal. The routing unit of a cell directs inputs of the cell to function unit inputs, and also directs inputs of the cell and the function unit output to neighbouring cells. Groups of cells in the array are arranged so as to be connected to additional conductors of a length equal to a predetermined number of cells. Cells in the array are coupled to the additional conductors via switches. Typically, four such conductors are provided for each cell, two conductors arranged in one direction in the array and two conductors arranged in the orthogonal direction in the array. Each pair of conductors is arranged such that one conductor in the pair carries signals in one direction and the other conductor carries signals in the opposite direction. This novel architecture is referred to hereafter as the CAL II architecture, or simply as CAL II.
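The per-cell resources just described can be summarised in a small data structure. The following C sketch is illustrative only; the field names are invented here and do not come from the patent.

    /* One CAL II cell, under the simplifying assumption of single-bit
     * wires: a neighbour input and output per side, taps from the
     * flyover conductors, and the routing/function unit signals. */
    typedef enum { NORTH, SOUTH, EAST, WEST, NSIDES } side_t;

    typedef struct {
        int neighbour_in[NSIDES];  /* from the four neighbouring cells  */
        int neighbour_out[NSIDES]; /* to the four neighbouring cells    */
        int flyover_in[NSIDES];    /* taps from the length-4 conductors */
        int x1, x2, x3;            /* function unit inputs chosen by the
                                      programmable routing unit         */
        int self;                  /* function unit output              */
    } cal2_cell;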
A predetermined block of cells, for example a 4 x 4 block of cells, has the additional conductors of at least cell length 4 (four cells long). These blocks are arranged into repeating units to form an array of these cells, whereby 16 of such 4 x 4 blocks of cells result in a unit of 16 cells x 16 cells, with each 4 x 4 block having the additional conductors, the longer conductors hereinafter referred to as flyovers, associated with each row or column of 4 cells. The 16 x 16 block of cells may itself have additional flyover conductors.
In larger cellular arrays, the structure of the hierarchical routing can be extended to any number of levels, a third level using conductors of length 64, a fourth level using conductors of length 256, and so on.
This arrangement permits scaling of the array with the advantage that the scaling is logarithmic in terms of distance, thereby significantly reducing delay between cells. Specifically, a signal travels from an origin cell to its closest associated switch located at a cell block boundary, then along appropriate flyovers to the destination cell. Thus, this structure creates a hierarchical cellular array with a variety of routing resources whose lengths scale, in one embodiment by a factor of 4 each time, built on top of a simple array of neighbour connections only.
The principal advantage of providing different levels of routing resources is that it allows the number of conductor segments required to route from cell to cell on the array to be minimised. For example, if a path is provided between two points in the array using neighbour interconnect only, the number of routing segments would be equal to the number of cells between the two points, whereas with the hierarchical interconnect, the number of segments increases with the logarithm of the distance between the cells.
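A worked comparison makes the logarithmic claim concrete. Under the simplifying assumption that flyovers of length 4^k are always available and used greedily, the number of segments for a straight-line distance d is the sum of d's base-4 digits, against d segments for neighbour-only routing. The C sketch below is a model of the idea, not an exact switch-level count.

    /* Neighbour-only routing: one segment per cell crossed. */
    static int segments_neighbour_only(int d) { return d; }

    /* Hierarchical routing model: greedy use of length-4^k flyovers
     * gives a segment count equal to the sum of d's base-4 digits. */
    static int segments_hierarchical(int d)
    {
        int segs = 0;
        while (d > 0) {
            segs += d % 4;
            d /= 4;
        }
        return segs;
    }
    /* Example: d = 48 needs 48 neighbour segments, but 48 = 3 x 16,
     * so only 3 length-16 flyover segments in this model. */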

Single-Source Directed Wiring

In one embodiment of a pre-programmable cellular array with hierarchical routing, all the wires in the array are directed and have a single source. Thus, 3-state drivers are not used. In one embodiment, all the connections in the array are completely symmetrical so that if the array is rotated the structure remains unchanged. Single source wiring has the advantage of being simpler to implement than multiple-source wires. Multiple-source wires, while allowing additional flexibility, require a considerable area overhead, and produce contention when different drivers attempt to place opposite values on the same wire. Contention dissipates heavy power, which can result in device failure. Contention is obviated by the present invention, in which a wire is driven by a single multiplexer output. The symmetry feature simplifies the CAD software used to map user designs onto the array, and allows hierarchical blocks of those designs to be rotated or reflected to provide better utilisation of the array area.
Preferably, the switches providing connections between the flyovers and the cells are static RAM controlled multiplexers. Conveniently, the switches at 4-cell and 16-cell boundaries permit direct neighbour connections as well as additional connections via the longer flyover conductors.

Automatic Routing Optimization, Portability

The hierarchical routing resources in the improved FPGA can be used in two principal ways. Firstly, the user can design using the simpler neighbour programming models, ignoring the availability of the longer connections. In such a case, the software will automatically detect nets in the layout which may benefit from new routing resources, and take advantage of these resources to speed up performance of the design. Secondly, the user can design using improved programming models and make explicit assignments to the extra routing resources. In this case, extra density is achieved by assigning different nets to various levels of interconnect at the same point in the cellular array. For example, a length-16 wire could carry a signal over a sub-unit (for example, several 4 x 4 blocks) without interfering with the local interconnect in that sub-unit. When flyovers are used to bypass a block of cells, blocks of the user design might have to be placed in the FPGA on these 4-cell or 16-cell boundaries. Automatic addition of flyover routing is easier to use and is independent of the number of levels of routing provided by a given FPGA chip. Using software to add the flyovers provides design portability between different chips, and using improved programming models which use flyovers to bypass a block, or manually assigning flyover resources as appropriate, allows more efficient use of the resources provided.
Use of longer routing resources may be achieved using low level CAD software as described above, or using hardware in the chip itself to automatically route signals to longer wires where possible. This provides more device portability and allows special "fast" versions of existing chips to be made with additional longer wires without requiring any changes to the existing design. This "dynamic" selection of longer routing wires simplifies the CAD software, allowing it to run faster. Dynamic selection of longer wires is particularly attractive for applications which involve dynamically reprogramming FPGA chips.
According to another aspect of the invention, the speed of propagation of signals through an FPGA is improved by automatically mapping onto flyovers those signals capable of being speeded up, using circuitry fabricated on the FPGA.
The method comprises the steps of: detecting control store bit patterns which correspond to routing a signal straight through a cell; detecting when a group of cells beneath a flyover all route the signal in the direction of the flyover by using the 4-input gate provided for that flyover direction, taking as input the output of the 4-input gate of the appropriate neighbour multiplexer; and feeding an output from one of the 4-input gates to switches at both ends of the flyover, whereby the signal is carried automatically by the flyover as well as by neighbour routing, and the faster signal on the flyover is selected by the switch at the end of the flyover.
The method is scalable and can be applied to a group of 4 length-4 flyovers under a length-16 flyover when this group all route a signal in the direction of the length-16 flyover. This is done using a 4-input gate which takes as inputs the outputs of the 4-input gates used for receiving signals from the neighbour cells.
The type of gate used depends on the control bits being detected. For example, a NOR gate is used for detecting bits 0,0 in an East to West direction in a West routing multiplexer. Alternatively, to detect a bit pattern of 1,1, a NAND gate and associated logic circuitry are used.
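The detection logic can be stated compactly. Assuming, as in the NOR example above, that the control bit pattern 0,0 in a cell's West routing multiplexer selects the straight-through East-to-West route, a 4-input NOR over those bits for the four cells beneath a length-4 flyover is high exactly when the whole group routes the signal in the flyover direction. A minimal C model:

    #include <stdbool.h>

    /* sel[i] holds the two routing-multiplexer control bits of cell i.
     * The NOR of all the bits is true only when every cell is set to
     * the assumed straight-through pattern 0,0. */
    static bool flyover_carries_signal(const unsigned sel[4])
    {
        return (sel[0] | sel[1] | sel[2] | sel[3]) == 0;
    }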

Block Correspondence Allows Easy Reconfiguration

An important feature of the present invention is that a rectangular area of cells specified as a hierarchical block of the user's design (for example, a 4 x 4 block or a 16 x 16 block of cells) corresponds directly, i.e. by a straightforward physical relationship, with one or more rectangular areas in the configuration memory (control store) of the CAL II FPGA device representing instances of that block. This means that a block of the user's design can be dynamically replaced by another block of the same size; for example, a register can be replaced by a counter of equal size. Thus, in accordance with the present invention, the host processor must reconfigure only the corresponding area of the control store RAM. The binary data for both blocks can be pre-calculated from the user's design, and the actual replacement can be done very rapidly using a block transfer operation, as is well known in the art. During dynamic reconfiguration, registers can be initialised either to a default associated with a block definition, or to restore the previous state of the unit whose configuration is being restored, or to a convenient value decided by the application program performing the reconfiguration.
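Because the block's configuration occupies a rectangular region of the control store, the replacement amounts to a two-dimensional block copy. The C sketch below assumes a simple byte-per-column layout and a memory-mapped control store; the geometry constants are hypothetical.

    #include <stdint.h>
    #include <string.h>

    #define STORE_ROW_BYTES 64  /* assumed bytes per control-store row */

    /* Swap in a pre-calculated configuration for a block of the given
     * size at (row0, col0): one memcpy per control-store row. */
    static void replace_block(uint8_t *store, const uint8_t *new_cfg,
                              int row0, int col0, int rows, int cols)
    {
        for (int r = 0; r < rows; r++)
            memcpy(&store[(row0 + r) * STORE_ROW_BYTES + col0],
                   &new_cfg[r * cols], (size_t)cols);
    }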

Wildcard Feature

According to a further aspect of the present invention, an FPGA is provided having a randomly accessible writable control store in which more than one word of control memory is written simultaneously as a result of a single write access. Conveniently, the row and column decoders may be implemented by standard NOR gates coupled to wildcard registers associated with the address buses to the respective row and column decoders.
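The effect of a wildcard register can be modelled as follows: each bit set in the register makes the corresponding address bit a don't-care, so one write selects every row that agrees with the written address on the remaining bits. A minimal C model of the row-match condition (the encoding is an assumption for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    /* A row is selected when it differs from the written address only
     * in wildcarded (don't-care) bit positions. */
    static bool row_selected(uint32_t row_addr, uint32_t write_addr,
                             uint32_t wildcard)
    {
        return ((row_addr ^ write_addr) & ~wildcard) == 0;
    }
    /* Example: wildcard = 0x3 and write_addr = 0x4 select rows
     * 0x4, 0x5, 0x6 and 0x7 in a single write access. */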

Match Feature

Alternatively, the FPGA includes a plurality of programmable rather than fixed row decoders, which are implemented by means of match registers. Also, the FPGA includes a plurality of column decoders which are implemented by match registers.

Shift and Mask Feature

According to a further aspect of the present invention, shift and mask registers are provided between an external data bus and the internal data bus to the bit line drivers. This has the advantage of allowing additional flexibility in selecting which bits of the addressed word are significant for the current transfer, and presenting that information in a more convenient form, such as left aligned, to the external processor.

Preferably, the FPGA writable control store includes a mask unit for allowing some bits of a word to be programmed selectively. Conveniently, the mask unit includes shift components which can expand left aligned data for unmasked bits or produce left aligned data from a word with some bits masked out.
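The two shift operations attributed to the mask unit above amount to a bit scatter (expanding left-aligned data into the unmasked positions of a word) and a bit gather (collecting unmasked bits back into left-aligned form). The C sketch below is a behavioural model under an assumed MSB-first bit ordering:

    #include <stdint.h>

    /* Scatter: place successive data bits, taken from the MSB down,
     * into the positions where mask = 1. */
    static uint32_t expand_left_aligned(uint32_t data, uint32_t mask)
    {
        uint32_t out = 0, take = 1u << 31;
        for (uint32_t bit = 1u << 31; bit != 0; bit >>= 1) {
            if (mask & bit) {
                if (data & take)
                    out |= bit;
                take >>= 1;
            }
        }
        return out;
    }

    /* Gather: collect the bits where mask = 1 into a left-aligned word. */
    static uint32_t compress_to_left_aligned(uint32_t word, uint32_t mask)
    {
        uint32_t out = 0, put = 1u << 31;
        for (uint32_t bit = 1u << 31; bit != 0; bit >>= 1) {
            if (mask & bit) {
                if (word & bit)
                    out |= put;
                put >>= 1;
            }
        }
        return out;
    }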
In the FPGA, word-wide read and write accesses may be made through the control store interface to registers of a user design. Register access can be extended to an antifuse, EPROM, EEPROM or mask programmable logic device by providing an additional RAM-like interface for accessing internal state information.

Configuration and State Information Segregated

Advantageously, in the CAL II FPGA, the values present on internal nodes appear in the control store address space such that any word in the address space contains bits representing values on internal nodes or bits containing configuration information, but not both. Conveniently, the values on internal nodes appear in the control store address space such that addresses corresponding to state information are distinguishable from addresses corresponding to configuration information by examining one or a small sub-set of mode bits from the address bus.

Conveniently, the FPGA includes a further set of bit and word line drivers which are arranged orthogonally to the first set of bit and word line drivers, such that logic state information in a dual-ported memory in the device is accessible word-wide in either the horizontal (bit) or vertical (word) direction.

Multiple Address Decoders

According to another aspect of this invention, bit and word lines in the RAM are associated with multiple address decoders, and additional address bits are fed to these secondary decoders. Using more than one address decoder allows a more complex mapping between internal memory bits and external addresses, including the possibility of multiple bits of memory having a single address. This technique allows for a dense memory array while preserving logical fields corresponding to different device functions in the external address.

Microcontroller Integrated with FPGA

A further aspect of the invention integrates the FPGA architecture on the same chip with a microprocessor or microcontroller, wherein the FPGA control store memory is mapped into the processor address space.

On-Chip Timers

The FPGA architecture may include programmable counter-timers integrated on the chip to drive the global clock signals.

External and Internal Programmability

The address and data buses in the CAL II FPGA used for programming can also be connected to cell inputs and outputs as well as external signals.

To external systems, the CAL II array appears as two separate devices: a static random access memory control store and a programmable logic unit which is effectively the user's design. The memory control store is normally mapped into the address space of the host computer, allowing rapid changes of configuration. Use of random access control memory in the CAL II FPGA means that only those cells or parts of cells whose function has changed need to be reprogrammed. It will be understood that the programmable logic unit consists of the array of functional cells surrounded by programmable input/output blocks.

High Speed Path

According to another aspect of the present invention, the function unit of a cell has a plurality of input signals for receiving a plurality of input variables, the input variables being processed in parallel paths, whereby one of the parallel paths is optimised for speed so that a user can direct critical signals to the optimised path to ensure minimal delay in processing the signal.

Reconfiguration Synchronized with Computation

According to a further aspect of the invention there is provided a method of writing data directly into a register of the CAL II array. The method comprises the steps of using the bit or word lines to the control store as clocks or signals for synchronising computations in a circuit implemented in the CAL II array. In this manner, user logic implemented on the FPGA is synchronized to a microprocessor.
According to a further aspect of the invention, external signals may be monitored by circuits implemented on an FPGA. The monitoring method comprises the steps of connecting the external signals to be monitored to positions at the periphery of the cell array which do not have associated I/O pads. Available external signals include data bus, address bus, mode, read/write, chip enable, chip select, reset, and force high impedance (FHZ).
Advantageously, a circuit implemented on the FPGA detects external reads and writes to the control memory, and automatically clocks itself to process an input value or produce the next output value.
According to yet a further aspect of the present invention there is provided an FPGA in which circuits are implementable as one or more replaceable blocks, configuration data for each potential configuration of each replaceable block being storable in memory, the replaceable blocks being selected from a hierarchical selection of blocks with associated bounding box sizes, the blocks being replaceable by alternative configurations having the same bounding box and I/O signals appearing at the same point on the block periphery.
These and other aspects will become apparent from the following description when taken in combination with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is an enlarged plan view of a representative area of an FPGA in accordance with a preferred embodiment of the invention, showing the spatial 2-dimensional relationship between individual cells and switches, the representative area having 16 cells x 16 cells;

Fig. 2 is an enlarged portion, in more detail, of the area indicated in broken outline in Fig. 1, showing the routing paths between the cells and the switches in the representative area;

Fig. 3 is a schematic block diagram of the representative area shown in Fig. 1 at its basic routing level, that is with neighbour interconnects only;

Fig. 4 depicts the array structure shown in Fig. 1, with the neighbour interconnects omitted in the interest of clarity, and with additional first level routing flyovers for a 4-cell x 4-cell block;

Fig. 5 is a view similar to Fig. 4 but with the length-4 flyovers omitted, and shows the routing flyover arrangement for the entire 16-cell x 16-cell block of Fig. 1 and depicts length-16 flyovers only;

Fig. 6 is a schematic diagram showing a row of cells and how a signal may be passed along the line of cells using the neighbour connect and the length-4 flyover arrangement shown in Figs. 3 and 4;

Fig. 7 is a schematic diagram of a 64 x 64 cell array which embodies the CAL II architecture, showing the I/O units along the perimeter of the chip;

Fig. 8 is a block diagram of the above device showing the row and column decode units, control and global I/O units and global buffers of this embodiment of the CAL II architecture device;

Fig. 9 is a logic symbol for the CAL II array shown in Fig. 8, used in schematic diagrams of systems containing the above embodiment of the CAL II architecture;

Fig. 10 is an enlarged schematic diagram of one of the cells shown in the representative area of Fig. 2;

Fig. 11 shows functions implementable by function unit 48 of the cell shown in Fig. 10;

Fig. 12 is another representation of output elements 50, 52, 54, and 56 of Fig. 10;

Fig. 13 is a schematic diagram of a switch at a 4-cell boundary showing the interconnections to the switch;

Fig. 14 is a schematic diagram of a switch at a 16-cell boundary showing the interconnections to the switch;

Fig. 15 is a schematic layout of a switching function for use at 4-cell boundaries;

Fig. 16 is similar to Fig. 15 and depicts an extended switching function for use at 4-cell boundaries with global signals and connections to programming bit signals;

Fig. 17 is a similar schematic diagram but for a preferred switching function for use at 16-cell boundaries;

Fig. 18 depicts a NOR gate circuit for automatically moving local signals to a flyover;

Fig. 19 depicts one implementation of the NOR gate of Fig. 18;

Fig. 20 depicts a first version of a function unit in accordance with the preferred embodiment of the present invention;

Fig. 21 depicts an alternative version of a function unit for use with the preferred embodiment of the present invention;

Fig. 22 depicts a further alternative version of a function unit for use with the preferred embodiment of the present invention, this version reducing the symmetry of the input selection multiplexers;

Fig. 23 is a further embodiment of a function unit similar to that shown in Fig. 20;

Fig. 24 depicts a schematic diagram of a 2-input function unit for use with the present invention;

Fig. 25 is a schematic diagram of a 3-input function unit for use with the present invention;

Fig. 26a depicts a preferred input/output architecture arrangement for each pad used with the FPGA of the present invention;

Fig. 26b is a table of the various I/O block modes for the architecture shown in Fig. 26a;

Fig. 26c shows an example circuit in which a global signal can be taken from several sources by a user programmable multiplexer;

Fig. 26d shows a schematic diagram of a switch used in the input/output architecture of Fig. 26a;

Fig. 27a shows the memory map and row and column address structures for accessing the register or node output of each cell in the CAL II array;

Fig. 27b is a diagrammatic representation of a RAM programmed FPGA which consists of an array of RAM cells within which are embedded active logic structures;

Fig. 27c is an enlarged representation of the RAM programmed FPGA shown in Fig. 27b depicting the word boundaries in the control store;

Fig. 28 depicts a RAM programmed array in which the row and column decoders have been replaced by match registers;

Fig. 29 is a table depicting the address bus format for use with an FPGA implemented by an embodiment of the CAL II architecture having 4096 cells;

Fig. 30 is a table depicting the area of the control store which is selected by using particular mode values;

Fig. 31a is a table showing the cell routing mode in which bytes read from and written to the control store have the format depicted in the table;

Fig. 31b is a table illustrating the cell function mode showing the byte read from and written to the control store in a particular format;

Figs. 32a, 32b, and 32c illustrate rows which are addressed by three different combinations of a row address and a wildcard register value;

Figs. 33a, 33b and 33c show the wildcard mask and shift register addressing circuits implemented with a standard address decoder;

Fig. 34a is a schematic block diagram of a mask register and alignment unit for use with a wildcard register for use with read cycles;

Fig. 34b is a schematic diagram of the internal layout of one of the switches of the mask register shown in Fig. 34a;

Fig. 34c is a table of state information which depicts bits presented in a right justified form on the external interface following a shift/mask operation;

Fig. 35 depicts a table showing the state access bits retrieved in an example access using the separation register technique;

Fig. 36a shows an alternative structure to that of Fig. 34a;

Fig. 36b shows how the circuit of Fig. 34a can be extended to support systems where the width of the input and mask registers is wider than that of the output bus;

Fig. 37 depicts a function unit similar to that shown in Fig. 20 which has been modified to support state access;

Fig. 38 depicts a structure in which duplicated bits of control store can exist on the same bit line as normal bits of control store;

Fig. 39 is a schematic diagram of the relationship of an FPGA, microprocessor and memory device in a typical application;

Fig. 40 is a diagrammatic representation of an FPGA architecture showing a 4 bit wide AND gate implemented on an FPGA in accordance with an embodiment of the present invention;

Fig. 41 depicts a diagrammatic representation similar to Fig. 40 but of a 16 bit wide AND gate implemented on four 4-cell x 4-cell blocks with the 16 cells arranged in one column of the array;

Fig. 42 depicts a PAL-type structure showing how the AND plane is structured and mated to an OR plane to form a general purpose logic block;

Fig. 43 depicts a one bit accumulator structure formed from a row of five cells in two 4-cell x 4-cell blocks;

Fig. 44 depicts a three bit accumulator with a look ahead carry;

Fig. 45 depicts a 16 bit accumulator with a carry look ahead which is a more complex arrangement but similar to that shown in Fig. 44;

Fig. 46 is a diagrammatic representation of a 4 bit synchronous counter implemented in a single 4-cell x 4-cell block;

Fig. 47 depicts a 16 bit synchronous counter realised on four 4-cell x 4-cell blocks having flyover routing resources; and

Fig. 48 depicts a 16:1 multiplexer realised in two 4-cell x 4-cell blocks, and which is implemented as a tree of 2:1 multiplexers.
DETAILED DESCRIPTION OF SOME PREFERRED EMBODIMENTS
Fig. 1 depicts an enlarged plan view of part of an FPGA
11 10 in accordance with a preferred embodiment of the 12 invention. A plurality of cells 12 are arranged in a 2-13 ~im~ncional array in orthogonal rows and columns on the 14 chip. Cells 12 are arranged into 4-cell x 4-cell blocks 14, and the 4-cell x 4-cell blocks are arranged into a 16-cell x 16 16-cell block 16. Cell block 14 is defined by routing 17 switches 18 and 20 at its cell boundaries. Another block 14 18 can be defined by routing switches 18 at all boundaries. It 19 will be seen from Fig. 1 that there are two types of such routing switches: routing switches 18 which form a boundary 21 between 4-cell x 4-cell blocks, and routing switches 20 22 which form the boundary between 16-cell x 16-cell blocks.
23 The routing switches 18 and 20 provide connections between 24 various wires and cells 12. The structure and function of these different types of routing switches will be disclosed 26 in detail later.
27 Fig. 2 depicts an enlarged portion of the area 21 shown 28 in broken outline in Fig. 1 with all the routing resources 29 shown. As will be explained with reference to Figs. 2, 3, 4 and 5, there are three main ways by which cells may be 31 connected to each other. As best seen in Fig. 3, at the 32 first level neighbouring cells are interconnected to each 33 other by neighbour interconnects 22 and 24. This is the 34 structure of the above mentioned CAL I array. As shown in Figs. 2 and 4, in each 4 x 4 block 14, additional wires or 36 conductors 26, 28, 30, and 32, hereafter referred to as 37 length-4 flyovers, are routed between the neighbour W O 94/1075~ PC~r/US93/1040 ~
2~473~3 22 1 interconnections. For each row or column having 4 cells 2 there are two length-4 flyovers. In Fig. 4, each pair of 3 flyovers for each row or column of cells is shown on one 4 side of the cells, whereas in Fig. 2 the flyovers are located on either side of a row or column of cells. In the t 6 top-most row of cells shown in Figs. 2 and 4 in cell block 7 14, length-4 flyovers 26 and 28 a~e provided. Length-4 8 flyover 26 conducts signals ,in the East direction and g length-4 flyover 28 conduc~ts signals in the West direction.
Vertical flyovers 30 (North) and 32 (South) are provided for 11 each column of 4 cells x 4 cells so that each cell 12 in the 12 4 x 4 array not only has neighbour interconnects 22, 24 but 13 can also interconnect to any of the length-4 flyovers 26, 14 28, 30 and 32. From Fig. 2 it will be seen that horizontally arranged flyovers 26 (E) and 28 (W) are 6 interconnected between switches 18 and 20 as are vertical 7 flyovers 30 (N) and 32 (S).
8 Each 4 x 4 block of cells 14 with the length-4 flyovers lg shown in Figs. 2 and 4 can itself be considered as being a repeatable cell. One can form an array of these cells each 21 of which communicates with its near neighbours as in Fig. 4.
22 At each position in this array there are two length-4 wires 23 and a length-l wire, the pair passing to the neighbouring 24 cell in opposite directions. Thus, this array of 4-cell x 4-cell blocks has four directed wires for providing output 26 to its nearest neighbours, four wires for receiving signals 27 from its four neighbors, and four wires for receiving input 28 from the length-4 wires directed in the four direction.
29 In the same way, for a 16 x 16 array length-16 wires 30 34, 36, 38, and 40 can be added in the same way as the 31 length-4 wires, as is shown in Figs. 2 and 5. Although 32 Fig. 2 represents only a part of the 16 x 16 array shown in 33 Fig. 1, it includes the length-16 flyovers 34, 36, 38, and 34 40. These length-16 flyovers are both horizontal and vertical. An illustration of this is best seen in Fig. 5 of 36 the drawings, which depicts 4-cell x 4-cell blocks and is at 37 a-higher level than the arrangement shown in Fig. 4. Since ~ WO94/107~ 2 1 ~ 7 3 ~ 3 PCT/US93/10404 1 each block includes four rows of cells and four columns of 2 cells, there are four East length-16 flyovers 34 for each 3 row of blocks, one length-16 flyover for connecting each row 4 of cells in the block. The same is true for the West, - 5 North, and South length-16 flyovers. From Fig. 2 it will be 6 seen that the horizontal and vertical length-16 flyovers 34, 7 36, 38, and 40 are inputs and outputs of the larger boundary 8 switches 20 and inputs to the smaller 4-cell x 4-cell g boundary switches 18. There is no direct connection from lo any of the leng~h-16 flyovers to an individual cell 12.
11 It is clear that this process can be repeated with the 12 larger switches 20 of the 16-cell x 16-cell blocks. For the 13 16-cell x 16-cell blocks, in switches 20, three wires are 14 provided in each direction, 3 exiting East, 3 exiting West, 15 3 entering East, and 3 entering West, for example. The next 16 step would be 64 x 64 cell blocks in which switches not 17 shown would have 4 connections in each direction between 18 neighbouring blocks. The arrangement described above 19 defines a hierarchical cellular array built on top of the 20 simple array shown in Fig. 3, with a variety of routing 21 resources whose lengths scale by a factor of 4 each level of 22 the hierarchy. If the level of the hierarchical array is 23 represented as L, with L = 0 for the cellular array shown in 24 Fig. 3 (neighbour interconnections only), and the scale 25 factor applied to the array at each stage of the hierarchy 26 is represented as S, which in this example is 4, the flyover 27 wire lengths and the array block sizes (the side length of 28 the blocks) in basic cells for a given hierarchical level 29 are given by S to the power L. For example, 4 = 1 for the 30 neighbour interconnect array. Except for the highest-level 31 switches, there will be 2L+l wires in each direction for a 32 total of 2(2L+l) wires between blocks on a level L boundary 33 switch. At the highest level, the switch does not have 34 wires connecting to a higher level, so there are 2(2L) wires 35 entering or exiting that boundary switch. Normally, the 36 width and height of an FPGA chip in cell units will be an 37 integer multiple of SL, where L is the highest level of W O 94/10754 ~ 1 47 ~ ~ ~ PC~r/US93/1040 1 interconnect. Note that for S=4, the maximum number of 2 levels = log4 chip width, assuming chip width is equal to 3 chip height, but it may be convenient to provide fewer 4 levels. It should also be clear that while S=4 appears to be particularly attractive for the scale factor, other 6 values can be used, for example, S=2, S=3, S=5, and S=8. It 7 will~also be appreciated that th~ process of hierarchical 8 scaling can be applied to base cells with different g connections or can be started with a cluster of cells lo representing the basic repeating unit at level 0, i.e. the 11 arrangement as shown in Fig. 3.
12 The provision of different levels of routing resources 13 allows the number of segments required to route point to 14 point (cell to cell) on the FPGA array to be min;mised. If a straight line path is considered between two points or 16 cells on the array with neighbour connect only, then the 17 number of routing segments would equal the number of cells 18 between the two points. In contrast, as best seen in Fig.
19 6, the hierarchical interconnect results in the number of segments being proportional to the logarithm of the distance 21 between the origin or source and destination cells (plus a 22 few local segments). For example, for a source at cell 2 23 travelling to cell 12 via cell 3, the effective distance via 24 flyovers on the hierarchical routing arrangement is 5, whereas the neighbour cell routing distance is 10. Fig. 7 26 shows the 64-cell x 64-cell structure of a chip which 27 embodies the CAL II architecture. Note that length-64 28 flyovers (level 3) are not provided since with only a single 29 64-cell x 64-cell block they do not provide a significant reduction in routing delays.
31 With the CAL II structure all the wires in the array 32 are directed, and have a single source. Therefore, 3-state 33 drivers are not used. In addition, it will be understood 34 that in the array the connections are completely symmetrical, that is to say that if the array is rotated or 36 reflected, the structure remains unchanged. Both these 37 properties provide considerable advantage. Firstly, multiple-1 source wires, although allowing additional flexibility, can 2 result in a considerable area overhead, and experience 3 contention where different drivers attempt to place opposite 4 values onto the wire. Contention causes heavy power - 5 dissipation and can result in device failure. This is 6 obviated by the single source arrangement of CAL II.
7 Secondly, symmetry of the array simplifies the CAD software 8 which maps user designs onto the array and allows g hierarchical blocks of those designs to be rotated or lo reflected in order to provide better utilisation of the 11 array. It will be understood however that the principle of 12 hierarchical scaling can be successfully applied to arrays 13 which are not symmetrical, It should also be understood 14 that although the previous discussion referred only to single wires, the hierarchical scaling technique is equally 16 applicable where wires are replaced by multi-bit buses 17 rllnn i ng between the blocks.
18 Fig. 8 depicts a block diagram of a 64 x 64 CAL II
19 array. Fig. 8 is essentially a floor plan of the array and demonstrates that there are a row decoder 40 and column 21 decoder 42 for addressing the RAM control store and a total 22 of 128 I/O lines. In addition, there are buffers 44 for 23 global signals and a global I/O 46 associated with the 24 global buffers. There are 15 address and 32 data lines as 25 well as the 128 I/O lines. In addition, there are four 2 6 global inputs G1 to G4, a reset input and an FHZ input which 27 forces all user outputs to a high impedance state.
Fig. 9 is a logic symbol for the CAL II array shown in Fig. 8 and depicts the programming interface for static RAM using chip enable (CE), chip select (CS) and read/write (Wr) control signals. The CE signal starts the programming mode, and normally it will be fed to all chips in a large array. The CS signal may be used to address a single chip in an array of chips, such as the 4096-cell embodiment of the CAL II architecture, and read or write data to the addressed chip. Timing on these signals is compatible with common SRAM parts such as the HM 628128 (Hitachi) with a 50 ns cycle time. The SRAM programming interface is supplemented by additional hardware resources designed to minimise the number of processor cycles required for reconfiguration. These resources are initially inactive so that the device looks exactly like an SRAM on power-up.
Fig. 10 is an enlarged schematic view of one of the cells 12 of the FPGA shown in Figs. 1 and 2. Firstly, the cell 12 is shown as having 8 neighbour interconnects, 2 to each of the cells designated North, South, East and West. In addition, cell 12 is also connected to the East and West flyovers 26 and 28 and the North and South flyovers 30 and 32, respectively. Within cell 12 is a function unit 48, and also within cell 12 are various switches 50, 52, 54 and 56 for receiving signals from the respective neighbour interconnects. The SELF output of function unit 48 can be connected to lines Nout, Sout, Eout, and Wout through multiplexers 50, 52, 54, and 56, respectively. Also, in cell 12, function unit 48 receives input from three multiplexer switches 58, 60 and 62, which receive inputs from neighbouring cells and flyovers and which generate three outputs, X1, X2 and X3, respectively.
Function unit 48, one of which is present in each cell 12, is capable of implementing any combinational function of two boolean variables, A and B. Additionally, function unit 48 can implement one of several three-input functions, namely a two-to-one multiplexer with true or inverted inputs, or a D-type edge-triggered flip-flop with true or inverted clock, data and reset inputs. These functions are illustrated in Fig. 11.
As indicated above, each cell 12 has four neighbour inputs and four inputs from flyovers. From Fig. 10 it will be seen that any neighbour input can be connected to any of the function unit inputs X1, X2 and X3 via multiplexers 58, 60, and 62, and then to the cell neighbour outputs Nout, Sout, Eout, and Wout via programmable multiplexers 50, 52, 54, and 56. The cell function unit output is available to an external device by reading a bit in the control store. This allows applications to read results from CAL II computations. In addition, register values within circuits implemented on the array can be set by writing to the control store. Vertically adjacent cells have state access bits connected to adjacent bit lines in the RAM: this allows registers of the user's design implemented in the CAL II array to be read and written 8, 16, or more bits at a time, according to the selected width of the data bus. Thus, it will be appreciated that the CAL II array architecture effectively supports the "active-memory" model of computation, in which operands can be written to memory locations and results read from memory locations.
The routing associated with a cell is best described with reference to Figs. 12, 13 and 14 of the accompanying drawings. Fig. 12 shows the basic routing resources provided in cell 12, as shown in Fig. 10. The designation "SELF" refers to the output of function unit 48, which implements the logical operations required by the design. It will therefore be seen that within the cell the routing requires four 4:1 multiplexers 50, 52, 54 and 56, respectively, for routing the signals onto Nout, Sout, Eout, and Wout. Each of those multiplexers receives the SELF signal from function unit 48, and each multiplexer can route signals from the other three directions; that is, for multiplexer 56 (Wout), the output can come from North, South, East or SELF. Likewise, the other multiplexers can select from the various other inputs to provide the respective output. The implementation of these multiplexers in CMOS technology is disclosed in U.S. Patent 5,243,238 (the CAL I patent); all the other multiplexers in the array can be implemented using this technique.
At the next level in the hierarchy, that is, at the junction of the 4-cell x 4-cell blocks, another switching function must be provided. This is best seen in Fig. 13, which depicts a switch 18. Fig. 13 shows the potential inputs and outputs of switch 18. Switch 18 handles only the horizontal East/West-going signals, but the switches for the vertical signals are identical. Switch 18 has six inputs: two inputs from cells 12, two inputs from the length-4 East and West flyovers, and two inputs from the East and West length-16 flyovers. Because the 4 x 4 boundaries occur at the same positions in the array as neighbour boundaries, it is advantageous for switch 18 to include the direct neighbour connections as well as additional connections to longer wires. Including both neighbour and longer wires allows a design which uses only neighbour connections to be mapped onto the hierarchical array without using additional switching units. In general, the outputs of a switch at level L in the array will be a superset of (that is, will include) the outputs of a switch at level L-1. As is seen in Fig. 13, it is also convenient to connect the longer length-16 wires into switch 18, which serves lower hierarchical levels. In a larger array using higher levels of hierarchy, length-64 or longer wires may also be provided as inputs to the switch. However, to preserve the hierarchy, the only outputs of switches at 4-cell boundaries are length-1 (neighbour) and length-4 signals.
Fig. 14 depicts the inputs and outputs of the switching function of a switch 20 located at the boundaries of the 16-cell x 16-cell blocks (see Figs. 1 and 2). Because 16 x 16 boundaries also occur at 4 x 4 cell boundaries, in order to preserve the hierarchy and regularity of the array the 16-cell boundary switches are arranged to offer all the routing permutations of the 4-cell boundary switches, but also to offer additional options related to the length-16 wires. This is illustrated by the arrangement shown in Fig. 14. In Fig. 14, the hierarchy stops at 16 x 16 blocks and there are no length-64 wires. However, in a larger embodiment, switches using longer wires can be provided.
Fig. 15 depicts a preferred switching function for use at 4-cell boundaries. East, West, North and South are the same since the switches are symmetrical, and therefore only the East and West switching functions are shown. Switch 18 has two 3:1 multiplexers (that is, for the East and West cells) and two 5:1 multiplexers for the East length-4 flyover and the West length-4 flyover. With reference to Figs. 1 and 2, it is clear that the switch 18 exists physically at the East/West boundary between blocks. There is a similar switch at the edge of the array to handle connections to the I/O pads, as will be later described with reference to Fig. 26a.
Fig. 16 depicts a switching function similar to that shown in Fig. 15, but which accommodates global signals. Switch 18a of Fig. 16 has multiplexers for driving the East and West cells which are the same as those shown in Fig. 12. However, 8:1 multiplexers 72, 74 receive inputs from the neighbouring East and West cells, the length-4 East and West flyovers, the length-16 East and West flyovers, and additionally three further inputs: the horizontal global signals G1 and G2, and a constant 0 signal. Although not shown in the interests of clarity, it will be appreciated that a corresponding switch used for vertical signals will have the two vertical global signals G3, G4. Otherwise, switch 18a for the vertical signals will be the same as the horizontal switch 18a. 8:1 multiplexers are preferred so as to allow a straightforward implementation using three RAM-controlled pass transistors. However, in an embodiment in which additional multiplexer inputs are acceptable, all four global signals G1, G2, G3, and G4 will be provided as inputs to the flyover multiplexer.
It is also desirable that routing delays from one cell to a neighbouring cell across 4-cell and 16-cell boundary switches should be negligible; consequently, to achieve this, the number of sources (inputs) to the multiplexers which connect neighbouring cells must be minimised.
Fig. 17 depicts switch 20 located at a 16 x 16 cell boundary. As before, only a switch 20 for the East/West direction is shown, but it will be appreciated that, because of symmetry, a similar switch 20 is necessary for signal conductors in the North/South direction. Switch 20 includes six multiplexers: two 4:1 multiplexers 76 and 78 for driving neighbour wires, two 8:1 multiplexers 80, 82 for driving length-4 flyovers, and two 7:1 multiplexers 84 and 86 for driving length-16 flyovers. The 8:1 and 7:1 multiplexers also receive inputs from the length-4 and length-16 flyovers. In addition to the length-4 switches of the form of Fig. 16, switch 20 includes additional connections for the corresponding multiplexers for driving the length-16 flyovers. In Fig. 17, the BIT signals in multiplexers 84 and 86 are from the RAM bit line. Corresponding North/South switches have a WORD line from the RAM instead of the BIT line in multiplexers 84 and 86.
The switches shown in Figs. 15 through 17 and described above for the 4-cell x 4-cell and 16-cell x 16-cell boundaries can be extended for use in larger arrays if required. An appropriate rule for deciding what is connected to each multiplexer in a switch is that a multiplexer outputting a signal at level L in the hierarchy should have inputs from
1) signals going in the same and opposite directions as the multiplexer output at level L+1, if level L+1 exists,
2) a signal going in the same direction as the multiplexer output at level L, and
3) signals going in the same and opposite directions at levels L-1, L-2, etc., down to level 0.
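This rule can be expressed compactly in code. The sketch below (illustrative Python with invented names, not part of the specification) enumerates the inputs of a multiplexer driving a level-L output, and reproduces the 3:1 and 5:1 multiplexer sizes of the Fig. 15 switch:

```python
def mux_inputs(L, top_level):
    """Inputs of a multiplexer driving a level-L wire, following the rule:
    same and opposite directions at level L+1 (if present), the same
    direction at level L, and both directions at every level below L."""
    inputs = []
    if L + 1 <= top_level:
        inputs += [(L + 1, "same"), (L + 1, "opposite")]
    inputs.append((L, "same"))
    for lower in range(L - 1, -1, -1):
        inputs += [(lower, "same"), (lower, "opposite")]
    return inputs

# 4-cell boundary switch (Fig. 15) in an array whose longest wires are length-16
# (top level 2):
print(len(mux_inputs(0, 2)))  # 3 -> the two 3:1 neighbour multiplexers
print(len(mux_inputs(1, 2)))  # 5 -> the two 5:1 length-4 flyover multiplexers
```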
It should also be appreciated that this is not the only possible rule, and that there are a large number of possible switching functions which can be used in a hierarchical array. Many potential modifications to the switches presented here will be immediately apparent to those of ordinary skill in the art. One modification is to include provision for 90-degree turns in the switches by providing inputs from a perpendicular edge (via an additional selector, in conjunction with extra routing resources in the switches themselves, and a different choice of input signals to the multiplexers). Another modification is elimination of the 180-degree wraparound options (for example, eliminating the West neighbour input to the East length-4 flyover output).
The additional routing resources provided by the hierarchical FPGA can be used in four separate ways:
1. User designs at the lowest level; FPGA hardware selects longer wires. The user can design using the simple neighbour routing model, ignoring the availability of longer connections. Hardware in the FPGA detects when a signal is placed on a series of connections straight through a row or column of cells extending the full length of a flyover, and places the signal on the flyover as well.
2. User designs at the lowest level; CAD software selects longer wires. In this case, low-level CAD software automatically detects nets in the layout which can benefit from the new routing resources and takes advantage of them to speed up the design. With this methodology the flyover wires will often carry the same signal as the neighbour connect at the same position in the cellular array. Adding redundant wiring of longer lengths is easy for the software, and is independent of the number of levels of routing provided by a given FPGA chip, thus providing design portability between different chips.
3. User designs at all levels; FPGA hardware selects longer wires if they are usable and the control store indicates they are unused. In this case the FPGA hardware uses longer wires when appropriate, but some long wires will have already been taken up by the user. The hardware (an extra NOR gate input; see Fig. 18, discussed below) must detect whether the control store bit has marked the longer wire as unused. In one embodiment, an extra bit is provided for the user to disable the automatic long line selection.
4. User controls selection of long lines. The user can design using a programming model which includes the longer wires, and make explicit assignments to the extra routing resources. In such an embodiment, no automatic selection of long lines by hardware is provided. With this embodiment, CAD software may be selected which optimises the placement of signals on the various lines in the device.


It is advantageous to use the CAD software to detect places in the completed design where wires can advantageously be transferred to longer routing resources. Such occasions can arise where two sub-units of the user's design, which themselves must use shorter connections, are placed side by side. Providing redundant wiring, for example on both length-4 and length-16 flyovers, increases speed; and substituting a longer wire for four shorter wires increases speed and also frees the shorter wires for other uses.

Hardware Selects Longer Wires
The chip may contain special circuitry which can determine when to use longer routing resources. For example, a logic gate can detect when a path through 4 neighbour interconnects is used, and then automatically route the signal through a length-4 flyover. This hardware option provides more portability. If a company produces new "fast" versions of existing chips by adding longer wires and related logic gates to automatically select the longer wires, existing user designs can be implemented on these faster chips without requiring any changes or effort by the user. In addition, direct hardware selection of the longer routing wires simplifies the CAD software, allowing it to run faster. Chip hardware which automatically selects faster routes is particularly attractive for applications which dynamically reprogram FPGA chips, where a priori determination of long routing lines is difficult.
Automatic selection of long lines is an extension of a technique disclosed in U.S. Patent 5,243,238, where a NOR gate was used in 4:1 multiplexers to detect the state of two RAM cells and output a 0 corresponding to routing "straight through" (for example, North to South) the multiplexer.
Fig. 18 shows a NOR gate circuit which automatically uses a length-4 flyover in response to a user's design which specifies a path through four adjacent neighbour interconnects in a 4 x 4 block. The portion of cell 12-1 which detects that RAM cells M1 and M2 both carry logic 0, indicating that a signal from the West is to be routed to the East, is illustrated. Corresponding portions are present in cells 12-2 through 12-4, but for simplicity are not illustrated. OR gate OR1 outputs a logic 0 only when RAM cells M1 and M2 both carry logic 0. When the corresponding RAM cells controlling logic cells 12-2 through 12-4 all carry logic 0, four logic 0 signals are input to NOR gate NOR1, causing NOR gate NOR1 to output a logic 1. Switches 18-1 and 18-2 show only the circuits which automatically select the length-4 flyover. Multiplexer MX1, located in switch 18-1, is controlled by the logic 1 output of NOR gate NOR1 to place its input signal onto flyover line 134. Multiplexer MX2 in switch 18-2 is controlled to pass the signal on line 134 in response to this logic 1 from NOR gate NOR1. Thus switch 18-2 will provide an output signal OUT faster than if the input signal IN had been routed through cells 12-1 through 12-4.
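The behaviour of this circuit can be modelled in a few lines. The sketch below (an illustrative Python model, not a netlist) has each cell contribute the OR of its two routing control bits, with the NOR of the four OR outputs enabling the flyover exactly when every cell in the block is configured to route West straight through to East:

```python
def or1(m1, m2):
    """OR gate OR1 in one cell: 0 only when RAM bits M1 and M2 are both 0,
    i.e. the cell routes its West input straight through to the East."""
    return m1 | m2

def nor1(cells):
    """NOR gate NOR1: outputs 1 (select the length-4 flyover via MX1/MX2)
    only when all four cells in the block are configured straight-through."""
    return int(not any(or1(m1, m2) for m1, m2 in cells))

print(nor1([(0, 0), (0, 0), (0, 0), (0, 0)]))  # 1: flyover selected
print(nor1([(0, 0), (1, 0), (0, 0), (0, 0)]))  # 0: normal neighbour routing
```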
Fig. 19 shows an implementation of NOR gate NOR1 of Fig. 18 as a single metal wire 191 extending through cells 12-1 through 12-4 into multiplexers MX1 and MX2. The four OR gates OR1 can be physically positioned in their respective cells, so the layout of the NOR gate of Fig. 18 is very compact. The additional NOR gates can be implemented using a standard pull-down technique: an extra wire with transistors which can pull it down at any one of 4 positions, and a p-type device at the end of the wire which acts to pull the wire high in the absence of a pulldown signal. Using several pulldowns and a p-type pullup has the advantage of allowing a distributed layout where the main overhead is a single metal wire routed in parallel with the flyover.
The signal from the 4-input gate NOR1 can be fed to cells 12-1 through 12-4 so that function unit 48 inputs (see Fig. 10) use the flyover inputs rather than the neighbour interconnects (for example, multiplexer 58 can be programmed to select its X1 signal from line W4 instead of line W). In a similar way, the technique can be scaled up so that signals travelling on a group of 4 length-4 flyovers under a length-16 flyover running in the same direction can be detected. This is achieved using a further 4-input gate, e.g. a NOR gate, which takes as its inputs the outputs of the four 4-input NOR gates NOR1. The output of this further gate can be fed to the switches at both ends of the length-16 flyover to ensure that the faster length-16 flyover path is used for signal routing. This output can also be fed to intermediate length-4 flyovers to allow signals to be taken directly from the faster path.
Hardware selection of flyovers is transparent to the user, who obtains the benefit of an automatically faster chip, and it results in a simpler programming model. The technique can be extended straightforwardly to longer wires, i.e. length-64 or length-256 flyovers, for blocks at higher levels of the hierarchy. This technique is equally applicable when the flyovers scale by factors other than four. Similar techniques can be used in other areas where a "short-cut" path is provided to supplement routing resources.

User Control of Wire Length
The user can also design using a programming model which includes the longer wires, and make explicit assignments to the extra routing resources. In this case, extra density as well as extra speed is achieved by assigning different nets to the various levels of interconnect at the same point in the cellular array. For example, the CAD software can select a length-16 wire to carry a signal over a sub-unit without disturbing the local interconnect in that sub-unit. With a design style using a programming model which includes the longer wires and makes explicit assignments to the extra routing resources, blocks of the user design may have to be aligned on 4-cell or 16-cell boundaries in the array. Replacing neighbour wires with longer wires can change timing and power consumption, and these changes may be undesirable to the user. To allow user control of long line replacement, an additional bit is added to the control store, and must be set in order to allow automatic addition of longer wires.
Function Unit 48 of Cell 12, Several Embodiments
Fig. 20 is a schematic block diagram of a multiplexer-based function unit capable of implementing the functions of the function unit disclosed in the corresponding published PCT Application WO90/11648 (equivalent to U.S. Patent 5,243,238), but with additional functions. Experience with the CAL 1024 chip, which implemented the CAL I architecture, indicated that it would be desirable to include two new cell functions: a 2:1 multiplexer and a D-register with clear. These are both three-input functions. Use of a multiplexer to speed up carry propagation in cellular logic structures is well known in the literature; see, for example, "Fast Carry-Propagation Iterative Networks", Domenico Ferrari, IEEE Transactions on Computers, Vol. C17, No. 2, August 1968, and European patent application serial no. 91 304 129.9, owned by Xilinx, Inc., entitled "Logic Structure and Circuit for Fast Carry", published 13 Nov 1991 under publication no. 0 456 475 A2. The multiplexer function is useful for building adder and counter circuits, and a D-register serves many users' expressed preference for TTL-like D-registers instead of latches. Both functions were considered for the original CAL 1024 part. One problem with including three-input functions in the CAL 1024 architecture was that only the four neighbour inputs could be selected as function inputs. This meant that for a three-input function, inputs had to come from three of the four neighbour directions, which is hard to achieve without using adjacent cells for routing alone. Use of a cell for routing alone reduces density. With the new routing architecture of CAL II, it is attractive to allow the length-4 flyovers to be used as function unit inputs, providing a total of eight possible inputs. The additional flyover routing resources mean that the three-input functions can be used and density maintained.
Fig. 20 shows an embodiment of function unit 48, depicted in Fig. 10, which can easily support a 2-input multiplexer and a D-register with clear. Function unit 48 consists of three 8:1 multiplexers 58, 60, 62, each of which receives eight inputs. The eight inputs are from the four immediately neighbouring cells plus the North (N4), South (S4), East (E4) and West (W4) length-4 flyovers, and multiplexers 58, 60 and 62 provide outputs X1, X3 and X2, respectively. The three outputs X1, X2 and X3 are fed to 2:1 multiplexers 94, 96 and 98, which provide conditional inversion of X1, X2 and X3, creating three further outputs Y1, Y2 and Y3. These are fed to a further 2:1 multiplexer F, which is controlled by the output of the Y1 multiplexer. It will therefore be appreciated that function unit 48 is based on the 2:1 multiplexer F, which is the only multiplexer in the cell controlled by a data signal (the Y1 output) rather than by the control store. As well as implementing all boolean functions of two variables, the 2:1 multiplexer F can be used directly with true or inverted input variables. As will be later described, the 2:1 multiplexer function is useful in a wide variety of circuits, including adder carry chains. Fig. 11 shows the two-input boolean functions and the three-input (two data, one control) multiplexer and three-input D-register functions implemented by Fig. 20. Generating combinational logic functions using 2:1 multiplexer based function units is well known and is disclosed in the CAL I application. Function unit 48 includes a D-type edge-triggered flip-flop 100 which allows it to achieve certain logic functions. D-register 100 receives the output Y1 as its clock input, the output Y3 as its clear input, and the output Y2 as its data input. In another embodiment, flip-flop 100 may have an enable input connected to Y3 and no clear input. Alternatively, a clear input may be provided and connected to a special global clear signal fed to every cell. The F multiplexer receives these same three signals, as shown in Fig. 20. The output of the F multiplexer and the output (Q) of D flip-flop 100 are fed to a further 2:1 multiplexer 102, the output of which is designated as the "self" output. The path through multiplexer 98 to the function unit output has been optimised for speed and, where possible, signals on the critical path in the user's logic use this input in preference to X1 or X3.
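A behavioural sketch of this function unit follows, based on my reading of the description above. It is illustrative only: the select polarity of the F multiplexer (Y1 high selecting Y2) is an assumption made for the example, and the register is shown simply as a stored value rather than as an edge-triggered element.

```python
def function_unit(x1, x2, x3, inv1, inv2, inv3, q, route_register):
    """Combinational view of the Fig. 20 function unit. Conditional
    inversion produces Y1..Y3; F is the only data-controlled multiplexer;
    multiplexer 102 selects the F output or the register output Q."""
    y1 = x1 ^ inv1
    y2 = x2 ^ inv2
    y3 = x3 ^ inv3
    f = y2 if y1 else y3          # F multiplexer (select polarity assumed)
    return q if route_register else f

# With Y3 held at 0, F computes Y1 AND Y2 -- e.g. A AND B with A on X1, B on X2:
print(function_unit(1, 1, 0, 0, 0, 0, q=0, route_register=False))  # 1
print(function_unit(1, 0, 0, 0, 0, 0, q=0, route_register=False))  # 0
```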
With the function unit shown in Fig. 20, three-input function capabilities beyond those of the existing CAL 1024 chip function unit are achieved. This is in part due to the symmetry of the inputs to function unit 48, because any of the neighbour and length-4 flyover inputs can be selected as a source for each of the function unit inputs, as can be seen by inspecting multiplexers 88, 90 and 92. Two drawbacks of this function unit 48 are a relatively large delay caused by the 8:1 multiplexers, and the fact that a constant 0 cannot be forced onto the clear input of the register to produce a non-clearable flip-flop. Instead, a source for the clear input must be found: for example, a global signal via one of the switch units on the 4 x 4 cell block boundaries, or an adjacent cell used to form a constant 0. This is a further advantage of having constant 0 as a source for the length-4 flyover wires.
Fig. 21 depicts an alternative function unit 114 which can serve as function unit 48 in all of the cells 12 shown in Fig. 10 and Fig. 2. Function unit 114 provides a constant source for the register clear input and requires the same number of bits of control store RAM as the Fig. 20 embodiment: one more bit controls multiplexer 122 and one less bit controls multiplexer 118. But function unit 114 is less symmetrical in its input selection than the function unit of Fig. 20, as can be seen from an inspection of multiplexers 116, 118 and 120. Lack of symmetry in function unit 114 complicates the software which implements a design, thereby making it more difficult to produce effective CAD tools. As before, multiplexers 116, 118 and 120 provide the three outputs X1, X3 and X2, respectively. The structure is otherwise the same, except for the inclusion of a 4:1 multiplexer 122 between multiplexer 118 and 2:1 multiplexer F. The output multiplexer 124 receives outputs from the F multiplexer, still controlled by Y1, and from the D-type flip-flop 121.
A further alternative version of a function unit, 126, to be used as function unit 48 in cell 12, is depicted in Fig. 22. In this design, the symmetry of the input 4:1 multiplexers 128, 130 and 132 is further reduced. The attractions of Fig. 22 are the lower fan-in on the length-4 wires, which are connected to 1 multiplexer per cell rather than 3 per cell, and the fast X2 path through the function unit itself, which improves performance over the function unit of Fig. 20.
Fig. 23 shows a further variation on the function unit of Fig. 20 which implements the same operations. The additional 2:1 multiplexers 95 and 97 are controlled by the same bit of RAM which provides for routing the register to the self output via the combinational function multiplexer F, and the 2:1 multiplexer 102 is deleted. The advantage of this unit is that the number of multiplexers between input and output for combinational functions using the X3 input is reduced from 4 to 3, while all other paths still require 4 multiplexers; thus X3 provides a fast path through the function unit without requiring more control store RAM than the function unit of Fig. 20.
The performance of multiplexer-based switching structures such as those in Figs. 18, 21, 22 and 23 can be improved using standard techniques, at the expense of increasing the area required to implement them. However, because of the area cost, it is not considered desirable to increase the area of more than a small subset of multiplexers. In the network of logic gates in a user's design, a critical path corresponding to the longest signal delay between input and output can usually be identified. To improve the performance of the network of logic gates it is necessary to reduce the delays along the critical path; reductions in delay elsewhere will have no effect. CAD software tools are available which can automatically determine the critical path through a block of combinational logic and implement the critical path using fast-path hardware to reduce delay.
Fig. 24 depicts an implementation of a gate-based function unit, indicated by reference numeral 133. In this case, there are two 8:1 multiplexers 134 and 136, which provide outputs X1 and X2, respectively. X1 and X2 are optionally inverted; then 4:1 multiplexer 138 selects one of the four functions of the resulting variables, that is, AND, OR, XOR or DREG. Function unit 133 has only two input variables, similar to the function unit used in the CAL I FPGA as described in WO90/11648 (U.S. Patent 5,243,238); consequently, it cannot implement a 2:1 multiplexer as a cell function. This design shows that multiplexer-based function units are not the only possible way of implementing functions in the cellular array. Another possibility would be to use a 4-bit RAM lookup table addressed by X1 and X2, as discussed in U.S. Patent 4,870,302, reissued as U.S. Reissue Patent Re34,363, invented by Ross Freeman, entitled "Configurable Electrical Circuit Having Configurable Logic Elements and Configurable Interconnects".
Fig. 25 depicts a further gate-based version of a function unit, 140, to implement function unit 48. Function unit 140 is somewhat similar to that shown in Fig. 23, except that 4:1 multiplexer 138 is combined with the neighbour routing multiplexers, which become 8:1 multiplexers 142, 144, 146 and 148. In addition, 8:1 multiplexer 145 provides an X3 output which passes through an inverter 150. There is also a 2:1 multiplexer 152 to generate a Y3 output which can be fed to the D-register, and a 2:1 multiplexer 156 which is controlled by data signal Y3 rather than the control store. Thus, five function outputs Z1, Z2, Z3, Z4 and Z5 are generated. Therefore, in this structure cell 12 can compute several different functions of its input variables (X1 to X3) simultaneously and route them to the neighbour outputs (Nout, Sout, Eout, Wout). Offering several outputs is advantageous for important functions like adders, but it requires extra control memory and more chip area; thus function unit 140 is harder to design with.

One Fast Path
The version of function unit 48 shown in Fig. 20 can be constructed in such a way that the path between one of the input variables X1, X2 or X3 and the function unit output SELF is optimised for speed using standard techniques, while the other paths are not. Such a function unit 48 requires significantly less area than a function unit 48 in which all input paths are optimised. Software may be written so that signals on the critical path in a user design can be directed, where possible, to use the optimised X2 input to the function block, ensuring that critical path signals incur minimal delay. This may be done by having the software make selective changes to the user design, taking advantage of the symmetrical nature of the function unit, which allows inputs of combinational functions to be permuted. Fig. 11 shows logic functions of A and B which are available from the embodiment of function unit 48 shown in Fig. 20. For example, the function X1 AND (NOT X2), with X1 = A and X2 = B, where A is on the critical path, can be transformed to (NOT X1) AND X2, with X1 equal to B and X2 equal to A, by making changes to the sources of X1 and X2 which drive function unit multiplexers 58, 60, and 62 in the local cell. Such a technique allows most circuits to obtain performance similar to that available from a function unit where the delays through X1 and X2 are both equally fast, but with much less area overhead.

Input/Output Structure
Fig. 26a depicts a schematic block diagram of the input/output architecture of the embodiment of the CAL II array illustrated in Fig. 8. The circuit of Fig. 26a occurs on the East side of the chip. At the edge of the array of cells 12 there are programmable input/output (I/O) blocks 110. Each I/O block 110 is connected to an external pad.
Three bits of control store RAM are provided to each I/O block for selecting the input threshold voltage (LEVEL), selecting the pad slew rate (SLEW), and providing an input pull-up resistor. Flexibility is increased by using additional control store bits to gain additional control over pad parameters, in this case slew rate and threshold voltage level, or to provide switchable pull-ups.
There is one I/O block 110 for every two cells 12-A and 12-B along the West and East edges of the chip, and also one external pad for every two cells along the North and South edges of the chip. This arrangement has the advantage of reducing cost by reducing the number of package pins required. Normally, wide (16-32 bit) buses are connected to the West and East edges, as shown in Fig. 8, so that the chip registers latching these buses will be arranged vertically (as will be later described in detail) and hence are efficiently accessible through the control store interface by the host processor. Many variations on this allocation of pads are possible, including providing one I/O block and pad per external cell position on all edges, and putting one pad per cell on two adjacent edges and one pad per two or four cells on the other two edges.
With regard to the architecture depicted in Fig. 26a, each I/O block 110 has a data input (OUT) and an enable input (EN), each of which is connected directly to cells 12 on the periphery of the CAL II array. Similarly, I/O block 110 can provide on its IN line a signal to cell 12-A or cell 12-B, or onto West length-4 flyovers W4B or W4A, or West length-16 flyovers W16B or W16A. Likewise, I/O block 110 can receive on its OUT line a signal from East flyovers E4B, E4A, E16B, or E16A, as well as from cell 12-A or 12-B. Thus, the data input to I/O block 110, which is a pad output (labelled OUT in the I/O block), receives data from switch 112 and is enabled by the EN output of switch 112. This design minimises delays between the internal array and off-chip by eliminating separate I/O control logic signals. By placing suitable values on the data and enable input signals (which could be achieved by using constant cell functions 0 and 1), I/O block 110 can be programmed to operate in input, output, bi-directional, or open drain mode, as shown in the I/O block mode table of Fig. 26b.
Fig. 26d shows one embodiment of switch 112 of Fig. 26a. Eight multiplexers are provided, as shown. Signal lines and flyover lengths are labelled as shown in Figs. 15-17; thus, the details of Fig. 26d, which reference the same signals, are not described further. Supporting all 6 input signals from the two cells allows data and enable signals to be sourced from either cell, making it less likely that a pad will go unused because of routing constraints. Additional inputs are provided, in particular constants 1 and 0, and a bit line, as inputs to the enable multiplexer. The constant values are particularly useful for the enable signal when the pad is to function as an input (enable=0) or as an output (enable=1) rather than as a bidirectional pad. Constant values on the data signal and a computed value on the enable signal produce open drain pull-up (in=1) or pull-down (in=0) pads. The I/O architecture of the CAL II array has been designed to minimise input and output delay by eliminating extra pad control logic and, consequently, represents a considerable simplification over the CAL I pad control architecture.
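The relationship between the OUT and EN sources and the resulting pad behaviour can be summarised as follows. This is a reconstruction from the description above, not a reproduction of the Fig. 26b table itself, and the function and its labels are invented for illustration:

```python
def pad_mode(out_source, en_source):
    """Pad mode implied by what drives the OUT and EN inputs of I/O block
    110: "0"/"1" denote the constant sources in switch 112; anything else
    is a computed logic signal."""
    if en_source == "0":
        return "input"                       # output driver always disabled
    if en_source == "1":
        return "output"                      # output driver always enabled
    if out_source in ("0", "1"):
        # constant data with a computed enable: open drain behaviour
        return "open drain (in=%s)" % out_source
    return "bidirectional"

print(pad_mode("logic", "1"))  # output
print(pad_mode("0", "logic"))  # open drain (in=0)
```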
The I/O routing switches for the North, South, and West sides of the chip are derived in the same way as the East routing switch shown in Figs. 26a and 26d.
In addition to I/O signals, input and output signals at positions which do not have associated external I/O pads can be connected to programming signals on the FPGA (data bus, address bus, mode, read/write, CE, CS, FHZ). This allows circuits implemented on the FPGA to monitor these important external signals. Thus, a circuit implemented on the FPGA can detect external reads and writes to the status or control memory and automatically clock itself to process an input value or produce the next output value.
When global signals are provided, they can be driven from logic signals at the edge of the array in the same manner, as well as from dedicated input pads, as is conventionally the case with the CAL I arrangement. This use of logic connections from the edge of the array increases the flexibility of the device and allows additional chip logic to be eliminated, further reducing the part count.
Fig. 26c shows an example circuit in which a global signal on line 205 can be taken from four possible sources by a user-programmable multiplexer. A programmable multiplexer may be used to select between various potential sources for global signals, so the number of potential sources can be larger than the number of global clock lines in the array. First, multiplexer 207 can take an external signal, such as a clock signal, which is applied to pad 206, buffered by input buffer 203 and provided to multiplexer 207. Second, multiplexer 207 can select an internally generated power-on-reset (POR) signal on line 204, which can be provided as a result of a voltage disturbance or for some other reason. A reset signal generated automatically by detecting transitions in the power line, allowing user logic to be initialised, is particularly valuable for chips which use non-volatile control memory such as flash EPROM or which support a low-voltage data retention mode for their RAM control store. Third, multiplexer 207 can select a signal from counter/timer 209, which may include an internal or external oscillator. A programmable counter/timer driven by an external crystal or other clock source can provide a flexible clock to user logic. Fourth, multiplexer 207 can select a signal generated by the user's internal logic and selected by I/O block 110-6 from the output of cell 208, an East length-16 flyover, or an East length-4 flyover. Such a global signal could be a clock driven from a cell output.
Register Access: Control Store Manipulation and FPGA Reconfiguration
CAL II supports direct access from the processor to nodes within the user's circuit. The output of any cell's function unit 48 (see Fig. 10) can be read, and the state of a cell which is configured to implement a register can be written, by the processor. These accesses are done through the control store interface and require no additional wiring lines to be added to the user's design. The row and column signals which address these registers can be selected as sources within a length-4 switch unit, so that user circuits can detect that an access has been made and take the appropriate action, for example, calculate a new value for an output register or process a value to be placed into an input register. In many applications, this access to internal nodes will be the main path through which data are transferred to and from the processor. In some coprocessor applications, it may be the only method used by the processor to access nodes in the FPGA; user-programmable I/O pads may not be used at all.
To allow high-bandwidth transfers between the processor and internal nodes of the FPGA, it is necessary to transfer a complete processor data word of up to 32 bits in one memory cycle. For this reason, register or gate access bits in CAL II are mapped into a separate region of the device address space from configuration bits. Fig. 27a shows the mapping of this area of the address space. In an embodiment of the CAL II architecture having 4096 cells, there are 64 columns and 64 rows of cells. Since there is only one function unit per cell, one memory cell in the control store is sufficient to represent one cell function unit output. Fig. 27a represents a memory space which the processor can address to access the cell function unit outputs. One cell, cell 12-6-23 (where 6 designates the row and 23 designates the column), is shown as a blown-up cell, equivalent to cell 12 shown in the earlier figures. A 6-bit column address CA[0:5] selects a particular column of cells to access. All bits to be accessed in one memory cycle must be in the same column of cells. Select unit 275 selects a subset of these rows to connect to an external data bus. Several possible implementations of select unit 275 are discussed below in connection with Figs. 28, 34a, 34e, 35, 36a, 36b, and the run length register discussion. The advantage of select unit 275 is that the fixed relationship between rows or words in a memory array and lines on a data bus, found in prior art structures, is replaced with a programmable relationship. A row select decoder selects one or more of rows 0 through 63 and reads the programmably selected values, applying them to the 8-, 16- or 32-bit data bus D, or writes the data bus value into the selected memory locations.
16 Fig. 27b is a diagrammatic representation of a RAM
17 programmed FPGA, which consists o the array of RAM cells 18 160 embedded in which are active logic structures controlled 19 by the neighbouring RAM cells. The logic structures 20 implement signal switching and function generators as 21 required by the application mapped onto the FPGA. Details 22 of the functioning of the RAM cells and their use in the 23 FPGA control store are disclosed in WO90/11648 to Algotronix 24 Limited and Principles of CMOS VLSI Design, A System 25 Perspective, Weste, N. and Eshraghian K., published by 26 Addison Wesley 1985.
In the structure shown in Fig. 27b it will be seen that there is a data bus 162 and an address bus 164. Each row 166a, 166b, etc. of RAM cells 160 is connected to the data bus 162 via a row decoder 168a, 168b, etc. Address bus 164 is similarly connected to each column 170a, 170b, 170c, etc. of RAM cells via a column decoder 172a, 172b, 172c, etc. The lines interconnecting columns of RAM cells 160 are termed word lines, and similarly the lines connecting horizontal rows of RAM cells with the row decoders are termed bit lines. When an address is applied to the RAM array shown in Fig. 27b, a single word line and a single bit line for each bit of the data bus will be activated. Since the bits of a word lie in a vertical line, addressing a RAM cell results in a vertical vector (column) of RAM cells being written.
Reconfiguration time for dynamically programmed FPGAs is an overhead which reduces the potential advantage in computation time over conventional computers. It is therefore essential to minimise reconfiguration time if dynamic reconfiguration is to become practical for user applications. It is also essential to minimise reconfiguration time to reduce the cost of device testing, where a large number of configurations is required.
If a single word line is active, then a narrow data bus is a limiting factor on the number of write cycles required to configure the array; consequently, narrow data bus width restricts the configuration time. Making the data bus width identical to the number of row decoders enables an entire column of RAM cells to be written simultaneously. In this case, the row address is redundant. In the case of the CAL I array this would require a data bus 128 bits wide, and hence 128 external pads, for maximum throughput. It will be appreciated that FPGAs have a large number of logic I/O pins from the array (in the case of the CAL 1024, 128 pins), so if the data bus pins are shared with logic I/Os, wide data buses can be supported. Although one data bus bit per row decoder is unfeasible, a system which supports a data bus bit for every two or four row decoders is feasible. Using the I/O pins on the same edge of the chip as the row decoders means that no long wires between pads and row decoders are required. Driving two to four decoders with one I/O pin is especially useful for device testing, to minimise the number of vectors required. However, very wide data buses are less useful in actual applications because of the mismatch to the data bus widths of conventional processors and the overhead of board wiring. Using the same pad for both I/O and programming data is also a considerable inconvenience to the designer of boards containing FPGAs. Systems which take advantage of bit line parallel writes by providing block transfer modes, such as those becoming common on commercial DRAM chips ("A New Era of Fast Dynamic RAMs", Fred Jones, IEEE Spectrum, October 1992), allow high bandwidth for relatively low pinout, and may be attractive for use in future FPGAs.
FPGA configurations are usually highly regular compared with the data found in normal data memories. This regularity is especially apparent in the configuration data for user designs implementing computational applications, where the circuits themselves usually consist of vectors of repeating bit-slice units. Regularity is also apparent in configurations for testing the device. As depicted in Fig. 27c, which shows the RAM addressing circuits in more detail, if the columns of the FPGA device are considered as consisting of a sequence of words the same width as the data bus, each word having a unique row address, then it is likely that many of the values in these words are the same. In the CAL 1024 FPGA (CAL I array) device there are 16 such words, and in a typical configuration there are an average of 3.4 distinct values in the 16 words. This implies that an architecture in which all the words with the same configuration could be written simultaneously could reduce the number of writes required, on average, from 16 per column to 3.4 per column. Similarly, in a row of 144 words, there may be only 35 distinct values. Thus, an FPGA architecture which activates several word lines simultaneously can also reduce the number of write cycles required. However, activating several word lines simultaneously during a write cycle is more complex because there is a fan-out problem: the buffer at the row decoder must potentially overcome the opposite value stored in several RAM cells on that bit line. This limits the number of word lines which can be active simultaneously, with the exact number depending on a variety of factors; but the number of active word lines is much less than the total number of word lines, and a value of 4 is reasonable.

Operation of Match Registers
One method of advantageously providing or facilitating multiple writes is to replace either the row or column address decoders, or both, with match registers which compare the value of the address applied to the value stored in the local register. There will be one match register (programmable decoder) where each decoder would otherwise be. If the match register detects that the address matches its stored value, the register activates the corresponding bit or word line, as shown in Fig. 28. Match registers can be programmed to respond to different addresses. Thus, different patterns of registers can be written simultaneously by programming the same address into a selected group of match registers. In Fig. 28, both row and column decoders have been replaced by match address registers 180a, 180b, etc. for the rows and 182a, 182b, etc. for the columns. If the value stored in each register 180, 182 is the index of the corresponding bit or word line, then this structure will function as a normal RAM. Functioning as a normal RAM is a desirable state in which to initialise the match registers. By storing the same value in multiple registers, the system can be set up to write multiple words when a given address is presented.
An additional level of flexibility is provided if the row address decoder is replaced by the structure shown in Fig. 28, where the match register is supplemented by an additional register 184a, 184b which holds the number of the bit of the data bus 162 (which bit in the row) to be used to drive the bit lines when the match occurs. In conventional memories, there is one row address decoder per word and each data bit is connected to a fixed data bus line. However, in Fig. 28, there is one address decoder per bit line, and the mapping to data bus lines is programmable. Thus, there are no fixed word boundaries. This has the considerable advantage of allowing multiple sub-fields to be written simultaneously. The structure of Fig. 28 is considerably more efficient in dynamic reprogramming applications where it is desired to make a change to multiple bit slices.
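A behavioural sketch of this arrangement follows (illustrative Python; the class and signal names are invented). Each bit line carries a match register and a bit-select register, so a single write cycle can update every bit line programmed with the matching address:

```python
class BitLine:
    def __init__(self, match_value, data_bit):
        self.match_value = match_value  # address this bit line responds to
        self.data_bit = data_bit        # which data-bus bit drives it
        self.value = 0

def write_cycle(bit_lines, address, data_word):
    """All bit lines whose match register equals the applied address latch
    their selected data-bus bit in the same cycle."""
    for line in bit_lines:
        if line.match_value == address:
            line.value = (data_word >> line.data_bit) & 1

# Initialising match_value to each line's own index reproduces a normal RAM;
# programming the same address into several registers writes them together.
lines = [BitLine(match_value=5, data_bit=i % 2) for i in range(8)]
write_cycle(lines, address=5, data_word=0b10)
print([line.value for line in lines])  # [0, 1, 0, 1, 0, 1, 0, 1]
```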
In an embodiment of the CAL II architecture having 4096 cells, the format of the address bus is shown in the table of Fig. 29. The first six bits define the cell column, the next six bits the cell row, bit 12 the side, and bits 13 and 14 the mode. Smaller CAL II devices have proportionally fewer bits allocated to row and column addresses. The mode bits determine which area of the control store is to be accessed, according to the table shown in Fig. 30.
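For the 4096-cell device, an address word can thus be assembled as follows. This sketch summarises the Fig. 29 table; the assumption that the "first" bits are the least significant is mine, and the function names are illustrative:

```python
def pack_address(column, row, side, mode):
    """CAL II address word for the 4096-cell device: six column bits, six
    row bits, the side bit at position 12, two mode bits at positions 13-14
    (bit 0 taken as least significant)."""
    assert 0 <= column < 64 and 0 <= row < 64 and side in (0, 1) and 0 <= mode < 4
    return column | (row << 6) | (side << 12) | (mode << 13)

def unpack_address(word):
    return (word & 0x3F, (word >> 6) & 0x3F, (word >> 12) & 1, (word >> 13) & 3)

print(unpack_address(pack_address(column=23, row=6, side=0, mode=1)))
# (23, 6, 0, 1)
```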
When the address bus is in cell routing mode, bytes read from and written to the control store have the format shown in the table of Fig. 31a, which shows the control store byte for programming routing multiplexers. When side = 0 the external routing multiplexers are accessed, i.e. South, West, East or North. In this embodiment, no data are provided for side=1 and mode=cell routing. This control store layout corresponds to function unit 126 of Fig. 22. When the address bus is in the cell function mode, bytes read from and written to the control store have the format shown in the table of Fig. 31b. When the address bus is in the channels/I/O mode, bytes read from and written to the control store control the multiplexers in the switches (see the switches in Figs. 15-17 and 26d). When the address bus is in the state access or device configuration mode, the state of the function units of the device will be read or written (written if the function unit is a flip-flop). Control store registers which control shift and mask units, wildcard units, and the state access mask register are mapped into the state access region of the device address space when the "side" bit is set to 0. State access transfers (reading and writing to function units) take place when the "side" bit is set to 1. One additional device control register includes two bits which allow selection of the external data bus width, in one embodiment between 8-, 16-, and 32-bit widths. A third bit can force all user I/O pins to high impedance, irrespective of the I/O control store value for individual pins. This third bit overrides the FHZ signal which forces high impedance on an individual pin (see Fig. 26d). Upon reset of the device, the data bus width goes to 8 bits and this third bit selects high impedance. During operation, after a valid configuration has been loaded, an external microprocessor will set this bit (to deselect high impedance) to allow user I/O signals to drive the pins. These tables are included only by way of example, to make the concepts more concrete. Many other encodings are possible.
Although the match register approach allows for maximum reduction in the number of writes to the array, it entails a considerable overhead for setting up the values in the match registers and bit selection registers. For example, if a system with two data lines D0 and D1 is considered, a column of the RAM could be set up with a single write cycle by setting all the match registers on bit lines whose RAMs were to be 0s to select D0, and all the match registers on bit lines whose RAMs were to be 1s to select D1, then writing binary 10. One write cycle per bit line is required to set up the select registers, so this technique is less efficient than the standard RAM interface for configuring the entire array. However, in some computational applications where it is necessary to make one of a small number of changes to the control store very quickly (for example, to select a source operand for a computational unit by reprogramming a vector of switches through the control store interface), the match register approach represents an improvement over prior art programming.
It is desirable to support multiple simultaneous writes, to take advantage of the regularity of the control store (configuration memory) programming data, but to minimise the overhead operations required. This can be done by:
1. The use of run length registers. A run length register tells how many sequential words are to be written in response to a single address. In this technique the row and column address decoders are supplemented with additional run length registers. When an address matches the corresponding decoder, the next N decoders, where N is the value stored in the run length register, are enabled and write the word on the data bus onto the bit lines. If the value 0 is stored in the length registers, then only the address decoder is enabled and the device functions as a normal random access memory.
19 2. Wildcard addressinq. In this technique the row and address decoders are supplemented with additional 21 wildcard registers which can be written through the RAM
22 interface. The wildcard register has one bit for each bit 23 in the row address. A logic 1 bit in the wildcard register 24 indicates that the corresponding bit in the address is to be taken as ~'Don't-Care": that is, the address decoder will 26 match addresses independent of this bit. When power is 27 applied, the wildcard registers are initialised to logic 0 28 SO that all address bits are treated as significant. Also, 2 9 the wildcard register is disabled during read operations and 3 0 when the address bus is in State Access mode. The wildcard 31 register allows many cell configuration memories in the same 32 column of cells to be written simultaneously with the same 33 data. This is used during device testing to allow regular 34 patterns to be loaded efficiently to the control memory but 3 5 is more generally useful especially with regular bit sliced 3 6 designs because it allows many cells to be changed 37 simultaneously. For example, a 16-bit 2:1 multiplexer can WO94/10754 ~ ~ 4~ 52 PCT/US93/104 1 be built using cell routing multiplexers and switched 2 between two sources using a single control store access.
For example, using East routing multiplexers, the two sources could be a cell's function unit output, perhaps a register output, and the cell's West input. When a bit of the wildcard register is active the corresponding bit of the address is ignored: for example, if the wildcard register for the lowest order bit holds 01 and the address supplied is 10, then decoders 10 and 11 are both enabled. If the wildcard register for the lowest order bit holds 00, then the device functions as a normal RAM.
Figs. 32a, 32b, and 32c show three examples of wildcard addressing. In Fig. 32a, a user has not set any bits in the wildcard and has applied the row address 010101 (decimal 21). Thus only row 21 is addressed, as shown. In Fig. 32b, the user has set the wildcard address 100001 and again applied the row address 010101. This time, the value 1 in the least and most significant bits causes rows with addresses 010100 (decimal 20), 010101 (decimal 21), 110100 (decimal 52), and 110101 (decimal 53) to be addressed. In Fig. 32c, the user has set the wildcard address 000111 and applied the same row address 010101.
This combination causes rows 16 through 23 to be addressed.
Thus many combinations and densities of multiple rows can be addressed by selecting an appropriate entry in the wildcard register and an appropriate row address.
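In software terms, a row decoder with a wildcard register matches when the row and the supplied address agree on every bit that is not wildcarded. The C sketch below is a behavioural model only (the hardware realisation with NOR-gate decoders is described next); it reproduces the Fig. 32b example, selecting rows 20, 21, 52 and 53.

    #include <stdio.h>

    /* A row decoder matches when every address bit that is NOT wildcarded
     * agrees with the supplied address; wildcarded bits (logic 1 in the
     * wildcard register) are "Don't-Care".  Addresses are 6 bits wide,
     * as in the Fig. 32 examples. */
    static int row_selected(unsigned row, unsigned addr, unsigned wildcard)
    {
        return ((row ^ addr) & ~wildcard & 0x3F) == 0;
    }

    int main(void)
    {
        unsigned addr = 0x15, wildcard = 0x21; /* 010101 and 100001 */
        for (unsigned row = 0; row < 64; row++)
            if (row_selected(row, addr, wildcard))
                printf("row %u selected\n", row); /* 20, 21, 52, 53 */
        return 0;
    }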
The principal advantage of the wildcard addressing approach is that it can be implemented with standard address decoders without space and time penalties. This arrangement is depicted in detail in Figs. 33a, 33b and 33c. As indicated in the aforementioned application WO90/11648 (U.S. Patent 5,243,238) and in the aforementioned Weste and Eshraghian book, the standard address decoder for RAMs consists of a CMOS NOR gate. Both true and complemented values of each address bit, that is A and -A (eight pairs in this case, A0/-A0 to A7/-A7), are fed through all gates. Each individual gate selects either the true or the complemented value of the address bit according to the address it decodes by placing a contact on the appropriate metal line. For example, as shown in Fig. 33a, address decoder 183 for decoding 0 uses all true forms (so that if any of the address inputs is high, the corresponding decoder output is low). This is repeated for each of N row decoders. As shown in Fig. 33b, by inserting AND gates controlled by the wildcard register on both the true and complemented signals being fed to the array, both the true and complemented signals for a given address bit can be forced to the low condition. This means that any NOR gate for which the other address bits match will have its output active (high) independent of this bit. Fig. 33b shows part of such an AND gate in the wildcard unit circuit. It will be appreciated that this circuitry is duplicated for each address bit. The En signal applied to the AND gate input of Fig. 33b comes from the corresponding bit of the wildcard register.
Normally, these AND gates would not be present and the -A signal would be derived from A using an inverter. Fig. 33c shows that the wildcard unit of Fig. 33b is located within the RAM between the external address bus and the bus to the row and column decoders.
As well as being easily implemented, the wildcard address register has an additional important benefit. In many bit-sliced structures found in computational applications, it is desirable to change the same cell in each bit slice unit simultaneously. In addition, it is often the case in fine-grained FPGAs that the cells to be changed simultaneously would be every second, fourth, or eighth cell along a row or column. The wildcard address decoder allows this sort of operation to be performed efficiently by masking out the second, third, or fourth bit, respectively, of the address bus.

Testing
A further advantage of the wildcard address register is the reduced time required to functionally test the FPGA.
Following manufacture it is desirable to test the device to confirm that processing errors or defects have not introduced cells or areas of the FPGA which do not function correctly. Reprogrammable FPGAs are well suited to such testing. The cost of testing is a significant portion of the total manufacturing cost, and it is almost directly proportional to the number of test vectors.
In its most basic form such testing may involve writing then reading particular bit patterns to and from the control store. By comparing the value written with that read back, it can be confirmed that the control store memory is functioning correctly. It is well known that a careful choice of bit patterns can be used to verify correct functioning of the control store with only a small number of test vectors.
An alternative and more exhaustive test of the FPGA behaviour would involve the stimulation of every multiplexer with each possible combination of inputs. The procedure for testing the multiplexer behaviour requires that a large number of regular configurations be written in order to test each multiplexer. Such a test would involve exercising the function multiplexers and also the routing multiplexers.
Both control store testing and multiplexer testing involve writing repetitious and regular bit patterns to the configuration memory. Each benefits from the wildcard address register. By using the wildcard address register it is possible to apply a common test configuration pattern to a large number of cells using fewer write cycles than is required when using a conventional memory interface.
Similarly, when exercising the multiplexers, the ability to read back function unit outputs from a group of cells provides a substantial reduction in the number of read cycles required. FPGA testing using wildcard registers thus takes significantly less time to test exhaustively, or alternatively the FPGA could be subjected to a more extensive test in a given time period.
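As a rough illustration of the saving, the following C model loads a checkerboard test pattern into a 64-row control store with two write cycles instead of 64. The array size and pattern values are assumptions chosen for the example; the wildcard semantics follow the description above.

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t cs[64];    /* model of a 64-row control store        */
    static uint8_t wildcard;  /* model of the row wildcard register     */

    /* A single write reaches every row whose address matches on all
     * non-wildcarded bits. */
    static void write_cs(uint8_t addr, uint8_t data)
    {
        for (unsigned row = 0; row < 64; row++)
            if (((row ^ addr) & ~wildcard & 0x3F) == 0)
                cs[row] = data;  /* all matching rows written at once   */
    }

    int main(void)
    {
        wildcard = 0x3E;       /* 111110: bits 5..1 are Don't-Care      */
        write_cs(0x00, 0x55);  /* every even row receives 01010101      */
        write_cs(0x01, 0xAA);  /* every odd row receives 10101010       */
        wildcard = 0x00;       /* restore normal RAM addressing         */
        printf("row0=%02X row1=%02X\n", cs[0], cs[1]);
        return 0;
    }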

Shift and Mask Registers
It is also desirable to provide access to sub-fields of configuration words. However, this is not achievable using normal RAM addressing or even using wildcard or run length registers. Access to sub-fields is a common requirement because a single word of configuration memory usually contains configuration data for several separate switching units which it is often desirable to change independently.
With a word-wide interface, a complex sequence of shift and mask operations is necessary on the host processor in order to change one logical unit without affecting the states of the others. These shift and mask operations often make it impossible to take advantage of the ability to perform the same write operation on several words simultaneously, since the bits of the words not to be changed might well be different in each word. This problem is solved by providing a separate mask register and placing a shift and mask unit between the external data bus and the data bus to the bit line drivers as shown in Fig. 33c.
A detailed arrangement of a shift and mask register for read cycles is shown in Fig. 34a. For write cycles the same unit as shown in Fig. 34a would be used facing in the opposite direction (that is, its input would come from off the chip and its output would go to the bit line driver data bus) and additional enable lines for each data bus bit would be supplied to the bit line drivers sourced from the mask register. From Fig. 34a it will be seen that the shift and mask register, generally indicated by reference numeral 200, includes switches 201 placed between the external data bus and the internal data connections 162 to the RAMs 160 (Fig. 27b). After power up, register 200 contains all logic 0s, corresponding to a conventional RAM interface. Data are loaded into the shift and mask register as part of the control store configuration, or periodically by the microprocessor during reconfiguration. A logic 1 in a particular bit of the mask register indicates that the corresponding bit of the internal data bus 162 is not relevant. On the read operation, only "valid" bits, that is those with a logic 0 in the shift and mask register, are presented in right justified form on the external interface, i.e. at data out. This is depicted by the table in Fig. 35 of the drawings. Figs. 34a, 34b and 35 show that each switch 201 has an input from the mask register 200. Switch 201 operates as follows: on switch row 7, switch 201-77 receives bit b7 as data input InB (see Fig. 34b). Data input InA of switch 201-77 is connected to ground. When mask register bit M7 is high, transistor 203a is on, and the InA input (ground) appears at the output, i.e. bit b7 on InB is masked. The output of switch 201-77, which is ground, is fed to switches 201-66 and 201-67 in row 6. Also fed to switch 201-67 is a ground signal, and fed to switch 201-66 is bit b6. Fig. 34c shows that in row 6 there is no enable signal, therefore transistor 203a in switches 201-66 and 201-67 stays off and transistor 203b is on, so that inputs b7 and b6 on InB pass to the output. At row 5, again the shift and mask register is not enabled, so signals b7, b6, and b5 pass straight down. At row 4, the mask register bit M4 is enabled, so bit b4 is not shifted down and bits b7, b6, and b5 are shifted to the right and down. This is repeated for other switches depending on the value of the bit in the mask register. Fig. 34c shows the effect the bits in the mask register have on bits passing from Data In to Data Out. It will be appreciated that the shift and mask register simplifies changing isolated multiplexers in the control store. For example, changing a source for a cell's North multiplexer without the shift and mask register would entail the following operations:
1. Read control store at appropriate address.
2. Mask out bits corresponding to the North register with binary 00111111.
3. Get new value for the North register and align with bits 6 and 7. Make sure other bits are 0.
4. OR new value with value from operation 2.
5. Write back.
Using the shift and mask unit the following steps suffice:
1. Write mask register with binary 00111111.
2. Write new value to control store at appropriate address.
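The saving can be seen by modelling both procedures in C. In the sketch below, set_north_rmw() performs the five-step read-modify-write sequence on the host, while masked_write() models what the hardware does once the mask register holds binary 00111111. The placement of the North multiplexer bits in bits 7:6 follows the example above, and the function names are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t mask_reg;  /* model of the mask register */

    /* Without the unit: the five-step read-modify-write sequence. */
    static uint8_t set_north_rmw(uint8_t word, uint8_t north)
    {
        uint8_t v = word;                   /* 1. read control store       */
        v &= 0x3F;                          /* 2. mask out bits 7:6        */
        uint8_t n = (uint8_t)(north << 6);  /* 3. align value, rest 0      */
        v |= n;                             /* 4. OR in the new value      */
        return v;                           /* 5. write back               */
    }

    /* With the unit: logic 1 in the mask marks a bit "not relevant", and
     * right-justified input data is expanded into the remaining bits. */
    static uint8_t masked_write(uint8_t word, uint8_t data)
    {
        uint8_t out = word;
        for (unsigned b = 0, src = 0; b < 8; b++)
            if (!((mask_reg >> b) & 1)) {
                out = (uint8_t)(out & ~(1u << b));
                out = (uint8_t)(out | (((data >> src++) & 1u) << b));
            }
        return out;
    }

    int main(void)
    {
        uint8_t word = 0xA7;                /* arbitrary existing contents */
        mask_reg = 0x3F;                    /* step 1: binary 00111111     */
        printf("%02X %02X\n",               /* both methods agree: 0x67    */
               set_north_rmw(word, 0x1),
               masked_write(word, 0x1));    /* step 2: one write           */
        return 0;
    }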
On write cycles the unit disables those bits of the data bus to the bit line drivers corresponding to the 1 bits in the mask register so that no writes are performed on RAM cells in those bit lines. The values in the other bit lines are sourced from input data bus bits in order starting with the least significant bit. This has the advantage of allowing single multiplexers to be written using right aligned data so that the processor does not have to perform additional shift operations. The wildcard register with the shift and mask facility can also be used to allow multiple writes to the same sub-unit of several control store words.
It should be apparent that the shift and mask functions are independent, and that shift-only and mask-only units could easily be derived.

State Access
Current FPGA designs allow read access to the outputs of individual gates and flip-flops of user designs by mapping them into bits in the device control store. Fig. 37 shows the additional logic in function unit 48 to support read and write state access. Read and write operations to flip flop 207 are to separate addresses: here write uses Bit 0, Word 0 and read uses Bit 1, Word 0. Transmission gate 205 is controlled by word 0. For reading, when word 0 is addressed, transmission gate 205 places the output of 2:1 multiplexer 301 onto the bit1 line. If the bit1 line is addressed, this value is read. If 2:1 multiplexer 301 has been programmed by its control store bit to pass the Q output of D flip flop 207, this value will be read. For writing to register 207, bit 0 and word 0 are addressed.
Register 207 has asynchronous Set and Reset (R,S). AND gates 302 and 303 are connected to Set and Reset respectively. If word0 is 0, AND gates 302 and 303 both carry logic 0, and the value in D flip flop 207 is not changed. If the signal at word0 is 1, a logic 1 on bit0 produces a high set output from AND gate 302, causing D flip flop 207 to store logic 1. Likewise, if bit0 is logic 0, AND gate 303 produces a high reset output and D flip flop 207 stores logic 0. Similar logic to that provided by AND gates 302 and 303 and transmission gate 205 can be applied to other function unit designs such as those shown in Figs. 18-23. AND gates 302 and 303 need to have non-standard threshold voltages on their inputs connected to bit line bit0. This ensures that in the case that the bit0 voltage is at an intermediate value, neither logic 1 nor logic 0, the register state will remain unchanged. Such a situation occurs for registers whose word line is selected by the column address but whose bit line is not selected, i.e. the access is to another register on the same word line. Alternatively, complementary bit lines bit0 and -bit0, as used in 6-transistor SRAM cells, can be used to avoid the need for nonstandard gates.
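A small truth-table model of the write path may be helpful. It assumes the reading given above, with AND gate 302 driving Set and AND gate 303 driving Reset, and ignores the intermediate-voltage case handled by the non-standard thresholds.

    #include <stdio.h>

    /* Model of the state-access write logic: with word0 low the register
     * is untouched; with word0 high, bit0 = 1 sets the flip flop and
     * bit0 = 0 resets it. */
    static int dff_next(int q, int word0, int bit0)
    {
        int set   = word0 && bit0;   /* AND gate 302 -> asynchronous Set   */
        int reset = word0 && !bit0;  /* AND gate 303 -> asynchronous Reset */
        if (set)   return 1;
        if (reset) return 0;
        return q;                    /* word line not addressed            */
    }

    int main(void)
    {
        printf("%d\n", dff_next(0, 1, 1));  /* writes 1    */
        printf("%d\n", dff_next(1, 1, 0));  /* writes 0    */
        printf("%d\n", dff_next(1, 0, 0));  /* unchanged   */
        return 0;
    }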

Register Access: User Registers Separated from Configuration Memory
Access to gate output and register state is in itself of limited use as an I/O mechanism for communicating data between FPGAs and a host processor. (Access to gate output and register state is, however, very useful for debugging systems.) I/O use has been limited because a large number of overhead shift and mask operations are required to assemble a single word of data from bits taken from multiple RAM cells in the FPGA, possibly from different words in the processor address space. To make gate output and register state access a useful communications interface, it is necessary to provide hardware which allows word-wide read and write accesses from the processor to registers which are part of the user design.
The techniques described here to provide access to internal state are perhaps most conveniently applied to an FPGA with a RAM control store, because the circuitry can easily be shared with that used to access the control store.
However, internal state access can also be provided for other types of FPGAs including, but not limited to, anti-fuse and EPROM based structures.
In particular, if it is assumed that every computation unit in the FPGA is assigned a single bit within the device control memory to allow read access to gate output and register state and write access to register state, then the first step in improving the bandwidth of the interface is to map bits of RAM which represent register state or gate output into a logically distinct segment of the address space, rather than intermingling them with other configuration memory bits. In one embodiment, register state bits are still physically intermingled with the configuration bits of the array. Segregating the register state bits from the configuration bits is achieved in the simplest way by providing an additional "mode" bit within the address bus as discussed in connection with Fig. 29, and designing the decoders such that the decoders which correspond to the device configuration bits use the true form of the mode signal and decoders which correspond to the state access bits use the complemented form. This segregation results in making the address space less dense but makes it much easier to dynamically change configuration bits or to access state bits. An address bit format is depicted in Figs. 29 and 30. This segregation scheme can be used where the state access bits respond to the same bit and word lines as the configuration bits at the expense of additional complexity in the row and column decoders;
thereby each decoder now has two NOR gates corresponding to addresses within the two address spaces, and the mode bit selects which NOR gate output is used to enable the bit or word line circuits. It is also convenient to connect the bit line to a different data bus line when the bit line is active in state access mode than when active in configuration access mode. Selecting the data bus is done using extra circuitry in the bit line driver.

Word-Wide Interface Having Row and Column Separation Registers
Given that the state bits are mapped into a logically distinct section of the address space, the best interface to allow word-wide access to internal registers must be considered. Word-wide access techniques could also be applied to access small RAMs in the FPGA such as those in the Xilinx XC4000 system. One reasonable constraint is that bits of registers should occur in order from least significant bit to most significant bit, evenly spaced along a single row or column of cells. This constraint is met by most user designs on existing fine-grained FPGAs. With this constraint one can specify an interface using two additional registers which contain row and column separation information. Writing into one of these additional registers would automatically clear the other additional register and determine whether a register was to be accessed along a row or column of the array. The register value would specify the number of cells between register bits. For example, if the data bus width was 8 bits and one accessed address 8 with the separation register holding the value 2, one would get addresses 8, 10, 12, 14, 16, etc. This example of the separation register controlling the state access of the bits is illustrated in Fig. 35.
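The address sequence produced by the separation register is straightforward to model. The following C sketch assumes the example values from the text (base address 8, separation 2, an 8-bit data bus); the function name is illustrative.

    #include <stdio.h>

    /* Model of row/column separation addressing: accessing 'base' with a
     * separation of 'sep' gathers one register bit from every sep-th
     * cell address, least significant bit first. */
    static void separated_addresses(unsigned base, unsigned sep,
                                    unsigned width, unsigned out[])
    {
        for (unsigned bit = 0; bit < width; bit++)
            out[bit] = base + bit * sep;
    }

    int main(void)
    {
        unsigned a[8];
        separated_addresses(8, 2, 8, a);  /* the example from the text */
        for (unsigned i = 0; i < 8; i++)
            printf("%u ", a[i]);          /* prints 8 10 12 14 16 ...  */
        printf("\n");
        return 0;
    }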

Use of Wildcard and Shift/Mask Registers to Access State Information
While the above interface is, in many ways, ideal, it involves additional logic in registers which cannot easily be shared with the logic required for programming the configuration memory. An attractive option is to use the existing wildcard and bit shift and mask units for accesses to state information as well. While they are not as flexible as the interface using separation registers and require some overhead operations in the processor when registers do not align well with cell addresses in the array, they do provide a significant increase in flexibility over standard RAM access modes. In this context, it may be convenient for the internal data bus to the bit line drivers to be much wider than the external data bus.
A variation on the above approach is to use a larger shift and mask register with one bit per cell row. In this case, the row wildcard units are unnecessary for accessing state information. Since the shift and mask register is likely to be significantly wider than the external data bus and the data bus to the bit line drivers, more than one external write operation will be required to set the contents of the shift and mask register. Fig. 34c shows how the circuit of Fig. 34a can be extended to support systems where the width of the input and mask registers is wider than that of the output bus.
It is also possible to provide several mask registers, each of which holds a pattern corresponding to a different register of the user's design, and each of which can be conditionally connected to the shift and mask logic. During a register access operation, bits on the address bus can be used to select which of these mask registers is used.
Having several mask registers considerably increases the flexibility with which registers can be placed in a user's design.
One disadvantage of the shift and mask circuit of Fig. 34a is that a signal must pass through a significant number of switches 201 on the path between input and output. This reduces the speed of register access operations. Fig. 36a illustrates an alternative shift and mask unit which has only a single switch on each path between input and output, with additional decoding to enable the particular switch. The decoding circuit incurs delay, but this delay is incurred during the write to the mask register, not during access to the user's register. The particular embodiment of Fig. 36a includes 64 mask bits for accessing 64 bit lines, of which no more than 32 will be accessed at one time.
As shown in Fig. 36a, a mask register M holds 64 mask bits M0 through M63. A column of 63 incrementers H1 through H63, of which only a few are illustrated, keeps count of the number of logic ones stored in mask register M. Each logic 1 causes a corresponding data bit to be provided as data. Circuitry for mask bit M0 is simplest. When M0 is logic 1, transistor T0 is turned on, thus connecting bit line B0 to output data line D0. This logic 1 is applied to incrementer H1, which causes incrementer H1 to output the value 1 on the five-bit bus leading to incrementer H2. (This value 1 will prevent any of decoders DEC 0-1 through DEC 0-63 from turning on corresponding transistors T0-1 through T0-63 to place a conflicting value from bits B1 through B63 onto output data line D0.) If mask register bit M1 is logic 0, no value will be added by incrementer H1 to the value 1 input to incrementer H1. Thus the value 1 will be output by incrementer H1. Even though decoder DEC 1-1 would decode the value 1, the logic 0 value of M1 disables decoder DEC 1-1. Thus the value B1 is not placed onto either of data lines D0 or D1. If mask register bit M2 is logic 1, a 1 will be added by incrementer H2 to the input value 1 and output to incrementer H3 (not shown for clarity). Since M2 is logic 1, decoders in that row are enabled. Therefore, the value 1 input to incrementer H2 is decoded by decoder DEC 1-2, which turns on transistor T1-2 to place the bit line signal B2 onto data line D1. From the above discussion, it can be seen that other values in mask register M will produce other connections from a bit line to an output data line.
Decode circuitry for mask bits M0 through M31 is as discussed above. For mask register bits M32 through M63, no more decoders are added, in the present embodiment, because the data bus includes only 32 data lines D0 through D31. In this portion of the circuit, error detection circuitry is included comprising an OR gate which detects an overflow if the number of logic 1s in mask register M is greater than 32. The error detection circuitry for a mask register bit Mn is shown. OR gate ORn receives a logic 1 if incrementer Hn detects that a 33rd logic 1 has been entered into mask register M. At its other input, OR gate ORn receives a logic 1 if any lower order incrementer has detected an overflow. This logic 1 value propagates through all OR gates above ORn and causes AND gate ANDn and all AND gates above ANDn to output a logic 0, thus disabling all decoders above row n.
Thus it can be seen that the circuit of Fig. 36a forms a set of data bus outputs as specified by 64-bit mask register M from the 64 bit line inputs B0 through B63, and right-justifies the selected bits. Yet each selected bit line value passes through only a single transistor to reach its data line.
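The routing performed by this circuit reduces to a prefix count: bit line Bn, when selected by mask bit Mn, is connected to data line Dk, where k is the number of logic 1s in M0 through M(n-1), that is, the value carried by the incrementer chain. The C sketch below is a behavioural model of Fig. 36a under that reading, including the overflow cut-off at 32 selected lines.

    #include <stdio.h>

    #define LINES 64
    #define BUS   32

    /* Behavioural model of the single-switch shift and mask unit: each
     * selected bit line is routed to the data line indexed by the count
     * of logic 1s below it in the mask register; a 33rd logic 1 is an
     * overflow and disables all higher rows. */
    static void decode(const int M[LINES], const int B[LINES], int D[BUS])
    {
        unsigned count = 0;           /* value on the incrementer chain  */
        for (unsigned n = 0; n < LINES; n++) {
            if (M[n]) {
                if (count >= BUS)
                    break;            /* overflow: OR/AND chain disables */
                D[count] = B[n];      /* one transistor per chosen path  */
                count++;
            }
        }
    }

    int main(void)
    {
        int M[LINES] = {0}, B[LINES], D[BUS] = {0};
        for (unsigned n = 0; n < LINES; n++) B[n] = (int)(n & 1);
        M[0] = M[2] = M[7] = 1;  /* B0 -> D0, B2 -> D1, B7 -> D2 */
        decode(M, B, D);
        printf("D0=%d D1=%d D2=%d\n", D[0], D[1], D[2]);
        return 0;
    }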
Decoders in Fig. 36a are preferably implemented as NOR gates, though other implementations are of course possible. The incrementer circuits of Fig. 36a may be implemented as shown in the inset attached to incrementer H63.
Fig. 36b illustrates another shift and mask register similar to that of Fig. 34a but having 16 data bits and an 8-bit data register. Mask register 200 allows the shift and mask circuit of Fig. 36b to select up to 8 of the 16 DATA IN bits b0 through b15 to place on the DATA OUT bus. Values M0 through M15 are loaded into mask register 200. As with Fig. 34a, a value 1 in the mask register causes a bit value b0 through b15 to be shifted down and to the right, whereas a value 0 in the mask register causes the bit value to be shifted straight down. If mask register 200 contains more than eight 1s, the higher order bit values will be lost.
Writing to the mask register is an inherently faster operation than accessing configuration memory because it does not involve setting up the long bit and word lines in the configuration memory via the row and column decoders. Thus, there is likely to be adequate time in a normal write cycle to allow the decoding circuitry to settle. If there are multiple mask registers selected by address bits, and only a single set of decoding circuitry, then the decoding delay will be incurred during the access to the user register. Thus, the shift and mask unit of Fig. 36a is mainly of benefit when there is only a single mask register.

Access to Registers Implemented Horizontally
The interfaces to registers and gate outputs provide for word-wide access to registers in the device running in the vertical direction, so that the bits of the register all occur on different bit lines. If the register runs horizontally then all bits will occur in the same bit line and parallel word-wide access will not be possible. Because the number of bits of control store corresponding to state access is likely to be approximately 20 times less than the total number of bits of control store, it is quite feasible to use "dual-ported" memory for the feedback bits along with a second set of bit and word line drivers running in the perpendicular direction to the first set. The extra port allows horizontal as well as vertical registers to be accessed in a word-wide fashion. Dual-ported memories are well known and are disclosed in the above mentioned Weste & Eshraghian book. This second set of drivers may have its own dedicated shift and mask unit and wildcard register or share with the first set according to detailed layout considerations.

Control Store Duplication
Control store duplication will now be described. In some situations it is convenient to have multiple bits of the control store of an FPGA which are guaranteed to always contain the same value. This duplication of control store bits can eliminate wiring which would otherwise be required to route the output of a single control store bit to a distant location in the array. One important example of the application of this technique is when a single routing wire is provided which can be driven by more than one possible source using 3-state drivers and the control store contains bits which select which driver is active. A solution may be achieved by routing the bits of the control store in parallel with the wire itself to all drivers, but this involves considerable area overhead for a long wire. An alternative solution is achieved by simultaneously writing duplicate bits to those control store locations which must be identical. If the duplicate bits are on the same bit line of the control store address, then simultaneous writing of the duplicate bits is readily achieved by using the same column address for the various columns of RAM containing duplicate bits. By increasing the complexity of the row and column decoders, for example by providing more than one NOR gate in a given decoder and routing row address bits to the column decoders and vice versa, a flexible structure can be built which reads and writes the duplicate bits.
This arrangement is best seen in Fig. 38. The letter A represents those memory cells (cells 351 and 352 are shown) in which bits are to have the same value. Additional memory cells A may be provided but are not shown in Fig. 38. All memory cells A which are to have the same value are placed on the same bit line 99. Other cells such as 331-334, labeled RAM, are each separately addressable and are in the same columns (word lines) as memory cells A. Word lines 361 and 362 can be selected by two different column addresses. Word line 361 is selected by either of decoders 321 or 322 and word line 362 is selected by either of decoders 323 or 324. Decoders 321 and 323 decode the same address, and such decoders are provided for all columns in which a memory cell A is located. In other words, all columns having a memory cell A include a decoder which decodes a single row and column address. Decoder 322 decodes the column address for RAM cells 331 and 333, while decoder 324 decodes the column addresses for RAM cells 332 and 334. Decoders 321 and 323 for memory cell A include row address bits for selecting bit line 99, so the outputs of decoders 321 and 323 go high only when the columns having memory cells A are selected and bit line 99 is also selected. Decoders 322 and 324 for the RAM cells go high only when bit line 99 is inactive. Thus, multiple memory cells A can be simultaneously read or written, and yet high density in the remainder of the memory is retained. The remainder of the memory remains high density because no extra word lines are added for accessing the duplicate bits A. Another useful way of applying control store duplication is to feed the read/write signal to the address decoders and set decoder 321 to decode logic 0 and decoder 323 to decode logic 1 on the read/write line. Feeding the read/write signal to other decoders such as 322 and 324 allows two row and column addresses, one for reading and one for writing to a cell's function unit register, to be mapped onto a single address in the device address space.
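Behaviourally, control store duplication means that one address reaches several cells at once. The following minimal C model, with an assumed four duplicate columns, captures only that effect; the decoder circuitry itself is as described above.

    #include <stdio.h>

    #define COLS 4  /* assumed number of columns holding a duplicate cell A */

    static int a_cells[COLS];  /* model of the duplicated bits */

    /* Each column holding a cell A carries an extra decoder for the same
     * row-and-column address, so a single write to that address lands in
     * every duplicate simultaneously, while all other cells keep their
     * ordinary one-cell-per-address decoders. */
    static void write_duplicates(int data)
    {
        for (unsigned c = 0; c < COLS; c++)
            a_cells[c] = data;  /* every extra decoder fires at once */
    }

    int main(void)
    {
        write_duplicates(1);
        printf("%d %d %d %d\n",
               a_cells[0], a_cells[1], a_cells[2], a_cells[3]);
        return 0;
    }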
The exact structure of the row and column decoders will depend on a variety of factors, for example, the way in which the duplicated bits are interspersed through the control store and the performance required for read and write operations. Appropriate circuit designs for decoders can easily be arrived at using conventional design techniques as disclosed in the aforementioned Weste & Eshraghian book.

Processor Interface to FPGA
The processor interface will now be described. Current FPGA designs do not provide any means for handshaking information transfers between the user logic on the FPGA and host microprocessors. Consequently, a variety of ad hoc mechanisms have been used. The most flexible mechanism presently in use is to clock the FPGA directly from the processor, which keeps the two computations in complete synchronisation. However, clocking the FPGA from the processor slows the FPGA down too much in high performance applications. Thus, clocked transfer is most useful when relatively small amounts of data are transferred to and from registers implemented on the FPGA, and it provides a useful debugging methodology.
It is also possible to write data into a buffer memory and then initiate a free-running clock, implemented on the FPGA itself or on adjacent logic, which runs for enough cycles to complete the operation on the data and then stops. This technique works efficiently for large data streams, but the overhead of initiating the clock in a separate operation is significant for single operands written to and read from the registers on the FPGA. The processor can poll a hardware flag continuously, use an interrupt generated by the FPGA, or wait for a known delay until the FPGA has finished computing and then read back the results. When an interrupt generated by the FPGA is to be used by the processor, it may be convenient to provide a small number of global output signals which can be selectively driven by any cell function unit output. These global signals can be used as interrupt request signals. The global signals may be implemented as wired ORs so that several cells can activate the external interrupt. The FPGA device may be programmed to latch the external interrupt line until the latch is cleared by the processor writing to an interrupt status register.
In many applications it is desirable to initiate processing on the FPGA directly by the action of writing data into registers on the array. We described above the addressing scheme for input/output transfers from internal device state registers, where the row and column (bit and word) wires used in this addressing scheme pass through the array and have exactly the signals required to synchronise computations on the FPGA. For example, in the case of a write to a vertical register along the bit line, the word line for those bits of RAM will go high during the transfer and low when the transfer is complete. Although these bit line and word line signals are normally concerned only with programming and state access, they can easily be provided as a source to one of the logic routing multiplexers in the array. (Conveniently, at a length-4 switch block, bit lines are connected to East/West switches and word lines to North/South switches.) Thus, user defined logic in the FPGA can be triggered by the low going edge on the word line signal to initiate the next computation or to clock a new value into the register.
When a relatively short operation (that is, less than the execution time of a small number of processing instructions, say 500 nanoseconds with today's technology) is implemented in the FPGA, it can be convenient to extend the above state access mechanism by using the ability of most processors to lengthen read and write cycles by inserting "wait-states" when dealing with slow devices. The FPGA generates a "wait" signal when the register is accessed before its new value has been computed, forcing the processor to wait until it can read the valid result. Similarly, in the write cycle the processor is held up until the previous data in the register has been processed. This arrangement provides a very simple and low overhead method of synchronising the processor with the FPGA.
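From the host side, such an interface might be driven as sketched below. The register addresses, bit assignments and names are purely illustrative assumptions. The polling loop shows the alternative discussed earlier; under the wait-state scheme the loop disappears because the read of the result register is simply stretched by the hardware until the value is valid.

    #include <stdint.h>

    /* Hypothetical memory-mapped FPGA registers; the base address and
     * layout are illustrative only. */
    #define FPGA_BASE   0x40000000u
    #define OPERAND_REG (*(volatile uint32_t *)(FPGA_BASE + 0x00))
    #define RESULT_REG  (*(volatile uint32_t *)(FPGA_BASE + 0x04))
    #define STATUS_REG  (*(volatile uint32_t *)(FPGA_BASE + 0x08))
    #define DONE_BIT    0x1u

    /* Polling variant of the handshake: write an operand (the word line
     * edge can trigger the computation), wait for the hardware flag,
     * then read the result. */
    uint32_t fpga_compute(uint32_t operand)
    {
        OPERAND_REG = operand;
        while (!(STATUS_REG & DONE_BIT))
            ;                         /* poll the hardware flag */
        return RESULT_REG;
    }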

CAD Software Tools
We will now describe and discuss the CAD software tools for the FPGA, and thereafter we will discuss the application of CAL II to the implementation of several common logic structures.
The present CAD tools for FPGAs represent a design as a static hierarchy of connected components; this hierarchy is then flattened into a set of basic components and converted into a bit pattern representing the configuration of the whole device. In a fine-grained FPGA, hierarchical blocks of the design normally specify a rectangular area of the fine-grained array and map onto a rectangular area in the device configuration memory. If the FPGA is designed such that the bit patterns for a given rectangular array of memory depend only on the configuration of the resources in the corresponding area of the design, it is possible to program the FPGA rapidly on a block-by-block basis, all instances of the same block having the same bit pattern. Two instances of the same block with different external connections may have different programming bit patterns.
With current FPGAs the configuration generation program performs transformations on the user's design. The transformations require the configuration program to be able to analyse the entire design in order to determine the configuration information for a block of that design.
One way for dynamic reconfiguration to be used is for the host processor to construct the CAL II design dynamically with an internal data structure, compute the bit patterns corresponding to the design, and then download them directly into the chip. In such a case there is no specialist translation program or static file containing configuration bit patterns. This approach is made practical by having a less highly encoded translation between design representation and bit pattern (for example, certain bits in the bit pattern are reserved for representing a single parameter). Translation can be applied hierarchically or on a block-by-block basis. In addition, the fact that every instance of the same block has the same configuration can be used in conjunction with the multiple write capability of the CAL II chip (implemented by wildcard registers) to decrease programming time. The shift and mask register feature allows overlapping blocks, each of which specifies some resources in the same cell and the same word of control memory, to be programmed independently by allowing a sub-set of bits in a byte to be changed.

Easy Reconfiguration Through Block Design
Although an algorithm may be used to construct some CAL II designs, in most cases users will wish to use more traditional CAD tools to generate the CAL II designs without losing the advantages of dynamic reprogramming. Dynamic reprogrammability may be achieved by using replaceable blocks. For each block of the design the user specifies a number of possible configurations, where each of these configurations is a static design which can be produced and analysed using conventional tools. Configuration data for each potential configuration of each replaceable block and the single initial configuration for the whole design can be computed and stored in a disk file or non-volatile memory such as an EPROM. A run-time library routine (that is, a library routine written by the FPGA supplier and called by the user's application programs to interact with the FPGA) for the host processor which controls the CAL II chip can then provide for replacing any replaceable block configuration with one of its alternative configurations.
Replacement can be very simple and fast because it requires only block transfers to regular areas of configuration memory.
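A run-time routine of the kind described might reduce to the following C sketch. The control store dimensions, the region structure and the routine name are assumptions for illustration; the point is that a block swap is nothing more than a rectangular copy of precomputed bit patterns.

    #include <stdint.h>
    #include <string.h>

    #define CS_ROWS 64
    #define CS_COLS 64

    static uint8_t control_store[CS_ROWS][CS_COLS];  /* model only */

    typedef struct {
        unsigned row, col;    /* origin of the block's bounding box        */
        unsigned rows, cols;  /* bounding box size in configuration words  */
    } block_region;

    /* Swapping in an alternative configuration for a replaceable block is
     * a row-by-row transfer of precomputed bit patterns into the block's
     * rectangle of configuration memory. */
    static void swap_block(const block_region *r, const uint8_t *config)
    {
        for (unsigned y = 0; y < r->rows; y++)
            memcpy(&control_store[r->row + y][r->col],
                   &config[y * r->cols], r->cols);
    }

    int main(void)
    {
        static const uint8_t alt[2 * 2] = { 1, 2, 3, 4 };
        block_region r = { 8, 8, 2, 2 };  /* illustrative placement */
        swap_block(&r, alt);
        return 0;
    }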
The software can also provide for initialisation of state registers in replaceable blocks of the design.
Conveniently, state registers may be initialised to a default value associated with the block definition or to the previous state of the current instance of the block, thus restoring its status. This can be achieved using the CAL II architecture's ability to read and write registers randomly.
To ensure rapid reconfiguration, it is desirable to impose some restrictions on replaceable blocks. For example, each version of a replaceable block must have the same bounding box, I/O signals must appear at the same point on the block periphery on all versions of a replaceable block, and no version of a replaceable block may use any chip resources which extend outside its bounding box. For example, it would be unacceptable to use in a replaceable block a flyover wire which extended outside the bounding box of the replaceable block. A more restrictive rule which considerably simplifies the software is that no chip resources lying within the boundary of an instance of a replaceable block may be assigned to any other block in the design. CAD software can easily check whether these restrictions have been met. If they are not met, the block can be ruled illegal. Alternatively, a more general purpose and slower reconfiguration algorithm which checks individual resources for conflicts rather than checking bounding boxes can be used.
In some cases, there are relatively few potential configurations of the device, and extremely rapid switching between these configurations is desirable. In such a situation, in order to minimise the number of device accesses, optimisation software (which could have a long run-time) may be used to analyse the device configuration file and a list of potential reconfigurations. This optimisation software will produce a set of configuration operations which take advantage of the multiple write capabilities of the device and change only those bits of control store which are different. The optimiser output could be stored as high level language code or machine language program segments for a host processor. These pre-computed instructions, when executed, will then perform the reconfiguration, rather than a data file controlling reconfiguration.
Fig. 39 is a schematic diagram of an FPGA shown located on an address bus and data bus with a microprocessor and memory (EPROM and/or RAM). This depicts the simplicity of using the FPGA in a microprocessor based circuit application. The CAL II architecture does not support bi-directional and tri-state wires. The principal reason for this is that the CAL II is intended to support dynamic reconfiguration by user software. It is possible that during dynamic reconfiguration, the control store may be configured incorrectly, either as the result of the program being terminated mid-way through configuring the array, or because of errors in the user's software. In an architecture where a wire can be driven by multiple transceivers, each of which is controlled by an independent bit of RAM, there is the inherent potential for conflict, resulting in high power dissipation and potential damage to the device if the control store is incorrectly configured. Such a situation is tolerable when configurations are static and generated by trusted software, but is unacceptable in a device intended to support frequent reconfiguration. The function of tri-state buses can be emulated using wire-OR or wire-AND buses implemented using the cell logic gates and the longer logic wires provided by the CAL II array.

Example Applications Using CAL II
Figs. 40-48 show example applications of the CAL II architecture. The drawing convention used in Figs. 40-48 represents function units of a cell which are used by a design as a central box, with a name on the box to represent the selected function. The drawing convention places signals in order according to decreasing length from the perimeter towards the center of the cell, so that, for example, length-4 flyovers appear near the perimeter of cells and the neighbour interconnects closer to the function block. Lines which turn at a single dotted line and pass to the central box in the cell represent signals being handled by one of multiplexers 58, 60, or 62 of Fig. 10. Lines which terminate at the edge of a box represent inputs to the function unit. The side of the function unit contacted corresponds to input terminals on Fig. 11 as indicated in Table I.

Table I

Function Class     Left       Right      Top        Centre
ZERO and ONE       Not used   Not used   Not used   F
A and -A           A          Not used   Not used   F
B and -B           Not used   B          Not used   F
Two Input Comb.    A          B          Not used   F
Multiplexer        A          B          Sel        F
Register           D          Clk        Clr        Q

Lines which exit from the center of a cell function unit represent signals which have been placed on the SELF line by function unit 48 of Fig. 10 and further connected to a neighbour cell by one of multiplexers 50, 52, 54, or 56.
Lines which pass through one cell close to the function unit and on to the next cell represent signals being received on a N, S, E, or W input by one of multiplexers 50, 52, 54, or 56 of Fig. 10 and passed to NOUT, SOUT, EOUT, or WOUT by that multiplexer. In order to simplify the drawings, the switches 18 or 20 (switches are illustrated in Figs. 15, 16, and 17), which are shown and labeled in Fig. 41, are not labeled in Figs. 42-48. These switches are positioned between the double lines which separate cell blocks, as shown in Fig. 41.
Fig. 40 depicts a first implementation of an application using the CAL II architecture: a 4-input AND gate. The 4-input AND gate is provided in a 4 x 4 block of cells which typically implements additional functions, although for clarity only those cells which implement the AND gate are shown. Wide gates are found in many important logic structures including address decoders and the AND and OR planes of ROM/PLA/PAL type structures. It is essential to be able to implement such wide gates efficiently in terms of both speed and area. The CAL II architecture supports the fast implementation of these wide gates by using a tree of two-input, one-output logic cells 12. In the tree structure shown in Fig. 40, the delay grows logarithmically rather than linearly with the number of inputs. The drawing convention in Figs. 40 through 48 represents function units within the logic cells as rectangles labeled with their selected function. Those cells whose function unit is unused do not contain a rectangle. Input signals to the logic cells are shown contacting the logic cell rectangles at their edges, and outputs are shown leaving the logic cell rectangles from their centers. Switches which connect neighbour cells are positioned on the single dashed lines, but for clarity are not shown. Switches 18, which were illustrated in Figs. 1, 2, 15, and 16, are also not shown but are positioned between the double dashed lines.
In Fig. 40, AND gate 12a receives two inputs, IN0 and IN1, and AND gate 12c receives two inputs, IN2 and IN3. The outputs of AND gates 12a and 12c form the inputs to AND gate 12b, from which the output OUT is taken. The function units depicted in Figs. 18 to 25 allow true or complemented values of each input variable to be used, which is essential for decoders. The flexibility of the function unit in the OR plane of a ROM enables the number of product terms to be halved, and the routing resources provided by the CAL II architecture allow tree-structured gates with up to 32 inputs to be implemented in a single column of cells.
Fig. 41 depicts a 16-cell AND gate with the 16 cells arranged in a column of the array. There are four 4-cell x 4-cell blocks arranged vertically. This arrangement depicts not only the connections between neighbour cells but also the connections between the blocks of cells using the length-4 and length-16 flyovers. The cells are numbered from cell 0 at the bottom to cell 15 at the top. In Fig. 41, switches 18 are shown located in the spaces between the cell blocks. In addition, it will be seen that lines 210, 211 and 212 depict length-4 flyover routing. Signals can only enter flyover 210 at the switch 18 between cells 3 and 4, although a signal can exit flyover 210 directly into cells 4, 5, 6, 7 and into the switch 18 between cells 7 and 8. In the bottom block comprising cell 0 through cell 3, there are three AND gates in cells 0 to 2. The output of cell 1 passes through an unused cell 3 and enters the switch 18 at the boundary of the bottom block. The output of cell 1 forms an input of the AND gate in cell 4. The output of the AND gate in cell 4 is sent via the switch 18 between cells 3 and 4 to length-4 flyover 210. Cell 8 has one input from length-4 flyover 210 and the other input from cell 12 via flyover 211 (without going through the switch 18 between cells 7 and 8). The output of the 16-input AND gate is taken from the output of cell 8 and is routed via the switch 18 between cells 7 and 8 onto flyover 212 to the switch 18 between cells 11 and 12, and provided as output at the top of Fig. 41.
4 plane is built up and mated to an OR plane to form a general - 5 purpose logic block. Inputs IN0 through IN15 are provided 6 to 8 columns of 16 rows of AND gates. Each column is - 7 connected as shown in Fig. 41 to form a tree structure.
8 Because there are eight columns of AND gates, connections g from the input signals IN0 through IN15 are applied to length-4 flyovers. Input signals are applied at the left of 11 the figure and the East flyovers are used. Since two input 12 signals are applied to AND gates at the lowest level of the 13 tree, and only one East length-4 flyover is provided for 14 each cell, in even rows, the East length-4 flyovers of adjacent cells are used, and signals transferred through 16 neighbor interconnect. For example, the row of cells 17 labeled ROW 5 receives its IN5 input from the East length-4 18 flyovers of ROW 5 and its IN4 input from the East length-4 19 flyover of ROW 4. But in the embodiment shown in Fig. 10, there is no provision to take routing from one neighbour 21 cell to another from a length-4 flyover. Therefore at the 22 switches indicated by double dotted lines the signal on the 23 east length-4 flyover is transferred to east neighbour 24 routing. One such switch transfer is labeled 424 in Fig.
42. For simplicity, other transfers are not labeled. The 26 IN4 signal is then transferred east through neighbour 27 routing to the next three adjacent cells. The signal is 28 also transferred by cells in row 4 upward along neighbor 29 interconnect to cells directly above in row 5. In accordance with the invention, the IN4 signal runs on the 1 east length-4 flyover as well as through neighbour 32 interconnect, so that it reaches the four AND gates at the right of the figure with less delay than if it had passed 34 through eight neighbour cells. Another switch e~uivalent to 424 transfers the IN4 signal to the neighbour interconnect 36 of the right 4 columns. In this application, it is not 37 necessary to also place the signal on the east length-4 W094/ 54 PCT/US93/104 ~
2i4~ 76 1 flyover at the right of the figure because no further 2 connection of the IN4 signal to the right of the figure is 3 made. Rows 6, 8, 10, 12, and 14 include the same 4 combination of length-4 flyovers and neighbour routing to get high speed. Rows 1 and 3 also include this co-m--bination~
6 though in these cases, the signal is passed downward to rows 7 0 and 2, respectively, rather than upward.
One row of 7 OR gates is positioned at the top of Fig. 42. OR gate OR0 receives as input the outputs of the first two columns of AND gates. OR gate OR2 receives as input the outputs of the third and fourth columns of AND gates. OR gate OR1 receives as input the outputs of OR gates OR0 and OR2. A similar tree structure is formed by OR gates OR4 through OR6, with the output signal OUT taken from OR gate OR3 through a length-4 flyover.
Fig. 43 depicts a one-bit accumulator constructed from a row of 5 cells in two (4 x 4) cell blocks. The cells are configured for XOR, AND, MUX and DC as shown to create a SUM output and a CARRY output.
Fig. 44 depicts a three-bit accumulator with a look ahead carry (for 3 inputs In0, In1 and In2, generating SUM0, SUM1, SUM2, and CARRYOUT).
Fig. 45 depicts an adder which is a 16-bit accumulator with a look ahead carry for minimising the delay along the carry chain. The CAL II architecture supports the 2:1 multiplexer as a cell function, as can be seen from Figs. 43, 44 and 45, and this reduces the carry path delay from two gate delays in the CAL I architecture to one gate delay in this architecture. The extra routing resources provided by the flyovers allow the one-bit adder shown in Fig. 43 to be implemented in a single row of cells, which reduces routing delays on the carry path as compared to a two-cell-high (CAL I) implementation. In this way it will be appreciated that accumulators and adders of various complexity can be constructed using the CAL II architecture and, of course, the routing resources can be used as shown in Fig. 45 to route the carry from a previous stage over a block of adders in a look ahead structure.
13 architecture is particularly effective because it provides 14 flyover routing resources to route the clock lines directly into the cells. Also, the look ahead function required by 16 the fast synchronous counters is provided using wide gates.
17 The 4-bit counter stage shown in Fig. 46 can be cascaded and 18 expanded to form a 16 bit synchronous counter as shown in 19 Fig. 47 using 4 blocks of 4 cells x 4 cells.
It will also be appreciated that wide multiplexers such as 16:1 multiplexers can be efficiently implemented as a tree of 2:1 multiplexers. Such an arrangement is depicted in Fig. 48, in which two 4 x 4 blocks of cells are used to form the tree. The first row of cells has eight cells implementing 2:1 multiplexers. The outputs of these multiplexers are fed to the inputs of multiplexer cells in the second row, whereupon two outputs are taken to the 2:1 multiplexer shown in the third row, which provides the output of the 16:1 multiplexer.
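The tree structure is easy to model in software. The following C sketch builds a 16:1 multiplexer from 2:1 multiplexers; each level consumes one select bit and halves the number of candidates, so the delay grows with log2 of the number of inputs rather than linearly.

    #include <stdio.h>

    static int mux2(int a, int b, int sel) { return sel ? b : a; }

    /* A 16:1 multiplexer as four levels of 2:1 multiplexers. */
    static int mux16(const int in[16], unsigned sel)
    {
        int level[16];
        unsigned n = 16;
        for (unsigned i = 0; i < 16; i++) level[i] = in[i];
        for (unsigned bit = 0; n > 1; bit++, n /= 2)  /* 4 tree levels */
            for (unsigned i = 0; i < n / 2; i++)
                level[i] = mux2(level[2*i], level[2*i+1], (sel >> bit) & 1);
        return level[0];
    }

    int main(void)
    {
        int in[16];
        for (int i = 0; i < 16; i++) in[i] = i * i;
        printf("%d\n", mux16(in, 5));  /* selects in[5], prints 25 */
        return 0;
    }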

Summary
It will readily be appreciated that all common logic structures can be implemented using this technology. The main additional features supported by the CAL II architecture are that the control store layout is arranged so that closely associated groups of resources within a cell are accessed through the same byte of the control store; additional logic circuits on the control store interface allow for word-wide read and write access to internal state registers in the user design; wildcard registers are provided in the control store address decoder to allow vectors of cells and bit slices in the user designs to be changed simultaneously; and a hierarchical routing structure consisting of length-4, 16 and 64 wires is overlaid on the basic cell grid to reduce the delay on longer wires. As described above, length-4 wires are used as function unit inputs to the basic cells. This structure can be extended upwardly in a hierarchical manner to length-64 and length-256 and so on for wires in future product families.
The CAL II architecture provides the ability to make dynamic accesses to a CAL II FPGA by mapping its control store into the address space of a host processor. This offers design opportunities which are not available from any other FPGA. Significant benefits can be gained from this architecture without abandoning traditional CAD frameworks. The CAL II architecture can be used in a variety of modes, and four principal modes have been identified:
1. Conventional ASIC: In this mode, conventional ASIC/FPGA design tools are used to produce a static configuration file which is then loaded into the device from an EPROM or other non-volatile store at power up. No host processor is needed, although it will be appreciated that if such a host processor is available, savings in board area can be obtained by storing the CAL II design configuration within the host processor's memory. The use of the host processor also allows configuration time and configuration data size to be greatly reduced by taking advantage of the wildcard units in the CAL II address decoders.
2. Processor Access to Internal State: In this arrangement, again a conventional ASIC process flow is used to produce a static configuration which is then down-loaded on power up. While the device is active the processor accesses internal registers of the user's design to store and to retrieve values. The control store interface can be regarded as providing free wiring to all internal registers of the user's design. Use of existing control store wiring can increase density by eliminating wires which would otherwise be required to route signals to the chip edge, and can also reduce design complexity. This design style is particularly attractive in applications where the FPGA provides an interface between a microprocessor and external hardware. Software running in the host calculates the addresses of internal registers using trivial computations based on placement information output from the CAD system.
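As a hedged illustration of such a trivial computation, a register's control-store address might be derived directly from its placement; the layout constants below are assumptions for the sketch, not the device's actual memory map:

    #include <stdint.h>

    #define BYTES_PER_CELL 8u    /* assumed control-store bytes per cell */
    #define CELLS_PER_ROW  64u   /* assumed array width in cells         */

    /* (row, col) comes from the placement data emitted by the CAD
     * system; the result is an offset into the mapped control store. */
    uint32_t register_address(uint32_t row, uint32_t col)
    {
        return (row * CELLS_PER_ROW + col) * BYTES_PER_CELL;
    }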
3. Multiple Unrelated FPGA Configurations: In this design style several complete FPGA designs are undertaken in parallel using a conventional CAD system and then verified independently. Run-time software on the host processor can then swap between various configurations of the FPGA device. Conveniently, FPGA configurations can be associated with processes running on the host processor and swapped during process context switches, preserving the state of internal registers. In this way, each process can appear to have access to its own 'virtual' FPGA. These multiple configurations must, however, be designed to co-operate with each other if any user I/O pins are shared by multiple configurations. The additional circuits on the CAL II control store interface greatly reduce the number of write operations to switch between various device configurations. One example of an application suited to this technique is a laser printer controller, where the FPGA initially operates as an RS232 interface to down-load a printer image file and is then reconfigured to control the print engine and implement low-level graphics operations.
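A minimal C sketch of the run-time side of this mode follows; the helper names and the state-buffer size are hypothetical, with state save and restore standing in for the word-wide register accesses described above:

    #include <stdint.h>

    #define STATE_BYTES 256u  /* assumed size of a design's register state */

    void cal_configure(const uint8_t *image, uint32_t len); /* see mode 1 */
    void cal_save_state(uint8_t *buf);            /* hypothetical helpers */
    void cal_restore_state(const uint8_t *buf);   /* over cal_read/write  */

    struct fpga_context {
        const uint8_t *config;             /* verified configuration image */
        uint32_t       config_len;
        uint8_t        state[STATE_BYTES]; /* saved internal registers     */
    };

    /* Called by the host OS when switching between processes that each
     * own a 'virtual' FPGA configuration. */
    void fpga_context_switch(struct fpga_context *out, struct fpga_context *in)
    {
        cal_save_state(out->state);            /* preserve outgoing design */
        cal_configure(in->config, in->config_len);
        cal_restore_state(in->state);          /* resume incoming design   */
    }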
4. Algorithmic Use of Dynamic Reconfiguration: In this design style portions of the circuit implemented on the FPGA are reconfigured dynamically as part of the computation being performed. For example, the routing network in the FPGA may be used directly to perform a permutation function over the FPGA input pins. The largest part of the design work and much of the verification can be done using conventional ASIC design tools.
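The permutation example, for instance, needs nothing more than control-store writes that re-select routing multiplexers. In the C sketch below, MUX_SEL_OFFSET is a hypothetical mapping from a pin to the control byte of its routing multiplexer:

    #include <stdint.h>

    #define MUX_SEL_OFFSET(pin) (0x1000u + (pin))  /* assumed layout */

    void cal_write(uint32_t offset, uint8_t value);  /* sketched earlier */

    /* Reprogram the routing so that input perm[i] appears on output i,
     * i.e. the array itself computes the permutation. */
    void set_permutation(const uint8_t perm[16])
    {
        uint32_t i;
        for (i = 0; i < 16; i++)
            cal_write(MUX_SEL_OFFSET(i), perm[i]);
    }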
A high percentage of system designs in present use consist of a processor, memory, and chips to interface to I/O devices on the circuit board. The design of such a system consists of both hardware design of any ASIC or FPGA and the board itself, and also software design of the program for the processor which implements most of the desired functionality. Mapping of the control store of the FPGA into the address space of the processor provides the opportunity to move elements of the design from the hardware engineer to the software engineer, which simplifies the overall design process. It is still necessary for software to lay out the user's design onto the hardware of the CAL II device, but the software for this task can be less complex because of the regularity of the CAL II architecture.
A principal advantage of the CAL II structure is that it is simple, symmetrical, and regular, which allows novice users to quickly make use of the array of fine-grained cells, and permits CAD tools to make efficient use of the resources available. A further advantage of the CAL II array is that it provides flexibility in placing functional blocks of designs on the array to meet an overall size constraint. The arrangement of the control store and the use of the wildcard registers and shift and mask registers minimises the number of microprocessor instructions required to access device resources and status. The specific structure of the control store allows many control bits to be written simultaneously instead of one at a time because of the structured set of data in the RAM. This has the advantage of reducing the testing overhead because testing uses regular configurations. The advantage of the hierarchical scaling is that delays are logarithmic in terms of distance in cell units, and delays are hence considerably reduced in comparison with previous designs. Since the flyover wires can only be driven by one element, dynamic access to the control store is safer because there is no possibility of incorrect configurations causing contention. This added safety is useful in situations where the FPGA configuration is intended to be frequently altered by a user.
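To make the logarithmic-delay claim concrete: with wire lengths 1, 4, 16 and 64, a greedy routing model needs at most a few segments of each length, so the segment count grows roughly as log4 of the distance. A back-of-envelope C sketch (an illustration of the scaling, not the actual routing algorithm):

    /* Count wire segments needed to span d cells, greedily taking the
     * longest of the lengths 64, 16, 4, 1 that still fits. At most
     * three segments of each length are used, giving O(log4 d) segments
     * overall, versus d neighbour-to-neighbour hops. */
    int segments_needed(int d)
    {
        int segments = 0;
        while (d > 0) {
            int len = 64;
            while (len > d)
                len /= 4;   /* longest wire not exceeding the remainder */
            d -= len;
            segments++;
        }
        return segments;
    }

Spanning 100 cells, for example, takes four segments (64 + 16 + 16 + 4) under this model.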

Claims (42)

I claim:
1. An hierarchically-scalable cellular array comprising:
a plurality of cells;
a first routing resource for interconnecting each cell to its adjacent neighboring cells, said first routing resource including a first plurality of lines having a length n, where n is a predetermined distance;
a second routing resource including a second plurality of lines having a length N x n where N > 1; and a third routing resource including a third plurality of lines having a length N^m x n where m ≥ 2 and where N^m x n ≤ T, the total length of said cellular array.
2. The cellular array of Claim 1 wherein n is approximately one cell width.
3. The cellular array of Claim 1 wherein N is 4.
4. The cellular array of Claim 3 wherein m is 2.
5. The cellular array of Claim 3 wherein m is 3.
6. The cellular array of Claim 1 further including a fourth routing resource including a fourth plurality of lines having a length N^r x n where r ≥ 3, and r = m + 1.
7. A programmable logic structure comprising:
an array of cells arranged in rows and columns;
a plurality of switches, said plurality of switches partitioning said array into a first plurality of cell blocks, wherein each cell includes:
four input lines for providing four input signals to said cell, each of said input lines coupled to an adjacent cell or switch; and four output lines for providing four output signals from said cell, each of said output lines coupled to an adjacent cell or switch, wherein said array further includes four intermediate, directed input lines for providing an additional four input signals to said cell, wherein each intermediate input line provides signals to the cells in either a row or a column of one of said blocks.
8. The logic structure of Claim 7 wherein said array further includes a plurality of flyover lines, each flyover line coupled between two of said plurality of switches, wherein said plurality of flyover lines determines a second plurality of cell blocks.
9. In an integrated circuit having a two-dimensional array of logic cells, a switch comprising:
a set of multiplexers, each of which receives a plurality of input signals and provides an output signal;
for each multiplexer, multiplexer control means for causing said multiplexer to select one of said input signals to provide said output signal, said input signals being taken from the following wires:
a wire extending to the west from said switch and having a length approximately equal to a first multiple of said length of one of said cells;
a wire extending to the east from said switch and having a length approximately equal to said first multiple of said length of one of said cells;
a wire extending to the west from said switch and having a length approximately equal to a second multiple of said length of one of said cells; and a wire extending to the east from said switch and having a length approximately equal to said second multiple of said length of one of said cells;
said output signals being placed on the following wires:
a wire extending to the west from said switch and having a length approximately equal to a first multiple of said length of one of said cells;
a wire extending to the east from said switch and having a length approximately equal to said first multiple of said length of one of said cells;
a wire extending to the west from said switch and having a length approximately equal to a second multiple of said length of one of said cells; and a wire extending to the east from said switch and having a length approximately equal to said second multiple of said length of one of said cells.
10. The switch of Claim 9 wherein said first multiple is 1 and said second multiple is 4.
11. The switch of Claim 9 wherein said first multiple is 4 and said second multiple is 16.
12. The switch of Claim 9 wherein said first multiple is 16 and said second multiple is 64.
13. The switch of Claim 9 wherein said multiplexers further receive input signals taken from the following wires:
a wire extending to the west from said switch and having a length approximately equal to a third multiple of said length of one of said cells; and a wire extending to the east from said switch and having a length approximately equal to said third multiple of said length of one of said cells.
14. The switch of Claim 13 wherein said first multiple is 1, said second multiple is 4, and said third multiple is 16.
15. The switch of Claim 13 wherein said first multiple is 4, said second multiple is 16, and said third multiple is 64.
16. A programmable integrated circuit comprising:
a two-dimensional array of logic cells, each cell comprising means for generating as an output signal a selected logic function of a plurality of logic cell input signals;
a plurality of switches, said switches positioned so as to group said cells into cell blocks;
each cell including:
four short input wires for providing four of said input signals, said four short input wires carrying output signals from the cells or the switches which comprise nearest neighbors on four sides of said cell;
four medium input wires for providing four of said input signals, said four medium input wires extending in four compass directions between switches which define a cell block in which said cell is positioned, wherein a plurality of said medium input wires are simultaneously accessible by said cell; and four short output wires for providing said output signal, said four short output wires carrying output signals to the cells or the switches which comprise nearest neighbors on four sides of said cell.
17. The programmable integrated circuit of Claim 16 wherein each of said medium input wires carries a signal from a switch in one of the four compass directions to said cell and to other cells positioned between two of said switches making up said cell block.
18. The programmable integrated circuit of Claim 17 wherein said medium wires are approximately four times the length of said short wires.
19. A programmable logic structure comprising:
an array of cells;
a plurality of switches arranged to group said cells into cell blocks such that for each cell block at least one west switch is positioned to the west of that block, and at least one east switch is positioned to the east of that block;
a neighbor wire connecting from an output of said west switch to an input of the nearest cell east of said west switch;
a flyover wire connecting from an output of said west switch to an input of said east switch;
a neighbor wire connecting from an output of said east switch to an input of the nearest cell west of said east switch; and a flyover wire connecting from an output of said east switch to an input of said west switch.
20. A programmable logic structure of Claim 19 wherein said plurality of switches is further arranged such that for each cell block at least one north switch is positioned to the north of that block, and at least one south switch is positioned to the south of that block, wherein said programmable logic structure further comprises:
a neighbor wire connecting from an output of said north switch to an input of the nearest cell south of said north switch;
a south flyover wire connecting from an output of said north switch to an input of said south switch;
a short wire connecting from an output of said south switch to an input of the nearest cell north of said south switch; and a north flyover wire connecting from an output of said south switch to an input of said north switch.
21. A programmable logic structure as in Claim 20 in which each of said cells comprises a plurality of smaller cells, each of said smaller cells being defined by:
a plurality of switches, said switches arranged to group said smaller cells into smaller cell blocks such that for each smaller cell block at least one west switch is positioned to the west of that block, at least one east switch is positioned to the east of that block, at least one north switch is positioned to the north of that block, and at least one south switch is positioned to the south of that block;
said smaller cells being connected to said switches by:
a neighbor wire connecting an output of said west switch to an input of the nearest smaller cell east of said west switch;
a neighbor wire connecting an output of the nearest smaller cell east of said west switch to an input of said west switch;
an east flyover wire connecting from an output of said west switch to an input of said east switch;
a neighbor wire connecting from an output of said east switch to an input of the nearest smaller cell west of said east switch; and a neighbor wire connecting an output of the nearest smaller cell west of said east switch to an input of said east switch;
a west flyover wire connecting from an output of said east switch to an input of said west switch;
a neighbor wire connecting from an output of said north switch to an input of the nearest smaller cell south of said north switch;
a neighbor wire connecting an output of the nearest smaller cell south of said north switch to an input of said north switch;
a south flyover wire connecting from an output of said north switch to an input of said south switch;
a neighbor wire connecting from an output of said south switch to an input of the nearest smaller cell north of said south switch; and a neighbor wire connecting an output of the nearest smaller cell north of said south switch to an input of said south switch;
a north flyover wire connecting from an output of said south switch to an input of said north switch.
22. An input/output structure for a programmable logic device having a plurality of logic cells comprising:
a plurality of pads, each pad connected to an external pin of said programmable logic device;
a plurality of I/O buffers, each I/O buffer connected to one of said plurality of pads, each I/O buffer including a switch for connecting said I/O buffer to at least one of said plurality of logic cells, wherein said switch forms part of a hierarchical interconnect structure; and means for controlling said switch.
23. A method for routing signals in a programmable logic structure comprising the steps of arranging an array of cells in rows and columns;
partitioning said array into a plurality of cell blocks with a plurality of switches;
coupling each input line associated with each cell to an adjacent cell or switch;
coupling each output line associated with each cell to said adjacent cell or switch;
providing an additional four directed, intermediate input lines for each cell, wherein each directed intermediate input line provides signals to the cells in either a row or a column of one of said plurality of cell blocks.
24. The method of Claim 23 further including the step of providing a plurality of flyover lines, each flyover line coupling two of said plurality of switches.
25. The method of Claim 24 further including the step of providing a signal from a first cell to a second cell, wherein said providing includes transferring said signal via at least one intermediate input line.
26. The method of Claim 25 wherein said step of providing further includes transferring said signal via at least one flyover line.
27. The method of Claim 23 further including the steps of:
sending a signal via a first path including a plurality of switching units, said first path paralleling a direct, second path;
using logic gates to detect whether said signal travels the full length of said first path; and if said signal travels said full length, then placing said signal on said second path.
28. A logic structure comprising:
an array of memory cells arranged in rows and columns;
a plurality of decoders, one decoder for each row or each column, each decoder receiving both true and complemented values for each address bit;
a pair of logic gates associated with each address bit, wherein said pair of logic gates provide said true and complemented values for each address bit; and a wildcard register providing one enable bit for each address bit, wherein both of said logic gates receive said enable bit from said wildcard register, one of said logic gates receives said address bit, and the other of said logic gates receives the complement of said address bit.
29. A method of simultaneously writing a plurality of cells comprising the steps of:
arranging a plurality of memory cells in rows and columns;
providing a decoder for each row or each column, each decoder receiving both true and complemented values for each address bit;
providing a pair of logic gates for each address bit, wherein said pair of logic gates provide said true and complemented values for each address bit; and placing a wildcard register in operative relation to said pair of logic gates, wherein said wildcard register provides one enable bit for each address bit, further wherein both of said logic gates receive said enable bit from said wildcard register, one of said logic gates receives said address bit, and the other of said logic gates receives the complement of said address bit.
30. A programmable logic device comprising:
an array of cells arranged in rows and columns;
a plurality of match registers, each match register placed in operative relation to a row or column, wherein if an address matches the stored value in a match register, then said match register activates said row or column.
31. The programmable logic device of Claim 30 further including a bit select register to select at least one predetermined bit from said row or column.
32. A method of facilitating multiple writes comprising the steps of:
arranging a plurality of cells in rows and columns;
placing a match register in operative relation to each row or column;
comparing an address to a stored value in each match register; and activating said row or said column if said address matches said stored value.
33. A register structure comprising:
a plurality of input lines;
a plurality of output lines;
a register providing a plurality of bits, the number of bits equal to the number of said plurality of input lines;
a plurality of tiered switches placed in operative relation to said input lines and said register, wherein said plurality of output lines is coupled to said plurality of input lines in a pattern determined by a value in said register.
34. The register structure of Claim 33 wherein each switch comprises:
a first transistor and a second transistor, each transistor having an enable terminal, a first terminal and a second terminal;
an enable line coupled to said enable terminal of said first transistor and to an inverter which in turn is coupled to said enable terminal of said second transistor;
a first input line coupled to said first terminal of said first transistor and a second input line coupled to said first terminal of said second transistor; and an output line coupled to the second terminals of said first and second transistors.
35. The register structure of Claim 34 wherein said first terminals of said switches on said first input line are coupled to a voltage source, wherein each bit of said register is provided to those enable terminals associated with one tier of said plurality of tiered switches, and further wherein if one tier is not the last tier, then said output line of one switch in said one tier is coupled to the second terminal of the switch in the next tier in the same input line as said one switch, if present, and said output line of said one switch in said one tier is further coupled to the first terminal of the switch in the next tier in the input line adjacent said input line of said one switch.
36. A method of providing access to a subset of a plurality of bits of a configuration memory comprising the steps of:
placing a register in operative relation to a plurality of input lines, wherein said register provides one bit for every input line;
positioning a plurality of switches in tiers in operative relation to said input lines; and interconnecting said plurality of output lines to said plurality of input lines in a pattern determined by a value in said register.
37. The method of Claim 36 wherein each switch comprises a first transistor and a second transistor, each transistor having an enable terminal, a first terminal and a second terminal.
38. The method of Claim 37 wherein said step of interconnecting includes the steps of:
coupling an enable line to said enable terminal of said first transistor and to an inverter which in turn is coupled to said enable terminal of said second transistor;
coupling a first input line to said first terminal of said first transistor and coupling a second input line to said first terminal of said second transistor;
coupling an output line to the second terminals of said first and second transistors;
coupling said first terminals of said switches on said first input line to a voltage source;

providing each bit of said register to said enable terminals associated with one tier of said plurality of tiered switches, wherein if one tier is not the last tier, then coupling said output line of one switch in said one tier to the second terminal of the switch in the next tier in the same input line as said one switch, if present, and coupling said output line of said one switch in said one tier to the first terminal of the switch in the next tier in the input line adjacent said input line of said one switch.
39. A decoding system in an array of cells comprising:
a plurality of bit lines placed in operative relation to said array;
a plurality of word lines placed in operative relation to said array;
a plurality of address decoders, wherein each address decoder is coupled to a word line; and a plurality of duplicate decoders, wherein each duplicate decoder is associated with a predetermined word or bit line, said duplicate decoder and the corresponding address decoder coupled to said predetermined word line via a logic gate.
40. A method of manipulating a first set of bits to produce a second set of bits comprising the steps of:
selecting a predetermined pattern which relates said first set of bits to said second set of bits; and configuring a register device to provide said pattern.
41. A routing device in a programmable logic device comprising:
a plurality of cells providing a first path, wherein each cell includes:
means for receiving a plurality of input signals and providing an output signal;
a plurality of memory bits associated with said means for receiving; and a first logic gate for providing a trigger signal determined by the state of said plurality of memory bits;
a second path not including cells, said second path paralleling said first path;
a second logic gate coupled to the output terminals of the first logic gates to detect whether said signal travels the full length of said first path; and means for determining whether a signal is provided on said first path or on said second path, said means for determining controlled by said second logic gate.
42. A function unit comprising:
a plurality of multiplexers; and at least one flip-flop, wherein a first set of said plurality of multiplexers receive input signals from a hierarchical interconnect system, and a second set of said plurality of multiplexers receive output signals and their complements from said first set of multiplexers, wherein one of said plurality of multiplexers is a function multiplexer which is controlled by an output signal from one of the multiplexers in said second set, wherein said function multiplexer receives output signals from the other multiplexers in said second set, wherein said second set of multiplexers provide input signals to said at least one flip-flop, and wherein one of said plurality of multiplexers receives output signals from said function multiplexer and said at least one flip-flop, and provides an output signal for said function unit.
CA002147363A 1992-11-05 1993-11-05 Improved configurable cellular array Abandoned CA2147363A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9223226.3 1992-11-05
GB929223226A GB9223226D0 (en) 1992-11-05 1992-11-05 Improved configurable cellular array (cal ii)

Publications (1)

Publication Number Publication Date
CA2147363A1 true CA2147363A1 (en) 1994-05-11

Family

ID=10724616

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002147363A Abandoned CA2147363A1 (en) 1992-11-05 1993-11-05 Improved configurable cellular array

Country Status (6)

Country Link
US (9) US5469003A (en)
EP (1) EP0669056A4 (en)
JP (1) JPH08503111A (en)
CA (1) CA2147363A1 (en)
GB (1) GB9223226D0 (en)
WO (1) WO1994010754A1 (en)

Families Citing this family (415)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5477165A (en) * 1986-09-19 1995-12-19 Actel Corporation Programmable logic module and architecture for field programmable gate array device
US5367208A (en) 1986-09-19 1994-11-22 Actel Corporation Reconfigurable programmable interconnect architecture
US20020130681A1 (en) 1991-09-03 2002-09-19 Cliff Richard G. Programmable logic array integrated circuits
US5550782A (en) * 1991-09-03 1996-08-27 Altera Corporation Programmable logic array integrated circuits
US6759870B2 (en) 1991-09-03 2004-07-06 Altera Corporation Programmable logic array integrated circuits
GB9223226D0 (en) 1992-11-05 1992-12-16 Algotronix Ltd Improved configurable cellular array (cal ii)
GB9312674D0 (en) * 1993-06-18 1993-08-04 Pilkington Micro Electronics Configurabel logic array
US5457410A (en) * 1993-08-03 1995-10-10 Btr, Inc. Architecture and interconnect scheme for programmable logic circuits
US6051991A (en) * 1993-08-03 2000-04-18 Btr, Inc. Architecture and interconnect scheme for programmable logic circuits
US6462578B2 (en) 1993-08-03 2002-10-08 Btr, Inc. Architecture and interconnect scheme for programmable logic circuits
US5805834A (en) * 1994-03-30 1998-09-08 Zilog, Inc. Hot reconfigurable parallel bus bridging circuit
JPH07271662A (en) * 1994-03-31 1995-10-20 Sony Corp Memory circuit and its access method, and method for generating data of memory
US5682107A (en) * 1994-04-01 1997-10-28 Xilinx, Inc. FPGA architecture with repeatable tiles including routing matrices and logic matrices
EP0755588B1 (en) * 1994-04-14 2002-03-06 Btr, Inc. Architecture and interconnect scheme for programmable logic circuits
US5689195A (en) 1995-05-17 1997-11-18 Altera Corporation Programmable logic array integrated circuit devices
US5802540A (en) * 1995-11-08 1998-09-01 Altera Corporation Programming and verification address generation for random access memory blocks in programmable logic array integrated circuit devices
US5600845A (en) * 1994-07-27 1997-02-04 Metalithic Systems Incorporated Integrated circuit computing device comprising a dynamically configurable gate array having a microprocessor and reconfigurable instruction execution means and method therefor
US5581199A (en) * 1995-01-04 1996-12-03 Xilinx, Inc. Interconnect architecture for field programmable gate array using variable length conductors
GB2297409B (en) * 1995-01-27 1998-08-19 Altera Corp Programmable logic devices
US5537057A (en) * 1995-02-14 1996-07-16 Altera Corporation Programmable logic array device with grouped logic regions and three types of conductors
US6049223A (en) * 1995-03-22 2000-04-11 Altera Corporation Programmable logic array integrated circuit with general-purpose memory configurable as a random access or FIFO memory
US5838585A (en) * 1995-03-24 1998-11-17 Lsi Logic Corporation Physical design automation system and method using monotonically improving linear clusterization
US5530378A (en) * 1995-04-26 1996-06-25 Xilinx, Inc. Cross point interconnect structure with reduced area
JP3948494B2 (en) * 1995-04-28 2007-07-25 ザイリンクス,インコーポレイテッド Microprocessor with distributed registers accessible by programmable logic device
GB9508932D0 (en) * 1995-05-02 1995-06-21 Xilinx Inc FPGA with parallel and serial user interfaces
US5701091A (en) * 1995-05-02 1997-12-23 Xilinx, Inc. Routing resources for hierarchical FPGA
WO1996035263A1 (en) * 1995-05-02 1996-11-07 Xilinx, Inc. Programmable switch for fpga input/output signals
US5600597A (en) * 1995-05-02 1997-02-04 Xilinx, Inc. Register protection structure for FPGA
GB9508931D0 (en) 1995-05-02 1995-06-21 Xilinx Inc Programmable switch for FPGA input/output signals
US5850564A (en) * 1995-05-03 1998-12-15 Btr, Inc, Scalable multiple level tab oriented interconnect architecture
AU5718196A (en) 1995-05-03 1996-11-21 Btr, Inc. Scalable multiple level interconnect architecture
US5543730A (en) * 1995-05-17 1996-08-06 Altera Corporation Techniques for programming programmable logic array devices
US5541530A (en) * 1995-05-17 1996-07-30 Altera Corporation Programmable logic array integrated circuits with blocks of logic regions grouped into super-blocks
US5909126A (en) 1995-05-17 1999-06-01 Altera Corporation Programmable logic array integrated circuit devices with interleaved logic array blocks
US5963049A (en) 1995-05-17 1999-10-05 Altera Corporation Programmable logic array integrated circuit architectures
US5521529A (en) * 1995-06-02 1996-05-28 Advanced Micro Devices, Inc. Very high-density complex programmable logic devices with a multi-tiered hierarchical switch matrix and optimized flexible logic allocation
US5631578A (en) * 1995-06-02 1997-05-20 International Business Machines Corporation Programmable array interconnect network
US5818254A (en) * 1995-06-02 1998-10-06 Advanced Micro Devices, Inc. Multi-tiered hierarchical high speed switch matrix structure for very high-density complex programmable logic devices
US5692147A (en) * 1995-06-07 1997-11-25 International Business Machines Corporation Memory mapping method and apparatus to fold sparsely populated structures into densely populated memory columns or rows by selectively transposing X and Y address portions, and programmable gate array applications thereof
JPH11510004A (en) 1995-07-19 1999-08-31 フジツウ ネットワーク コミュニケーションズ,インコーポレイテッド Point-to-multipoint transmission using subqueues
US6113260A (en) * 1995-08-16 2000-09-05 Raytheon Company Configurable interface module
WO1997010656A1 (en) 1995-09-14 1997-03-20 Fujitsu Network Communications, Inc. Transmitter controlled flow control for buffer allocation in wide area atm networks
GB2305759A (en) * 1995-09-30 1997-04-16 Pilkington Micro Electronics Semi-conductor integrated circuit
US5815004A (en) * 1995-10-16 1998-09-29 Xilinx, Inc. Multi-buffered configurable logic block output lines in a field programmable gate array
US5794033A (en) * 1995-10-24 1998-08-11 International Business Machines Corporation Method and system for in-site and on-line reprogramming of hardware logics with remote loading in a network device
US5781007A (en) * 1995-10-24 1998-07-14 General Electric Company Portable three axis scanner to inspect a gas turbine engine spool by eddy current or ultrasonic inspection
US5914906A (en) * 1995-12-20 1999-06-22 International Business Machines Corporation Field programmable memory array
US7266725B2 (en) 2001-09-03 2007-09-04 Pact Xpp Technologies Ag Method for debugging reconfigurable architectures
AU1697697A (en) 1996-01-16 1997-08-11 Fujitsu Limited A reliable and flexible multicast mechanism for atm networks
IL116792A (en) * 1996-01-16 2000-01-31 Chip Express Israel Ltd Customizable integrated circuit device
US5752006A (en) * 1996-01-31 1998-05-12 Xilinx, Inc. Configuration emulation of a programmable logic device
US5737766A (en) * 1996-02-14 1998-04-07 Hewlett Packard Company Programmable gate array configuration memory which allows sharing with user memory
GB9604496D0 (en) * 1996-03-01 1996-05-01 Xilinx Inc Embedded memory for field programmable gate array
US5726584A (en) * 1996-03-18 1998-03-10 Xilinx, Inc. Virtual high density programmable integrated circuit having addressable shared memory cells
US5694056A (en) * 1996-04-01 1997-12-02 Xilinx, Inc. Fast pipeline frame full detector
US5835998A (en) * 1996-04-04 1998-11-10 Altera Corporation Logic cell for programmable logic devices
US5977791A (en) 1996-04-15 1999-11-02 Altera Corporation Embedded memory block with FIFO mode for programmable logic device
US6212668B1 (en) 1996-05-28 2001-04-03 Altera Corporation Gain matrix for hierarchical circuit partitioning
US5742181A (en) * 1996-06-04 1998-04-21 Hewlett-Packard Co. FPGA with hierarchical interconnect structure and hyperlinks
US6384630B2 (en) 1996-06-05 2002-05-07 Altera Corporation Techniques for programming programmable logic array devices
US5764076A (en) * 1996-06-26 1998-06-09 Xilinx, Inc. Circuit for partially reprogramming an operational programmable logic device
US6094066A (en) * 1996-08-03 2000-07-25 Mission Research Corporation Tiered routing architecture for field programmable gate arrays
US5821772A (en) * 1996-08-07 1998-10-13 Xilinx, Inc. Programmable address decoder for programmable logic device
US5838165A (en) * 1996-08-21 1998-11-17 Chatter; Mukesh High performance self modifying on-the-fly alterable logic FPGA, architecture and method
US6624658B2 (en) 1999-02-04 2003-09-23 Advantage Logic, Inc. Method and apparatus for universal program controlled bus architecture
US6034547A (en) * 1996-09-04 2000-03-07 Advantage Logic, Inc. Method and apparatus for universal program controlled bus
JPH1084092A (en) * 1996-09-09 1998-03-31 Toshiba Corp Semiconductor integrated circuit
US5950052A (en) * 1996-09-17 1999-09-07 Seiko Epson Corporation Image forming apparatus
US5880597A (en) * 1996-09-18 1999-03-09 Altera Corporation Interleaved interconnect for programmable logic array devices
US6301694B1 (en) 1996-09-25 2001-10-09 Altera Corporation Hierarchical circuit partitioning using sliding windows
US6300794B1 (en) * 1996-10-10 2001-10-09 Altera Corporation Programmable logic device with hierarchical interconnection resources
US5999016A (en) * 1996-10-10 1999-12-07 Altera Corporation Architectures for programmable logic devices
US5977793A (en) * 1996-10-10 1999-11-02 Altera Corporation Programmable logic device with hierarchical interconnection resources
US5946219A (en) * 1996-10-30 1999-08-31 Atmel Corporation Method and system for configuring an array of logic devices
US6112020A (en) * 1996-10-31 2000-08-29 Altera Corporation Apparatus and method for generating configuration and test files for programmable logic devices
US6005410A (en) * 1996-12-05 1999-12-21 International Business Machines Corporation Interconnect structure between heterogeneous core regions in a programmable array
DE19651075A1 (en) 1996-12-09 1998-06-10 Pact Inf Tech Gmbh Unit for processing numerical and logical operations, for use in processors (CPU's), multi-computer systems, data flow processors (DFP's), digital signal processors (DSP's) or the like
DE19654595A1 (en) 1996-12-20 1998-07-02 Pact Inf Tech Gmbh I0 and memory bus system for DFPs as well as building blocks with two- or multi-dimensional programmable cell structures
ATE243390T1 (en) * 1996-12-27 2003-07-15 Pact Inf Tech Gmbh METHOD FOR INDEPENDENT DYNAMIC LOADING OF DATA FLOW PROCESSORS (DFPS) AND COMPONENTS WITH TWO- OR MULTI-DIMENSIONAL PROGRAMMABLE CELL STRUCTURES (FPGAS, DPGAS, O.L.)
US5959466A (en) 1997-01-31 1999-09-28 Actel Corporation Field programmable gate array with mask programmed input and output buffers
US6542998B1 (en) 1997-02-08 2003-04-01 Pact Gmbh Method of self-synchronization of configurable elements of a programmable module
US5999015A (en) * 1997-02-20 1999-12-07 Altera Corporation Logic region resources for programmable logic devices
US6127844A (en) 1997-02-20 2000-10-03 Altera Corporation PCI-compatible programmable logic devices
US5982195A (en) * 1997-02-20 1999-11-09 Altera Corporation Programmable logic device architectures
US7148722B1 (en) 1997-02-20 2006-12-12 Altera Corporation PCI-compatible programmable logic devices
US6204689B1 (en) 1997-02-26 2001-03-20 Xilinx, Inc. Input/output interconnect circuit for FPGAs
US6201410B1 (en) 1997-02-26 2001-03-13 Xilinx, Inc. Wide logic gate implemented in an FPGA configurable logic element
US6396303B1 (en) 1997-02-26 2002-05-28 Xilinx, Inc. Expandable interconnect structure for FPGAS
US5914616A (en) * 1997-02-26 1999-06-22 Xilinx, Inc. FPGA repeatable interconnect structure with hierarchical interconnect lines
US5963050A (en) 1997-02-26 1999-10-05 Xilinx, Inc. Configurable logic element with fast feedback paths
US5920202A (en) * 1997-02-26 1999-07-06 Xilinx, Inc. Configurable logic element with ability to evaluate five and six input functions
US5942913A (en) * 1997-03-20 1999-08-24 Xilinx, Inc. FPGA repeatable interconnect structure with bidirectional and unidirectional interconnect lines
US5889411A (en) * 1997-02-26 1999-03-30 Xilinx, Inc. FPGA having logic element carry chains capable of generating wide XOR functions
US6150837A (en) * 1997-02-28 2000-11-21 Actel Corporation Enhanced field programmable gate array
US6184710B1 (en) 1997-03-20 2001-02-06 Altera Corporation Programmable logic array devices with enhanced interconnectivity between adjacent logic regions
US6551857B2 (en) 1997-04-04 2003-04-22 Elm Technology Corporation Three dimensional structure integrated circuits
US6085317A (en) * 1997-08-15 2000-07-04 Altera Corporation Reconfigurable computer architecture using programmable logic devices
US5971595A (en) * 1997-04-28 1999-10-26 Xilinx, Inc. Method for linking a hardware description to an IC layout
US6314550B1 (en) * 1997-06-10 2001-11-06 Altera Corporation Cascaded programming with multiple-purpose pins
US6011407A (en) * 1997-06-13 2000-01-04 Xilinx, Inc. Field programmable gate array with dedicated computer bus interface and method for configuring both
JP3403614B2 (en) * 1997-06-13 2003-05-06 富士通株式会社 Data processing system with dynamic resource utilization function
US6006321A (en) 1997-06-13 1999-12-21 Malleable Technologies, Inc. Programmable logic datapath that may be used in a field programmable device
US5970254A (en) * 1997-06-27 1999-10-19 Cooke; Laurence H. Integrated processor and programmable data path chip for reconfigurable computing
US6286062B1 (en) 1997-07-01 2001-09-04 Micron Technology, Inc. Pipelined packet-oriented memory system having a unidirectional command and address bus and a bidirectional data bus
US6020760A (en) * 1997-07-16 2000-02-01 Altera Corporation I/O buffer circuit with pin multiplexing
US6011744A (en) * 1997-07-16 2000-01-04 Altera Corporation Programmable logic device with multi-port memory
US6034857A (en) * 1997-07-16 2000-03-07 Altera Corporation Input/output buffer with overcurrent protection circuit
US6078736A (en) 1997-08-28 2000-06-20 Xilinx, Inc. Method of designing FPGAs for dynamically reconfigurable computing
US5995971A (en) * 1997-09-18 1999-11-30 Micdrosoft Corporation Apparatus and accompanying methods, using a trie-indexed hierarchy forest, for storing wildcard-based patterns and, given an input key, retrieving, from the forest, a stored pattern that is identical to or more general than the key
US8686549B2 (en) 2001-09-03 2014-04-01 Martin Vorbach Reconfigurable elements
US9092595B2 (en) 1997-10-08 2015-07-28 Pact Xpp Technologies Ag Multiprocessor having associated RAM units
US6107824A (en) * 1997-10-16 2000-08-22 Altera Corporation Circuitry and methods for internal interconnection of programmable logic devices
US6121790A (en) * 1997-10-16 2000-09-19 Altera Corporation Programmable logic device with enhanced multiplexing capabilities in interconnect resources
US6107825A (en) * 1997-10-16 2000-08-22 Altera Corporation Input/output circuitry for programmable logic devices
US6084427A (en) 1998-05-19 2000-07-04 Altera Corporation Programmable logic devices with enhanced multiplexing capabilities
US6122719A (en) 1997-10-31 2000-09-19 Silicon Spice Method and apparatus for retiming in a network of multiple context processing elements
US5915123A (en) 1997-10-31 1999-06-22 Silicon Spice Method and apparatus for controlling configuration memory contexts of processing elements in a network of multiple context processing elements
US6108760A (en) * 1997-10-31 2000-08-22 Silicon Spice Method and apparatus for position independent reconfiguration in a network of multiple context processing elements
US5995744A (en) * 1997-11-24 1999-11-30 Xilinx, Inc. Network configuration of programmable circuits
US6212650B1 (en) 1997-11-24 2001-04-03 Xilinx, Inc. Interactive dubug tool for programmable circuits
US6185724B1 (en) 1997-12-02 2001-02-06 Xilinx, Inc. Template-based simulated annealing move-set that improves FPGA architectural feature utilization
US6069490A (en) * 1997-12-02 2000-05-30 Xilinx, Inc. Routing architecture using a direct connect routing mesh
US6898101B1 (en) 1997-12-16 2005-05-24 Cypress Semiconductor Corp. Microcontroller with programmable logic on a single chip
DE19756591B4 (en) * 1997-12-18 2004-03-04 Sp3D Chip Design Gmbh Device for hierarchically connecting a plurality of functional units in a processor
DE19861088A1 (en) 1997-12-22 2000-02-10 Pact Inf Tech Gmbh Repairing integrated circuits by replacing subassemblies with substitutes
US6134703A (en) * 1997-12-23 2000-10-17 Lattice Semiconductor Corporation Process for programming PLDs and embedded non-volatile memories
US6172520B1 (en) 1997-12-30 2001-01-09 Xilinx, Inc. FPGA system with user-programmable configuration ports and method for reconfiguring the FPGA
US6028445A (en) * 1997-12-30 2000-02-22 Xilinx, Inc. Decoder structure and method for FPGA configuration
US5883852A (en) * 1998-02-23 1999-03-16 Dynachip Corporation Configurable SRAM for field programmable gate array
US6772387B1 (en) 1998-03-16 2004-08-03 Actel Corporation Cyclic redundancy checking of a field programmable gate array having an SRAM memory architecture
US6049487A (en) * 1998-03-16 2000-04-11 Actel Corporation Embedded static random access memory for field programmable gate array
US6038627A (en) * 1998-03-16 2000-03-14 Actel Corporation SRAM bus architecture and interconnect to an FPGA
US7146441B1 (en) * 1998-03-16 2006-12-05 Actel Corporation SRAM bus architecture and interconnect to an FPGA
US7389487B1 (en) * 1998-04-28 2008-06-17 Actel Corporation Dedicated interface architecture for a hybrid integrated circuit
US6226735B1 (en) * 1998-05-08 2001-05-01 Broadcom Method and apparatus for configuring arbitrary sized data paths comprising multiple context processing elements
US6173419B1 (en) 1998-05-14 2001-01-09 Advanced Technology Materials, Inc. Field programmable gate array (FPGA) emulator for debugging software
US6020776A (en) * 1998-06-22 2000-02-01 Xilinx, Inc. Efficient multiplexer structure for use in FPGA logic blocks
US6467017B1 (en) 1998-06-23 2002-10-15 Altera Corporation Programmable logic device having embedded dual-port random access memory configurable as single-port memory
US6282627B1 (en) 1998-06-29 2001-08-28 Chameleon Systems, Inc. Integrated processor and programmable data path chip for reconfigurable computing
US6201404B1 (en) * 1998-07-14 2001-03-13 Altera Corporation Programmable logic device with redundant circuitry
US6094064A (en) * 1998-07-23 2000-07-25 Altera Corporation Programmable logic device incorporating and input/output overflow bus
US6137307A (en) * 1998-08-04 2000-10-24 Xilinx, Inc. Structure and method for loading wide frames of data from a narrow input bus
US6097210A (en) * 1998-08-04 2000-08-01 Xilinx, Inc. Multiplexer array with shifted input traces
US6069489A (en) 1998-08-04 2000-05-30 Xilinx, Inc. FPGA having fast configuration memory data readback
US5955751A (en) * 1998-08-13 1999-09-21 Quicklogic Corporation Programmable device having antifuses without programmable material edges and/or corners underneath metal
JP2000068488A (en) * 1998-08-20 2000-03-03 Oki Electric Ind Co Ltd Semiconductor integrated circuit layout method
US6549035B1 (en) 1998-09-15 2003-04-15 Actel Corporation High density antifuse based partitioned FPGA architecture
US6353920B1 (en) * 1998-11-17 2002-03-05 Xilinx, Inc. Method for implementing wide gates and tristate buffers using FPGA carry logic
US6507216B1 (en) 1998-11-18 2003-01-14 Altera Corporation Efficient arrangement of interconnection resources on programmable logic devices
US6215326B1 (en) 1998-11-18 2001-04-10 Altera Corporation Programmable logic device architecture with super-regions having logic regions and a memory region
US6225822B1 (en) * 1998-11-18 2001-05-01 Altera Corporation Fast signal conductor networks for programmable logic devices
DE69910826T2 (en) 1998-11-20 2004-06-17 Altera Corp., San Jose COMPUTER SYSTEM WITH RECONFIGURABLE PROGRAMMABLE LOGIC DEVICE
US6301695B1 (en) 1999-01-14 2001-10-09 Xilinx, Inc. Methods to securely configure an FPGA using macro markers
US6160418A (en) * 1999-01-14 2000-12-12 Xilinx, Inc. Integrated circuit with selectively disabled logic blocks
US6305005B1 (en) 1999-01-14 2001-10-16 Xilinx, Inc. Methods to securely configure an FPGA using encrypted macros
US6357037B1 (en) 1999-01-14 2002-03-12 Xilinx, Inc. Methods to securely configure an FPGA to accept selected macros
US6324676B1 (en) 1999-01-14 2001-11-27 Xilinx, Inc. FPGA customizable to accept selected macros
US6427199B1 (en) * 1999-01-19 2002-07-30 Motorola, Inc. Method and apparatus for efficiently transferring data between peripherals in a selective call radio
US6262933B1 (en) 1999-01-29 2001-07-17 Altera Corporation High speed programmable address decoder
US6654889B1 (en) 1999-02-19 2003-11-25 Xilinx, Inc. Method and apparatus for protecting proprietary configuration data for programmable logic devices
WO2000051183A1 (en) * 1999-02-22 2000-08-31 Actel Corporation A semi-hierarchical reprogrammable fpga architecture
US7003660B2 (en) 2000-06-13 2006-02-21 Pact Xpp Technologies Ag Pipeline configuration unit protocols and communication
US6407576B1 (en) 1999-03-04 2002-06-18 Altera Corporation Interconnection and input/output resources for programmable logic integrated circuit devices
JP3425100B2 (en) 1999-03-08 2003-07-07 松下電器産業株式会社 Field programmable gate array and method of manufacturing the same
US6256767B1 (en) * 1999-03-29 2001-07-03 Hewlett-Packard Company Demultiplexer for a molecular wire crossbar network (MWCN DEMUX)
US6255848B1 (en) 1999-04-05 2001-07-03 Xilinx, Inc. Method and structure for reading, modifying and writing selected configuration memory cells of an FPGA
US6262596B1 (en) 1999-04-05 2001-07-17 Xilinx, Inc. Configuration bus interface circuit for FPGAS
US6191614B1 (en) 1999-04-05 2001-02-20 Xilinx, Inc. FPGA configuration circuit including bus-based CRC register
US6351808B1 (en) 1999-05-11 2002-02-26 Sun Microsystems, Inc. Vertically and horizontally threaded processor with multidimensional storage for storing thread data
US6341347B1 (en) 1999-05-11 2002-01-22 Sun Microsystems, Inc. Thread switch logic in a multiple-thread processor
US6507862B1 (en) 1999-05-11 2003-01-14 Sun Microsystems, Inc. Switching method in a multi-threaded processor
US6542991B1 (en) 1999-05-11 2003-04-01 Sun Microsystems, Inc. Multiple-thread processor with single-thread interface shared among threads
US6938147B1 (en) 1999-05-11 2005-08-30 Sun Microsystems, Inc. Processor with multiple-thread, vertically-threaded pipeline
AU5805300A (en) 1999-06-10 2001-01-02 Pact Informationstechnologie Gmbh Sequence partitioning in cell structures
US6188242B1 (en) * 1999-06-30 2001-02-13 Quicklogic Corporation Virtual programmable device and method of programming
US6405352B1 (en) * 1999-06-30 2002-06-11 International Business Machines Corporation Method and system for analyzing wire-only changes to a microprocessor design using delta model
US6486702B1 (en) 1999-07-02 2002-11-26 Altera Corporation Embedded memory blocks for programmable logic
US6424567B1 (en) 1999-07-07 2002-07-23 Philips Electronics North America Corporation Fast reconfigurable programmable device
US6294926B1 (en) * 1999-07-16 2001-09-25 Philips Electronics North America Corporation Very fine-grain field programmable gate array architecture and circuitry
US6745317B1 (en) 1999-07-30 2004-06-01 Broadcom Corporation Three level direct communication connections between neighboring multiple context processing elements
US6204687B1 (en) 1999-08-13 2001-03-20 Xilinx, Inc. Method and structure for configuring FPGAS
US6625787B1 (en) * 1999-08-13 2003-09-23 Xilinx, Inc. Method and apparatus for timing management in a converted design
US6308309B1 (en) * 1999-08-13 2001-10-23 Xilinx, Inc. Place-holding library elements for defining routing paths
US6851047B1 (en) 1999-10-15 2005-02-01 Xilinx, Inc. Configuration in a configurable system on a chip
US7356541B1 (en) * 1999-10-29 2008-04-08 Computer Sciences Corporation Processing business data using user-configured keys
US6629311B1 (en) * 1999-11-17 2003-09-30 Altera Corporation Apparatus and method for configuring a programmable logic device with a configuration controller operating as an interface to a configuration memory
US6320412B1 (en) 1999-12-20 2001-11-20 Btr, Inc. C/O Corporate Trust Co. Architecture and interconnect for programmable logic circuits
GB9930145D0 (en) 1999-12-22 2000-02-09 Kean Thomas A Method and apparatus for secure configuration of a field programmable gate array
US20070288765A1 (en) * 1999-12-22 2007-12-13 Kean Thomas A Method and Apparatus for Secure Configuration of a Field Programmable Gate Array
US7240218B2 (en) * 2000-02-08 2007-07-03 Algotronix, Ltd. Method of using a mask programmed key to securely configure a field programmable gate array
US6438737B1 (en) 2000-02-15 2002-08-20 Intel Corporation Reconfigurable logic for a computer
US6256253B1 (en) * 2000-02-18 2001-07-03 Infineon Technologies North America Corp. Memory device with support for unaligned access
US6769109B2 (en) * 2000-02-25 2004-07-27 Lightspeed Semiconductor Corporation Programmable logic array embedded in mask-programmed ASIC
US6694491B1 (en) 2000-02-25 2004-02-17 Lightspeed Semiconductor Corporation Programmable logic array embedded in mask-programmed ASIC
US7233167B1 (en) * 2000-03-06 2007-06-19 Actel Corporation Block symmetrization in a field programmable gate array
US6567968B1 (en) * 2000-03-06 2003-05-20 Actel Corporation Block level routing architecture in a field programmable gate array
US6268743B1 (en) 2000-03-06 2001-07-31 Acatel Corporation Block symmetrization in a field programmable gate array
US6285212B1 (en) 2000-03-06 2001-09-04 Actel Corporation Block connector splitting in logic block of a field programmable gate array
US6861869B1 (en) 2000-03-06 2005-03-01 Actel Corporation Block symmetrization in a field programmable gate array
US7249105B1 (en) * 2000-03-14 2007-07-24 Microsoft Corporation BORE-resistant digital goods configuration and distribution methods and arrangements
US6469540B2 (en) 2000-06-15 2002-10-22 Nec Corporation Reconfigurable device having programmable interconnect network suitable for implementing data paths
US6912601B1 (en) 2000-06-28 2005-06-28 Cypress Semiconductor Corp. Method of programming PLDs using a wireless link
JP2002026132A (en) * 2000-07-07 2002-01-25 Mitsubishi Electric Corp Method for location wiring of semiconductor integrated circuit and storage media storing program of the same capable of being read and executed by computer
US6526557B1 (en) * 2000-07-25 2003-02-25 Xilinx, Inc. Architecture and method for partially reconfiguring an FPGA
WO2002013072A2 (en) * 2000-08-07 2002-02-14 Altera Corporation Inter-device communication interface
US7343594B1 (en) 2000-08-07 2008-03-11 Altera Corporation Software-to-hardware compiler with symbol set inference analysis
US6433603B1 (en) 2000-08-14 2002-08-13 Sun Microsystems, Inc. Pulse-based high speed flop circuit
US6870396B2 (en) * 2000-09-02 2005-03-22 Actel Corporation Tileable field-programmable gate array architecture
US7426665B1 (en) 2000-09-02 2008-09-16 Actel Corporation Tileable field-programmable gate array architecture
US7015719B1 (en) 2000-09-02 2006-03-21 Actel Corporation Tileable field-programmable gate array architecture
US6937063B1 (en) 2000-09-02 2005-08-30 Actel Corporation Method and apparatus of memory clearing with monitoring RAM memory cells in a field programmable gated array
US6476636B1 (en) 2000-09-02 2002-11-05 Actel Corporation Tileable field-programmable gate array architecture
US7055125B2 (en) * 2000-09-08 2006-05-30 Lightspeed Semiconductor Corp. Depopulated programmable logic array
US6904436B1 (en) * 2000-10-04 2005-06-07 Cypress Semiconductor Corporation Method and system for generating a bit order data structure of configuration bits from a schematic hierarchy
US6490712B1 (en) * 2000-10-04 2002-12-03 Cypress Semiconductor Corporation Method and system for identifying configuration circuit addresses in a schematic hierarchy
JP4022040B2 (en) * 2000-10-05 2007-12-12 松下電器産業株式会社 Semiconductor device
US8058899B2 (en) 2000-10-06 2011-11-15 Martin Vorbach Logic cell array and bus system
US6625794B1 (en) * 2000-11-06 2003-09-23 Xilinx, Inc. Method and system for safe device reconfiguration
GB2370380B (en) * 2000-12-19 2003-12-31 Picochip Designs Ltd Processor architecture
ITRM20010298A1 (en) * 2001-05-31 2002-12-02 Micron Technology Inc USER CONTROL INTERFACE WITH PROGRAMMABLE DECODER.
US7444531B2 (en) 2001-03-05 2008-10-28 Pact Xpp Technologies Ag Methods and devices for treating and processing data
US9411532B2 (en) 2001-09-07 2016-08-09 Pact Xpp Technologies Ag Methods and systems for transferring data between a processing device and external devices
US7581076B2 (en) * 2001-03-05 2009-08-25 Pact Xpp Technologies Ag Methods and devices for treating and/or processing data
US9250908B2 (en) 2001-03-05 2016-02-02 Pact Xpp Technologies Ag Multi-processor bus and cache interconnection system
US9141390B2 (en) 2001-03-05 2015-09-22 Pact Xpp Technologies Ag Method of processing data with an array of data processors according to application ID
US9037807B2 (en) 2001-03-05 2015-05-19 Pact Xpp Technologies Ag Processor arrangement on a chip including data processing, memory, and interface elements
US9436631B2 (en) 2001-03-05 2016-09-06 Pact Xpp Technologies Ag Chip including memory element storing higher level memory data on a page by page basis
US9552047B2 (en) 2001-03-05 2017-01-24 Pact Xpp Technologies Ag Multiprocessor having runtime adjustable clock and clock dependent power supply
US7844796B2 (en) 2001-03-05 2010-11-30 Martin Vorbach Data processing device and method
US6836839B2 (en) * 2001-03-22 2004-12-28 Quicksilver Technology, Inc. Adaptive integrated circuitry with heterogeneous and reconfigurable matrices of diverse and adaptive computational units having fixed, application specific computational elements
US7624204B2 (en) * 2001-03-22 2009-11-24 Nvidia Corporation Input/output controller node in an adaptable computing environment
US6462579B1 (en) * 2001-04-26 2002-10-08 Xilinx, Inc. Partial reconfiguration of a programmable gate array using a bus macro
US6605962B2 (en) 2001-05-06 2003-08-12 Altera Corporation PLD architecture for flexible placement of IP function blocks
US6630842B1 (en) 2001-05-06 2003-10-07 Altera Corporation Routing architecture for a programmable logic device
US6895570B2 (en) 2001-05-06 2005-05-17 Altera Corporation System and method for optimizing routing lines in a programmable logic device
US6653862B2 (en) 2001-05-06 2003-11-25 Altera Corporation Use of dangling partial lines for interfacing in a PLD
US6970014B1 (en) 2001-05-06 2005-11-29 Altera Corporation Routing architecture for a programmable logic device
US6720796B1 (en) 2001-05-06 2004-04-13 Altera Corporation Multiple size memories in a programmable logic device
US7076595B1 (en) 2001-05-18 2006-07-11 Xilinx, Inc. Programmable logic device including programmable interface core and central processing unit
GB0114317D0 (en) * 2001-06-13 2001-08-01 Kean Thomas A Method of protecting intellectual property cores on field programmable gate array
US10031733B2 (en) 2001-06-20 2018-07-24 Scientia Sol Mentis Ag Method for processing data
WO2002103532A2 (en) 2001-06-20 2002-12-27 Pact Xpp Technologies Ag Data processing method
US20030020082A1 (en) * 2001-07-25 2003-01-30 Motorola, Inc. Structure and method for fabricating semiconductor structures and devices for optical switching
DE60202152T2 (en) * 2001-08-07 2005-12-01 Xilinx, Inc., San Jose Application-specific test methods for programmable logic devices
US6664808B2 (en) * 2001-08-07 2003-12-16 Xilinx, Inc. Method of using partially defective programmable logic devices
US7996827B2 (en) 2001-08-16 2011-08-09 Martin Vorbach Method for the translation of programs for reconfigurable architectures
US7434191B2 (en) 2001-09-03 2008-10-07 Pact Xpp Technologies Ag Router
US8686475B2 (en) 2001-09-19 2014-04-01 Pact Xpp Technologies Ag Reconfigurable elements
US6798239B2 (en) * 2001-09-28 2004-09-28 Xilinx, Inc. Programmable gate array having interconnecting logic to support embedded fixed logic circuitry
US20030068038A1 (en) * 2001-09-28 2003-04-10 Bedros Hanounik Method and apparatus for encrypting data
US7420392B2 (en) * 2001-09-28 2008-09-02 Xilinx, Inc. Programmable gate array and embedded circuitry initialization and processing
US6781407B2 (en) 2002-01-09 2004-08-24 Xilinx, Inc. FPGA and embedded circuitry initialization and processing
US7310757B2 (en) * 2001-10-11 2007-12-18 Altera Corporation Error detection on programmable logic resources
US6996758B1 (en) 2001-11-16 2006-02-07 Xilinx, Inc. Apparatus for testing an interconnecting logic fabric
US6983405B1 (en) 2006-01-03 Xilinx, Inc. Method and apparatus for testing circuitry embedded within a field programmable gate array
US6886092B1 (en) 2001-11-19 2005-04-26 Xilinx, Inc. Custom code processing in PGA by providing instructions from fixed logic processor portion to programmable dedicated processor portion
US6668361B2 (en) * 2001-12-10 2003-12-23 International Business Machines Corporation Method and system for use of a field programmable function within a chip to enable configurable I/O signal timing characteristics
US7154298B1 (en) * 2001-12-14 2006-12-26 Lattice Semiconductor Corporation Block-oriented architecture for a programmable interconnect circuit
US7577822B2 (en) 2001-12-14 2009-08-18 Pact Xpp Technologies Ag Parallel task operation in processor and reconfigurable coprocessor configured based on information in link list including termination information for synchronization
US7035595B1 (en) * 2002-01-10 2006-04-25 Berkana Wireless, Inc. Configurable wireless interface
EP1483682A2 (en) 2002-01-19 2004-12-08 PACT XPP Technologies AG Reconfigurable processor
US6820248B1 (en) 2002-02-14 2004-11-16 Xilinx, Inc. Method and apparatus for routing interconnects to devices with dissimilar pitches
US8127061B2 (en) 2002-02-18 2012-02-28 Martin Vorbach Bus systems and reconfiguration methods
US6754882B1 (en) * 2002-02-22 2004-06-22 Xilinx, Inc. Method and system for creating a customized support package for an FPGA-based system-on-chip (SoC)
US6976160B1 (en) 2002-02-22 2005-12-13 Xilinx, Inc. Method and system for controlling default values of flip-flops in PGA/ASIC-based designs
US6934922B1 (en) 2002-02-27 2005-08-23 Xilinx, Inc. Timing performance analysis
US7007121B1 (en) 2002-02-27 2006-02-28 Xilinx, Inc. Method and apparatus for synchronized buses
US7111217B1 (en) 2002-02-28 2006-09-19 Xilinx, Inc. Method and system for flexibly nesting JTAG TAP controllers for FPGA-based system-on-chip (SoC)
US7088767B1 (en) 2002-03-01 2006-08-08 Xilinx, Inc. Method and apparatus for operating a transceiver in different data rates
US7111220B1 (en) 2002-03-01 2006-09-19 Xilinx, Inc. Network physical layer with embedded multi-standard CRC generator
US7187709B1 (en) 2002-03-01 2007-03-06 Xilinx, Inc. High speed configurable transceiver architecture
US6961919B1 (en) 2002-03-04 2005-11-01 Xilinx, Inc. Method of designing integrated circuit having both configurable and fixed logic circuitry
US20030174702A1 (en) * 2002-03-14 2003-09-18 Michael Meier Modifying overhead data of a transport layer frame
US8914590B2 (en) 2002-08-07 2014-12-16 Pact Xpp Technologies Ag Data processing method and device
US9170812B2 (en) 2002-03-21 2015-10-27 Pact Xpp Technologies Ag Data processing system having integrated pipelined array data processor
US6996713B1 (en) 2002-03-29 2006-02-07 Xilinx, Inc. Method and apparatus for protecting proprietary decryption keys for programmable logic devices
US7162644B1 (en) 2002-03-29 2007-01-09 Xilinx, Inc. Methods and circuits for protecting proprietary configuration data for programmable logic devices
US6774667B1 (en) 2002-05-09 2004-08-10 Actel Corporation Method and apparatus for a flexible chargepump scheme for field-programmable gate arrays
US6973405B1 (en) 2002-05-22 2005-12-06 Xilinx, Inc. Programmable interactive verification agent
US6891394B1 (en) * 2002-06-04 2005-05-10 Actel Corporation Field-programmable gate array low voltage differential signaling driver utilizing two complimentary output buffers
US7378867B1 (en) 2002-06-04 2008-05-27 Actel Corporation Field-programmable gate array low voltage differential signaling driver utilizing two complimentary output buffers
US6772405B1 (en) 2002-06-13 2004-08-03 Xilinx, Inc. Insertable block tile for interconnecting to a device embedded in an integrated circuit
US7129744B2 (en) * 2003-10-23 2006-10-31 Viciciv Technology Programmable interconnect structures
US7085973B1 (en) 2002-07-09 2006-08-01 Xilinx, Inc. Testing address lines of a memory controller
US7657861B2 (en) 2002-08-07 2010-02-02 Pact Xpp Technologies Ag Method and device for processing data
AU2003286131A1 (en) 2002-08-07 2004-03-19 Pact Xpp Technologies Ag Method and device for processing data
US6765427B1 (en) 2002-08-08 2004-07-20 Actel Corporation Method and apparatus for bootstrapping a programmable antifuse circuit
WO2004015764A2 (en) * 2002-08-08 2004-02-19 Leedy Glenn J Vertical system integration
US7099426B1 (en) 2002-09-03 2006-08-29 Xilinx, Inc. Flexible channel bonding and clock correction operations on a multi-block data path
US7434080B1 (en) 2002-09-03 2008-10-07 Actel Corporation Apparatus for interfacing and testing a phase locked loop in a field programmable gate array
AU2003289844A1 (en) 2002-09-06 2004-05-13 Pact Xpp Technologies Ag Reconfigurable sequencer structure
US7092865B1 (en) 2002-09-10 2006-08-15 Xilinx, Inc. Method and apparatus for timing modeling
US6750674B1 (en) 2002-10-02 2004-06-15 Actel Corporation Carry chain for use between logic modules in a field programmable gate array
US7269814B1 (en) 2002-10-08 2007-09-11 Actel Corporation Parallel programmable antifuse field programmable gate array device (FPGA) and a method for programming and testing an antifuse FPGA
US6885218B1 (en) 2002-10-08 2005-04-26 Actel Corporation Parallel programmable antifuse field programmable gate array device (FPGA) and a method for programming and testing an antifuse FPGA
US6937064B1 (en) * 2002-10-24 2005-08-30 Altera Corporation Versatile logic element and logic array block
US7116840B2 (en) 2002-10-31 2006-10-03 Microsoft Corporation Decoding and error correction in 2-D arrays
US7133563B2 (en) 2002-10-31 2006-11-07 Microsoft Corporation Passive embedded interaction code
US7146598B2 (en) * 2002-11-07 2006-12-05 Computer Network Technology Corp. Method and apparatus for configuring a programmable logic device
US6727726B1 (en) 2002-11-12 2004-04-27 Actel Corporation Field programmable gate array architecture including a buffer module and a method of distributing buffer modules in a field programmable gate array
US6946871B1 (en) * 2002-12-18 2005-09-20 Actel Corporation Multi-level routing architecture in a field programmable gate array having transmitters and receivers
TWI220738B (en) * 2002-12-20 2004-09-01 Benq Corp Method for effectively re-downloading data to a field programmable gate array
US6891396B1 (en) 2002-12-27 2005-05-10 Actel Corporation Repeatable block producing a non-uniform routing architecture in a field programmable gate array having segmented tracks
US7385420B1 (en) 2002-12-27 2008-06-10 Actel Corporation Repeatable block producing a non-uniform routing architecture in a field programmable gate array having segmented tracks
US7673118B2 (en) 2003-02-12 2010-03-02 Swarztrauber Paul N System and method for vector-parallel multiprocessor communication
US7873811B1 (en) * 2003-03-10 2011-01-18 The United States Of America As Represented By The United States Department Of Energy Polymorphous computing fabric
US7571973B2 (en) * 2003-03-22 2009-08-11 Hewlett-Packard Development Company, L.P. Monitoring fluid short conditions for fluid-ejection devices
US6943581B1 (en) 2003-03-27 2005-09-13 Xilinx, Inc. Test methodology for direct interconnect with multiple fan-outs
GB2400195B (en) * 2003-03-31 2005-06-29 Micron Technology Inc Active memory processing array topography and method
US7255437B2 (en) * 2003-10-09 2007-08-14 Howell Thomas A Eyeglasses with activity monitoring
US6838902B1 (en) * 2003-05-28 2005-01-04 Actel Corporation Synchronous first-in/first-out block memory for a field programmable gate array
US6825690B1 (en) 2003-05-28 2004-11-30 Actel Corporation Clock tree network in a field programmable gate array
US7375553B1 (en) 2003-05-28 2008-05-20 Actel Corporation Clock tree network in a field programmable gate array
US7757197B1 (en) * 2003-05-29 2010-07-13 Altera Corporation Method and apparatus for utilizing constraints for the routing of a design on a programmable logic device
US7385419B1 (en) * 2003-05-30 2008-06-10 Actel Corporation Dedicated input/output first in/first out module for a field programmable gate array
US6867615B1 (en) 2003-05-30 2005-03-15 Actel Corporation Dedicated input/output first in/first out module for a field programmable gate array
US6897676B1 (en) 2003-06-04 2005-05-24 Xilinx, Inc. Configuration enable bits for PLD configurable blocks
US7386826B1 (en) * 2003-06-24 2008-06-10 Xilinx, Inc. Using redundant routing to reduce susceptibility to single event upsets in PLD designs
JP4423953B2 (en) 2003-07-09 2010-03-03 Hitachi, Ltd. Semiconductor integrated circuit
US6990010B1 (en) 2003-08-06 2006-01-24 Actel Corporation Deglitching circuits for a radiation-hardened static random access memory based programmable architecture
EP1676208A2 (en) * 2003-08-28 2006-07-05 PACT XPP Technologies AG Data processing device and method
US7421014B2 (en) * 2003-09-11 2008-09-02 Xilinx, Inc. Channel bonding of a plurality of multi-gigabit transceivers
US7622947B1 (en) * 2003-12-18 2009-11-24 Nvidia Corporation Redundant circuit presents connections on specified I/O ports
US7583842B2 (en) 2004-01-06 2009-09-01 Microsoft Corporation Enhanced approach of m-array decoding and error correction
DE102004001669B4 (en) * 2004-01-12 2008-06-05 Infineon Technologies Ag Configurable logic device with a parallel configuration bus and without local configuration memory
US7263224B2 (en) 2004-01-16 2007-08-28 Microsoft Corporation Strokes localization by m-array decoding and fast image matching
US7328377B1 (en) 2004-01-27 2008-02-05 Altera Corporation Error correction for programmable logic integrated circuits
US7109746B1 (en) 2004-03-22 2006-09-19 Xilinx, Inc. Data monitoring for single event upset in a programmable logic device
US6975139B2 (en) 2004-03-30 2005-12-13 Advantage Logic, Inc. Scalable non-blocking switching network for programmable logic
US7698118B2 (en) * 2004-04-15 2010-04-13 Mentor Graphics Corporation Logic design modeling and interconnection
US7030652B1 (en) 2004-04-23 2006-04-18 Altera Corporation LUT-based logic element with support for Shannon decomposition and associated method
US7081772B1 (en) * 2004-06-04 2006-07-25 Altera Corporation Optimizing logic in non-reprogrammable logic devices
US7426678B1 (en) 2004-07-20 2008-09-16 Xilinx, Inc. Error checking parity and syndrome of a block of data with relocated parity bits
US7460529B2 (en) * 2004-07-29 2008-12-02 Advantage Logic, Inc. Interconnection fabric using switching networks in hierarchy
US20060080632A1 (en) * 2004-09-30 2006-04-13 Mathstar, Inc. Integrated circuit layout having rectilinear structure of objects
FR2879337A1 (en) * 2004-12-15 2006-06-16 STMicroelectronics SA Memory circuit, e.g. dynamic RAM or static RAM, for use in industrial applications, having data buses that respectively serve to read and write memory modules, and address buses connected to inputs of multiplexers
US7218138B2 (en) * 2004-12-23 2007-05-15 Lsi Corporation Efficient implementations of the threshold-2 function
US7627291B1 (en) * 2005-01-21 2009-12-01 Xilinx, Inc. Integrated circuit having a routing element selectively operable to function as an antenna
US20070247189A1 (en) * 2005-01-25 2007-10-25 Mathstar Field programmable semiconductor object array integrated circuit
US7607076B2 (en) 2005-02-18 2009-10-20 Microsoft Corporation Embedded interaction code document
US7826074B1 (en) 2005-02-25 2010-11-02 Microsoft Corporation Fast embedded interaction code printing with custom postscript commands
US7394708B1 (en) 2005-03-18 2008-07-01 Xilinx, Inc. Adjustable global tap voltage to improve memory cell yield
EP1877990A4 (en) * 2005-04-06 2009-11-04 Omnilink Systems Inc. System and method for tracking, monitoring, collecting, reporting and communicating with the movement of individuals
US7599560B2 (en) 2005-04-22 2009-10-06 Microsoft Corporation Embedded interaction code recognition
US7421439B2 (en) 2005-04-22 2008-09-02 Microsoft Corporation Global metadata embedding and decoding
US7542976B2 (en) * 2005-04-22 2009-06-02 Microsoft Corporation Local metadata embedding and decoding
US7400777B2 (en) 2005-05-25 2008-07-15 Microsoft Corporation Preprocessing for information pattern analysis
US7729539B2 (en) 2005-05-31 2010-06-01 Microsoft Corporation Fast error-correcting of embedded interaction codes
US7580576B2 (en) 2005-06-02 2009-08-25 Microsoft Corporation Stroke localization and binding to electronic document
US7619607B2 (en) 2005-06-30 2009-11-17 Microsoft Corporation Embedding a pattern design onto a liquid crystal display
US7622182B2 (en) 2005-08-17 2009-11-24 Microsoft Corporation Embedded interaction code enabled display
US7817816B2 (en) 2005-08-17 2010-10-19 Microsoft Corporation Embedded interaction code enabled surface type identification
US7439763B1 (en) 2005-10-25 2008-10-21 Xilinx, Inc. Scalable shared network memory switch for an FPGA
US7996604B1 (en) 2005-10-25 2011-08-09 Xilinx, Inc. Class queue for network data switch to identify data memory locations by arrival time
US7568074B1 (en) * 2005-10-25 2009-07-28 Xilinx, Inc. Time based data storage for shared network memory switch
US7730276B1 (en) 2005-10-25 2010-06-01 Xilinx, Inc. Striping of data into memory of a network data switch to prevent read and write collisions
US20070139074A1 (en) * 2005-12-19 2007-06-21 M2000 Configurable circuits with microcontrollers
US8250503B2 (en) 2006-01-18 2012-08-21 Martin Vorbach Hardware definition method including determining whether to implement a function as hardware or software
US7423453B1 (en) 2006-01-20 2008-09-09 Advantage Logic, Inc. Efficient integrated circuit layout scheme to implement a scalable switching network used in interconnection fabric
JP4755033B2 (en) * 2006-07-05 2011-08-24 Renesas Electronics Corporation Semiconductor integrated circuit
US7478357B1 (en) 2006-08-14 2009-01-13 Xilinx, Inc. Versatile bus interface macro for dynamically reconfigurable designs
US10402366B2 (en) * 2006-08-21 2019-09-03 Benjamin J. Cooper Efficient and scalable multi-value processor and supporting circuits
US8261138B2 (en) * 2006-10-24 2012-09-04 International Business Machines Corporation Test structure for characterizing multi-port static random access memory and register file arrays
US7884672B1 (en) 2006-11-01 2011-02-08 Cypress Semiconductor Corporation Operational amplifier and method for amplifying a signal with shared compensation components
US7508231B2 (en) 2007-03-09 2009-03-24 Altera Corporation Programmable logic device having redundancy with logic element granularity
US7456653B2 (en) * 2007-03-09 2008-11-25 Altera Corporation Programmable logic device having logic array block interconnect lines that can interconnect logic elements in different logic blocks
US8803672B2 (en) * 2007-05-15 2014-08-12 Sirius Xm Radio Inc. Vehicle message addressing
US8239430B2 (en) * 2007-10-09 2012-08-07 International Business Machines Corporation Accuracy improvement in CORDIC through precomputation of the error bias
US20090094306A1 (en) * 2007-10-09 2009-04-09 Krishnakalin Gahn W CORDIC rotation angle calculation
GB2454865B (en) * 2007-11-05 2012-06-13 Picochip Designs Ltd Power control
US20090144595A1 (en) * 2007-11-30 2009-06-04 Mathstar, Inc. Built-in self-testing (BIST) of field programmable object arrays
JP2009159567A (en) * 2007-12-28 2009-07-16 Panasonic Corp Reconfigurable circuit, configuration method, and program
JP5194805B2 (en) 2008-01-08 2013-05-08 Fujitsu Semiconductor Limited Semiconductor device and manufacturing method thereof
GB2457310B (en) * 2008-02-11 2012-03-21 Picochip Designs Ltd Signal routing in processor arrays
CN101393658B (en) * 2008-02-27 2011-04-20 Chongqing Changan Automobile Co., Ltd. Centrally controlled anti-theft method and system for automobiles
US8640071B2 (en) * 2008-06-06 2014-01-28 Nec Corporation Circuit design system and circuit design method
US7956639B2 (en) * 2008-07-23 2011-06-07 Ndsu Research Foundation Intelligent cellular electronic structures
US7705629B1 (en) * 2008-12-03 2010-04-27 Advantage Logic, Inc. Permutable switching network with enhanced interconnectivity for multicasting signals
US7714611B1 (en) 2008-12-03 2010-05-11 Advantage Logic, Inc. Permutable switching network with enhanced multicasting signals routing for interconnection fabric
GB2466661B (en) * 2009-01-05 2014-11-26 Intel Corp Rake receiver
GB2466821A (en) 2009-01-08 2010-07-14 Advanced Risc Mach Ltd An FPGA with an embedded bus and dedicated bus interface circuits
US8307182B1 (en) * 2009-01-31 2012-11-06 Xilinx, Inc. Method and apparatus for transferring data to or from a memory
KR20100108697A (en) * 2009-03-30 2010-10-08 Samsung Electronics Co., Ltd. Semiconductor memory device having swap function for DQ pads
GB2470037B (en) 2009-05-07 2013-07-10 Picochip Designs Ltd Methods and devices for reducing interference in an uplink
GB2470891B (en) 2009-06-05 2013-11-27 Picochip Designs Ltd A method and device in a communication network
GB2470771B (en) 2009-06-05 2012-07-18 Picochip Designs Ltd A method and device in a communication network
US7999570B2 (en) * 2009-06-24 2011-08-16 Advantage Logic, Inc. Enhanced permutable switching network with multicasting signals for interconnection fabric
US8085603B2 (en) * 2009-09-04 2011-12-27 Integrated Device Technology, Inc. Method and apparatus for compression of configuration bitstream of field programmable logic
GB2474071B (en) 2009-10-05 2013-08-07 Picochip Designs Ltd Femtocell base station
FR2951868B1 (en) * 2009-10-28 2012-04-06 Kalray Building blocks of a network on chip
FR2954023B1 (en) * 2009-12-14 2012-02-10 Lyon Ecole Centrale Interconnected matrix of reconfigurable logic cells with cross interconnection topology
GB2482869B (en) 2010-08-16 2013-11-06 Picochip Designs Ltd Femtocell access control
US8890567B1 (en) 2010-09-30 2014-11-18 Altera Corporation High speed testing of integrated circuits including resistive elements
US8972821B2 (en) * 2010-12-23 2015-03-03 Texas Instruments Incorporated Encode and multiplex, register, and decode and error correction circuitry
JP5171971B2 (en) * 2011-01-17 2013-03-27 Renesas Electronics Corporation Semiconductor integrated circuit
GB2489919B (en) 2011-04-05 2018-02-14 Intel Corp Filter
GB2489716B (en) 2011-04-05 2015-06-24 Intel Corp Multimode base system
GB2491098B (en) 2011-05-16 2015-05-20 Intel Corp Accessing a base station
US8874837B2 (en) * 2011-11-08 2014-10-28 Xilinx, Inc. Embedded memory and dedicated processor structure within an integrated circuit
US8959469B2 (en) 2012-02-09 2015-02-17 Altera Corporation Configuring a programmable device using high-level language
US9111151B2 (en) * 2012-02-17 2015-08-18 National Taiwan University Network on chip processor with multiple cores and routing method thereof
US9166598B1 (en) 2012-05-08 2015-10-20 Altera Corporation Routing and programming for resistive switch arrays
US9490811B2 (en) * 2012-10-04 2016-11-08 Efinix, Inc. Fine grain programmable gate architecture with hybrid logic/routing element and direct-drive routing
US9525419B2 (en) * 2012-10-08 2016-12-20 Efinix, Inc. Heterogeneous segmented and direct routing architecture for field programmable gate array
US8645892B1 (en) * 2013-01-07 2014-02-04 Freescale Semiconductor, Inc. Configurable circuit and mesh structure for integrated circuit
US9210486B2 (en) * 2013-03-01 2015-12-08 Qualcomm Incorporated Switching fabric for embedded reconfigurable computing
US8860457B2 (en) * 2013-03-05 2014-10-14 Qualcomm Incorporated Parallel configuration of a reconfigurable instruction cell array
FR3015068B1 (en) * 2013-12-18 2016-01-01 Commissariat Energie Atomique Signal processing module, in particular for a neural network, and neural circuit
CN103793190A (en) * 2014-02-07 2014-05-14 Beijing BOE Video Technology Co., Ltd. Information display method and device, and display equipment
US9378326B2 (en) * 2014-09-09 2016-06-28 International Business Machines Corporation Critical region identification
JP2016100870A (en) * 2014-11-26 2016-05-30 NEC Space Technologies, Ltd. Dynamic circuit device
CN104571949B (en) * 2014-12-22 2017-07-07 Huazhong University of Science and Technology Processor realizing the fusion of computation and storage based on memristors, and its operating method
US10879904B1 (en) * 2017-07-21 2020-12-29 X Development Llc Application specific integrated circuit accelerators
US10790828B1 (en) 2017-07-21 2020-09-29 X Development Llc Application specific integrated circuit accelerators
CN111913794A (en) * 2020-08-04 2020-11-10 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and device for sharing a GPU, electronic device and readable storage medium

Family Cites Families (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4020469A (en) 1975-04-09 1977-04-26 Frank Manning Programmable arrays
US4296475A (en) * 1978-12-19 1981-10-20 U.S. Philips Corporation Word-organized, content-addressable memory
US4268908A (en) 1979-02-26 1981-05-19 International Business Machines Corporation Modular macroprocessing system comprising a microprocessor and an extendable number of programmed logic arrays
US4538247A (en) * 1983-01-14 1985-08-27 Fairchild Research Center Redundant rows in integrated circuit memories
US4870302A (en) * 1984-03-12 1989-09-26 Xilinx, Inc. Configurable electrical circuit having configurable logic elements and configurable interconnects
US4670749A (en) * 1984-04-13 1987-06-02 Zilog, Inc. Integrated circuit programmable cross-point connection technique
JPS6177946A (en) * 1984-09-26 1986-04-21 Hitachi Ltd Semiconductor memory
US4642487A (en) 1984-09-26 1987-02-10 Xilinx, Inc. Special interconnect for configurable logic array
US4706216A (en) * 1985-02-27 1987-11-10 Xilinx, Inc. Configurable logic element
DE3630835C2 (en) 1985-09-11 1995-03-16 Pilkington Micro Electronics Integrated semiconductor circuit arrangements and systems
JPH0789674B2 (en) * 1985-10-22 1995-09-27 Siemens Aktiengesellschaft Wideband signal combiner
US5179540A (en) * 1985-11-08 1993-01-12 Harris Corporation Programmable chip enable logic function
US4700187A (en) 1985-12-02 1987-10-13 Concurrent Logic, Inc. Programmable, asynchronous logic cell and array
US5477165A (en) * 1986-09-19 1995-12-19 Actel Corporation Programmable logic module and architecture for field programmable gate array device
US5451887A (en) * 1986-09-19 1995-09-19 Actel Corporation Programmable logic module and architecture for field programmable gate array device
US5187393A (en) * 1986-09-19 1993-02-16 Actel Corporation Reconfigurable programmable interconnect architecture
US4758745B1 (en) * 1986-09-19 1994-11-15 Actel Corp User programmable integrated circuit interconnect architecture and test method
US4866508A (en) 1986-09-26 1989-09-12 General Electric Company Integrated circuit packaging configuration for rapid customized design and unique test capability
US5175865A (en) * 1986-10-28 1992-12-29 Thinking Machines Corporation Partitioning the processors of a massively parallel single array processor into sub-arrays selectively controlled by host computers
US4918440A (en) 1986-11-07 1990-04-17 Furtek Frederick C Programmable logic cell and array
US4847612A (en) * 1988-01-13 1989-07-11 Plug Logic, Inc. Programmable logic device
KR910003594B1 (en) * 1988-05-13 1991-06-07 Samsung Electronics Co., Ltd. Spare column selection method and circuit
DE68905240T2 (en) * 1988-06-01 1993-07-15 NEC Corp Semiconductor storage device with high-speed reading device
US4930107A (en) * 1988-08-08 1990-05-29 Altera Corporation Method and apparatus for programming and verifying programmable elements in programmable devices
US5221922A (en) * 1988-08-08 1993-06-22 Siemens Aktiengesellschaft Broadband signal switching matrix network
JP2723926B2 (en) * 1988-09-20 1998-03-09 Kawasaki Steel Corporation Programmable logic device
US4973956A (en) * 1988-12-22 1990-11-27 General Electric Company Crossbar switch with distributed memory
IT1225638B (en) 1988-12-28 1990-11-22 SGS-Thomson Microelectronics Logic device integrated as a network of distributed memory links
US4942319A (en) 1989-01-19 1990-07-17 National Semiconductor Corp. Multiple page programmable logic architecture
GB8906145D0 (en) * 1989-03-17 1989-05-04 Algotronix Ltd Configurable cellular array
US5343406A (en) * 1989-07-28 1994-08-30 Xilinx, Inc. Distributed memory architecture for a configurable logic array and method for using distributed memory
KR910006849A (en) * 1989-09-29 1991-04-30 Katsushige Mita Semiconductor integrated circuit device
US5015883A (en) * 1989-10-10 1991-05-14 Micron Technology, Inc. Compact multifunction logic circuit
US5073729A (en) * 1990-06-22 1991-12-17 Actel Corporation Segmented routing architecture
US5130947A (en) * 1990-10-22 1992-07-14 Motorola, Inc. Memory system for reliably writing addresses with reduced power consumption
US5144166A (en) * 1990-11-02 1992-09-01 Concurrent Logic, Inc. Programmable logic cell and array
US5122685A (en) * 1991-03-06 1992-06-16 Quicklogic Corporation Programmable application specific integrated circuit and logic cell therefor
US5317209A (en) * 1991-08-29 1994-05-31 National Semiconductor Corporation Dynamic three-state bussing capability in a configurable logic array
US5260611A (en) 1991-09-03 1993-11-09 Altera Corporation Programmable logic array having local and long distance conductors
US5260610A (en) * 1991-09-03 1993-11-09 Altera Corporation Programmable logic element interconnections for programmable logic array integrated circuits
US5559971A (en) * 1991-10-30 1996-09-24 I-Cube, Inc. Folded hierarchical crosspoint array
JP2790746B2 (en) * 1992-01-10 1998-08-27 Sharp Corporation Semiconductor storage device
JPH05324452A (en) * 1992-05-27 1993-12-07 NEC IC Microcomput Syst Ltd External memory interface circuit
GB9223226D0 (en) 1992-11-05 1992-12-16 Algotronix Ltd Improved configurable cellular array (CAL II)
US5457410A (en) 1993-08-03 1995-10-10 BTR, Inc. Architecture and interconnect scheme for programmable logic circuits
US5486775A (en) * 1993-11-22 1996-01-23 Altera Corporation Multiplexer structures for use in making controllable interconnections in integrated circuits
JP2600597B2 (en) * 1993-12-06 1997-04-16 NEC Corporation Dynamic circuit for information propagation
US5455525A (en) 1993-12-06 1995-10-03 Intelligent Logic Systems, Inc. Hierarchically-structured programmable logic array and system for interconnecting logic elements in the logic array

Also Published As

Publication number Publication date
JPH08503111A (en) 1996-04-02
US5670897A (en) 1997-09-23
WO1994010754A1 (en) 1994-05-11
US5528176A (en) 1996-06-18
US5552722A (en) 1996-09-03
US5798656A (en) 1998-08-25
US5500609A (en) 1996-03-19
GB9223226D0 (en) 1992-12-16
US5831448A (en) 1998-11-03
US6292018B1 (en) 2001-09-18
US5469003A (en) 1995-11-21
US5861761A (en) 1999-01-19
EP0669056A1 (en) 1995-08-30
EP0669056A4 (en) 1996-04-24

Similar Documents

Publication Publication Date Title
US5469003A (en) Hierarchically connectable configurable cellular array
US5894228A (en) Tristate structures for programmable logic devices
US6216257B1 (en) FPGA device and method that includes a variable grain function architecture for implementing configuration logic blocks and a complimentary variable length interconnect architecture for providing configurable routing between configuration logic blocks
Marshall et al. A reconfigurable arithmetic array for multimedia applications
US5883526A (en) Hierarchical interconnect for programmable logic devices
US6421817B1 (en) System and method of computation in a programmable logic device using virtual instructions
US6812738B1 (en) Vector routing in a programmable logic device
US5815726A (en) Coarse-grained look-up table architecture
JP3471088B2 (en) Improved programmable logic cell array architecture
US5570040A (en) Programmable logic array integrated circuit incorporating a first-in first-out memory
US6181162B1 (en) Programmable logic device with highly routable interconnect
US5809281A (en) Field programmable gate array with high speed SRAM based configurable function block configurable as high performance logic or block of SRAM
US6882177B1 (en) Tristate structures for programmable logic devices
JPH0256114A (en) Programmable logic device having array blocks coupled through programmable wiring
GB2295738A (en) Programmable logic array integrated circuits with enhanced output routing
US6570404B1 (en) High-performance programmable logic architecture
US10855285B2 (en) Field programmable transistor arrays
US7924051B2 (en) Programmable logic device with a microcontroller-based control system
GB2318663A (en) Hierarchical interconnect for programmable logic devices
York. Survey of field programmable logic devices
US11362662B2 (en) Field programmable transistor arrays
Hagemeyer et al. Dedicated module access in dynamically reconfigurable systems
Koch et al. Intra-FPGA Communication Architectures for Reconfigurable Systems
GB2315897A (en) Programmable logic cell array architecture

Legal Events

Date Code Title Description
FZDE Dead