Publication number: US 5105424 A
Publication type: Grant
Application number: US 07/201,682
Publication date: Apr 14, 1992
Filing date: Jun 2, 1988
Priority date: Jun 2, 1988
Fee status: Paid
Inventors: Charles M. Flaig, Charles L. Seitz
Original Assignee: California Institute of Technology
Inter-computer message routing system with each computer having separate routing automata for each dimension of the network
US 5105424 A
Abstract
In a multicomputer, concurrent computing system having a plurality of computing nodes, this is a method and apparatus for routing message packets between the nodes. The method comprises providing a routing circuit at each node and interconnecting the routing circuits to define communications paths interconnecting the nodes along which message packets can be routed; at each routing circuit, forming routes to other nodes as a sequence of direction changing and relative address indicators for each node between the starting node and each destination node; receiving a message packet to be transmitted to another node and an associated destination node designator therefor; retrieving the route to the destination node from a memory map; adding the route to the destination node to the beginning of the message packet as part of a header; transmitting the message packet to the routing circuit of the next adjacent node on the route to the destination node; and at each intermediate node, receiving the message packet; reading the header; directing the message packet to one of two outputs thereof as a function of routing directions in the header, updating the header to reflect passage through the routing circuit; and at the destination node, stripping remaining portions of the header from the message packet; storing the message packet; and, informing the node that the message packet has arrived.
Claims (26)
Wherefore, having thus described the present invention, what is claimed is:
1. An inter-computer message routing system wherein message packets are routed among a plurality of computers along communication paths between said computers in an n-dimensional network of said communication paths, different groups of said communication paths comprising different ones of the n dimensions of said network, said message packets each comprising a header containing successive routing directions relative to successive computers along a selected route in said network, said system comprising:
a plurality of routers, each router being associated with a corresponding one of said computers, each of said routers comprising n routing automata corresponding to said n dimensions, each of said n routing automata having plural message packet inputs and plural message packet outputs, at least some of said inputs and outputs being connected to respective communication paths of the corresponding one of said n dimensions, said n routing automata being connected together in cascade from a message packet output of one to a message packet input of the next one of said routing automata corresponding to a sequence of dimensions of the routing automata;
routing logic means disposed within each one of said routing automata, said routing logic means comprising means for reading the header of a message packet received from one of the inputs of said one routing automata, means for directing said message packet to one of said outputs of said one routing automata in accordance with the contents of said header, and means for modifying said header to reflect the passage of said message packet through said one routing automata, whereby each of said routing automata performs all message routing for the message packets traveling in a corresponding one of said dimensions.
2. The system of claim 1 wherein said means for directing said message packet to one of said outputs comprises plural decision element means each having one decision input and plural decision outputs for directing a message packet received at said decision input to one of said decision outputs in accordance with the contents of said header, and plural merge element means having plural merge inputs and a single merge output for directing message packets received at the plural merge inputs thereof to said merge output without interference between said message packets.
3. The system of claim 2 wherein within a single one of each routing automata at least some of said decision inputs are connected to respective message packet inputs, at least some of said merge inputs are connected to respective decision outputs and at least some of said merge outputs are connected to respective message packet outputs.
4. The system of claim 1 wherein said header comprises a sequence of symbols and said means for modifying said header comprises:
means for decrementing a leading symbol in said header whereby said header specifies a direction for said message packet relative to a next one of the computers along said selected path.
5. The system of claim 4 wherein said means for modifying said header further comprises means for stripping no longer needed portions of said header.
6. The system of claim 4 wherein said sequence of symbols include symbols each comprising a count decrementable by said means for modifying said header, said count specifying a duration of travel in a constant dimension.
7. The system of claim 6 wherein said system permits bi-directional travel in each one of said dimensions, said sequence of symbols further including binary directional symbols specifying one of two directions.
8. The system of claim 6 wherein said means for modifying said header comprises means for decrementing a leading non-zero one of said counts and wherein said header specifies a change to the next one of said sequence of dimensions upon two consecutive leading counts thereof preceding one of said binary directional symbols being decremented to zero.
9. The system of claim 8 wherein said means for directing said message packet comprises means responsive to said header specifying a change to the next one of said sequence of dimensions for directing said message packet to a message packet output connected in cascade to a message packet input of a next routing automata of said next dimension.
10. The system of claim 3 wherein:
travel by said message packets along said paths is mono-directional and said system is one of a group of networks comprising 2-dimensional mesh networks and 3 dimensional torus networks;
there are two message packet inputs in each routing automata comprising one message packet input which receives message packets traveling in the corresponding dimension and another message packet input connectable to a message packet output of a preceding routing automata connected in cascade therewith; and
there are two message packet outputs in each routing automata comprising one message packet output which transmits message packets for travel in said corresponding dimension and another message packet output connectable to a message packet input of a succeeding routing automata in cascade therewith.
11. The system of claim 10 wherein:
there are two decision element means and two merge element means in each of said routing automata;
each of said decision element means has two decision outputs and each merge element means has two merge inputs.
12. The system of claim 3 wherein:
travel by said message packets along said paths is bi-directional;
there are three message packet inputs in each routing automata comprising first and second message packet inputs which receive message packets traveling in first and second directions in the corresponding dimension respectively, and a third message packet input connectable to a message packet output of a preceding routing automata connected in cascade therewith; and
there are three message packet outputs in each routing automata comprising first and second message packet outputs which transmit message packets for travel in first and second directions in said corresponding dimension respectively, and a third message packet output connectable to the third message packet input of a succeeding routing automata in cascade therewith.
13. The system of claim 12 wherein:
there are at least three decision element means and three merge element means in each of said routing automata;
at least one of said decision element means has three decision inputs including first and second decision inputs corresponding to said first and second directions and a third input corresponding to said preceding dimension;
at least one of said merge element means has three merge inputs of which at least one is connected to one of the three decision outputs of said one decision element means having three decision outputs.
14. The system of claim 1 wherein each of said computers comprise an integrated circuit including the corresponding router, a processor, a memory, a packet interface connected to said router and a bus connecting said packet interface, said memory and said processor, wherein:
said processor comprises means for retrieving information data from said memory to be routed in said network in a message packet and for computing an initial header based upon destination instructions stored in said memory; and
said packet interface comprises means for forming a data stream comprising said information data and said initial header and transmitting said data stream to said router connected thereto as a message packet.
15. The system of claim 4 wherein:
said network is one-dimensional and comprises an asynchronous pipeline of successive ones of said computers in which said routers are connected in a cascaded succession of routers each comprising one routing automata; and
said means for decrementing comprises means for providing an asynchronous request signal to a successive one of said routing automata.
16. The system of claim 2 wherein said plural decision element means are a first set of substantially identical building block elements, said plural merge element means are a second set of substantially identical building block elements and wherein each of said routing automata are of substantially identical structure.
17. An inter-computer message routing system wherein message packets are routed among a plurality of computers along communication paths between said computers in an n-dimensional network of said communication paths, different groups of said communication paths comprising different ones of the n dimensions of said network, said message packets each comprising a header containing successive routing directions relative to successive computers along a selected route in said network, said system comprising:
a plurality of routers, each router being associated with a corresponding one of said computers, each of said routers comprising n routing automata corresponding to said n dimensions, each of said n routing automata having plural message packet inputs and plural message packet outputs, at least some of said inputs and outputs being connected to respective communication paths of the corresponding one of said n dimensions, said n routing automata being connected together in cascade from a message packet output of one to a message packet input of the next one of said routing automata corresponding to a sequence of dimensions of the routing automata;
routing logic means disposed within each one of said routing automata for directing a message packet received from one of the inputs of said one routing automata to one of said outputs of said one routing automata in accordance with the contents of said header, whereby said routing automata performs all message routing for the message packets traveling in a corresponding one of said dimensions.
18. The system of claim 17 wherein said means for directing said message packet to one of said outputs comprises plural decision element means each having one decision input and plural decision outputs for directing a message packet received at said decision input to one of said decision outputs in accordance with the contents of said header, and plural merge element means having plural merge inputs and a single merge output for directing message packets received at the plural merge inputs thereof to said merge output without interference between said message packets.
19. The system of claim 18 wherein within a single one of each routing automata at least some of said decision inputs are connected to respective message packet inputs, at least some of said merge inputs are connected to respective decision outputs and at least some of said merge outputs are connected to respective message packet outputs.
20. The system of claim 19 wherein:
travel by said message packets along said paths is mono-directional and said system is one of a group of networks comprising 2-dimensional mesh networks and 3 dimensional torus networks;
there are two message packet inputs in each routing automata comprising one message packet input which receives message packets traveling in the corresponding dimension and another message packet input connectable to a message packet output of a preceding routing automata connected in cascade therewith; and
there are two message packet outputs in each routing automata comprising one message packet output which transmits message packets for travel in said corresponding dimension and another message packet output connectable to a message packet input of a succeeding routing automata in cascade therewith.
21. The system of claim 17 wherein:
travel by said message packets along said paths is bi-directional;
there are three message packet inputs in each routing automata comprising first and second message packet inputs which receive message packets traveling in first and second directions in the corresponding dimension respectively, and a third message packet input connectable to a message packet output of a preceding routing automata connected in cascade therewith; and
there are three message packet outputs in each routing automata comprising first and second message packet outputs which transmit message packets for travel in first and second directions in said corresponding dimension respectively, and a third message packet output connectable to the third message packet input of a succeeding routing automata in cascade therewith.
22. The system of claim 17 wherein each of said computers comprise an integrated circuit including the corresponding router, a processor, a memory, a packet interface connected to said router and a bus connecting said packet interface, said memory and said processor, wherein:
said processor comprises means for retrieving information data from said memory to be routed in said network in a message packet and for computing an initial header based upon destination instructions stored in said memory; and
said packet interface comprises means for forming a data stream comprising said information data and said initial header and transmitting said data stream to said router connected thereto as a message packet.
23. The system of claim 18 wherein:
said network is one-dimensional and comprises an asynchronous pipeline of successive ones of said computers in which said routers are connected in a cascaded succession of routers each comprising one routing automata; and
said routing logic means comprises means for providing an asynchronous request signal to a successive one of said routing automata.
24. The system of claim 18 wherein said plural decision element means are a first set of substantially identical building block elements, said plural merge element means are a second set of substantially identical building block elements and wherein each of said routing automata are of substantially identical structure.
25. A computer node chip for use in an inter-computer message routing system wherein message packets are routed among a plurality of such computer node chips along communication paths between said computer node chips in an n-dimensional network of said communication paths, different groups of said communication paths comprising different ones of the n dimensions of said network, said message packets each comprising a header containing successive routing directions relative to successive computer node chips along a selected route in said network, said computer node chip comprising:
a router comprising n routing automata corresponding to said n dimensions, each of said n routing automata having plural message packet inputs and plural message packet outputs, at least some of said inputs and outputs being connected to respective communication paths of the corresponding one of said n dimensions, said n routing automata being connected together in cascade from a message packet output of one to a message packet input of the next one of said routing automata corresponding to a sequence of dimensions of the routing automata;
routing logic means disposed within each one of said routing automata, said routing logic means comprising means for reading the header of a message packet received from one of the inputs of said one routing automata, means for directing said message packet to one of said outputs of said one routing automata in accordance with the contents of said header, and means for modifying said header to reflect the passage of said message packet through said one routing automata, whereby each of said routing automata performs all message routing for the message packets traveling in a corresponding one of said dimensions;
a processor;
a memory; and
a packet interface connected to said router and a bus connecting said packet interface, said memory and said processor, wherein said processor comprises means for retrieving information data from said memory to be routed in said network in a message packet and for computing an initial header based upon destination instructions stored in said memory, and said packet interface comprises means for forming a data stream comprising said information data and said initial header and transmitting said data stream to said router connected thereto as a message packet.
26. The chip of claim 25 wherein:
said means for directing said message packet to one of said outputs comprises plural decision element means each having one decision input and plural decision outputs for directing a message packet received at said decision input to one of said decision outputs in accordance with the contents of said header, and plural merge element means having plural merge inputs and a single merge output for directing message packets received at the plural merge inputs thereof to said merge output without interference between said message packets; and
within a single one of each routing automata at least some of said decision inputs are connected to respective message packet inputs, at least some of said merge inputs are connected to respective decision outputs and at least some of said merge outputs are connected to respective message packet outputs.
Description
ORIGIN OF THE INVENTION

The research described herein was sponsored in part by the Defense Advanced Research Projects Agency, ARPA Order No. 3771, and monitored by the Office of Naval Research under Contract No. N00014-79-C-0597, and in part by grants from Ametek Computer Research Division and from Intel Scientific Computers wherein said entities make no claim to title. It was also done in partial fulfillment of the requirements for the degree of Master of Science of applicant Charles M. Flaig at the California Institute of Technology (Caltech), to whom this application is assigned.

CITED REFERENCES Dally 86

Dally, William J., "A VLSI Architecture for Concurrent Data Structures," Caltech Computer Science Technical Report 5209:TR:86.

Dally & Seitz 86

Dally, William and Charles L. Seitz, "The Torus Routing Chip," Distributed Computing, Vol. 1, No. 4, pp 187-196, Springer-Verlag, October 1986.

Mead & Conway 80

Mead, Carver A. and Lynn A. Conway, Introduction to VLSI Systems, Chapter 7, Addison-Wesley, 1980.

Seitz 84

Seitz, Charles L., "Concurrent VLSI Architectures," IEEE TC, Vol. C-33, No. 12, pp 1247-1265, December 1984.

Seitz 85

Seitz, Charles L., "The Cosmic Cube," Communications of the ACM, Vol. 28, No. 1, pp 22-23, January 1985.

BACKGROUND OF THE INVENTION

This invention relates to intercomputer message passing systems and apparatus and, more particularly, in an intercomputer routing system wherein message packets are routed along communications paths from one computer to another, to the improvement comprising, a routing automaton disposed at each computer and having an input for receiving a message packet including routing directions as a header thereto and a plurality of outputs for selectively outputting the message packet as a function of the routing directions in the header; and, routing logic means disposed within the routing automaton for reading the header, for directing the message packet to one of the outputs as a function of the routing directions contained in the header, and for updating the header to reflect the passage of the message packet through the routing automata.

For a message-passing, concurrent computer system with very few nodes as depicted in FIG. 1, it is practical to use a full interconnection scheme between the nodes 10 thereof. A full interconnection of channels quickly becomes impractical as the number of nodes increases, since each node of an N node machine must have N-1 connections. A configuration used for larger message-passing multicomputers such as the Caltech Cosmic Cube [Seitz 85] and its commercial descendants is that of a binary n-cube (or hypercube) as depicted in FIG. 2 which is used to connect N = 2^n nodes 10. Each node 10 has n = log2 N connections, and a message never has to travel through more than n channels to reach its destination.

Although the choice of the binary n-cube for the first generation of multicomputers is easily justified, the analyses presented in a 1986 Caltech PhD thesis by William J. Dally [Dally 86] showed that the use of lower dimension versions of a k-ary n-cube [Seitz 84] connecting N = k^n nodes, e.g. an n=2 (2-D) torus or mesh, is optimal for minimizing message latency under the assumptions of (1) constant wire bisection and (2) "wormhole" routing [Seitz 84].

These 2-D (or optionally 3-D) networks also have the advantage that each node has a fixed number of connections to its immediate neighbors, and, if the nodes are also arrayed in two or three dimensions, the projection of the connection plan into the packaging medium has all short wires. Also, the number of nodes in such a machine can be increased at any time with a minimum amount of rewiring. The low dimension k-ary n-cube greatly decreases the number of channels, so that with a fixed amount of wire across the bisection, one may use wider channels of proportionally higher bandwidth. This higher bandwidth, particularly with wormhole routing, can more than compensate for the longer average path a message packet must travel to reach its destination.

The time required for a packet to reach its destination in a synchronous router is given by Tn = Tc (pD + [L/W]), where Tc is the cycle time, p is the number of pipeline stages in each router, D is the number of channels that a packet must traverse to reach its destination, L is the length of the packet, and W is the width of a flow control unit (referred to hereinafter as a "flit").

As an example, let us assume that there are N = 256 nodes, 512 wires crossing the bisection for communication (neglecting overhead from synchronization wires), a message length of 20 bytes (i.e. 160 bits), and an internal 2-stage pipeline. The bisection of a binary hypercube has 128 channels in each direction, each with a width of 2 bits, and an average of (log2 N)/2 = 4 nodes that must be traversed, so that Tn = (2×4 + 160/2) Tc = 88 Tc. By comparison, the bisection of a 2-D (k×k) mesh, where k = 16, has 16 channels in each direction, each with a width of 16 bits, and an average of (2k/3) ≈ 11 nodes must be traversed, so that Tn = (2×11 + 160/16) Tc = 32 Tc. Thus, the binary hypercube network in this example has over twice the average latency of a bidirectional mesh network with the same wire bisection.
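
The arithmetic in this example can be checked directly. The following sketch is an illustration added for clarity and is not part of the patent; it simply evaluates the synchronous latency formula Tn = Tc (pD + [L/W]) in units of cycles for the two example networks, using the hop counts and channel widths given above.

    from math import ceil

    def latency_cycles(p, D, L, W):
        # Number of cycles Tn/Tc for a synchronous wormhole-routed network:
        # p pipeline stages per router, D hops, L packet bits, W flit width.
        return p * D + ceil(L / W)

    L_BITS = 160       # 20-byte message
    P_STAGES = 2       # 2-stage internal pipeline per router

    # Binary 8-cube (N = 256): 2-bit-wide channels, average of (log2 N)/2 = 4 hops.
    print(latency_cycles(P_STAGES, D=4, L=L_BITS, W=2))     # prints 88
    # 16 x 16 mesh: 16-bit-wide channels, average of about 2k/3 = 11 hops.
    print(latency_cycles(P_STAGES, D=11, L=L_BITS, W=16))   # prints 32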

The Torus Routing Chip (TRC) designed at Caltech in 1985 [Dally & Seitz 86] used unidirectional channels between the nodes 10 connected in a torus as shown in FIG. 3. This is also the subject of a patent application entitled Torus Routing Chip by Charles L. Seitz and William J. Dally, Ser. No. 944,842, filed Dec. 19, 1986, and assigned to the common assignee of this application, the teachings of which are incorporated herein by reference. As depicted in FIG. 3, the torus is shown folded in its projection onto a common plane in order to keep all channels the same length. Deadlock (a major consideration in multicomputers) was avoided by using the concept of virtual channels, by which a packet injected into a network travels along a spiral of virtual channels, thus avoiding cyclic dependencies and the possibility of deadlock. The TRC was self-timed to avoid the problems associated with delivering a global clock to a large network. There were a total of 5 channels to deal with, i.e., channels to and from the node and 2 virtual channels each in x and y. Thus, the heart of the TRC involved a 5×5 crossbar switch. Although the initial version had a slow critical path, the revised version was expected to operate at 20 MHz, with a latency from input to output of 50 ns. Since each channel had 8 data lines, the TRC achieved a data rate of 20 MB/s. Each packet is made up of a header, consisting of 2 bytes containing the relative x and y address of the destination, any number of non-zero data bytes, and a zero data byte signifying a "tail" or end of the packet. Upon entering the router, each packet has the address in its header decremented and tested for zero and is then passed out through the proper output channel. The connection stays open for the rest of the message and closes after passage of the tail (wormhole routing). If the desired output channel is unavailable, the message is blocked until the channel becomes available.
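
For contrast with the prefix-encoded headers introduced later in this description, the fragment below is a simplified behavioral model of this prior-art scheme, added for illustration only; it is not TRC circuitry. The header carries relative x and y hop counts that each router decrements and tests for zero before choosing an output channel; the signed offsets and virtual channels of the actual TRC are omitted here.

    def trc_route_step(header):
        # header = [dx, dy]: remaining relative hops in x and then y.
        dx, dy = header
        if dx != 0:
            return "x channel", [dx - 1, dy]     # keep going in x
        if dy != 0:
            return "y channel", [dx, dy - 1]     # then route in y
        return "node", header                    # both zero: deliver to the node

    print(trc_route_step([2, 1]))    # ('x channel', [1, 1])
    print(trc_route_step([0, 1]))    # ('y channel', [0, 0])
    print(trc_route_step([0, 0]))    # ('node', [0, 0])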

In the winter and spring of 1986, concurrently with the developments described above, groups of students in the "VLSI Design Laboratory" project course, under the direction of Dr. Charles Seitz of Caltech, were put to work designing different parts of the "Mosaic C" element. This single-chip node of a message-passing multicomputer was to contain a 16-bit central processing unit (CPU), several KBytes of on-chip dynamic random access memory (dRAM), and routing circuitry for communication with other chips. Each chip would form a complete node in a so-called fine-grain concurrent computer.

After looking at a few possible implementations, including the TRC described above, the group working on the routing section decided that a simple, bidirectional 2-D mesh should be used. A mesh had the advantage of keeping the length of wires between chips down to less than one inch, which would allow the use of a synchronous protocol, since clock skew as a function of wire length could be made very small between chips. A mesh would also allow the channels at the edge of the array to be reserved for communications with the outside world. The group also decided to use a bit-serial protocol for packets, both to minimize the number of pins on each chip and to minimize the number of connections needed between them; but, to organize the packets into flits sufficiently large that all of the routing information could be contained in the first flit. As in the TRC, the first Mosaic C router as specified by this group was to use virtual channels to avoid the possibility of deadlock. Each packet consisted of a 20-bit header with the relative x and y addresses of the destination and an arbitrary number of 20-bit flits consisting of a 16-bit data word and 4 control bits. The router also used wormhole routing with one of the control bits signifying a tail. Internally, flits were switched between input and output channels using a time multiplexed bus. The control circuitry was kept as simple as possible, and as a result, did not know how to forward a packet by itself. Each time the header of a packet came in, the CPU would be interrupted (using a dual-context processor for fast interrupt handling) to determine which output channel the packet should be connected to. This approach resulted in a latency of several micro-seconds per step in path formation, but allowed a lot of flexibility in routing under software control. Acknowledgement packets would automatically be sent and received between chips using the same channels to announce the availability of buffers. With a 20MHz system clock (anticipated for 2 micrometer CMOS technology), the bandwidth was expected to be about 2MB/s on each channel. This initial attempt at a routing circuit for incorporation into the Mosaic C chip was never reduced to a layout. After due consideration, it became obvious that it would consume a large amount of silicon area (on the chip) only to achieve fairly dismal performance.

Wherefore, it is an object of the present invention to provide a new method for routing message packets in a message-passing, multicomputer system which will allow the routing processor to provide good performance with a minimum amount of silicon area on the chip consumed thereby.

It is a further object of the present invention to provide a new element for use in a routing processor for routing message packets in a message-passing, multicomputer system.

It is still another object of the present invention to provide a multifunction node chip for use in fine grain message-passing, multicomputer systems incorporating a router for routing message packets in a manner to provide good performance with a minimum amount of silicon area on the chip consumed thereby.

Other objects and benefits of the present invention will become apparent from the detailed description which follows hereinafter when taken in conjunction with the drawing figures which accompany it.

SUMMARY OF THE INVENTION

The foregoing objects have been achieved in a fine-grain, message-passing, multicomputer, concurrent computing system wherein there are a plurality of computing nodes each including bus means for interconnecting the components of the chip; read only memory (ROM) operably connected to the bus means; random access memory (RAM) operably connected to the bus means; central processing unit (CPU) means operably connected to the bus means for executing instructions contained in the ROM and RAM; and packet interface (PI) means operably connected to the bus means for encoding headers on message packets being transmitted by the CPU means of one chip to another chip and for transferring the message packets to and from the RAM, by the improved method of routing the message packets between the nodes comprising the steps of, providing a routing automaton at each node and interconnecting the routing automata to define communications paths interconnecting the nodes along which the message packets can be routed; and at each routing automaton, receiving a message packet including routing directions comprising the header at an input thereof; reading the header; directing the message packet to one of two outputs thereof as a function of the routing directions contained in the header; and, updating the header to reflect the passage of the message packet through the routing automaton.

The preferred method additionally comprises the steps of, providing a packet interface; and at each packet interface, receiving a message packet to be transmitted to a destination node from the RAM; adding routing directions as a header to the beginning of the message packet; and, transmitting the message packet to the routing automaton of the next adjacent node on the route to the destination node. Additionally in the preferred method, at each packet interface there are the steps of, receiving a message packet at a destination node; stripping remaining portions of the header from the message packet; storing the message packet in the RAM; and, informing the CPU that the message packet is in the RAM.

The preferred method at each packet interface also comprises the additional step of, storing a memory map of the locations of the other nodes in the system and a corresponding route to each node; wherein the step of adding routing directions as a header to the beginning of the message packet comprises, receiving a destination node designator from the CPU requesting the transmission of the message packet; retrieving the route to the destination node from the memory map; and, adding the route to the destination node to the beginning of the message packet as part of a header.
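
As a rough illustration of this packet-interface flow, the sketch below shows a stored route being prepended at the source and the leftover header and tail being stripped at the destination. The names, the route table, and the use of plain integer hop counts are assumptions of the illustration; the actual encoding, described later, spreads the count over small radix-4 flits.

    ROUTE_TABLE = {
        # destination node -> stored route: direction flit, hop count, per dimension
        "node_B": ["+", 6, "+", 3],   # six hops in +x, then three hops in +y
    }

    def inject(dest, payload):
        # Prepend the stored route as the header and terminate with a tail flit "T".
        return ROUTE_TABLE[dest] + payload + ["T"]

    def deliver(packet):
        # At the destination: strip leftover header flits and the tail, keeping
        # only the data flits so they can be stored in RAM.
        return [f for f in packet
                if not isinstance(f, int) and f not in ("+", "-", ".", "T")]

    print(inject("node_B", ["M1", "M2"]))        # ['+', 6, '+', 3, 'M1', 'M2', 'T']
    print(deliver(["+", 0, "M1", "M2", "T"]))    # ['M1', 'M2']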

DESCRIPTION OF THE DRAWINGS:

FIG. 1 is a simplified drawing depicting a prior art computer system wherein each node is connected directly to every other node.

FIG. 2 is a simplified drawing depicting the node interconnection scheme employed in a so-called hypercube according to the prior art.

FIG. 3 is a simplified drawing depicting a prior art torus routing chip interconnection scheme.

FIG. 4 is a simplified drawing of a single, one dimensional routing automaton according to the present invention.

FIG. 5 is a simplified drawing showing three routing automata of FIG. 4 connected in series to control packet movement in three dimensions.

FIG. 6 is a simplified functional block diagram of a fine-grain computer chip according to the present invention.

FIG. 7 is a drawing of an exemplary data stream as it passes through a series of nodes according to the method of packet routing of the present invention.

FIG. 8 is a simplified drawing to be employed with FIG. 7 to follow the example of FIG. 7.

FIG. 9 is a simplified block diagram of the internal structure of a routing automaton according to the present invention in one possible embodiment thereof.

FIG. 10 is a simplified block diagram of the internal structure of a routing automaton according to the present invention in a preferred and tested embodiment thereof.

FIG. 11 is a simplified block diagram of the internal structure of a routing automaton according to the present invention in an embodiment thereof intended for use in a torus routing chip system of the type shown in FIG. 3.

FIG. 12 is a simplified block diagram of the internal structure of a routing automaton according to the present invention in an embodiment thereof intended for use in a hypercube as shown in FIG. 2.

FIG. 13 is a drawing corresponding to the embodiment of FIG. 10 and depicting the elements thereof in their connected sequence as incorporated into a chip as laid out, built and tested by the applicant herein.

FIG. 14 is a functional block diagram depicting a stage of a synchronous pipeline system as is known in the art.

FIG. 15 is a functional block diagram depicting a stage of an asynchronous pipelined approach to the present invention.

FIG. 16 is a functional block diagram of a FIFO stage as employed in the asynchronous approach to the present invention.

FIG. 17 is a functional block diagram of the asynchronous routing automaton of the present invention in its built and tested embodiment employing binary switching logic.

DESCRIPTION OF THE PREFERRED EMBODIMENT

The present invention is based on two major deviations from the prior art. The first was the rejection of employing the x and y addresses of the destination of a packet in the header in favor of a prefix encoding scheme which would allow the packet header to encode the relative address of the destination in the form of a path "map" on several small successive flits. This approach was the key to getting around the problem of having to see a large amount of header information before any logic could decide where to send the head of the packet. This change also simplified the required routing logic circuitry enough to allow it to handle forwarding automatically on a local basis without having to disturb the CPU. The novel deadlock-free routing method decided upon (a design criterion that was a "must") was to send packets on a fixed route on a mesh of first x, and then y, instead of employing virtual channels to avoid deadlock. Initial designs involved 5-bit flits (4 data, 1 control, with an acknowledge wire in the reverse direction) to be sent in parallel on each channel, and to be internally switched using a crossbar switch.

The second major deviation from the prior art was the early rejection of the crossbar switch in favor of a new formulation which has been designated as "routing automata". It was realized early on that a crossbar switch, while very general, had the disadvantage of taking up space (on the chip) proportional to (nW)^2, where n is the number of inputs and outputs, and W is the number of bits being switched. With a fixed routing scheme being employed, the generality of a crossbar switch was not needed and it was highly desirable to devise a scheme in which the area consumed would only increase linearly with nW, or as close as possible thereto. This scaling would make it easier to modify the router for more dimensions or wider flits without involving major layout changes, would decrease the path length in the switch and hence increase its speed, and would, hopefully, decrease the overall area for designs with wide flits and a large number of dimensions.

As depicted in FIG. 4, each of the automata 12 is responsible for switching the packet streams for one dimension of the overall router. An automaton's input, generally indicated as 14, consists of streams from the + and - directions as well as from the previous dimension. Its output, generally indicated as 16, consists of streams to the + and - directions as well as to the next dimension. For n dimensions, n of the automata 12 are strung together in series as depicted in FIG. 5; and, if properly constructed, their size, i.e. the area consumed on the chip, increases roughly linearly with increased width of the flits, for a net increase in area proportional to nW, as desired. Both synchronous and asynchronous (i.e. self-timed) versions of automata according to the present invention will be described shortly. The self-timed version is intended to have each of its components highly modular so that they could be used not only to implement mesh routing, as in the Mosaic C chip, but could also be fit together to implement unidirectional routers, routers for hypercubes, or many other structures, limited only by the desires of the designer. The basic components include a FIFO for "glue" between stages and buffering, a switch used both to divide and merge data streams, a decrementer for adjusting the relative address of the destination as the packet passes through, and control structures for all of the foregoing. Before continuing with specifics of the automata, however, the novel prefix encoding scheme of the present invention as employed therein will be addressed first in further detail.

The prefix encoding scheme of the present invention allows packets to travel through a sequence of nodes at a constant rate, with the first flit of the header generally containing enough information to determine the output channel at each node. The scheme involves the use of a "leading zero" flit that can be used to limit how much of the relative address needs to be looked at before a decision can be made or the address decremented. As mentioned earlier, the header acts as a "map" on a node-by-node basis rather than as a final destination address as in the prior art; that is, as a packet passes through each automaton 12 the only decision that must be made is, like following a map at a highway intersection, "Am I there?" and, if not, "Do I turn or go straight?" The example depicted in FIGS. 7 and 8 may help to clarify the method of the present invention. In the example, there is a 3-bit flit employed (2 data, 1 control), which is the minimum width that allows encoding of the necessary "alphabet" of symbols--which are +, -, ., T, 0, 1, 2 and 3. Where data elements 0, 1, 2, 3 of the alphabet need not be distinguished in an example, they are shown by the letter "M". Each line of FIG. 7 represents the packet as it leaves the node listed to the left and as depicted in FIG. 8, with "SOURCE" designating the node 10 sending the packet and "DESTINATION" indicating the node 10 intended to receive the packet. Time is from the top down in FIG. 7, as indicated. Note that as header flits are no longer needed as part of the "map", they are stripped off.

In the example, the message TMMMM.3+12+ consists of the header (.3+12+), the "payload" (MMMM), which could be of any length, and the tail flit (T). When this message is injected by the SOURCE node 10, the route to the DESTINATION node 10 is to go six nodes in the + direction from the SOURCE node 10 (i.e. "12" in radix 4 = 1×4 + 2×1 = 6) and then three nodes in the + direction from NODE 6. Originally, the packet enters the network and takes the + direction in the first (i.e. x) dimension. The + flit is stripped off and the new leading flit (i.e. 2) is decremented while passing through each node until it reaches 0. During the decrement to 0 (in this example in NODE 2), the following flit is examined to see if it is a digit. In this case it is, so the leading flit becomes 0. In NODE 3, the second flit needs to be decremented to 0; but, since it is not followed by a digit, it becomes a "leading zero", indicated by the "." designator. The leading flit becomes a 3 (representing the 4 additional nodes to proceed in the same direction) and the leading zero indicator is placed in the following flit. In NODE 6, the leading flit is once again in the position of decrementing to 0. This time, however, the next flit is the leading zero designator. As a consequence, the third flit is used to switch the path of the packet to the + direction in the second (i.e. y) dimension from NODE 6. As with the initial + indicator, the + indicator and two leading zero (i.e. "..") flits are stripped from the header. The same process is continued until the relative address of the header is decremented to 0 once again. At this point, the packet has run out of dimensions to traverse, so it is passed into the receiving (i.e. DESTINATION) node 10. The tail (T) which makes up the end of the packet closes all of the channel connections as it passes through the nodes 10, and is finally stripped off at the DESTINATION node 10 along with the remaining header flits to leave only the data flits as indicated in FIG. 7. As can be appreciated from this example, the encoding scheme of the present invention allows the use of small flits to represent large offsets (compared with prior art x,y final address designations) while allowing decisions to be made based on only two flits of the header at once, which helps minimize the latency of forwarding through each node. The simple decisions involved also allow simple control logic to be employed.
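
A condensed behavioral model of the per-node decision may make the example easier to follow. In the sketch below, added purely as an illustration, each dimension's portion of the header is represented as a direction symbol followed by a plain integer hop count; this stands in for the radix-4 count flits and leading-zero marker used in the actual encoding, and the names and representation are assumptions of the illustration.

    def route_step(header):
        # One routing-automaton decision at a node.  The header is a flat list of
        # (direction symbol, remaining hop count) pairs, innermost dimension first.
        direction, count = header[0], header[1]
        if count > 0:
            # Keep travelling in the current dimension; "decrement" the count.
            return direction, [direction, count - 1] + header[2:]
        # Count exhausted: strip this dimension's flits, then turn or deliver.
        rest = header[2:]
        return ("next dimension", rest) if rest else ("node", rest)

    header = ["+", 6, "+", 3]        # six hops in +x, then three hops in +y
    outputs, out = [], ""
    while out != "node":
        out, header = route_step(header)
        outputs.append(out)
    print(outputs)   # six '+', 'next dimension', three more '+', then 'node'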

Turning once again to the routing automata of the present invention, as previously mentioned, the reduction of the routing circuitry to simple automata that control the switching through only one dimension greatly simplifies the modification and expansion of a complete router. An individual automaton is also much easier to design and lay out due to the reduced number of inputs and outputs, and independence from the routing occurring in other dimensions. As pointed out above, the basic one dimensional (1-D) automaton 12 of FIG. 4 has three inputs 14 for the receipt of packets travelling, respectively, (1) in the + direction, (2) in the - direction and (3) from the previous dimension. Simple finite state machines can then process the input streams, decide on a switch configuration that allows the largest number of packets to be forwarded, and then connect the streams (1) to the + direction, (2) to the - direction and (3) to the next dimension at the outputs 16.

The 1-D automata 12, which have three inputs and three outputs, and which are composed to make 2-D or 3-D automata, can themselves be composed of simpler automata. In the limit, the routing automata must include, at minimum, a decision element with one input stream and two output streams and a merge element with two input streams and one output stream. Other automata with one input stream and one output stream can be employed to take care of decrementing and/or stripping the header in the manner described herein.

The example depicted in FIG. 9 illustrates one possible configuration. The boxes in FIG. 9 represent decision elements 18 that process their incoming data streams and switch them onto their proper output stream. The circles represent merge elements 20 that take their input streams and arbitrate which of them to connect to their output stream; that is, the merge elements 20 include logic to avoid collisions between two streams being merged. Thus, a packet coming from the previous dimension that is to exit in the + direction would enter the leftmost decision element 18 (as the figure is viewed), be switched onto its upper output stream, and merge into the stream exiting in the + direction.
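
The decision side of this arrangement behaves like the header-driven step sketched earlier; the merge side must arbitrate so that two streams never interleave on one output. The toy model below is an added illustration of that arbitration under wormhole routing, not the actual logic: whichever stream first claims the idle output holds it until its tail flit passes.

    class Merge:
        # Behavioral model of a merge element: the first stream to claim the idle
        # output holds it until its tail flit "T" passes; the other stream waits.
        def __init__(self):
            self.owner = None

        def offer(self, stream, flit):
            if self.owner is None:
                self.owner = stream          # grant the idle output to this stream
            if self.owner != stream:
                return None                  # the other stream is blocked for now
            if flit == "T":
                self.owner = None            # the tail releases the output
            return flit

    m = Merge()
    print(m.offer("previous dimension", "+"))   # '+'  granted
    print(m.offer("- input", "M"))              # None blocked
    print(m.offer("previous dimension", "T"))   # 'T'  tail releases the output
    print(m.offer("- input", "M"))              # 'M'  now granted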

Breaking down the internal structure of the automata in this way can further simplify the design and layout, in the same manner as breaking up the router into a series of 1-D automata. Even when parts cannot be directly reused, time can often be saved by employing modification rather than a complete redesign. In the extreme case, each of the decision and merge operations can be converted into binary form, where 3-way elements are replaced by cascaded binary elements. In this case, the elements become very homogeneous and the automata can be formed out of a minimal subset of very simple elements. This approach is the one used in the self-timed routing automata to be discussed in detail later herein. These automata can also be constructed for different channel configurations using the same set of internal elements. For example, they can be constructed for unidirectional channels as used in a torus or a hypercube. Examples of the internal structure of such automata are shown in FIGS. 11 and 12 where FIG. 11 depicts a unidirectional automaton 12' for use in a torus routing scheme and FIG. 12 depicts a hypercube routing automaton 12".

The invention of the routing automata and its associated method of operation in routing packets provided the opportunity of constructing a novel chip for the Mosaic C application as well; that is, the present invention includes a novel chip for containing each node in a fine-grain, message-passing, multicomputer, concurrent computing system wherein a plurality of computing nodes are each contained on a single chip. This "Mosaic chip" 22 is shown in functional block diagram form in FIG. 6. The Mosaic chip 22 is a complete node for a fine-grain concurrent computer. It contains all of the necessary elements including a 16-bit CPU 24, several KBytes of ROM 26 and dRAM 28, as well as routing circuitry comprising a packet interface (PI) 30 and router 32 for communicating with neighboring nodes in a mesh. All of these elements are tied to a common bus 36. Since the router had to fit on a chip along with a processor and memory, the design had to be simple and compact--the automata-based routing scheme of the present invention as described hereinbefore provided such a capability. The PI 30 takes care of encoding the packet header and transferring packet data to and from memory. A simple cycle-stealing form of Direct Memory Access (DMA) is preferred to keep up with the high data rates supported by the router. In the preferred embodiment, the PI 30 also contains some memory mapped locations that are used to specify the relative x and y addresses of the destination node, and an interrupt control register.

The PI 30 generates the appropriate direction control flits and multiplexes the relative x and y addresses in its registers into the flit width required by the router 32. It does the same multiplexing for the packet's data words, which come from an output queue in the dRAM 28. A tail flit is added when the output queue becomes empty. These flits are injected into the previous dimension input of the automaton for the first dimension. Any packets coming out of the last dimension simply have their tail stripped off and the flits are demultiplexed into a 16-bit word, which is then stored in an input queue in the dRAM 28. The CPU 24 may be interrupted either when the output queue becomes empty, when a tail is received, or when the input queue becomes full.
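
The multiplexing just described amounts to slicing each 16-bit word into router-width flits on the way out and reassembling words on the way in. The minimal sketch below is an illustration added for clarity (the flit ordering is an assumption); it also shows why one word costs 8 channel cycles in the 2-data-bit case.

    def word_to_flits(word, data_bits=2, word_bits=16):
        # Slice a 16-bit word into 2-bit data flits, least-significant bits first.
        return [(word >> i) & ((1 << data_bits) - 1)
                for i in range(0, word_bits, data_bits)]

    def flits_to_word(flits, data_bits=2):
        # Reassemble the word on the receiving side.
        word = 0
        for i, flit in enumerate(flits):
            word |= flit << (i * data_bits)
        return word

    assert flits_to_word(word_to_flits(0xBEEF)) == 0xBEEF
    print(len(word_to_flits(0xBEEF)))   # 8 flits, i.e. one word per 8 channel cycles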

As will be described in greater detail hereinafter, the Mosaic router 34 communicates with other nodes using a 3-bit wide flit (2 data, 1 control) with an acknowledge wire in the reverse direction to control movement between stages of the preferred pipelined design and prevent overwriting from one stage to the next. Any time an acknowledge is present, flits are allowed to progress through the pipeline. The flit can also be made wider to include more data bits. As presently contemplated, the first "production" Mosaic chip will employ a 5-bit flit.

As a result of the bit-slice design employed in a tested embodiment, it became more efficient in the data path to combine the decision and merge operations in a slightly different manner than in the sample automaton 12 shown in FIG. 9. The actual preferred configuration is shown in FIG. 10. In this tested configuration, generally designated as 12''', the decision and merge operations were lumped together into one switch matrix, which handles both the multiplexing and demultiplexing of the data streams, with 4 minimally encoded control wires and their complements selecting one of the possible switch configurations. Although the combined switch approach may possibly minimize the overall size of the router by homogenizing the data path layout for the different channels, it also complicated the control circuitry. Thus, while it may have been the best approach for a 1-bit slice path, those implementing the present invention for wide data path slices may find that it is not a good approach for such applications.

In an attempt to minimize the overhead of extending the width of a flit, the Mosaic router as built and tested employed an approach of constructing the data path out of 1-bit wide slices, with the +, -, and N paths for each bit being placed immediately next to one another. This preferred approach allows the same switching elements to be used no matter how wide the flit is. On the negative side, it also means that the control signals for all three data paths had to be propagated through all of the elements, and this led to a somewhat larger overhead in wiring than is necessary. It also complicated the layout of the control circuitry since it had to fit in an effectively smaller pitch. The data path is made up of a number of elements as shown in FIG. 13 (which corresponds to the embodiment of FIG. 10 as actually laid out and implemented on a chip). Each element, of course, comprises three portions (for the +, - and N paths) as indicated by the dashed lines dividing them. When connected sequentially in the order shown, these elements form a complete data path for a Mosaic routing automaton. The inputs are to an input latch 38 followed by an input shift register 40 as required to properly interface with the preceding stage. This is followed by the zero and tail detection logic 42. Next follows the decrementer and leading zero generator logic 44. The stream switching element 46 follows next. As indicated by the dashed arrows, the stream switching element portions operate in the same manner as the three decision elements 18 of FIG. 10; that is, the two outer switching element portions switch between straight ahead and towards the center path while the center switching element portion can switch between straight ahead and either of the two outer paths. The properly switched packet paths from the stream switching element 46 then proceed to an output latch 48 and an output buffer/disabler 50 as required to properly interface the asynchronous automata to the next node without destructive interference, as described earlier herein.

Turning briefly to the packet interface (PI) 32 mentioned earlier herein, using the minimum sized flit data width of 2 bits and the Mosaic word size of 16 bits, the inventors herein found that the router of the present invention can deliver one word every 8 cycles. The data rate becomes even faster if wider flits are used. There is no way that the CPU 24 can keep up with this data rate under software control. Therefore, it was decided that in the preferred approach a simple form of cycle-stealing DMA (as is known in the art) should be used to transfer packets between the router 34 and memory, i.e. dRAM 28. For this purpose, four extra registers were added to the CPU 24 to be used as address pointers and limit registers for the input and output channels. Each time the storage bus is not being used by the CPU 24 (about once every 3 or 4 cycles for typical code) the microcode PLA emits a bus release signal. A simple finite state machine then arbitrates between bus requests from a refresh counter, the input channel, and the output channel, and grants the bus cycle to one of them. If a channel is given the cycle, it pulls on a line which causes the corresponding address pointer in the CPU 24 to be placed on the address bus, and the channel then reads or writes data from that location in the dRAM 28. The address pointer is then incremented and compared with its limit register. If the two are equal, the DMA logic is disabled and the CPU 24 is interrupted to process the I/O queues. If an interrupt occurs when an output packet word is requested, a tail is sent following that packet.
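
A compact way to picture this cycle-stealing arrangement is sketched below as an illustration only; the priority order among the refresh counter and the two channels, and all names, are assumptions, since the text states only that a small finite state machine arbitrates among the three request sources.

    def arbitrate(requests):
        # Grant the released bus cycle to one requester.  The priority order here
        # is assumed for illustration.
        for source in ("refresh", "input channel", "output channel"):
            if source in requests:
                return source
        return None

    class DmaChannel:
        def __init__(self, pointer, limit):
            self.pointer, self.limit = pointer, limit
            self.enabled = True

        def stolen_cycle(self):
            # Use one granted cycle: bump the address pointer, then compare it
            # with the limit register; reaching the limit disables the channel
            # and would interrupt the CPU to process the I/O queue.
            self.pointer += 1
            if self.pointer == self.limit:
                self.enabled = False
                return "interrupt CPU"
            return "ok"

    ch = DmaChannel(pointer=0x1000, limit=0x1002)
    print(arbitrate({"output channel"}))          # 'output channel'
    print(ch.stolen_cycle(), ch.stolen_cycle())   # ok interrupt CPU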

A packet is sent by setting the output address pointer to the starting location of the desired data in the dRAM 28 and setting the limit register to the location of the end of that data. The CPU 24 then writes the relative address of the destination node of the packet to memory mapped locations in the channel (using the sign and magnitude form described above). When the last location is written, the header is encoded and sent, with the data following. Data is best received by setting the input channel pointer to the starting location of a queue and setting the limit register to the end of the queue. The CPU 24 can then examine the value of the address pointer at any time to see how many words are in the queue. Currently, there is no provision for marking a tail, so if explicit knowledge of the length of a data packet is required, one of two methods must be used--(1) the length of the packet can be encoded in the first word of the packet, which the CPU 24 can then examine, or (2) the CPU 24 can be interrupted when the tail of a packet arrives so that the interrupt routine can examine the input address pointer register to determine the length of the packet.
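
In rough pseudocode form, the send and receive sequences look like the following; the register and function names are illustrative and are not taken from the chip.

    class PacketInterfaceRegisters:
        # Illustrative register model only; names do not come from the patent.
        def __init__(self):
            self.out_pointer = self.out_limit = 0
            self.dest_x = self.dest_y = 0

    def send_packet(pi, start, end, dx, dy):
        pi.out_pointer = start    # first word of the packet data in dRAM
        pi.out_limit = end        # location of the end of that data
        pi.dest_x = dx            # relative destination address, sign and magnitude;
        pi.dest_y = dy            # writing the last location encodes and sends the header

    def words_received(in_pointer, queue_start):
        # The CPU may poll the input address pointer to count the arrived words.
        return in_pointer - queue_start

    pi = PacketInterfaceRegisters()
    send_packet(pi, start=0x2000, end=0x2014, dx=+3, dy=-2)
    print(words_received(in_pointer=0x3008, queue_start=0x3000))   # 8 words so far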

Much of what was learned from the design of the Mosaic synchronous router can be applied to an asynchronous routing automaton. An asynchronous router can be used in physically larger systems, such as second-generation multicomputers, in which the interconnections are not limited to being very short wires. The mesh routing chip (MRC) now to be described is designed to meet the specifications for these second-generation multicomputers. These routing automata are intended to be a separate chip, similar to the Torus Routing Chip mentioned earlier herein, as opposed to being part of an integrated "total node" chip such as the Mosaic chip described above. As in the Mosaic router, the 2-D MRC has 5 bidirectional channels, with channels in the +x, -x, +y, -y directions and a channel connecting it to the packet interface. Data is represented on each of the channels using 9-bit wide flits (1 tail, 8 data), where the first bit is the tail bit.

For a 2-D router, the first two flits form the header. In each header flit, a relative offset of six bits allows for up to sixty four nodes along a single dimension, which should be sufficient for any second generation machine with large nodes. The seventh data bit in a header flit is reserved for the future addition of broadcast support and the eighth bit is the sign. A 9-bit flit together with the asynchronous request and acknowledge signals for each channel requires a total of eleven pins. Five directional channels (+x, -x, +y, -y, and the node) then require 110 pins, and the constructed version of the chip was placed in a 132 pin PGA package. The remaining pins are used for a reset and for multiple Vdd and GND pins to minimize noise.
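
The header flit layout can be pictured as a simple bit-packing. The sketch below is an illustration only; the exact bit positions are an assumption, since the text names the fields (a 6-bit relative offset, a bit reserved for broadcast, a sign bit, and the tail bit) without fixing their order.

    def header_flit(offset, negative, broadcast=False, tail=False):
        # 9-bit MRC flit: one tail bit plus 8 data bits holding a 6-bit relative
        # offset, a bit reserved for broadcast, and a sign bit.
        assert 0 <= offset < 64
        data = offset | (int(broadcast) << 6) | (int(negative) << 7)
        return (int(tail) << 8) | data

    print(f"{header_flit(offset=17, negative=True):09b}")   # 010010001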

Pipelining is well known in the art and is used in many synchronous systems to increase their throughput [Seitz 84]. Each cycle, each stage of the pipeline accepts data from the previous stage, performs some relatively simple operation on the data, and passes the resulting data on to the next stage. Data is passed between stages during each cycle by clocked registers. A typical synchronous pipeline section is depicted in FIG. 14. The combinational logic in each stage has one clock period in which to produce valid output data based on its input data. This time, Tc, is the same for all stages in the pipeline, and the time required for data to flow through the pipeline is Tn = Tc × p, where p is the number of stages in the pipeline.

A similar arrangement can be used in an asynchronous system. Instead of a global clock, the 4-cycle request and acknowledge signals [Mead & Conway 80] are used to control data flow between stages, as shown in FIG. 15. In an asynchronous pipeline, each stage processes data at its own rate and passes its output data to the next stage when it is finished. Each stage, therefore, has its own cycle time, tc, which is the time it requires to complete its request and acknowledge 4-cycle. Each stage also has a characteristic fallthrough time, tf, which is the time required from when an input request is received until the data is processed and an output request is generated. The ratio tc/tf determines across how many stages a cycle (and an item of data being worked on) extends. By necessity, tf < tc, and for most practical designs tc ≈ 2tf.
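
As a software analogy only, the following C fragment walks one data item through the 4-cycle (four-phase) request/acknowledge protocol between two pipeline stages. The busy-wait structure and the shared link record are assumptions used to make the handshake explicit; in the chip the same sequencing is performed by the control circuitry described below.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        volatile bool     req, ack;
        volatile uint16_t data;
    } link;

    void stage_send(link *l, uint16_t d)
    {
        l->data = d;            /* data must be valid before the request rises  */
        l->req  = true;         /* cycle 1: raise request                       */
        while (!l->ack) { }     /* cycle 2: wait for acknowledge to rise        */
        l->req  = false;        /* cycle 3: drop request                        */
        while (l->ack) { }      /* cycle 4: wait for acknowledge to fall        */
    }

    uint16_t stage_receive(link *l)
    {
        while (!l->req) { }     /* wait for an incoming request                 */
        uint16_t d = l->data;   /* latch the data                               */
        l->ack = true;          /* acknowledge; the previous stage may proceed  */
        while (l->req) { }      /* wait for the request to drop                 */
        l->ack = false;         /* handshake returns to its initial state       */
        return d;
    }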

Looking back at the three-dimensional automata series shown in FIG. 5, it can be seen that data flows in only one direction within each automaton 12, and the different paths are independent (except for merge operations). Thus, it is an easy transition to think of routing automata as being implemented using a pipeline structure, as in the MRC. As established earlier herein, the transit time, from source to destination, of an unblocked packet in the synchronous case is given by Tn = Tc(pD + L/W). For the asynchronous case, some of these terms are changed because the head of a packet advances with the fallthrough time, which is less than the cycle time. The formula for network latency in the asynchronous case can be expressed as Tn = Tf D + Tc [L/W], where Tf is the fallthrough time for a node. For relatively short packets, D is comparable to L/W, so there is no strong motivation to reduce either Tf or Tc at the expense of the other. Tf and Tc can be expressed as Tc ≈ 2tp + tc and Tf ≈ tp + p tf, where tp is the time required to drive the pads, p is the number of stages in the internal pipeline, and tf and tc are the average fallthrough and cycle times, respectively, for a single stage of the pipeline, as described above. A pad, and the external components connected to it, are relatively difficult for a VLSI chip to drive, so tp >> tc > tf. This means that the number of stages in an asynchronous pipeline can be increased without significantly increasing the overall delay. In the case of the MRC, increased pipelining has a significant advantage in that having more pipeline stages provides the network with more internal storage for packets and consequently helps prevent congestion of the network.
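
A short numerical illustration of these expressions follows, written in C. The stage times are taken from the estimates quoted later in this description (tf of about 1 ns per stage, tp of about 5 ns, tc of roughly 2tf); the packet length, flit width, path length, and number of internal pipeline stages are assumed values chosen only to make the arithmetic concrete.

    #include <stdio.h>

    int main(void)
    {
        double tp = 5.0;                 /* ns, pad driving time                      */
        double tf = 1.0;                 /* ns, per-stage fallthrough time            */
        double tc = 2.0 * tf;            /* ns, per-stage cycle time                  */
        int    p  = 6;                   /* assumed internal pipeline stages per node */
        int    D  = 10;                  /* hops from source to destination           */
        int    L  = 160, W = 8;          /* packet length in bits, flit data width    */

        double Tc = 2.0 * tp + tc;                    /* node cycle time              */
        double Tf = tp + p * tf;                      /* node fallthrough time        */
        double Tn = Tf * D + Tc * ((L + W - 1) / W);  /* asynchronous network latency */

        printf("Tc = %.1f ns  Tf = %.1f ns  Tn = %.1f ns\n", Tc, Tf, Tn);
        return 0;
    }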

The preferred asynchronous FIFO structure employed in the present invention is based on chained Muller C-elements [Mead & Conway 80]. Its basic structure is shown in FIG. 16. Care must be taken to ensure that the register cells 54 controlled by the C-elements 56 are fully turned on or off before request and acknowledge signals are generated, and that they are fast enough to latch the data before the load line changes state again. The first requirement can be taken care of by introducing sufficient delay in the request and acknowledge lines, or, more safely, by using a Schmitt trigger (a gate with hysteresis on each input, as is known in the art) to detect the state of the load control line.

Initially, all of the C-elements 56 are reset to 0. Data is presented on the inputs, and the request line (R0) is pulled high. This causes the output of the first C-element 56 to be pulled high, causing the data to be latched. When this load control line becomes high, the data is assumed to be latched, a request (R1) is passed to the next stage, and the acknowledge line (A0) to the previous stage is pulled high. When the next stage latches the data and an acknowledge (A1) is received from it and the request line (R0) goes low, the FIFO state is reset to its initial condition. In this manner, the data quickly falls through the chain of FIFOs, with the data always spread across at least two stages. If the request time is significantly less than the acknowledgment time, then the flit will be spread across more than two stages while it is falling through the pipeline.
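
The primitive underlying this control chain is the Muller C-element, which may be modeled behaviorally as shown below in C: the output changes only when both inputs agree, and otherwise holds its previous value. The stage wiring suggested in the trailing comment, in which each load line is the C-element of the incoming request and the inverted acknowledge from the following stage, is an assumption consistent with the behavior just described rather than a reproduction of FIG. 16.

    #include <stdbool.h>

    typedef struct { bool out; } c_element;

    bool c_element_step(c_element *c, bool a, bool b)
    {
        if (a && b)        c->out = true;    /* both inputs high: output goes high */
        else if (!a && !b) c->out = false;   /* both inputs low:  output goes low  */
        /* inputs disagree: output holds, giving the handshake its memory          */
        return c->out;
    }

    /* Assumed stage wiring:
     *   load[i] = C(request_in[i], !ack_from_stage[i + 1])
     * so a register cell 54 latches new data only after the following stage
     * has taken the previous data.                                                */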

The second requirement of fast latches is usually easy to meet, since latches are generally much faster than C-elements (and Schmitt triggers). The registers 54 used in these FIFOs consist of a closed loop of a strong and a weak inverter 58, with the data gated to the input node of the stronger inverter 58, as is known in the art. Thus, the data is inverted at each stage of the FIFO, and these stages should be used in multiples of two to preserve the sense of the data. In order to save space in the tested embodiment, the register cells 54 are flip-composed vertically for data path widths that are multiples of 2. In the MRC, the path width is 10 bits, so there is an extra bit available for propagating information between stages of the pipeline, if it becomes desirable to do so. For 1.2 micrometer CMOS, it is expected, from model calculations, that each FIFO stage should have a tf (i.e. fallthrough time) of about 1 ns. This is consistent with the assumption that tf < tp (the pad driving time), which is about 5 ns (more with a large load or a long connection line), so that extra stages of pipelining do not add significantly to the latency of a packet passing through a node.

For simplicity and easy modularity, binary decision and merge elements were used in the asynchronous automata as built and tested by the applicant herein. With careful design, it was possible to use basically the same switch for both elements, simply by flipping it sideways. Each section of the switch consists of a simple 1-to-2 demultiplexer (or 2-to-1 multiplexer), and enough of these are connected along a diagonal to handle the width of a flit. Because of the use of binary switches, the internal construction of the tested asynchronous automaton 12"" is as shown in FIG. 17. The decrementers 60 are a simple asynchronous ripple-borrow type, with a line that is pulled low to indicate completion. Completion is defined by a stage that receives a borrow in and produces no borrow out because of having a 1 on its data input. This completion signal is used to generate the request signal for the next stage in the pipeline. As with the registers 54 in the FIFO, it is assumed that the forward propagation time through the decrementers 60 is less than the cycle time of a C-element and Schmitt trigger combination.
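
The decrement-and-complete behavior of the decrementers 60 can be illustrated with the following C sketch, which operates on the six-bit offset field assumed in the header-flit example above. The software loop stands in for the hardware borrow ripple; the six-bit width and the returned completion position are illustrative assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    /* Decrement a six-bit offset and report the bit position at which the borrow
     * chain stopped, i.e. the stage that received a borrow in and produced no
     * borrow out because its data input was a 1. */
    uint8_t ripple_decrement(uint8_t offset, int *completion_bit)
    {
        bool borrow = true;                  /* subtracting 1: borrow into bit 0 */
        *completion_bit = -1;                /* stays -1 only if the field wraps */
        for (int i = 0; i < 6; i++) {
            if (!borrow) break;              /* borrow already absorbed          */
            bool d = (offset >> i) & 1u;
            offset ^= (uint8_t)(1u << i);    /* flip this bit                    */
            if (d) {                         /* a 1 absorbs the borrow           */
                borrow = false;
                *completion_bit = i;         /* completion line asserted here    */
            }
        }
        return offset;
    }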

Finally, it should be noted that, internally, the automata of the present invention use 4-cycle signaling for flow control; but, to increase speed and conserve power, signals sent off chip must use a 2-cycle convention. A small amount of conversion must be done, therefore, before driving the pads. This conversion also adds a small amount of delay in the request/acknowledge path, which helps ensure that the data is valid by the time a request is received, even if the delays in the lines are slightly skewed. If the delays are skewed by a large amount, a simple lumped RC delay can be added to the request line external to the chip.
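
The following C fragment sketches, again behaviorally and under assumptions, the conversion between the 2-cycle convention used off chip (where each transition of the request wire signals a new flit) and the 4-cycle convention used internally (where a full high/low pulse is required). It illustrates only the direction from the pad inward; the actual circuitry also performs the reverse conversion and drives the pads.

    #include <stdbool.h>

    typedef struct { bool last_level; } edge_detector;

    /* Returns true exactly once for each transition, in either direction, of the
     * external 2-cycle request line; the caller then generates one complete
     * internal 4-cycle request in response. */
    bool two_cycle_to_four_cycle(edge_detector *s, bool external_req_level)
    {
        bool new_flit = (external_req_level != s->last_level);
        s->last_level = external_req_level;
        return new_flit;
    }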

It is worthy of note that the first asynchronous MRC chips were submitted in September 1987 for prototype fabrication in a 3 micrometer CMOS process. The fabricated and packaged chips were returned in November 1987. They functioned correctly at a speed of approximately 10 Mflits/s. Production chips fabricated later in a 1.6 micrometer CMOS process operated at about 30 Mflits/s, about three times faster. Subsequent design refinements in layout and implementation have increased the potential speed of the MRC to 48 Mflits/s in 3 micrometer CMOS. This speed was obtained by a prototype of the FIFO and 2/4-cycle conversion circuitry submitted for fabrication in February 1988 and returned and tested in April 1988. In a 1.6 micrometer CMOS process, these designs are anticipated to operate at approximately 100 Mflits/s. The primary limitation at these speeds is the lead inductance of PGA packages. Improved packaging techniques should allow such chips to operate at 150 Mflits/s.

Classifications
U.S. Classification: 709/243
International Classification: G06F15/173, H04L12/56
Cooperative Classification: H04L45/06, H04L45/34, G06F15/17368
European Classification: H04L45/34, H04L45/06, G06F15/173N4
Legal Events
Aug 31, 2006 (AS, Assignment): Owner name: CELLULAR ELEMENTS, LLC, NEVADA; Free format text: LICENSE AGREEMENT; ASSIGNOR: CALIFORNIA INSTITUTE OF TECHNOLOGY; REEL/FRAME: 018260/0178; Effective date: 20050426
Sep 26, 2003 (FPAY, Fee payment): Year of fee payment: 12
Dec 27, 1999 (FPAY, Fee payment): Year of fee payment: 8
Dec 27, 1999 (SULP): Surcharge for late payment
Nov 9, 1999 (REMI): Maintenance fee reminder mailed
Oct 16, 1995 (FPAY, Fee payment): Year of fee payment: 4
Jun 2, 1988 (AS, Assignment): Owner name: CALIFORNIA INSTITUTE OF TECHNOLOGY, 1201 E. CALIFO; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: FLAIG, CHARLES M.; SEITZ, CHARLES L.; REEL/FRAME: 004911/0120; Effective date: 19880527
Jun 2, 1988 (AS, Assignment): Owner name: CALIFORNIA INSTITUTE OF TECHNOLOGY, CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: FLAIG, CHARLES M.; SEITZ, CHARLES L.; REEL/FRAME: 004911/0120