US20030128709A1 - Scalable, re-configurable crossbar switch architecture for multi-processor system interconnection networks - Google Patents

Scalable, re-configurable crossbar switch architecture for multi-processor system interconnection networks

Info

Publication number
US20030128709A1
US20030128709A1
Authority
US
United States
Prior art keywords
circuits
processors
programmable
multiprocessor system
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/296,045
Other versions
US6597692B1
Inventor
Padmanabha I. Venkitakrishnan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mosaid Technologies Inc
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US09/296,045 priority Critical patent/US6597692B1/en
Assigned to HEWLETT-PACKARD COMPANY reassignment HEWLETT-PACKARD COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VENKITAKRISHNAN, PADMANABHA I.
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD COMPANY
Publication of US20030128709A1 publication Critical patent/US20030128709A1/en
Application granted granted Critical
Publication of US6597692B1 publication Critical patent/US6597692B1/en
Assigned to CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC. reassignment CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Assigned to CPPIB CREDIT INVESTMENTS, INC. reassignment CPPIB CREDIT INVESTMENTS, INC. AMENDED AND RESTATED U.S. PATENT SECURITY AGREEMENT (FOR NON-U.S. GRANTORS) Assignors: CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC.
Anticipated expiration legal-status Critical
Assigned to CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC. reassignment CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CPPIB CREDIT INVESTMENTS INC.
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/42Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker
    • H04Q3/54Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised
    • H04Q3/545Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised using a stored programme
    • H04Q3/54541Circuit arrangements for indirect selecting controlled by common circuits, e.g. register controller, marker in which the logic circuitry controlling the exchange is centralised using a stored programme using multi-processor systems
    • H04Q3/5455Multi-processor, parallelism, distributed systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/1302Relay switches
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/1304Coordinate switches, crossbar, 4/2 with relays, coupling field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13296Packet switching, X.25, frame relay
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q2213/00Indexing scheme relating to selecting arrangements in general and for multiplex systems
    • H04Q2213/13322Integrated circuits

Abstract

The present invention provides a new crossbar switch which is implemented by a plurality of parallel chips. Each chip is completely programmable to couple to every node in the system, e.g., from one node to about one thousand nodes (corresponding to present-day technology limits of about one thousand I/O pins), although conventional systems typically support no more than 32 nodes. The crossbar switch can be implemented to support only one node, in which case one chip can be used to route all 64 bits in parallel for 64-bit microprocessors.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application contains subject matter related to a concurrently filed U.S. Patent Application by Padmanabha I. Venkitakrishnan entitled “Backup Redundant Routing System Crossbar Switch Architecture for Multi-Processor System Interconnection Networks”. The related application is also assigned to Hewlett-Packard Company, is identified by docket number 10981858-1, and is hereby incorporated by reference. [0001]
  • The present application also contains subject matter related to a concurrently filed U.S. Patent Application by Padmanabha I. Venkitakrishnan, Gopalakrishnan Janakiraman, Tsen-Gong Jim Hsu, and Rajendra Kumar entitled “Scalable System Control Unit for Distributed Shared Memory Multi-Processor Systems”. The related application is also assigned to Hewlett-Packard Company, is identified by docket number 10980275-1, and is hereby incorporated by reference.[0002]
  • TECHNICAL FIELD
  • The present invention relates generally to multi-processor computer systems and more particularly to crossbar switch architecture. [0003]
  • BACKGROUND ART
  • High performance, multi-processor systems with a large number of microprocessors are built by interconnecting a number of node structures, each node containing a small number of microprocessors. This necessitates an interconnection network that is efficient in carrying control information and data between the nodes of the multi-processor. [0004]
  • In the past, crossbar switches, which route communications between the “nodes” of a network, included logic for determining a desired destination from a message header, and for appropriately routing all of the parallel bits of a transmission; e.g., 64 bits in parallel for a 64-bit microprocessor. A configuration such as this presents inherent scalability problems, principally because each crossbar switch is limited by its number of nodes or ports. For example, a typical crossbar switch might service four nodes in parallel, and route 64 bits to one of the four nodes; if more nodes were desired, multiple crossbar switches would be cascaded to support the additional nodes. Such a configuration is not readily scalable in terms of bandwidth either; i.e., such a system could not readily be reconfigured to handle 128 bits in parallel to support higher-performance systems, and the more cascaded structures there are, the greater the routing overhead and associated latency. [0005]
  • Thus, a method or architecture that would be scalable and re-configurable while having low latency has long been sought by, and has long eluded, those skilled in the art. Such a system would be packet switched and provide a high-availability (HA) crossbar switch architecture. [0006]
  • DISCLOSURE OF THE INVENTION
  • The present invention provides a new crossbar switch which is implemented by a plurality of parallel chips. Each chip is completely programmable to couple to every node in the system, e.g., from one node to about one thousand nodes (corresponding to present-day technology limits of about one thousand I/O pins), although conventional systems typically support no more than 32 nodes. The crossbar switch can be implemented to support only one node such that one chip can be used to route all 64 bits in parallel for 64-bit microprocessors or 128 bits in parallel for a 128-bit processor. [0007]
  • The present invention provides a flexible structure that allows dynamic programming of its data routing, such that one commercial crossbar system can support many different network architectures. With dynamic scalability, if nodes are added to an existing system, then different programming may be used to reconfigure the crossbar switches. [0008]
  • The present invention provides a multi-processor system interconnection network based on a scalable, re-configurable, low latency, packet switched and highly available crossbar switch architecture. [0009]
  • The present invention further provides a scalable system by parallelizing the interconnection network into a number of identical crossbar switches. This enables implementation of the interconnection network function without pushing the limits of integrated circuit and system packaging technologies. At the same time, the invention provides a method to substantially increase the bandwidth of a multi-processor system. [0010]
  • The present invention further provides a method to re-configure the ports of the crossbar switches so that a smaller number of crossbar switch circuits can provide the required bandwidth when the multi-processor system consists of a small number of node structures, thus reducing system hardware cost. [0011]
  • The invention described also provides for a redundant interconnection network in parallel to the primary interconnection network, thus significantly enhancing the reliability and high-availability of the multi-processor system. [0012]
  • The above and additional advantages of the present invention will become apparent to those skilled in the art from a reading of the following detailed description when taken in conjunction with the accompanying drawings.[0013]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 (PRIOR ART) is a prior art Distributed Shared Memory (DSM) computer system; [0014]
  • FIG. 2 is a functional block diagram of the interconnection network for a DSM computer system according to the present invention; [0015]
  • FIG. 3 is an illustration of the interconnection network packet format according to the present invention; [0016]
  • FIG. 4 is a micro-architectural diagram of the crossbar switch circuit according to the present invention; and [0017]
  • FIG. 5 is a timing diagram of the crossbar switch.[0018]
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • Referring now to FIG. 1, therein is shown a Distributed Shared Memory (DSM) computer system 100. The DSM computer system 100 has a plurality of nodes 200, 300, 400, and 500. The nodes 200 and 300 are connected to a crossbar switch 600. The nodes 400 and 500 are connected to a crossbar switch 700. The crossbar switches 600 and 700 are part of a network which includes additional communication switches, such as the communication switch 800. [0019]
  • In the DSM computer system 100, the nodes 200, 300, 400, and 500 contain respective memory units 210, 310, 410, and 510. The memory units 210, 310, 410, and 510 are respectively operatively connected to memory and coherence controllers 220, 320, 420, and 520. [0020]
  • Further, in the DSM computer system 100, each line of memory (typically a section of memory is tens of bytes in size) is assigned a “home node”, such as the node 200, which maintains the sharing of that memory line and guarantees its coherence. The home node maintains a directory which identifies the nodes that possess a copy of that memory line. In the nodes 200, 300, 400, and 500 the directories are coherence directories 230, 330, 430, and 530. When a node requires a copy of a memory line, it requests the memory line from the home node. The home node supplies the data from its memory unit if it has the latest data. If another node has the latest copy of the data, the home node directs this node to forward the data to the requesting node. The home node employs a coherence protocol to ensure that when a node writes a new value to the memory line, all other nodes see this latest value. The coherence controllers, which are a part of the memory and coherence controllers 220, 320, 420, and 520, implement this coherence functionality. [0021]
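  • As a hedged illustration only (the patent describes the directory behavior but gives no implementation), the home-node bookkeeping above can be sketched in C; the bitmask directory entry, the owner field, and the node count are assumptions chosen for clarity:

        #include <stdint.h>

        #define NUM_NODES 16            /* illustrative; FIG. 1 shows four nodes */

        /* One directory entry per memory line (cf. coherence directories 230-530).
         * The home node records which nodes hold a copy of the line and which
         * node, if any, holds the latest (modified) copy. */
        typedef struct {
            uint16_t sharers;           /* bit i set => node i holds a copy */
            int8_t   owner;             /* node with latest copy, -1 if home memory is current */
        } dir_entry_t;

        /* Home-node response to a request for a line: either supply the data
         * from home memory or direct the current owner to forward it. */
        static int home_node_lookup(dir_entry_t *e, int requester)
        {
            e->sharers |= (uint16_t)(1u << requester);  /* requester becomes a sharer */
            return (e->owner >= 0 && e->owner != requester)
                 ? e->owner             /* home directs the owning node to forward */
                 : -1;                  /* home supplies the data itself */
        }

        int main(void)
        {
            dir_entry_t line = { .sharers = 0, .owner = 2 };  /* node 2 holds latest copy */
            int forward_from = home_node_lookup(&line, 0);    /* node 0 requests the line */
            return forward_from == 2 ? 0 : 1;                 /* node 2 is told to forward */
        }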
  • The memory and coherence controllers 220 are connected to a number of central processing units (CPUs), generally four or eight processors, such as processors 240 and 250. The memory and coherence controllers 320 are shown connected to the processors 340 and 350. The memory and coherence controllers 420 are shown connected to processors 440 and 450. And the memory and coherence controllers 520 are shown connected to the processors 540 and 550. [0022]
  • Referring now to FIG. 2, therein is shown a functional block diagram of the interconnection network for a DSM computer system 1000 according to the present invention. The DSM computer system 1000 has a crossbar switch 2000, which consists of a plurality of crossbar switch integrated circuits (XBS circuits) 2001 through 2016. A typical high-performance DSM computer system 1000 can potentially have 16 XBS circuits or more, whereas low and medium performance systems can conceivably have just 8 or even only 4 XBS circuits. The XBS circuits can all be packaged in the same integrated circuit chip or on separate integrated circuit chips. This arrangement meets the large bandwidth requirements of a high-performance DSM computer system 1000 in which the interconnection network is easily scalable. [0023]
  • Each of the XBS circuits 2001 through 2016 has 16 ports which are respectively connected to nodes 3001 through 3016. The node 3009 is typical, and so each of the other nodes is somewhat similarly constructed and would have components which would be similarly numbered. In addition to the processors and memory, the node 3009 also includes a system control unit (SCU) which includes the coherency controls and which is split into a system control unit address (SCUA) section 4009 and a system control unit data (SCUD) section 5009. The SCUD section 5009 is scalable in that additional SCUD sections may be added as required. In FIG. 2, four SCUD sections 5009A through 5009D are shown. Each SCUD section, such as SCUD section 5009A, has four ports connected to the corresponding XBS circuits, such as XBS circuits 2001 through 2004 for the SCUD section 5009A. Similarly, SCUD section 5009B is connected to the XBS circuits 2005 through 2008. As would be evident to those skilled in the art, the four ports of subsequent SCUD sections would be respectively connected to subsequent ports of subsequent XBS circuits. This is represented by the phantom lines shown perpendicular to the arrows indicating output and input to the ports. [0024]
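  • The regular wiring pattern just described reduces to a simple index formula. The following C sketch is an illustration under one assumption (zero-based section indexing); the port and circuit numerals are those of FIG. 2:

        #include <stdio.h>

        /* Hedged sketch of the FIG. 2 wiring pattern: SCUD section k
         * (k = 0..3, i.e., sections 5009A..5009D) connects its four ports
         * to XBS circuits 2001+4k through 2004+4k. */
        int main(void)
        {
            for (int k = 0; k < 4; k++) {
                printf("SCUD section 5009%c -> XBS circuits %d through %d\n",
                       'A' + k, 2001 + 4 * k, 2004 + 4 * k);
            }
            return 0;
        }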
  • Since each port of the XBS circuit has the same functionality, the above arrangement not only allows varying the number of XBS circuits in the interconnection network 1000, but allows bundling of several ports on an XBS circuit to derive ports with higher bandwidth. In other words, the architecture of the XBS circuit allows scaling in two dimensions, i.e., varying the number of XBS circuits as well as the number of ports on a single XBS circuit. This re-configurable and bundling feature of the ports of the crossbar switch 2000 allows a smaller number of XBS circuits to provide the required bandwidth when the multiprocessor system consists of a small number of nodes, thus reducing system hardware cost, as the sketch below illustrates. [0025]
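  • A minimal C sketch of the bundling arithmetic, assuming the 16 same-functionality ports of FIG. 2 and an even division of ports among nodes (the division rule is an illustrative assumption, not stated in the patent):

        #include <stdio.h>

        /* With a fixed number of equivalent ports per XBS circuit, fewer nodes
         * means more ports, and therefore more bandwidth, bundled per node. */
        int main(void)
        {
            const int ports_per_xbs = 16;             /* as in FIG. 2 */
            for (int nodes = 16; nodes >= 4; nodes /= 2) {
                int bundle = ports_per_xbs / nodes;   /* ports bundled per node */
                printf("%2d nodes -> %d port(s) per node -> %dx per-node bandwidth\n",
                       nodes, bundle, bundle);
            }
            return 0;
        }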
  • Further, building the interconnection network 1000 with many of these parallelized XBS circuits as a plurality of integrated circuit chips helps in implementing these parts without pushing integrated circuit and part packaging technology limits. The scalable parallelized XBS circuits make packaging the interconnection network within the multiprocessor system cabinet very simple. [0026]
  • Referring now to FIG. 3, therein is shown an illustration of the interconnection network packet format according to the present invention. The network packet (NP) 6000 accomplishes the control and data signal traversals through the interconnection network 1000 between its source and destination nodes. The network packet 6000 is configured to provide routing information 6100, system control unit control packet (SCP) information 6200, and system control unit data packet (SDP) information 6300. [0027]
  • The routing information 6100 provides the following information: destination 6110, source 6120, and originator 6130. [0028]
  • The SCP information 6200 contains the following information: destination 6210, source 6220, originator 6230, the command 6240, the address 6250, and the length 6260. [0029]
  • The SDP information 6300 contains the following information: destination 6310, source 6320, the originator 6330, the data 6340, and its length 6350. [0030]
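  • One way to visualize the FIG. 3 format is as a record type. The C sketch below is an assumption-laden illustration: the patent names the fields and their reference numerals but does not fix field widths or payload size, so the sizes chosen here are hypothetical:

        #include <stdint.h>

        typedef struct {                 /* routing information 6100 */
            uint16_t destination;        /* 6110 */
            uint16_t source;             /* 6120 */
            uint16_t originator;         /* 6130 */
        } routing_info_t;

        typedef struct {                 /* SCP information 6200 */
            uint16_t destination, source, originator;   /* 6210-6230 */
            uint16_t command;            /* 6240 */
            uint64_t address;            /* 6250 */
            uint16_t length;             /* 6260 */
        } scp_t;

        typedef struct {                 /* SDP information 6300 */
            uint16_t destination, source, originator;   /* 6310-6330 */
            uint8_t  data[64];           /* 6340; payload size is an assumption */
            uint16_t length;             /* 6350 */
        } sdp_t;

        typedef struct {                 /* network packet (NP) 6000 */
            routing_info_t routing;      /* 6100 */
            scp_t          scp;          /* 6200 */
            sdp_t          sdp;          /* 6300 */
        } network_packet_t;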
  • Referring now to FIG. 4, therein is shown a micro-architectural diagram of the XBS circuit 2000 with 8 ports 2020, 2030, 2040, 2050, 2060, 2070, 2080, and 2090, as shown. Taking port 2020 as typical, signals from the source node enter an input buffer 2022 and then are input to the decode and setup crossbar circuitry 2024. The circuitry 2024 is connected to a programmable crossbar switch core 2026 which provides the network packet 6000 to output drivers 2028 and then through the port 2050 to the destination node. [0031]
  • Referring now to FIG. 5, therein is shown the low latency transfer of the present invention in which the network packet from the source node is delivered to the destination node in four clock cycles. [0032]
  • During the first clock cycle from T1 to T2, there is a latch of the network packet 6000 into the input buffer 2022. During the second clock cycle from T2 to T3, the network packet 6000 is decoded, and the crossbar switch core 2026 is set up. [0033]
  • During the third clock cycle from T3 to T4, the network packet 6000 is propagated through the latch and switch. [0034]
  • During the fourth clock cycle from T4 to T5, the network packet 6000 is driven out through the destination port 2050. [0035]
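  • The four-cycle path can be modeled in software as four sequential stages, one per clock. In the C sketch below the function names and bodies are stand-in stubs, and the simplified packet type is an assumption; a real XBS circuit performs these stages in hardware:

        #include <stdio.h>

        typedef struct { int dest_port; } network_packet_t;  /* stand-in for NP 6000 */

        static network_packet_t latch_input(network_packet_t p)   /* T1-T2: input buffer 2022 */
        { return p; }
        static void decode_and_setup(network_packet_t *p)         /* T2-T3: set up core 2026 */
        { (void)p; }
        static void propagate(network_packet_t *p)                /* T3-T4: latch and switch */
        { (void)p; }
        static void drive_out(const network_packet_t *p)          /* T4-T5: output drivers 2028 */
        { printf("packet delivered via port %d\n", p->dest_port); }

        int main(void)
        {
            network_packet_t np = { .dest_port = 2050 };
            network_packet_t b = latch_input(np);  /* one clock per stage, four in all */
            decode_and_setup(&b);
            propagate(&b);
            drive_out(&b);
            return 0;
        }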
  • In operation, the control and data signal traversals through the interconnection network 1000 between its source and destination nodes, which could be from node 3001 to node 3008, are accomplished by moving the network packet 6000. The destination 6110 information and the source 6120 information contain the information on the nodes involved for routing purposes. The SCP 6200 information and the SDP 6300 information are generated and used by the source and destination nodes by providing control information and data. [0036]
  • To meet the large bandwidth requirements of high performance DSM computer systems, the DSM computer system 1000 can have 16 XBS circuits, 2001 through 2016, which can all be integrated into the same integrated circuit or be separate circuits in order to simplify the making of the integrated circuits or packaging the integrated circuits. [0037]
  • For an XBS circuit 2001 having 16 ports operating at 400 MHz, the bandwidth could be 1.6 GB/s per part. At the same time, only 608 signal pins would be required. From the above, it will be evident that it is possible in low and medium performance systems to have a smaller number of XBS circuits when there are a smaller number of node structures and still be able to retain the required bandwidth. This would substantially reduce system hardware cost, while at the same time providing a great deal of flexibility. [0038]
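  • The quoted figure can be reproduced under one plausible assumption (the patent states only the totals, not the per-port width): if each of the 16 ports accepts 2 bits per clock, then 16 ports × 2 bits × 400 MHz = 12.8 Gbit/s = 1.6 GB/s per part. On that same assumption, the 608 signal pins (608 / 16 = 38 pins per port) would cover each port's bidirectional data together with its clock and control signals; the per-port pin breakdown is likewise an assumption rather than a figure given in the patent.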
  • In accordance with the present invention, a new crossbar switch is implemented by a plurality of parallel chips. Each chip is completely programmable to couple to every node in the system, e.g., from one node to about one thousand nodes (corresponding to present-day technology limits of about one thousand I/O pins), although conventional systems typically support no more than 32 nodes. For example, if each chip is configured to route up to 64 bits, 32 chips could be provided as part of a crossbar system. If the system as implemented only supported one node, then one chip could be used to route all 64 bits in parallel. On the other hand, if there were 32 nodes, each chip could be connected to all 32 nodes, and each could be configured by software to route two bits to its attached nodes. Each particular node determines whether a message is intended for it. Thus, the structure provided by the invention reduces latency and promotes scalability. As can be seen from this description, the present invention is a flexible structure that allows dynamic programming of its data routing, such that one commercial crossbar system can support many different network architectures. An advantage of this system is dynamic scalability; if one adds nodes to an existing system, then a different driver may be used to reconfigure the crossbar switches. [0039]
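  • The bit-slicing arithmetic of this example can be stated compactly: each of N identical chips routes 64/N bits of every transfer. A minimal C sketch, assuming the 64-bit word and the chip counts of the example above:

        #include <assert.h>
        #include <stdio.h>

        /* A 64-bit word is divided evenly across the parallel crossbar chips:
         * 1 chip routes all 64 bits; 32 chips route 2 bits each. */
        int main(void)
        {
            const int word_bits = 64;
            for (int chips = 1; chips <= 32; chips *= 2) {
                assert(word_bits % chips == 0);
                printf("%2d chip(s) -> each routes %2d bit(s) of every word\n",
                       chips, word_bits / chips);
            }
            return 0;
        }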
  • While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations which fall within the spirit and scope of the included claims. All matters set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense. [0040]

Claims (20)

The invention claimed is:
1. A multiprocessor system comprising:
a plurality of processors;
a node containing said plurality of processors; and
a plurality of programmable crossbar switch circuits connected to said node; each of said plurality of circuits having:
an input port,
an output port,
said input port and said output port respectively connected to one and to another of said plurality of processors,
a programmable crossbar core for selectively connecting said input port and said output port, and
programmable means for switching said programmable crossbar core whereby signals are routed between said plurality of processors.
2. The multiprocessor system as claimed in claim 1 wherein each of said plurality of processors communicate in at least two parallel bits of information and wherein one of said plurality of circuits routes one of said bits and another of said plurality of circuits routes another of said bits.
3. The multiprocessor system as claimed in claim 1 wherein said plurality of processors communicate with signal packets and said signal packets program said programmable means in said plurality of circuits.
4. The multiprocessor system as claimed in claim 1 including a second node, having a second plurality of processors, and wherein said circuit is connected to said second node and programmable to connect one of said plurality of processors in said first node with one of said second plurality of processors in said second node.
5. The multiprocessor system as claimed in claim 1 wherein said programmable means includes a decoder and a core programmer and is responsive to said signals routed between said plurality of processors for switching said programmable crossbar core.
6. The multiprocessor system as claimed in claim 1 wherein said input ports of said plurality of circuits have input buffers thereon and said output ports have output drivers thereon.
7. The multiprocessor system as claimed in claim 1 wherein each of said plurality of circuits connects said signals from said input port to said output port in four steps.
8. The multiprocessor system as claimed in claim 1 wherein each of said plurality of circuits is programmable between a bit slicing mode and a node connection mode.
9. The multiprocessor system as claimed in claim 1 wherein each of said plurality of circuits is an individual integrated circuit.
10. A multiprocessor system comprising:
a plurality of processors;
a node containing said plurality of processors; and
a plurality of programmable crossbar switch circuits connected to said node; each of said plurality of circuits having:
a plurality of input ports,
a plurality of output ports,
said plurality of input ports and said plurality of output ports connected to said plurality of processors,
a programmable crossbar core for selectively connecting individual ports of said plurality of input ports and individual ports of said plurality of output ports, and
programmable means for switching said programmable crossbar core whereby signals are routed between said plurality of processors.
11. The multiprocessor system as claimed in claim 10 wherein each of said plurality of processors communicate in parallel bits of information and wherein one of said plurality of circuits routes one of said bits whereby the number of circuits equals the number of bits communicated.
12. The multiprocessor system as claimed in claim 10 wherein said plurality of processors communicate with signal packets and each of said signal packets program one of said plurality of programmable means in said plurality of circuits.
13. The multiprocessor system as claimed in claim 10 including a plurality of nodes, each having a plurality of processors, and wherein said plurality of circuits are connected to said plurality of nodes and programmable to connect one of said plurality of processors in said first node with one of said processors in said plurality of nodes.
14. The multiprocessor system as claimed in claim 10 wherein said programmable means includes a decoder and a core programmer and are responsive to said signals routed between said plurality of processors for switching said programmable crossbar core.
15. The multiprocessor system as claimed in claim 10 wherein said input ports of said plurality of circuits have input buffers thereon and said output ports have output drivers thereon.
16. The multiprocessor system as claimed in claim 10 wherein said plurality of processors operate on clock cycles and wherein each of said plurality of circuits connects said signals from said input port to said output port in four clock cycles.
17. The multiprocessor system as claimed in claim 10 wherein each of said plurality of circuits is programmable between a bit slicing mode and a node connection mode.
18. The multiprocessor system as claimed in claim 10 wherein each of said plurality of circuits is an individual integrated circuit and on a common substrate up to a predetermined number.
19. A programmable crossbar switch circuit comprising:
an input port;
an output port;
a switchable crossbar core for selectively connecting said input port and said output port; and
programmable means connected to said switchable crossbar core and including:
a decoder connected to said input port for decoding a signal packet provided thereto containing information on the connection of said input port and said output port; and
a core programmer connected to said decoder for switching said switchable crossbar core to connect and disconnect said input port and said output port.
20. The programmable crossbar switch circuit as claimed in claim 19 including:
a plurality of input ports;
a plurality of output ports;
said switchable crossbar core for selectively connecting said plurality of input ports to said plurality of output ports;
a plurality of programmable means including:
a plurality of decoders individually connected to said plurality of input ports for decoding signal packets provided thereto containing information on the connection of said plurality of input ports and said plurality of output ports in response to said decoder decoding of signal packets provided thereto; and
a plurality of core programmers individually connected to said plurality of decoders for switching said switchable crossbar core to selectively and individually connect and disconnect said plurality of input ports and said plurality of output ports.
US09/296,045 1999-04-21 1999-04-21 Scalable, re-configurable crossbar switch architecture for multi-processor system interconnection networks Expired - Lifetime US6597692B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/296,045 US6597692B1 (en) 1999-04-21 1999-04-21 Scalable, re-configurable crossbar switch architecture for multi-processor system interconnection networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/296,045 US6597692B1 (en) 1999-04-21 1999-04-21 Scalable, re-configurable crossbar switch architecture for multi-processor system interconnection networks

Publications (2)

Publication Number Publication Date
US20030128709A1 true US20030128709A1 (en) 2003-07-10
US6597692B1 US6597692B1 (en) 2003-07-22

Family

ID=23140375

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/296,045 Expired - Lifetime US6597692B1 (en) 1999-04-21 1999-04-21 Scalable, re-configurable crossbar switch architecture for multi-processor system interconnection networks

Country Status (1)

Country Link
US (1) US6597692B1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050083921A1 (en) * 2000-10-31 2005-04-21 Chiaro Networks Ltd. Router switch fabric protection using forward error correction
US20150208076A1 (en) * 2014-01-21 2015-07-23 Lsi Corporation Multi-core architecture for low latency video decoder
US20170060786A1 (en) * 2015-08-28 2017-03-02 Freescale Semiconductor, Inc. Multiple request notification network for global ordering in a coherent mesh interconnect

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060174052A1 (en) * 2005-02-02 2006-08-03 Nobukazu Kondo Integrated circuit and information processing device
KR100546546B1 (en) * 1999-02-23 2006-01-26 가부시키가이샤 히타치세이사쿠쇼 Integrated circuit and information processing device
US7343622B1 (en) * 2000-04-27 2008-03-11 Raytheon Company Multi-level secure multi-processor computer architecture
US6950893B2 (en) * 2001-03-22 2005-09-27 I-Bus Corporation Hybrid switching architecture
EP1280374A1 (en) * 2001-07-27 2003-01-29 Alcatel Network element with redundant switching matrix
US7376811B2 (en) * 2001-11-06 2008-05-20 Netxen, Inc. Method and apparatus for performing computations and operations on data using data steering
US6820167B2 (en) * 2002-05-16 2004-11-16 Hewlett-Packard Development Company, L.P. Configurable crossbar and related methods
US7958351B2 (en) * 2002-08-29 2011-06-07 Wisterium Development Llc Method and apparatus for multi-level security implementation
US7694064B2 (en) * 2004-12-29 2010-04-06 Hewlett-Packard Development Company, L.P. Multiple cell computer systems and methods
GB2430326A (en) * 2005-09-16 2007-03-21 Tyco Electronics Raychem Nv Cross connect device comprising a plurality of sparse cross bars
US7984194B2 (en) * 2008-09-23 2011-07-19 Microsoft Corporation Dynamically configurable switch for distributed test lab

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4379326A (en) 1980-03-10 1983-04-05 The Boeing Company Modular system controller for a transition machine
US5179669A (en) 1988-08-22 1993-01-12 At&T Bell Laboratories Multiprocessor interconnection and access arbitration arrangement
US4965793A (en) 1989-02-03 1990-10-23 Digital Equipment Corporation Method and apparatus for interfacing a system control unit for a multi-processor
US4968977A (en) 1989-02-03 1990-11-06 Digital Equipment Corporation Modular crossbar interconnection network for data transactions between system units in a multi-processor system
US5020059A (en) 1989-03-31 1991-05-28 At&T Bell Laboratories Reconfigurable signal processor
US5107493A (en) 1989-08-02 1992-04-21 At&T Bell Laboratories High-speed packet data network using serially connected packet and circuit switches
EP0429733B1 (en) 1989-11-17 1999-04-28 Texas Instruments Incorporated Multiprocessor with crossbar between processors and memories
US5522083A (en) 1989-11-17 1996-05-28 Texas Instruments Incorporated Reconfigurable multi-processor operating in SIMD mode with one processor fetching instructions for use by remaining processors
US5280474A (en) 1990-01-05 1994-01-18 Maspar Computer Corporation Scalable processor to processor and processor-to-I/O interconnection network and method for parallel processing arrays
US5191578A (en) 1990-06-14 1993-03-02 Bell Communications Research, Inc. Packet parallel interconnection network
US5261059A (en) 1990-06-29 1993-11-09 Digital Equipment Corporation Crossbar interface for data communication network
US5179552A (en) * 1990-11-26 1993-01-12 Bell Communications Research, Inc. Crosspoint matrix switching element for a packet switch
JPH0776942B2 (en) 1991-04-22 1995-08-16 インターナショナル・ビジネス・マシーンズ・コーポレイション Multiprocessor system and data transmission device thereof
CA2078912A1 (en) 1992-01-07 1993-07-08 Robert Edward Cypher Hierarchical interconnection networks for parallel processing
US5598568A (en) 1993-05-06 1997-01-28 Mercury Computer Systems, Inc. Multicomputer memory access architecture
JPH06314264A (en) * 1993-05-06 1994-11-08 Nec Corp Self-routing cross bar switch
US5617413A (en) * 1993-08-18 1997-04-01 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Scalable wrap-around shuffle exchange network with deflection routing
US5822605A (en) * 1994-03-24 1998-10-13 Hitachi, Ltd. Parallel processor system with a broadcast message serializing circuit provided within a network
US5555543A (en) 1995-01-03 1996-09-10 International Business Machines Corporation Crossbar switch apparatus and protocol
US5790539A (en) * 1995-01-26 1998-08-04 Chao; Hung-Hsiang Jonathan ASIC chip for implementing a scaleable multicast ATM switch
KR100262682B1 (en) * 1995-04-15 2000-08-01 최병석 Multicast atm switch and its multicast contention resolution
US6181159B1 (en) * 1997-05-06 2001-01-30 Altera Corporation Integrated circuit incorporating a programmable cross-bar switch
US6138185A (en) * 1998-10-29 2000-10-24 Mcdata Corporation High performance crossbar switch
US6263415B1 (en) * 1999-04-21 2001-07-17 Hewlett-Packard Co Backup redundant routing system crossbar switch architecture for multi-processor system interconnection networks

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050083921A1 (en) * 2000-10-31 2005-04-21 Chiaro Networks Ltd. Router switch fabric protection using forward error correction
US8315175B2 (en) * 2000-10-31 2012-11-20 Foundry Networks, Llc Router switch fabric protection using forward error correction
US20150208076A1 (en) * 2014-01-21 2015-07-23 Lsi Corporation Multi-core architecture for low latency video decoder
US9661339B2 (en) * 2014-01-21 2017-05-23 Intel Corporation Multi-core architecture for low latency video decoder
US20170060786A1 (en) * 2015-08-28 2017-03-02 Freescale Semiconductor, Inc. Multiple request notification network for global ordering in a coherent mesh interconnect
US9940270B2 (en) * 2015-08-28 2018-04-10 Nxp Usa, Inc. Multiple request notification network for global ordering in a coherent mesh interconnect

Also Published As

Publication number Publication date
US6597692B1 (en) 2003-07-22

Similar Documents

Publication Publication Date Title
US6378029B1 (en) Scalable system control unit for distributed shared memory multi-processor systems
US6263415B1 (en) Backup redundant routing system crossbar switch architecture for multi-processor system interconnection networks
US11640362B2 (en) Procedures for improving efficiency of an interconnect fabric on a system on chip
US6597692B1 (en) Scalable, re-configurable crossbar switch architecture for multi-processor system interconnection networks
US7924708B2 (en) Method and apparatus for flow control initialization
US7643477B2 (en) Buffering data packets according to multiple flow control schemes
US6304568B1 (en) Interconnection network extendable bandwidth and method of transferring data therein
JP2004525449A (en) Interconnect system
JPH05153163A (en) Method of routing message and network
JPH08251234A (en) Connection method and protocol
TWI759585B (en) System and method for asynchronous, multiple clock domain data streams coalescing and resynchronization
KR100951856B1 (en) SoC for Multimedia system
US5802333A (en) Network inter-product stacking mechanism in which stacked products appear to the network as a single device
US20150135196A1 (en) Method For Enabling A Communication Between Processes, Processing System, Integrated Chip And Module For Such A Chip
US7978693B2 (en) Integrated circuit and method for packet switching control
US20020150056A1 (en) Method for avoiding broadcast deadlocks in a mesh-connected network
US5771227A (en) Method and system for routing messages in a multi-node data communication network
JP2009282917A (en) Interserver communication mechanism and computer system
KR101924002B1 (en) Chip multi processor and router for chip multi processor
US7526631B2 (en) Data processing system with backplane and processor books configurable to support both technical and commercial workloads
US20070245044A1 (en) System of interconnections for external functional blocks on a chip provided with a single configurable communication protocol
US7797476B2 (en) Flexible connection scheme between multiple masters and slaves
US20020161453A1 (en) Collective memory network for parallel processing and method therefor
WO2006048826A1 (en) Integrated circuit and method for data transfer in a network on chip environment
JP3791463B2 (en) Arithmetic unit and data transfer system

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD COMPANY, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VENKITAKRISHNAN, PADMANABHA I.;REEL/FRAME:010039/0728

Effective date: 19990412

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:014142/0757

Effective date: 20030605

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC.,

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:034591/0627

Effective date: 20141103

FPAY Fee payment

Year of fee payment: 12

SULP Surcharge for late payment

Year of fee payment: 11

AS Assignment

Owner name: CPPIB CREDIT INVESTMENTS, INC., CANADA

Free format text: AMENDED AND RESTATED U.S. PATENT SECURITY AGREEMENT (FOR NON-U.S. GRANTORS);ASSIGNOR:CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC.;REEL/FRAME:046900/0136

Effective date: 20180731

AS Assignment

Owner name: CONVERSANT INTELLECTUAL PROPERTY MANAGEMENT INC., CANADA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CPPIB CREDIT INVESTMENTS INC.;REEL/FRAME:054385/0435

Effective date: 20201028