US20030145294A1 - Verifying interconnect fabric designs - Google Patents

Verifying interconnect fabric designs

Info

Publication number
US20030145294A1
US20030145294A1 (application US10/058,258)
Authority
US
United States
Prior art keywords
flow
design
interconnect
path
requirements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/058,258
Inventor
Julie Ward
Troy Shahoumian
John Wilkes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Co
Priority to US10/058,258
Assigned to HEWLETT-PACKARD COMPANY. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHAHOUMIAN, TROY ALEXANDER; WARD, JULIE ANN; WILKES, JOHN
Priority to US10/290,760 (US7308494B1)
Priority to US10/290,643 (US7237020B1)
Publication of US20030145294A1
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD COMPANY
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/18 Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0896 Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities

Definitions

  • the present invention pertains to the field of networks. More particularly, this invention relates to verification of designs for networks.
  • An interconnect fabric provides for communication among a set of nodes in a network. Communications originate within the network at a source node and terminate at a terminal node. Thus, a wide variety of networks may be viewed as a set of source nodes that communicate with a set of terminal nodes via an interconnect fabric. For example, a storage area network may be arranged as a set of computers as source nodes which are connected to a set of storage devices as terminal nodes via an interconnect fabric that includes communication links and devices such as hubs, routers, switches, etc. Devices such as hubs, routers, switches, etc., are hereinafter referred to as interconnect devices. Depending on the circumstances, a node may assume the role of source node with respect to some communications and of terminal node for other communications.
  • the communication requirements of an interconnect fabric may be characterized in terms of a set of flow requirements.
  • a typical set of flow requirements specifies the required communication bandwidth from each source node to each terminal node.
  • the design of an interconnect fabric usually involves selecting the appropriate arrangement of physical communication links and interconnect devices and related components that will meet the flow requirements.
  • a technique for verifying an interconnect fabric design for interconnecting a plurality of network nodes.
  • a design for the interconnect fabric specifies an arrangement of elements of the fabric and flow requirements among the network nodes.
  • the invention programmatically verifies the design. This may include determining whether the flow requirements are satisfied by the design and whether the design violates constraints on the elements, such as bandwidth capacity and number of available ports. This may also include determining whether the network can continue to satisfy the flow requirements in the event of one or more failures of elements of the interconnect fabric.
  • a computer implemented method for verifying a design for an interconnect fabric.
  • the design includes an arrangement of interconnect elements for interconnecting a plurality of network nodes.
  • the design also has requirements for a plurality of flows among the network nodes. Each of the plurality of flows is associated with a path for the flow through the interconnect fabric.
  • requirements (e.g., bandwidth or a number of ports) associated with each of the corresponding flows are aggregated for each interconnect element in each path.
  • a determination is made as to whether the aggregated requirements exceed a capacity of the interconnect element.
  • a system for verifying a design for an interconnect fabric.
  • a set of design information includes requirements for a plurality of flows and a design specification.
  • Each of the plurality of flows is associated with a path for the flow through the interconnect fabric.
  • a fabric design verification tool aggregates requirements (e.g., bandwidth or a number of ports) associated with each of the corresponding flows and determines whether the aggregated requirements exceed a capacity of the interconnect element.
  • the interconnect elements may include interconnect devices and links.
  • the interconnect devices may be selected from the group consisting of switches and hubs.
  • the interconnect devices include a hub
  • the extent of a domain of hub connected components may be identified. Identifying the extent of the domain of hub connected components may include a depth first search of the interconnect fabric for the hub connected components and may include constructing a tree data structure wherein a hub occupies a position in the tree and other interconnect elements connected to the hub occupy positions in the tree one level down from the hub.
  • Requirements of ports may also be aggregated for each of the plurality of flows and a determination made as to whether a number of available ports of one or more of the interconnect elements is exceeded by the aggregated requirements of ports. Whether a flow corresponds to a valid path through the interconnect fabric may be determined. A valid path starts at a source node for the flow, terminates at an end node for the flow and passes through a contiguous subset of the interconnect elements.
  • a flow may be assigned to a primary path in the design and then the flow may be assigned to a backup path in the design.
  • the backup path is intended to support the flow in the event of a failure in the primary path.
  • the interconnect fabric may be evaluated to determine whether it supports the requirements of the flows under various different failure scenarios.
  • FIG. 1 shows a method for verifying a design of an interconnect fabric according to an aspect of the present invention
  • FIG. 2 shows an arrangement of flows in an exemplary interconnect fabric design
  • FIG. 3 shows a design specification for an exemplary interconnect fabric design
  • FIG. 4 illustrates an exemplary design for an interconnect fabric including switches and hubs
  • FIG. 5 illustrates a method for identifying hub connected components in a design for an interconnect fabric according to an aspect of the present invention
  • FIG. 6 illustrates an exemplary tree data structure that may be formed by the method of FIG. 5;
  • FIG. 7 illustrates a method for verifying a design of an interconnect fabric under various different failure condition scenarios according to an aspect of the present invention
  • FIG. 8 shows an arrangement of flows for an exemplary interconnect fabric design
  • FIG. 9 shows an exemplary design for an interconnect fabric including primary and backup paths for each of the flows of FIG. 8;
  • FIG. 10 shows a system having a fabric design verification tool that may be used to verify a design for an interconnect fabric in accordance with an aspect of the present invention.
  • FIG. 1 shows a method 100 for verifying a design of an interconnect fabric according to an aspect of the present invention.
  • the flow requirements may include, for example, source and terminal nodes for communication flows and required communication bandwidth for the flows.
  • the interconnect fabric design specifies an arrangement of elements of the fabric, such as links and interconnect devices, which is intended to satisfy the flow requirements.
  • the invention programmatically verifies the design. This may include determining whether the flow requirements are satisfied by the design and whether the design violates constraints on the elements, such as bandwidth capacity and number of available ports.
  • a set of network nodes, such as source and terminal nodes, that are interconnected by the interconnect fabric design are determined.
  • flow requirements for the fabric are determined. Table 1 shows an example set of flow requirements for an interconnect fabric design: flows a, b and c from source node 10 to terminal nodes 20, 22 and 24, respectively; flows d, e and f from source node 12 to terminal nodes 20, 22 and 24; and flows g and h from source node 14 to terminal nodes 22 and 24, with no flow required from source node 14 to terminal node 20.
  • the flow requirements in this example specify three source nodes (source nodes 10 - 14 in the figures below) and three terminal nodes (terminal nodes 20 - 24 in the figures below).
  • for the interconnect fabric design to meet the flow requirements, it must contain communication paths between all pairs of the source nodes 10 - 14 and terminal nodes 20 - 24 having positive flow requirements and must have sufficient bandwidth to support all of the flow requirements simultaneously.
  • the source nodes 10 - 14 are host computers and terminal nodes 20 - 24 are storage devices and the bandwidth values for flows a-h are numbers expressed in units of megabits per second.
  • the interconnect fabric design may be for a storage area network.
  • the cells of Table 1 may contain a list of two or more entries.
  • FIG. 2 shows an arrangement of flows in the interconnect fabric design obtained at step 102 for this example. Accordingly, a flow a forms a connection between the source node 10 and the terminal node 20 , a flow b forms a connection between the source node 10 and the terminal node 22 , and a flow c forms a connection between the source node 10 and the terminal node 24 . Similarly, flows d, e, and f, respectively, form connections from the source node 12 to the terminal nodes 20 - 24 and flows g and h, respectively, form connections from the source node 14 to the terminal nodes 22 - 24 .
  • the set of nodes and the flow requirements are the basic constraints on the design for the interconnect fabric, they may have been used as a starting point for the design which is to be verified by the present invention. Accordingly, they will generally be readily available.
  • U.S. application Ser. No. 09/707,227, filed Nov. 16, 2000, the contents of which are hereby incorporated by reference, discloses a technique for designing interconnect fabrics using a set of nodes and flow requirements as a starting point. It will be apparent, however, that the present technique may be used to verify interconnect fabric designs obtained by other techniques, such as manual or other methods.
  • the set of nodes and flow requirements may be obtained in other ways. For example, the set of nodes may be obtained from the design itself. Also, the present invention may be used to verify whether an interconnect fabric initially designed to support one set of flow requirements will support a different set of flow requirements. For example, it may be desired to determine whether an existing design will meet the requirements of a new application.
  • a specification of the interconnect fabric design which is to be verified by the present invention is obtained.
  • the design specifies a set of interconnect devices and communication links.
  • the devices may include for example, hubs, routers, switches, and so forth.
  • the links form physical connections among the nodes and the interconnect devices. These may include, for example, fiber optic links, fibre channel links, wire-based links, and links such as SCSI, as well as wireless links.
  • FIG. 3 shows a design specification for the example flow requirements.
  • the design of FIG. 3 may be developed by the technique of U.S. application Ser. No. 09/707,227, mentioned above, or by another technique.
  • devices 30 , 32 , and 34 and a set of links 40 - 58 interconnect the nodes 10 - 14 and 20 - 24 . More particularly, flows a, b and c from the source node 10 are merged and connected to the device 30 by a link 40 . The flow a is connected between the device 30 and the terminal node 20 by a link 42 . The flows b and c from the device 30 are merged and connected to the device 32 by a link 44 .
  • the flow d from the source node 12 is connected to the terminal node 20 by a link 46 .
  • the flows e and f from the source node 12 are merged and connected to the device 32 by a link 48 .
  • the flows b and e from the device 32 are merged and connected to the device 34 by a link 50 .
  • the flows c and f from the device 32 are merged and connected to the terminal node 24 by a link 52 .
  • the flow g from the source node 14 is connected to the device 34 by a link 54 .
  • the flows b, e and g from the device 34 are merged and connected to the terminal node 22 by a link 56 .
  • the flow h from the source node 14 is connected to the terminal node 22 by a link 58 .
  • the design specification may be represented in other ways.
  • the design specification may be in the form of a list including elements and connections between the elements.
  • each flow included in the flow requirements obtained in the step 102 is associated with a path through the interconnect fabric.
  • These associations of flows to paths may be specified by the design specification. Alternately, these associations may be developed in step 106 by comparing each flow to the design for the interconnect fabric and identifying a path through the fabric whose end points match those of the flow.
  • the flow may be assigned to one such path and an attempt made to verify the design based on that assignment (steps 108 - 110 , discussed below). If the design cannot be verified, the flow may be assigned to another possible path. Flows may be assigned to new paths until the design can be verified or all the possible paths for all flows have been tried unsuccessfully.
  • the path should start at the source node for the flow, terminate at the end node for the flow and pass through a contiguous subset of the links and devices identified in the step 104 . If a valid path cannot be identified for a flow in step 106 , this indicates that the design will not meet the flow requirements. If the design is rejected in step 106 because it does not include a valid path for each flow, it may then be modified to add one or more valid paths as needed or a new design may be selected.
  • each of the flows a-h is associated with a corresponding path through the interconnect fabric.
  • flow a is associated with a path from the source node 10 , through link 40 , device 30 and link 42 , terminating at terminal node 20 .
  • Flow b is associated with a path from the source node 10 , through link 40 , device 30 , link 44 , device 32 , link 50 , device 34 , and link 56 , terminating at terminal node 22 .
  • Flow c is associated with a path from the source node 10 , through link 40 , device 30 , link 44 , device 32 and link 52 , terminating at terminal node 24 .
  • Flow d is associated with a path from the source node 12 , through link 46 and terminating at terminal node 20 .
  • Flow e is associated with a path from the source node 12 , through link 48 , device 32 , link 50 , device 34 and link 56 , terminating at terminal node 22 .
  • Flow f is associated with a path from the source node 12 , through link 48 , device 32 and link 52 , terminating at terminal node 24 .
  • Flow g is associated with a path from the source node 14 , through link 54 , device 34 and link 56 , terminating at terminal node 22 .
  • Flow h is associated with a path from the source node 14 , through link 58 and terminating at terminal node 22 .
  • in steps 108 and 110 , the paths identified in the step 106 are evaluated to determine whether the flow requirements for the associated flows are met by the design. More particularly, in step 108 , a path may be selected for evaluation. Elements of the selected path are then identified. These elements may include, for example, each port, interconnect device and link encountered in the path. For each such element, the requirements for the flow that corresponds to the path through that element are aggregated along with requirements for other flows through that same element. These requirements may include, for example, the bandwidth and the number of ports required for the flows. For each selected path, its flow requirements are aggregated with those of other paths that were evaluated prior to the selected path. Then, in step 110 , a determination is made as to whether the capacity of each element is exceeded by the aggregated requirements. This process is repeated for each flow and for each element of each flow.
  • each of interconnect devices 30 - 34 is a switch having a maximum bandwidth capacity of 100 Mb/s. Assume also that each of the interconnect devices 30 - 34 has four available ports and each port of the devices 30 - 34 has a maximum bandwidth capacity of 100 Mb/s. In addition, assume that each port of each of the source nodes 10 - 14 and each port of each of the terminal nodes 20 - 24 and each of the links 40 - 58 also has maximum bandwidth capacity of 100 Mb/s. Assume also that each of flows a-d and flows f-h require a bandwidth of 33 Mb/s and that flow e requires 0.5 Mb/s.
  • the path for flow a may be selected.
  • the bandwidth requirement for the flow a may then be associated with each of a port at the source node 10 , the link 40 , the device 30 , the link 42 and a port at the terminal node 20 .
  • this information may be saved in computer memory.
  • the requirement of one port at the node 10 (shared by flows a, b and c), two ports (an entry port and an exit port) at the device 30 and one port at the node 20 may be recorded.
  • a determination may be made as to whether any of the bandwidth capacities of these elements is exceeded by the flow a and whether the number of available ports for each of these elements is exceeded by the flow a.
  • the path for the flow b may be selected. Because the flow b uses the same port at the source node as the flow a, the bandwidth requirements for both flows are aggregated. The sum of these flow requirements may then be saved in the step 108 for comparison with the capacity of the port at node 10 in the step 110 . Similarly, the flow b also uses the link 40 , and the same entry port at the device 30 that is used by the flow a. Thus, the bandwidth requirements of flow b for each of these elements can be aggregated with those of flow a. However, the flow b uses a different exit port at the device 30 . Thus, the requirement of a third port at the device 30 may be recorded. Then, in the step 110 , requirements of the flow b, aggregated with those of flow a, may be compared to the capacities of the corresponding elements of the network to determine whether any are exceeded.
  • the link 44 , the device 32 , the link 50 , the device 34 , the link 56 and a port of the terminal node 22 are used by the flow b.
  • the requirements for the flow b at each of these elements may be compared to the capacities of the corresponding element to determine whether any are exceeded.
  • the device 30 has maximum input bandwidth capacity of 100 Mb/s, however, the total used by flows a and b is 66 Mb/s, which is less than the maximum.
  • the device 30 has four ports, however, the flows a and b only require three ports at the device 30 .
  • the path for the flow c may be selected and its requirements aggregated with those of flows a and b.
  • the requirements for the flow c may be aggregated with those of the other flows for each of the source node 10 , the link 40 , the device 30 , the link 44 , the device 32 , the link 52 and the terminal node 24 .
  • the aggregated requirements for the flows a, b and c may be compared to the capacities of the corresponding elements of the network to determine whether any are exceeded.
  • the steps 108 and 110 may be repeated for each of the flows. In this manner, the additional requirements of each flow may be aggregated with the flows considered in previous passes through the step 108 . In a final pass through the step 110 , the aggregated requirements for all of the flows to be supported by the design may be compared to the capacities of the corresponding elements of the network to determine whether any are exceeded.
  • the aggregated bandwidth requirement for the device 32 is 99.5 Mb/s. This includes 33 Mb/s for the flow b, 33 Mb/s for the flow c, 0.5 Mb/s for the flow e and 33 Mb/s for the flow f, resulting in a sum of 99.5 Mb/s.
  • these flows require four ports at the device 32 , two for entering flows and two for exiting flows.
  • the maximum bandwidth capacity for the device 32 is 100 Mb/s and it has four ports. Accordingly, neither the bandwidth capacity, nor port number capacity of the device 32 is exceeded.
  • the method 100 may terminate with a positive result after the final pass through the step 110 .
  • If, instead, the flow e required 10 Mb/s, the aggregated bandwidth requirements for the device 32 would include 33 Mb/s for the flow b, 33 Mb/s for the flow c, 10 Mb/s for the flow e and 33 Mb/s for the flow f, resulting in a sum of 109 Mb/s. This exceeds the maximum bandwidth available for the device 32 , which is 100 Mb/s. Accordingly, a determination in the step 110 may be that the bandwidth capacity of the device 32 is exceeded. In response, the design for the interconnect fabric may be modified in order to increase its bandwidth capacity or the flow requirements relaxed in order to reduce the bandwidth requirements.
  • the interconnect devices 30 - 34 are switches. Accordingly, communications for a flow that passes through one of these devices are passed from an entry port of the device to a specified exit port of the device.
  • the bandwidth requirements for the flow may be aggregated in the step 108 along with other flows at the same input and exit ports to determine whether the maximum bandwidth capacity of either the input or exit port is exceeded.
  • bandwidth requirements for the flow may be aggregated with all flows that enter the interconnect device to determine whether the maximum bandwidth capacity of the device is exceeded.
  • communications for a flow that enters a port of the device may be repeated at all other ports of the device, not just a specified exit port as in the case of switches.
  • bandwidth consumed at one port to receive communications is also consumed at each other port in order to retransmit the communications.
  • the bandwidth requirement for a flow entering such a device is aggregated along with the bandwidth requirements for all the other flows entering the device to determine whether the bandwidth capacity of any port is exceeded. This means that the lowest bandwidth capacity among the ports of the device determines the maximum bandwidth of the device itself.
  • a communication received at any port of a connected device may be repeated at each other port of that device and at each port of the other connected devices.
  • bandwidth consumed at any port of a connected group of hub or repeater devices may also be consumed at each other port of the connected group of devices in order to repeat the communication.
  • Elements other than a hub or a repeater such as a switch, a source node or a terminal node, may be connected to a port of a hub or a repeater.
  • communications between those elements and the hub or repeater may be repeated at each other port of the hub or repeater and at each port of any other connected hub or repeater.
  • bandwidth consumed at any port connected to a hub or repeater may also be consumed at each other port of the connected group of devices in order to repeat the communication.
  • the term “hub connected” refers to any hub or other network element connected to a hub for which communications repeated by the hub consume bandwidth. This includes links directly connected to a port of a hub or repeater and interconnect devices directly connected to such links.
  • Hub connected components can be said to be within the same “domain” where communications are repeated to each such hub connected component.
  • when a path identified in step 106 utilizes any hub connected component, all of the devices in the same domain may also be impacted by the flow.
  • bandwidth requirements for a flow at a hub connected component may be aggregated with the bandwidth requirements for flows at other hub connected components in the same domain in order to determine in step 110 whether the bandwidth capacity of any such component is exceeded.
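Because a hub or repeater retransmits what it receives on every other port, the capacity check for a hub domain differs from the per-port check used for switches: every flow that touches any component of the domain counts against every port in the domain, so the binding limit is the smallest port capacity. A minimal sketch of that check follows; the function name and the numbers are illustrative only, not part of the patent.

    def check_hub_domain(domain_flow_bandwidths, domain_port_capacities):
        """Aggregate every flow entering the hub domain and compare the total
        against the smallest port capacity in the domain, since hub traffic is
        repeated on every port of every hub connected component."""
        return sum(domain_flow_bandwidths) <= min(domain_port_capacities)

    # Three 30 Mb/s flows against 100 Mb/s ports: the domain can carry them.
    print(check_hub_domain([30.0, 30.0, 30.0], [100.0, 100.0, 100.0]))   # True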
  • FIG. 4 illustrates an exemplary design for an interconnect fabric including switches 130 - 138 and hubs 140 - 144 . More particularly, a source node 110 is connected to the switch 130 by a link 150 .
  • the switch 130 is connected to a terminal node 120 by a link 152 and to a terminal node 122 by a link 154 .
  • the switch 132 is connected to the source node 110 by a link 156 and to a source node 112 by a link 158 .
  • the hub 140 is connected to the switch 132 by a link 160 , to the switch 134 by a link 162 and to the hub 142 by a link 164 .
  • the switch 134 is also connected to the terminal node 120 by a link 166 and to the terminal node 122 by a link 168 .
  • the switch 136 is connected to the source node 112 by a link 170 , to the source node 114 by a link 172 and to the hub 144 by a link 174 .
  • the hub 144 is also connected to the source node 110 by a link 176 and to the hub 142 by a link 178 .
  • the switch 138 is connected to the source node 114 by a link 180 , to the hub 142 by a link 182 and to the terminal node 124 by a link 184 .
  • a path for a flow passes from the source node 110 to the terminal node 120 via the link 156 , the switch 132 , the link 160 , the hub 140 , the link 162 , the switch 134 and the link 166 .
  • the switch 132 is connected to the hub 140 by the link 160
  • the path includes at least one hub connected component (other hub connected components in the path include the switch 134 and the link 162 ).
  • all of the other hub connected components in the same domain as the hub 140 may be identified. Once such hub connected components are identified, their associated flows can be identified so that their bandwidth requirements may be aggregated appropriately when the step 108 is performed for each such flow.
  • FIG. 5 illustrates a method 200 for identifying hub connected components in a design for an interconnect fabric in accordance with an aspect of the present invention.
  • a hub connected component is encountered in a path between a source node and a terminal node. This may occur, for example, during the step 106 of the method 100 (FIG. 1).
  • the first hub connected component encountered may be switch 132 .
  • FIG. 6 illustrates an exemplary tree data structure 250 that may be formed by the method 200 of FIG. 5.
  • the data structure 250 may be stored, for example, in computer memory.
  • the switch 132 may be initially added to the data structure 250 .
  • a step 206 the connections of the hub connected component added to the tree 250 are searched for other hub connected components in the same domain.
  • the switch 132 is connected to the hub 140 .
  • the hub 140 may be identified in the step 204 as another hub connected component in the same domain.
  • the step 202 is repeated during which the newly identified component may be added to the tree data structure 250 .
  • the hub 140 may be added to the tree 250 .
  • Hub connected links may also be identified in the tree 250 as shown in FIG. 6 by the link 160 .
  • the tree 250 may be formed with hub connected components other than hubs being one level lower than a directly-connected hub. Accordingly, in the example, the hub 140 is inserted into the tree 250 one level higher than the switch 132 .
  • the steps 204 and 206 search the interconnect fabric design to determine whether there are any additional components connected to the hub. If any such devices are found, they are added to the tree, branching from that hub and one level down. If any of those devices are hubs, the interconnect fabric design is searched again for each such hub to determine whether there are any additional components connected to each of those hubs. Then, any such devices found are added to the tree branching from the connected hub. This process may be referred to as a depth-first search which continues until all the hub connected components in the same domain are identified and added to the tree.
  • the switch 134 may be added one level down from the hub 140 in a next pass through the steps 204 and 206 . This is shown in FIG. 6. Then, the hub 142 may be added one level down from the hub 140 . Since there are no other components connected to the hub 140 , the search then moves down to the hub 142 . In the next passes through the steps 204 and 206 , the switch 138 and hub 144 are added to the tree 250 since they are connected to the hub 142 . A search for components connected to the hub 144 results in locating the switch 136 and the node 110 since they are connected to the hub 144 . Because none of the elements in the level below the hub 144 is a hub, the search of the domain has been exhausted.
  • the interconnect fabric design may be searched to determine the extent of the domain. This may be accomplished by performing a depth-first search, as explained above or by another technique, such as a union-find algorithm.
  • the results identify all of the hub connected components in the domain for which the bandwidth requirements may be aggregated in the step 108 of FIG. 1 in order to determine whether the bandwidth capacity of any such component is exceeded.
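The search of method 200 is essentially a depth-first traversal that records, for each hub, the components found one level below it. The sketch below is one possible reading of it: the adjacency list loosely follows FIG. 4, the names are invented, and the parent/child orientation simply follows discovery order from the starting component rather than reproducing the exact layout of FIG. 6.

    def hub_domain_tree(start, adjacency, hubs):
        """Walk outward from a hub connected component, expanding hubs as they
        are found, and return the domain as a {parent: [children]} tree."""
        tree, seen, stack = {}, {start}, [start]
        while stack:
            component = stack.pop()
            if component != start and component not in hubs:
                continue            # only hubs (and the starting component) are expanded
            children = [n for n in adjacency.get(component, []) if n not in seen]
            tree[component] = children
            seen.update(children)
            stack.extend(children)
        return tree

    # Toy adjacency loosely following FIG. 4 (reference numerals used as names).
    adjacency = {
        "switch132": ["hub140"],
        "hub140": ["switch132", "switch134", "hub142"],
        "hub142": ["hub140", "hub144", "switch138"],
        "hub144": ["hub142", "switch136", "node110"],
    }
    print(hub_domain_tree("switch132", adjacency, hubs={"hub140", "hub142", "hub144"}))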
  • the method 100 of verifying a design for an interconnect fabric may be used to evaluate the ability of the design to withstand various different failure conditions.
  • FIG. 7 illustrates a method 300 for verifying a design of an interconnect fabric under various different failure condition scenarios according to an aspect of the present invention
  • a failure scenario may be set up in a design for an interconnect fabric.
  • the design may be evaluated to determine whether it satisfies the flow requirements under the failure scenario set up in step 302 . This process may be repeated for any number of different scenarios.
  • a design for an interconnect fabric includes primary and backup paths for each flow, where the backup path is intended to support the flow in the event of a failure in the primary path.
  • the flows may be assigned to the primary path for each flow.
  • the method 100 may be used to evaluate whether the flow requirements are met by the interconnect fabric under conditions of the first scenario.
  • a network element in the primary path may be assumed to have failed.
  • one or more of the flows may be assigned to backup paths.
  • the method 100 may be used to evaluate whether the flow requirements are met by the interconnect fabric under conditions of the second scenario.
  • FIG. 8 shows an arrangement of flows for an interconnect fabric for this example.
  • a flow i forms a connection between a source node 40 and a terminal node 50
  • a flow j forms a connection between the source node 40 and a terminal node 52
  • a flow k forms a connection between the source node 40 and a terminal node 54 .
  • Flows l, m, and n, respectively, form connections from a source node 42 to the terminal nodes 50 - 54 .
  • FIG. 9 shows an exemplary design for an interconnect fabric including primary and backup paths for each of the flows of FIG. 8. More particularly, primary paths for the flows i, j and k pass through a link 326 that connects the source node 310 to a device 328 . From the device 328 , the primary path for i is through a link 330 that connects the device 328 to the terminal node 320 , the primary path for j is through a link 332 that connects the device 328 to the terminal node 322 and the primary path for k passes through a link 334 that connects the device 328 to the terminal node 324 .
  • Primary paths for the flows l , m and n pass through a link 336 that connects the source node 312 to a device 338 .
  • the primary path for l is through a link 340 that connects the device 338 to the terminal node 320
  • the primary path for m is through a link 342 that connects the device 338 to the terminal node 322
  • the primary path for n passes through a link 344 that connects the device 338 to the terminal node 324 .
  • Backup paths for the flows i, j and k pass through a link 346 that connects the source node 310 to a device 348 .
  • Backup paths for the flows l, m and n pass through a link 350 to the device 348 .
  • the backup paths for i and l are through a link 352 that connects the device 348 to the terminal node 320
  • the backup paths for j and m are through a link 354 that connects the device 348 to the terminal node 322
  • the backup paths for k and n are through a link 356 that connects the device 348 to the terminal node 324 .
  • the device may not have sufficient bandwidth capacity to support all of the flows simultaneously.
  • the device 348 preferably has sufficient bandwidth capacity to handle the flows i, j and k simultaneously (as may be needed if the device 328 should fail) or the flows l, m and n simultaneously (as may be needed if the device 338 should fail).
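The practical point is that a shared backup device such as the device 348 needs to be sized for the worst single-failure scenario rather than for the sum of all flows. The patent gives no bandwidth figures for the flows i-n, so the numbers below are assumptions chosen purely to illustrate that distinction.

    # Assumed bandwidths (Mb/s) for the two groups of flows that could fail over
    # to device 348; the 100 Mb/s capacity is likewise assumed for illustration.
    group_ijk = {"i": 30.0, "j": 30.0, "k": 30.0}   # rerouted if device 328 fails
    group_lmn = {"l": 30.0, "m": 30.0, "n": 30.0}   # rerouted if device 338 fails
    backup_capacity_mbps = 100.0

    worst_case = max(sum(group_ijk.values()), sum(group_lmn.values()))
    print(worst_case <= backup_capacity_mbps)        # True: one group at a time fits
    print(sum(group_ijk.values()) + sum(group_lmn.values()) <= backup_capacity_mbps)   # False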
  • the interconnect fabric of FIG. 9, including the primary and backup paths, may be developed by a method disclosed in co-pending U.S. application Ser. No. ______, filed Jan. 17, 2002, entitled "Reliability for Interconnect Fabrics," the contents of which are hereby incorporated by reference, or by another method.
  • the flows i-n may all be assigned to their primary paths.
  • the method 100 of FIG. 1 may be used to determine whether primary paths of the design of FIG. 9 satisfy the flow requirements for all of the flows.
  • the flows i, j and k may be assigned to their backup paths through the device 348 .
  • the device 328 has failed.
  • the method 100 may be used to determine whether this failure scenario is supported by the design of FIG. 9.
  • the flows l, m and n may be assigned to their backup paths through the device 348 .
  • the method 100 may be used to determine whether this failure scenario is supported by the design of FIG. 9.
  • the interconnect fabric may include redundant paths sufficient to support all of the flows simultaneously.
  • such an interconnect fabric may be developed by a method disclosed in co-pending U.S. application Ser. No. ______, filed Dec. 19, 2001, entitled, “Reliability for Interconnect Fabrics” or by another method.
  • one or more flows may be assigned to both a primary path for the flow and a redundant path for the flow.
  • the method 100 may be used to determine whether the design is capable of supporting such flows simultaneously.
  • the method 300 of FIG. 7 may be used to determine whether a design for an interconnect fabric supports the flow requirements under various different failure scenarios.
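Read this way, method 300 is an outer loop over failure scenarios, with the method-100 capacity check repeated for the path assignment that each scenario implies. The sketch below expresses that loop under the simplifying assumption that bandwidth is the only requirement checked; the function names, scenario labels and numbers are all illustrative.

    from collections import defaultdict

    def over_capacity(flow_bandwidths, flow_paths, capacities):
        """Aggregate each flow over its assigned path and report any elements
        whose bandwidth capacity would be exceeded."""
        load = defaultdict(float)
        for flow, bandwidth in flow_bandwidths.items():
            for element in flow_paths[flow]:
                load[element] += bandwidth
        return [e for e, used in load.items() if used > capacities.get(e, float("inf"))]

    def failing_scenarios(flow_bandwidths, scenarios, capacities):
        """`scenarios` maps a scenario name to the flow-to-path assignment used in
        that case; returns the names of scenarios that violate some capacity."""
        return [name for name, paths in scenarios.items()
                if over_capacity(flow_bandwidths, paths, capacities)]

    # Illustrative only: two flows, a no-failure assignment and one failover assignment.
    flows = {"i": 30.0, "l": 30.0}
    scenarios = {
        "all primary": {"i": ["device328"], "l": ["device338"]},
        "device 328 failed": {"i": ["device348"], "l": ["device338"]},
    }
    print(failing_scenarios(flows, scenarios,
                            {"device328": 100.0, "device338": 100.0, "device348": 100.0}))   # []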
  • FIG. 10 shows a system having a fabric design verification tool 400 that may employ the method 100 (and the methods 200 and 300 ) to verify a design for an interconnect fabric.
  • the fabric design verification tool 400 may be implemented in computer software and/or hardware to perform its functions.
  • Design information 422 in one embodiment includes a list of hosts (source nodes) and devices (terminal nodes) 410 , an interconnect design specification 412 , a set of flow requirements data 414 , a set of port availability data 416 and a set of bandwidth data 418 .
  • the design information 422 may be implemented as an information store, such as a file or set of files or a database, etc.
  • the list of hosts and devices 410 may specify the hosts and devices which are to be interconnected by an interconnect fabric design 412 . This list 410 may be obtained in step 102 of FIG. 1.
  • the interconnect fabric design specification 412 may specify the interconnect fabric design to be verified.
  • the design specification 412 may be obtained in the step 104 of FIG. 1.
  • the flow requirements data 414 may specify the desired flow requirements for the interconnect fabric design 412 .
  • the desired flow requirements may include bandwidth requirements for each pairing of the source and terminal nodes and may be obtained in the step 102 of FIG. 1.
  • the port availability data 416 may specify the number of communication ports available on each source node and each terminal node and each available interconnect device.
  • the bandwidth data 418 may specify the bandwidth of each host and device port and each type of fabric node and link.
  • the bandwidth data may also specify maximum bandwidth for entire interconnect devices.
  • Verification result 420 generated by the fabric design verification tool 400 may include an indication as to whether or not the design 412 satisfies the flow requirements 414 .
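As a rough picture of how the pieces of FIG. 10 fit together, the design information 422 can be bundled into one structure and handed to a verification routine that produces the result 420. The class and function below are invented stand-ins, and this sketch checks only bandwidth; a fuller tool would also check port counts against the port availability data 416 and report which elements are over capacity rather than a single boolean.

    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class DesignInformation:
        """Illustrative stand-in for the design information 422 of FIG. 10."""
        hosts_and_devices: list        # list 410
        design_links: list             # design specification 412, as (link, end1, end2)
        flow_requirements: dict        # 414: flow -> (source, terminal, bandwidth in Mb/s)
        port_availability: dict = field(default_factory=dict)    # 416 (not used in this sketch)
        bandwidth_data: dict = field(default_factory=dict)        # 418: element -> capacity

    def verify_design(info: DesignInformation, flow_paths: dict) -> bool:
        """Tiny sketch of the verification tool 400: given a path per flow,
        return True when no element's bandwidth capacity is exceeded."""
        load = defaultdict(float)
        for flow, (_source, _terminal, bandwidth) in info.flow_requirements.items():
            for element in flow_paths[flow]:
                load[element] += bandwidth
        return all(load[e] <= info.bandwidth_data.get(e, float("inf")) for e in load)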

Abstract

A technique for verifying an interconnect fabric design for interconnecting a plurality of network nodes. A design for the interconnect fabric specifies an arrangement of elements of the fabric and flow requirements among the network nodes. The invention programmatically verifies that the flow requirements are satisfied by the design and that the design does not violate constraints on the elements, such as available bandwidth or number of ports. This may also include determining whether the network can continue to satisfy the flow requirements in the event of one or more failures of elements of the interconnect fabric.

Description

    FIELD OF THE INVENTION
  • The present invention pertains to the field of networks. More particularly, this invention relates to verification of designs for networks. [0001]
  • BACKGROUND OF THE INVENTION
  • An interconnect fabric provides for communication among a set of nodes in a network. Communications originate within the network at a source node and terminate at a terminal node. Thus, a wide variety of networks may be viewed as a set of source nodes that communicate with a set of terminal nodes via an interconnect fabric. For example, a storage area network may be arranged as a set of computers as source nodes which are connected to a set of storage devices as terminal nodes via an interconnect fabric that includes communication links and devices such as hubs, routers, switches, etc. Devices such as hubs, routers, switches, etc., are hereinafter referred to as interconnect devices. Depending on the circumstances, a node may assume the role of source node with respect to some communications and of terminal node for other communications. [0002]
  • The communication requirements of an interconnect fabric may be characterized in terms of a set of flow requirements. A typical set of flow requirements specifies the required communication bandwidth from each source node to each terminal node. The design of an interconnect fabric usually involves selecting the appropriate arrangement of physical communication links and interconnect devices and related components that will meet the flow requirements. [0003]
  • Once a design of an interconnect fabric has been obtained, it may be desired to verify that the design actually meets the communication requirements. Prior methods for verifying an interconnect fabric design may be based on manual techniques. Unfortunately, such techniques are usually error prone and time-consuming. Other techniques include simulation of the network design. Simulations, however, can also be time-consuming to set up and to run since they generally require simulation of the network design and of a synthetic load on the network. [0004]
  • Therefore, what is needed is an improved technique for verifying the design of a network. It is to these ends that the present invention is directed. [0005]
  • SUMMARY OF THE INVENTION
  • A technique is disclosed for verifying an interconnect fabric design for interconnecting a plurality of network nodes. A design for the interconnect fabric specifies an arrangement of elements of the fabric and flow requirements among the network nodes. The invention programmatically verifies the design. This may include determining whether the flow requirements are satisfied by the design and whether the design violates constraints on the elements, such as bandwidth capacity and number of available ports. This may also include determining whether the network can continue to satisfy the flow requirements in the event of one or more failures of elements of the interconnect fabric. [0006]
  • In one aspect of the invention, a computer implemented method is provided for verifying a design for an interconnect fabric. The design includes an arrangement of interconnect elements for interconnecting a plurality of network nodes. The design also has requirements for a plurality of flows among the network nodes. Each of the plurality of flows is associated with a path for the flow through the interconnect fabric. For each interconnect element in each path, requirements (e.g., bandwidth or a number of ports) associated with each of the corresponding flows are aggregated. A determination is made as to whether the aggregated requirements exceed a capacity of the interconnect element. [0007]
  • In another aspect of the present invention, a system is provided for verifying a design for an interconnect fabric. A set of design information includes requirements for a plurality of flows and a design specification. Each of the plurality of flows is associated with a path for the flow through the interconnect fabric. For each interconnect element in each path, a fabric design verification tool aggregates requirements (e.g., bandwidth or a number of ports) associated with each of the corresponding flows and determines whether the aggregated requirements exceed a capacity of the interconnect element. [0008]
  • The interconnect elements may include interconnect devices and links. The interconnect devices may be selected from the group consisting of switches and hubs. When the interconnect devices include a hub, the extent of a domain of hub connected components may be identified. Identifying the extent of the domain of hub connected components may include a depth first search of the interconnect fabric for the hub connected components and may include constructing a tree data structure wherein a hub occupies a position in the tree and other interconnect elements connected to the hub occupy positions in the tree one level down from the hub. [0009]
  • Requirements of ports may also be aggregated for each of the plurality of flows and a determination made as to whether a number of available ports of one or more of the interconnect elements is exceeded by the aggregated requirements of ports. Whether a flow corresponds to a valid path through the interconnect fabric may be determined. A valid path starts at a source node for the flow, terminates at an end node for the flow and passes through a contiguous subset of the interconnect elements. [0010]
  • A flow may be assigned to a primary path in the design and then the flow may be assigned to a backup path in the design. The backup path is intended to support the flow in the event of a failure in the primary path. Thus, the interconnect fabric may be evaluated to determine whether it supports the requirements of the flows under various different failure scenarios. [0011]
  • Other features and advantages of the present invention will be apparent from the detailed description that follows.[0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is described with respect to particular exemplary embodiments thereof and reference is accordingly made to the drawings in which: [0013]
  • FIG. 1 shows a method for verifying a design of an interconnect fabric according to an aspect of the present invention; [0014]
  • FIG. 2 shows an arrangement of flows in an exemplary interconnect fabric design; [0015]
  • FIG. 3 shows a design specification for an exemplary interconnect fabric design; [0016]
  • FIG. 4 illustrates an exemplary design for an interconnect fabric including switches and hubs; [0017]
  • FIG. 5 illustrates a method for identifying hub connected components in a design for an interconnect fabric according to an aspect of the present invention; [0018]
  • FIG. 6 illustrates an exemplary tree data structure that may be formed by the method of FIG. 5; [0019]
  • FIG. 7 illustrates a method for verifying a design of an interconnect fabric under various different failure condition scenarios according to an aspect of the present invention; [0020]
  • FIG. 8 shows an arrangement of flows for an exemplary interconnect fabric design; [0021]
  • FIG. 9 shows an exemplary design for an interconnect fabric including primary and backup paths for each of the flows of FIG. 8; and [0022]
  • FIG. 10 shows a system having a fabric design verification tool that may be used to verify a design for an interconnect fabric in accordance with an aspect of the present invention.[0023]
  • DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
  • FIG. 1 shows a [0024] method 100 for verifying a design of an interconnect fabric according to an aspect of the present invention. Requirements for the design may be referred to as flow requirements. The flow requirements may include, for example, source and terminal nodes for communication flows and required communication bandwidth for the flows. The interconnect fabric design specifies an arrangement of elements of the fabric, such as links and interconnect devices, which is intended to satisfy the flow requirements. The invention programmatically verifies the design. This may include determining whether the flow requirements are satisfied by the design and whether the design violates constraints on the elements, such as bandwidth capacity and number of available ports.
  • At [0025] step 102, a set of network nodes, such as source and terminal nodes, that are interconnected by the interconnect fabric design are determined. In addition, flow requirements for the fabric are determined. Table 1 shows an example set of flow requirements for an interconnect fabric design.
    TABLE 1
                      Terminal    Terminal    Terminal
                      Node 20     Node 22     Node 24
    Source Node 10       a           b           c
    Source Node 12       d           e           f
    Source Node 14       —           g           h
  • The flow requirements in this example specify three source nodes (source nodes [0026] 10-14 in the figures below) and three terminal nodes (terminal nodes 20-24 in the figures below). For the interconnect fabric design to meet the flow requirements, it must contain communication paths between all pairs of the source nodes 10-14 and terminal nodes 20-24 having positive flow requirements and must have sufficient bandwidth to support all of the flow requirements simultaneously.
  • In one embodiment, the source nodes [0027] 10-14 are host computers and terminal nodes 20-24 are storage devices and the bandwidth values for flows a-h are numbers expressed in units of megabits per second. Thus, the interconnect fabric design may be for a storage area network.
  • In other embodiments, there may be multiple flow requirements between a given source and terminal node pair. In such embodiments, the cells of Table 1 may contain a list of two or more entries. [0028]
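For reference, the flow requirements of Table 1 can also be pictured programmatically as a mapping from each flow to its source node, terminal node and required bandwidth. The sketch below is purely illustrative: the names echo the reference numerals of the example, and the bandwidth figures anticipate the 33 Mb/s and 0.5 Mb/s assumptions made later in the description rather than anything stated in Table 1 itself.

    # Illustrative encoding of the Table 1 flow requirements (not part of the patent).
    # Each flow maps to (source node, terminal node, required bandwidth in Mb/s).
    FLOW_REQUIREMENTS = {
        "a": ("node10", "node20", 33.0),
        "b": ("node10", "node22", 33.0),
        "c": ("node10", "node24", 33.0),
        "d": ("node12", "node20", 33.0),
        "e": ("node12", "node22", 0.5),
        "f": ("node12", "node24", 33.0),
        "g": ("node14", "node22", 33.0),
        "h": ("node14", "node24", 33.0),
    }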
  • FIG. 2 shows an arrangement of flows in the interconnect fabric design obtained at [0029] step 102 for this example. Accordingly, a flow a forms a connection between the source node 10 and the terminal node 20, a flow b forms a connection between the source node 10 and the terminal node 22 , and a flow c forms a connection between the source node 10 and the terminal node 24. Similarly, flows d, e, and f, respectively, form connections from the source node 12 to the terminal nodes 20-24 and flows g and h, respectively, form connections from the source node 14 to the terminal nodes 22-24.
  • Because the set of nodes and the flow requirements are the basic constraints on the design for the interconnect fabric, they may have been used as a starting point for the design which is to be verified by the present invention. Accordingly, they will generally be readily available. For example, U.S. application Ser. No. 09/707,227, filed Nov. 16, 2000, the contents of which are hereby incorporated by reference, discloses a technique for designing interconnect fabrics using a set of nodes and flow requirements as a starting point. It will be apparent, however, the present technique may be used to verify interconnect fabric designs obtained by other techniques, such as manual or other methods. Further, the set of nodes and flow requirements may be obtained in other ways. For example, the set of nodes may be obtained from the design itself. Also, the present invention may be used to verify whether an interconnect fabric initially designed to support one set of flow requirements will support a different set of flow requirements. For example, it may be desired to determine whether an existing design will meet the requirements of a new application. [0030]
  • In a [0031] step 104, a specification of the interconnect fabric design which is to be verified by the present invention is obtained. Typically, the design specifies a set of interconnect devices and communication links. The devices may include for example, hubs, routers, switches, and so forth. The links form physical connections among the nodes and the interconnect devices. These may include, for example, fiber optic links, fibre channel links, wire-based links, and links such as SCSI, as well as wireless links.
  • FIG. 3 shows a design specification for the example flow requirements. The design of FIG. 3 may be developed by the technique of U.S. application Ser. No. 09/707,227, mentioned above, or by another technique. As shown in FIG. 3, [0032] devices 30, 32, and 34 and a set of links 40-58 interconnect the nodes 10-14 and 20-24. More particularly, flows a, b and c from the source node 10 are merged and connected to the device 30 by a link 40. The flow a is connected between the device 30 and the terminal node 20 by a link 42. The flows b and c from the device 30 are merged and connected to the device 32 by a link 44. The flow d from the source node 12 is connected to the terminal node 20 by a link 46. The flows e and f from the source node 12 are merged and connected to the device 32 by a link 48. The flows b and e from the device 32 are merged and connected to the device 34 by a link 50. The flows c and f from the device 32 are merged and connected to the terminal node 24 by a link 52. The flow g from the source node 14 is connected to the device 34 by a link 54. The flows b, e and g from the device 34 are merged and connected to the terminal node 22 by a link 56. The flow h from the source node 14 is connected to the terminal node 22 by a link 58. Rather than being represented graphically, as in FIG. 3, the design specification may be represented in other ways. For example, the design specification may be in the form of a list including elements and connections between the elements.
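The list form mentioned at the end of the preceding paragraph might, for the FIG. 3 design, look something like the sketch below. The identifiers simply echo the reference numerals of the figure and are not defined by the patent; each entry names a link and its two endpoints.

    # Illustrative list-form specification of the FIG. 3 design (names are invented).
    LINKS = [
        ("link40", "node10", "device30"),
        ("link42", "device30", "node20"),
        ("link44", "device30", "device32"),
        ("link46", "node12", "node20"),
        ("link48", "node12", "device32"),
        ("link50", "device32", "device34"),
        ("link52", "device32", "node24"),
        ("link54", "node14", "device34"),
        ("link56", "device34", "node22"),
        ("link58", "node14", "node22"),
    ]
    DEVICES = {"device30", "device32", "device34"}   # the switches of this example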
  • In a [0033] step 106, each flow included in the flow requirements obtained in the step 102 is associated with a path through the interconnect fabric. These associations of flows to paths may be specified by the design specification. Alternately, these associations may be developed in step 106 by comparing each flow to the design for the interconnect fabric and identifying a path through the fabric whose end points match those of the flow.
  • In some cases, there may be more than one possible path for the flow. In that case, the flow may be assigned to one such path and an attempt made to verify the design based on that assignment (steps [0034] 108-110, discussed below). If the design cannot be verified, the flow may be assigned to another possible path. Flows may be assigned to new paths until the design can be verified or all the possible paths for all flows have been tried unsuccessfully.
  • To be a valid path for a flow, the path should start at the source node for the flow, terminate at the end node for the flow and pass through a contiguous subset of the links and devices identified in the [0035] step 104. If a valid path cannot be identified for a flow in step 106, this indicates that the design will not meet the flow requirements. If the design is rejected in step 106 because it does not include a valid path for each flow, it may then be modified to add one or more valid paths as needed or a new design may be selected.
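The validity test described above has a direct programmatic reading: a path is acceptable for a flow when it starts at the flow's source node, ends at its terminal node, and every consecutive pair of elements on it is actually joined in the design. The function below is one minimal sketch of that test, with invented names; it treats a path as a sequence of nodes and devices and leaves the connecting links implicit.

    def is_valid_path(path, source, terminal, links):
        """Return True when `path` starts at `source`, ends at `terminal`, and
        every consecutive pair of elements is joined by a link in `links`."""
        link_set = {frozenset(pair) for pair in links}
        if not path or path[0] != source or path[-1] != terminal:
            return False
        return all(frozenset((a, b)) in link_set for a, b in zip(path, path[1:]))

    # Flow d of FIG. 3 runs directly from node 12 to node 20 over link 46.
    print(is_valid_path(["node12", "node20"], "node12", "node20",
                        [("node12", "node20")]))   # True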
  • In the example, each of the flows a-h is associated with a corresponding path through the interconnect fabric. Thus, flow a is associated with a path from the [0036] source node 10, through link 40, device 30 and link 42, terminating at terminal node 20. Flow b is associated with a path from the source node 10, through link 40, device 30, link 44, device 32, link 50, device 34, and link 56, terminating at terminal node 22. Flow c is associated with a path from the source node 10, through link 40, device 30, link 44, device 32 and link 52, terminating at terminal node 24. Flow d is associated with a path from the source node 12, through link 46 and terminating at terminal node 20. Flow e is associated with a path from the source node 12, through link 48, device 32, link 50, device 34 and link 56, terminating at terminal node 22. Flow f is associated with a path from the source node 12, through link 48, device 32 and link 52, terminating at terminal node 24. Flow g is associated with a path from the source node 14, through link 54, device 34 and link 56, terminating at terminal node 22. Flow h is associated with a path from the source node 14, through link 58 and terminating at terminal node 22.
  • In [0037] steps 108 and 110, the paths identified in the step 106 are evaluated to determine whether the flow requirements for the associated flows are met by the design. More particularly, in step 108, a path may be selected for evaluation. Elements of the selected path are then identified. These elements may include, for example, each port, interconnect device and link encountered in the path. For each such element, the requirements for the flow that corresponds to the path through that element are aggregated along with requirements for other flows through that same element. These requirements may include, for example, the bandwidth and the number of ports required for the flows. For each selected path, its flow requirements are aggregated with those of other paths that were evaluated prior to the selected path. Then, in step 110, a determination is made as to whether the capacity of each element is exceeded by the aggregated requirements. This process is repeated for each flow and for each element of each flow.
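Steps 108 and 110 reduce to a per-element accumulation followed by a capacity comparison. One minimal way to express that check, treating each path as the list of elements it traverses and tracking bandwidth only (port counts would be accumulated analogously), is sketched below; the function and the toy data are illustrative, not part of the patent.

    from collections import defaultdict

    def over_capacity(flow_bandwidths, flow_paths, capacities):
        """flow_bandwidths: {flow: required Mb/s}; flow_paths: {flow: [elements]};
        capacities: {element: maximum Mb/s}.  Returns the elements whose aggregated
        requirement exceeds their capacity (an empty list means the check passes)."""
        load = defaultdict(float)
        for flow, bandwidth in flow_bandwidths.items():   # step 108: aggregate per element
            for element in flow_paths[flow]:
                load[element] += bandwidth
        return [e for e, used in load.items()             # step 110: compare to capacity
                if used > capacities.get(e, float("inf"))]

    # Toy check using two of the example flows (b and c share link 40 and device 30).
    print(over_capacity(
        {"b": 33.0, "c": 33.0},
        {"b": ["link40", "device30", "link44", "device32", "link50", "device34", "link56"],
         "c": ["link40", "device30", "link44", "device32", "link52"]},
        {"link40": 100.0, "device30": 100.0, "device32": 100.0}))   # [] -> nothing exceeded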
  • In the example of FIG. 3, assume that each of interconnect devices [0038] 30-34 is a switch having a maximum bandwidth capacity of 100 Mb/s. Assume also that each of the interconnect devices 30-34 has four available ports and each port of the devices 30-34 has a maximum bandwidth capacity of 100 Mb/s. In addition, assume that each port of each of the source nodes 10-14 and each port of each of the terminal nodes 20-24 and each of the links 40-58 also has maximum bandwidth capacity of 100 Mb/s. Assume also that each of flows a-d and flows f-h require a bandwidth of 33 Mb/s and that flow e requires 0.5 Mb/s.
  • [0039] In a first pass through the step 108, the path for the flow a may be selected. The bandwidth requirement for the flow a may then be associated with each of a port at the source node 10, the link 40, the device 30, the link 42 and a port at the terminal node 20. For example, this information may be saved in computer memory. In addition, the requirement of one port at the node 10 (shared by the flows a, b and c), two ports (an entry port and an exit port) at the device 30 and one port at the node 20 may be recorded. Then, in the step 110, a determination may be made as to whether any of the bandwidth capacities of these elements is exceeded by the flow a and whether the number of available ports at any of these elements is exceeded by the flow a.
  • [0040] In a next pass through the step 108, the path for the flow b may be selected. Because the flow b uses the same port at the source node 10 as the flow a, the bandwidth requirements for both flows are aggregated. The sum of these flow requirements may then be saved in the step 108 for comparison with the capacity of that port at the node 10 in the step 110. Similarly, the flow b also uses the link 40 and the same entry port at the device 30 that is used by the flow a. Thus, the bandwidth requirements of the flow b for each of these elements can be aggregated with those of the flow a. However, the flow b uses a different exit port at the device 30. Thus, the requirement of a third port at the device 30 may be recorded. Then, in the step 110, the requirements of the flow b, aggregated with those of the flow a, may be compared to the capacities of the corresponding elements of the network to determine whether any are exceeded.
  • [0041] While not used by the flow a, the link 44, the device 32, the link 50, the device 34, the link 56 and a port of the terminal node 22 are used by the flow b. Thus, in the step 110, the requirements for the flow b at each of these elements may be compared to the capacities of the corresponding elements to determine whether any are exceeded.
  • [0042] In this example, none of the capacities is exceeded by the flows a and b. For example, the device 30 has a maximum input bandwidth capacity of 100 Mb/s, whereas the total used by the flows a and b is 66 Mb/s, which is less than the maximum. As another example, the device 30 has four ports, and the flows a and b only require three ports at the device 30.
  • [0043] In another pass through the step 108, the path for the flow c may be selected and its requirements aggregated with those of the flows a and b. Thus, the requirements for the flow c may be aggregated with those of the other flows for each of the source node 10, the link 40, the device 30, the link 44, the device 32, the link 52 and the terminal node 24. Then, in the step 110, the aggregated requirements for the flows a, b and c may be compared to the capacities of the corresponding elements of the network to determine whether any are exceeded.
  • [0044] The steps 108 and 110 may be repeated for each of the flows. In this manner, the additional requirements of each flow may be aggregated with those of the flows considered in previous passes through the step 108. In a final pass through the step 110, the aggregated requirements for all of the flows to be supported by the design may be compared to the capacities of the corresponding elements of the network to determine whether any are exceeded.
  • [0045] In the example, none of the capacities of the elements of the network is exceeded by the requirements of the flows a-h. For example, the aggregated bandwidth requirement for the device 32 is 99.5 Mb/s. This includes 33 Mb/s for the flow b, 33 Mb/s for the flow c, 0.5 Mb/s for the flow e and 33 Mb/s for the flow f, resulting in a sum of 99.5 Mb/s. In addition, these flows require four ports at the device 32, two for entering flows and two for exiting flows. The maximum bandwidth capacity of the device 32 is 100 Mb/s and it has four ports. Accordingly, neither the bandwidth capacity nor the port capacity of the device 32 is exceeded. Thus, the method 100 may terminate with a positive result after the final pass through the step 110.
  • [0046] In another example, assume that the flow e requires 10 Mb/s of bandwidth, rather than the 0.5 Mb/s previously assumed. In this case, the aggregated bandwidth requirements for the device 32 include 33 Mb/s for the flow b, 33 Mb/s for the flow c, 10 Mb/s for the flow e and 33 Mb/s for the flow f, resulting in a sum of 109 Mb/s. This exceeds the maximum bandwidth available for the device 32, which is 100 Mb/s. Accordingly, a determination in the step 110 may be that the bandwidth capacity of the device 32 is exceeded. In response, the design for the interconnect fabric may be modified in order to increase its bandwidth capacity, or the flow requirements may be relaxed in order to reduce the bandwidth requirements.
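  • With the numbers of these two examples, the check at the device 32 reduces to a simple comparison (illustrative snippet):

    flows_at_dev32 = {"b": 33.0, "c": 33.0, "e": 0.5, "f": 33.0}   # flows through device 32
    capacity_dev32 = 100.0

    print(sum(flows_at_dev32.values()) <= capacity_dev32)   # 99.5 Mb/s  -> True, within capacity

    flows_at_dev32["e"] = 10.0                               # second example: flow e needs 10 Mb/s
    print(sum(flows_at_dev32.values()) <= capacity_dev32)   # 109.0 Mb/s -> False, capacity exceeded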
  • [0047] In the example above, the interconnect devices 30-34 are switches. Accordingly, communications for a flow that passes through one of these devices are passed from an entry port of the device to a specified exit port of the device. In the step 108, the bandwidth requirements for the flow may be aggregated with those of other flows that use the same entry and exit ports to determine whether the maximum bandwidth capacity of either the entry port or the exit port is exceeded. In addition, the bandwidth requirements for the flow may be aggregated with those of all flows that enter the interconnect device to determine whether the maximum bandwidth capacity of the device itself is exceeded.
  • [0048] For other devices, such as hubs or repeaters, communications for a flow that enters a port of the device may be repeated at all other ports of the device, not just at a specified exit port as in the case of a switch. As a result, bandwidth consumed at one port to receive communications is also consumed at each other port in order to retransmit the communications. Accordingly, in the step 108, the bandwidth requirement for a flow entering such a device is aggregated with the bandwidth requirements for all of the other flows entering the device to determine whether the bandwidth capacity of any port is exceeded. This means that the lowest bandwidth capacity among the ports of the device determines the maximum bandwidth of the device itself.
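  • Because every flow entering a hub or repeater is retransmitted at every other port, the check can be written against the lowest port capacity of the device, as in this short sketch (a hypothetical helper, assuming the bandwidths of the flows entering the device are known):

    def hub_bandwidth_ok(entering_flow_bandwidths, port_capacities):
        """A hub or repeater repeats each entering flow at every port, so the
        total of all entering flows must fit within the smallest port capacity."""
        return sum(entering_flow_bandwidths) <= min(port_capacities)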
  • [0049] When two or more such devices are connected together, a communication received at any port of a connected device may be repeated at each other port of that device and at each port of the other connected devices. Thus, bandwidth consumed at any port of a connected group of hub or repeater devices may also be consumed at each other port of the connected group of devices in order to repeat the communication.
  • [0050] Elements other than a hub or a repeater, such as a switch, a source node or a terminal node, may be connected to a port of a hub or a repeater. In that case, communications between those elements and the hub or repeater may be repeated at each other port of the hub or repeater and at each port of any other connected hub or repeater. Thus, bandwidth consumed at any port connected to a hub or repeater may also be consumed at each other port of the connected group of devices in order to repeat the communication. As used herein, the term "hub connected" refers to any hub or other network element connected to a hub for which communications repeated by the hub consume bandwidth. This includes links directly connected to a port of a hub or repeater and interconnect devices directly connected to such links. Hub connected components can be said to be within the same "domain," in which communications are repeated to each such hub connected component. When a path identified in the step 106 utilizes any hub connected component, all of the devices in the same domain may also be impacted by the flow. Thus, in the step 108, the bandwidth requirements for a flow at a hub connected component may be aggregated with the bandwidth requirements for flows at the other hub connected components in the same domain in order to determine in the step 110 whether the bandwidth capacity of any such component is exceeded.
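  • A minimal sketch of this domain-wide aggregation, assuming the domain membership is already known (see the search described next) and using the same hypothetical data structures as the earlier sketch:

    def domain_bandwidth_ok(domain, flow_paths, flow_bandwidth, bandwidth_capacity):
        """Aggregate the bandwidth of every flow that touches any component of a
        hub-connected domain and compare it against the weakest capacity in the
        domain, since the same aggregate load applies to every component of it."""
        flows = {f for f, path in flow_paths.items() if domain & set(path)}
        total = sum(flow_bandwidth[f] for f in flows)
        return total <= min(bandwidth_capacity[c] for c in domain if c in bandwidth_capacity)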
  • [0051] FIG. 4 illustrates an exemplary design for an interconnect fabric including switches 130-138 and hubs 140-144. More particularly, a source node 110 is connected to the switch 130 by a link 150. The switch 130 is connected to a terminal node 120 by a link 152 and to a terminal node 122 by a link 154. The switch 132 is connected to the source node 110 by a link 156 and to a source node 112 by a link 158. The hub 140 is connected to the switch 132 by a link 160, to the switch 134 by a link 162 and to the hub 142 by a link 164. The switch 134 is also connected to the terminal node 120 by a link 166 and to the terminal node 122 by a link 168.
  • [0052] The switch 136 is connected to the source node 112 by a link 170, to the source node 114 by a link 172 and to the hub 144 by a link 174. The hub 144 is also connected to the source node 110 by a link 176 and to the hub 142 by a link 178. The switch 138 is connected to the source node 114 by a link 180, to the hub 142 by a link 182 and to the terminal node 124 by a link 184.
  • [0053] Assume that a path for a flow passes from the source node 110 to the terminal node 120 via the link 156, the switch 132, the link 160, the hub 140, the link 162, the switch 134 and the link 166. Returning to the step 106, assume that this path through the interconnect fabric is being associated with the flow. Because the switch 132 is connected to the hub 140 by the link 160, the path includes at least one hub connected component (other hub connected components in the path include the switch 134 and the link 162). Thus, in order to aggregate the bandwidth requirements for the appropriate other flows in the step 108, all of the other hub connected components in the same domain as the hub 140 may be identified. Once such hub connected components are identified, their associated flows can be identified so that their bandwidth requirements may be aggregated appropriately when the step 108 is performed for each such flow.
  • [0054] FIG. 5 illustrates a method 200 for identifying hub connected components in a design for an interconnect fabric in accordance with an aspect of the present invention. In step 202, a hub connected component is encountered in a path between a source node and a terminal node. This may occur, for example, during the step 106 of the method 100 (FIG. 1). In the example of FIG. 4, the first hub connected component encountered may be the switch 132.
  • [0055] In a step 204, the hub connected component is added to a tree data structure. FIG. 6 illustrates an exemplary tree data structure 250 that may be formed by the method 200 of FIG. 5. The data structure 250 may be stored, for example, in computer memory. As shown in FIG. 6, the switch 132 may be initially added to the data structure 250.
  • [0056] In a step 206, the connections of the hub connected component added to the tree 250 are searched for other hub connected components in the same domain. In the example, the switch 132 is connected to the hub 140. Accordingly, the hub 140 may be identified in the step 206 as another hub connected component in the same domain. From the step 206, the step 204 is repeated, during which the newly identified component may be added to the tree data structure 250. Thus, as shown in FIG. 6, the hub 140 may be added to the tree 250. Hub connected links may also be identified in the tree 250, as shown in FIG. 6 by the link 160.
  • [0057] The tree 250 may be formed with hub connected components other than hubs placed one level lower than a directly-connected hub. Accordingly, in the example, the hub 140 is inserted into the tree 250 one level higher than the switch 132. Once a hub is added to the tree, the following passes through the steps 204 and 206 search the interconnect fabric design to determine whether there are any additional components connected to that hub. If any such devices are found, they are added to the tree, branching from that hub one level down. If any of those devices are hubs, the interconnect fabric design is searched again for each such hub to determine whether there are any additional components connected to it, and any such devices found are added to the tree branching from the connected hub. This process may be referred to as a depth-first search, which continues until all of the hub connected components in the same domain have been identified and added to the tree.
  • [0058] In the example, after the hub 140 is added to the tree 250, the switch 134 may be added one level down from the hub 140 in a next pass through the steps 204 and 206. This is shown in FIG. 6. Then, the hub 142 may be added one level down from the hub 140. Since there are no other components connected to the hub 140, the search then moves down to the hub 142. In the next passes through the steps 204 and 206, the switch 138 and the hub 144 are added to the tree 250 since they are connected to the hub 142. A search for components connected to the hub 144 results in locating the switch 136 and the node 110, since they are connected to the hub 144. Because none of the elements in the level below the hub 144 is a hub, the search of the domain has been exhausted.
  • [0059] Thus, according to one aspect of the invention, when a hub connected component is encountered, the interconnect fabric design may be searched to determine the extent of the domain. This may be accomplished by performing a depth-first search, as explained above, or by another technique, such as a union-find algorithm. The results identify all of the hub connected components in the domain for which the bandwidth requirements may be aggregated in the step 108 of FIG. 1 in order to determine whether the bandwidth capacity of any such component is exceeded.
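  • A minimal sketch of the domain search, assuming the design is available as an adjacency map from each component to its directly connected components (a union-find structure would serve equally well; names are illustrative):

    def find_hub_domain(start, neighbors, is_hub):
        """Collect all components in the same hub-connected domain as 'start'.

        neighbors: {component: iterable of directly connected components}.
        is_hub:    predicate indicating whether a component is a hub or repeater.
        """
        domain = {start}
        # Hubs to be expanded: the starting component itself if it is a hub,
        # otherwise any hubs directly attached to it.
        pending = [start] if is_hub(start) else [n for n in neighbors.get(start, ()) if is_hub(n)]
        while pending:
            hub = pending.pop()
            domain.add(hub)
            for other in neighbors.get(hub, ()):
                if other in domain:
                    continue
                domain.add(other)               # everything attached to a hub joins the domain
                if is_hub(other):
                    pending.append(other)       # hubs extend the domain further
        return domain

  • For the design of FIG. 4, starting from the switch 132 this would gather the switch 132, the hubs 140, 142 and 144, the switches 134, 136 and 138 and the source node 110; hub connected links could be included as well by listing them in the adjacency map.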
  • [0060] In another aspect of the invention, the method 100 of verifying a design for an interconnect fabric may be used to evaluate the ability of the design to withstand various different failure conditions. FIG. 7 illustrates a method 300 for verifying a design of an interconnect fabric under various different failure condition scenarios according to an aspect of the present invention. In a step 302, a failure scenario may be set up in a design for an interconnect fabric. Then, in a step 304, the design may be evaluated to determine whether it satisfies the flow requirements under the failure scenario set up in the step 302. This process may be repeated for any number of different scenarios.
  • [0061] In an example, assume that a design for an interconnect fabric includes primary and backup paths for each flow, where the backup path is intended to support the flow in the event of a failure in the primary path. Thus, in a first failure condition scenario set up in the step 302, the flows may be assigned to the primary path for each flow. Then, in the step 304, the method 100 may be used to evaluate whether the flow requirements are met by the interconnect fabric under the conditions of the first scenario. In a second failure condition scenario set up in a next pass through the step 302, a network element in a primary path may be assumed to have failed. In that case, one or more of the flows may be assigned to backup paths. Then, in the step 304, the method 100 may be used to evaluate whether the flow requirements are met by the interconnect fabric under the conditions of the second scenario.
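  • A minimal sketch of this scenario loop, using hypothetical helper names: each scenario is simply a different assignment of flows to paths, and the same verification of the method 100 is run against each assignment.

    def verify_under_scenarios(scenarios, verify_design):
        """scenarios:     {scenario_name: {flow: path}} flow-to-path assignment
                          describing each failure case (step 302).
        verify_design:    callable implementing the method 100 for a given
                          assignment of flows to paths (step 304)."""
        return {name: verify_design(assignment)
                for name, assignment in scenarios.items()}

    # For the example of FIG. 9, the scenarios could be, for instance:
    #   "no failure":      every flow on its primary path
    #   "device 328 down": flows i, j, k on backup paths; l, m, n on primary paths
    #   "device 338 down": flows l, m, n on backup paths; i, j, k on primary paths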
  • [0062] FIG. 8 shows an arrangement of flows for an interconnect fabric for this example. A flow i forms a connection between a source node 310 and a terminal node 320, a flow j forms a connection between the source node 310 and a terminal node 322, and a flow k forms a connection between the source node 310 and a terminal node 324. Flows l, m and n, respectively, form connections from a source node 312 to the terminal nodes 320-324.
  • [0063] FIG. 9 shows an exemplary design for an interconnect fabric including primary and backup paths for each of the flows of FIG. 8. More particularly, primary paths for the flows i, j and k pass through a link 326 that connects the source node 310 to a device 328. From the device 328, the primary path for i is through a link 330 that connects the device 328 to the terminal node 320, the primary path for j is through a link 332 that connects the device 328 to the terminal node 322 and the primary path for k passes through a link 334 that connects the device 328 to the terminal node 324. Primary paths for the flows l, m and n pass through a link 336 that connects the source node 312 to a device 338. From the device 338, the primary path for l is through a link 340 that connects the device 338 to the terminal node 320, the primary path for m is through a link 342 that connects the device 338 to the terminal node 322 and the primary path for n passes through a link 344 that connects the device 338 to the terminal node 324.
  • [0064] Backup paths for the flows i, j and k pass through a link 346 that connects the source node 310 to a device 348. Backup paths for the flows l, m and n pass through a link 350 to the device 348. From the device 348, the backup paths for i and l are through a link 352 that connects the device 348 to the terminal node 320, the backup paths for j and m are through a link 354 that connects the device 348 to the terminal node 322 and the backup paths for k and n are through a link 356 that connects the device 348 to the terminal node 324.
  • [0065] Because all of the backup paths of FIG. 9 utilize the device 348, the device 348 may not have sufficient bandwidth capacity to support all of the flows simultaneously. However, the device 348 preferably has sufficient bandwidth capacity to handle the flows i, j and k simultaneously (as may be needed if the device 328 should fail) or the flows l, m and n simultaneously (as may be needed if the device 338 should fail). Accordingly, the interconnect fabric of FIG. 9, including the primary and backup paths, may be developed by a method disclosed in co-pending U.S. application Ser. No. ______, filed Jan. 17, 2002, entitled "Reliability for Interconnect Fabrics," the contents of which are hereby incorporated by reference, or by another method.
  • [0066] In the example of FIG. 9, in a first pass through the step 302 of the method 300 of FIG. 7, the flows i-n may all be assigned to their primary paths. In this scenario, it may be assumed that neither of the devices 328 and 338 has failed. Thus, in a first pass through the step 304, the method 100 of FIG. 1 may be used to determine whether the primary paths of the design of FIG. 9 satisfy the flow requirements for all of the flows. In a second pass through the step 302, the flows i, j and k may be assigned to their backup paths through the device 348. Thus, in this scenario, it may be assumed that the device 328 has failed. Then, in a second pass through the step 304, the method 100 may be used to determine whether this failure scenario is supported by the design of FIG. 9. In a third pass through the step 302, the flows l, m and n may be assigned to their backup paths through the device 348. Thus, in this scenario, it may be assumed that the device 338 has failed. Then, in a third pass through the step 304, the method 100 may be used to determine whether this failure scenario is supported by the design of FIG. 9.
  • [0067] In other embodiments, the interconnect fabric may include redundant paths sufficient to support all of the flows simultaneously. In that case, such an interconnect fabric may be developed by a method disclosed in co-pending U.S. application Ser. No. ______, filed Dec. 19, 2001, entitled "Reliability for Interconnect Fabrics," or by another method. Thus, in a scenario set up in the step 302, one or more flows may be assigned to both a primary path for the flow and a redundant path for the flow. In the step 304, the method 100 may be used to determine whether the design is capable of supporting such flows simultaneously.
  • [0068] Accordingly, the method 300 of FIG. 7 may be used to determine whether a design for an interconnect fabric supports the flow requirements under various different failure scenarios.
  • [0069] FIG. 10 shows a system having a fabric design verification tool 400 that may employ the method 100 (and the methods 200 and 300) to verify a design for an interconnect fabric. The fabric design verification tool 400 may be implemented in computer software and/or hardware to perform its functions. Design information 422 in one embodiment includes a list of hosts (source nodes) and devices (terminal nodes) 410, an interconnect design specification 412, a set of flow requirements data 414, a set of port availability data 416 and a set of bandwidth data 418. The design information 422 may be implemented as an information store, such as a file, a set of files or a database, etc.
  • [0070] The list of hosts and devices 410 may specify the hosts and devices which are to be interconnected by the interconnect fabric design 412. This list 410 may be obtained in the step 102 of FIG. 1.
  • [0071] The interconnect fabric design specification 412 may specify the interconnect fabric design to be verified. The design specification 412 may be obtained in the step 104 of FIG. 1.
  • [0072] The flow requirements data 414 may specify the desired flow requirements for the interconnect fabric design 412. The desired flow requirements may include bandwidth requirements for each pairing of the source and terminal nodes and may be obtained in the step 102 of FIG. 1.
  • [0073] The port availability data 416 may specify the number of communication ports available on each source node, each terminal node and each available interconnect device.
  • [0074] The bandwidth data 418 may specify the bandwidth of each host and device port and of each type of fabric node and link. The bandwidth data 418 may also specify the maximum bandwidth for entire interconnect devices.
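  • The design information 410-418 could be grouped into a single record; the following is a minimal sketch with hypothetical field names:

    from dataclasses import dataclass

    @dataclass
    class DesignInformation:
        """Inputs consumed by a fabric design verification tool (illustrative layout)."""
        hosts_and_devices: list    # 410: source and terminal nodes to be interconnected
        interconnect_design: dict  # 412: the interconnect fabric design to verify
        flow_requirements: dict    # 414: e.g. {(source, terminal): bandwidth in Mb/s}
        port_availability: dict    # 416: {node or interconnect device: available ports}
        bandwidth: dict            # 418: {port, node, device or link: bandwidth in Mb/s}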
  • [0075] Verification result 420 generated by the fabric design verification tool 400 may include an indication as to whether or not the design 412 satisfies the flow requirements 414.
  • [0076] The foregoing detailed description of the present invention is provided for the purposes of illustration and is not intended to be exhaustive or to limit the invention to the precise embodiment disclosed. Accordingly, the scope of the present invention is defined by the appended claims.

Claims (26)

What is claimed is:
1. A computer implemented method for verifying a design for an interconnect fabric, the design including an arrangement of interconnect elements for interconnecting a plurality of network nodes and the design having requirements for a plurality of flows among the network nodes, and for each of the plurality of flows, the method comprising associating the flow with a path for the flow through the interconnect fabric, and for each interconnect element in each path, aggregating requirements associated with each of the corresponding flows and determining whether the aggregated requirements exceed a capacity of the interconnect element.
2. The method according to claim 1, wherein the interconnect elements include interconnect devices and links.
3. The method according to claim 2, wherein the interconnect devices are selected from the group consisting of switches and hubs.
4. The method according to claim 3, wherein when the interconnect devices includes a hub, the method further comprises identifying an extent of a domain of hub connected components.
5. The method according to claim 4, wherein said identifying the extent of the domain of hub connected components comprises performing a depth first search of the interconnect fabric for the hub connected components.
6. The method according to claim 5, wherein said identifying an extent of a domain of hub connected components comprises constructing a tree data structure wherein a hub occupies a position in the tree and other interconnect elements connected to the hub occupy positions in the tree one level down from the hub.
7. The method according to claim 1, wherein the aggregated requirements include bandwidth requirements.
8. The method according to claim 7, further comprising aggregating requirements of ports for each of the plurality of flows and determining whether a number of available ports of one or more of the interconnect elements is exceeded by the aggregated requirements of ports.
9. The method according to claim 1, wherein the aggregated requirements include a number of ports.
10. The method according to claim 1, said method further comprising determining whether a flow corresponds to a valid path through the interconnect fabric, a valid path starting at a source node for the flow, terminating at an end node for the flow and passing through a contiguous subset of the interconnect elements.
11. The method according to claim 10, further comprising rejecting the design if it does not include a valid path for each flow.
12. The method according to claim 1, wherein said associating comprises assigning a flow to a primary path in the design and further comprising assigning the flow to a backup path in the design to determine whether the design has capacity for the flow in the primary path and the backup path simultaneously.
13. The method according to claim 1, wherein said associating comprises assigning a flow to a backup path for the flow in the design to determine whether the design has capacity for the flow in the backup path in event of a failure in a primary path for the flow.
14. A system for verifying a design for an interconnect fabric comprising:
a set of design information including requirements for a plurality of flows and a design specification wherein each of the plurality of flows is associated with a path for the flow through the interconnect fabric; and
a fabric design verification tool that, for each interconnect element in each path, aggregates requirements associated with each of the corresponding flows and determines whether the aggregated requirements exceed a capacity of the interconnect element.
15. The system according to claim 14, wherein the interconnect elements include interconnect devices and links.
16. The system according to claim 15, wherein the interconnect devices are selected from the group consisting of switches and hubs.
17. The system according to claim 16, wherein when the interconnect devices includes a hub, the design verification tool identifies an extent of a domain of hub connected components.
18. The system according to claim 17, wherein the design verification tool identifies the extent of the domain of hub connected components by performing a depth first search of the interconnect fabric for the hub connected components.
19. The system according to claim 18, wherein the design verification tool identifies an extent of a domain of hub connected components by constructing a tree data structure wherein a hub occupies a position in the tree and other interconnect elements connected to the hub occupy positions in the tree one level down from the hub.
20. The system according to claim 14, wherein the aggregated requirements include bandwidth requirements.
21. The system according to claim 20, wherein the design verification tool aggregates requirements of ports for each of the plurality of flows and determines whether a number of available ports of one or more of the interconnect elements is exceeded by the aggregated requirements of ports.
22. The system according to claim 14, wherein the aggregated requirements include a number of ports.
23. The system according to claim 14, wherein the design verification tool determines whether a flow corresponds to a valid path through the interconnect fabric, a valid path starting at a source node for the flow, terminating at an end node for the flow and passing through a contiguous subset of the interconnect elements.
24. The system according to claim 23, wherein the design verification tool rejects the design if it does not include a valid path for each flow.
25. The system according to claim 14, wherein the design verification tool assigns a flow to a primary path in the design and also assigns the flow to a backup path in the design to determine whether the design has capacity for the flow in the primary path and the backup path simultaneously.
26. The system according to claim 14, wherein the design verification tool assigns a flow to a backup path for the flow in the design to determine whether the design has capacity for the flow in the backup path in event of a failure in a primary path for the flow.
US10/058,258 2002-01-25 2002-01-25 Verifying interconnect fabric designs Abandoned US20030145294A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/058,258 US20030145294A1 (en) 2002-01-25 2002-01-25 Verifying interconnect fabric designs
US10/290,760 US7308494B1 (en) 2002-01-25 2002-11-08 Reprovisioning technique for an interconnect fabric design
US10/290,643 US7237020B1 (en) 2002-01-25 2002-11-08 Integer programming technique for verifying and reprovisioning an interconnect fabric design

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/058,258 US20030145294A1 (en) 2002-01-25 2002-01-25 Verifying interconnect fabric designs

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US10/290,643 Continuation-In-Part US7237020B1 (en) 2002-01-25 2002-11-08 Integer programming technique for verifying and reprovisioning an interconnect fabric design
US10/290,760 Continuation-In-Part US7308494B1 (en) 2002-01-25 2002-11-08 Reprovisioning technique for an interconnect fabric design

Publications (1)

Publication Number Publication Date
US20030145294A1 true US20030145294A1 (en) 2003-07-31

Family

ID=27609555

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/058,258 Abandoned US20030145294A1 (en) 2002-01-25 2002-01-25 Verifying interconnect fabric designs
US10/290,643 Expired - Lifetime US7237020B1 (en) 2002-01-25 2002-11-08 Integer programming technique for verifying and reprovisioning an interconnect fabric design
US10/290,760 Expired - Lifetime US7308494B1 (en) 2002-01-25 2002-11-08 Reprovisioning technique for an interconnect fabric design

Family Applications After (2)

Application Number Title Priority Date Filing Date
US10/290,643 Expired - Lifetime US7237020B1 (en) 2002-01-25 2002-11-08 Integer programming technique for verifying and reprovisioning an interconnect fabric design
US10/290,760 Expired - Lifetime US7308494B1 (en) 2002-01-25 2002-11-08 Reprovisioning technique for an interconnect fabric design

Country Status (1)

Country Link
US (3) US20030145294A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020083159A1 (en) * 2000-11-06 2002-06-27 Ward Julie A. Designing interconnect fabrics
US20030144822A1 (en) * 2002-01-31 2003-07-31 Li-Shiuan Peh Generating interconnect fabric requirements
US20040003082A1 (en) * 2002-06-28 2004-01-01 International Business Machines Corporation System and method for prevention of boot storms in a computer network
US20050119963A1 (en) * 2002-01-24 2005-06-02 Sung-Min Ko Auction method for real-time displaying bid ranking
US20060095885A1 (en) * 2004-10-30 2006-05-04 Ibm Corporation Systems and methods for storage area network design
US20080066036A1 (en) * 2004-07-12 2008-03-13 International Business Machines Corporation Chip Having Timing Analysis of Paths Performed Within the Chip During the Design Process
US8533016B1 (en) 2005-01-30 2013-09-10 Hewlett-Packard Development Company, L.P. System and method for selecting a portfolio
US10567295B2 (en) * 2018-05-17 2020-02-18 Cisco Technology, Inc. Method and system for teleprotection over segment routing-based networks

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7502839B2 (en) * 2001-09-28 2009-03-10 Hewlett-Packard Development Company, L.P. Module-building method for designing interconnect fabrics
US7673027B2 (en) * 2004-05-20 2010-03-02 Hewlett-Packard Development Company, L.P. Method and apparatus for designing multi-tier systems
US20060025984A1 (en) * 2004-08-02 2006-02-02 Microsoft Corporation Automatic validation and calibration of transaction-based performance models
US7532586B2 (en) * 2005-07-18 2009-05-12 Sbc Knowledge Ventures, L.P. Method of augmenting deployed networks
US8135603B1 (en) 2007-03-20 2012-03-13 Gordon Robert D Method for formulating a plan to secure access to limited deliverable resources
US20120066318A1 (en) * 2010-09-09 2012-03-15 I O Interconnect, Ltd. Data transmission method
US9537743B2 (en) * 2014-04-25 2017-01-03 International Business Machines Corporation Maximizing storage controller bandwidth utilization in heterogeneous storage area networks
US11722431B2 (en) * 2021-10-30 2023-08-08 Dish Network L.L.C. Gateway prioritization

Citations (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4920487A (en) * 1988-12-12 1990-04-24 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Method of up-front load balancing for local memory parallel processors
US5107489A (en) * 1989-10-30 1992-04-21 Brown Paul J Switch and its protocol for making dynamic connections
US5113496A (en) * 1987-08-04 1992-05-12 Mccalley Karl W Bus interconnection structure with redundancy linking plurality of groups of processors, with servers for each group mounted on chassis
US5138657A (en) * 1989-10-23 1992-08-11 At&T Bell Laboratories Method and apparatus for controlling a digital crossconnect system from a switching system
US5245609A (en) * 1991-01-30 1993-09-14 International Business Machines Corporation Communication network and a method of regulating the transmission of data packets in a communication network
US5307449A (en) * 1991-12-20 1994-04-26 Apple Computer, Inc. Method and apparatus for simultaneously rendering multiple scanlines
US5329619A (en) * 1992-10-30 1994-07-12 Software Ag Cooperative processing interface and communication broker for heterogeneous computing environments
US5426674A (en) * 1990-02-06 1995-06-20 Nemirovsky; Paul Method and computer system for selecting and evaluating data routes and arranging a distributed data communication network
US5524212A (en) * 1992-04-27 1996-06-04 University Of Washington Multiprocessor system with write generate method for updating cache
US5581689A (en) * 1993-12-28 1996-12-03 Nec Corporation Multi link type self healing system for communication networks
US5598532A (en) * 1993-10-21 1997-01-28 Optimal Networks Method and apparatus for optimizing computer networks
US5634011A (en) * 1992-06-18 1997-05-27 International Business Machines Corporation Distributed management communications network
US5634004A (en) * 1994-05-16 1997-05-27 Network Programs, Inc. Directly programmable distribution element
US5649105A (en) * 1992-11-10 1997-07-15 Ibm Corp. Collaborative working in a network
US5651005A (en) * 1995-08-29 1997-07-22 Microsoft Corporation System and methods for supplying continuous media data over an ATM public network
US5793362A (en) * 1995-12-04 1998-08-11 Cabletron Systems, Inc. Configurations tracking system using transition manager to evaluate votes to determine possible connections between ports in a communications network in accordance with transition tables
US5802286A (en) * 1995-05-22 1998-09-01 Bay Networks, Inc. Method and apparatus for configuring a virtual network
US5805578A (en) * 1995-10-27 1998-09-08 International Business Machines Corporation Automatic reconfiguration of multipoint communication channels
US5815402A (en) * 1996-06-07 1998-09-29 Micron Technology, Inc. System and method for changing the connected behavior of a circuit design schematic
US5831996A (en) * 1996-10-10 1998-11-03 Lucent Technologies Inc. Digital circuit test generator
US5835498A (en) * 1995-10-05 1998-11-10 Silicon Image, Inc. System and method for sending multiple data signals over a serial link
US5838919A (en) * 1996-09-10 1998-11-17 Ganymede Software, Inc. Methods, systems and computer program products for endpoint pair based communications network performance testing
US5857180A (en) * 1993-09-27 1999-01-05 Oracle Corporation Method and apparatus for implementing parallel operations in a database management system
US5878232A (en) * 1996-12-27 1999-03-02 Compaq Computer Corporation Dynamic reconfiguration of network device's virtual LANs using the root identifiers and root ports determined by a spanning tree procedure
US5970232A (en) * 1997-11-17 1999-10-19 Cray Research, Inc. Router table lookup mechanism
US5987517A (en) * 1996-03-27 1999-11-16 Microsoft Corporation System having a library of protocol independent reentrant network interface functions for providing common calling interface for communication and application protocols
US6003037A (en) * 1995-11-14 1999-12-14 Progress Software Corporation Smart objects for development of object oriented software
US6031984A (en) * 1998-03-09 2000-02-29 I2 Technologies, Inc. Method and apparatus for optimizing constraint models
US6038219A (en) * 1996-12-31 2000-03-14 Paradyne Corporation User-configurable frame relay network
US6047199A (en) * 1997-08-15 2000-04-04 Bellsouth Intellectual Property Corporation Systems and methods for transmitting mobile radio signals
US6052360A (en) * 1997-10-23 2000-04-18 Mci Communications Corporation Network restoration plan regeneration responsive to transitory conditions likely to affect network traffic
US6108782A (en) * 1996-12-13 2000-08-22 3Com Corporation Distributed remote monitoring (dRMON) for networks
US6141355A (en) * 1998-11-06 2000-10-31 Path 1 Network Technologies, Inc. Time-synchronized multi-layer network switch for providing quality of service guarantees in computer networks
US6148000A (en) * 1996-10-02 2000-11-14 International Business Machines Corporation Merging of data cells at network nodes
US6157645A (en) * 1996-05-28 2000-12-05 Kabushiki Kaisha Toshiba ATM communication system and ATM communication method
US6195355B1 (en) * 1997-09-26 2001-02-27 Sony Corporation Packet-Transmission control method and packet-transmission control apparatus
US6212568B1 (en) * 1998-05-06 2001-04-03 Creare Inc. Ring buffered network bus data management system
US6253339B1 (en) * 1998-10-28 2001-06-26 Telefonaktiebolaget Lm Ericsson (Publ) Alarm correlation in a large communications network
US20010039574A1 (en) * 1997-07-31 2001-11-08 Daniel Edward Cowan System and method for verification of remote spares in a communications network
US6331905B1 (en) * 1999-04-01 2001-12-18 The Trustees Of Columbia University In The City Of New York Network switch failure restoration
US6345048B1 (en) * 1998-04-30 2002-02-05 Sbc Technology Resources, Inc. ATM-based distributed virtual tandem switching system
US6363334B1 (en) * 1998-11-05 2002-03-26 Lucent Technologies Inc. Linear programming method of networking design for carrying traffic from endnodes to a core network at least cost
US20020083159A1 (en) * 2000-11-06 2002-06-27 Ward Julie A. Designing interconnect fabrics
US6418481B1 (en) * 1991-08-13 2002-07-09 Storage Technology Corporation Reconfigurable matrix switch for managing the physical layer of local area network
US6442584B1 (en) * 1997-05-16 2002-08-27 Sybase, Inc. Methods for resource consolidation in a computing environment
US20020122421A1 (en) * 2000-12-01 2002-09-05 Thales Method for the sizing of a deterministic type packet-switching transmission network
US6452924B1 (en) * 1997-11-10 2002-09-17 Enron Warpspeed Services, Inc. Method and apparatus for controlling bandwidth in a switched broadband multipoint/multimedia network
US6526420B2 (en) * 1998-11-20 2003-02-25 Hewlett-Packard Company Non-linear constraint optimization in storage system configuration
US6539027B1 (en) * 1999-01-19 2003-03-25 Coastcom Reconfigurable, intelligent signal multiplexer and network design and maintenance system therefor
US6539531B2 (en) * 1999-02-25 2003-03-25 Formfactor, Inc. Method of designing, fabricating, testing and interconnecting an IC to external circuit nodes
US20030065758A1 (en) * 2001-09-28 2003-04-03 O'sullivan Michael Justin Module-building method for designing interconnect fabrics
US6557169B1 (en) * 1998-10-11 2003-04-29 International Business Machines Corporation Method and system for changing the operating system of a workstation connected to a data transmission network
US6570850B1 (en) * 1998-04-23 2003-05-27 Giganet, Inc. System and method for regulating message flow in a digital data network
US6594701B1 (en) * 1998-08-04 2003-07-15 Microsoft Corporation Credit-based methods and systems for controlling data flow between a sender and a receiver with reduced copying of data
US6603769B1 (en) * 1998-05-28 2003-08-05 Cisco Technology, Inc. Method and system for improving traffic operation in an internet environment
US6611872B1 (en) * 1999-01-11 2003-08-26 Fastforward Networks, Inc. Performing multicast communication in computer networks by using overlay routing
US6614796B1 (en) * 1997-01-23 2003-09-02 Gadzoox Networks, Inc, Fibre channel arbitrated loop bufferless switch circuitry to increase bandwidth without significant increase in cost
US6628649B1 (en) * 1999-10-29 2003-09-30 Cisco Technology, Inc. Apparatus and methods providing redundant routing in a switched network device
US6668308B2 (en) * 2000-06-10 2003-12-23 Hewlett-Packard Development Company, L.P. Scalable architecture based on single-chip multiprocessing
US6687222B1 (en) * 1999-07-02 2004-02-03 Cisco Technology, Inc. Backup service managers for providing reliable network services in a distributed environment
US6701327B1 (en) * 1999-05-11 2004-03-02 3Com Corporation Merging network data sets comprising data acquired by interrogation of a network
US6757731B1 (en) * 1999-02-25 2004-06-29 Nortel Networks Limited Apparatus and method for interfacing multiple protocol stacks in a communication network
US6766381B1 (en) * 1999-08-27 2004-07-20 International Business Machines Corporation VLSI network processor and methods
US20050021583A1 (en) * 2003-07-25 2005-01-27 Artur Andrzejak Determination of one or more variables to receive value changes in local search solution of integer programming problem
US6976087B1 (en) * 2000-11-24 2005-12-13 Redback Networks Inc. Service provisioning methods and apparatus

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3224963B2 (en) 1994-08-31 2001-11-05 株式会社東芝 Network connection device and packet transfer method
JPH10510114A (en) 1994-11-30 1998-09-29 ブリティッシュ・テレコミュニケーションズ・パブリック・リミテッド・カンパニー Route determination in communication networks
US6101170A (en) 1996-09-27 2000-08-08 Cabletron Systems, Inc. Secure fast packet switch having improved memory utilization
JP3141808B2 (en) * 1997-01-17 2001-03-07 日本電気株式会社 How to design a network
US6061331A (en) * 1998-07-28 2000-05-09 Gte Laboratories Incorporated Method and apparatus for estimating source-destination traffic in a packet-switched communications network
US6724757B1 (en) 1999-01-15 2004-04-20 Cisco Technology, Inc. Configurable network router
US6697854B1 (en) 1999-02-22 2004-02-24 International Business Machines Corporation Method and apparatus for providing configuration information using a SIGA vector and utilizing a queued direct input-output device
US6275470B1 (en) * 1999-06-18 2001-08-14 Digital Island, Inc. On-demand overlay routing for computer-based communication networks
US6633909B1 (en) 1999-09-23 2003-10-14 International Business Machines Corporation Notification method that guarantees a system manager discovers an SNMP agent
US6697369B1 (en) 1999-09-28 2004-02-24 Lucent Technologies Inc Admission control adjustment in data networks using maximum cell count
US6675328B1 (en) 1999-10-08 2004-01-06 Vigilant Networks, Llc System and method to determine data throughput in a communication network
US6625777B1 (en) 1999-10-19 2003-09-23 Motorola, Inc. Method of identifying an improved configuration for a communication system using coding gain and an apparatus therefor
US6744767B1 (en) 1999-12-30 2004-06-01 At&T Corp. Method and apparatus for provisioning and monitoring internet protocol quality of service
US6697334B1 (en) 2000-01-18 2004-02-24 At&T Corp. Method for designing a network
US6778496B1 (en) 2000-06-07 2004-08-17 Lucent Technologies Inc. Distributed call admission and load balancing method and apparatus for packet networks
US6694361B1 (en) 2000-06-30 2004-02-17 Intel Corporation Assigning multiple LIDs to ports in a cluster
AU2001281240A1 (en) * 2000-08-10 2002-02-25 University Of Pittsburgh Apparatus and method for spare capacity allocation
CA2360963A1 (en) * 2000-11-03 2002-05-03 Telecommunications Research Laboratories Topological design of survivable mesh-based transport networks
US6857027B1 (en) 2000-11-14 2005-02-15 3Com Corporation Intelligent network topology and configuration verification using a method of loop detection
US6879564B2 (en) 2001-02-28 2005-04-12 Microsoft Corp. Method for designating communication paths in a network
US7099912B2 (en) 2001-04-24 2006-08-29 Hitachi, Ltd. Integrated service management system
US20020188732A1 (en) 2001-06-06 2002-12-12 Buckman Charles R. System and method for allocating bandwidth across a network
US6804245B2 (en) 2001-08-17 2004-10-12 Mcdata Corporation Compact, shared route lookup table for a fiber channel switch
US9009004B2 (en) 2002-01-31 2015-04-14 Hewlett-Packasrd Development Comany, L.P. Generating interconnect fabric requirements
US20040010577A1 (en) * 2002-07-09 2004-01-15 Ferit Yegenoglu System and method for optimizing network design in a communications network based on determined network capacity and network utilization
US7277960B2 (en) 2003-07-25 2007-10-02 Hewlett-Packard Development Company, L.P. Incorporating constraints and preferences for determining placement of distributed application onto distributed resource infrastructure
US7426570B2 (en) 2003-07-25 2008-09-16 Hewlett-Packard Development Company, L.P. Determining placement of distributed application onto distributed resource infrastructure

Patent Citations (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5113496A (en) * 1987-08-04 1992-05-12 Mccalley Karl W Bus interconnection structure with redundancy linking plurality of groups of processors, with servers for each group mounted on chassis
US4920487A (en) * 1988-12-12 1990-04-24 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Method of up-front load balancing for local memory parallel processors
US5138657A (en) * 1989-10-23 1992-08-11 At&T Bell Laboratories Method and apparatus for controlling a digital crossconnect system from a switching system
US5107489A (en) * 1989-10-30 1992-04-21 Brown Paul J Switch and its protocol for making dynamic connections
US5426674A (en) * 1990-02-06 1995-06-20 Nemirovsky; Paul Method and computer system for selecting and evaluating data routes and arranging a distributed data communication network
US5245609A (en) * 1991-01-30 1993-09-14 International Business Machines Corporation Communication network and a method of regulating the transmission of data packets in a communication network
US6418481B1 (en) * 1991-08-13 2002-07-09 Storage Technology Corporation Reconfigurable matrix switch for managing the physical layer of local area network
US5307449A (en) * 1991-12-20 1994-04-26 Apple Computer, Inc. Method and apparatus for simultaneously rendering multiple scanlines
US5524212A (en) * 1992-04-27 1996-06-04 University Of Washington Multiprocessor system with write generate method for updating cache
US5634011A (en) * 1992-06-18 1997-05-27 International Business Machines Corporation Distributed management communications network
US5329619A (en) * 1992-10-30 1994-07-12 Software Ag Cooperative processing interface and communication broker for heterogeneous computing environments
US5649105A (en) * 1992-11-10 1997-07-15 Ibm Corp. Collaborative working in a network
US5857180A (en) * 1993-09-27 1999-01-05 Oracle Corporation Method and apparatus for implementing parallel operations in a database management system
US5598532A (en) * 1993-10-21 1997-01-28 Optimal Networks Method and apparatus for optimizing computer networks
US5581689A (en) * 1993-12-28 1996-12-03 Nec Corporation Multi link type self healing system for communication networks
US5634004A (en) * 1994-05-16 1997-05-27 Network Programs, Inc. Directly programmable distribution element
US5802286A (en) * 1995-05-22 1998-09-01 Bay Networks, Inc. Method and apparatus for configuring a virtual network
US5651005A (en) * 1995-08-29 1997-07-22 Microsoft Corporation System and methods for supplying continuous media data over an ATM public network
US5835498A (en) * 1995-10-05 1998-11-10 Silicon Image, Inc. System and method for sending multiple data signals over a serial link
US5805578A (en) * 1995-10-27 1998-09-08 International Business Machines Corporation Automatic reconfiguration of multipoint communication channels
US6003037A (en) * 1995-11-14 1999-12-14 Progress Software Corporation Smart objects for development of object oriented software
US5793362A (en) * 1995-12-04 1998-08-11 Cabletron Systems, Inc. Configurations tracking system using transition manager to evaluate votes to determine possible connections between ports in a communications network in accordance with transition tables
US5987517A (en) * 1996-03-27 1999-11-16 Microsoft Corporation System having a library of protocol independent reentrant network interface functions for providing common calling interface for communication and application protocols
US6157645A (en) * 1996-05-28 2000-12-05 Kabushiki Kaisha Toshiba ATM communication system and ATM communication method
US5815402A (en) * 1996-06-07 1998-09-29 Micron Technology, Inc. System and method for changing the connected behavior of a circuit design schematic
US5838919A (en) * 1996-09-10 1998-11-17 Ganymede Software, Inc. Methods, systems and computer program products for endpoint pair based communications network performance testing
US6148000A (en) * 1996-10-02 2000-11-14 International Business Machines Corporation Merging of data cells at network nodes
US5831996A (en) * 1996-10-10 1998-11-03 Lucent Technologies Inc. Digital circuit test generator
US6108782A (en) * 1996-12-13 2000-08-22 3Com Corporation Distributed remote monitoring (dRMON) for networks
US5878232A (en) * 1996-12-27 1999-03-02 Compaq Computer Corporation Dynamic reconfiguration of network device's virtual LANs using the root identifiers and root ports determined by a spanning tree procedure
US6038219A (en) * 1996-12-31 2000-03-14 Paradyne Corporation User-configurable frame relay network
US6614796B1 (en) * 1997-01-23 2003-09-02 Gadzoox Networks, Inc, Fibre channel arbitrated loop bufferless switch circuitry to increase bandwidth without significant increase in cost
US6442584B1 (en) * 1997-05-16 2002-08-27 Sybase, Inc. Methods for resource consolidation in a computing environment
US20010039574A1 (en) * 1997-07-31 2001-11-08 Daniel Edward Cowan System and method for verification of remote spares in a communications network
US6047199A (en) * 1997-08-15 2000-04-04 Bellsouth Intellectual Property Corporation Systems and methods for transmitting mobile radio signals
US6195355B1 (en) * 1997-09-26 2001-02-27 Sony Corporation Packet-Transmission control method and packet-transmission control apparatus
US6052360A (en) * 1997-10-23 2000-04-18 Mci Communications Corporation Network restoration plan regeneration responsive to transitory conditions likely to affect network traffic
US6452924B1 (en) * 1997-11-10 2002-09-17 Enron Warpspeed Services, Inc. Method and apparatus for controlling bandwidth in a switched broadband multipoint/multimedia network
US5970232A (en) * 1997-11-17 1999-10-19 Cray Research, Inc. Router table lookup mechanism
US6031984A (en) * 1998-03-09 2000-02-29 I2 Technologies, Inc. Method and apparatus for optimizing constraint models
US6570850B1 (en) * 1998-04-23 2003-05-27 Giganet, Inc. System and method for regulating message flow in a digital data network
US6345048B1 (en) * 1998-04-30 2002-02-05 Sbc Technology Resources, Inc. ATM-based distributed virtual tandem switching system
US6212568B1 (en) * 1998-05-06 2001-04-03 Creare Inc. Ring buffered network bus data management system
US6603769B1 (en) * 1998-05-28 2003-08-05 Cisco Technology, Inc. Method and system for improving traffic operation in an internet environment
US6594701B1 (en) * 1998-08-04 2003-07-15 Microsoft Corporation Credit-based methods and systems for controlling data flow between a sender and a receiver with reduced copying of data
US6557169B1 (en) * 1998-10-11 2003-04-29 International Business Machines Corporation Method and system for changing the operating system of a workstation connected to a data transmission network
US6253339B1 (en) * 1998-10-28 2001-06-26 Telefonaktiebolaget Lm Ericsson (Publ) Alarm correlation in a large communications network
US6363334B1 (en) * 1998-11-05 2002-03-26 Lucent Technologies Inc. Linear programming method of networking design for carrying traffic from endnodes to a core network at least cost
US6141355A (en) * 1998-11-06 2000-10-31 Path 1 Network Technologies, Inc. Time-synchronized multi-layer network switch for providing quality of service guarantees in computer networks
US6526420B2 (en) * 1998-11-20 2003-02-25 Hewlett-Packard Company Non-linear constraint optimization in storage system configuration
US6611872B1 (en) * 1999-01-11 2003-08-26 Fastforward Networks, Inc. Performing multicast communication in computer networks by using overlay routing
US6539027B1 (en) * 1999-01-19 2003-03-25 Coastcom Reconfigurable, intelligent signal multiplexer and network design and maintenance system therefor
US6539531B2 (en) * 1999-02-25 2003-03-25 Formfactor, Inc. Method of designing, fabricating, testing and interconnecting an IC to external circuit nodes
US6757731B1 (en) * 1999-02-25 2004-06-29 Nortel Networks Limited Apparatus and method for interfacing multiple protocol stacks in a communication network
US6331905B1 (en) * 1999-04-01 2001-12-18 The Trustees Of Columbia University In The City Of New York Network switch failure restoration
US6701327B1 (en) * 1999-05-11 2004-03-02 3Com Corporation Merging network data sets comprising data acquired by interrogation of a network
US6687222B1 (en) * 1999-07-02 2004-02-03 Cisco Technology, Inc. Backup service managers for providing reliable network services in a distributed environment
US6766381B1 (en) * 1999-08-27 2004-07-20 International Business Machines Corporation VLSI network processor and methods
US6628649B1 (en) * 1999-10-29 2003-09-30 Cisco Technology, Inc. Apparatus and methods providing redundant routing in a switched network device
US6668308B2 (en) * 2000-06-10 2003-12-23 Hewlett-Packard Development Company, L.P. Scalable architecture based on single-chip multiprocessing
US20020083159A1 (en) * 2000-11-06 2002-06-27 Ward Julie A. Designing interconnect fabrics
US6976087B1 (en) * 2000-11-24 2005-12-13 Redback Networks Inc. Service provisioning methods and apparatus
US20020122421A1 (en) * 2000-12-01 2002-09-05 Thales Method for the sizing of a deterministic type packet-switching transmission network
US20030065758A1 (en) * 2001-09-28 2003-04-03 O'sullivan Michael Justin Module-building method for designing interconnect fabrics
US20050021583A1 (en) * 2003-07-25 2005-01-27 Artur Andrzejak Determination of one or more variables to receive value changes in local search solution of integer programming problem

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020091804A1 (en) * 2000-11-06 2002-07-11 Ward Julie Ann Reliability for interconnect fabrics
US20020091845A1 (en) * 2000-11-06 2002-07-11 Ward Julie Ann Reliability for interconnect fabrics
US7032013B2 (en) 2000-11-06 2006-04-18 Hewlett-Packard Development Company, L.P. Reliability for interconnect fabrics
US7076537B2 (en) 2000-11-06 2006-07-11 Hewlett-Packard Development Company, L.P. Designing interconnect fabrics
US7233983B2 (en) 2000-11-06 2007-06-19 Hewlett-Packard Development Company, L.P. Reliability for interconnect fabrics
US20020083159A1 (en) * 2000-11-06 2002-06-27 Ward Julie A. Designing interconnect fabrics
US20050119963A1 (en) * 2002-01-24 2005-06-02 Sung-Min Ko Auction method for real-time displaying bid ranking
US20030144822A1 (en) * 2002-01-31 2003-07-31 Li-Shiuan Peh Generating interconnect fabric requirements
US9009004B2 (en) 2002-01-31 2015-04-14 Hewlett-Packard Development Company, L.P. Generating interconnect fabric requirements
US7415519B2 (en) * 2002-06-28 2008-08-19 Lenovo (Singapore) Pte. Ltd. System and method for prevention of boot storms in a computer network
US20040003082A1 (en) * 2002-06-28 2004-01-01 International Business Machines Corporation System and method for prevention of boot storms in a computer network
US20080066036A1 (en) * 2004-07-12 2008-03-13 International Business Machines Corporation Chip Having Timing Analysis of Paths Performed Within the Chip During the Design Process
US7823108B2 (en) * 2004-07-12 2010-10-26 International Business Machines Corporation Chip having timing analysis of paths performed within the chip during the design process
US7386585B2 (en) * 2004-10-30 2008-06-10 International Business Machines Corporation Systems and methods for storage area network design
US20080275933A1 (en) * 2004-10-30 2008-11-06 International Business Machines Corporation Systems and methods for storage area network design
US20080275934A1 (en) * 2004-10-30 2008-11-06 International Business Machines Corporation Systems and methods for storage area network design
US7707238B2 (en) * 2004-10-30 2010-04-27 International Business Machines Corporation Systems and methods for storage area network design
US7711767B2 (en) * 2004-10-30 2010-05-04 International Business Machines Corporation Systems and methods for storage area network design
US20060095885A1 (en) * 2004-10-30 2006-05-04 Ibm Corporation Systems and methods for storage area network design
US8533016B1 (en) 2005-01-30 2013-09-10 Hewlett-Packard Development Company, L.P. System and method for selecting a portfolio
US10567295B2 (en) * 2018-05-17 2020-02-18 Cisco Technology, Inc. Method and system for teleprotection over segment routing-based networks

Also Published As

Publication number Publication date
US7237020B1 (en) 2007-06-26
US7308494B1 (en) 2007-12-11

Similar Documents

Publication Publication Date Title
US7233983B2 (en) Reliability for interconnect fabrics
US20030145294A1 (en) Verifying interconnect fabric designs
US7200117B2 (en) Method of optimizing network capacity and fault tolerance in deadlock-free routing
US20070053283A1 (en) Correlation and consolidation of link events to facilitate updating of status of source-destination routes in a multi-path network
US8804490B2 (en) Controller placement for fast failover in the split architecture
US8243604B2 (en) Fast computation of alternative packet routes
US7526540B2 (en) System and method for assigning data collection agents to storage area network nodes in a storage area network resource management system
JPH03139936A (en) Path selection method
CN113193996B (en) Power optical transmission network optimization method, device, equipment and storage medium
US6289096B1 (en) Call routing method using prioritized source-destination routes
Santos et al. Robust SDN controller placement to malicious node attacks
US7502839B2 (en) Module-building method for designing interconnect fabrics
US8184555B1 (en) SpaceWire network management
CN111160661A (en) Method, system and equipment for optimizing reliability of power communication network
US7656821B2 (en) Topology discovery and identification of switches in an N-stage interconnection network
US7321561B2 (en) Verification of connections between devices in a network
Yi et al. A safe and reliable heterogeneous controller deployment approach in SDN
Li et al. Towards robust controller placement in software-defined networks against links failure
US7284148B2 (en) Method and system for self-healing in routers
US9009004B2 (en) Generating interconnect fabric requirements
CN112367121B (en) Fission ring splitting optimization method and system based on broken half path algorithm
CN108011815B (en) Network control method and software defined network device and system
Ouveysi et al. Fast heuristics for protection networks for dynamic routing
WO2023207048A1 (en) Network intent mining method and apparatus, and related device
KR100217719B1 (en) Method for calculation communication path existence in lattice type communication network system

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD COMPANY, COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WARD, JULIE ANN;WILKES, JOHN;SHAHOUMIAN, TROY ALEXANDER;REEL/FRAME:012950/0484

Effective date: 20020124

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:014061/0492

Effective date: 20030926

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION