WO2004001598A2 - Distributed computer - Google Patents
- Publication number
- WO2004001598A2 (PCT/GB2003/002631)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- node
- policy
- task
- requirements
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5066—Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/505—Clusters
Definitions
- the present invention relates to a distributed computer and to a method of operating a computer forming a component of a distributed computer.
- One application of a distributed computer is the carrying out of a task which is too demanding to be solved quickly by a computer having a single processor. In such a case, it is necessary to divide the task to be performed amongst the plurality of processors present in the distributed computer. This is known as processor allocation or 'load balancing'.
- Distributed computers should also be tolerant to the failure or shutdown of one of the processors within them - systems of this type are disclosed, for example, in International Patent Application WO 01 /82678, and European Patent applications 0 887 731 and 0 750 256.
- one or more processors is given the task of tracking how heavily-loaded other processors in the distributed computer are. If the processors within the distributed computer are organised into a logical hierarchy independent of the physical structure of the network interconnecting the different processors, the task of monitoring levels of usage of the processors can be split up in accordance with that hierarchy.
- New processes can be generated anywhere within the logical hierarchy and are escalated sufficiently far up the hierarchy to a 'manager' processor which has a sufficient number of subordinates to carry out the task. The manager then delegates the component tasks back down the hierarchy.
- a method of dividing a task amongst a plurality of nodes within a distributed computer comprising:
- calculating task group topology data representing nodes and interconnections between them in dependence on requirements data entered by a user / administrator, and then distributing a task to be performed between nodes in accordance with the calculated topology
- a more flexible method of utilising the resources of a distributed computer than has hitherto been known is provided. It is to be understood that the task group will not necessarily equate to the physical topology of the nodes and interconnections between them in the distributed computer.
- the nodes and connections used will often be a subset of those available - also a logical connection represented in the task group topology data might represent a concatenation of a plurality of physical connections.
- said topology calculation comprises the step of comparing said requirements data with node capability data for a node available to join said task group. This provides a convenient mechanism for automatically generating the task group topology.
- said requirements data is arranged in accordance with a predefined data structure defined by requirements format data stored in said computer, said method further comprising the step of verifying that said requirements data is formatted in accordance with said predefined data structure by comparing said requirements data to said requirements format data.
- Defining the format of said requirements data in this way allows for easier communication of requirements data between computers.
- the extensible Markup Language (XML) is used to define the format data, and known XML parsing programs are used to check the format of requirements data.
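This kind of DTD check can be sketched in Java (the language the embodiment itself uses, as described below). The `requirements`/`cpu` element names in the sample are invented for illustration; a real deployment would validate against the profile and policy DTDs described later in the document.

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.ErrorHandler;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import java.io.StringReader;

public class RequirementsValidator {

    // Returns true only if the document is well-formed AND valid against
    // the DTD it declares (here, an internal DTD subset).
    public static boolean isValid(String xml) {
        try {
            DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
            f.setValidating(true); // validate against the DTD, not just well-formedness
            DocumentBuilder b = f.newDocumentBuilder();
            // The default handler only reports validity errors; make them fatal.
            b.setErrorHandler(new ErrorHandler() {
                public void warning(SAXParseException e) { }
                public void error(SAXParseException e) throws SAXException { throw e; }
                public void fatalError(SAXParseException e) throws SAXException { throw e; }
            });
            b.parse(new InputSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```

A document declaring, say, `<!ELEMENT requirements (cpu)>` is then accepted only if it contains exactly one `cpu` child element.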
- said method further comprises the step of operating a node seeking to join said task group to generate node capability data and send said data to one or more nodes already included within said task group.
- said task distribution involves a node forwarding a task to a node which neighbours it in said task group topology. This provides a convenient way of utilising the generated topology in the subsequent calculation.
- a distributed computer apparatus comprising:
- each of said nodes having recorded therein:
- processor readable code executable to update group membership data comprising:
- group membership request generation code executable to generate and send a group membership request including node profile data to another node indicated to be a member of said group;
- group membership request handling code executable to receive a group membership request including node profile data, and decide whether said request is to be granted in dependence upon the group membership policy data stored at said node;
- group membership update code executable to update the list of group members stored at said node on deciding to grant a group membership request received from another node, and to send a response to the node sending said request indicating that said request is successful.
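The three code elements listed above (request generation aside) can be sketched in Java as follows. The property names ('bogomips', 'freeMemoryK') and the greater-than-or-equal comparison are illustrative assumptions, not details taken from the claims.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class MembershipHandler {
    // Hypothetical group membership policy: property -> minimum acceptable value.
    private final Map<String, Long> policy;
    // Locally stored list of group members.
    private final Set<String> members = new TreeSet<>();

    public MembershipHandler(Map<String, Long> policy) { this.policy = policy; }

    // Decide whether a membership request is granted, in dependence upon the
    // policy data stored at this node, and update the member list if so.
    public boolean handleRequest(String nodeId, Map<String, Long> profile) {
        for (Map.Entry<String, Long> criterion : policy.entrySet()) {
            Long offered = profile.get(criterion.getKey());
            if (offered == null || offered < criterion.getValue()) {
                return false; // a criterion is not met: refuse the request
            }
        }
        members.add(nodeId); // update group membership data
        return true;         // a real node would now send a success response
    }

    public Set<String> members() { return Collections.unmodifiableSet(members); }
}
```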
- each node further has recorded therein received program data execution code executable to receive program data from another of said nodes and to execute said program.
- said plurality of processor nodes comprise computers executing different operating system programs, and said received program execution code is further executable to provide a similar execution environment on nodes despite the differences in said operating system programs.
- a method of operating a member node of a distributed computing network comprising:
- membership policy data comprising one or more property value pairs indicating one or more criteria for membership of said distributed computing network
- By controlling a member node of a distributed computing network to compare profile data from another computer with criteria indicated by membership policy data accessible to the member node, and updating distributed computing network data accessible to the member node if said profile data indicates that said one or more criteria is met, a distributed network whose membership accords with said policy data is built up.
- Where the policy reflects the distributed task that is to be shared amongst the members of the distributed computing network, a distributed computer network whose membership is suited to the distributed task to be shared is built up.
- the member node stores said distributed network membership data. This results in a distributed computing network which is more robust than networks where this data is stored in a central database. Similarly, in some embodiments, said member node stores said membership policy data.
- the method further comprises the steps of:
- these steps allow the distributed computing network to be dynamically reconfigured in response, for example, to a change in the task to be performed or the addition of a new type of node which might apply to become a member of the distributed computing network.
- a computer program product loadable into the internal memory of a digital computer comprising:
- task group requirements data reception code executable to receive and store received task group requirements data
- node capability profile data reception code executable to receive and store received node capability profile data
- comparison code executable to compare said node capability data and said task group requirements data to find whether the node represented by said node capability data meets said task group requirements
- task group topology update code executable to add an identifier of said represented node to a task group topology data structure on said comparison code indicating that said represented node meets said requirements
- task execution code executable to receive code from another node in said task group and to execute said code or forward said code to a node represented as a neighbour in said task group topology data structure.
- Figure 1 shows an internetwork of computing devices operating in accordance with a first embodiment of the present invention
- Figure 2 shows a tree diagram representing a document type definition for a profile document for use in the first embodiment
- Figure 3 shows a tree diagram representing a document type definition for a policy document for use in the first embodiment
- Figure 4 shows the architecture of a software program installed on the computing devices of Figure 1;
- Figure 5 is a flow-chart of a script (i.e. program) which is run by each of the computing devices of Figure 1 when they are switched on;
- Figure 6 shows how a node connects to a distributed computing network set up within the physical network of Figure 1 ;
- Figure 7 is a flow-chart showing how each of the computing devices of Figure 1 responds to a request by another computer to join a task group of computing devices for performing a distributed process
- Figure 8 is a flow-chart showing how each of the computing devices of Figure 1 responds to a received policy document
- Figure 9 illustrates how the topology of the task group is controlled by the policy documents stored in the computing devices of Figure 1.
- Figure 1 illustrates an internetwork comprising a fixed Ethernet 802.3 local area network 10 which interconnects first 12 and second 14 IEEE 802.11 wireless local area networks.
- Attached to the fixed local area network 10 are a server computer 218, and three desktop PCs (219, 220, 221).
- the first wireless local area network 12 has a wireless connection to a first laptop computer 223; the second wireless local area network 14 has wireless connections to a second laptop computer 224 and a personal digital assistant 225.
- Also illustrated in Figure 1 is a compact disc which carries software which can be loaded directly or indirectly onto each of the computing devices of Figure 1 (218 - 225) and which will cause them to operate in accordance with a first embodiment of the present invention when run.
- Figure 2 shows, in tree diagram form, a Document Type Definition (DTD) which indicates a predetermined logical structure for a 'profile' document written in extensible Mark-Up Language (XML).
- a profile document consists of eight sections, some of which themselves contain one or more fields.
- the eight sections relate to:
- processor information 24 about the processor(s) contained within the device
- physical topology information 34 - this comprises a list of Internet Protocol addresses for the immediate neighbours of the device.
- the physical topology information is input to the echo pattern information distribution scheme described below.
- Figure 3 shows, in tree diagram form, a Document Type Definition (DTD) which indicates a predetermined logical structure for a 'policy' document written in extensible Mark-Up Language (XML).
- One purpose of a 'policy' document in this embodiment is to set out the conditions which an applicant computing device must fulfil prior to a specified action being carried out in respect of that computing device. In the present case, the action concerned is the joining of the applicant computing device to a distributed computing network.
- Policy documents may also cause the node which receives them to carry out an action specified in the policy.
- a policy document consists of two sections, each of which has a complex logical structure.
- the first section 100 refers to the creator of the policy and includes fields which indicate the level of authority enjoyed by the creator of the policy (some computing devices may be programmed not to take account of policies generated by a creator who has a level of authority below a predetermined level), the unique name of the policy, the name of any policy it is to replace, times at which the policy is to be applied etc.
- the second section 102 refers to the individual computing devices or classes of computing devices to which the policy is applicable, and sets out the applicable policy 104 for each of those individual computing devices or classes of computing devices.
- Each policy comprises a set of 'conditions' 106 and an action 108 which is to be carried out if all those 'conditions' are met.
- the conditions are in fact values of various fields, e.g. processing power (represented here as 'BogoMIPS' - the Linux measure of 'bogus' millions of instructions per second) and free memory. It will be seen that many of the conditions correspond to fields found in a profile document.
- <reply-address>ferdina@drake.bt.co.uk</reply-address> </creator>
- </subjects> <subjects> <host>132.146.107.219</host> <conditions> <action>join</action>
- Figure 4 shows the architecture of a software program recorded on the compact disc 16 and installed and executing on each of the computing devices (218-225) of Figure 1.
- the software program is written in the Java programming language and thus consists of a number of 'class' files which contain bytecode which is interpretable by the Java Virtual Machine software on each of the computing devices.
- the classes and the interactions between them are shown in Figure 4 - the classes are grouped into modules (as indicated by the dashed-line boxes).
- the purpose of the software is to allow a task to be shared amongst a plurality of computing devices.
- a user must provide a sub-class of a predetermined SimpleTask or CompositeTask abstract class in order to specify the task that he or she wishes to be carried out by the devices (218 - 225) included within the internetwork.
- the Secretary module 106 handles its reception and stores it using the Task Repository module 108 until the task is carried out as explained below.
- the Work Manager module 110 causes a task to be carried out if a task arrives at the computing device and the computing device has sufficient resources to carry out that task.
- Each task results in the starting of a new execution thread 112 which carries out the task or, if insufficient resources are available at the device, delegates some or all of the task to one of a selected subset (218-220, 225) of computing devices (218-225) which form a task group suitable for carrying out the task.
- the manner in which the task group (218-220, 225) is assembled will be explained below.
- the Guardian module 114 provides the interface to the other computing devices in the internetwork (Figure 1). It implements the communications protocols used by the system and also acts as a security firewall, only accepting objects which have come from an authorised source.
- the Guardian module uses Remote Method Invocation (RMI) to communicate with other computing devices in the internetwork (Figure 1). More precisely, the NodeGatelmpl object encapsulates the RMI technology and implements the remote interface called NodeGate.
- the Topology Centre module 118 maintains a remote graph data structure - a graph in this sense being a network comprising a plurality of nodes connected to one another via links.
- Each of the computing devices which is a member of the task group (218-220, 225) is represented by an RMI remote object in the remote graph data structure.
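For readers unfamiliar with RMI (Remote Method Invocation), a remote interface of the kind NodeGate represents might look like the following Java sketch. The method name and signature are assumptions, since the text only states that a connect request carries a remote RemoteGraphNode reference and an XML profile document.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Sketch of the node remote interface seen by other nodes; in the embodiment
// this is implemented by NodeGatelmpl.
public interface NodeGate extends Remote {
    // Called by a joining node; the receiver decides whether to grant the
    // connection after checking the profile against its stored policies.
    // (Name and parameters are illustrative assumptions.)
    boolean connect(String profileXml) throws RemoteException;
}
```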
- the Initiator module comprises two objects.
- Each of the computing devices (218 - 225) also stores a launch script. The processes carried out by each computing device on execution of that script are illustrated in Figure 5.
- the first stage is the collection of information about the capabilities of the computing device on which the script is run. This involves the transfer of:
- a MetaDataHandler execution thread is started together with another execution thread (step 140) which runs the Initiator class (Figure 4: 120).
- the MetaDataHandler execution thread starts by generating (step 132) a profile XML document in accordance with the DTD seen in Figure 2.
- the OS Version field of the general information section 20 can be filled with a value taken from the system properties available from the operating system;
- the processor speed field of the CPU section 24 can be found from the CPU information file saved in the preliminary system information collection step (step 130);
- the MetaDataHandler thread then opens a socket on port 1240 and listens for connections from other computing devices. The action taken in response to receiving a file via that socket will be explained below with reference to Figures 7 and 8.
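The listening behaviour just described can be sketched as follows. This is a simplification under stated assumptions: it accepts a single connection and reads one document, whereas the real MetaDataHandler thread would loop, and it takes the port as a parameter rather than hard-coding 1240 (port 0 asks the OS for any free port, which is convenient for local testing).

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class ProfileListener {
    private ServerSocket server;

    // Bind to the given port (the embodiment uses 1240; 0 picks any free port).
    public ProfileListener(int port) {
        try { server = new ServerSocket(port); }
        catch (IOException e) { throw new RuntimeException(e); }
    }

    public int port() { return server.getLocalPort(); }

    // Accept one connection and read the whole document a peer node sends,
    // as the MetaDataHandler does for incoming profile and policy files.
    public String acceptOneDocument() {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            StringBuilder doc = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                doc.append(line).append('\n');
            }
            return doc.toString();
        } catch (IOException e) {
            return "";
        }
    }
}
```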
- the part of the script which launches the Initiator class may include the RMI name of a computing device to connect to (it will not if the computing device concerned is the first node in the task group). If it does, then the Initiator class results in an attempt to connect to that node.
- a script including a reference to the server 218 is run on the PC 219. As explained above, this results in the Initiator class 120 being run on the PC 219. This in turn requests HydraNodeConnector 150 to connect to the server 218.
- NodeGatelmpl encapsulates RMI technology.
- NodeGatelmpl 154 uses the Naming class (a standard RMI facility) to obtain a reference to the NodeGate of the server 218.
- NodeGate is the node remote interface seen by other nodes, normally implemented by NodeGatelmpl. As soon as it has the reference, NodeGatelmpl 154 requests the NodeGate of the server 218 to connect.
- the connect request contains the remote reference to the RemoteGraphNode of the PC 219 and the XML profile document representing the capabilities of the PC 219.
- When received at the server 218, the request is passed to the Guardian and then to the HydraNodeConnector. As explained below, the MetaDataHandler thread determines whether the request to connect to the distributed computing network should be accepted and informs HydraNodeConnector accordingly. In the present case, the connection is accepted. Hence, HydraNodeConnector supplies the local RemoteGraphNode with a reference to its counterpart on the PC 219 and orders the RemoteGraphNode to establish a connection. The server 218 and the PC 219 exchange references and link to each other using their internal connection mechanisms.
- the task group topology databases in the server 218 and the PC 219 are then updated accordingly.
- On receiving a profile file (step 170), the MetaDataHandler checks that the XML document is well-formed - a concept which will be understood by those skilled in the art (step 172). This check is carried out by an XML parser - in the present case the Xerces XML parser available from the Apache Software Foundation is used. Thereafter, in step 174, the MetaDataHandler recognises the input file as a profile, which results in the use of an evaluateConditions method of a PolicyHandler class to check the profile against any policies stored in the computing device which has received the profile document.
- This involves a comparison of the values stored in the profile with those stored in the policy.
- The nature of that comparison (i.e. whether, for example, the value in the profile must be equal to the value in the policy or can also be greater than it) is programmed into the PolicyHandler class.
- the policy example given above includes a value of 112000K between <HD> tags.
- the profile example given above has two sets of data relating to permanent memory, one for each of two hard discs. The second set of data is:
- the PolicyHandler class is programmed to calculate the amount of free hard disc space (i.e. 4489K) and will refuse connection since that amount is not greater than or equal to the required 112000K of permanent storage.
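The free-space calculation and comparison might look like this in Java. The capacity and used-space figures in the usage note below are hypothetical values chosen to reproduce the 4489K free space mentioned above; the actual profile data is not shown in full here.

```java
public class DiskCondition {
    // A profile reports a disc's total capacity and used space (in K);
    // free space is simply the difference.
    public static long freeSpaceK(long capacityK, long usedK) {
        return capacityK - usedK;
    }

    // The comparison semantics (>= here) are fixed in the handler, as with
    // the embodiment's PolicyHandler: the profile value may equal or exceed
    // the value demanded by the policy.
    public static boolean meetsRequirement(long capacityK, long usedK,
                                           long requiredFreeK) {
        return freeSpaceK(capacityK, usedK) >= requiredFreeK;
    }
}
```

For example, a disc with a hypothetical capacity of 1000000K of which 995511K is used has 4489K free, which fails a 112000K requirement.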
- In step 178 it is determined whether all the required conditions are met. If they are, the connection is formed (step 180) and the task group topology data is updated (step 182) as described above. If one or more of the conditions is not met then the profile is forwarded to another node in the internetwork (step 184).
- the first step is identical to that carried out in relation to the receipt of a profile file.
- the file is checked (step 192) to see whether it is well-formed.
- the policy file is validated by checking it against the structure defined in the relevant DTD.
- the DTD may be incorporated directly in the policy file, or it can be a separate file which is referenced in an XML DOCTYPE declaration as a Uniform Resource Identifier (URI).
- the policy document therefore includes information on the location of the DTD to use - normally, the DTD will be stored at an accessible web server.
- the Network Policy subsystem is started (step 194).
- This then causes a check to be carried out to see whether the policy uses the correct date system and has sensible values for parameters (step 196).
- the computing device receiving the policy then extracts the domain and/or subject-list within the policy document (step 198).
- a test (step 200) is then carried out to see whether the receiving computing device is within a domain to which the policy applies or is included in a list of subjects to which the policy applies.
- If the computing device is not in the target group then it forwards the policy to its neighbours which are yet to receive the policy (step 202).
- This forwarding step is carried out in accordance with the so-called echo pattern explained in Koon-Seng Lim and Rolf Stadler, 'Developing pattern-based management programs', Center for Telecommunications Research and Department of Electrical Engineering, Columbia University, New York, CTR Technical Report 503-01-01, August 6, 2001.
- the physical topology information 34 found in the profile is used as an input to this step.
- the computing device checks whether it already has the policy (steps 204 and 206). If the policy is already stored, then it is just forwarded (step 208) as explained in relation to step 202 above. Alternatively, the current policy can be overwritten, thus providing a mechanism for updating a policy.
- If the policy is not already stored, then it is stored (step 210). Copies of the policy are then forwarded as explained above. It is to be noted that the policy may specify that the node receiving the policy is to re-send its profile to the node to which it initially connected. If this is combined with a replacement of the policy adopted by the node to which it initially connected, repeating the joining steps explained above will re-configure the distributed computing network in accordance with the replacement policy.
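The store-and-forward behaviour of steps 204-210 can be sketched as follows. The name-keyed map and the recorded list of forwarded destinations are simplifying assumptions; a real node would transmit copies to neighbour addresses via its Guardian module.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PolicyStore {
    private final Map<String, String> policies = new HashMap<>(); // name -> body
    private final List<String> forwarded = new ArrayList<>();     // stand-in for network sends

    // Mirrors steps 204-210: store the policy if it is new, then forward
    // copies to neighbours either way. (Overwriting instead of skipping a
    // known policy would implement the update mechanism described above.)
    public void receive(String name, String body, List<String> neighbours) {
        if (!policies.containsKey(name)) {
            policies.put(name, body);   // step 210: store a newly seen policy
        }
        forwarded.addAll(neighbours);   // steps 202/208: echo-pattern forwarding
    }

    public boolean has(String name) { return policies.containsKey(name); }
    public List<String> forwarded() { return forwarded; }
}
```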
- the administrator of the internetwork of Figure 1 might wish to use spare computing power around the internetwork to carry out a complex computational task.
- the administrator writes a policy which includes a first portion applicable to the domain including all computing devices having an IP address 132.146.107.xxx (say), which portion includes a first condition that the utilisation measured over the last 15 mins is less than 5% of processor cycles.
- the policy also includes a second portion which is applicable only to the server 218 and includes the additional condition that the processor speed is greater than 512 million instructions per second. He supplies that policy to the server computer 218 and runs a script as explained above, but without specifying the IP address of a host to connect to.
- the personal digital assistant might pass the utilisation test, but fail the test on processor speed. In this case, although the server 218 rejects the request, the personal computer 219 will accept the request.
- the resulting logical topology (which places the fastest processors closest to the centre of the task group) will result in better performance than had the personal digital assistant connected directly to the server 218. It will be seen how the generation of policies and profiles and comparison of the two prior to accepting a connection to a task group allows the automatic generation of a logical topology which suits the nature of the distributed task which is to be carried out.
- the same set of network nodes can be arranged into different distributed networks in dependence on policies which might reflect, for example, a requirement for large amounts of memory (e.g. in a file-sharing network), a requirement for low latency (e.g.
- the internetwork might be much larger than that illustrated in Figure 1 - for example, it might include other nodes connected to those shown in Figure 1 via a wide area network;
- nodes applied to join the task group in response to the administrator running a script program on them.
- a node already in the task group might ask its neighbours whether they have enough resources to meet the requirements of the policy for this task group.
- the comparison of the policy and the profile might take place in the applicant node, or in the responding node, or in a third party computer;
- a logical network is created on the basis of a physical network as a precursor to distributing a computational task amongst the computers forming the nodes of that logical network. Similar techniques for generating a logical network based on a physical network might also be used in creating storage networks or ad hoc wireless networks based on a physical network topology. In those cases, the task to be distributed would not be computation as such, but the storage of electronic data, or the forwarding of messages or packets across the network.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03732731A EP1514183A2 (en) | 2002-06-20 | 2003-06-19 | Distributed computer |
US10/517,434 US7937704B2 (en) | 2002-06-20 | 2003-06-19 | Distributed computer |
CA2489142A CA2489142C (en) | 2002-06-20 | 2003-06-19 | Distributed computer |
AU2003240116A AU2003240116A1 (en) | 2002-06-20 | 2003-06-19 | Distributed computer |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP02254294 | 2002-06-20 | ||
EP02254294.8 | 2002-06-20 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2004001598A2 true WO2004001598A2 (en) | 2003-12-31 |
WO2004001598A3 WO2004001598A3 (en) | 2004-12-09 |
Family
ID=29797291
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2003/002631 WO2004001598A2 (en) | 2002-06-20 | 2003-06-19 | Distributed computer |
Country Status (5)
Country | Link |
---|---|
US (1) | US7937704B2 (en) |
EP (1) | EP1514183A2 (en) |
AU (1) | AU2003240116A1 (en) |
CA (1) | CA2489142C (en) |
WO (1) | WO2004001598A2 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006072987A (en) * | 2004-08-02 | 2006-03-16 | Sony Computer Entertainment Inc | Network system, management computer, cluster management method, and computer program |
WO2007032713A1 (en) | 2005-09-14 | 2007-03-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Controlled temporary mobile network |
EP1785865A1 (en) * | 2004-08-02 | 2007-05-16 | Sony Computer Entertainment Inc. | Network system, management computer, cluster management method, and computer program |
US7610333B2 (en) | 2002-12-31 | 2009-10-27 | British Telecommunications Plc | Method and apparatus for operating a computer network |
EP1810447B1 (en) * | 2004-10-12 | 2010-07-28 | International Business Machines Corporation | Method, system and program product for automated topology formation in dynamic distributed environments |
US7805503B2 (en) | 2007-05-10 | 2010-09-28 | Oracle International Corporation | Capability requirements for group membership |
US7937704B2 (en) | 2002-06-20 | 2011-05-03 | British Telecommunications Public Limited Company | Distributed computer |
WO2020009875A1 (en) * | 2018-07-02 | 2020-01-09 | Convida Wireless, Llc | Dynamic fog service deployment and management |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB0412655D0 (en) * | 2004-06-07 | 2004-07-07 | British Telecomm | Distributed storage network |
US8051170B2 (en) | 2005-02-10 | 2011-11-01 | Cisco Technology, Inc. | Distributed computing based on multiple nodes with determined capacity selectively joining resource groups having resource requirements |
US7543020B2 (en) | 2005-02-10 | 2009-06-02 | Cisco Technology, Inc. | Distributed client services based on execution of service attributes and data attributes by multiple nodes in resource groups |
US7356539B2 (en) | 2005-04-04 | 2008-04-08 | Research In Motion Limited | Policy proxy |
US8832156B2 (en) * | 2009-06-15 | 2014-09-09 | Microsoft Corporation | Distributed computing management |
EP2383955B1 (en) | 2010-04-29 | 2019-10-30 | BlackBerry Limited | Assignment and distribution of access credentials to mobile communication devices |
US8572229B2 (en) | 2010-05-28 | 2013-10-29 | Microsoft Corporation | Distributed computing |
US9268615B2 (en) * | 2010-05-28 | 2016-02-23 | Microsoft Technology Licensing, Llc | Distributed computing using communities |
US8998544B1 (en) | 2011-05-20 | 2015-04-07 | Amazon Technologies, Inc. | Load balancer |
US9264395B1 (en) | 2012-04-11 | 2016-02-16 | Artemis Internet Inc. | Discovery engine |
US8990392B1 (en) * | 2012-04-11 | 2015-03-24 | NCC Group Inc. | Assessing a computing resource for compliance with a computing resource policy regime specification |
US8799482B1 (en) | 2012-04-11 | 2014-08-05 | Artemis Internet Inc. | Domain policy specification and enforcement |
US9928149B2 (en) | 2014-08-29 | 2018-03-27 | Cynny Space Srl | Systems and methods to maintain data integrity and redundancy in a computing system having multiple computers |
CN112152871B (en) * | 2020-08-14 | 2021-09-24 | 上海纽盾科技股份有限公司 | Artificial intelligence test method, device and system for network security equipment |
CN116996359B (en) * | 2023-09-26 | 2023-12-12 | 中国空气动力研究与发展中心计算空气动力研究所 | Method and device for constructing network topology of supercomputer |
Family Cites Families (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US662235A (en) * | 1900-04-07 | 1900-11-20 | Samuel W Hornbeck | Support for music-leaf turners. |
US5129080A (en) | 1990-10-17 | 1992-07-07 | International Business Machines Corporation | Method and system increasing the operational availability of a system of computer programs operating in a distributed system of computers |
US5313631A (en) | 1991-05-21 | 1994-05-17 | Hewlett-Packard Company | Dual threshold system for immediate or delayed scheduled migration of computer data files |
US5732397A (en) * | 1992-03-16 | 1998-03-24 | Lincoln National Risk Management, Inc. | Automated decision-making arrangement |
US5423037A (en) * | 1992-03-17 | 1995-06-06 | Teleserve Transaction Technology As | Continuously available database server having multiple groups of nodes, each group maintaining a database copy with fragments stored on multiple nodes |
AU3944793A (en) * | 1992-03-31 | 1993-11-08 | Aggregate Computing, Inc. | An integrated remote execution system for a heterogenous computer network environment |
US5745687A (en) * | 1994-09-30 | 1998-04-28 | Hewlett-Packard Co | System for distributed workflow in which a routing node selects next node to be performed within a workflow procedure |
US5790848A (en) * | 1995-02-03 | 1998-08-04 | Dex Information Systems, Inc. | Method and apparatus for data access and update in a shared file environment |
JP4309480B2 (en) * | 1995-03-07 | 2009-08-05 | 株式会社東芝 | Information processing device |
US5564037A (en) | 1995-03-29 | 1996-10-08 | Cheyenne Software International Sales Corp. | Real time data migration system and method employing sparse files |
EP0826181A4 (en) * | 1995-04-11 | 2005-02-09 | Kinetech Inc | Identifying data in a data processing system |
US5774668A (en) * | 1995-06-07 | 1998-06-30 | Microsoft Corporation | System for on-line service in which gateway computer uses service map which includes loading condition of servers broadcasted by application servers for load balancing |
US5666486A (en) | 1995-06-23 | 1997-09-09 | Data General Corporation | Multiprocessor cluster membership manager framework |
US5829023A (en) * | 1995-07-17 | 1998-10-27 | Cirrus Logic, Inc. | Method and apparatus for encoding history of file access to support automatic file caching on portable and desktop computers |
GB9521568D0 (en) * | 1995-10-20 | 1995-12-20 | Lynxvale Ltd | Delivery of biologically active polypeptides |
AU2343097A (en) * | 1996-03-21 | 1997-10-10 | Mpath Interactive, Inc. | Network match maker for selecting clients based on attributes of servers and communication links |
US6438614B2 (en) | 1998-02-26 | 2002-08-20 | Sun Microsystems, Inc. | Polymorphic token based control |
US6128590A (en) * | 1996-07-09 | 2000-10-03 | Siemens Nixdorf Informationssysteme Aktiengesellschaft | Method for the migration of hardware-proximate, subprogram-independent programs with portable and non-portable program parts |
GB9617859D0 (en) * | 1996-08-27 | 1996-10-09 | Metrix S A | Management of workstations |
US6108699A (en) | 1997-06-27 | 2000-08-22 | Sun Microsystems, Inc. | System and method for modifying membership in a clustered distributed computer system and updating system configuration |
FR2767939B1 (en) * | 1997-09-04 | 2001-11-02 | Bull Sa | MEMORY ALLOCATION METHOD IN A MULTIPROCESSOR INFORMATION PROCESSING SYSTEM |
US6289424B1 (en) * | 1997-09-19 | 2001-09-11 | Silicon Graphics, Inc. | Method, system and computer program product for managing memory in a non-uniform memory access system |
US6353608B1 (en) * | 1998-06-16 | 2002-03-05 | Mci Communications Corporation | Host connect gateway for communications between interactive voice response platforms and customer host computing applications |
AUPP638698A0 (en) * | 1998-10-06 | 1998-10-29 | Canon Kabushiki Kaisha | Efficient memory allocator utilising a dual free-list structure |
US6405284B1 (en) * | 1998-10-23 | 2002-06-11 | Oracle Corporation | Distributing data across multiple data storage devices in a data storage system |
US6393485B1 (en) * | 1998-10-27 | 2002-05-21 | International Business Machines Corporation | Method and apparatus for managing clustered computer systems |
US7047416B2 (en) * | 1998-11-09 | 2006-05-16 | First Data Corporation | Account-based digital signature (ABDS) system |
US6249844B1 (en) * | 1998-11-13 | 2001-06-19 | International Business Machines Corporation | Identifying, processing and caching object fragments in a web environment |
US6433802B1 (en) * | 1998-12-29 | 2002-08-13 | Ncr Corporation | Parallel programming development environment |
US6330621B1 (en) * | 1999-01-15 | 2001-12-11 | Storage Technology Corporation | Intelligent data storage manager |
US6438705B1 (en) * | 1999-01-29 | 2002-08-20 | International Business Machines Corporation | Method and apparatus for building and managing multi-clustered computer systems |
FR2792087B1 (en) * | 1999-04-07 | 2001-06-15 | Bull Sa | METHOD FOR IMPROVING THE PERFORMANCE OF A MULTIPROCESSOR SYSTEM INCLUDING A WORK WAITING LINE AND SYSTEM ARCHITECTURE FOR IMPLEMENTING THE METHOD |
US6801949B1 (en) * | 1999-04-12 | 2004-10-05 | Rainfinity, Inc. | Distributed server cluster with graphical user interface |
US6463457B1 (en) * | 1999-08-26 | 2002-10-08 | Parabon Computation, Inc. | System and method for the establishment and the utilization of networked idle computational processing power |
US7062556B1 (en) * | 1999-11-22 | 2006-06-13 | Motorola, Inc. | Load balancing method in a communication network |
US7003571B1 (en) * | 2000-01-31 | 2006-02-21 | Telecommunication Systems Corporation Of Maryland | System and method for re-directing requests from browsers for communication over non-IP based networks |
WO2001065380A1 (en) * | 2000-02-29 | 2001-09-07 | Iprivacy Llc | Anonymous and private browsing of web-sites through private portals |
WO2001082678A2 (en) * | 2000-05-02 | 2001-11-08 | Sun Microsystems, Inc. | Cluster membership monitor |
US7434257B2 (en) * | 2000-06-28 | 2008-10-07 | Microsoft Corporation | System and methods for providing dynamic authorization in a computer system |
JP3784245B2 (en) | 2000-07-03 | 2006-06-07 | 松下電器産業株式会社 | Receiver |
US6622221B1 (en) * | 2000-08-17 | 2003-09-16 | Emc Corporation | Workload analyzer and optimizer integration |
US6662235B1 (en) | 2000-08-24 | 2003-12-09 | International Business Machines Corporation | Methods systems and computer program products for processing complex policy rules based on rule form type |
US7162538B1 (en) | 2000-10-04 | 2007-01-09 | Intel Corporation | Peer to peer software distribution system |
US6631449B1 (en) * | 2000-10-05 | 2003-10-07 | Veritas Operating Corporation | Dynamic distributed data system and method |
AU2002230799A1 (en) * | 2000-11-01 | 2002-05-15 | Metis Technologies, Inc. | A method and system for application development and a data processing architecture utilizing destinationless messaging |
US7165107B2 (en) * | 2001-01-22 | 2007-01-16 | Sun Microsystems, Inc. | System and method for dynamic, transparent migration of services |
WO2002057917A2 (en) * | 2001-01-22 | 2002-07-25 | Sun Microsystems, Inc. | Peer-to-peer network computing platform |
US20020099815A1 (en) * | 2001-01-25 | 2002-07-25 | Ranjan Chatterjee | Event driven modular controller method and apparatus |
WO2002065329A1 (en) * | 2001-02-14 | 2002-08-22 | The Escher Group, Ltd. | Peer-to peer enterprise storage |
US20030115251A1 (en) * | 2001-02-23 | 2003-06-19 | Fredrickson Jason A. | Peer data protocol |
US6898634B2 (en) * | 2001-03-06 | 2005-05-24 | Hewlett-Packard Development Company, L.P. | Apparatus and method for configuring storage capacity on a network for common use |
US6871219B2 (en) * | 2001-03-07 | 2005-03-22 | Sun Microsystems, Inc. | Dynamic memory placement policies for NUMA architecture |
US6961727B2 (en) * | 2001-03-15 | 2005-11-01 | International Business Machines Corporation | Method of automatically generating and disbanding data mirrors according to workload conditions |
US7539664B2 (en) * | 2001-03-26 | 2009-05-26 | International Business Machines Corporation | Method and system for operating a rating server based on usage and download patterns within a peer-to-peer network |
US7065587B2 (en) | 2001-04-02 | 2006-06-20 | Microsoft Corporation | Peer-to-peer name resolution protocol (PNRP) and multilevel cache for use therewith |
US6961539B2 (en) * | 2001-08-09 | 2005-11-01 | Hughes Electronics Corporation | Low latency handling of transmission control protocol messages in a broadband satellite communications system |
US7092977B2 (en) * | 2001-08-31 | 2006-08-15 | Arkivio, Inc. | Techniques for storing data based upon storage policies |
US20030061491A1 (en) * | 2001-09-21 | 2003-03-27 | Sun Microsystems, Inc. | System and method for the allocation of network storage |
US7212301B2 (en) * | 2001-10-31 | 2007-05-01 | Call-Tell Llc | System and method for centralized, automatic extraction of data from remotely transmitted forms |
EP1315066A1 (en) * | 2001-11-21 | 2003-05-28 | BRITISH TELECOMMUNICATIONS public limited company | Computer security system |
JP4197495B2 (en) | 2002-02-14 | 2008-12-17 | 富士通株式会社 | Data storage control program and data storage control method |
JP4223729B2 (en) * | 2002-02-28 | 2009-02-12 | 株式会社日立製作所 | Storage system |
US20030204856A1 (en) * | 2002-04-30 | 2003-10-30 | Buxton Mark J. | Distributed server video-on-demand system |
US7937704B2 (en) | 2002-06-20 | 2011-05-03 | British Telecommunications Public Limited Company | Distributed computer |
US7613796B2 (en) * | 2002-09-11 | 2009-11-03 | Microsoft Corporation | System and method for creating improved overlay network with an efficient distributed data structure |
US8204992B2 (en) * | 2002-09-26 | 2012-06-19 | Oracle America, Inc. | Presence detection using distributed indexes in peer-to-peer networks |
GB0230331D0 (en) * | 2002-12-31 | 2003-02-05 | British Telecomm | Method and apparatus for operating a computer network |
US7152077B2 (en) * | 2003-05-16 | 2006-12-19 | Hewlett-Packard Development Company, L.P. | System for redundant storage of data |
US7096335B2 (en) * | 2003-08-27 | 2006-08-22 | International Business Machines Corporation | Structure and method for efficient management of memory resources |
GB0412655D0 (en) | 2004-06-07 | 2004-07-07 | British Telecomm | Distributed storage network |
2003
- 2003-06-19 US US10/517,434 patent/US7937704B2/en active Active
- 2003-06-19 WO PCT/GB2003/002631 patent/WO2004001598A2/en not_active Application Discontinuation
- 2003-06-19 CA CA2489142A patent/CA2489142C/en not_active Expired - Fee Related
- 2003-06-19 EP EP03732731A patent/EP1514183A2/en not_active Ceased
- 2003-06-19 AU AU2003240116A patent/AU2003240116A1/en not_active Abandoned
Non-Patent Citations (3)
Title |
---|
OMER RANA: "Resource Discovery for Dynamic Clusters in Computational Grids", THE PROCEEDINGS OF THE 15TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM, pages 1 - 9 |
See also references of EP1514183A2 |
WITTIE, L.D.; VAN TILBORG: "MICROS, a Distributed Operating System for MICRONET, A Reconfigurable Network Computer", IEEE TRANS. ON COMPUTERS, vol. C-29, December 1980 (1980-12-01), pages 1133 - 1144 |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7937704B2 (en) | 2002-06-20 | 2011-05-03 | British Telecommunications Public Limited Company | Distributed computer |
US8463867B2 (en) | 2002-12-31 | 2013-06-11 | British Telecommunications Plc | Distributed storage network |
US7610333B2 (en) | 2002-12-31 | 2009-10-27 | British Telecommunications Plc | Method and apparatus for operating a computer network |
US8775622B2 (en) | 2004-08-02 | 2014-07-08 | Sony Corporation | Computer-based cluster management system and method |
EP1785865A1 (en) * | 2004-08-02 | 2007-05-16 | Sony Computer Entertainment Inc. | Network system, management computer, cluster management method, and computer program |
EP1785865A4 (en) * | 2004-08-02 | 2008-12-17 | Sony Computer Entertainment Inc | Network system, management computer, cluster management method, and computer program |
JP2006072987A (en) * | 2004-08-02 | 2006-03-16 | Sony Computer Entertainment Inc | Network system, management computer, cluster management method, and computer program |
EP1810447B1 (en) * | 2004-10-12 | 2010-07-28 | International Business Machines Corporation | Method, system and program product for automated topology formation in dynamic distributed environments |
US9021065B2 (en) | 2004-10-12 | 2015-04-28 | International Business Machines Corporation | Automated topology formation in dynamic distributed environments |
EP1925123A4 (en) * | 2005-09-14 | 2010-12-22 | Ericsson Telefon Ab L M | Controlled temporary mobile network |
EP1925123A1 (en) * | 2005-09-14 | 2008-05-28 | Telefonaktiebolaget LM Ericsson (publ) | Controlled temporary mobile network |
WO2007032713A1 (en) | 2005-09-14 | 2007-03-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Controlled temporary mobile network |
US7805503B2 (en) | 2007-05-10 | 2010-09-28 | Oracle International Corporation | Capability requirements for group membership |
WO2020009875A1 (en) * | 2018-07-02 | 2020-01-09 | Convida Wireless, Llc | Dynamic fog service deployment and management |
Also Published As
Publication number | Publication date |
---|---|
US7937704B2 (en) | 2011-05-03 |
AU2003240116A1 (en) | 2004-01-06 |
WO2004001598A3 (en) | 2004-12-09 |
US20050257220A1 (en) | 2005-11-17 |
EP1514183A2 (en) | 2005-03-16 |
CA2489142A1 (en) | 2003-12-31 |
CA2489142C (en) | 2013-11-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2489142C (en) | Distributed computer | |
CN108650262B (en) | Cloud platform expansion method and system based on micro-service architecture | |
EP1242882B1 (en) | A digital computer system and a method for responding to a request received over an external network | |
US20060150157A1 (en) | Verifying resource functionality before use by a grid job submitted to a grid environment | |
US20060048157A1 (en) | Dynamic grid job distribution from any resource within a grid environment | |
US20080183876A1 (en) | Method and system for load balancing | |
US20050108394A1 (en) | Grid-based computing to search a network | |
WO2018206374A1 (en) | Load balancing of machine learning algorithms | |
US6675199B1 (en) | Identification of active server cluster controller | |
JP2007518169A (en) | Maintaining application behavior within a sub-optimal grid environment | |
US8660996B2 (en) | Monitoring files in cloud-based networks | |
EP2380079B1 (en) | Parallel tasking application framework | |
US7783786B1 (en) | Replicated service architecture | |
US7966394B1 (en) | Information model registry and brokering in virtualized environments | |
JP2003058376A (en) | Distribution system, distribution server and its distribution method, and distribution program | |
CN111352716A (en) | Task request method, device and system based on big data and storage medium | |
CN113177179B (en) | Data request connection management method, device, equipment and storage medium | |
CN114064155A (en) | Container-based algorithm calling method, device, equipment and storage medium | |
Raghu et al. | Memory-based load balancing algorithm in structured peer-to-peer system | |
US11595471B1 (en) | Method and system for electing a master in a cloud based distributed system using a serverless framework | |
US6925491B2 (en) | Facilitator having a distributed configuration, a dual cell apparatus used for the same, and an integrated cell apparatus used for the same | |
CN106936643B (en) | Equipment linkage method and terminal equipment | |
CN113986835A (en) | Management method, device, equipment and storage medium for FastDFS distributed files | |
Byun et al. | DynaGrid: A dynamic service deployment and resource migration framework for WSRF-compliant applications | |
CN115516842A (en) | Orchestration broker service |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | ||
REEP | Request for entry into the european phase |
Ref document number: 2003732731 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2003732731 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2489142 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10517434 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2003732731 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: JP |