WO2004059503A1 - Virtual server cloud interfacing - Google Patents

Virtual server cloud interfacing

Info

Publication number: WO2004059503A1
Authority: WO (WIPO PCT)
Prior art keywords: server, cloud, logical, scm, server cloud
Application number: PCT/US2002/040286
Other languages: French (fr)
Inventors: Dave D. McCrory, Robert A. Hirschfeld
Original Assignee: Protier Corporation
Priority to US10/124,195 (US7574496B2)
Application filed by Protier Corporation
Priority to PCT/US2002/040286
Priority to AU2002364059A (AU2002364059A1)
Publication of WO2004059503A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/2866 Architectures; Arrangements
    • H04L 67/30 Profiles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40 Network security protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 Network architectures or network communication protocols for network security
    • H04L 63/08 Network architectures or network communication protocols for network security for authentication of entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/30 Definitions, standards or architectural aspects of layered protocol stacks
    • H04L 69/32 Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L 69/322 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L 69/329 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

Definitions

  • the present invention relates to virtualization and server technology, and more particularly to server cloud interfacing for establishing flexible logical server management.
  • There are many situations in which it is desired to lease one or more server computer systems on a short or long-term basis. Examples include educational or classroom services, demonstration of software to potential users or buyers, website server applications, etc.
  • the servers may be pre-configured with selected operating systems and application software as desired. Although physical servers may be leased and physically delivered for onsite use, servers may also be leased from a central or remote location and accessed via an intermediate network system, such as the Internet.
  • the primary considerations for remote access include the capabilities of the remote access software and the network connection or interface.
  • Virtualization technology enabled multiple logical servers to operate on a single physical computer.
  • logical servers were tied directly to physical servers because they relied on the physical server's attributes and resources for their identity. Virtualization technology weakened this restriction by allowing multiple logical servers to override a physical server's attributes and share its resources.
  • Each logical server is operated substantially independent of other logical servers and provides virtual isolation among users effectively partitioning a physical server into multiple logical servers.
  • a previous disclosure described an ability to completely separate logical servers from particular physical servers so that there was no permanent tie between a physical server and logical resources. Such separation allowed physical servers to act as a pool of resources supporting logical servers, so that a logical server may be reallocated to a different physical server within a server cloud without users experiencing any change in access approach. This removed the requirement, imposed by clustering, of pre-allocating physical resources prior to a physical resource change. It is further desired to provide additional allocation of resources between server clouds. Relationships between server clouds and other entities need to be defined to enable resource sharing and more efficient resource allocation.
  • Summary of the Present Invention:
  • the present invention concerns a server cloud manager (SCM) for controlling logical servers and physical resources that comprise a virtualized logical server cloud.
  • the SCM includes multiple core components and one or more interface components.
  • the core components serve as a shared foundation to collectively manage events, validate and authorize server cloud users and agents, enforce predetermined requirements and rules and store operation data.
  • the one or more interface components enable communication with external entities and include an SCM proxy manager that enables communication with one or more SCMs of other server clouds.
  • the core components include an event engine, an authentication engine, a rules engine and a database.
  • the event engine controls and manages events to be performed by the SCM.
  • the authentication engine validates users and agents of the server cloud and issues security credentials to authorized users and agents.
  • the rules engine validates and enforces predetermined requirements and rules to be followed by SCM operations.
  • the database stores information and includes data validation, data formatting and rules validation for the SCM and the server cloud.
  • the events controlled and managed by the event engine may include individual events or collections of events.
  • the interface components may include a user manager where the core components and the user manager collectively render graphical user interfaces and authorize users of the server cloud according to predetermined roles that define the rights and privileges for each user while accessing server cloud resources.
  • the interface components may include an agent manager that coordinates SCM events with agents within the server cloud that perform specified actions.
  • the interface components may include an administrator manager that renders a user interface, that enables access and control by one or more administrators of the SCM, and that coordinates with core components to authenticate administrative requests.
  • the interface components may include an advanced scripting manager that provides advanced scripting logic and interfaces to other management systems.
  • the interface components may include an SNMP manager that provides an interface between the SCM and external SNMP-based management systems.
  • the interface components may include an image manager that optimizes use of disk resources and files throughout a predetermined domain of the SCM.
  • the core components may employ a URI mapping as a syntax handle that provides sufficient context information and that describes a management relationship between different components of the SCM.
  • the URI mapping may include an identity aspect that determines an identity of an entity requesting an action to be performed.
  • the URI mapping may include a rights aspect that incorporates predetermined roles assigned to an entity that defines the rights and privileges assigned to the entity.
  • the URI mapping may include a presentation aspect that includes logical relationships that define how information is to be presented.
  • the URI mapping may include an implementation aspect that determines which resources or equipment of a domain of the SCM are affected by actions and commands.
  • the implementation aspect may support server abstraction and/or scripting abstraction.
  • the implementation aspect may incorporate a proxy function for relaying actions and commands to another server cloud, as in the sketch below.
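The four URI-mapping aspects can be pictured as successive checks applied to a single request. The following Python sketch is purely illustrative; the patent specifies no implementation, and every class, field, and string below is a hypothetical stand-in (the returned strings stand in for the presentation aspect).

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    uri: str          # e.g. "cloudA...LS1!reboot"
    credentials: str  # opaque identity token

@dataclass
class URIMapper:
    """Hypothetical handler applying the URI-mapping aspects in turn."""
    users: dict = field(default_factory=dict)      # credentials -> user (identity aspect)
    roles: dict = field(default_factory=dict)      # user -> allowed actions (rights aspect)
    resources: dict = field(default_factory=dict)  # logical server -> location (implementation aspect)

    def handle(self, req: Request) -> str:
        user = self.users.get(req.credentials)        # identity: who is asking?
        path, _, action = req.uri.partition("!")
        ls_name = path.split("...")[-1]
        if user is None or action not in self.roles.get(user, set()):
            return "denied"                           # rights: is the action allowed?
        location = self.resources.get(ls_name)        # implementation: where does it run?
        if location is None:
            return "unknown server"
        if location != "local":
            # implementation aspect with proxy function: relay to another cloud
            return f"proxied {action} on {ls_name} to {location}"
        return f"performed {action} on {ls_name} locally"

mapper = URIMapper(
    users={"tok-123": "alice"},
    roles={"alice": {"reboot", "login"}},
    resources={"LS1": "cloudB"},   # LS1 actually resides in cloud B
)
print(mapper.handle(Request("cloudA...LS1!reboot", "tok-123")))  # proxied reboot ...
```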
  • a server cloud system includes a first server cloud that includes a first server cloud manager (SCM) and a first logical server and a second server cloud that includes a second SCM.
  • the first and second SCMs are configured to cooperate to manage operation of the first logical server. Such configuration substantially enhances cloud to cloud interaction, operation and cooperation.
  • the first and second SCMs may be configured, for example, to cooperate to move the first logical server from the first server cloud to the second server cloud.
  • the second server cloud may also include a second logical server, where the first and second SCMs are configured to cooperate to ensure that only one of the first and second logical servers is active at any given time.
  • the first logical server may be activated during a first time period and placed in standby during a second time period, whereas the second logical server is activated during the second time period and placed in standby during the first time period.
  • the first and second SCMs may be configured to cooperate to replicate the first logical server to a second and unique logical server within the second server cloud.
  • the first and second server clouds may have a trust relationship such that the first and second SCMs are peers.
  • the first logical server may be within a subcloud of the first server cloud and the second SCM may have rights over the subcloud.
  • the server cloud system may include an intermediary that has a trust relationship with the first and second server clouds.
  • the first and second server clouds may cooperate with each other through the intermediary.
  • the first and second SCMs may be configured to cooperate via the intermediary to move the first logical server from the first server cloud to the second server cloud.
  • the second server cloud may include a second logical server where the first and second SCMs are configured to cooperate via the intermediary to ensure that only one of the first and second logical servers is active at any given time.
  • the first and second SCMs may be configured to cooperate via the intermediary to replicate the first logical server to a second and unique logical server within the second server cloud.
  • the second SCM may operate as a proxy for the first logical server so that the first logical server may appear to exist within the second server cloud while actually residing in the first server cloud. If the first server cloud includes a second logical server, the second SCM may operate as a proxy for the first and second logical servers and the first and second SCMs may be configured to cooperate to ensure that only one of the first and second logical servers is active at any given time.
  • the second server cloud may be an exchange cloud that employs intercloud proxy and commercial terms to enable commercial transactions associated with resources within the first server cloud.
  • the first and second server clouds may establish a commercial relationship for the purpose of enabling the second server cloud to directly use and resell logical server resources in the first server cloud.
  • the server cloud system may further include a third server cloud that has an authorized user and that has a commercial relationship with the exchange cloud. In this case, the authorized user may gain access to the first logical server active in the first server cloud via intercloud proxy via the exchange cloud.
  • the exchange cloud may transfer the first logical server from the first server cloud to the third server cloud for access by an end consumer.
  • the location of the first logical server may be transparent to the end consumer.
  • the transfer of the first logical server may be performed by the exchange cloud transparently to the end consumer.
  • FIG. 1A illustrates a routing function
  • FIG. 1B illustrates a switching function
  • FIG. 1C illustrates a replication function.
  • FIG. 2A illustrates the routing function
  • FIG. 2B illustrates the switching function
  • FIG. 2C illustrates the replication function.
  • FIGS. 3A - 3C are figurative block diagrams illustrating supercloud actions, or actions requested of one SCM of a cloud that are transparently performed by a different SCM of another cloud.
  • FIG. 3A illustrates a routing function
  • FIG. 3B illustrates a switching function
  • FIG. 3C illustrates a replication function.
  • Figures 4A and 4B are figurative block diagrams illustrating user interface with clouds and logical server proxying.
  • Figures 5A - 5G are figurative block diagrams of various scenarios that may occur for a customer with growing or changing needs over time illustrating the flexibility of logical server operation, location and accessibility.
  • FIG. 6 is a figurative block diagram illustrating operation of an exchange cloud as an intermediary according to an embodiment of the present invention.
  • FIG. 7 is a figurative block diagram illustrating logical server management by an exchange cloud according to an embodiment of the present invention.
  • FIG. 8 is a figurative block diagram illustrating load shifting of a logical server across geographic areas employing the proxy functionality.
  • FIG. 9 is a figurative block diagram illustrating various trust relationships with an SCM of a server cloud and between server clouds.
  • FIG. 10 is a figurative block diagram illustrating an example of proxy syntax for proxying a logical server associated with a user from one server cloud to another server cloud.
  • FIG. 11 is a block diagram illustrating the fundamental components of an SCM of a server cloud including core components and interface modules.
  • FIG. 12 is a figurative block diagram that illustrates relationship mapping between data and information and the associated syntax employed by core components of the exemplary SCM of FIG. 11.
  • a "physical” device is a material resource such as a server, network switch, or disk drive. Even though physical devices are discrete resources, they are not inherently unique. For example, random access memory (RAM) devices and a central processing unit (CPU) in a physical server may be interchangeable between like physical devices. Also, network switches may be easily exchanged with minimal impact.
  • a "logical” device is a representation of a physical device to make it unique and distinct from other physical devices. For example, every network interface has a unique media access control (MAC) address. A MAC address is the logical unique identifier of a physical network interface card (NIC).
  • a "traditional" device is a combined logical and physical device in which the logical device provides the entire identity of a physical device.
  • a physical NIC has its MAC address permanently affixed so the physical device is inextricably tied to the logical device.
  • a "virtualized” device breaks the traditional interdependence between physical and logical devices. Virtualization allows logical devices to exist as an abstraction without being directly tied to a specific physical device. Simple virtualization can be achieved using logical names instead of physical identifiers. For example, using an Internet Uniform Resource Locator (URL) instead of a server's MAC address for network identification effectively virtualizes the target server. Complex virtualization separates physical device dependencies from the logical device. For example, a virtualized NIC could have an assigned MAC address that exists independently of the physical resources managing the NIC network traffic.
  • a "server cloud” or “cloud” is a collection of logical devices which may or may not include underlying physical servers. The essential element of a cloud is that all logical devices in the cloud may be accessed without any knowledge or with limited knowledge of the underlying physical devices within the cloud.
  • Fundamentally, a cloud has persistent logical resources, but is non-deterministic in its use of physical resources.
  • the Internet may be viewed as a cloud because two computers using logical names can reliably communicate even though the physical network is constantly changing.
  • a "virtualized logical server cloud” refers to a logical server cloud comprising multiple logical servers, where each logical server is linked to one of a bank of physical servers.
  • the boundary of the logical server cloud is defined by the physical resources controlled by a "cloud management infrastructure" or a "server cloud manager" or SCM.
  • the server cloud manager has the authority to allocate physical resources to maintain the logical server cloud; consequently, the logical server cloud does not exceed the scope of physical resources under management control.
  • the physical servers controlled by the SCM determine a logical server cloud's boundary.
  • Agents are resource managers that act under the direction of the SCM. An agent's authority is limited in scope and it is typically task-specific.
  • a physical server agent is defined to have the authority to allocate physical resources to logical servers, but does not have the authority or capability to create administrative accounts on a logical server.
  • An agent generally works to service requests from the server cloud manager and does not instigate actions for itself or on other agents.
  • a prior disclosure introduced virtualization that enabled complete separation between logical and physical servers so that a logical server may exist independent of a specific physical server.
  • the logical server cloud virtualization added a layer of abstraction and redirection between logical and physical servers. Logical servers were implemented to exist as logical entities that were decoupled from physical server resources that instantiated the logical server.
  • Decoupling meant that the logical attributes of a logical server were non-deterministically allocated to physical resources, thereby effectively creating a cloud of logical servers over one or more physical servers.
  • the prior disclosure described a new deployment architecture which applied theoretical treatment of servers as logical resources in order to create a logical server cloud. Complete logical separation was facilitated by the addition of the SCM, which is an automated multi-server management layer.
  • a fundamental aspect of a logical server cloud is that the user does not have to know or provide any physical server information to access one or more logical server(s), since this information is maintained within the SCM.
  • Each logical server is substantially accessed in the same manner regardless of underlying physical servers. The user experiences no change in access approach even when a logical server is reallocated to a different physical server. Any such reallocation can be completely transparent to the user.
  • the present disclosure builds upon logical server cloud virtualization by adding a layer of abstraction and redirection between logical servers and the server clouds as managed and controlled by corresponding SCMs.
  • the server cloud is accessed via its SCM by a user via a user interface for accessing logical and physical servers and by the logical and physical servers themselves, such as via logical and/or physical agents as previously described.
  • SCMs may further interface with each other according to predetermined relationships or protocols, such as between "peer" SCMs or server clouds, or between a server cloud and a "super peer", otherwise referred to as an "Exchange".
  • the present disclosure introduces the concept of a "subcloud” in which an SCM interfaces or communicates with one or more logical and/or physical servers of another server cloud.
  • the SCM of the server cloud operates as an intermediary or proxy for enabling communication with a logical server activated within a remote cloud.
  • Logical servers may be moved from one server cloud to another or replicated between clouds.
  • a remote SCM may manage one or more logical servers in a subcloud of a remote server cloud.
  • a logical server may not be aware that it is in a remote cloud and may "think" that or otherwise behave as though it resides in the same cloud as the SCM managing its operations.
  • the proxy functionality enables transparency between users and logical servers.
  • the user of a logical server may or may not be aware of where the logical server exists or in which server cloud it is instantiated. Many advantages and capabilities are enabled with cloud to cloud interfacing.
  • Routing, switching, replication and cloud balancing may be performed intercloud, such as between "trusted" clouds; extracloud, such as between "untrusted" clouds; or via an intermediary (e.g., super-peer, supercloud, shared storage, exchange) in which actions requested of one SCM are transparently performed by a different SCM.
  • An exchange cloud may be established that has predetermined commercial relationships with other clouds or that is capable of querying public or otherwise accessible clouds for resource information.
  • Such an exchange cloud may be established on a commercial basis, for example, to provide a free market exchange for servers or
  • FIGS. 1A - 1C are figurative block diagrams illustrating intercloud actions or actions between trusted clouds A and B where data can be transferred directly between clouds.
  • An SCM 103 of cloud A and an SCM 105 of cloud B in each of these cases are considered "peers".
  • FIG. 1A illustrates the routing function in which only a single instance of a logical server (LS) 101 exists within the boundaries of both clouds A and B.
  • the LS 101 may be active or held in standby.
  • the SCMs 103 and 105 coordinate to move the instance of LS 101 from cloud A to cloud B as illustrated by arrows 102.
  • FIG. IB illustrates the switching function in which multiple instances of the LS 101 exist although only one is active at any given time.
  • Cloud A includes LS instances 101a, 101b and 101c while cloud B includes instances 101d, 101e and 101f.
  • the LS 101f is shown with diagonal lines to indicate that it is active in cloud B.
  • the remaining LS instances 101a - 101e are in standby as indicated by a shading pattern.
  • the SCMs 103 and 105 coordinate with each other as illustrated by arrow 104 to manage the multiple logical servers 101a - 101f to ensure that only one logical server is active at any given time, as in the sketch below.
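The switching invariant, that at most one instance of a logical server is active at any given time, can be made concrete with a small model. This is an illustrative sketch under the assumption of a simple "deactivate everywhere, then activate one" ordering; all names are hypothetical.

```python
class LogicalServerInstance:
    def __init__(self, name):
        self.name = name
        self.active = False

class SCM:
    """Hypothetical manager tracking instances of one logical server in its cloud."""
    def __init__(self, cloud, instances):
        self.cloud = cloud
        self.instances = {i.name: i for i in instances}

def switch(peers, target_cloud, target_name):
    # Deactivate everywhere first, then activate exactly one instance,
    # so the invariant "at most one active" holds even mid-transition.
    for scm in peers:
        for inst in scm.instances.values():
            inst.active = False
    peers_by_cloud = {scm.cloud: scm for scm in peers}
    peers_by_cloud[target_cloud].instances[target_name].active = True

a = SCM("A", [LogicalServerInstance(n) for n in ("101a", "101b", "101c")])
b = SCM("B", [LogicalServerInstance(n) for n in ("101d", "101e", "101f")])
switch([a, b], "B", "101f")
assert sum(i.active for scm in (a, b) for i in scm.instances.values()) == 1
```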
  • FIG. 1C illustrates the replication function in which a logical server LS 101 is replicated from a master template to create a set of similar although unique logical servers 107, 109 and 111.
  • the LS 107 is replicated within the same cloud A whereas the logical servers 109 and 111 are replicated in another cloud B.
  • the logical servers 107, 109 and 111 are shown with different line patterns to illustrate that they are different logical servers even if similar to the original LS 101.
  • the SCMs 103 and 105 coordinate with each other as illustrated by arrow 106 to replicate the LS 101 as a master template into multiple unique logical servers 107 - 111.
  • the instance information from the LS 101 is passed to the SCM 105 for replicating it within cloud B.
  • the logical servers 101, 107, 109 and 111 may all be activated at the same time since they are different logical servers with unique identities, as the sketch below illustrates.
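Replication differs from switching in that each copy receives its own identity, so all copies may run at once. A minimal sketch, assuming a UUID-style logical identifier stands in for the unique attributes (name, MAC address and the like):

```python
import copy
import uuid

def replicate(template: dict, count: int) -> list:
    """Clone a master template into unique logical servers.

    Each replica keeps the template's configuration but receives its own
    name and logical identifier, so replicas can all be active at once.
    """
    replicas = []
    for i in range(count):
        ls = copy.deepcopy(template)
        ls["name"] = f'{template["name"]}-replica-{i}'
        ls["logical_id"] = uuid.uuid4().hex   # unique identity per replica
        replicas.append(ls)
    return replicas

master = {"name": "LS101", "logical_id": uuid.uuid4().hex, "os": "linux", "ram_mb": 512}
for ls in replicate(master, 3):
    print(ls["name"], ls["logical_id"])
```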
  • Figures 2A - 2C are figurative block diagrams illustrating extracloud actions, or actions between "untrusted" clouds A and B where data is not transferred directly between clouds. Instead, data and commands are transferred via an intermediary (IM) 213, which may comprise a super-peer, a supercloud, a shared storage or an exchange.
  • the clouds A and B include SCMs 203 and 205, respectively, which are similar to the SCMs 103 and 105.
  • the routing, switching and replication functions are similar, except that the SCMs 203 and 205 cooperate with the IM 213 to perform the respective extracloud functions.
  • FIG. 2A illustrates the routing function in which only a single instance of a logical server (LS) 201 exists.
  • the LS 201 may be active or held in standby.
  • the SCMs 203 and 205 coordinate with the IM 213 to move the instance of LS 201 from cloud A to cloud B as illustrated by arrows 202.
  • FIG. 2B illustrates the switching function in which multiple instances of the LS 201 exist although only one is active at any given time.
  • Cloud A includes LS instances 201a, 201b and 201c while cloud B includes instances 201d, 201e and 201f.
  • the LS 201f is shown with diagonal lines to indicate that it is active in cloud B.
  • the remaining LS instances 201a - 201e are in standby as indicated by shading.
  • the SCMs 203 and 205 coordinate with each other via the IM 213 as illustrated by arrows 204 to manage the multiple logical servers 201a - 201f to ensure that only one of the logical servers is active at any given time.
  • FIG. 2C illustrates the replication function in which a logical server LS 201 is replicated from a master template to create a set of similar although unique logical servers 207, 209 and 211.
  • the LS 207 is replicated within the same cloud A whereas the logical servers 209 and 211 are replicated in another cloud B.
  • the logical servers 207, 209 and 211 are shown with different patterns to illustrate that they are different logical servers even if similar to the original LS 201.
  • the SCMs 203 and 205 coordinate with each other via the IM 213 as illustrated by arrows 206 to replicate the LS 201 as a master template into multiple unique logical servers 207 - 211.
  • the instance information from the LS 201 is passed to the SCM 205 via the IM 213 for replicating it within cloud B.
  • the logical servers 201, 207, 209 and 211 may all be simultaneously activated since they are different logical servers with unique identities.
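In the extracloud arrangements of FIGS. 2A - 2C the SCMs never exchange data directly; everything passes through the IM. A toy relay model follows, with all names hypothetical:

```python
class Intermediary:
    """Relays messages between SCMs of untrusted clouds; the only shared channel."""
    def __init__(self):
        self.scms = {}

    def register(self, scm):
        self.scms[scm.cloud] = scm

    def relay(self, src, dst, message):
        # The IM could equally be shared storage: src writes, dst later reads.
        print(f"IM: {src} -> {dst}: {message}")
        return self.scms[dst].receive(src, message)

class SCM:
    def __init__(self, cloud, im):
        self.cloud, self.im = cloud, im
        im.register(self)

    def send(self, dst, message):
        return self.im.relay(self.cloud, dst, message)  # never contacts dst directly

    def receive(self, src, message):
        return f"{self.cloud} handled '{message}' from {src}"

im = Intermediary()
scm_a, scm_b = SCM("A", im), SCM("B", im)
print(scm_a.send("B", "move LS201"))
```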
  • FIGS. 3A - 3C are figurative block diagrams illustrating supercloud actions or actions requested of one SCM 303 of cloud A that are explicitly or transparently performed by a different SCM 305 of another cloud B.
  • Cloud A operates as a supercloud with respect to a subcloud of cloud B.
  • the clouds A and B include SCMs 303 and 305, respectively, which are similar to the SCMs 103 and 105 or 203 and 205. In each case, the SCM 303 acts as a proxy or gateway to the SCM 305 so that the SCM 303 of cloud A appears to own or otherwise control logical servers in the cloud B.
  • FIG. 3A illustrates the routing function in which only a single instance of a logical server LS 301 exists.
  • the LS 301 resides in the cloud B although it appears to reside in cloud A as shown by LS 301' with dotted lines.
  • the SCM 303 forwards requests from LS 301' to the SCM 305 and to the LS 301 where the LS 301 is active.
  • the proxy SCM 303 appears to own the LS 301 even though it is active in a different cloud B.
  • the LS 301 may not "know” that it is in cloud B but may "think” or otherwise act as though it is active in cloud A.
  • FIG. 3B illustrates the switching function in which multiple instances of the LS 301 exist although only one is active at any given time.
  • the LS 301 appears to be active in cloud A as shown as LS 301' with dotted lines, while cloud B includes instances 301a, 301b and 301c.
  • the LS 301c is shown with diagonal lines to indicate that it is active in cloud B.
  • the remaining LS instances 301a and 301b are in standby as indicated by shading.
  • the SCM 303 acts as a gateway for access to the switched LS 301 (LS 301a, b or c) in the cloud B via intermediate paths as shown by arrow 304.
  • the gateway SCM 303 appears to own or otherwise control the active one of the LSs 301a-c even though active in cloud B. Again, the active one of the LSs 301a-c may not know that it is in cloud B but may think it is active in cloud A.
  • FIG. 3C illustrates the replication function in which a logical server LS 301 is replicated from a master template in cloud A to create a set of similar although unique logical servers 309 and 311 in cloud B as shown by arrows 306.
  • the LS 301 is active in cloud A.
  • the logical servers 309 and 311 are shown with different line patterns to illustrate that they are different logical servers even if similar to the original LS 301.
  • the SCMs 303 and 305 coordinate with each other as illustrated by arrows 306 to replicate the LS 301 as a master template into multiple unique logical servers.
  • the SCM 303 instructs the SCM 305 to replicate the LS 301 in the cloud A as the logical servers 309 and 311 in cloud B.
  • the function of cloud balancing may be performed within any of the intercloud, extracloud or supercloud architectures and facilitated by the routing, switching or replication functions.
  • In the routing function as applied to the intercloud, extracloud or supercloud configurations, the LS 101 is moved from one physical server (PS) to another with more capacity, with greater resources, or simply in a different geographic area or time zone.
  • In the switching function, the commands are proxied to a different instance of the logical server in another cloud, where the different instances may have different capacity or be located in a different geographic area or time zone.
  • the SCMs of the clouds A and B coordinate (either directly or via the IM 213) to select the instance of the LS with the appropriate capacity or resource level based on demands or needs.
  • In the replication function, the SCM creates additional LS instances with variant capacities or in different areas or time zones and replaces one LS instance with another in order to allocate more capacity within a cloud or across clouds. One way to picture the balancing decision is sketched below.
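Cloud balancing reduces to picking the instance or cloud whose capacity best fits current demand. The data model and the "cheapest feasible instance" policy below are assumptions for illustration, not taken from the patent:

```python
def select_instance(instances, demand):
    """Pick the cheapest instance whose free capacity covers the demand.

    instances: maps an instance id to (free_capacity, cost).
    Returns None when no instance fits, signalling that replication of a
    new, larger instance would be needed instead.
    """
    feasible = {k: v for k, v in instances.items() if v[0] >= demand}
    if not feasible:
        return None
    return min(feasible, key=lambda k: feasible[k][1])

instances = {
    "LS101@cloudA": (10, 1.0),   # small, cheap, local
    "LS101@cloudB": (50, 1.5),   # larger capacity in another cloud
    "LS101@cloudC": (80, 2.0),   # largest, different time zone
}
print(select_instance(instances, demand=30))  # LS101@cloudB
```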
  • FIGS. 4A and 4B are figurative block diagrams illustrating user interface with clouds and logical server proxying.
  • a user (U) 401 attempts to access a logical server LS1 403 via cloud A.
  • the user 401 provides a pathname indicative of cloud A and the logical server LS1 403, such as, for example, a pathname including "CLOUDA...LS1!ACTION", where "CLOUDA" references cloud A, "LS1" references the logical server LS1 403, and "ACTION" denotes a particular action or operation to perform, such as login, reboot, etc.
  • although the logical server LS1 403 appears to the user 401 to be active in cloud A, the logical server LS1 403 is actually active in cloud B.
  • the SCM (not shown) of cloud A serves as a proxy to forward data and commands from cloud A to the SCM of cloud B, which forwards the data and commands to the LS1 403 active in cloud B.
  • Such a proxy scenario may be completely transparent to the user 401, such that the user thinks or believes that LS1 403 resides in cloud A.
  • the logical server LS1 403 may behave in a manner that indicates that it is active in cloud A when in fact it is active in cloud B.
  • Cloud A may lack the necessary resources to build or operate the logical server LS1, so that it is moved or replicated and operated in cloud B and proxied via cloud A.
  • the underlying physical resources of cloud A, including its physical servers, may have experienced a temporary failure or shutdown or the like, which would otherwise render the logical server LS1 inoperable or unavailable.
  • Nonetheless, the logical server LS1 remains available in cloud B via proxy.
  • the resources of cloud A may be temporarily oversubscribed or subscribed at or near full capacity, so that the logical server LS1 is temporarily moved to cloud B to prevent interruption in service or to maintain a desired level of service.
  • the user 401 may have requested additional capacity or capabilities that were not available at the time in cloud A, so that the expanded-capacity LS1 is temporarily or permanently active in cloud B.
  • Such a proxy may be on a permanent or temporary basis depending upon the situation or the needs of the user 401.
  • FIG. 4B is a figurative block diagram similar to FIG. 4A except that the logical server LS1 is active in cloud A and a copy of the logical server LS1 is maintained in cloud B. As illustrated by dashed line 405, LS1 is active in either cloud A or B at any given time, although preferably not in both to avoid ambiguity or malfunction.
  • the cloud A can command activation of LS1 in either cloud A or B and redirect data and commands as needed.
  • the user 401 accesses the logical server LS1 via cloud A regardless of where it is activated, and may not even be aware that the backup of LS1 exists in a different cloud.
  • Alternatively, the user 401 may not only be aware of the backup copy of LS1 in cloud B, but may in fact control the switching of activation of LS1 between the clouds A and B. In this manner, the user 401 may load balance or cluster servers as desired.
  • Figures 5A - 5G are figurative block diagrams of various scenarios that may occur for a customer (CS) 501 with growing or changing needs over time, illustrating the flexibility of logical server operation, location and accessibility.
  • CS 501 may have need of one or more servers but may not desire or have the resources to purchase physical servers. As shown in FIG. 5A, CS 501 decides to rent or purchase one or more logical servers 503 from the owner of a server cloud A.
  • CS 501 accesses the logical servers 503 remotely across suitable networks as shown by arrow 502.
  • CS 501 chooses to acquire the use of one or more additional logical servers 505 from the owner of another cloud B as shown in FIG. 5B. There may be various reasons for using the servers of another cloud, such as price, capability or capacity considerations. Since the logical servers 505 are in a different cloud, a separate path or link to the cloud B is necessary as shown by arrow 504.
  • CS 501 determines to co-manage all of the logical servers 503 and 505 from a single cloud, such as via cloud A as shown in FIG. 5C.
  • CS 501 may decide, for example, to treat all of the logical servers 503, 505 as one larger pool of servers to avoid separate accesses 502, 504 or to simplify addressing.
  • CS 501 accesses all of the logical servers 503, 505 in both clouds A and B via the SCM of cloud A through one link as shown by arrow 507.
  • a subcloud link shown by arrow 509 is established between clouds A and B, so that the logical servers 505 are managed by the SCM of cloud A.
  • the logical servers 505 are part of a subcloud of cloud A that exists in cloud B.
  • CS 501 eventually decides to self-manage its logical servers 503, 505 and creates a local server cloud 511 as shown in Figures 5D and 5E.
  • the server cloud 511 need not include any logical servers and need only have sufficient physical resources to access and manage logical servers within other clouds.
  • the logical servers 503 and 505 are still active in the clouds A and B, respectively.
  • the server cloud 511 includes an SCM 513 that accesses the server clouds A and B via separate links as shown by arrows 515 and 517.
  • Such configuration may be a logical extension of the configuration of FIG. 5B in which the clouds A and B are accessed via separate links, except that the configuration of FIG. 5D is more convenient since all of the logical servers 503, 505 may be locally managed by CS 501 via its local server cloud 511. In effect, the cloud 511 has access to subclouds of clouds A and B.
  • In FIG. 5E, a single link from cloud 511 to cloud A is established as illustrated by arrow 519.
  • the subcloud link 509 between the clouds A and B is used so that cloud A has subcloud rights to the logical servers 505 active in cloud B.
  • Such configuration may be a logical extension of the configuration of FIG. 5C in which the clouds A and B are accessed via a single link, except that the configuration of FIG. 5E is more convenient since all of the logical servers 503, 505 are locally managed by CS 501 via its local server cloud 511. In effect, the cloud 511 has access to a subcloud of cloud A, and cloud A has access to a subcloud of cloud B. CS 501 still has sufficient access to all of its logical servers 503, 505. As CS 501 continues to grow, it may decide to acquire local physical assets and activate one or more local logical servers 521 within cloud 511 as shown in FIG. 5F.
  • CS 501 may choose to add physical resources and consolidate some of its logical servers into its local cloud 511.
  • In FIG. 5G, for example, the logical servers 505 are moved from cloud B into the local cloud 511. In this case, links and relationships with cloud B are no longer necessary and may be terminated.
  • the logical servers 505 may effectively be the same in capacity regardless of the cloud in which they are activated, so that the underlying cloud is transparent.
  • the persistent attributes are configured to be the same.
  • FIG. 6 is a figurative block diagram illustrating operation of an exchange cloud E as an intermediary according to an embodiment of the present invention.
  • An exchange cloud may be implemented in a similar manner as any server cloud, such as including an SCM 603 or the like, but does not necessarily have to include any logical servers and may include relatively minimal physical resources and/or physical servers.
  • the exchange cloud E includes an exchange database 601 or the like that stores information associated with one or more other public or otherwise accessible clouds A, B, C, etc.
  • the clouds A-C may have, for example, predetermined relationships with the exchange cloud E as defined by parameters contained within the exchange database 601, which also includes access information for those clouds.
  • An exemplary commercial use of the exchange cloud E may be the ability for potential users to search for and use logical servers in the other clouds A-C that meet the user's needs or requirements.
  • a user 605 contacts the exchange cloud E via link arrow 611 with a set of parameters or criteria for purposes of finding one or more logical servers that meet its requirements at the lowest price.
  • the exchange cloud E forwards the requirements parameters to the other clouds A-C, or otherwise searches its exchange database 601 to find as many servers as possible that meet the needs of the user 605.
  • the exchange cloud E either selects logical servers from one of the clouds A-C or the clouds A-C may bid against each other to win a contract with the user 605.
  • the inverse situation is contemplated in which multiple users may bid for access privileges of a server cloud.
  • Another exemplary embodiment is the exchange cloud E operating as a central manager or the like for allocating resources distributed among multiple clouds A-C for a plurality of users.
  • the user 605 requests one or more logical servers from the exchange cloud E, which locates one or more suitable logical servers and provides access to the user 605.
  • the exchange cloud E identifies a logical server 607 located in cloud C in response to a request by the user 605.
  • the SCM 603 may act as a proxy or intermediary for providing access of the logical server 607 to the user 605, such as shown by dashed arrow 609. In such case, the user 605 maintains a relationship with the exchange cloud E as indicated by arrow 611 through which it accesses the logical server 607 located in cloud C.
  • the SCM 603 forwards access or other credential information to the user 605, which uses the access information to directly access the logical server 607 via the cloud C as illustrated by dashed arrow 613.
  • the user 605 may not have any rights within the cloud C, but may inherit rights otherwise granted to the SCM 603 for the exchange cloud E. The matching of user requirements to available servers is sketched below.
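The exchange's search of its database on behalf of a user might be sketched as a filter over advertised offers followed by a price ordering; every field and number below is invented for illustration:

```python
def find_offers(exchange_db, requirements):
    """Return offers meeting the user's requirements, cheapest first.

    exchange_db: list of dicts describing logical servers advertised by
    member clouds; requirements: minimum attributes requested by the user.
    """
    matches = [
        offer for offer in exchange_db
        if offer["cpu"] >= requirements["cpu"]
        and offer["ram_mb"] >= requirements["ram_mb"]
    ]
    return sorted(matches, key=lambda o: o["price"])

exchange_db = [
    {"cloud": "A", "ls": "LS-7", "cpu": 2, "ram_mb": 1024, "price": 40},
    {"cloud": "B", "ls": "LS-3", "cpu": 4, "ram_mb": 2048, "price": 55},
    {"cloud": "C", "ls": "LS-9", "cpu": 4, "ram_mb": 4096, "price": 50},
]
best = find_offers(exchange_db, {"cpu": 4, "ram_mb": 2048})[0]
print(best)  # cloud C wins the bid at price 50
```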
  • FIG. 7 is a figurative block diagram illustrating logical server management by an exchange cloud E according to an embodiment of the present invention.
  • the exchange cloud E includes an exchange database 701 and an SCM 703.
  • a customer (CS) cloud 705 includes a plurality of local servers 707 which it manages locally.
  • the customer CS deems that it requires additional logical servers and accesses the exchange cloud E for additional servers via link arrow 715.
  • the customer CS may have immediate needs for 10 additional servers and may anticipate a need for more servers in the future.
  • the exchange cloud E identifies a cloud A that has sufficient server capacity to meet the immediate and future needs of the customer CS.
  • the cloud A includes 10 logical servers 709 and has at least 40 additional servers 711 to meet the future needs of the customer CS.
  • the customer CS has the option of purchasing or leasing only the 10 logical servers 709 and delaying acquisition of additional servers.
  • the cloud A may be able to provide a group of 50 logical servers at a bulk price so that each of the 50 servers is available at reduced cost.
  • the customer CS decides to acquire (rent or purchase) 50 logical servers from cloud A, including the 10 logical servers 709 for meeting its immediate needs and 40 additional logical servers 711 for meeting its future needs.
  • the customer CS may choose, if the option is available, to manage the logical servers 709, 711 from the cloud A as shown by arrow 713.
  • the customer CS passes to the exchange cloud E the credentials for accessing the logical servers 709, 711 as shown by arrows 715 and 717. In this manner, the SCM 703 of the exchange cloud E operates as a proxy for accessing the logical servers 709, 711 in cloud A on behalf of the customer CS as shown by arrow 719.
  • the logical servers 709 and 711 appear to be located in the exchange cloud E as shown at 709' and 711', respectively. Since the customer CS has immediate need of only 10 logical servers, it assumes control of the logical servers 709 via the exchange cloud E as indicated by arrow 721. The customer CS accesses the logical servers 709 via the SCM 703, and the SCM 703 proxies the logical servers 709 so that they appear to be within the cloud E as shown as 709'.
  • the logical servers 711 may be temporarily sold on the open market as indicated by arrow 723. In this manner, the customer CS assumes control of the logical servers 709 and sells the remaining logical servers 711 to third parties via the exchange cloud E. This may provide a significant savings to the customer CS in that most, if not all, of its cost in the logical servers 711 may be recovered via third-party rental (minus any management fees charged by the owners of the exchange cloud E). As the needs of the customer CS grow over time, it may request that some or all of the logical servers 711 be reallocated to the customer CS as necessary.
  • the exchange cloud E may move one or more of the logical servers 709, 711 to a different cloud, such as another cloud B if desired.
  • the logical servers 711 may be moved to cloud B while not needed by the customer CS and while being rented by third parties, if desired. In this manner, the present invention provides complete flexibility for managing logical servers between clouds.
  • FIG. 8 is a figurative block diagram illustrating load shifting of a logical server across geographic areas employing the proxy functionality.
  • the Earth 801 is shown in perspective as though viewed from a distance looking directly at the North Pole (NP).
  • Four separate server clouds A, B, C and D are distributed around the Earth 801, generally divided by its "four corners" or separated into equivalent quadrants, such as by 6-hour time periods of the total 24-hour period defined by the Earth's rotation as indicated by an arrow 803.
  • a logical server (LS) is active from 8PM to 2AM in cloud A as shown at 805, where all times are referenced with respect to the physical location of cloud A.
  • Other instances of the logical server LS exist in each of the other clouds B, C and D as shown at 807, 809 and 811, respectively, each in standby mode while active in cloud A.
  • the logical server LS is proxied from cloud A to cloud B for the next 6-hour time period, 2AM to 8AM, as indicated by arrow 806.
  • the LS 805 in cloud A is placed in standby mode while the instance of LS 807 in cloud B is activated. Users attempting to access LS 805 in cloud A are simply proxied to the activated LS 807 in cloud B.
  • the logical server is proxied from cloud A to cloud C for the next 6-hour time period, 8AM to 2PM, as indicated by arrow 808.
  • the LS 807 in cloud B is placed back in standby mode while the instance of the LS 809 in cloud C is activated.
  • the LS 805 in cloud A remains in standby and cloud A proxies data and information to cloud C.
  • the logical server is proxied from cloud A to cloud D for the next 6-hour time period, 2PM to 8PM, as indicated by arrow 810.
  • the LS 809 in cloud C is placed in standby mode while the instance of the LS 811 in cloud D is activated.
  • the logical server instances in clouds A-C remain in standby and cloud A proxies data and information to cloud D.
  • the LS 805 is activated once again in cloud A for the next 6-hour time period, 8PM to 2AM, while the instances of the logical server LS in clouds B-D are once again placed in standby. Operation proceeds in this manner for as long as desired; the rotation is sketched below.
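The follow-the-clock rotation of FIG. 8 amounts to a schedule mapping each 6-hour window (in cloud A local time) to the cloud whose instance should be active. A minimal sketch, assuming a static table:

```python
# Each entry: (start_hour, end_hour) in cloud A local time -> active cloud.
# Windows may wrap around midnight, so membership is tested accordingly.
SCHEDULE = [
    ((20, 2), "A"),   # 8PM - 2AM
    ((2, 8), "B"),    # 2AM - 8AM
    ((8, 14), "C"),   # 8AM - 2PM
    ((14, 20), "D"),  # 2PM - 8PM
]

def active_cloud(hour: int) -> str:
    for (start, end), cloud in SCHEDULE:
        if start < end and start <= hour < end:
            return cloud
        if start > end and (hour >= start or hour < end):  # wraps midnight
            return cloud
    raise ValueError("hour out of range")

for h in (21, 3, 9, 15):
    print(h, "->", active_cloud(h))
# Cloud A's SCM proxies requests to whichever instance is active;
# standby instances elsewhere remain synchronized but inactive.
```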
  • the ability to sequentially proxy the activation of a logical server from one cloud to another over time provides many useful benefits and advantages. It may be desired, for example, to activate a web server in a local geographic area during peak load or access in that area to best serve the needs of users across the globe over time. Alternatively, the local area resources may be needed during peak hours for local access so that one or more servers providing other needs, such as heavy computational
  • FIG. 9 is a figurative block diagram illustrating various trust relationships with the SCM of a server cloud and between server clouds.
  • a user 901 of a logical server 903 of a server cloud A has a "credentialed" (CRED) trust relationship with the cloud A or its SCM, shown as SCMA 905.
  • a credentialed trust relationship is a form of "explicit" trust relationship in which "rights" or "privileges" provided to, and/or actions allowed by, the user 901 are predetermined, clearly defined, and usually "limited" or otherwise "restricted" by a contract or agreement.
  • a non-exhaustive list of "rights" within a cloud relative to logical servers includes "add", "delete", "move", "replicate", "reset", "reboot", "build", "store", "snapshot", among others.
  • the agreement is an expression of boundaries of rights and privileges, such as limited number or types of rights and/or limits on particular rights.
  • a credentialed trust relationship may be "transitory" in that it has a predefined duration or expiration date according to the agreement.
  • the user 901 identifies itself to the SCMA 905 with predetermined credentials, such as a username and password or the like, which invokes the user's access to the cloud A according to the credentialed trust relationship.
  • the rights of the user 901 may be limited to the LS 903 or may include one or more other logical servers within the cloud A.
  • An agent of SCMA 905, shown as SCMA Agent 907, is an internal "implicit" component of the server cloud A having implicit rights. Implicit rights are generally "unlimited” in that the SCMA Agent 907 has complete control over all logical and physical servers active within the server cloud A via the SCMA 905.
  • SCM agents are employed to perform actions on servers within a cloud, where such actions are delayed, triggered, or invoked by a combination of both. It is appreciated that an SCM agent generally has all of the rights of the SCM itself, which may be further determined by event or timing controls.
  • a delayed action may occur after a predetermined time period or a predefined time and a triggered action is invoked upon detection of an event that causes the action to be queued.
  • An action may be invoked after a delay or at a particular time if and when an event is detected. For example, a logical server may be rebuilt upon logoff or after expiration of a time period.
  • Examples of actions or "atomic" actions include "get status", "send email", "reset", "reboot", "change CD ROM", "rebuild image", "snapshot", "restore snapshot", "file copy", "file delete", "file move", "resynch passwords", "start sequence", "runscript", among others.
  • a script may be executed, which is a sequence of atomic actions. The preceding list is exemplary only and not intended to be exhaustive. A toy model of delayed and triggered actions follows.
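Delayed, triggered, and combined actions can be modeled as queue entries carrying a due time, a trigger event, or both. The sketch below is one plausible shape for such an event engine, not the patent's design:

```python
import heapq
import itertools
import time

class EventQueue:
    """Toy event engine: actions may run after a delay, on an event, or both."""
    def __init__(self):
        self._timed = []          # heap of (due_time, seq, action)
        self._triggered = {}      # event name -> list of (delay, action)
        self._seq = itertools.count()  # tie-breaker for equal due times

    def after(self, delay, action):
        heapq.heappush(self._timed, (time.time() + delay, next(self._seq), action))

    def on(self, event, action, delay=0.0):
        self._triggered.setdefault(event, []).append((delay, action))

    def fire(self, event):
        # A triggered action is queued when its event is detected; a nonzero
        # delay yields "invoked after a delay if and when the event occurs".
        for delay, action in self._triggered.pop(event, []):
            self.after(delay, action)

    def run_due(self):
        now = time.time()
        while self._timed and self._timed[0][0] <= now:
            _, _, action = heapq.heappop(self._timed)
            action()

q = EventQueue()
q.on("logoff", lambda: print("rebuild image"))   # e.g. rebuild an LS upon logoff
q.after(0.0, lambda: print("get status"))        # delayed (here: immediate) action
q.fire("logoff")
q.run_due()
```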
  • the LS 903 within the server cloud A has implicit rights within cloud A although the implicit rights are "restricted" to itself.
  • the LS 903 may request additional resources from SCMA 905 for purposes of cloud balancing, for example, to match resource demands or loads.
  • the LS 903 may perform a switching function and request a "sister” logical server or the like for the purposes of spreading existing or anticipated loads.
  • the LS 903 may perform other actions, such as snapshot or reset or the like as necessary or desired.
  • the LS 903 may also request to be moved to a different cloud or the SCMA 905 may move the LS 903 to a different cloud in an explicit or transparent manner.
  • Another server cloud, such as an exchange cloud E with manager SCME 909, may have "subcloud" rights within the server cloud A as illustrated by subcloud E'.
  • a non-exclusive and exemplary list of subcloud rights includes "Add", "Maintain", "Move", "Delete", "Replicate", "Proxy", etc.
  • Subcloud rights are explicit “permission” rights within another cloud and include “existence” permission rights over a subset of a cloud and separate “expansion” permission rights that allow expansion or contraction of the subset to add or subtract logical servers.
  • the existence permission rights precede the expansion permission rights and may include an entire server cloud or may include none of the cloud, with expansion permission rights to increase the subcloud size to include one or more logical servers. Additional subcloud relationships may be defined; the two-grant model is sketched below.
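The two-grant subcloud model, existence rights over a current subset plus separate expansion rights to grow or shrink it, might look as follows; class and method names are hypothetical:

```python
class Subcloud:
    """Hypothetical model of subcloud permission rights within a host cloud."""
    def __init__(self, host_cloud, members, can_expand):
        self.host_cloud = host_cloud
        self.members = set(members)   # existence rights over this subset
        self.can_expand = can_expand  # separate expansion permission

    def add(self, ls):
        if not self.can_expand:
            raise PermissionError("no expansion rights in " + self.host_cloud)
        self.members.add(ls)

    def remove(self, ls):
        if not self.can_expand:
            raise PermissionError("no expansion rights in " + self.host_cloud)
        self.members.discard(ls)

# An exchange cloud E may start with an empty subcloud in cloud A but grow it.
e_prime = Subcloud("A", members=[], can_expand=True)
e_prime.add("LS903")
print(e_prime.members)  # {'LS903'}
```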
  • the cloud A may have subcloud rights over another cloud B via SCMB 911, as illustrated by a subcloud A' within the server cloud B.
  • the cloud B may have subcloud rights over another cloud C via SCMC 913, as illustrated by a subcloud B' within the server cloud C controlled by SCMC 913.
  • cloud A has explicit subcloud rights over subcloud A' within cloud B, which further has explicit subcloud rights over subcloud B' within cloud C.
  • cloud A may have "implied" rights over subcloud B' within cloud C based on the existing trust relationships. Since cloud B has rights over subcloud B' within cloud C, cloud B may proxy an LS from cloud B
  • Cloud A may move LS 903 to subcloud A' as shown at 915
  • cloud B may move and proxy the already proxied LS 903 to subcloud B' as shown at 917, so that cloud A has implied rights over subcloud B' at least with respect to the proxied LS 903.
  • the LS 903 may request to be moved to cloud B. While in cloud B (or subcloud A'), the LS 903 (915) does not have implicit trust or rights within cloud B, but nonetheless "inherits" explicit rights within cloud B according to the explicit rights between clouds A and B.
  • FIG. 10 is a figurative block diagram illustrating an example of proxy syntax for proxying a logical server LS 1013 associated with a user 1001 from one server cloud A to another server cloud B.
  • the user 1001 accesses the LS 1013 via cloud A and may believe that the LS 1013 is activated within cloud A when in fact it is activated within cloud B as shown.
  • the server clouds A and B include server cloud managers SCMA 1005 and SCMB 1021, respectively.
  • the user 1001 and the cloud managers SCMA 1005 and SCMB 1021 are interfaced with each other via a network 1003, which may be a global computer network such as the Internet, although any type of intermediate network is contemplated.
  • the user 1001 attempts to access the LS 1013 in cloud A using a Uniform Resource Identifier (URI) or the like incorporating an address that defines a route to the LS 1013 in cloud A.
  • the user 1001 employs a Uniform Resource Locator (URL) within the URI that identifies the target server cloud A.
  • the user 1001 may enter a URI address having syntax "A.DC.R.LS!ACTION", in which "A" is a cloud URL or cloud Internet Protocol (IP) address or cloud name for cloud A, "DC" denotes a particular data center within cloud A, "R" denotes a particular rack of servers of the data center DC, "LS" is the logical server name for LS 1013, the exclamation point "!" is a separator, and "ACTION" denotes a particular action to be performed by the LS 1013, such as "login" or the like. It is noted that the data center DC and rack R information are navigation aids or specific path information employed for a particular configuration and need not be provided to uniquely access the LS 1013. Instead, the user 1001 may enter a short-hand version "A...LS!ACTION" to uniquely identify the target logical server LS 1013.
  • the omitted path information is filled in by the SCMA 1005. It is noted that alternative syntax formats are contemplated, such as, for example, "LS1@cloudA", which identifies a logical server LS1 located at or otherwise referenced via cloud A. LS1 may be accessed via cloud A by proxy if it is not currently located within cloud A. A small parser for this syntax is sketched below.
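A parser for the "A.DC.R.LS!ACTION" syntax and its short-hand "A...LS!ACTION" form, following only the grammar given in the example above (everything else is an assumption):

```python
def parse_uri(address: str) -> dict:
    """Parse 'cloud.datacenter.rack.ls!action' or the short form 'cloud...ls!action'.

    The data-center and rack components are navigation aids; when omitted
    (the '...' form) the SCM is assumed to fill in the path itself.
    """
    path, sep, action = address.partition("!")
    if not sep:
        raise ValueError("missing '!' separator")
    if "..." in path:
        cloud, ls = path.split("...", 1)
        dc = rack = None   # to be filled in by the SCM
    else:
        cloud, dc, rack, ls = path.split(".")
    return {"cloud": cloud, "dc": dc, "rack": rack, "ls": ls, "action": action}

print(parse_uri("A.DC.R.LS!login"))
print(parse_uri("A...LS!login"))   # short-hand; SCMA fills the omitted path
```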
  • the address "A...LS! ACTION" provided by the user 1001 is received by the SCMA 1005 of the server cloud A. Assuming that the user 1001 is not currently logged into the LS 1013, the SCMA 1005 employs a credential (CRED) check 1009 to authenticate the user 1001 to determine if it has rights to access LS 1013. Although shown as a separate function, the credential check 1009 may be incorporated within the SCMA 1005 depending upon the particular configuration.
  • the SCMA 1005 prompts the user to provide the credential information. In one embodiment, the SCMA 1005 uses the credential information supplied by the user 1001 to determine the identity of the user and the associated level of rights and privileges provided to the identified user. If the credential information is incorrect or otherwise not recognized by the SCMA 1005, then the attempted command or login is rejected. Otherwise, the SCMA 1005 passes back a temporary token or the like that is used by the user 1001 for subsequent actions or commands during the current session. The supplied token is used to identify the user 1001 during the current session and to associate that user with their level of authority, rights, and/or level of access.
  • the SCMA 1005 accesses a proxy table 1011 or the like for accessing the LS 1013 on behalf of the user 1001.
  • the proxy table 1011 includes a proxy link, illustrated by arrow 1015, to an alternative address "B.DC.R.LS" to the LS 1013 activated within cloud B, where "B" denotes a cloud URL or the like addressing the server cloud B.
  • the alternative path "B.DC.R.LS" includes the necessary path information to locate the LS 1013 within cloud B as illustrated by dashed arrow 1017.
  • the server cloud A has subcloud rights for accessing the LS 1013 activated in cloud B.
  • the SCMA 1005 employs the alternative address, including the desired command provided from the authenticated user 1001, to access the LS 1013 in the server cloud B via the SCMB 1021.
  • the SCMB 1021 may employ a credential check 1023 that functions in a similar manner as the credential check 1009, where the SCMA 1005 is recognized or otherwise provides sufficient authentication or credential information to enable access to the LS 1013.
  • the user 1001 is provided access to the LS 1013 within the server cloud B via the SCMA 1005 and SCMB 1021 for subsequent commands and actions. i the embodiment shown, the user 1001 continues to access the LS 1013 via the SCMA 1005 employing a proxied relationship to the server cloud B. It is noted that the user 1001 may not have any implicit or explicit rights within the server cloud B. Thus, if the user 1001 attempts to provide the address "B.DC.R.LS" directly to the server cloud B using the same credentials for accessing cloud A, the SCMB 1021 may reject the access as not recognized by the credential check 1023.
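A minimal sketch of this proxied access, assuming a simple in-memory proxy table and a stand-in for the remote SCMB (all names here are hypothetical illustrations, not the disclosed implementation):

```python
# Sketch of proxied access: SCMA resolves a local name through its proxy
# table and relays the command to SCMB, which recognizes the peer SCM.

PROXY_TABLE_A = {
    # logical server name -> alternative address where it is activated
    "LS1013": "B.DC.R.LS1013",
}

def scm_b_execute(address, action, caller):
    """Stand-in for SCMB: its credential check recognizes the peer SCM,
    but would reject an unknown caller using cloud A credentials."""
    if caller != "SCMA":
        raise PermissionError("access not recognized by credential check")
    return f"{action} performed on {address}"

def scm_a_access(ls_name, action):
    """SCMA: serve locally if possible, otherwise proxy to the remote SCM."""
    remote = PROXY_TABLE_A.get(ls_name)
    if remote is None:
        return f"{action} performed locally on A...{ls_name}"
    return scm_a_proxy(remote, action)

def scm_a_proxy(remote_address, action):
    return scm_b_execute(remote_address, action, caller="SCMA")

print(scm_a_access("LS1013", "login"))  # transparently proxied to cloud B
```

Note that calling `scm_b_execute(..., caller="user1001")` directly would raise, mirroring the rejection described above when the user bypasses cloud A.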
FIG. 11 is a block diagram illustrating the fundamental components of an exemplary SCM 1101 of a typical server cloud, where the SCM 1101 includes core components 1103 and interface components that define how the SCM operates within the cloud and how it interfaces with external entities, including other SCMs. The core components 1103 of the SCM 1101 include an event engine 1105, a rules engine 1107, an authentication engine 1109 and a database 1111. The core components 1103 comprise a shared library of functions used by all SCM components and interface components. The interface components are considered part of the SCM 1101 and establish interfaces with external entities, such as users, administrators, agents, other SCMs, or other applications, such as management applications, billing applications, resource applications, etc.
The database 1111 stores data and parameters associated with the SCM 1101 and generally defines how the SCM 1101 tracks data and information. The database 1111 is integrated with the core engines and may even incorporate all or substantial parts of the core engines. The database 1111 includes, for example, data validation, data formatting, and rules validation. The event engine 1105 controls and manages all of the events to be performed by the SCM 1101, where such events are either immediately performed or queued for later execution. It is noted that "commands" and "actions" are generally synonymous and that "events" are commands or actions, whether individual or grouped into collections. The rules engine 1107 ensures that the SCM 1101 operates in a consistent manner with respect to data and information and applies the appropriate level of security for each operation. The operations of the SCM 1101 follow specific requirements and rules as validated and enforced by the rules engine 1107, including, for example, credential and role information. The authentication engine 1109 is used to validate users (explicit rights) and agents (implicit rights) and to generate and issue tokens or similar security credentials. For example, if the credentials provided by a user (or entity) are valid and recognized by the authentication engine 1109, the authentication engine 1109 generates and assigns a temporary token that is used by that user for each subsequent access during the current session until logoff. Tokens may be computer-generated alphanumeric or binary values temporarily assigned to each user. The authentication engine 1109 accesses the database 1111 to assign the privileges attached to the appropriate role to the authenticated user according to that user's role or authorizations.
The SCM 1101 may include one or more interface components that implement an interface layer, such as managers that implement interfaces with specific types of entities. Each interface component has its own requirements and methods and is designed to handle the operation of commands for specific entities. As shown, the interface components include a user manager 1113, an agent manager 1115, an SCM proxy manager 1117, an administrator manager 1119, an advanced scripting manager 1121, a simple network management protocol (SNMP) manager 1123, and an image manager 1125. The interface component managers shown and described herein are exemplary only, where each is optional depending upon the particular configuration and design criteria and where additional interface components may be defined, generated and deployed in a similar manner. Each SCM will have at least one interface component.
The user manager 1113 manages access to the SCM 1101 and the resources of the associated server cloud by users as previously described. The user manager 1113 builds appropriate user interfaces and translates SCM data into useful screens or renderings for display or consumption by each user. The agent manager 1115 coordinates SCM events with the appropriate agent(s) or other system components within the associated server cloud, such as physical server agents (PSA), logical server agents (LSA), etc. The SCM proxy manager 1117 enables communication with other SCMs, including proxy operations as described herein. The administrator manager 1119 incorporates scripting logic, renders user interface(s) to administrators, and provides useful access and control of the SCM 1101 and the server cloud and associated functions to one or more administrators. The advanced scripting manager 1121 enables a more sophisticated scripting interface with other management systems, such as a billing package or the like. The SNMP manager 1123 enables communication with an SNMP management system or entity. The image manager 1125 enables optimized use of storage resources and files throughout the entire domain of the SCM 1101, including the physical and logical resources of its home cloud and the resources within subclouds of other server clouds.
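The following Python sketch illustrates one plausible way an SCM could be composed from shared core components and optional, pluggable interface managers. The class and method names here are assumptions chosen to mirror the figure, not the patent's implementation:

```python
# Illustrative composition of an SCM: a shared core used by every
# interface manager, with managers dispatched by entity type.

class CoreComponents:
    """Shared library of functions used by every interface component."""
    def authenticate(self, credentials):
        return credentials == "valid"          # toy stand-in
    def queue_event(self, event):
        return f"queued: {event}"

class UserManager:
    def __init__(self, core):
        self.core = core
    def handle(self, request):
        return f"rendered user screen for: {request}"

class SCMProxyManager:
    def __init__(self, core):
        self.core = core
    def handle(self, request):
        return f"relayed to peer SCM: {request}"

class SCM:
    """An SCM bundles the shared core with one or more managers."""
    def __init__(self):
        core = CoreComponents()
        self.managers = {
            "user": UserManager(core),      # each manager is optional,
            "scm": SCMProxyManager(core),   # but at least one is present
        }
    def dispatch(self, entity_type, request):
        return self.managers[entity_type].handle(request)

scm = SCM()
print(scm.dispatch("user", "list logical servers"))
print(scm.dispatch("scm", "status of LS 1013"))
```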
FIG. 12 is a figurative block diagram that illustrates relationships between data and information and associated syntax employed by the core components 1103 of the exemplary SCM 1101. The SCM 1101 serves as a gateway to the resources and services within its associated server cloud, including its logical servers, physical servers and/or proxied servers. A central aspect of the core components 1103 is a URI 1205, which is a handle that provides the complete context information for the SCM 1101. The URI 1205 is a naming convention or key that describes the relationship between different components within the SCM 1101. The URI 1205 essentially unifies different aspects of information and data managed by the SCM 1101. The core components 1103 use the URI 1205 mapping as a syntax that maps between different types of data. The URI 1205 is a bridge point between different aspects or functions of the core components 1103, and includes an identity aspect 1207, a rights aspect 1209, a presentation aspect 1211 and an implementation aspect 1213. The URI 1205 maps to particular resources of the server cloud, such as a logical server. Every interaction within the SCM 1101 has three components: an identity (identifying a user, agent, other cloud, etc.), a URI defining the target resource or server, and a command or action. For example, a user USER1 (identity) may request a SHUTDOWN (command) to shut down a logical server LS1 (located by URI). The URI 1205 serves as a mapping between information: a given URI plus any one other aspect provides enough information to determine the other aspects. For example, a URI plus the identity of a user provides sufficient information to determine the rights or roles assigned to that user for a given logical server (such as the commands that are authorized for the user of a target logical server) as well as the physical implementation aspect that defines the physical resources affected by the command.
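For illustration only, this Python sketch models the three-part interaction and the mapping just described, using toy tables in place of the database 1111; the table contents and function names are assumptions:

```python
# Sketch of the (identity, URI, command) interaction: URI + identity
# resolves the authorized rights, and URI alone resolves the physical
# implementation. Tables below are toy data, not patent content.

RIGHTS = {
    # (identity, URI) -> commands authorized by the assigned role
    ("USER1", "cloudA/LS1"): {"SHUTDOWN", "REBOOT", "LOGIN"},
}
IMPLEMENTATION = {
    # URI -> physical resource currently instantiating the logical server
    "cloudA/LS1": "PS1",
}

def execute(identity, uri, command):
    allowed = RIGHTS.get((identity, uri), set())
    if command not in allowed:
        raise PermissionError(f"{identity} may not {command} {uri}")
    target = IMPLEMENTATION[uri]   # URI plus one aspect determines the rest
    return f"{command} issued to {target} on behalf of {identity}"

print(execute("USER1", "cloudA/LS1", "SHUTDOWN"))
```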
The core components 1103 combine multiple types of data to compose and execute actions and user interfaces (UI). Actual work is done by the SCM 1101, whether execution of an action or rendering of a UI. The core components 1103 employ the identity aspect 1207 to determine the identity ("who") of the entity requesting an action to be performed. The entity or user may comprise an individual or a group of individuals. A user attempting to access the server cloud must first present valid credential information, such as a username and password or the like, which identifies the user to the SCM 1101. Each user has unique credentials which map to a corresponding role. Each individual user may be assigned separate and unique credentials within a given user group, and/or users within a given user group may be assigned to the same role with different credential information. The rights aspect 1209 incorporates predetermined roles assigned to each entity or user that define what that entity is allowed to do. Each role defines the rights and privileges assigned to one or more users as enforced by the rules engine 1107, such as which commands are authorized for a particular user or entity. The presentation aspect 1211 includes the logical or virtual relationships that define how information is to be presented. There are many presentations or paths to the SCM 1101 of a server cloud and to its resources. The paths may be represented by addresses or the like according to any predetermined syntax or protocol. The presentation aspect 1211 incorporates various paths or logical representations to access one or more logical servers or other resources within the server cloud, and defines various access paths to the servers and resources within (or proxied by) the server cloud. Different presentations may correspond to different privileges. Each role may map to one or more presentations within the presentation aspect 1211. Generally, each role maps to the highest level presentation authorized for that role. The presentation aspect 1211 incorporates a logical identity of the server cloud, and may optionally include other logical representations, such as Data Center and/or Rack representations depending upon the particular configuration.
The implementation aspect 1213 determines which physical resources or equipment of which cloud are affected by an action or command. A requested action may be sourced from an agent of a logical or physical server, or sourced externally, to be performed by a logical server within the server cloud or within another cloud or subcloud via proxy. The command or action may include the status of any given logical server within the cloud or proxied by the cloud. The implementation aspect 1213 enables server abstraction so that logical servers may be abstracted from the underlying hardware. The implementation aspect 1213 also enables scripting abstraction, in that action scripts initiated by users or agents, which might otherwise be invalid because of transparent physical changes, are transparently handled by SCMs. The implementation aspect 1213 manages the relationship between logical and physical resources transparently to the user. Abstraction enhances scalability and maintenance because it simplifies server operation. For example, a user may have rights to a logical server (LS1) via the SCM regardless of any rights to the physical server (PS1) to which the logical server is linked. The user may have no direct rights at all with respect to PS1 and may even lack any knowledge whatsoever of PS1. The SCM controls and maintains the LS1 and PS1 relationship for controlling operations initiated by an authorized user. The SCM manages changes in relationships between components transparently to the users and agents. For example, the user may initiate a script to shut down LS1 on PS1, copy LS1 to PS2, start LS1 on PS2 and create a user on the moved LS1. The SCM validates the shutdown, copy, start and create requests for LS1 on behalf of the user. The SCM may re-map or route the requests to PS1 and PS2 and authenticate the requests as valid. PS1 and PS2 act on the user-initiated requests as authorized by the SCM, and the SCM re-maps feedback to LS1.
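A hedged sketch of this re-mapping, in which the user's script is written purely against the logical server LS1 while an assumed SCM layer routes each step to whatever physical server currently hosts it (class and method names are illustrative, not the disclosed mechanism):

```python
# The user script references only LS1; the SCM transparently re-maps
# each step to the current physical host, so the script stays valid
# across physical moves.

class ProxySCM:
    def __init__(self, placement):
        self.placement = dict(placement)   # logical server -> physical host

    def shutdown(self, ls):
        return f"shutdown {ls} on {self.placement[ls]}"

    def copy(self, ls, ps):
        self.placement[ls] = ps            # transparent re-mapping
        return f"copy {ls} to {ps}"

    def start(self, ls):
        return f"start {ls} on {self.placement[ls]}"

    def create_user(self, ls, name):
        return f"create user {name} on {ls} ({self.placement[ls]})"

def user_script(scm):
    """Valid regardless of which physical server instantiates LS1."""
    return [scm.shutdown("LS1"),
            scm.copy("LS1", "PS2"),
            scm.start("LS1"),
            scm.create_user("LS1", "newuser")]

for step in user_script(ProxySCM({"LS1": "PS1"})):
    print(step)   # shutdown on PS1, copy to PS2, start on PS2, create user
```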
The implementation aspect 1213 enables scripting abstraction since the LS context is the only context needed to manipulate the LS instance for both its logical and physical attributes. Thus, scripting is global in the sense that a script that is valid for one physical relationship remains valid regardless of the physical configuration or LS location, since the SCM transparently maintains the relationships to control actions and operations without requiring scripting modifications from the perspective of the user. The SCM may modify scripting and procedures in accordance with the specific relationships at the time action is necessary (e.g., re-mapping), but such modification is handled and controlled transparently by the SCM. In this manner, even though it appears to the user that the entire operation is handled directly, the actual control mechanisms are transparently controlled by the SCM on behalf of the user.
An Agent Request is a request for a physical server agent (PSA), logical server agent (LSA) or SCM to perform one or more specific actions. An Agent Response is generated in reply to an Agent Request once the specified agent has completed the request; a response is only generated if the initial request included a valid response block.
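A minimal sketch of such an exchange; the message fields below are illustrative assumptions, as the disclosure does not specify a wire format:

```python
# Hypothetical agent request/response exchange. The dict fields are
# assumptions chosen for illustration only.

def agent_request(agent, actions, want_response=True):
    return {
        "agent": agent,
        "actions": actions,
        # a response is generated only if this block is present and valid
        "response_block": {"status": None} if want_response else None,
    }

def perform(request):
    """The specified agent completes the request, then replies only if
    the initial request included a valid response block."""
    results = [f"done: {a}" for a in request["actions"]]
    if request["response_block"] is None:
        return None                      # no response expected
    return {"agent": request["agent"], "results": results}

req = agent_request("PSA-1", ["allocate memory", "attach disk"])
print(perform(req))
```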

Abstract

A server cloud manager (SCM) 1101 for controlling logical servers 101 and physical resources that form a virtualized logical server cloud (e.g., A, B, C, D, E). The SCM includes multiple core components 1103 and one or more interface components 1113 - 1125. The core components serve as a shared foundation to collectively manage events, validate and authorize server cloud users and agents, enforce predetermined requirements and rules and store operation data. The one or more interface components enable communication with external entities and include an SCM proxy manager 1117 that enables communication with one or more SCMs of other server clouds. Also described is a server cloud system including a first server cloud that includes a first server cloud manager and a first logical server, and a second server cloud that includes a second SCM. The first and second SCMs are configured to cooperate to manage operation of the first logical server.

Description

Title: Virtual Server Cloud Interfacing
Inventor(s): Dave D. McCrory and Robert A. Hirschfeld
Field of the Invention:
The present invention relates to virtualization and server technology, and more particularly to server cloud interfacing for establishing flexible logical server management.
Description of Related Art:
There are many situations in which it is desired to lease one or more server computer systems on a short or long-term basis. Examples include educational or classroom services, demonstration of software to potential users or buyers, website server applications, etc. The servers may be pre-configured with selected operating systems and application software as desired. Although physical servers may be leased and physically delivered for onsite use, servers may also be leased from a central or remote location and accessed via an intermediate network system, such as the Internet. The primary considerations for remote access include the capabilities of the remote access software and the network connection or interface.
Virtualization technology enabled multiple logical servers to operate on a single physical computer. Previously, logical servers were tied directly to physical servers because they relied on the physical server's attributes and resources for their identity. Virtualization technology weakened this restriction by allowing multiple logical servers to override a physical server's attributes and share its resources. Each logical server is operated substantially independent of other logical servers and provides virtual isolation among users, effectively partitioning a physical server into multiple logical servers. A previous disclosure described an ability to completely separate logical servers from particular physical servers so that there was no permanent tie between a physical server and logical resources. Such separation allowed physical servers to act as a pool of resources supporting logical servers, so that a logical server may be reallocated to a different physical server within a server cloud without users experiencing any change in access approach. The requirement of pre-allocating physical resources prior to a physical resource change, as is required by clustering, was removed. It is further desired to provide additional allocation of resources between server clouds. Relationships between server clouds and other entities need to be defined to enable resource sharing and more efficient resource allocation.
Summary of the Present Invention:
The present invention concerns a server cloud manager (SCM) for controlling logical servers and physical resources that comprise a virtualized logical server cloud. The SCM includes multiple core components and one or more interface components. The core components serve as a shared foundation to collectively manage events, validate and authorize server cloud users and agents, enforce predetermined requirements and rules and store operation data. The one or more interface components enable communication with external entities and includes an SCM proxy manager that enables communication with one or more SCMs of other server clouds.
In one embodiment, the core components include an event engine, an authentication engine, a rules engine and a database. The event engine controls and manages events to be performed by the SCM. The authentication engine validates users and agents of the server cloud and issues security credentials to authorized users and agents. The rules engine validates and enforces predetermined requirements and rules to be followed by SCM operations. The database stores information and includes data validation, data formatting and rules validation for the SCM and the server cloud. The events controlled and managed by the event engine may include individual events or collections of events.
The interface components may include a user manager, where the core components and the user manager collectively render graphical user interfaces and authorize users of the server cloud according to predetermined roles that define the rights and privileges for each user while accessing server cloud resources. The interface components may include an agent manager that coordinates SCM events with agents within the server cloud that perform specified actions. The interface components may include an administrator manager that renders a user interface, that enables access and control by one or more administrators of the SCM, and that coordinates with core components to authenticate administrative requests. The interface components may include an advanced scripting manager that provides advanced scripting logic and interfaces to other management systems. The interface components may include an SNMP manager that provides an interface between the SCM and an SNMP management application. The interface components may include an image manager that optimizes use of disk resources and files throughout a predetermined domain of the SCM.
The core components may employ a URI mapping as a syntax handle that provides sufficient context information and that describes a management relationship between different components of the SCM. The URI mapping may include an identity aspect that determines an identity of an entity requesting an action to be performed. The URI mapping may include a rights aspect that incorporates predetermined roles assigned to an entity and that defines the rights and privileges assigned to the entity. The URI mapping may include a presentation aspect that includes logical relationships that define how information is to be presented. The URI mapping may include an implementation aspect that determines which resources or equipment of a domain of the SCM are affected by actions and commands. The implementation aspect may support server abstraction and/or scripting abstraction. The implementation function may incorporate a proxy function for relaying actions and commands to another server cloud.
A server cloud system according to an embodiment of the present invention includes a first server cloud that includes a first server cloud manager (SCM) and a first logical server and a second server cloud that includes a second SCM. The first and second SCMs are configured to cooperate to manage operation of the first logical server. Such configuration substantially enhances cloud to cloud interaction, operation and cooperation. The first and second SCMs may be configured, for example, to cooperate to move the first logical server from the first server cloud to the second server cloud. The second server cloud may also include a second logical server, where the first and second SCMs are configured to cooperate to ensure that only one of the first and second logical servers is active at any given time. For example, the first logical server may be activated during a first time period and placed in standby during a second time period, whereas the second logical server is activated during the second time period and placed in standby during the first time period. The first and second SCMs may be configured to cooperate to replicate the first logical server to a second and unique logical server within the second server cloud. The first and second server clouds may have a trust relationship such that the first and second SCMs are peers.
The first logical server may be within a subcloud of the first server cloud and the second SCM may have rights over the subcloud.
The server cloud system may include an intermediary that has a trust relationship with the first and second server clouds. In this case, the first and second server clouds may cooperate with each other through the intermediary. The first and second SCMs may be configured to cooperate via the intermediary to move the first logical server from the first server cloud to the second server cloud. The second server cloud may include a second logical server where the first and second SCMs are configured to cooperate via the intermediary to ensure that only one of the first and second logical servers is active at any given time. The first and second SCMs may be configured to cooperate via the intermediary to replicate the first logical server to a second and unique logical server within the second server cloud.
The second SCM may operate as a proxy for the first logical server so that the first logical server may appear to exist within the second server cloud while actually residing in the first server cloud. If the first server cloud includes a second logical server, the second SCM may operate as a proxy for the first and second logical servers and the first and second SCMs may be configured to cooperate to ensure that only one of the first and second logical servers is active at any given time.
The second server cloud may be an exchange cloud that employs intercloud proxy and commercial terms to enable commercial transactions associated with resources within the first server cloud. The first and second server clouds may establish a commercial relationship for the purpose of enabling the second server cloud to directly use and resell logical server resources in the first server cloud. The server cloud system may further include a third server cloud that has an authorized user and that has a commercial relationship with the exchange cloud. In this case, the authorized user may gain access to the first logical server active in the first server cloud via intercloud proxy through the exchange cloud. The exchange cloud may transfer the first logical server from the first server cloud to the third server cloud for access by an end consumer. The location of the first logical server may be transparent to the end consumer. The transfer of the first logical server may be performed by the exchange cloud transparently to the end consumer.
Brief Description of the Drawings:
A better understanding of the present invention can be obtained when the following detailed description of embodiments of the invention is considered in conjunction with the following drawings, in which:
Figures 1A - 1C are figurative block diagrams illustrating intercloud actions or actions between "trusted" clouds where data can be transferred directly between clouds. FIG. 1A illustrates a routing function, FIG. 1B illustrates a switching function, and FIG. 1C illustrates a replication function.
Figures 2A - 2C are figurative block diagrams illustrating extracloud actions or actions between "untrusted" clouds where data is transferred indirectly between clouds through an intermediary (IM). FIG. 2A illustrates the routing function, FIG. 2B illustrates the switching function, and FIG. 2C illustrates the replication function.
Figures 3A - 3C are figurative block diagrams illustrating supercloud actions or actions requested of one SCM of a cloud that are transparently performed by a different SCM of another cloud. FIG. 3A illustrates a routing function, FIG. 3B illustrates a switching function, and FIG. 3C illustrates a replication function.
Figures 4A and 4B are figurative block diagrams illustrating user interface with clouds and logical server proxying.
Figures 5A - 5G are figurative block diagrams of various scenarios that may occur for a customer with growing or changing needs over time illustrating the flexibility of logical server operation, location and accessibility.
FIG. 6 is a figurative block diagram illustrating operation of an exchange cloud as an intermediary according to an embodiment of the present invention.
FIG. 7 is a figurative block diagram illustrating logical server management by an exchange cloud according to an embodiment of the present invention.
FIG. 8 is a figurative block diagram illustrating load shifting of a logical server across geographic areas employing the proxy functionality.
FIG. 9 is a figurative block diagram illustrating various trust relationships with an SCM of a server cloud and between server clouds.
FIG. 10 is a figurative block diagram illustrating an example of proxy syntax for proxying a logical server associated with a user from one server cloud to another server cloud.
FIG. 11 is a block diagram illustrating the fundamental components of an SCM of a server cloud including core components and interface modules.
FIG. 12 is a figurative block diagram that illustrates relationship mapping between data and information and the associated syntax employed by core components of the exemplary SCM of FIG. 11.
Detailed Description of Embodiment(s) of the Invention:
The following definitions are provided for this disclosure with the intent of providing a common lexicon. A "physical" device is a material resource such as a server, network switch, or disk drive. Even though physical devices are discrete resources, they are not inherently unique. For example, random access memory (RAM) devices and a central processing unit (CPU) in a physical server may be interchangeable between like physical devices. Also, network switches may be easily exchanged with minimal impact. A "logical" device is a representation of a physical device to make it unique and distinct from other physical devices. For example, every network interface has a unique media access control (MAC) address. A MAC address is the logical unique identifier of a physical network interface card (NIC). A "traditional" device is a combined logical and physical device in which the logical device provides the entire identity of a physical device. For example, a physical NIC has its MAC address permanently affixed so the physical device is inextricably tied to the logical device.
A "virtualized" device breaks the traditional interdependence between physical and logical devices. Virtualization allows logical devices to exist as an abstraction without being directly tied to a specific physical device. Simple virtualization can be achieved using logical names instead of physical identifiers. For example, using an Internet Uniform Resource Locator (URL) instead of a server's MAC address for network identification effectively virtualizes the target server. Complex virtualization separates physical device dependencies from the logical device. For example, a virtualized NIC could have an assigned MAC address that exists independently of the physical resources managing the NIC network traffic. A "server cloud" or "cloud" is a collection of logical devices which may or may not include underlying physical servers. The essential element of a cloud is that all logical devices in the cloud may be accessed without any knowledge or with limited knowledge of the underlying physical devices within the cloud.
Fundamentally, a cloud has persistent logical resources, but is non-deterministic in its use of physical resources. For example, the Internet may be viewed as a cloud because two computers using logical names can reliably communicate even though the physical network is constantly changing.
A "virtualized logical server cloud" refers to a logical server cloud comprising multiple logical servers, where each logical server is linked to one of a bank of physical servers. The boundary of the logical server cloud is defined by the physical resources controlled by a "cloud management infrastructure" or a "server cloud manager" or SCM. The server cloud manager has the authority to allocate physical resources to maintain the logical server cloud; consequently, the logical server cloud does not exceed the scope of physical resources under management control. Specifically, the physical servers controlled by the SCM determine a logical server cloud's boundary. "Agents" are resource managers that act under the direction of the SCM. An agent's authority is limited in scope and it is typically task-specific. For example, a physical server agent (PSA) is defined to have the authority to allocate physical resources to logical servers, but does not have the authority or capability to create administrative accounts on a logical server. An agent generally works to service requests from the server cloud manager and does not instigate actions for itself or on other agents. A prior disclosure introduced virtualization that enabled complete separation between logical and physical servers so that a logical server may exist independent of a specific physical server. The logical server cloud virtualization added a layer of abstraction and redirection between logical and physical servers. Logical servers were implemented to exist as logical entities that were decoupled from physical server resources that instantiated the logical server. Decoupling meant that the logical attributes of a logical server were non-deterministically allocated to physical resources, thereby effectively creating a cloud of logical servers over one or more physical servers. The prior disclosure described a new deployment architecture which applied theoretical treatment of servers as logical resources in order to create a logical server cloud. Complete logical separation was facilitated by the addition of the SCM, which is an automated multi-server management layer. A fundamental aspect of a logical server cloud is that the user does not have to know or provide any physical server information to access one or more logical server(s), since this information is
maintained within the SCM. Each logical server is substantially accessed in the same manner regardless of underlying physical servers. The user experiences no change in access approach even when a logical server is reallocated to a different physical server. Any such reallocation can be completely transparent to the user. The present disclosure builds upon logical server cloud virtualization by adding a layer of abstraction and redirection between logical servers and the server clouds as managed and controlled by corresponding SCMs. The server cloud is accessed via its SCM by a user via a user interface for accessing logical and physical servers and by the logical and physical servers themselves, such as via logical and/or physical agents as previously described. As further described herein, SCMs may further interface with each other according to predetermined relationships or protocols, such as between "peer" SCMs or server clouds or between a server cloud and a "super peer", otherwise referred to as an "Exchange". The present disclosure introduces the concept of a "subcloud" in which an SCM interfaces or communicates with one or more logical and/or physical servers of another server cloud. The SCM of the server cloud operates as an intermediary or proxy for enabling communication with a logical server activated within a remote cloud. Logical servers may be moved from one server cloud to another or replicated between clouds. A remote SCM may manage one or more logical servers in a subcloud of a remote server cloud. In fact, a logical server may not be aware that it is in a remote cloud and may "think" that or otherwise behave as though it resides in the same cloud as the SCM managing its operations. The proxy functionality enables transparency between users and logical servers. The user of a logical server may or may not be aware of where the logical server exists or in which server cloud it is instantiated. Many advantages and capabilities are enabled with cloud to cloud interfacing.
Routing, switching, replication and cloud balancing may be performed intercloud, such as between "trusted" clouds, extracloud, such as between "untrusted" clouds, or via an intermediary (e.g., super-peer, supercloud, shared storage, exchange) in which actions requested of one SCM are transparently performed by a different SCM. An exchange cloud may be established that has predetermined commercial relationships with other clouds or that is capable of querying public or otherwise accessible clouds for resource information. Such an exchange cloud may be established on a commercial basis, for example, to provide a free market exchange for servers or
services related thereto. Exchange clouds include intercloud proxy and predetermined business rules and relationships to conduct commercial transactions. Such commercial transactions may include, for example, sale or lease of logical servers on the market through a common exchange and medium, such as the Internet. Figures 1A - 1C are figurative block diagrams illustrating intercloud actions or actions between trusted clouds A and B where data can be transferred directly between clouds. An SCM 103 of cloud A and an SCM 105 of cloud B in each of these cases are considered "peers".
FIG. 1A illustrates the routing function in which only a single instance of a logical server (LS) 101 exists within the boundaries of both clouds A and B. The LS 101 may be active or held in standby. The SCMs 103 and 105 coordinate to move the instance of LS 101 from cloud A to cloud B as illustrated by arrows 102.
FIG. 1B illustrates the switching function in which multiple instances of the LS 101 exist although only one is active at any given time. Cloud A includes LS instances 101a, 101b and 101c while cloud B includes instances 101d, 101e and 101f. The LS 101f is shown with diagonal lines to indicate that it is active in cloud B. The remaining LS instances 101a - 101e are in standby as indicated by a shading pattern. The SCMs 103 and 105 coordinate with each other as illustrated by arrow 104 to manage the multiple logical servers 101a - 101f to ensure that only one logical server is active at any given time.
FIG. 1C illustrates the replication function in which a logical server LS 101 is replicated from a master template to create a set of similar although unique logical servers 107, 109 and 111. The LS 107 is replicated within the same cloud A whereas the logical servers 109 and 111 are replicated in another cloud B. The logical servers 107, 109 and 111 are shown with different line patterns to illustrate that they are different logical servers even if similar to the original LS 101. The SCMs 103 and 105 coordinate with each other as illustrated by arrow 106 to replicate the LS 101 as a master template into multiple unique logical servers 107 - 111. For example, the instance information from the LS 101 is passed to the SCM 105 for replicating it within cloud B. The logical servers 101, 107, 109 and 111 may all be activated at the same time since they are different logical servers with unique identities.
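For the switching function of FIG. 1B, the following toy Python sketch shows how peer SCMs might ensure that exactly one instance of a logical server is active at any time; the coordination logic here is a simplified stand-in, not the disclosed mechanism:

```python
# Toy coordination between peer SCMs: every instance is placed in
# standby before exactly one is activated, so at most one is ever active.

class PeerSCM:
    def __init__(self, name, instances):
        self.name = name
        self.instances = {i: "standby" for i in instances}

    def active(self):
        return [i for i, s in self.instances.items() if s == "active"]

def switch_active(scms, target):
    """Deactivate all instances across all clouds, then activate one."""
    for scm in scms:
        for inst in scm.instances:
            scm.instances[inst] = "standby"
    for scm in scms:
        if target in scm.instances:
            scm.instances[target] = "active"
            return
    raise KeyError(target)

a = PeerSCM("SCMA", ["101a", "101b", "101c"])
b = PeerSCM("SCMB", ["101d", "101e", "101f"])
switch_active([a, b], "101f")
assert a.active() == [] and b.active() == ["101f"]  # one active instance
```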
PROT:0002PCT ' Q Figures 2A - 2C are figurative block diagrams illustrating extracloud actions or actions between "untrusted" clouds A and B where data is not transferred directly between clouds. Instead, data and commands are transferred via an intermediary (IM) 213, which may comprise a super-peer, a supercloud, a shared storage or an exchange. In each case, the clouds A and B include SCMs 203 and 205, respectively, which are similar to the SCMs 103 and 105. The routing, switching and replication functions are similar, except that the SCMs 203 and 205 cooperate with the IM 213 to perform the respective extracloud functions.
FIG. 2A illustrates the routing function in which only a single instance of a logical server (LS) 201 exists. Again, the LS 201 may be active or held in standby. The SCMs 203 and 205 coordinate with the IM 213 to move the instance of LS 201 from cloud A to cloud B as illustrated by arrows 202.
FIG. 2B illustrates the switching function in which multiple instances of the LS 201 exist although only one is active at any given time. Cloud A includes LS instances 201a, 201b and 201c while cloud B includes instances 201d, 201e and 201f. The LS 201f is shown with diagonal lines to indicate that it is active in cloud B. The remaining LS instances 201a - 201e are in standby as indicated by shading. The SCMs 203 and 205 coordinate with each other via the IM 213 as illustrated by arrows 204 to manage the multiple logical servers 201a - 201f to ensure that only one of the logical servers is active at any given time.
FIG. 2C illustrates the replication function in which a logical server LS 201 is replicated from a master template to create a set of similar although unique logical servers 207, 209 and 211. The LS 207 is replicated within the same cloud A whereas the logical servers 209 and 211 are replicated in another cloud B. The logical servers 207, 209 and 211 are shown with different patterns to illustrate that they are different logical servers even if similar to the original LS 201. The SCMs 203 and 205 coordinate with each other via the IM 213 as illustrated by arrows 206 to replicate the LS 201 as a master template into multiple unique logical servers 207 - 211. Again, the instance information from the LS 201 is passed to the SCM 205 via the IM 213 for replicating it within cloud B. Also, the logical servers 201, 207, 209 and 211 may all be simultaneously activated since they are different logical servers with unique identities.
Figures 3A - 3C are figurative block diagrams illustrating supercloud actions or actions requested of one SCM 303 of cloud A that are explicitly or transparently performed by a different SCM 305 of another cloud B. Cloud A operates as a supercloud with respect to a subcloud of cloud B. In a similar manner as described above, the clouds A and B include SCMs 303 and 305, respectively, which are similar to the SCMs 103 and 105 or 203 and 205. In each case, the SCM 303 acts as a proxy or gateway to the SCM 305 so that the SCM 303 of cloud A appears to own or otherwise control logical servers in the cloud B.
FIG. 3A illustrates the routing function in which only a single instance of a logical server LS 301 exists. In this case, however, the LS 301 resides in the cloud B although it appears to reside in cloud A as shown by LS 301' with dotted lines. As shown by arrows 302, the SCM 303 forwards requests from LS 301' to the SCM 305 and to the LS 301 where the LS 301 is active. The proxy SCM 303 appears to own the LS 301 even though it is active in a different cloud B. The LS 301 may not "know" that it is in cloud B but may "think" or otherwise act as though it is active in cloud A.
FIG. 3B illustrates the switching function in which multiple instances of the LS 301 exist although only one is active at any given time. The LS 301 appears to be active in cloud A as shown as LS 301' with dotted lines, while cloud B includes instances 301a, 301b and 301c. The LS 301c is shown with diagonal lines to indicate that it is active in cloud B. The remaining LS instances 301a and 301b are in standby as indicated by shading. The SCM 303 acts as a gateway for access to the switched LS 301 (LS 301a, b or c) in the cloud B via intermediate paths as shown by arrow 304. The gateway SCM 303 appears to own or otherwise control the active one of the LSs 301a-c even though active in cloud B. Again, the active one of the LSs 301a-c may not know that it is in cloud B but may think it is active in cloud A.
FIG. 3C illustrates the replication function in which a logical server LS 301 is replicated from a master template in cloud A to create a set of similar although unique logical servers 309 and 311 in cloud B as shown by arrows 306. In this case, the LS 301 is active in cloud A. The logical servers 309 and 311 are shown with different line patterns to illustrate that they are different logical servers even if similar to the original LS 301. The SCMs 303 and 305 coordinate with each other as illustrated by arrows 306 to replicate the LS 301 as a master template into multiple unique logical
servers 309 and 311. The SCM 303 instructs the SCM 305 to replicate the LS 301 in the cloud A as the logical servers 309 and 311 in cloud B.
The function of cloud balancing may be performed within any of the intercloud, extracloud or supercloud architectures and facilitated by the routing, switching or replication functions. For the routing function as applied to the intercloud, extracloud or supercloud configurations, the LS 101 is moved from one physical server (PS) to another with more capacity or with greater resources or simply in a different geographic area or time zone. In the supercloud case, the commands are proxied to a different instance of the logical server in another cloud, where the different instances may have different capacity or be located in a different geographic area or time zone. For the switching function, the SCMs of the clouds A and B coordinate (either directly or via the IM 213) to select the instance of the LS with the appropriate capacity or resource level based on demands or needs. For the replication function, the SCM creates additional LS instances with variant capacities or in different areas or times and replaces one LS instance with another in order to allocate more capacity within a cloud or across clouds.
Figures 4A and 4B are figurative block diagrams illustrating user interface with clouds and logical server proxying. As shown in FIG. 4A, a user (U) 401 attempts to access a logical server LS1 403 via cloud A. As described further below, the user 401 provides a pathname indicative of cloud A and the logical server LS1 403, such as, for example, a pathname including "CLOUDA...LS1!ACTION", where "CLOUDA" references cloud A, "LS1" references the logical server LS1 403, and "ACTION" denotes a particular action or operation to perform, such as login, reboot, etc. Although the logical server LS1 403 appears to be active in cloud A to the user 401, the logical server LS1 403 is actually active in cloud B. As described above, the SCM (not shown) of cloud A serves as a proxy to forward data and commands from cloud A to the SCM of cloud B, which forwards the data and commands to the LS1 403 active in cloud B. Such a proxy scenario may be completely transparent to the user 401, such that the user thinks or believes that LS1 403 resides in cloud A. Also, the logical server LS1 403 may behave in a manner that indicates that it is active in cloud A when in fact it is active in cloud B.
Many rationales exist for activating the logical server LS1 in a different cloud than its home cloud or its apparent cloud of residence. Cloud A may lack the
necessary resources to build or operate the logical server LS1, so that it is moved or replicated and operated in cloud B and proxied via cloud A. For example, the underlying physical resources of cloud A, including its physical servers, may have experienced a temporary failure or shutdown or the like, which would otherwise render the logical server LS1 inoperable or unavailable. Instead, logical server LS1 is available in cloud B via proxy. Or, the resources of cloud A may be temporarily oversubscribed or subscribed at or near full capacity, so that the logical server LS1 is temporarily moved to cloud B to prevent interruption in service or to maintain a desired level of service. Or, the user 401 may have requested additional capacity or capabilities that were not available at the time in cloud A, so that the expanded-capacity LS1 is temporarily or permanently active in cloud B. Such proxy may be on a permanent or temporary basis depending upon the situation or the needs of the user 401. Regardless of the particular reason or scenario, it is understood that the present invention provides the ability to move and operate logical servers in any server cloud of choice.
FIG. 4B is a figurative block diagram similar to FIG. 4A except that the logical server LS1 is active in cloud A and a copy of the logical server LS1 is maintained in cloud B. As illustrated by dashed line 405, LS1 is active in either cloud A or B at any given time, although preferably not in both to avoid ambiguity or malfunction. The cloud A can command activation of LS1 in either cloud A or B and re-direct data and commands as needed. In one embodiment, the user 401 accesses the logical server LS1 via cloud A regardless of where it is activated, and may not even be aware that the backup of LS1 exists in a different cloud. Alternatively, as illustrated by dashed line 407, the user 401 may not only be aware of the backup copy of LS1 in cloud B, but may in fact control the switching of activation of LS1 between the clouds A and B. In this manner, the user 401 may load balance or cluster servers as desired.
Figures 5A - 5G are figurative block diagrams of various scenarios that may occur for a customer (CS) 501 with growing or changing needs over time, illustrating the flexibility of logical server operation, location and accessibility. At the start, CS
501 may have need of one or more servers but may not desire or have the resources to purchase physical servers. As shown in FIG. 5A, CS 501 decides to rent or purchase
one or more logical servers 503 from the owner of a server cloud A. CS 501 accesses the logical servers 503 remotely across suitable networks as shown by arrow 502.
As its needs grow, CS 501 chooses to acquire the use of one or more additional logical servers 505 from the owner of another cloud B as shown in FIG. 5B. There may be various reasons for using the servers of another cloud, such as price, capability or capacity considerations. Since the logical servers 505 are in a different cloud, a separate path or link to the cloud B is necessary as shown by arrow 504.
Eventually, CS 501 determines to co-manage all of the logical servers 503 and 505 from a single cloud, such as via cloud A as shown in FIG. 5C. CS 501 may decide, for example, to treat all of the logical servers 503, 505 as one larger pool of servers to avoid separate accesses 502, 504 or to simplify addressing. In this case, CS 501 accesses all of the logical servers 503, 505 in both clouds A and B via the SCM of cloud A through one link as shown by arrow 507. Also, a subcloud link shown by arrow 509 is established between clouds A and B, so that the logical servers 505 are managed by the SCM of cloud A. In effect, the logical servers 505 are part of a subcloud of cloud A that exists in cloud B.
CS 501 eventually decides to self-manage its logical servers 503, 505 and creates a local server cloud 511 as shown in Figures 5D and 5E. The server cloud 511 need not include any logical servers and needs only have sufficient physical resources to access and manage logical servers within other clouds. The logical servers 503 and 505 are still active in the clouds A and B, respectively. As shown in FIG. 5D, the server cloud 511 includes an SCM 513 that accesses the server clouds A and B via separate links as shown by arrows 515 and 517. Such configuration may be a logical extension of the configuration of FIG. 5B in which the clouds A and B are accessed via separate links, except that the configuration of FIG. 5D is more convenient since all of the logical servers 503, 505 may be locally managed by CS 501 via its local server cloud 511. In effect, the cloud 511 has access to subclouds of clouds A and B. Alternatively as shown in FIG. 5E, a single link from cloud 511 to cloud A is established as illustrated by arrow 519. The subcloud link 509 between the clouds A and B is used so that cloud A has subcloud rights to the logical servers 505 active in cloud B. Such configuration may be a logical extension of the configuration of FIG. 5C in which the clouds A and B are accessed via a single link, except that the
configuration of FIG. 5E is more convenient since all of the logical servers 503, 505 are locally managed by CS 501 via its local server cloud 511. In effect, the cloud 511 has access to a subcloud of cloud A, and cloud A has access to a subcloud of cloud B. CS 501 still has sufficient access to all of its logical servers 503, 505. As CS 501 continues to grow, it may decide to acquire local physical assets and activate one or more local logical servers 521 within cloud 511 as shown in FIG. 5F. Even though separate arrows 523 are shown from cloud 511 to clouds A and B, either of the configurations of Figures 5D or 5E may be implemented so that CS 501 locally manages all of the logical servers 503, 505 and 521 via its local server cloud 511. CS 501 may choose to add physical resources and consolidate some of its logical servers into its local cloud 511. As shown in FIG. 5G, for example, the logical servers 505 are moved from cloud B into the local cloud 511. In this case, links and relationships with cloud B are no longer necessary and may be terminated. The logical servers 505 may effectively be the same in capacity regardless of the cloud in which they are activated, so that the underlying cloud is transparent. The persistent attributes are configured to be the same. Thus, users of the logical servers 505 may not be aware that the logical servers 505 have moved into a different cloud. The cloud 511 still has access to a subcloud of cloud A containing the logical servers 503 as shown by arrow 519.
FIG. 6 is a figurative block diagram illustrating operation of an exchange cloud E as an intermediary according to an embodiment of the present invention. An exchange cloud may be implemented in a similar manner as any server cloud, such as including an SCM 603 or the like, but does not necessarily have to include any logical servers and may include relatively minimal physical resources and/or physical servers. The exchange cloud E includes an exchange database 601 or the like that stores information associated with one or more other public or otherwise accessible clouds A, B, C, etc. The clouds A-C may have, for example, predetermined relationships with the exchange cloud E as defined by parameters contained within the exchange database 601, which also includes access information for those clouds. An exemplary commercial use of the exchange cloud E may be the ability for potential users to search for and use logical servers in the other clouds A-C that meet the user's needs or requirements. For example, a user 605 contacts the exchange cloud E via link arrow 611 with a set of parameters or criteria for purposes of
finding one or more logical servers that meet its requirements at the lowest price. The exchange cloud E forwards the requirements parameters to the other clouds A-C, or otherwise searches its exchange database 601 to find as many servers as possible that meet the needs of the user 605. The exchange cloud E either selects logical servers from one of the clouds A-C or the clouds A-C may bid against each other to win a contract with the user 605. The inverse situation is contemplated in which multiple users may bid for access privileges of a server cloud. Another exemplary embodiment is the exchange cloud E operating as a central manager or the like for allocating resources distributed among multiple clouds A-C for a plurality of users. The user 605 requests one or more logical servers from the exchange cloud E, which locates one or more suitable logical servers and provides access to the user 605.
As shown, the exchange cloud E identifies a logical server 607 located in cloud C in response to a request by the user 605. The SCM 603 may act as a proxy or intermediary for providing access of the logical server 607 to the user 605, such as shown by dashed arrow 609. In such case, the user 605 maintains a relationship with the exchange cloud E as indicated by arrow 611, through which it accesses the logical server 607 located in cloud C. Alternatively, the SCM 603 forwards access or other credential information to the user 605, which uses the access information to directly access the logical server 607 via the cloud C as illustrated by dashed arrow 613. As described further below, the user 605 may not have any rights within the cloud C, but may inherit rights otherwise granted to the SCM 603 for the exchange cloud E.
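For illustration, a toy Python sketch of the exchange search just described, with assumed offer fields and a simple lowest-price selection standing in for the unspecified bidding mechanism:

```python
# Toy exchange database of offers from member clouds; field names and
# values are assumptions chosen for illustration only.

EXCHANGE_DB = [
    {"cloud": "A", "cpus": 2, "ram_gb": 4, "price": 90},
    {"cloud": "B", "cpus": 4, "ram_gb": 8, "price": 150},
    {"cloud": "C", "cpus": 4, "ram_gb": 8, "price": 120},
]

def find_servers(min_cpus, min_ram_gb):
    """Return qualifying offers, cheapest first (a simple bid ordering)."""
    matches = [o for o in EXCHANGE_DB
               if o["cpus"] >= min_cpus and o["ram_gb"] >= min_ram_gb]
    return sorted(matches, key=lambda o: o["price"])

best = find_servers(min_cpus=4, min_ram_gb=8)[0]
print(f"selected logical server in cloud {best['cloud']}")  # cloud C
```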
FIG. 7 is a figurative block diagram illustrating logical server management by an exchange cloud E according to an embodiment of the present invention. The exchange cloud E includes an exchange database 701 and an SCM 703. A customer (CS) cloud 705 includes a plurality of local servers 707 which it manages locally. The customer CS deems that it requires additional logical servers and accesses the exchange cloud E for additional servers via link arrow 715. For example, the customer CS may have immediate needs for 10 additional servers and may anticipate a need for more servers in the future. The exchange cloud E identifies a cloud A that has sufficient server capacity to meet the immediate and future needs of the customer CS. In particular, the cloud A includes 10 logical servers 709 and has at least 40 additional servers 711 to meet the future needs of the customer CS. The customer CS has the option of purchasing or leasing only the 10 logical servers 709 and delaying
acquisition of additional servers. However, the cloud A may be able to provide a group of 50 logical servers at a bulk price so that each of the 50 servers is available at a reduced cost.
The customer CS decides to acquire (rent or purchase) 50 logical servers from cloud A, including the 10 logical servers 709 for meeting its immediate needs and 40 additional logical servers 711 for meeting its future needs. The customer CS may choose, if the option is available, to manage the logical servers 709, 711 from the cloud A as shown by arrow 713. Alternatively, the customer CS passes to the exchange cloud E the credentials for accessing the logical servers 709, 711 as shown by arrows 715 and 717. In this manner, the SCM 703 of the exchange cloud E operates as a proxy for accessing the logical servers 709, 711 in cloud A on behalf of the customer CS as shown by arrow 719. The logical servers 709 and 711 appear to be located in the exchange cloud E as shown at 709' and 711', respectively. Since the customer CS has immediate need of only 10 logical servers, it assumes control of the logical servers 709 via the exchange cloud E as indicated by arrow 721. The customer CS accesses the logical servers 709 via the SCM 703, and the SCM 703 proxies the logical servers 709 so that they appear to be within the cloud E as shown at 709'.
One advantage of the exchange cloud E is that the logical servers 711 may be temporarily sold on the open market as indicated by arrow 723. In this manner, the customer CS assumes control of the logical servers 709 and sells the remaining logical servers 711 to third parties via the exchange cloud E. This may provide a significant savings to the customer CS in that most, if not all, of its cost in the logical servers 711 may be retrieved via third party rental (minus any management fees charged by the owners of the exchange cloud E). As the needs of the customer CS grow over time, it may request that some or all of the logical servers 711 be reallocated to the customer CS as necessary. The exchange cloud E may move one or more of the logical servers 709, 711 to a different cloud, such as another cloud B if desired. For example, the logical servers 711 may be moved to cloud B while not needed by the customer CS and while being rented by third parties, if desired. In this manner, the present invention provides complete flexibility for managing logical servers between clouds.
FIG. 8 is a figurative block diagram illustrating load shifting of a logical server across geographic areas employing the proxy functionality. The Earth 801 is shown in perspective as though viewed from a distance looking directly at the North Pole (NP). Four separate server clouds A, B, C and D are distributed around the Earth 801, generally divided by its "four corners" or separated by equivalent quadrants, such as by 6-hour time periods of the total 24-hour period defined by the Earth's rotation as indicated by an arrow 803. A logical server (LS) is active from 8PM to 2AM in cloud A as shown at 805, where all times are referenced with respect to the physical location of cloud A. Other instances of the logical server LS exist in each of the other clouds B, C and D as shown at 807, 809 and 811, respectively, each in standby mode while the LS is active in cloud A.
At time 2AM, the logical server LS is proxied from cloud A to cloud B for the next 6-hour time period 2AM to 8AM as indicated by arrow 806. The LS 805 in cloud A is placed in standby mode while the instance of LS 807 in cloud B is activated. Users attempting to access LS 805 in cloud A are simply proxied to the activated LS 807 in cloud B. At time 8AM, the logical server is proxied from cloud A to cloud C for the next 6-hour time period 8AM to 2PM as indicated by arrow 808. The LS 807 in cloud B is placed back in standby mode while the instance of the LS 809 in cloud C is activated. The LS 805 in cloud A remains in standby and cloud A proxies data and information to cloud C. At time 2PM, the logical server is proxied from cloud A to cloud D for the next 6-hour time period 2PM to 8PM as indicated by arrow 810. As before, the LS 809 in cloud C is placed in standby mode while the instance of the LS 811 in cloud D is activated. The logical server instances in clouds A-C remain in standby and cloud A proxies data and information to cloud D. At time 8PM, the LS 805 is activated once again in cloud A for the next 6-hour time period 8PM to 2AM while the instances of the logical server LS in clouds B-D are once again placed in standby. Operation proceeds in this manner for as long as desired.
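The rotation just described reduces to a time-window lookup. The following is a minimal Python sketch of such a scheduler, assuming a fixed four-cloud rotation keyed to cloud A's local clock; the function name active_cloud is a hypothetical illustration.

from datetime import time

def active_cloud(now: time) -> str:
    """Return which cloud's logical-server instance is active at `now`
    (referenced to cloud A's local clock); all other instances stay in
    standby, and requests to them are proxied to the active instance."""
    if now >= time(20, 0) or now < time(2, 0):
        return "A"   # 8PM-2AM
    if now < time(8, 0):
        return "B"   # 2AM-8AM
    if now < time(14, 0):
        return "C"   # 8AM-2PM
    return "D"       # 2PM-8PM

assert active_cloud(time(21, 30)) == "A"
assert active_cloud(time(9, 15)) == "C"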
The ability to sequentially proxy the activation of a logical server from one cloud to another over time provides many useful benefits and advantages. It may be desired, for example, to activate a web server in a local geographic area during peak load or access in that area to best serve the needs of users across the globe over time. Alternatively, the local area resources may be needed during peak hours for local access, so that one or more servers meeting other needs, such as heavy computational
operations or the like, may be off-loaded to servers in a different geographic area during off-hours in that area. In this manner, the physical resources supporting the logical servers may be employed in the most efficient manner, with the ability to shift loads to available resources at any chosen time.

FIG. 9 is a figurative block diagram illustrating various trust relationships with the SCM of a server cloud and between server clouds. A user 901 of a logical server 903 of a server cloud A has a "credentialed" (CRED) trust relationship with the cloud A or its SCM, shown as SCMA 905. A credentialed trust relationship is a form of "explicit" trust relationship in which "rights" or "privileges" provided to, and/or actions allowed by, the user 901 are predetermined, clearly defined, and usually "limited" or otherwise "restricted" by a contract or agreement. A non-exhaustive list of "rights" within a cloud relative to logical servers includes "add", "delete", "move", "replicate", "reset", "reboot", "build", "store" and "snapshot", among others. The agreement is an expression of the boundaries of rights and privileges, such as a limited number or types of rights and/or limits on particular rights. A credentialed trust relationship may be "transitory" in that it has a predefined duration or expiration date according to the agreement. The user 901 identifies itself to the SCMA 905 with predetermined credentials, such as a username and password or the like, which invokes the user's access to the cloud A according to the credentialed trust relationship. The rights of the user 901 may be limited to the LS 903 or may include one or more other logical servers within the cloud A.
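A credentialed trust relationship can be sketched as a bounded, possibly expiring rights set. The Python below is illustrative only; the class name and field names are hypothetical, and the rights granted are an arbitrary subset of those listed above.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Set

@dataclass
class CredentialedTrust:
    username: str
    granted_rights: Set[str] = field(default_factory=set)  # bounded by agreement
    expires: Optional[datetime] = None                     # transitory if set

    def allows(self, right: str, now: datetime) -> bool:
        if self.expires is not None and now >= self.expires:
            return False                 # the agreement has lapsed
        return right in self.granted_rights

trust = CredentialedTrust("user901", {"reset", "reboot", "snapshot"},
                          expires=datetime(2003, 1, 1))
assert trust.allows("reboot", datetime(2002, 12, 16))
assert not trust.allows("delete", datetime(2002, 12, 16))   # never granted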
An agent of SCMA 905, shown as SCMA Agent 907, is an internal "implicit" component of the server cloud A having implicit rights. Implicit rights are generally "unlimited" in that the SCMA Agent 907 has complete control over all logical and physical servers active within the server cloud A via the SCMA 905. SCM agents are employed to perform actions on servers within a cloud, where such actions may be delayed, triggered, or invoked by a combination of both. It is appreciated that an SCM agent generally has all of the rights of the SCM itself, although those rights may be further determined by event or timing controls. A delayed action may occur after a predetermined time period or at a predefined time, and a triggered action is invoked upon detection of an event that causes the action to be queued. An action may also be invoked after a delay or at a particular time if and when an event is detected. For example, a logical server may be rebuilt upon logoff or after expiration of a time period.
Examples of actions or "atomic" actions include "get status", "send email", "reset", "reboot", "change CD ROM", "rebuild image", "snapshot", "restore snapshot", "file copy", "file delete", "file move", "resynch passwords", "start sequence" and "runscript", among others. A script, which is a sequence of atomic actions, may also be executed. The preceding list is exemplary only and not intended to be exhaustive.
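The delayed and triggered behavior described above can be sketched as a small action queue. This Python sketch is illustrative only; the class and method names are hypothetical, and a real agent would persist and secure its queue.

import heapq
import itertools
import time

class ActionQueue:
    def __init__(self):
        self._timed = []          # (run_at, seq, action) min-heap
        self._triggers = {}       # event name -> [(delay_s, action)]
        self._seq = itertools.count()

    def after(self, delay_s, action):
        # Delayed action: runs once the time period has elapsed.
        heapq.heappush(self._timed, (time.time() + delay_s,
                                     next(self._seq), action))

    def on_event(self, event, action, delay_s=0.0):
        # Triggered action, optionally combined with a further delay.
        self._triggers.setdefault(event, []).append((delay_s, action))

    def fire(self, event):
        # Detecting the event queues the action ("rebuild upon logoff").
        for delay_s, action in self._triggers.pop(event, []):
            self.after(delay_s, action)

    def run_due(self):
        now = time.time()
        while self._timed and self._timed[0][0] <= now:
            _, _, action = heapq.heappop(self._timed)
            action()

q = ActionQueue()
q.on_event("logoff", lambda: print("rebuild image"))
q.fire("logoff")
q.run_due()   # -> rebuild image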
The LS 903 within the server cloud A has implicit rights within cloud A, although the implicit rights are "restricted" to itself. The LS 903 may request additional resources from SCMA 905 for purposes of cloud balancing, for example, to match resource demands or loads. The LS 903 may perform a switching function and request a "sister" logical server or the like for the purposes of spreading existing or anticipated loads. The LS 903 may perform other actions, such as snapshot or reset or the like, as necessary or desired. The LS 903 may also request to be moved to a different cloud, or the SCMA 905 may move the LS 903 to a different cloud in an explicit or transparent manner. Another server cloud, such as an exchange cloud E with manager SCME 909, may have "subcloud" rights within the server cloud A as illustrated by subcloud E'. A non-exclusive and exemplary list of subcloud rights includes "Add", "Maintain", "Move", "Delete", "Replicate", "Proxy", etc. Subcloud rights are explicit "permission" rights within another cloud and include "existence" permission rights over a subset of a cloud and separate "expansion" permission rights that allow expansion or contraction of the subset to add or subtract logical servers. The existence permission rights precede the expansion permission rights and may cover an entire server cloud or none of it, with the expansion permission rights used to increase the subcloud size to include one or more logical servers. Additional subcloud relationships may be defined. For example, the cloud A may have subcloud rights over another cloud B via SCMB 911, as illustrated by a subcloud A' within the server cloud B. Further, the cloud B may have subcloud rights over another cloud C via SCMC 913, as illustrated by a subcloud B' within the server cloud C controlled by SCMC 913. In this manner, cloud A has explicit subcloud rights over subcloud A' within cloud B, which further has explicit subcloud rights over subcloud B' within cloud C. It is noted that cloud A may have "implied" rights over subcloud B' within cloud C based on the existing trust relationships. Since cloud B has rights over subcloud B' within cloud C, cloud B may proxy an LS from cloud B
to cloud C within subcloud B'. Cloud A may move LS 903 to subcloud A' as shown at 915, and cloud B may move and proxy the already proxied LS 903 to subcloud B' as shown at 917, so that cloud A has implied rights over subcloud B' at least with respect to the proxied LS 903. The LS 903 may request to be moved to cloud B. While in cloud B (or subcloud A'), the LS 903 (915) does not have implicit trust or rights within cloud B, but nonetheless "inherits" explicit rights within cloud B according to the explicit rights between clouds A and B.
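The inheritance of rights along a chain of trust can be sketched as a table lookup with fallback. The Python below is an illustrative sketch only; the table contents, function name and rights sets are hypothetical.

# Explicit subcloud rights: (holder cloud, target cloud) -> rights set.
SUBCLOUD_RIGHTS = {
    ("A", "B"): {"add", "maintain", "move", "delete", "replicate", "proxy"},
    ("B", "C"): {"move", "proxy"},
}

def effective_rights(principal: str, target_cloud: str, via: list) -> set:
    """Rights `principal` can exercise in `target_cloud` when its access
    path runs through the chain of clouds in `via` (nearest hop first)."""
    rights = SUBCLOUD_RIGHTS.get((principal, target_cloud), set())
    if rights:
        return rights                       # direct explicit rights
    for hop in via:
        inherited = SUBCLOUD_RIGHTS.get((hop, target_cloud), set())
        if inherited:
            return inherited                # inherited through the trust chain
    return set()                            # no implicit rights in foreign clouds

# Cloud A reaches cloud C only through cloud B's subcloud rights:
assert effective_rights("A", "C", via=["B"]) == {"move", "proxy"}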
FIG. 10 is a figurative block diagram illustrating an example of proxy syntax for proxying a logical server LS 1013 associated with a user 1001 from one server cloud A to another server cloud B. The user 1001 accesses the LS 1013 via cloud A and may believe that the LS 1013 is activated within cloud A when in fact it is activated within cloud B as shown. The server clouds A and B include server cloud managers SCMA 1005 and SCMB 1021, respectively. The user 1001 and the cloud managers SCMA 1005 and SCMB 1021 are interfaced with each other via a network 1003, which may be a global computer network such as the Internet, although any type of intermediate network is contemplated. The user 1001 attempts to access the LS 1013 in cloud A using a Uniform Resource Identifier (URI) or the like incorporating an address that defines a route to the LS 1013 in cloud A. The user 1001 employs a Uniform Resource Locator (URL) within the URI that identifies the target server cloud A. For example, the user 1001 may enter a URI address having syntax "A.DC.R.LS!ACTION", in which "A" is a cloud URL or cloud Internet Protocol (IP) address or cloud name for cloud A, "DC" denotes a particular data center within cloud A, "R" denotes a particular rack of servers of the data center DC, "LS" is the logical server name for LS 1013, the exclamation point "!" is a separator, and "ACTION" denotes a particular action to be performed by the LS 1013, such as "login" or the like. It is noted that the data center DC and rack R information are navigation aids or specific path information employed for a particular configuration and need not be provided to uniquely access the LS 1013. Instead, the user 1001 may enter a short-hand version "A...LS!ACTION" to uniquely identify the target logical server LS 1013. The omitted path information is filled in by the SCMA 1005. It is noted that alternative syntax formats are contemplated, such as, for example, "LS1@cloudA", which identifies a logical server LS1 located at or otherwise referenced
via cloud A. As previously described, LS1 may be accessed via cloud A by proxy if it is not currently located within cloud A.
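A parser for this address syntax is straightforward. The Python sketch below is illustrative only, assuming the "cloud.DC.rack.LS!action" form described above; the function and field names are hypothetical.

def parse_address(address: str) -> dict:
    """Parse "cloud.DC.rack.LS!action"; omitted navigation aids (as in
    the shorthand "A...LS!action") come back as None for the SCM to fill in."""
    path, _, action = address.partition("!")
    parts = path.split(".")
    if len(parts) != 4:
        raise ValueError("expected cloud.DC.rack.LS!action")
    cloud, dc, rack, ls = (p.strip() or None for p in parts)
    return {"cloud": cloud, "dc": dc, "rack": rack,
            "ls": ls, "action": action.strip() or None}

full = parse_address("A.DC.R.LS!login")
short = parse_address("A...LS!login")     # data center and rack omitted
assert full["dc"] == "DC"
assert short["cloud"] == "A" and short["ls"] == "LS"
assert short["dc"] is None                # filled in later by the SCM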
The address "A...LS!ACTION" provided by the user 1001 is received by the SCMA 1005 of the server cloud A. Assuming that the user 1001 is not currently logged into the LS 1013, the SCMA 1005 employs a credential (CRED) check 1009 to authenticate the user 1001 and determine whether it has rights to access LS 1013. Although shown as a separate function, the credential check 1009 may be incorporated within the SCMA 1005 depending upon the particular configuration. If the user's credential information, such as username and password, is not already incorporated in the address, then the SCMA 1005 prompts the user to provide the credential information. In one embodiment, the SCMA 1005 uses the credential information supplied by the user 1001 to determine the identity of the user and the associated level of rights and privileges provided to the identified user. If the credential information is incorrect or otherwise not recognized by the SCMA 1005, then the attempted command or login is rejected. Otherwise, the SCMA 1005 passes back a temporary token or the like that is used by the user 1001 for subsequent actions or commands during the current session. The supplied token is used to identify the user 1001 during the current session and to associate that user with their level of authority, rights, and/or level of access.
Upon login by the user 1001, the SCMA 1005 accesses a proxy table 1011 or the like for accessing the LS 1013 on behalf of the user 1001. In the case illustrated, the proxy table 1011 includes a proxy link, illustrated by arrow 1015, to an alternative address "B.DC.R.LS" to the LS 1013 activated within cloud B, where "B" denotes a cloud URL or the like addressing the server cloud B. The alternative path
"B.DC.R.LS" includes the necessary path information to locate the LS 1013 within cloud B as illustrated by dashed arrow 1017. In this case, the server cloud A has subcloud rights for accessing the LS 1013 activated in cloud B. The SCMA 1005 employs the alternative address including the desired command provided from the authenticated user 1001 to access the LS 1013 in the server cloud B via the SCMB
1021 as illustrated by arrows 1019 and 1025. It is appreciated that the SCMA 1005 accesses the SCMB 1021 of the server cloud B via the intermediate network 1003.
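The proxy-table lookup itself is a simple indirection. Below is a minimal Python sketch, assuming the address forms above; PROXY_TABLE, dispatch and the stand-in transport call send_to_scm are hypothetical names, not part of the described embodiment.

# Proxy table of SCMA: local address -> alternative address in another cloud.
PROXY_TABLE = {
    "A.DC.R.LS": "B.DC.R.LS",   # LS 1013 is actually active in cloud B
}

def send_to_scm(cloud: str, address: str, command: str) -> str:
    # Stand-in for the real inter-SCM call across the intermediate network.
    return f"{cloud} executed {command} on {address}"

def dispatch(address: str, command: str) -> str:
    target = PROXY_TABLE.get(address, address)
    target_cloud = target.split(".", 1)[0]
    if target != address:
        # Subcloud rights let SCMA act on the LS inside cloud B; the user
        # never needs credentials of their own in cloud B.
        return send_to_scm(target_cloud, target, command)
    return send_to_scm("A", target, command)

print(dispatch("A.DC.R.LS", "login"))
# -> B executed login on B.DC.R.LS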
Although the SCMB 1021 may employ a credential check 1023 that functions in a similar manner as the credential check 1009, the SCMA 1005 is recognized or otherwise provides sufficient authentication or credential information to enable access to the LS
1013 within the server cloud B as indicated by arrow 1025. The user 1001 is provided access to the LS 1013 within the server cloud B via the SCMA 1005 and SCMB 1021 for subsequent commands and actions. In the embodiment shown, the user 1001 continues to access the LS 1013 via the SCMA 1005 employing a proxied relationship to the server cloud B. It is noted that the user 1001 may not have any implicit or explicit rights within the server cloud B. Thus, if the user 1001 attempts to provide the address "B.DC.R.LS" directly to the server cloud B using the same credentials used for accessing cloud A, the SCMB 1021 may reject the access as not recognized by the credential check 1023. The user 1001 needs explicit rights and corresponding valid credentials to directly access cloud B. Even so, access to logical servers within cloud B does not necessarily mean that the user 1001 is able to access LS 1013, since it is within a subcloud of cloud A. The user 1001 indirectly inherits the rights of the SCMA 1005 within the server cloud B as long as the access is through the SCMA 1005.

FIG. 11 is a block diagram illustrating the fundamental components of an exemplary SCM 1101 of a typical server cloud, where the SCM 1101 includes core components 1103 and interface components that define how the SCM operates within the cloud and how it interfaces with external entities, including other SCMs. The core components 1103 of the SCM 1101 include an event engine 1105, a rules engine 1107, an authentication engine 1109 and a database 1111. The core components 1103 comprise a shared library of functions used by all SCM components and interface components. The interface components are considered part of the SCM 1101 and establish interfaces with external entities, such as users, administrators, agents, other SCMs, or other applications, such as management applications, billing applications, resource applications, etc.
The database 1111 stores data and parameters associated with the SCM 1101 and generally defines how the SCM 1101 tracks data and information. The database 1111 is integrated with the core engines and may even incorporate all or substantial parts of the core engines. The database 1111 includes, for example, data validation, data formatting, and rules validation. The event engine 1105 controls and manages all of the events to be performed by the SCM 1101, where such events are either immediately performed or queued for later execution. It is noted that "commands" and "actions" are generally synonymous and that "events" are commands or actions
being performed or that represent an actual request to implement one or more commands. The rules engine 1107 ensures that the SCM 1101 operates in a consistent manner with respect to data and information and applies the appropriate level of security for each operation. The operations of the SCM 1101 follow specific requirements and rules as validated and enforced by the rules engine 1107, including, for example, credential and role information. The authentication engine 1109 is used to validate users (explicit rights) and agents (implicit rights) and to generate and issue tokens or similar security credentials. For example, if the credentials provided by a user (or entity) are valid and recognized by the authentication engine 1109, the authentication engine 1109 generates and assigns a temporary token that is used by that user for each subsequent access during the current session until logoff. All subsequent accesses by the user require the assigned token. Tokens may be computer-generated alphanumeric or binary values temporarily assigned to each user. The authentication engine 1109 accesses the database 1111 to assign the corresponding privileges attached to each role to the authenticated user according to that user's role or authorizations.
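The token flow of the authentication engine can be sketched in a few lines. This Python sketch is illustrative only; the table contents and function names are hypothetical, and a production implementation would add expiry, hashing and transport security.

import secrets

CREDENTIALS = {"user1001": "s3cret"}        # username -> password (illustrative)
ROLES = {"user1001": {"login", "reboot"}}   # username -> authorized commands
_sessions = {}                              # token -> username, for this session

def authenticate(username: str, password: str) -> str:
    if CREDENTIALS.get(username) != password:
        raise PermissionError("credentials rejected")
    token = secrets.token_hex(16)           # temporary, computer-generated token
    _sessions[token] = username
    return token

def authorize(token: str, command: str) -> bool:
    # Every subsequent access carries the token, which maps back to the
    # user's role and its attached privileges.
    user = _sessions.get(token)
    return user is not None and command in ROLES.get(user, set())

tok = authenticate("user1001", "s3cret")
assert authorize(tok, "reboot") and not authorize(tok, "delete")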
The SCM 1101 may include one or more interface components that implement an interface layer, such as managers that implement interfaces with specific types of entities. Each interface component has its own requirements and methods and is designed to handle the operation of commands for specific entities. As shown, the interface components include a user manager 1113, an agent manager 1115, an SCM proxy manager 1117, an administrator manager 1119, an advanced scripting manager 1121, a simple network management protocol (SNMP) manager 1123, and an image manager 1125. The interface component managers shown and described herein are exemplary only, where each is optional depending upon the particular configuration and design criteria and where additional interface components may be defined, generated and deployed in a similar manner. Each SCM will have at least one interface component.
The user manager 1113 manages access to the SCM 1101 and the resources of the associated server cloud by users as previously described. The user manager 1113 builds appropriate user interfaces and translates SCM data into useful screens or renderings for display or consumption by each user. The agent manager 1115 coordinates SCM events with the appropriate agent(s) or other system components
within the associated server cloud, such as physical server agents (PSA), logical server agents (LSA), etc. The SCM proxy manager 1117 enables communication with other SCMs, including proxy operations as described herein. The administrator manager 1119 incorporates scripting logic and renders user interface(s) that provide one or more administrators useful access to and control of the SCM 1101, the server cloud and associated functions. The advanced scripting manager 1121 enables a more sophisticated scripting interface with other management systems, such as a billing package or the like. The SNMP manager 1123 enables communication with an SNMP management system or entity. The image manager 1125 enables optimized use of storage resources and files throughout the entire domain of the SCM 1101, including the physical and logical resources of its home cloud and the resources within subclouds of other server clouds.
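One way to picture the interface layer is as a dispatch over per-entity managers. The Python sketch below is illustrative only; the manager class names follow FIG. 11, while the routing code and its behavior are hypothetical.

class UserManager:
    def handle(self, request):
        return f"rendered UI for {request}"

class AgentManager:
    def handle(self, request):
        return f"coordinated agents for {request}"

class SCMProxyManager:
    def handle(self, request):
        return f"proxied {request} to peer SCM"

# Each manager owns one class of external entity.
INTERFACES = {
    "user": UserManager(),
    "agent": AgentManager(),
    "scm": SCMProxyManager(),
}

def route(entity_type: str, request: str) -> str:
    manager = INTERFACES.get(entity_type)
    if manager is None:
        raise ValueError(f"no interface component for {entity_type}")
    return manager.handle(request)

print(route("scm", "move LS1"))   # -> proxied move LS1 to peer SCM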
FIG. 12 is a figurative block diagram that illustrates relationships between data and information and associated syntax employed by the core components 1103 of the exemplary SCM 1101. The SCM 1101 serves as a gateway to the resources and services within its associated server cloud, including its logical servers, physical servers and/or proxied servers. A central aspect of the core components 1103 is a URI 1205, which is a handle that provides the complete context information for the SCM 1101. In general, the URI 1205 is a naming convention or key that describes the relationship between different components within the SCM 1101. The URI 1205 essentially unifies different aspects of information and data managed by the SCM 1101. The core components 1103 use the URI 1205 mapping as a syntax that maps between different types of data. The URI 1205 is a bridge point between different aspects or functions of the core components 1103, and includes an identity aspect 1207, a rights aspect 1209, a presentation aspect 1211 and an implementation aspect 1213.
From a syntax point of view, the URI 1205 maps to particular resources of the server cloud, such as a logical server. Every interaction within the SCM 1101 has three components: an identity (identifying a user, agent, other cloud, etc.), a URI defining the target resource or server, and a command or action. For example, a user USER1 (identity) may request a SHUTDOWN (command) to shut down a logical server LS1 (located by URI). The URI 1205 serves as a mapping between information in that a given URI plus any one other aspect provides enough information
to determine the other aspects. For example, a URI plus the identity of a user provides sufficient information to determine the rights or roles assigned to that user for a given logical server (such as the commands that are authorized for the user of a target logical server) as well as the physical implementation aspect that defines the physical resources affected by the command.
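The identity-plus-URI-plus-command interaction can be sketched as two table lookups. The Python below is illustrative only; the URI form "scm://A/LS1" and all table contents are hypothetical.

# (identity, URI) -> authorized commands (the rights aspect).
RIGHTS = {("USER1", "scm://A/LS1"): {"shutdown", "reboot"}}
# URI -> physical resources affected (the implementation aspect).
IMPLEMENTATION = {"scm://A/LS1": "physical server PS1, cloud A"}

def execute(identity: str, uri: str, command: str) -> str:
    if command not in RIGHTS.get((identity, uri), set()):
        raise PermissionError(f"{identity} may not {command} {uri}")
    target = IMPLEMENTATION[uri]     # resolved transparently to the user
    return f"{command} applied to {target}"

print(execute("USER1", "scm://A/LS1", "shutdown"))
# -> shutdown applied to physical server PS1, cloud A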
The core components 1103 combine multiple types of data to compose or execute actions and user interface (UI). Actual work is done by the SCM 1101, which can be execution of an action or rendering of UI. The core components 1103 employ the identity aspect 1207 to determine the identity ("who") of the entity requesting an action to be performed. The entity or user may comprise an individual or a group of individuals. A user attempting to access the server cloud must first present valid credential information, such as a username and password or the like, which identifies the user to the SCM 1101. Each user has unique credentials which map to a corresponding role. Each individual user may be assigned separate and unique credentials within a given user group, and/or users within a given user group may be assigned to the same role with different credential information. The rights aspect 1209 incorporates predetermined roles assigned to each entity or user that define what that entity is allowed to do. Each role defines the rights and privileges assigned to one or more users as enforced by the rules engine 1107, such as which commands are authorized for a particular user or entity.
The presentation aspect 1211 includes the logical or virtual relationships that define how information is to be presented. There are many presentations or paths to the SCM 1101 of a server cloud and to its resources. The paths may be represented by addresses or the like according to any predetermined syntax or protocol. The presentation aspect 1211 incorporates various paths or logical representations used to access one or more logical servers or other resources within (or proxied by) the server cloud. Different presentations may correspond to different privileges. Each role may map to one or more presentations within the presentation aspect 1211. Generally, each role maps to the highest level presentation authorized for that role. The presentation aspect 1211 incorporates a logical identity of the server cloud, and may optionally include other logical representations, such as Data Center and/or Rack representations, depending upon the particular configuration.
The implementation aspect 1213 determines which physical resources or equipment of which cloud are affected by an action or command. A requested action may be sourced from an agent of a logical or physical server, or sourced externally, to be performed by a logical server within the server cloud or within another cloud or subcloud via proxy. The command or action may include the status of any given logical server within the cloud or proxied by the cloud. It is noted that the implementation aspect 1213 enables server abstraction so that logical servers may be abstracted from the underlying hardware. The implementation aspect 1213 also enables scripting abstraction, in that action scripts initiated by users or agents that might otherwise be invalid because of transparent physical changes are transparently handled by SCMs. The implementation aspect 1213 manages the relationship between logical and physical resources transparently to the user. Abstraction enhances scalability and maintenance because it simplifies server operation. Although virtualization software employing virtualization techniques may be employed as described herein, alternative abstraction technologies may be employed.
A user may have rights to a logical server (LS1) via the SCM regardless of rights to the physical server (PS1) to which the logical server is linked. The user may have no direct rights at all with respect to PS1 and may even lack any knowledge whatsoever of PS1. The SCM, however, controls and maintains the LS1 and PS1 relationship for controlling operations initiated by an authorized user. The SCM manages changes in relationships between components transparently to the users and agents. For example, the user may initiate a script to shut down LS1 on PS1, copy LS1 to PS2, start LS1 on PS2 and create a user on the moved LS1. The SCM validates the shutdown, copy, start and create requests for LS1 on behalf of the user. The SCM may re-map or route the requests to PS1 and PS2 and authenticate the requests as valid. PS1 and PS2 act on the user-initiated requests as authorized by the SCM, and the SCM re-maps feedback to LS1. The implementation aspect 1213 enables scripting abstraction since the LS context is the only context needed to manipulate the LS instance for both its logical and physical attributes. Thus, scripting is global in the sense that a script that is valid for one physical relationship remains valid regardless of the physical configuration or LS location, since the SCM transparently maintains the relationships to control actions and operations without requiring scripting modifications from the perspective of the user. The SCM may
modify scripting and procedures in accordance with the specific relationships at the time action is necessary (e.g., re-mapping), but such is handled and controlled transparently by the SCM. In this manner, even though it appears to the user that the entire operation is handled directly, the actual control mechanisms are transparently controlled by the SCM on behalf of the user.
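The scripting abstraction amounts to resolving the logical-to-physical mapping at execution time rather than at scripting time. The following Python sketch is illustrative only; the mapping table and function names are hypothetical.

mapping = {"LS1": "PS1"}   # maintained transparently by the SCM

def run(ls: str, command: str) -> str:
    ps = mapping[ls]                      # re-mapped at call time, not script time
    return f"{command} executed on {ps} for {ls}"

script = ["shutdown", "create user"]      # addresses only the LS context

print(run("LS1", script[0]))              # -> shutdown executed on PS1 for LS1
mapping["LS1"] = "PS2"                    # SCM moves LS1 to PS2 transparently
print(run("LS1", script[1]))              # -> create user executed on PS2 for LS1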
The following exemplary messaging structure illustrates an Agent Request, which is a request for a physical server agent (PSA), logical server agent (LSA) or SCM to perform one or more specific actions:
Example:

<?xml version="1.0" encoding="utf-8" ?>
<request>
  <header>
    <to>
      <class>AGENT</class>
      <identifier>PT.NO.DEV.DEV1DCAA</identifier>
    </to>
    <from>
      <class>SCM</class>
      <identifier>PT</identifier>
      <authentication type="intrinsic" />
    </from>
    <date>11 Nov 2001 16:59:59 GMT</date>
    <response request="yes" path="http://dev1.protier.com/services/callback.aspx" />
  </header>
  <action type="password change" id="98fu207sg">
    <parameter name="user" value="dev-jsmith"/>
    <parameter name="domain" value="DEV1"/>
    <parameter name="new password" value="waffles"/>
  </action>
</request>
An Agent Response is generated in reply to an Agent Request, once the specified agent has completed the request. A response is only generated if the initial request included a valid response block. The following exemplary messaging structure illustrates an Agent Response:
Example:

<?xml version="1.0" encoding="utf-8" ?>
<response>
  <header>
    <to>
      <class>SCM</class>
      <identifier>PT</identifier>
    </to>
    <from>
      <class>AGENT</class>
      <identifier>PT.NO.DEV.DEV1DCAA</identifier>
      <authentication type="intrinsic" />
    </from>
    <date>11 Nov 2001 17:06:12 GMT</date>
    <response request="no"/>
  </header>
  <action id="98fu207sg">
    <code>SUCCESS</code>
    <message>The password for user DEV1\dev-jsmith has been changed.</message>
  </action>
</response>
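Messages of this form can be composed with standard XML tooling. The Python sketch below is illustrative only; the helper function agent_request is hypothetical, it mirrors the tag names of the example above, and it omits the date and response-callback header fields for brevity.

import xml.etree.ElementTree as ET

def agent_request(to_id, from_id, action_type, action_id, params):
    """Build an Agent Request document shaped like the example above."""
    req = ET.Element("request")
    header = ET.SubElement(req, "header")
    to = ET.SubElement(header, "to")
    ET.SubElement(to, "class").text = "AGENT"
    ET.SubElement(to, "identifier").text = to_id
    frm = ET.SubElement(header, "from")
    ET.SubElement(frm, "class").text = "SCM"
    ET.SubElement(frm, "identifier").text = from_id
    ET.SubElement(frm, "authentication", type="intrinsic")
    action = ET.SubElement(req, "action", type=action_type, id=action_id)
    for name, value in params.items():
        ET.SubElement(action, "parameter", name=name, value=value)
    return ET.tostring(req, encoding="unicode")

xml = agent_request("PT.NO.DEV.DEV1DCAA", "PT", "password change", "98fu207sg",
                    {"user": "dev-jsmith", "domain": "DEV1",
                     "new password": "waffles"})
print(xml)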
Although a system and method according to the present invention has been described in connection with one or more embodiments, it is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the invention as defined by the appended claims.

Claims

1. A server cloud manager (SCM) for controlling logical servers and physical resources that comprise a virtualized logical server cloud, comprising: a plurality of core components that serve as a shared foundation to collectively manage events, validate and authorize server cloud users and agents, enforce predetermined requirements and rules and store operation data; and at least one interface component for enabling communication with external entities, including an SCM proxy manager that enables communication with one or more SCMs of other server clouds.
2. The SCM of claim 1, wherein the core components comprise: an event engine that controls and manages events to be performed by the SCM; an authentication engine that validates users and agents of the server cloud and that issues security credentials to authorized users and agents; a rules engine that validates and enforces predetermined requirements and rules to be followed by SCM operations; and a database that stores information and includes data validation, data formatting and rules validation for the SCM and the server cloud.
3. The SCM of claim 2, wherein the events controlled and managed by the event engine include individual events and collections of events.
4. The SCM of claim 1, wherein the at least one interface component includes a user manager, and wherein the core components and the user manager collectively render graphical user interfaces and authorize users of the server cloud according to predetermined roles that define the rights and privileges for each user while accessing server cloud resources.
5. The SCM of claim 1, wherein the at least one interface component includes an agent manager that coordinates SCM events with agents within the server cloud that perform specified actions.
6. The SCM of claim 1, wherein the at least one interface component includes an administrator manager that renders a user interface, enables access and control by one or more administrators of the SCM, and coordinates with core components to authenticate administrative requests.
7. The SCM of claim 1, wherein the at least one interface component includes an advanced scripting manager that provides advanced scripting logic and interfaces to other management systems.
8. The SCM of claim 1, wherein the at least one interface component includes an SNMP manager that provides an interface between the SCM and an
SNMP management application.
9. The SCM of claim 1, wherein the at least one interface component includes an image manager that optimizes use of disk resources and files throughout a predetermined domain of the SCM.
10. The SCM of claim 1, wherein the core components employ a URI mapping as a syntax handle that provides sufficient context information and that describes a management relationship between different components of the SCM.
11. The SCM of claim 10, wherein the URI mapping includes an identity aspect that determines an identity of an entity requesting an action to be performed.
12. The SCM of claim 10, wherein the URI mapping includes a rights aspect that incorporates predetermined roles assigned to an entity that defines the rights and privileges assigned to the entity.
13. The SCM of claim 10, wherein the URI mapping includes a presentation aspect that includes logical relationships that define how information is to be presented.
14. The SCM of claim 10, wherein the URI mapping includes an implementation aspect that determines which resources or equipment of a domain of the SCM are affected by actions and commands.
15. The SCM of claim 14, wherein the implementation aspect supports server abstraction.
16. The SCM of claim 14, wherein the implementation aspect supports scripting abstraction.
17. The SCM of claim 14, wherein the implementation aspect incorporates a proxy function for relaying actions and commands to another server cloud.
18. A server cloud system, comprising: a first server cloud including a first server cloud manager (SCM) and a first logical server; and a second server cloud including a second SCM; the first and second SCMs configured to cooperate to manage operation of the first logical server.
19. The server cloud system of claim 18, wherein the first and second SCMs are configured to cooperate to move the first logical server from the first server cloud to the second server cloud.
20. The server cloud system of claim 18, further comprising: the second server cloud including a second logical server; and the first and second SCMs being configured to cooperate to ensure that only one of the first and second logical servers is active at any given time.
21. The server cloud system of claim 20, wherein the first logical server is activated during a first time period and is placed in standby during a second time period, and wherein the second logical server is activated during the second time period and placed in standby during the first time period.
22. The server cloud system of claim 18, wherein the first and second SCMs are configured to cooperate to replicate the first logical server to a second and unique logical server within the second server cloud.
23. The server cloud system of claim 18, wherein the first and second server clouds have a trust relationship and wherein the first and second SCMs are peers.
24. The server cloud system of claim 23, wherein the first logical server is within a subcloud of the first server cloud and wherein the second SCM has rights over the subcloud.
25. The server cloud system of claim 18, further comprising: an intermediary that has a trust relationship with the first and second server clouds; and wherein the first and second server clouds cooperate with each other through the intermediary.
26. The server cloud system of claim 25, wherein the first and second SCMs are configured to cooperate via the intermediary to move the first logical server from the first server cloud to the second server cloud.
27. The server cloud system of claim 25, further comprising: the second server cloud including a second logical server; and the first and second SCMs being configured to cooperate via the intermediary to ensure that only one of the first and second logical servers is active at any given time.
28. The server cloud system of claim 25, wherein the first and second SCMs are configured to cooperate via the intermediary to replicate the first logical server to a second and unique logical server within the second server cloud.
29. The server cloud system of claim 18, wherein the second SCM operates as a proxy for the first logical server so that the first logical server may appear to exist within the second server cloud while actually residing in the first server cloud.
30. The server cloud system of claim 18, further comprising: the first server cloud including a second logical server; and wherein the second SCM operates as a proxy for the first and second logical servers and wherein the first and second SCMs are configured to cooperate to ensure that only one of the first and second logical servers is active at any given time.
31. The server cloud system of claim 18, wherein the second server cloud is an exchange cloud that employs intercloud proxy and commercial terms to enable commercial transactions associated with resources within the first server cloud.
32. The server cloud system of claim 31, wherein the first and second server clouds establish a commercial relationship for the purpose of enabling the second server cloud to direct the use of and resell logical server resources in the first server cloud.
33. The server cloud system of claim 31, further comprising: a third server cloud having an authorized user, the third server cloud having a commercial relationship with the exchange cloud; and wherein the authorized user gains access to the first logical server active in the first server cloud via intercloud proxy through the exchange cloud.
34. The server cloud system of claim 31, further comprising:
a third server cloud having a commercial relationship with the exchange cloud; and the exchange cloud transferring the first logical server from the first server cloud to the third server cloud for access by an end consumer.
35. The server cloud system of claim 34, wherein location of the first logical server is transparent to the end consumer.
36. The server cloud system of claim 34, wherein transfer of the first logical server is performed by the exchange cloud transparently to the end consumer.
PCT/US2002/040286 2001-11-30 2002-12-16 Virtual server cloud interfacing WO2004059503A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/124,195 US7574496B2 (en) 2001-11-30 2002-04-17 Virtual server cloud interfacing
PCT/US2002/040286 WO2004059503A1 (en) 2001-11-30 2002-12-16 Virtual server cloud interfacing
AU2002364059A AU2002364059A1 (en) 2002-12-16 2002-12-16 Virtual server cloud interfacing

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US33425301P 2001-11-30 2001-11-30
US10/124,195 US7574496B2 (en) 2001-11-30 2002-04-17 Virtual server cloud interfacing
PCT/US2002/040286 WO2004059503A1 (en) 2001-11-30 2002-12-16 Virtual server cloud interfacing

Publications (1)

Publication Number Publication Date
WO2004059503A1 true WO2004059503A1 (en) 2004-07-15

Family

ID=32995689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/040286 WO2004059503A1 (en) 2001-11-30 2002-12-16 Virtual server cloud interfacing

Country Status (2)

Country Link
US (1) US7574496B2 (en)
WO (1) WO2004059503A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090199116A1 (en) * 2008-02-04 2009-08-06 Thorsten Von Eicken Systems and methods for efficiently booting and configuring virtual servers
WO2013118005A1 (en) * 2012-02-06 2013-08-15 International Business Machines Corporation Consolidating disparate cloud service data and behavior based on trust relationships between cloud services
US20140047086A1 (en) * 2012-08-10 2014-02-13 Adobe Systems, Incorporated Systems and Methods for Providing Hot Spare Nodes
CN104699625A (en) * 2009-09-21 2015-06-10 甲骨文国际公司 System and method for synchronizing transient resource usage between virtual machines in a hypervisor environment

Families Citing this family (196)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2395498C (en) * 1999-12-24 2013-08-27 Telstra New Wave Pty Ltd A virtual token
US20030221012A1 (en) * 2002-05-22 2003-11-27 International Business Machines Corporation Resource manager system and method for access control to physical resources in an application hosting environment
US8577795B2 (en) 2002-10-10 2013-11-05 Convergys Information Management Group, Inc. System and method for revenue and authorization management
US8489742B2 (en) * 2002-10-10 2013-07-16 Convergys Information Management Group, Inc. System and method for work management
US8041761B1 (en) * 2002-12-23 2011-10-18 Netapp, Inc. Virtual filer and IP space based IT configuration transitioning framework
US9754038B2 (en) * 2003-02-05 2017-09-05 Open Text Sa Ulc Individually deployable managed objects and system and method for managing the same
US8473620B2 (en) * 2003-04-14 2013-06-25 Riverbed Technology, Inc. Interception of a cloud-based communication connection
US20050044380A1 (en) * 2003-08-21 2005-02-24 International Business Machines Corporation Method and system to enable access to multiple restricted applications through user's host application
US7467102B2 (en) * 2003-09-11 2008-12-16 International Business Machines Corporation Request type grid computing
US7769004B2 (en) * 2003-09-26 2010-08-03 Surgient, Inc. Network abstraction and isolation layer for masquerading machine identity of a computer
US7539640B2 (en) 2003-11-06 2009-05-26 Trading Technologies International, Inc. Aggregated trading system
US8336040B2 (en) 2004-04-15 2012-12-18 Raytheon Company System and method for topology-aware job scheduling and backfilling in an HPC environment
US8335909B2 (en) 2004-04-15 2012-12-18 Raytheon Company Coupling processors to each other for high performance computing (HPC)
US9178784B2 (en) 2004-04-15 2015-11-03 Raytheon Company System and method for cluster management based on HPC architecture
GB2419697A (en) 2004-10-29 2006-05-03 Hewlett Packard Development Co Virtual overlay infrastructures each having an infrastructure controller
GB2419703A (en) * 2004-10-29 2006-05-03 Hewlett Packard Development Co Isolated virtual overlay infrastructures each having an interface to control interaction with others
GB2419702A (en) * 2004-10-29 2006-05-03 Hewlett Packard Development Co Virtual overlay infrastructures which can be suspended and later reactivated
US9378099B2 (en) * 2005-06-24 2016-06-28 Catalogic Software, Inc. Instant data center recovery
US8478986B2 (en) * 2005-08-10 2013-07-02 Riverbed Technology, Inc. Reducing latency of split-terminated secure communication protocol sessions
US8438628B2 (en) * 2005-08-10 2013-05-07 Riverbed Technology, Inc. Method and apparatus for split-terminating a secure network connection, with client authentication
US7805600B2 (en) * 2005-09-15 2010-09-28 Sas Institute Inc. Computer-implemented systems and methods for managing images
KR100862659B1 (en) * 2006-01-04 2008-10-10 삼성전자주식회사 Method and apparatus for accessing home storage or internet storage
US8782393B1 (en) 2006-03-23 2014-07-15 F5 Networks, Inc. Accessing SSL connection data by a third-party
US8078728B1 (en) 2006-03-31 2011-12-13 Quest Software, Inc. Capacity pooling for application reservation and delivery
US20080104699A1 (en) * 2006-09-28 2008-05-01 Microsoft Corporation Secure service computation
US20080083031A1 (en) * 2006-12-20 2008-04-03 Microsoft Corporation Secure service computation
US8655939B2 (en) * 2007-01-05 2014-02-18 Digital Doors, Inc. Electromagnetic pulse (EMP) hardened information infrastructure with extractor, cloud dispersal, secure storage, content analysis and classification and method therefor
US20080192648A1 (en) * 2007-02-08 2008-08-14 Nuova Systems Method and system to create a virtual topology
US7925749B1 (en) * 2007-04-24 2011-04-12 Netapp, Inc. System and method for transparent data replication over migrating virtual servers
US8914774B1 (en) 2007-11-15 2014-12-16 Appcelerator, Inc. System and method for tagging code to determine where the code runs
US8954989B1 (en) 2007-11-19 2015-02-10 Appcelerator, Inc. Flexible, event-driven JavaScript server architecture
US8260845B1 (en) 2007-11-21 2012-09-04 Appcelerator, Inc. System and method for auto-generating JavaScript proxies and meta-proxies
US8566807B1 (en) 2007-11-23 2013-10-22 Appcelerator, Inc. System and method for accessibility of document object model and JavaScript by other platforms
US8719451B1 (en) 2007-11-23 2014-05-06 Appcelerator, Inc. System and method for on-the-fly, post-processing document object model manipulation
US8756579B1 (en) 2007-12-03 2014-06-17 Appcelerator, Inc. Client-side and server-side unified validation
US8806431B1 (en) 2007-12-03 2014-08-12 Appecelerator, Inc. Aspect oriented programming
US8819539B1 (en) 2007-12-03 2014-08-26 Appcelerator, Inc. On-the-fly rewriting of uniform resource locators in a web-page
US8938491B1 (en) 2007-12-04 2015-01-20 Appcelerator, Inc. System and method for secure binding of client calls and server functions
US8527860B1 (en) 2007-12-04 2013-09-03 Appcelerator, Inc. System and method for exposing the dynamic web server-side
US8335982B1 (en) 2007-12-05 2012-12-18 Appcelerator, Inc. System and method for binding a document object model through JavaScript callbacks
US8285813B1 (en) 2007-12-05 2012-10-09 Appcelerator, Inc. System and method for emulating different user agents on a server
US8639743B1 (en) 2007-12-05 2014-01-28 Appcelerator, Inc. System and method for on-the-fly rewriting of JavaScript
US8194674B1 (en) 2007-12-20 2012-06-05 Quest Software, Inc. System and method for aggregating communications and for translating between overlapping internal network addresses and unique external network addresses
US8489995B2 (en) 2008-03-18 2013-07-16 Rightscale, Inc. Systems and methods for efficiently managing and configuring virtual servers
US8849971B2 (en) * 2008-05-28 2014-09-30 Red Hat, Inc. Load balancing in cloud-based networks
US8291079B1 (en) 2008-06-04 2012-10-16 Appcelerator, Inc. System and method for developing, deploying, managing and monitoring a web application in a single environment
US8880678B1 (en) 2008-06-05 2014-11-04 Appcelerator, Inc. System and method for managing and monitoring a web application using multiple cloud providers
US9489647B2 (en) 2008-06-19 2016-11-08 Csc Agility Platform, Inc. System and method for a cloud computing abstraction with self-service portal for publishing resources
US10411975B2 (en) 2013-03-15 2019-09-10 Csc Agility Platform, Inc. System and method for a cloud computing abstraction with multi-tier deployment policy
US9069599B2 (en) 2008-06-19 2015-06-30 Servicemesh, Inc. System and method for a cloud computing abstraction layer with security zone facilities
EP2316071A4 (en) 2008-06-19 2011-08-17 Servicemesh Inc Cloud computing gateway, cloud computing hypervisor, and methods for implementing same
US9087308B2 (en) * 2008-07-25 2015-07-21 Appirio, Inc. System and method for conducting competitions
US8250215B2 (en) * 2008-08-12 2012-08-21 Sap Ag Method and system for intelligently leveraging cloud computing resources
US7596620B1 (en) * 2008-11-04 2009-09-29 Aptana, Inc. System and method for developing, deploying, managing and monitoring a web application in a single environment
US20100064033A1 (en) * 2008-09-08 2010-03-11 Franco Travostino Integration of an internal cloud infrastructure with existing enterprise services and systems
US8069242B2 (en) * 2008-11-14 2011-11-29 Cisco Technology, Inc. System, method, and software for integrating cloud computing systems
US8984505B2 (en) * 2008-11-26 2015-03-17 Red Hat, Inc. Providing access control to user-controlled resources in a cloud computing environment
US8392530B1 (en) 2008-12-18 2013-03-05 Adobe Systems Incorporated Media streaming in a multi-tier client-server architecture
US20120005724A1 (en) * 2009-02-09 2012-01-05 Imera Systems, Inc. Method and system for protecting private enterprise resources in a cloud computing environment
US9485117B2 (en) * 2009-02-23 2016-11-01 Red Hat, Inc. Providing user-controlled resources for cloud computing environments
US8977750B2 (en) * 2009-02-24 2015-03-10 Red Hat, Inc. Extending security platforms to cloud-based networks
US8707043B2 (en) * 2009-03-03 2014-04-22 Riverbed Technology, Inc. Split termination of secure communication sessions with mutual certificate-based authentication
US20100242101A1 (en) * 2009-03-20 2010-09-23 Reese Jr George Edward Method and system for securely managing access and encryption credentials in a shared virtualization environment
US8261126B2 (en) 2009-04-03 2012-09-04 Microsoft Corporation Bare metal machine recovery from the cloud
US8805953B2 (en) * 2009-04-03 2014-08-12 Microsoft Corporation Differential file and system restores from peers and the cloud
US20100257403A1 (en) * 2009-04-03 2010-10-07 Microsoft Corporation Restoration of a system from a set of full and partial delta system snapshots across a distributed system
US8769049B2 (en) * 2009-04-24 2014-07-01 Microsoft Corporation Intelligent tiers of backup data
US8560639B2 (en) * 2009-04-24 2013-10-15 Microsoft Corporation Dynamic placement of replica data
US8935366B2 (en) * 2009-04-24 2015-01-13 Microsoft Corporation Hybrid distributed and cloud backup architecture
US8769055B2 (en) * 2009-04-24 2014-07-01 Microsoft Corporation Distributed backup and versioning
US8578076B2 (en) * 2009-05-01 2013-11-05 Citrix Systems, Inc. Systems and methods for establishing a cloud bridge between virtual storage resources
US9501329B2 (en) * 2009-05-08 2016-11-22 Rackspace Us, Inc. Methods and systems for cloud computing management
US20100318609A1 (en) * 2009-06-15 2010-12-16 Microsoft Corporation Bridging enterprise networks into cloud
US20100325139A1 (en) * 2009-06-18 2010-12-23 Microsoft Corporation Service Provider Management Console
US8244559B2 (en) * 2009-06-26 2012-08-14 Microsoft Corporation Cloud computing resource broker
US8966017B2 (en) * 2009-07-09 2015-02-24 Novell, Inc. Techniques for cloud control and management
US8886788B2 (en) 2009-08-31 2014-11-11 Accenture Global Services Limited Enterprise-level management, control and information aspects of cloud console
US8271655B2 (en) 2009-12-03 2012-09-18 International Business Machines Corporation Cloud computing roaming services
US9129052B2 (en) * 2009-12-03 2015-09-08 International Business Machines Corporation Metering resource usage in a cloud computing environment
US20110137805A1 (en) * 2009-12-03 2011-06-09 International Business Machines Corporation Inter-cloud resource sharing within a cloud computing environment
US8914469B2 (en) * 2009-12-11 2014-12-16 International Business Machines Corporation Negotiating agreements within a cloud computing environment
US9009294B2 (en) * 2009-12-11 2015-04-14 International Business Machines Corporation Dynamic provisioning of resources within a cloud computing environment
US9122685B2 (en) 2009-12-15 2015-09-01 International Business Machines Corporation Operating cloud computing and cloud computing information system
WO2011091056A1 (en) * 2010-01-19 2011-07-28 Servicemesh, Inc. System and method for a cloud computing abstraction layer
US8810829B2 (en) 2010-03-10 2014-08-19 Ricoh Co., Ltd. Method and apparatus for a print driver to control document and workflow transfer
US8547576B2 (en) 2010-03-10 2013-10-01 Ricoh Co., Ltd. Method and apparatus for a print spooler to control document and workflow transfer
US8700892B2 (en) 2010-03-19 2014-04-15 F5 Networks, Inc. Proxy SSL authentication in split SSL for client-side proxy agent resources with content insertion
US8504400B2 (en) * 2010-03-24 2013-08-06 International Business Machines Corporation Dynamically optimized distributed cloud computing-based business process management (BPM) system
US9137213B2 (en) * 2010-03-26 2015-09-15 Avaya Inc. On-demand feature server activation in the cloud
US9342801B2 (en) 2010-03-29 2016-05-17 Amazon Technologies, Inc. Managing committed processing rates for shared resources
WO2011121353A2 (en) * 2010-03-30 2011-10-06 Disos Pty Ltd Cloud computing operating system and method
US8572706B2 (en) * 2010-04-26 2013-10-29 Vmware, Inc. Policy engine for cloud platform
US8984589B2 (en) * 2010-04-27 2015-03-17 Accenture Global Services Limited Cloud-based billing, credential, and data sharing management system
US9075663B2 (en) * 2010-05-12 2015-07-07 Samsung Electronics Co., Ltd. Cloud-based web workers and storages
US8504689B2 (en) 2010-05-28 2013-08-06 Red Hat, Inc. Methods and systems for cloud deployment analysis featuring relative cloud resource importance
WO2011153155A2 (en) 2010-05-30 2011-12-08 Sonian, Inc. Method and system for arbitraging computing resources in a cloud computing environment
US9460307B2 (en) 2010-06-15 2016-10-04 International Business Machines Corporation Managing sensitive data in cloud computing environments
EP2583211B1 (en) 2010-06-15 2020-04-15 Oracle International Corporation Virtual computing infrastructure
US10715457B2 (en) 2010-06-15 2020-07-14 Oracle International Corporation Coordination of processes in cloud computing environments
US8904382B2 (en) 2010-06-17 2014-12-02 International Business Machines Corporation Creating instances of cloud computing environments
WO2011162750A1 (en) * 2010-06-23 2011-12-29 Hewlett-Packard Development Company, L.P. Authorization control
US9183374B2 (en) * 2010-07-15 2015-11-10 Novell, Inc. Techniques for identity-enabled interface deployment
US9323561B2 (en) 2010-08-13 2016-04-26 International Business Machines Corporation Calibrating cloud computing environments
US8478845B2 (en) 2010-08-16 2013-07-02 International Business Machines Corporation End-to-end provisioning of storage clouds
WO2012023050A2 (en) 2010-08-20 2012-02-23 Overtis Group Limited Secure cloud computing system and method
US9003014B2 (en) 2010-08-31 2015-04-07 International Business Machines Corporation Modular cloud dynamic application assignment
US9342368B2 (en) 2010-08-31 2016-05-17 International Business Machines Corporation Modular cloud computing system
US8607242B2 (en) 2010-09-02 2013-12-10 International Business Machines Corporation Selecting cloud service providers to perform data processing jobs based on a plan for a cloud pipeline including processing stages
US8612330B1 (en) * 2010-09-14 2013-12-17 Amazon Technologies, Inc. Managing bandwidth for shared resources
US8694400B1 (en) 2010-09-14 2014-04-08 Amazon Technologies, Inc. Managing operational throughput for shared resources
US20120102103A1 (en) * 2010-10-20 2012-04-26 Microsoft Corporation Running legacy applications on cloud computing systems without rewriting
US9442771B2 (en) 2010-11-24 2016-09-13 Red Hat, Inc. Generating configurable subscription parameters
US9606831B2 (en) * 2010-11-30 2017-03-28 Red Hat, Inc. Migrating virtual machine operations
US9563479B2 (en) * 2010-11-30 2017-02-07 Red Hat, Inc. Brokering optimized resource supply costs in host cloud-based network using predictive workloads
US8683026B2 (en) 2010-12-08 2014-03-25 International Business Machines Corporation Framework providing unified infrastructure management for polymorphic information technology (IT) functions across disparate groups in a cloud computing environment
US9195509B2 (en) 2011-01-05 2015-11-24 International Business Machines Corporation Identifying optimal platforms for workload placement in a networked computing environment
US8572623B2 (en) 2011-01-11 2013-10-29 International Business Machines Corporation Determining an optimal computing environment for running an image based on performance of similar images
US9460169B2 (en) 2011-01-12 2016-10-04 International Business Machines Corporation Multi-tenant audit awareness in support of cloud environments
US8868749B2 (en) 2011-01-18 2014-10-21 International Business Machines Corporation Workload placement on an optimal platform in a networked computing environment
CN102638484A (en) * 2011-02-15 2012-08-15 鸿富锦精密工业(深圳)有限公司 Cloud access system and method for displaying data object according to community network
US9483354B1 (en) * 2011-03-11 2016-11-01 Veritas Technologies Llc Techniques for providing data management using a backup data bank system
US8407501B2 (en) 2011-03-28 2013-03-26 International Business Machines Corporation Allocation of storage resources in a networked computing environment based on energy utilization
US8806483B2 (en) 2011-04-13 2014-08-12 International Business Machines Corporation Determining starting values for virtual machine attributes in a networked computing environment
US20120271949A1 (en) 2011-04-20 2012-10-25 International Business Machines Corporation Real-time data analysis for resource provisioning among systems in a networked computing environment
US9817677B2 (en) 2011-04-22 2017-11-14 Microsoft Technologies Licensing, LLC Rule based data driven validation
US9519496B2 (en) * 2011-04-26 2016-12-13 Microsoft Technology Licensing, Llc Detecting and preventing virtual disk storage linkage faults
CA3081068C (en) 2011-04-29 2023-10-17 American Greetings Corporation Systems, methods and apparatuses for creating, editing, distributing and viewing electronic greeting cards
US8806485B2 (en) 2011-05-03 2014-08-12 International Business Machines Corporation Configuring virtual machine images in a networked computing environment
US8793377B2 (en) 2011-05-03 2014-07-29 International Business Machines Corporation Identifying optimal virtual machine images in a networked computing environment
US20120297066A1 (en) * 2011-05-19 2012-11-22 Siemens Aktiengesellschaft Method and system for apparatus means for providing a service requested by a client in a public cloud infrastructure
US8630008B2 (en) 2011-05-20 2014-01-14 Xerox Corporation Method and system for managing print device information using a cloud administration system
US8730502B2 (en) 2011-05-20 2014-05-20 Xerox Corporation Method and system for managing print jobs using a cloud administration system
US8505004B2 (en) 2011-05-20 2013-08-06 Xerox Corporation Methods and systems for providing software updates using a cloud administration system
US9218578B2 (en) 2011-05-20 2015-12-22 Xerox Corporation Methods and systems for managing print device licenses using a cloud administration system
US8762709B2 (en) 2011-05-20 2014-06-24 Lockheed Martin Corporation Cloud computing method and system
US8593676B2 (en) 2011-05-20 2013-11-26 Xerox Corporation Method and system for managing print device information using a cloud administration system
US8537398B2 (en) 2011-05-20 2013-09-17 Xerox Corporation Methods and systems for tracking and managing print device inventory information using a cloud administration system
US8769531B2 (en) 2011-05-25 2014-07-01 International Business Machines Corporation Optimizing the configuration of virtual machine instances in a networked computing environment
US9251033B2 (en) 2011-07-07 2016-02-02 Vce Company, Llc Automatic monitoring and just-in-time resource provisioning system
US8943564B2 (en) 2011-07-21 2015-01-27 International Business Machines Corporation Virtual computer and service
WO2013028636A1 (en) * 2011-08-19 2013-02-28 Panavisor, Inc Systems and methods for managing a virtual infrastructure
US8793378B2 (en) 2011-09-01 2014-07-29 International Business Machines Corporation Identifying services and associated capabilities in a networked computing environment
US20130117157A1 (en) * 2011-11-09 2013-05-09 Gravitant, Inc. Optimally sourcing services in hybrid cloud environments
US8682958B2 (en) 2011-11-17 2014-03-25 Microsoft Corporation Decoupling cluster data from cloud deployment
US9367354B1 (en) 2011-12-05 2016-06-14 Amazon Technologies, Inc. Queued workload service in a multi tenant environment
CN103186570B (en) * 2011-12-28 2017-08-18 富泰华工业(深圳)有限公司 Data source query system and method based on cloud server
US8756209B2 (en) 2012-01-04 2014-06-17 International Business Machines Corporation Computing resource allocation based on query response analysis in a networked computing environment
EP2812809A4 (en) 2012-02-10 2016-05-25 Oracle Int Corp Cloud computing services framework
KR101373461B1 (en) * 2012-02-24 2014-03-11 주식회사 팬택 Terminal and method for using cloud services
US9270523B2 (en) 2012-02-28 2016-02-23 International Business Machines Corporation Reconfiguring interrelationships between components of virtual computing networks
US8909769B2 (en) 2012-02-29 2014-12-09 International Business Machines Corporation Determining optimal component location in a networked computing environment
US8335851B1 (en) * 2012-03-12 2012-12-18 Ringcentral, Inc. Network resource deployment for cloud-based services
US9071613B2 (en) * 2012-04-06 2015-06-30 International Business Machines Corporation Dynamic allocation of workload deployment units across a plurality of clouds
US9086929B2 (en) 2012-04-06 2015-07-21 International Business Machines Corporation Dynamic allocation of a workload across a plurality of clouds
US9292352B2 (en) 2012-08-10 2016-03-22 Adobe Systems Incorporated Systems and methods for cloud management
US9251517B2 (en) 2012-08-28 2016-02-02 International Business Machines Corporation Optimizing service factors for computing resources in a networked computing environment
US9778860B2 (en) 2012-09-12 2017-10-03 Microsoft Technology Licensing, Llc Re-TRIM of free space within VHDX
CN103051469B (en) * 2012-09-13 2016-04-20 曙光信息产业(北京)有限公司 Centralized configuration management method in a cloud environment
FR2996018A1 (en) * 2012-09-27 2014-03-28 France Telecom Device and method for managing access to a set of computer resources and networks provided to an entity by a cloud computing system
US8810821B2 (en) 2012-12-21 2014-08-19 Xerox Corporation Method and system for managing service activity in a network printing context using a cloud administration system
US8682870B1 (en) 2013-03-01 2014-03-25 Storagecraft Technology Corporation Defragmentation during multiphase deduplication
US8738577B1 (en) 2013-03-01 2014-05-27 Storagecraft Technology Corporation Change tracking for multiphase deduplication
US8732135B1 (en) * 2013-03-01 2014-05-20 Storagecraft Technology Corporation Restoring a backup from a deduplication vault storage
US8874527B2 (en) 2013-03-01 2014-10-28 Storagecraft Technology Corporation Local seeding of a restore storage for restoring a backup from a remote deduplication vault storage
US20140280367A1 (en) * 2013-03-14 2014-09-18 Sap Ag Silo-aware databases
US9619545B2 (en) 2013-06-28 2017-04-11 Oracle International Corporation Naïve, client-side sharding with online addition of shards
US9710292B2 (en) 2013-08-02 2017-07-18 International Business Machines Corporation Allowing management of a virtual machine by multiple cloud providers
KR102012259B1 (en) * 2013-08-21 2019-08-21 한국전자통신연구원 Method and apparatus for controlling resource of cloud virtual base station
FR3015168A1 (en) * 2013-12-12 2015-06-19 Orange Token authentication method
US8751454B1 (en) 2014-01-28 2014-06-10 Storagecraft Technology Corporation Virtual defragmentation in a deduplication vault
US9762436B2 (en) * 2014-02-25 2017-09-12 Red Hat, Inc. Unified and persistent network configuration
US10178181B2 (en) * 2014-04-02 2019-01-08 Cisco Technology, Inc. Interposer with security assistant key escrow
US10979279B2 (en) 2014-07-03 2021-04-13 International Business Machines Corporation Clock synchronization in cloud computing
US10834592B2 (en) 2014-07-17 2020-11-10 Cirrent, Inc. Securing credential distribution
US10154409B2 (en) 2014-07-17 2018-12-11 Cirrent, Inc. Binding an authenticated user with a wireless device
US10356651B2 (en) 2014-07-17 2019-07-16 Cirrent, Inc. Controlled connection of a wireless device to a network
US9942756B2 (en) 2014-07-17 2018-04-10 Cirrent, Inc. Securing credential distribution
GB2528473B (en) 2014-07-23 2016-06-22 Ibm Effective roaming for software-as-a-service infrastructure
US9800673B2 (en) 2014-08-20 2017-10-24 At&T Intellectual Property I, L.P. Service compiler component and service controller for open systems interconnection layer 4 through layer 7 services in a cloud computing system
US10291689B2 (en) 2014-08-20 2019-05-14 At&T Intellectual Property I, L.P. Service centric virtual network function architecture for development and deployment of open systems interconnection communication model layer 4 through layer 7 services in a cloud computing system
US9473567B2 (en) 2014-08-20 2016-10-18 At&T Intellectual Property I, L.P. Virtual zones for open systems interconnection layer 4 through layer 7 services in a cloud computing system
US9742690B2 (en) 2014-08-20 2017-08-22 At&T Intellectual Property I, L.P. Load adaptation architecture framework for orchestrating and managing services in a cloud computing system
US9749242B2 (en) 2014-08-20 2017-08-29 At&T Intellectual Property I, L.P. Network platform as a service layer for open systems interconnection communication model layer 4 through layer 7 services
US9992277B2 (en) 2015-03-31 2018-06-05 At&T Intellectual Property I, L.P. Ephemeral feedback instances
US9524200B2 (en) 2015-03-31 2016-12-20 At&T Intellectual Property I, L.P. Consultation among feedback instances
US10277666B2 (en) 2015-03-31 2019-04-30 At&T Intellectual Property I, L.P. Escalation of feedback instances
US9769206B2 (en) 2015-03-31 2017-09-19 At&T Intellectual Property I, L.P. Modes of policy participation for feedback instances
US10129157B2 (en) 2015-03-31 2018-11-13 At&T Intellectual Property I, L.P. Multiple feedback instance inter-coordination to determine optimal actions
US10129156B2 (en) 2015-03-31 2018-11-13 At&T Intellectual Property I, L.P. Dynamic creation and management of ephemeral coordinated feedback instances
US9300660B1 (en) * 2015-05-29 2016-03-29 Pure Storage, Inc. Providing authorization and authentication in a cloud for a user of a storage array
US10158727B1 (en) 2016-03-16 2018-12-18 Equinix, Inc. Service overlay model for a co-location facility
US10530632B1 (en) * 2017-09-29 2020-01-07 Equinix, Inc. Inter-metro service chaining
US10721144B2 (en) 2017-12-22 2020-07-21 At&T Intellectual Property I, L.P. Virtualized intelligent and integrated network monitoring as a service
CN110389817B (en) * 2018-04-20 2023-05-23 伊姆西Ip控股有限责任公司 Scheduling method, apparatus and computer-readable medium for a multi-cloud system
US10749868B2 (en) * 2018-06-29 2020-08-18 Microsoft Technology Licensing, Llc Registration of the same domain with different cloud services networks
US10739983B1 (en) 2019-04-10 2020-08-11 Servicenow, Inc. Configuration and management of swimlanes in a graphical user interface

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5611050A (en) * 1993-12-03 1997-03-11 Xerox Corporation Method for selectively performing event on computer controlled device whose location and allowable operation is consistent with the contextual and locational attributes of the event
US6272537B1 (en) * 1997-11-17 2001-08-07 Fujitsu Limited Method for building element manager for a computer network element using a visual element manager builder process

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4912628A (en) 1988-03-15 1990-03-27 International Business Machines Corp. Suspending and resuming processing of tasks running in a virtual machine data processing system
US5201049A (en) 1988-09-29 1993-04-06 International Business Machines Corporation System for executing applications program concurrently/serially on different virtual machines
US5062037A (en) 1988-10-24 1991-10-29 Ibm Corp. Method to provide concurrent execution of distributed application programs by a host computer and an intelligent work station on an SNA network
US5802290A (en) 1992-07-29 1998-09-01 Virtual Computer Corporation Computer network of distributed virtual computers which are EAC reconfigurable in response to instruction to be executed
SE9402059D0 (en) 1994-06-13 1994-06-13 Ellemtel Utvecklings Ab Methods and apparatus for telecommunications
US6069894A (en) * 1995-06-12 2000-05-30 Telefonaktiebolaget Lm Ericsson Enhancement of network operation and performance
US5996026A (en) * 1995-09-05 1999-11-30 Hitachi, Ltd. Method and apparatus for connecting I/O channels between sub-channels and devices through virtual machines controlled by a hypervisor using ID and configuration information
US6185601B1 (en) 1996-08-02 2001-02-06 Hewlett-Packard Company Dynamic load balancing of a network of client and server computers
US5889989A (en) 1996-09-16 1999-03-30 The Research Foundation Of State University Of New York Load sharing controller for optimizing monetary cost
US5999518A (en) 1996-12-04 1999-12-07 Alcatel Usa Sourcing, L.P. Distributed telecommunications switching system and method
US6272523B1 (en) 1996-12-20 2001-08-07 International Business Machines Corporation Distributed networking using logical processes
US6003050A (en) * 1997-04-02 1999-12-14 Microsoft Corporation Method for integrating a virtual machine with input method editors
US6075938A (en) 1997-06-10 2000-06-13 The Board Of Trustees Of The Leland Stanford Junior University Virtual machine monitors for scalable multiprocessors
AU735024B2 (en) 1997-07-25 2001-06-28 British Telecommunications Public Limited Company Scheduler for a software system
US6067545A (en) 1997-08-01 2000-05-23 Hewlett-Packard Company Resource rebalancing in networked computer systems
US6567839B1 (en) 1997-10-23 2003-05-20 International Business Machines Corporation Thread switch control in a multithreaded processor system
US6041347A (en) * 1997-10-24 2000-03-21 Unified Access Communications Computer system and computer-implemented process for simultaneous configuration and monitoring of a computer network
US6633916B2 (en) 1998-06-10 2003-10-14 Hewlett-Packard Development Company, L.P. Method and apparatus for virtual resource handling in a multi-processor computer system
US6256637B1 (en) * 1998-05-05 2001-07-03 Gemstone Systems, Inc. Transactional virtual machine architecture
US6496847B1 (en) 1998-05-15 2002-12-17 Vmware, Inc. System and method for virtualizing computer systems
US6970913B1 (en) * 1999-07-02 2005-11-29 Cisco Technology, Inc. Load balancing using distributed forwarding agents with application based feedback for different virtual machines
US6640239B1 (en) 1999-11-10 2003-10-28 Garuda Network Corporation Apparatus and method for intelligent scalable switching network
US20020065864A1 (en) 2000-03-03 2002-05-30 Hartsell Neal D. Systems and method for resource tracking in information management environments
US6985937B1 (en) * 2000-05-11 2006-01-10 Ensim Corporation Dynamically modifying the resources of a virtual server
US7089558B2 (en) 2001-03-08 2006-08-08 International Business Machines Corporation Inter-partition message passing method, system and program product for throughput measurement in a partitioned processing environment

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090199116A1 (en) * 2008-02-04 2009-08-06 Thorsten Von Eicken Systems and methods for efficiently booting and configuring virtual servers
US9116715B2 (en) * 2008-02-04 2015-08-25 Rightscale, Inc. Systems and methods for efficiently booting and configuring virtual servers
CN104699625A (en) * 2009-09-21 2015-06-10 甲骨文国际公司 System and method for synchronizing transient resource usage between virtual machines in a hypervisor environment
WO2013118005A1 (en) * 2012-02-06 2013-08-15 International Business Machines Corporation Consolidating disparate cloud service data and behavior based on trust relationships between cloud services
GB2513753A (en) * 2012-02-06 2014-11-05 Ibm Consolidating disparate cloud service data and behavior based on trust relationships between cloud services
CN104094576B (en) * 2012-02-06 2016-11-16 国际商业机器公司 Consolidating disparate cloud service data and behavior based on trust relationships between cloud services
US20140047086A1 (en) * 2012-08-10 2014-02-13 Adobe Systems, Incorporated Systems and Methods for Providing Hot Spare Nodes
US10963420B2 (en) * 2012-08-10 2021-03-30 Adobe Inc. Systems and methods for providing hot spare nodes

Also Published As

Publication number Publication date
US7574496B2 (en) 2009-08-11
US20030105810A1 (en) 2003-06-05

Similar Documents

Publication Publication Date Title
US7574496B2 (en) Virtual server cloud interfacing
CN108650262B (en) Cloud platform expansion method and system based on micro-service architecture
RU2679188C2 (en) Multifunctional identification of a virtual computing node
US8179809B1 (en) Approach for allocating resources to an apparatus based on suspendable resource requirements
US7703102B1 (en) Approach for allocating resources to an apparatus based on preemptable resource requirements
US8032634B1 (en) Approach for allocating resources to an apparatus based on resource requirements
US6950874B2 (en) Method and system for management of resource leases in an application framework system
US7463648B1 (en) Approach for allocating resources to an apparatus based on optional resource requirements
JP2022061978A (en) System and method for providing interface for block chain cloud service
US8234650B1 (en) Approach for allocating resources to an apparatus
US8019870B1 (en) Approach for allocating resources to an apparatus based on alternative resource requirements
CN102947797B (en) Online service access controls using scale-out directory features
CN104247333B (en) System and method for the management of network service
US7856499B2 (en) Autonomic provisioning of hosted applications with level of isolation terms
AU2004288532B2 (en) Method and system for accessing and managing virtual machines
US9614748B1 (en) Multitenant data center providing virtual computing services
EP1942629B1 (en) Method and system for object-based multi-level security in a service oriented architecture
US20040250248A1 (en) System and method for server load balancing and server affinity
EP1428135A2 (en) Virtualized logical server cloud
KR20110040691A (en) Apparatus and methods for managing network resources
Cunsolo et al. Cloud@Home: Bridging the gap between volunteer and cloud computing
CN111108736B (en) Method and system for automatic address failover of a computing device
US10148529B2 (en) Apparatus of mapping logical point-of-delivery to physical point-of-delivery based on telecommunication information networking
US20240080296A1 (en) Application routing infrastructure for private-level redirect trapping and creation of NAT mapping to work with connectivity in cloud and customer networks
Tan et al. Service domains

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC (COMMUNICATION DATED 23-08-2005, EPO FORM 1205A)

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP