
Publication number: US 20070204266 A1
Publication type: Application
Application number: US 11/364,449
Publication date: Aug 30, 2007
Filing date: Feb 28, 2006
Priority date: Feb 28, 2006
Also published as: US 8601471, US 20080222638
Inventors: Kirk Beaty, Norman Bobroff, Gautam Kar, Gunjan Khanna, Andrzej Kochut
Original Assignee: International Business Machines Corporation
Systems and methods for dynamically managing virtual machines
Abstract
Techniques for dynamic management of virtual machine environments are disclosed. For example, a technique for automatically managing a first set of virtual machines being hosted by a second set of physical machines comprises the following steps/operations. An alert is obtained that a service level agreement (SLA) pertaining to at least one application being hosted by at least one of the virtual machines in the first set of virtual machines is being violated. Upon obtaining the SLA violation alert, the technique obtains at least one performance measurement for at least a portion of the machines in at least one of the first set of virtual machines and the second set of physical machines, and a cost of migration for at least a portion of the virtual machines in the first set of virtual machines. Based on the obtained performance measurements and the obtained migration costs, an optimal migration policy is determined for moving the virtual machine hosting the at least one application to another physical machine.
Claims (17)
1. A method of automatically managing a first set of virtual machines being hosted by a second set of physical machines, comprising the steps of:
obtaining an alert that a service level agreement (SLA) pertaining to at least one application being hosted by at least one of the virtual machines in the first set of virtual machines is being violated;
upon obtaining the SLA violation alert:
obtaining at least one performance measurement for at least a portion of the machines in at least one of the first set of virtual machines and the second set of physical machines;
obtaining a cost of migration for at least a portion of the virtual machines in the first set of virtual machines; and
determining, based on the obtained performance measurements and the obtained migration costs, an optimal migration policy for moving the virtual machine hosting the at least one application to another physical machine.
2. The method of claim 1, wherein the optimal policy determining step further comprises the step of selecting a virtual machine from the first set of virtual machines with the lowest migration cost.
3. The method of claim 2, wherein the optimal policy determining step further comprises the step of selecting a physical machine from the second set of physical machines that has a resource residue that is the lowest among the physical machines and that can accommodate the selected virtual machine.
4. The method of claim 3, wherein the optimal policy determining step further comprises the step of generating an instruction to move the selected virtual machine to the selected physical machine.
5. The method of claim 4, wherein the optimal policy determining step further comprises the step of recalculating resource residues for the second set of physical machines.
6. The method of claim 5, wherein the optimal policy determining step further comprises the step of sorting the first set of virtual machines according to migration costs.
7. The method of claim 3, wherein when the second set of physical machines does not include a physical machine that can accommodate the selected virtual machine, mapping the selected virtual machine to a physical machine that is not in the second set of physical machines.
8. The method of claim 1, wherein at least a portion of the steps of the management method are iteratively performed until the SLA violation is remedied.
9. Apparatus for automatically managing a first set of virtual machines being hosted by a second set of physical machines, comprising:
a memory; and
at least one processor coupled to the memory and operative to: (i) obtain an alert that a service level agreement (SLA) pertaining to at least one application being hosted by at least one of the virtual machines in the first set of virtual machines is being violated; and (ii) upon obtaining the SLA violation alert: obtain at least one performance measurement for at least a portion of the machines in at least one of the first set of virtual machines and the second set of physical machines; obtain a cost of migration for at least a portion of the virtual machines in the first set of virtual machines, and determine, based on the obtained performance measurements and the obtained migration costs, an optimal migration policy for moving the virtual machine hosting the at least one application to another physical machine.
10. The apparatus of claim 9, wherein the optimal policy determining operation further comprises selecting a virtual machine from the first set of virtual machines with the lowest migration cost.
11. The apparatus of claim 10, wherein the optimal policy determining operation further comprises selecting a physical machine from the second set of physical machines that has a resource residue that is the lowest among the physical machines and that can accommodate the selected virtual machine.
12. The apparatus of claim 11, wherein the optimal policy determining operation further comprises generating an instruction to move the selected virtual machine to the selected physical machine.
13. The apparatus of claim 12, wherein the optimal policy determining operation further comprises recalculating resource residues for the second set of physical machines.
14. The apparatus of claim 13, wherein the optimal policy determining operation further comprises sorting the first set of virtual machines according to migration costs.
15. The apparatus of claim 11, wherein when the second set of physical machines does not include a physical machine that can accommodate the selected virtual machine, mapping the selected virtual machine to a physical machine that is not in the second set of physical machines.
16. The apparatus of claim 9, wherein at least a portion of the operations of the management apparatus are iteratively performed until the SLA violation is remedied.
17. An article of manufacture for automatically managing a first set of virtual machines being hosted by a second set of physical machines, comprising a machine readable medium containing one or more programs which when executed implement the steps of:
obtaining an alert that a service level agreement (SLA) pertaining to at least one application being hosted by at least one of the virtual machines in the first set of virtual machines is being violated;
upon obtaining the SLA violation alert:
obtaining at least one performance measurement for at least a portion of the machines in at least one of the first set of virtual machines and the second set of physical machines;
obtaining a cost of migration for at least a portion of the virtual machines in the first set of virtual machines; and
determining, based on the obtained performance measurements and the obtained migration costs, an optimal migration policy for moving the virtual machine hosting the at least one application to another physical machine.
Description
FIELD OF THE INVENTION

The present invention generally relates to virtual machine environments and, more particularly, to techniques for dynamically managing virtual machines.

BACKGROUND OF THE INVENTION

An important problem encountered in today's information technology (IT) environment is known as server sprawl. Because of unplanned growth, many data centers today contain large numbers of heterogeneous servers, each hosting a single application and often grossly underutilized.

A solution to this problem is a technique known as server consolidation. In general, server consolidation involves converting each physical server or physical machine into a virtual server or virtual machine (VM), and then mapping multiple VMs to a physical machine, thus increasing utilization and reducing the required number of physical machines.

There are some critical runtime issues associated with a consolidated server environment. For example, due to user application workload changes or fluctuations, a critical problem often arises in these environments. The critical problem is that end user application performance degrades due to over utilization of critical resources in some of the physical machines. Accordingly, an existing allocation of VMs to physical machines may no longer satisfy service level agreement (SLA) requirements. As is known, an SLA is an agreement between a service customer (e.g., application owner) and a service provider (e.g., application host) that specifies the parameters of a particular service (e.g., minimum quality of service level). As a result, VMs may need to be reallocated to other physical machines. However, such a reallocation has an associated migration cost. Existing consolidation approaches do not account for the cost of migration.

SUMMARY OF THE INVENTION

Principles of the present invention provide techniques for dynamic management of virtual machine environments.

For example, in one aspect of the invention, a technique for automatically managing a first set of virtual machines being hosted by a second set of physical machines comprises the following steps/operations. An alert is obtained that a service level agreement (SLA) pertaining to at least one application being hosted by at least one of the virtual machines in the first set of virtual machines is being violated. Upon obtaining the SLA violation alert, the technique obtains at least one performance measurement for at least a portion of the machines in at least one of the first set of virtual machines and the second set of physical machines, and a cost of migration for at least a portion of the virtual machines in the first set of virtual machines. Based on the obtained performance measurements and the obtained migration costs, an optimal migration policy is determined for moving the virtual machine hosting the at least one application to another physical machine.

These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates an example of server consolidation;

FIG. 2A illustrates a virtual server management methodology, according to an embodiment of the present invention;

FIG. 2B illustrates a virtual machine reallocation methodology, according to an embodiment of the present invention;

FIG. 3 illustrates an example mapping of virtual machines to physical machines; and

FIG. 4 illustrates a computing system in accordance with which one or more components/steps of a virtual server management system may be implemented, according to an embodiment of the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The following description will illustrate the invention using an exemplary SLA-based service provider environment. It should be understood, however, that the invention is not limited to use with such a particular environment. The invention is instead more generally applicable to any data processing or computing environment in which it would be desirable to manage virtual servers used to perform such data processing or computing operations.

It is to be appreciated that, as used herein, a “physical machine” or “physical server” refers to an actual computing device, while a “virtual machine” or “virtual server” refers to a logical object that acts as a physical machine. In one embodiment, the computing device may be a Blade™ available from International Business Machines Corporation (Armonk, N.Y.). A Blade™ includes a “thin” software layer called a Hypervisor™, which creates the virtual machine. A physical machine equipped with a Hypervisor™ can create multiple virtual machines. Each virtual machine can execute a separate copy of the operating system, as well as one or more applications.

As will be illustrated below, a methodology of the invention provides a polynomial-time approximate solution for dynamic migration of virtual machines (VMs) to maintain SLA compliance. Such a management methodology minimizes the associated cost of migration and allows dynamic addition or removal of physical machines as needed (in order to reduce total cost of ownership). The approach is iterative, improving upon the existing allocation of VMs to physical machines. Moreover, the approach is independent of application software and works with virtual machines at the operating system level. Such a methodology can be used as part of a larger management system, e.g., the International Business Machines Corporation (Armonk, N.Y.) Director system, by using its monitoring mechanism and producing event action plans for automatic migration of VMs when needed.

Before describing an illustration of the inventive approach, an explanation of the basic steps and features of the server consolidation process is given in the context of FIG. 1.

As shown in FIG. 1, physical servers 100-1 through 100-n each host a separate application (App1 through Appn, respectively). However, as noted below each box representing a server, each application utilizes only between 25% and 50% of the processing capacity of the server. Thus, each server is considered underutilized.

Each physical server (100-1 through 100-n) is converted (step 105) into a virtual machine (VM1 through VMn denoted as 110-1 through 110-n, respectively) using available virtualization technology, e.g., available from VMWare or XenSource (both of Palo Alto, Calif.). Server virtualization is a technique that is well known in the art and is, therefore, not described in detail herein.

Multiple VMs are then mapped into a physical machine, using central processing unit (CPU) utilization, memory usage, etc. as metrics for resource requirements, thus increasing the utilization and reducing the total number of physical machines required to support the original set of applications. That is, as shown, the VMs are mapped into a lesser number of physical machines (120-1 through 120-i, where i is less than n). For example, App1 and App2 are now each hosted by server 120-1, which can be of the same processing capacity as server 100-1, but now is more efficiently utilized.
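The mapping step just described can be sketched as a simple first-fit packing of VMs onto physical machines, using CPU and memory as the resource metrics. This is a hypothetical illustration only; the patent does not prescribe a particular packing algorithm, and the function name and data layout below are assumptions:

```python
# Hypothetical sketch of the consolidation mapping: pack VMs onto
# physical machines first-fit, using CPU and memory as the metrics.
def consolidate(vm_demands, pm_capacity):
    """vm_demands: list of (cpu, mem) demand fractions, one per VM.
    pm_capacity: (cpu, mem) capacity of each (homogeneous) PM.
    Returns a list of PMs, each a list of the VM indices it hosts."""
    pms = []  # each entry: [used_cpu, used_mem, [vm indices]]
    for i, (cpu, mem) in enumerate(vm_demands):
        for pm in pms:
            # place the VM on the first PM with enough room in both dimensions
            if pm[0] + cpu <= pm_capacity[0] and pm[1] + mem <= pm_capacity[1]:
                pm[0] += cpu
                pm[1] += mem
                pm[2].append(i)
                break
        else:
            # no existing PM fits; start a new one
            pms.append([cpu, mem, [i]])
    return [pm[2] for pm in pms]

# Four underutilized servers (25-50% load each) fit on two packed machines.
mapping = consolidate([(0.5, 0.4), (0.25, 0.3), (0.5, 0.5), (0.3, 0.2)], (1.0, 1.0))
```

With the sample demands, the four original servers consolidate onto two physical machines, mirroring the reduction from n servers to i < n servers shown in FIG. 1.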

Typically, at the end of this consolidation process, the data center will consist of fewer, homogeneous servers, each loaded with multiple virtual machines (VMs), where each VM represents one of the original servers. The benefit of this process is that heterogeneity and server sprawl are reduced, resulting in less complex management processes and a lower cost of ownership.

However, as mentioned above, there are several runtime issues associated with a consolidated server environment. Due to application workload changes or fluctuations, end user application performance degrades due to over-utilization of critical resources in some of the physical machines. Thus, an existing allocation of VMs to physical machines may no longer satisfy SLA requirements. VMs may then need to be reallocated to other physical machines, and each reallocation has an associated migration cost. Existing consolidation approaches do not account for the cost of migration.

Illustrative principles of the invention provide a solution to this problem using an automated virtual machine migration methodology that is deployable to dynamically balance the load on the physical servers with an overall objective of maintaining application SLAs. We assume that there is a cost associated with each migration. The inventive solution finds the best set of virtual machine migrations that restores the violated SLAs while minimizing the number of required physical servers and the migration cost associated with the reallocation.

The methodology assumes that SLAs are directly related to metrics of the host, such as CPU utilization or memory usage. The inventive methodology is embodied in a virtual server management system that monitors those metrics and, if any of them exceeds a predetermined threshold for a physical server or VM, moves one or more VMs from that physical machine to another physical machine in order to restore acceptable levels of utilization. The VM chosen to be moved is the one with the smallest migration cost, as will be explained below. The chosen VM is moved to the physical machine which has the least residual capacity for the resource associated with that metric and is able to accommodate the VM. An overall objective is to maximize the variance of the utilization across all the existing physical servers, as will be explained below. The procedure is repeated until the SLA violation is corrected. The overall management methodology 200 is depicted in FIG. 2A, while the reallocation or migration methodology is illustrated in FIG. 2B.

As shown in FIG. 2A, a heterogeneous underutilized server environment (block 210) is the input to server consolidation step 220. The server consolidation step 220 is the server virtualization process described above in the context of FIG. 1. Thus, the input to step 220 is data indicative of the heterogeneous underutilized server environment, such as the environment including servers 100-1 through 100-n in FIG. 1. This may include information regarding the application running on each physical server as well as server utilization information. Again, since the virtualization process is well known, as well as the data input thereto, a further description of this process is not given herein. The result of the server consolidation step is a consolidated homogeneous environment (block 230). That is, server consolidation step 220 outputs a mapping of multiple VMs to physical servers, which serves to reduce heterogeneity and server sprawl.

As further shown in FIG. 2A, utilization values are monitored. This is accomplished by monitoring agents 240. That is, performance metrics or measurements such as the CPU utilization, memory utilization, and input/output utilization of each server in the consolidated environment are measured. The agents may simply be one or more software modules that compile these utilization values reported by the servers. It is to be appreciated that while a utilization value may be from a physical machine or a virtual machine, such values are preferably taken for both the physical machines and the virtual machines. For example, for three virtual machines executing on one physical machine, the system gathers the CPU utilization value of the physical machine, three CPU utilization values denoting the virtual machines' CPU usage, and the CPU utilization due to the overhead of the Hypervisor™.

These utilization values are then compared to threshold values in step 250 to determine whether they are greater than, less than, or equal to some predetermined threshold value for the respective type of utilization being monitored (e.g., CPU utilization threshold value, memory utilization threshold value, input/output utilization threshold value). Such thresholds are preferably generated based on the SLA that governs the agreed-upon requirements for hosting the application running on the subject server. For example, the SLA may require that a response to an end user query to an application running on the subject server be less than a certain number of seconds for a percentage of the requests. Based on knowledge of the processing capacity of the server, this requirement is easily translated into a threshold percentage for CPU capacity. Thus, the subject server hosting the application should never reach or exceed the threshold percent of its CPU capacity.

Accordingly, if the subject server is being underutilized (e.g., below the threshold value) or overutilized (e.g., greater than or equal to the threshold value), then the computation step will detect this condition and generate an appropriate alert, if necessary. In this embodiment, when the server is being overutilized, an SLA violation alert is generated.
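The threshold comparison of step 250 might be sketched as follows (the function name and dictionary layout are hypothetical, not part of the patent):

```python
# Hypothetical sketch of the threshold check (step 250): compare each
# monitored utilization value to its SLA-derived threshold and report
# the metrics that trigger an SLA violation alert.
def check_thresholds(utilization, thresholds):
    """utilization, thresholds: dicts keyed by metric name, e.g.
    {'cpu': 0.92, 'memory': 0.40}. Returns the list of metrics whose
    values reach or exceed their thresholds (over-utilization)."""
    return [m for m, v in utilization.items() if v >= thresholds[m]]

# CPU exceeds its 85% threshold, so an SLA violation alert is raised for it.
alerts = check_thresholds({'cpu': 0.92, 'memory': 0.40},
                          {'cpu': 0.85, 'memory': 0.80})
```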

If such an SLA violation alert is generated, a VM reallocation methodology of the invention is then triggered in step 260. The input to the VM reallocation methodology includes: (i) utilization values (e.g., CPU, memory, I/O) as computed by the monitoring agents 240; (ii) SLA information related to the thresholds; (iii) metric thresholds as computed in the threshold computation step; and (iv) a weight coefficient vector specifying the importance of each utilization dimension to the overall cost function.

The cost of reallocation (also referred to as migration) of a VM is defined as a dot product of a vector representing utilization and the vector representing the weight coefficient. For example, the dimension of both these vectors would be 2, if we consider only two resource metrics, CPU utilization and memory usage. It is to be appreciated that these migration costs are computed and maintained by the reallocation component 260 of the virtual server management system 200 or, alternatively, by another component of the system.

By way of example, assume that for a particular VM the metrics are [0.2, 0.5], where 0.2 denotes 20% CPU utilization and 0.5 denotes 50% memory usage, and the cost vector is [5, 10], signifying that, in the total cost of migration, the CPU usage has a weight of 5 and the memory usage has a weight of 10. Then, the cost of migration for this example VM is: 0.2*5+0.5*10=6 units.
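This dot product can be written directly; the sketch below simply reproduces the worked example from the text (the function name is an illustration, not the patent's):

```python
# Migration cost as the dot product of the utilization vector and the
# weight-coefficient vector, as defined above.
def migration_cost(utilization, weights):
    return sum(u * w for u, w in zip(utilization, weights))

cost = migration_cost([0.2, 0.5], [5, 10])  # 0.2*5 + 0.5*10 = 6 units
```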

The reallocation methodology of step 260 includes two steps. Assume PM1, PM2, . . . , PMm are the physical machines and Vij is the j-th virtual machine on PMi. For each physical machine PMi, the methodology maintains a list of the virtual machines allocated to PMi, ordered by non-decreasing migration cost, i.e., the first VM, Vi1, has the lowest cost. For each physical machine PMi, the methodology also calculates and stores a vector representing the residual capacity of PMi. The methodology maintains the list of residual capacities in non-decreasing order of the L2 norms of the capacity vectors. An example configuration of VMs and their parent physical machines is shown in FIG. 3.
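The bookkeeping just described might be sketched as follows. This is a minimal illustration under assumed data structures; the dictionary layout and function names are not the patent's:

```python
import math

# Hypothetical sketch of the reallocation bookkeeping: per physical
# machine, VMs kept in non-decreasing order of migration cost, and a
# residual-capacity vector whose L2 norm is used for ordering PMs.
def residual_capacity(pm_capacity, vm_usages):
    # element-wise residue after subtracting the hosted VMs' demands
    return [c - sum(u[d] for u in vm_usages) for d, c in enumerate(pm_capacity)]

def l2_norm(vec):
    return math.sqrt(sum(x * x for x in vec))

# Example: one PM with capacity [1.0, 1.0] (CPU, memory) hosting two VMs.
vms = [{'cost': 6.0, 'usage': [0.2, 0.5]},
       {'cost': 3.0, 'usage': [0.3, 0.1]}]
vms.sort(key=lambda v: v['cost'])  # cheapest-to-migrate VM comes first
residue = residual_capacity([1.0, 1.0], [v['usage'] for v in vms])  # ~[0.5, 0.4]
```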

As mentioned above, the reallocation algorithm is triggered by one of the monitored utilization values exceeding one of the utilization thresholds. Assume that a physical machine PMi exhibits a condition, whereby one of the measured metrics (e.g., CPU utilization) exceeds the set threshold. According to the reallocation methodology (illustrated in FIG. 2B), one of the associated VMs of the threshold-exceeding physical machine is chosen to migrate to another physical machine in the following manner:

(i) select the VM (e.g., VMij) which has associated therewith the least migration cost (step 261);

(ii) select the physical machine (PMj) which has the least residue resource vector, but enough to accommodate VMij (step 262);

(iii) instruct the virtual machine migration system (block 270 in FIG. 2A) to move VMij to PMj (step 263);

(iv) if no physical machine is available to accommodate the virtual machine, a new physical machine is introduced into the server farm and VMij is mapped thereto (step 264); and

(v) recalculate the residue vectors and sort the VMs according to the costs (step 265).
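The selection steps (i) through (iv) above can be sketched as a single-iteration illustration. The function name and tuple layouts are hypothetical, and step (v)'s re-sorting is omitted for brevity:

```python
import math

# Simplified sketch of steps (i)-(iv): pick the cheapest-to-migrate VM
# from the overloaded PM, then the target PM with the smallest residue
# vector (by L2 norm) that can still accommodate that VM.
def choose_migration(overloaded_vms, candidate_pms):
    """overloaded_vms: list of (vm_id, migration_cost, usage_vector) on the
    threshold-exceeding PM. candidate_pms: list of (pm_id, residue_vector).
    Returns (vm_id, pm_id), or (vm_id, None) when a new PM must be added."""
    vm_id, _, usage = min(overloaded_vms, key=lambda v: v[1])   # step (i)
    fits = [(pm_id, res) for pm_id, res in candidate_pms
            if all(r >= u for r, u in zip(res, usage))]
    if not fits:                                                # step (iv)
        return vm_id, None
    target, _ = min(fits, key=lambda p: math.hypot(*p[1]))      # step (ii)
    return vm_id, target                                        # step (iii)

move = choose_migration(
    [('vm1', 6.0, [0.2, 0.5]), ('vm2', 3.0, [0.3, 0.1])],
    [('pm2', [0.4, 0.2]), ('pm3', [0.9, 0.8])])
# vm2 is cheapest to migrate; pm2 has the smaller residue yet fits [0.3, 0.1]
```

In a full implementation the residue vectors would then be recalculated and the VM lists re-sorted, per step (v), before the next iteration.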

It is to be understood that the reallocation methodology (step 260) generates one or more migration instructions, e.g., move VM1 from PM1 to PM2, remove server PM3, etc. The virtual machine migration system (block 270) then takes the instructions and causes them to be implemented in the consolidated homogeneous server environment (block 230). It is to be understood that existing products can be used for the virtual machine migration system, for example, the Virtual Center from VMWare (Palo Alto, Calif.).

The above heuristic is based on a goal of maximizing the variance of the residue vector, so that the physical machines are as closely packed as SLA requirements will allow, thus leading to a high overall utilization, minimizing the cost of migration and minimizing the need for introducing new physical machines. Further, it is to be appreciated that the virtual server management procedure is iterative in nature, i.e., the steps of FIG. 2A (and thus FIG. 2B) are repeated until all SLA violations are remedied. Still further, based on the iterative nature of the methodology, minimal migration moves are made for each triggering event. Also, the methodology serves to maximize physical server load variance.

FIG. 4 illustrates a computing system in accordance with which one or more components/steps of the virtual server management techniques (e.g., components and methodologies described in the context of FIGS. 1 through 3) may be implemented, according to an embodiment of the present invention. It is to be understood that the individual components/steps may be implemented on one such computing system or on more than one such computing system. In the case of an implementation on a distributed computing system, the individual computer systems and/or devices may be connected via a suitable network, e.g., the Internet or World Wide Web. However, the system may be realized via private or local networks. In any case, the invention is not limited to any particular network.

Thus, the computing system shown in FIG. 4 may represent one or more servers or one or more other processing devices capable of providing all or portions of the functions described herein.

As shown, the computing system architecture 400 may comprise a processor 410, a memory 420, I/O devices 430, and a network interface 440, coupled via a computer bus 450 or alternate connection arrangement.

It is to be appreciated that the term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU and/or other processing circuitry. It is also to be understood that the term “processor” may refer to more than one processing device and that various elements associated with a processing device may be shared by other processing devices.

The term “memory” as used herein is intended to include memory associated with a processor or CPU, such as, for example, RAM, ROM, a fixed memory device (e.g., hard drive), a removable memory device (e.g., diskette), flash memory, etc.

In addition, the phrase “input/output devices” or “I/O devices” as used herein is intended to include, for example, one or more input devices (e.g., keyboard, mouse, etc.) for entering data to the processing unit, and/or one or more output devices (e.g., display, etc.) for presenting results associated with the processing unit.

Still further, the phrase “network interface” as used herein is intended to include, for example, one or more transceivers to permit the computer system to communicate with another computer system via an appropriate communications protocol.

Accordingly, software components including instructions or code for performing the methodologies described herein may be stored in one or more of the associated memory devices (e.g., ROM, fixed or removable memory) and, when ready to be utilized, loaded in part or in whole (e.g., into RAM) and executed by a CPU.

In any case, it is to be appreciated that the techniques of the invention, described herein and shown in the appended figures, may be implemented in various forms of hardware, software, or combinations thereof, e.g., one or more operatively programmed general purpose digital computers with associated memory, implementation-specific integrated circuit(s), functional circuitry, etc. Given the techniques of the invention provided herein, one of ordinary skill in the art will be able to contemplate other implementations of the techniques of the invention.

Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be made by one skilled in the art without departing from the scope or spirit of the invention.

US20100242044 *Mar 18, 2009Sep 23, 2010Microsoft CorporationAdaptable software resource managers based on intentions
US20100262964 *Apr 10, 2009Oct 14, 2010Microsoft CorporationVirtual Machine Packing Method Using Scarcity
US20100281482 *Apr 30, 2009Nov 4, 2010Microsoft CorporationApplication efficiency engine
US20100306765 *May 28, 2009Dec 2, 2010Dehaan Michael PaulMethods and systems for abstracting cloud management
US20110029974 *Apr 4, 2008Feb 3, 2011Paul BroylesVirtual Machine Manager System And Methods
US20110113136 *Nov 1, 2010May 12, 2011InMon Corp.Method and apparatus for combining data associated with hardware resources and network traffic
US20110161483 *Jun 25, 2009Jun 30, 2011Nec CorporationVirtual server system and physical server selection method
US20120096461 *Oct 5, 2011Apr 19, 2012Citrix Systems, Inc.Load balancing in multi-server virtual workplace environments
US20120216053 *Feb 16, 2012Aug 23, 2012Fujitsu LimitedMethod for changing placement of virtual machine and apparatus for changing placement of virtual machine
US20120246638 *Mar 22, 2011Sep 27, 2012International Business Machines CorporationForecasting based service assignment in cloud computing
US20130047158 *Jun 13, 2012Feb 21, 2013Esds Software Solution Pvt. Ltd.Method and System for Real Time Detection of Resource Requirement and Automatic Adjustments
US20130125116 *Dec 8, 2011May 16, 2013Institute For Information IndustryMethod and Device for Adjusting Virtual Resource and Computer Readable Storage Medium
US20130152076 *Dec 7, 2011Jun 13, 2013Cisco Technology, Inc.Network Access Control Policy for Virtual Machine Migration
US20130159997 *Dec 14, 2011Jun 20, 2013International Business Machines CorporationApplication initiated negotiations for resources meeting a performance parameter in a virtualized computing environment
US20130160008 *Jul 18, 2012Jun 20, 2013International Business Machines CorporationApplication initiated negotiations for resources meeting a performance parameter in a virtualized computing environment
CN101398770BSep 28, 2008Nov 27, 2013赛门铁克公司System for and method of migrating one or more virtual machines
EP2048578A2 *Aug 29, 2008Apr 15, 2009Dell Products, L.P.Virtual machine (VM) migration between processor architectures
EP2270728A1 *Jun 30, 2009Jan 5, 2011Alcatel LucentA method of managing resources, corresponding computer program product, and data storage device therefor
EP2335162A1 *Jul 24, 2009Jun 22, 2011Cisco Technology, Inc.Dynamic distribution of virtual machines in a communication network
EP2417534A2 *Apr 1, 2010Feb 15, 2012Microsoft CorporationOptimized virtual machine migration mechanism
WO2009070654A1 *Nov 26, 2008Jun 4, 2009Manageiq IncCompliance-based adaptations in managed virtual systems
WO2009123640A1 *Apr 4, 2008Oct 8, 2009Hewlett-Packard Development Company, L.P.Virtual machine manager system and methods
WO2010014189A1Jul 24, 2009Feb 4, 2010Cisco Technology, Inc.Dynamic distribution of virtual machines in a communication network
Classifications
U.S. Classification: 718/1
International Classification: G06F 9/455
Cooperative Classification: G06F 2009/4557, G06F 9/45558
European Classification: G06F 9/455H
Legal Events
Date: Mar 9, 2006
Code: AS
Event: Assignment
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BEATY, KIRK A.;BOBROFF, NORMAN;KAR, GAUTAM;AND OTHERS;REEL/FRAME:017321/0063
Effective date: 20060227