Publication numberUS20090150529 A1
Publication typeApplication
Application numberUS 11/953,839
Publication dateJun 11, 2009
Filing dateDec 10, 2007
Priority dateDec 10, 2007
Also published asUS7962587
InventorsSunay Tripathi
Original AssigneeSun Microsystems, Inc.
Method and system for enforcing resource constraints for virtual machines across migration
Abstract
In general, the invention relates to a method for migrating virtual machines. The method includes obtaining migration criteria for a first virtual machine (VM), where the migration criteria is a bandwidth constraint for the first VM. The method further includes sending a request comprising the migration criteria to a second computer in the chassis and receiving a response to the request from the second computer, where the response indicates that the second computer can satisfy the migration criteria. The method further includes suspending execution of the first VM on the first computer and obtaining information to migrate the first VM, migrating the first VM and a first VNIC associated with the first VM, updating a virtual switching table in the chassis to reflect the migration of the first VM, and resuming execution of the first VM on the second computer.
Images (12)
Claims(20)
1. A method for migrating virtual machines, comprising:
obtaining migration criteria for a first virtual machine (VM), wherein the first VM is located on a first computer in a chassis and wherein the migration criteria is a bandwidth constraint for the first VM;
sending a request comprising the migration criteria to a second computer in the chassis;
receiving a response to the request from the second computer, wherein the response indicates that the second computer can satisfy the migration criteria;
suspending execution of the first VM on the first computer and obtaining information to migrate the first VM;
identifying a first virtual network interface card (VNIC) executing in the first computer that is associated with the first VM and obtaining information required to migrate the first VNIC;
migrating the first VM and the first VNIC to the second computer using the information required to migrate the first VM and the information required to migrate the first VNIC;
configuring the first VNIC and the first VM to satisfy the migration criteria;
updating a virtual switching table in the chassis to reflect the migration of the first VM from the first computer to the second computer; and
resuming execution of the first VM on the second computer.
2. The method of claim 1, wherein the first computer is a blade.
3. The method of claim 1, wherein the first VNIC is connected to a second VNIC using a virtual wire, wherein the second VNIC is located on a third computer in the chassis, wherein the second VNIC is associated with a second VM, and wherein the virtual wire is implemented by the virtual switching table.
4. The method of claim 1, wherein the chassis comprises a Peripheral Component Interconnect Express (PCI-E) backplane, wherein updating the virtual switching table comprises updating a mapping of the first VNIC to a PCI-E endpoint on the PCI-E backplane, and wherein the PCI-E endpoint is connected to the second computer.
5. The method of claim 1, wherein the request is generated by a control operating system executing in a second VM located in one selected from a group consisting of the first computer, the second computer, and a third computer, wherein the third computer is located in the chassis.
6. The method of claim 1, wherein configuring the first VNIC to satisfy the migration criteria comprises requesting a host operating system on the second computer to configure the first VNIC to operate in accordance with the bandwidth constraint.
7. The method of claim 1, wherein the migration criteria further comprises a hardware resource constraint.
8. The method of claim 1, wherein the request is sent to the second computer using a multicast packet.
9. A computer readable medium comprising a plurality of executable instructions for migrating virtual machines, wherein the plurality of executable instructions comprises instructions to:
obtain migration criteria for a first virtual machine (VM), wherein the first VM is located on a first blade in a blade chassis and wherein the migration criteria is a bandwidth constraint for the first VM;
send a request comprising the migration criteria to a second blade in the blade chassis;
receive a response to the request from the second blade, wherein the response indicates that the second blade can satisfy the migration criteria;
suspend execution of the first VM on the first blade and obtain information to migrate the first VM;
identify a first virtual network interface card (VNIC) executing in the first blade that is associated with the first VM and obtain information required to migrate the first VNIC;
migrate the first VM and the first VNIC to the second blade using the information required to migrate the first VM and the information required to migrate the first VNIC;
configure the first VNIC and the first VM to satisfy the migration criteria;
update a virtual switching table in the blade chassis to reflect the migration of the first VM from the first blade to the second blade; and
continue execution of the first VM on the second blade.
10. The computer readable medium of claim 9, wherein the first VNIC is connected to a second VNIC using a virtual wire, wherein the second VNIC is located on a third blade in the blade chassis, wherein the second VNIC is associated with a second VM, and wherein the virtual wire is implemented by the virtual switching table.
11. The computer readable medium of claim 9, wherein the blade chassis comprises a Peripheral Component Interconnect Express (PCI-E) backplane, wherein updating the virtual switching table comprises updating a mapping of the first VNIC to a PCI-E endpoint on the PCI-E backplane, and wherein the PCI-E endpoint is connected to the second blade.
12. The computer readable medium of claim 9, wherein the request is generated by a control operating system executing in a second VM located in one selected from a group consisting of the first blade, the second blade, and a third blade, wherein the third blade is located in the blade chassis.
13. The computer readable medium of claim 9, wherein configuring the first VNIC to satisfy the migration criteria comprises requesting a host operating system on the second blade to configure the first VNIC to operate in accordance with the bandwidth constraint.
14. The computer readable medium of claim 9, wherein the migration criteria further comprises a hardware resource constraint.
15. The computer readable medium of claim 9, wherein the request is sent to the second blade using a multicast packet.
16. A method for migrating virtual machines, comprising:
obtaining migration criteria for a first virtual machine (VM),
wherein the first VM is located on a first blade in a blade chassis,
wherein the first VM is associated with a first virtual network interface card (VNIC),
wherein the first VNIC is connected to a second VNIC on a second blade in the blade chassis using a first virtual wire having a first priority, and
wherein the migration criteria is a bandwidth constraint for the first VM;
sending a request comprising the migration criteria to a third blade in the blade chassis,
wherein the second blade comprises a second VM associated with the second VNIC and a third VM associated with a third VNIC,
wherein the third VNIC is connected to a fourth VNIC on the third blade in the blade chassis using a second virtual wire having a second priority,
receiving a response to the request from the third blade, wherein the response indicates that the third blade cannot satisfy the migration criteria;
suspending the second virtual wire, wherein the first priority is higher than the second priority and wherein suspending the second virtual wire comprises suspending the third VNIC and fourth VNIC;
suspending execution of the first VM on the first blade and obtaining information to migrate the first VM;
obtaining information required to migrate the first VNIC;
migrating the first VM and the first VNIC to the third blade using the information required to migrate the first VM and the information required to migrate the first VNIC;
configuring the first VNIC and the first VM to satisfy the migration criteria;
updating a virtual switching table in the blade chassis to reflect the migration of the first VM from the first blade to the third blade; and
resuming execution of the first VM on the third blade.
17. The method of claim 16, wherein the first virtual wire is implemented by the virtual switching table.
18. The method of claim 16, wherein the blade chassis comprises a Peripheral Component Interconnect Express (PCI-E) backplane, wherein updating the virtual switching table comprises updating a mapping of the first VNIC to a PCI-E endpoint on the PCI-E backplane, and wherein the PCI-E endpoint is connected to the third blade.
19. The method of claim 16, wherein the request is generated by a control operating system executing in a fifth VM located in one selected from a group consisting of the first blade, the second blade, the third blade, and a fourth blade, wherein the fourth blade is located in the blade chassis.
20. The method of claim 16, wherein configuring the first VNIC to satisfy the migration criteria comprises requesting a host operating system on the third blade to configure the first VNIC to operate in accordance with the bandwidth constraint.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application contains subject matter that may be related to the subject matter in the following U.S. applications filed on Dec. 10, 2007, and assigned to the assignee of the present application: “Method and System for Creating a Virtual Network Path” with U.S. application Ser. No. TBD (Attorney Docket No. 03227/067001; SUN080179); “Method and System for Controlling Network Traffic In a Blade” with U.S. application Ser. No. TBD (Attorney Docket No. 03227/068001; SUN080180); “Method and System for Reconfiguring a Virtual Network Path” with U.S. application Ser. No. TBD (Attorney Docket No. 03227/069001; SUN080181); “Method and System for Monitoring Virtual Wires” with U.S. application Ser. No. TBD (Attorney Docket No. 03227/073001; SUN080185); and “Method and System for Scaling Applications On A Blade Chassis” with U.S. application Ser. No. TBD (Attorney Docket No. 03227/076001; SUN080188).

BACKGROUND

Conventionally, in the computer-related arts, a network is an arrangement of physical computer systems configured to communicate with each other. In some cases, the physical computer systems include virtual machines, which may also be configured to interact with the network (i.e., communicate with other physical computers and/or virtual machines in the network). Many different types of networks exist, and a network may be classified based on various aspects of the network, such as scale, connection method, functional relationship of computer systems in the network, and/or network topology.

Regarding connection methods, a network may be broadly categorized as wired (using a tangible connection medium such as Ethernet cables) or wireless (using an intangible connection medium such as radio waves). Different connection methods may also be combined in a single network. For example, a wired network may be extended to allow devices to connect to the network wirelessly. However, core network components such as routers, switches, and servers are generally connected using physical wires. Ethernet is defined within the Institute of Electrical and Electronics Engineers (IEEE) 802.3 standards, which are supervised by the IEEE 802.3 Working Group.

To create a wired network, computer systems must be physically connected to each other. That is, the ends of physical wires (for example, Ethernet cables) must be physically connected to network interface cards in the computer systems forming the network. To reconfigure the network (for example, to replace a server or change the network topology), one or more of the physical wires must be disconnected from a computer system and connected to a different computer system.

SUMMARY

In general, in one aspect, the invention relates to a method and system for creating and managing a virtual network path between virtual machines in a network, wherein the virtual machines are located on different computers connected to a chassis interconnect.

In general, in one aspect, the invention relates to a method for migrating virtual machines. The method includes obtaining migration criteria for a first virtual machine (VM), wherein the first VM is located on a first computer in a chassis and wherein the migration criteria is a bandwidth constraint for the first VM, sending a request comprising the migration criteria to a second computer in the chassis, receiving a response to the request from the second computer, wherein the response indicates that the second computer can satisfy the migration criteria, suspending execution of the first VM on the first computer and obtaining information to migrate the first VM, identifying a first virtual network interface card (VNIC) executing in the first computer that is associated with the first VM and obtaining information required to migrate the first VNIC, migrating the first VM and the first VNIC to the second computer using the information required to migrate the first VM and the information required to migrate the first VNIC, configuring the first VNIC and the first VM to satisfy the migration criteria, updating a virtual switching table in the chassis to reflect the migration of the first VM from the first computer to the second computer, and resuming execution of the first VM on the second computer.

In general, in one aspect, the invention relates to a computer readable medium comprising a plurality of executable instructions for migrating virtual machines, wherein the plurality of executable instructions comprises instructions to: obtain migration criteria for a first virtual machine (VM), wherein the first VM is located on a first blade in a blade chassis and wherein the migration criteria is a bandwidth constraint for the first VM, send a request comprising the migration criteria to a second blade in the blade chassis, receive a response to the request from the second blade, wherein the response indicates that the second blade can satisfy the migration criteria, suspend execution of the first VM on the first blade and obtain information to migrate the first VM, identify a first virtual network interface card (VNIC) executing in the first blade that is associated with the first VM and obtain information required to migrate the first VNIC, migrate the first VM and the first VNIC to the second blade using the information required to migrate the first VM and the information required to migrate the first VNIC, configure the first VNIC and the first VM to satisfy the migration criteria, update a virtual switching table in the blade chassis to reflect the migration of the first VM from the first blade to the second blade, and continue execution of the first VM on the second blade.

In general, in one aspect, the invention relates to a method for migrating virtual machines. The method includes obtaining migration criteria for a first virtual machine (VM), wherein the first VM is located on a first blade in a blade chassis, wherein the first VM is associated with a first virtual network interface card (VNIC), wherein the first VNIC is connected to a second VNIC on a second blade in the blade chassis using a first virtual wire having a first priority, and wherein the migration criteria is a bandwidth constraint for the first VM, sending a request comprising the migration criteria to a third blade in the blade chassis, wherein the second blade comprises a second VM associated with the second VNIC and a third VM associated with a third VNIC, wherein the third VNIC is connected to a fourth VNIC on the third blade in the blade chassis using a second virtual wire having a second priority, receiving a response to the request from the third blade, wherein the response indicates that the third blade cannot satisfy the migration criteria, suspending the second virtual wire, wherein the first priority is higher than the second priority and wherein suspending the second virtual wire comprises suspending the third VNIC and fourth VNIC, suspending execution of the first VM on the first blade and obtaining information to migrate the first VM, obtaining information required to migrate the first VNIC, migrating the first VM and the first VNIC to the third blade using the information required to migrate the first VM and the information required to migrate the first VNIC, configuring the first VNIC and the first VM to satisfy the migration criteria, updating a virtual switching table in the blade chassis to reflect the migration of the first VM from the first blade to the third blade, and resuming execution of the first VM on the third blade.

Other aspects of the invention will be apparent from the following description and the appended claims.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 shows a diagram of a blade chassis in accordance with one or more embodiments of the invention.

FIG. 2 shows a diagram of a blade in accordance with one or more embodiments of the invention.

FIG. 3 shows a diagram of a network express manager in accordance with one or more embodiments of the invention.

FIG. 4 shows a diagram of a virtual machine in accordance with one or more embodiments of the invention.

FIG. 5 shows a flowchart of a method for creating a virtual network path in accordance with one or more embodiments of the invention.

FIGS. 6A-6C show an example of creating virtual network paths in accordance with one or more embodiments of the invention.

FIGS. 7A-7B show a flowchart of a method for migrating a virtual machine in accordance with one or more embodiments of the invention.

FIGS. 8A-8B show an example of migrating a virtual machine in accordance with one or more embodiments of the invention.

FIGS. 9A-9B show an example of migrating a virtual machine in accordance with one or more embodiments of the invention.

DETAILED DESCRIPTION

Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.

In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

In general, embodiments of the invention provide a method and system for migrating a virtual machine located on one blade in a blade chassis to another blade on the blade chassis, where the migration preserves at least the bandwidth constraint. In one embodiment of the invention, after the migration of the virtual machine to the other blade on the blade chassis, at least the bandwidth constraint associated with the virtual machine is the same as the bandwidth constraint prior to the migration of the virtual machine. In one embodiment of the invention, other constraints on the operation of the virtual machine may be preserved across migration.
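The migration flow sketched above (and detailed in FIGS. 7A-7B) can be summarized in a short model. The following Python is an illustrative assumption for exposition only: the `Blade` class, the dictionary-based state, and the numeric bandwidth units are invented here and are not part of the patented system.

```python
class Blade:
    """Minimal model of a blade in the chassis (illustrative names only)."""
    def __init__(self, endpoint, available_bandwidth):
        self.endpoint = endpoint                        # e.g., a PCI-E endpoint id
        self.available_bandwidth = available_bandwidth  # bandwidth the blade can commit
        self.vms = {}                                   # vm name -> "running"/"suspended"

def migrate_vm(vm, vnic, source, target, switching_table, bandwidth_limit):
    """Sketch of the flow: the target blade must first confirm it can satisfy
    the bandwidth constraint; only then is the VM suspended, moved along with
    its VNIC, the switching table updated, and execution resumed."""
    if target.available_bandwidth < bandwidth_limit:
        return False                                # target rejects the criteria
    source.vms[vm] = "suspended"                    # suspend execution on the source
    source.vms.pop(vm)                              # capture and remove VM + VNIC state
    target.available_bandwidth -= bandwidth_limit   # enforce the constraint on the target
    switching_table[vnic] = target.endpoint         # repoint the VNIC's chassis location
    target.vms[vm] = "running"                      # resume execution on the target
    return True
```

Note that the bandwidth check happens before any state is touched, mirroring the claim order: a rejection leaves the source VM running undisturbed.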

FIG. 1 shows a diagram of a blade chassis (100) in accordance with one or more embodiments of the invention. The blade chassis (100) includes multiple blades (e.g., blade A (102), blade B (104)) communicatively coupled with a chassis interconnect (106). For example, the blade chassis (100) may be a Sun Blade 6048 Chassis by Sun Microsystems Inc., an IBM BladeCenter® chassis, an HP BladeSystem enclosure by Hewlett Packard Inc., or any other type of blade chassis. The blades may be of any type(s) compatible with the blade chassis (100). BladeCenter® is a registered trademark of International Business Machines, Inc. (IBM), headquartered in Armonk, N.Y.

In one or more embodiments of the invention, the blades are configured to communicate with each other via the chassis interconnect (106). Thus, the blade chassis (100) allows for communication between the blades without requiring traditional network wires (such as Ethernet cables) between the blades. For example, depending on the type of blade chassis (100), the chassis interconnect (106) may be a Peripheral Component Interconnect Express (PCI-E) backplane, and the blades may be configured to communicate with each other via PCI-E endpoints. Those skilled in the art will appreciate that other connection technologies may be used to connect the blades to the blade chassis.

Continuing with the discussion of FIG. 1, to communicate with clients outside the blade chassis (100), the blades are configured to share a physical network interface (110). The physical network interface (110) includes one or more network ports (for example, Ethernet ports), and provides an interface between the blade chassis (100) and the network (i.e., interconnected computer systems external to the blade chassis (100)) to which the blade chassis (100) is connected. The blade chassis (100) may be connected to multiple networks, for example using multiple network ports.

In one or more embodiments, the physical network interface (110) is managed by a network express manager (108). Specifically, the network express manager (108) is configured to manage access by the blades to the physical network interface (110). The network express manager (108) may also be configured to manage internal communications between the blades themselves, in a manner discussed in detail below. The network express manager (108) may be any combination of hardware, software, and/or firmware including executable logic for managing network traffic.

FIG. 2 shows a diagram of a blade (200) in accordance with one or more embodiments of the invention. “Blade” is a term of art referring to a computer system located within a blade chassis (for example, the blade chassis (100) of FIG. 1). Blades typically include fewer components than stand-alone computer systems or conventional servers. In one embodiment of the invention, fully featured stand-alone computer systems or conventional servers may also be used instead of or in combination with the blades. Generally, blades in a blade chassis each include one or more processors and associated memory. Blades may also include storage devices (for example, hard drives and/or optical drives) and numerous other elements and functionalities typical of today's computer systems (not shown), such as a keyboard, a mouse, and/or output means such as a monitor. One or more of the aforementioned components may be shared by multiple blades located in the blade chassis. For example, multiple blades may share a single output device.

Continuing with the discussion of FIG. 2, the blade (200) includes a host operating system (not shown) configured to execute one or more virtual machines (e.g., virtual machine C (202), virtual machine D (204)). Broadly speaking, the virtual machines are distinct operating environments configured to inherit underlying functionality of the host operating system via an abstraction layer. In one or more embodiments of the invention, each virtual machine includes a separate instance of an operating system (e.g., operating system instance C (206), operating system instance D (208)). For example, the Xen® virtualization project allows for multiple guest operating systems executing in a host operating system. Xen® is a trademark overseen by the Xen Project Advisory Board. In one embodiment of the invention, the host operating system supports virtual execution environments (not shown). An example of a virtual execution environment is a Solaris™ Container. In such cases, the Solaris™ Container may execute in the host operating system, which may be a Solaris™ operating system. Solaris™ is a trademark of Sun Microsystems, Inc. In one embodiment of the invention, the host operating system may include both virtual machines and virtual execution environments.

Many different types of virtual machines and virtual execution environments exist. Further, the virtual machines may provide many different types of functionality, such as a switch, a router, a firewall, a load balancer, an application server, any other type of network-enabled service, or any combination thereof.

In one or more embodiments of the invention, the virtual machines and virtual execution environments inherit network connectivity from the host operating system via VNICs (e.g., VNIC C (210), VNIC D (212)). To the virtual machines and the virtual execution environments, the VNICs appear as physical NICs. In one or more embodiments of the invention, the use of VNICs allows an arbitrary number of virtual machines and virtual execution environments to share the blade's (200) networking functionality. Further, in one or more embodiments of the invention, each virtual machine or virtual execution environment may be associated with an arbitrary number of VNICs, thereby providing increased flexibility in the types of networking functionality available to the virtual machines and/or virtual execution environments. For example, a virtual machine may use one VNIC for incoming network traffic, and another VNIC for outgoing network traffic. VNICs in accordance with one or more embodiments of the invention are described in detail in commonly owned U.S. patent application Ser. No. 11/489,942, entitled “Multiple Virtual Network Stack Instances using Virtual Network Interface Cards,” in the names of Nicolas G. Droux, Erik Nordmark, and Sunay Tripathi, the contents of which are hereby incorporated by reference in their entirety.

VNICs in accordance with one or more embodiments of the invention also are described in detail in commonly owned U.S. patent application Ser. No. 11/480,000, entitled “Method and System for Controlling Virtual Machine Bandwidth” in the names of Sunay Tripathi, Tim P. Marsland, and Nicolas G. Droux the contents of which are hereby incorporated by reference in their entirety.

In one embodiment of the invention, one of the blades in the blade chassis includes a control operating system executing in a virtual machine (also referred to as the control virtual machine). The control operating system is configured to manage the creation and maintenance of the virtual wires and/or virtual network paths (discussed below). In addition, the control operating system also includes functionality to migrate virtual machines between blades in the blade chassis (discussed below).

Continuing with the discussion of FIG. 2, each blade's networking functionality (and, by extension, networking functionality inherited by the VNICs) includes access to a shared physical network interface and communication with other blades via the chassis interconnect. FIG. 3 shows a diagram of a network express manager (300) in accordance with one or more embodiments of the invention. The network express manager (300) is configured to route network traffic traveling to and from VNICs located in the blades. Specifically, the network express manager (300) includes a virtual switching table (302), which includes a mapping of VNIC identifiers (304) to VNIC locations (306) in the chassis interconnect. In one or more embodiments, the VNIC identifiers (304) are Internet Protocol (IP) addresses, and the VNIC locations (306) are PCI-E endpoints associated with the blades (e.g., if the chassis interconnect is a PCI-E backplane). In another embodiment of the invention, the VNIC identifiers (304) may be media access control (MAC) addresses. Alternatively, another routing scheme may be used.
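The mapping just described can be pictured as a simple lookup table. The sketch below is an illustrative assumption (an IP-keyed dictionary with string endpoint names), not the actual data structure of the network express manager:

```python
# Hypothetical virtual switching table: VNIC identifiers (here, IP
# addresses) mapped to VNIC locations (here, PCI-E endpoints).
virtual_switching_table = {
    "10.0.0.1": "pcie-endpoint-1",   # e.g., VNIC C on blade A
    "10.0.0.2": "pcie-endpoint-2",   # e.g., VNIC D on blade B
}

def lookup_location(vnic_identifier):
    """Return the chassis location for a VNIC, or None when the
    identifier is unknown to this chassis."""
    return virtual_switching_table.get(vnic_identifier)
```

The same shape works if MAC addresses (or another routing scheme) are used as keys instead of IP addresses.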

In one or more embodiments, the network express manager (300) is configured to receive network traffic via the physical network interface and route the network traffic to the appropriate location (i.e., where the VNIC is located) using the virtual switching table (302). In one embodiment of the invention, once a determination is made about where to route a given packet, the packet is stored in the appropriate receive buffer (308) or transmit buffer (310). In one embodiment of the invention, each VNIC listed in the virtual switching table (302) is associated with a receive buffer (308) and a transmit buffer (310). The receive buffer (308) is configured to temporarily store packets destined for a given VNIC prior to the VNIC receiving (via a polling or interrupt mechanism) the packets. Similarly, the transmit buffer (310) is configured to temporarily store packets received from the VNIC prior to sending the packet toward its destination.

In one embodiment of the invention, the receive buffer (308) enables the VNICs to implement bandwidth control. More specifically, when the VNIC is implementing bandwidth control, packets remain in the receive buffer (308) until the VNIC (or an associated process) requests packets from the receive buffer (308). As such, if the rate at which packets are received is greater than the rate at which packets are requested by the VNIC (or an associated process), then packets may be dropped from the receive buffer once the receive buffer is full. Those skilled in the art will appreciate that the rate at which packets are dropped from the receive buffer is determined, in part, by the size of the receive buffer.
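This buffer-based bandwidth control can be sketched as a bounded queue: packets accumulate until the VNIC polls for them, and arrivals beyond capacity are dropped. The class and method names below are assumptions chosen for illustration:

```python
from collections import deque

class ReceiveBuffer:
    """Bounded receive buffer: packets wait until the VNIC (or an
    associated process) requests them; when full, new arrivals drop."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.packets) >= self.capacity:
            self.dropped += 1          # buffer full: drop the packet
            return False
        self.packets.append(packet)
        return True

    def poll(self, max_packets):
        """VNIC pulls up to max_packets, modeling the polling mechanism."""
        out = []
        while self.packets and len(out) < max_packets:
            out.append(self.packets.popleft())
        return out
```

A slower polling rate relative to the arrival rate causes the buffer to fill and packets to drop, which is exactly how the cap on effective bandwidth emerges.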

Continuing with the discussion of FIG. 3, the network express manager (300) may be configured to route network traffic between different VNICs located in the blade chassis. In one or more embodiments of the invention, using the virtual switching table (302) in this manner facilitates the creation of a virtual network path, which includes virtual wires (discussed below). Thus, using the virtual switching table (302), virtual machines located in different blades may be interconnected to form an arbitrary virtual network topology, where the VNICs associated with each virtual machine do not need to know the physical locations of other VNICs. Further, if a virtual machine is migrated from one blade to another, the virtual network topology may be preserved by updating the virtual switching table (302) to reflect the corresponding VNIC's new physical location (for example, a different PCI-E endpoint).
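The topology-preservation point can be made concrete: a virtual wire records which VNICs are connected, while the switching table records only where each VNIC currently lives, so migration rewrites a location entry without touching the wire. The tuple-and-dictionary representation below is an illustrative assumption:

```python
# A "virtual wire" joins two VNIC identifiers; placement is kept separately.
virtual_wires = [("vnic-1", "vnic-2")]                     # virtual topology
vnic_locations = {"vnic-1": "pcie-0", "vnic-2": "pcie-1"}  # current placement

def on_migration(vnic_id, new_endpoint):
    """Update only the placement entry; the wire list -- and hence the
    virtual network topology -- is left untouched."""
    vnic_locations[vnic_id] = new_endpoint

on_migration("vnic-1", "pcie-2")   # the VM behind vnic-1 moved to another blade
```

Because peers address each other by VNIC identifier rather than physical location, neither endpoint of the wire needs to know that the other moved.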

In some cases, network traffic from one VNIC may be destined for a VNIC located in the same blade, but associated with a different virtual machine. In one or more embodiments of the invention, a virtual switch may be used to route the network traffic between the VNICs independent of the blade chassis. Virtual switches in accordance with one or more embodiments of the invention are discussed in detail in commonly owned U.S. patent application Ser. No. 11/480,261, entitled “Virtual Switch,” in the names of Nicolas G. Droux, Sunay Tripathi, and Erik Nordmark, the contents of which are hereby incorporated by reference in their entirety.

For example, FIG. 4 shows a diagram of a virtual switch (400) in accordance with one or more embodiments of the invention. The virtual switch (400) provides connectivity between VNIC X (406) associated with virtual machine X (402) and VNIC Y (408) associated with virtual machine Y (404). In one or more embodiments, the virtual switch (400) is managed by a host (410) within which virtual machine X (402) and virtual machine Y (404) are located. Specifically, the host (410) may be configured to identify network traffic targeted at a VNIC in the same blade, and route the traffic to the VNIC using the virtual switch (400). In one or more embodiments of the invention, the virtual switch (400) may reduce utilization of the blade chassis and the network express manager by avoiding unnecessary round-trip network traffic.
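The local-versus-chassis routing decision made by the host can be sketched as follows; the `VirtualSwitch` class and its return values are hypothetical names chosen for illustration, not the patented implementation:

```python
class VirtualSwitch:
    """Host-managed switch: traffic between VNICs on the same blade is
    delivered locally, never crossing the chassis interconnect."""
    def __init__(self):
        self.local_vnics = {}      # vnic id -> packets delivered locally

    def attach(self, vnic_id):
        self.local_vnics[vnic_id] = []

    def send(self, dest_vnic, packet):
        if dest_vnic in self.local_vnics:
            self.local_vnics[dest_vnic].append(packet)   # same-blade delivery
            return "local"
        return "chassis"   # would instead be routed via the network express manager
```

The "chassis" branch is where the round trip through the interconnect (and its associated utilization) would otherwise occur.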

FIG. 5 shows a flowchart of a method for creating a virtual network path in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, one or more of the steps shown in FIG. 5 may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of the invention should not be considered limited to the specific arrangement of steps shown in FIG. 5.

In one or more embodiments of the invention, in Step 502, VNICs are instantiated for multiple virtual machines. The virtual machines are located in blades, as discussed above. Further, the virtual machines may each be associated with one or more VNICs. In one or more embodiments of the invention, instantiating a VNIC involves loading a VNIC object in memory and registering the VNIC object with a host, i.e., an operating system that is hosting the virtual machine associated with the VNIC. Registering the VNIC object establishes an interface between the host's networking functionality and the abstraction layer provided by the VNIC. Thereafter, when the host receives network traffic addressed to the VNIC, the host forwards the network traffic to the VNIC. Instantiation of VNICs in accordance with one or more embodiments of the invention is discussed in detail in U.S. patent application Ser. No. 11/489,942, incorporated by reference above.
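The register-then-forward relationship between a host and its VNICs can be sketched as follows. This is a minimal illustration, assuming a simple address-keyed registry and queue-based delivery; none of the names come from the patent.

```python
class Host:
    """Minimal sketch of a host registering VNIC objects. Registering
    establishes the interface through which the host forwards network
    traffic addressed to each VNIC."""

    def __init__(self):
        self._vnics = {}

    def register_vnic(self, address, vnic):
        # Models loading a VNIC object in memory and registering it
        # with the host operating system.
        self._vnics[address] = vnic

    def deliver(self, address, packet):
        # The host forwards traffic addressed to a registered VNIC.
        vnic = self._vnics.get(address)
        if vnic is None:
            return False
        vnic.append(packet)  # hand the packet to the VNIC's queue
        return True


host = Host()
queue = []
host.register_vnic("10.0.0.5", queue)
host.deliver("10.0.0.5", b"hello")
```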

As discussed above, a single blade may include multiple virtual machines configured to communicate with each other. In one or more embodiments of the invention, in Step 504, a virtual switch is instantiated to facilitate communication between the virtual machines. As noted above, the virtual switch allows communication between VNICs independent of the chassis interconnect. Instantiation of virtual switches in accordance with one or more embodiments of the invention is discussed in detail in U.S. patent application Ser. No. 11/480,261, incorporated by reference above.

In one or more embodiments of the invention, in Step 506, a virtual switching table is populated. As noted above, the virtual switching table may be located in a network express manager configured to manage network traffic flowing to and from the virtual machines. Populating the virtual switching table involves associating VNIC identifiers (for example, IP addresses) with VNIC locations (for example, PCI-E endpoints). In one or more embodiments of the invention, the virtual switching table is populated in response to a user command issued via a control operating system, i.e., an operating system that includes functionality to control the network express manager.

In one or more embodiments of the invention, VNICs include settings for controlling the processing of network packets. In one or more embodiments of the invention, in Step 508, settings are assigned to the VNICs according to a networking policy. Many different types of networking policies may be enforced using settings in the VNICs. For example, a setting may be used to provision a particular portion of a blade's available bandwidth to one or more VNICs. As another example, a setting may be used to restrict use of a VNIC to a particular type of network traffic, such as Voice over IP (VoIP) or Transmission Control Protocol/IP (TCP/IP). Further, settings for multiple VNICs in a virtual network path may be identical. For example, VNICs in a virtual network path may be capped at the same bandwidth limit, thereby allowing for consistent data flow across the virtual network path. In one or more embodiments of the invention, a network express manager is configured to transmit the desired settings to the VNICs.
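Applying one networking policy uniformly across the VNICs of a path might look like the sketch below. The dictionary-based settings and the helper name are illustrative assumptions, not the patent's implementation.

```python
def apply_policy(vnics, policy):
    """Assign identical settings to every VNIC in a virtual network
    path, so data flows at a consistent rate end to end (hypothetical
    helper; VNICs are modeled as plain dicts)."""
    for vnic in vnics:
        vnic.update(policy)
    return vnics


# Cap both VNICs in a path at the same bandwidth and traffic type.
path_vnics = [{"name": "vnic_h"}, {"name": "vnic_j"}]
apply_policy(path_vnics, {"bandwidth_gbps": 5, "traffic_type": "TCP/IP"})
```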

In one or more embodiments of the invention, once the VNICs are instantiated and the virtual switching table is populated, network traffic may be transmitted from a VNIC in one blade to a VNIC in another blade. The connection between the two VNICs may be thought of as a “virtual wire,” because the arrangement obviates the need for traditional network wires such as Ethernet cables. A virtual wire functions similar to a physical wire in the sense that network traffic passing through one virtual wire is isolated from network traffic passing through another virtual wire, even though the network traffic may pass through the same blade (i.e., using the same virtual machine or different virtual machines located in the blade).

In one embodiment of the invention, each virtual wire may be associated with a priority (discussed below in FIGS. 9A-9B). In addition, each virtual wire may be associated with a security setting, which defines packet security (e.g., encryption, etc.) for packets transmitted over the virtual wire. In one embodiment of the invention, the bandwidth, priority and security settings are defined on a per-wire basis. Further, the aforementioned settings are the same for VNICs on either end of the virtual wire.

In one embodiment of the invention, a combination of two or more virtual wires may be thought of as a “virtual network path.” In one embodiment of the invention, the bandwidth, priority and security settings for all virtual wires in the virtual network path are the same. Further, the aforementioned settings are the same for VNICs on either end of the virtual wires, which make up the virtual network path.
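The invariant that every virtual wire in a path shares the same settings can be expressed as a simple consistency check. This sketch assumes dict-modeled wires with hypothetical field names.

```python
def path_is_consistent(virtual_wires):
    """Check that bandwidth, priority, and security settings match
    across all virtual wires making up a virtual network path."""
    keys = ("bandwidth_gbps", "priority", "security")
    first = virtual_wires[0]
    return all(all(w[k] == first[k] for k in keys) for w in virtual_wires)


path = [
    {"bandwidth_gbps": 5, "priority": 1, "security": "aes"},
    {"bandwidth_gbps": 5, "priority": 1, "security": "aes"},
]
print(path_is_consistent(path))  # True
```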

Continuing with the discussion of FIG. 5, once the virtual wires and/or virtual network paths have been created and configured, network traffic may be transmitted over the virtual network path through, for example, a first virtual wire (Step 510) and then through a second virtual wire (Step 512). For example, when receiving network traffic from a client via the physical network interface, one virtual wire may be located between the physical network interface and a VNIC, and a second virtual wire may be located between the VNIC and another VNIC. In one embodiment of the invention, at least Steps 502-508 are performed and/or managed by the control operating system.

FIGS. 6A-6C show an example of creating virtual network paths in accordance with one or more embodiments of the invention. Specifically, FIG. 6A shows a diagram of an actual topology (600) in accordance with one or more embodiments of the invention, FIG. 6B shows how network traffic may be routed through the actual topology (600), and FIG. 6C shows a virtual network topology (640) created by routing network traffic as shown in FIG. 6B. FIGS. 6A-6C are provided as examples only, and should not be construed as limiting the scope of the invention.

Referring first to FIG. 6A, the actual topology (600) includes multiple virtual machines. Specifically, the actual topology (600) includes a router (602), a firewall (604), application server M (606), and application server N (608), each executing in a separate virtual machine. The virtual machines are located in blades communicatively coupled with a chassis interconnect (622), and include networking functionality provided by the blades via VNICs (i.e., VNIC H (610), VNIC J (612), VNIC K (614), VNIC L (616), VNIC M (618), and VNIC N (620)). As shown in FIG. 6A, each virtual machine is communicatively coupled to all other virtual machines. However, as discussed below, while there is full connectivity between the virtual machines, embodiments of the invention create virtual wires and/or virtual network paths to limit the connectivity of the virtual machines. For ease of illustration, the blades themselves are not shown in the diagram.

In one or more embodiments of the invention, the router (602), the firewall (604), application server M (606), and application server N (608) are each located in separate blades. Alternatively, as noted above, a blade may include multiple virtual machines. For example, the router (602) and the firewall (604) may be located in a single blade. Further, each virtual machine may be associated with a different number of VNICs than the number of VNICs shown in FIG. 6A.

Continuing with discussion of FIG. 6A, a network express manager (624) is configured to manage network traffic flowing to and from the virtual machines. Further, the network express manager (624) is configured to manage access to a physical network interface (626) used to communicate with client O (628) and client P (630).

In FIG. 6A, the virtual machines, VNICs, chassis interconnect (622), network express manager (624), and physical network interface (626) are all located within a blade chassis. Client O (628) and client P (630) are located in one or more networks (not shown) to which the blade chassis is connected.

FIG. 6B shows how network traffic may be routed through the actual topology (600) in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, the routing is performed by the network express manager (624) using a virtual switching table (634).

As discussed above, network traffic routed to and from the VNICs may be thought of as flowing through a “virtual wire.” For example, FIG. 6B shows a virtual wire (632) located between application server M (606) and application server N (608). To use the virtual wire, application server M (606) transmits a network packet via VNIC M (618). The network packet is addressed to VNIC N (620) associated with application server N (608). The network express manager (624) receives the network packet via the chassis interconnect (622), inspects the network packet, and determines the target VNIC location using the virtual switching table (634). If the target VNIC location is not found in the virtual switching table (634), then the network packet may be dropped. In this example, the target VNIC location is the blade in which VNIC N (620) is located. The network express manager (624) routes the network packet to the target VNIC location, and application server N (608) receives the network packet via VNIC N (620), thereby completing the virtual wire (632). In one or more embodiments of the invention, the virtual wire (632) may also be used to transmit network traffic in the opposite direction, i.e., from application server N (608) to application server M (606).
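The inspect-lookup-route-or-drop behavior just described can be sketched as a small routing function. The table layout, packet fields, and return values below are illustrative assumptions.

```python
def route_packet(switching_table, packet):
    """Route a packet toward its destination VNIC's blade, or drop it
    when the destination is absent from the virtual switching table."""
    location = switching_table.get(packet["dst_vnic"])
    if location is None:
        return ("dropped", None)  # target VNIC location not found
    return ("delivered", location)


table = {"vnic_n": "blade-2"}
print(route_packet(table, {"dst_vnic": "vnic_n"}))  # ('delivered', 'blade-2')
print(route_packet(table, {"dst_vnic": "vnic_x"}))  # ('dropped', None)
```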

Further, as discussed above, multiple virtual wires may be combined to form a “virtual network path.” For example, FIG. 6B shows virtual network path R (636), which flows from client O (628), through the router (602), through the firewall (604), and terminates at application server M (606). Specifically, the virtual network path R (636) includes the following virtual wires. A virtual wire is located between the physical network interface (626) and VNIC H (610). Another virtual wire is located between VNIC J (612) and VNIC K (614). Yet another virtual wire is located between VNIC L (616) and VNIC M (618). If the router (602) and the firewall (604) are located in the same blade, then a virtual switch may be substituted for the virtual wire located between VNIC J (612) and VNIC K (614), thereby eliminating use of the chassis interconnect (622) from communications between the router (602) and the firewall (604).

Similarly, FIG. 6B shows virtual network path S (638), which flows from client P (630), through the router (602), and terminates at application server N (608). Virtual network path S (638) includes a virtual wire between the physical network interface (626) and VNIC H (610), and a virtual wire between VNIC J (612) and VNIC N (620). The differences between virtual network path R (636) and virtual network path S (638) exemplify how multiple virtual network paths may be located in the same blade chassis.

In one or more embodiments of the invention, VNIC settings are applied separately for each virtual network path. For example, different bandwidth limits may be used for virtual network path R (636) and virtual network path S (638). Thus, the virtual network paths may be thought of as including many of the same features as traditional network paths (e.g., using Ethernet cables), even though traditional network wires are not used within the blade chassis. However, traditional network wires may still be required outside the blade chassis, for example between the physical network interface (626) and client O (628) and/or client P (630).

FIG. 6C shows a diagram of the virtual network topology (640) resulting from the use of the virtual network path R (636), virtual network path S (638), and virtual wire (632) shown in FIG. 6B. The virtual network topology (640) allows the various components of the network (i.e., router (602), firewall (604), application server M (606), application server N (608), client O (628), and client P (630)) to interact in a manner similar to a traditional wired network. However, as discussed above, communication between the components located within the blade chassis (i.e., router (602), firewall (604), application server M (606), and application server N (608)) is accomplished without the use of traditional network wires.

Embodiments of the invention allow for virtual network paths to be created using virtual wires, without the need for traditional network wires. Specifically, by placing virtual machines in blades coupled via a chassis interconnect, and routing network traffic using VNICs and a virtual switching table, the need for traditional network wires between the virtual machines is avoided. Thus, embodiments of the invention facilitate the creation and reconfiguration of virtual network topologies without the physical labor typically involved in creating a traditional wired network.

In one embodiment of the invention, one or more virtual machines may be migrated from one blade to another blade in the blade chassis. Migration may be necessitated by a number of factors. For example, a virtual machine may need to be migrated from one blade to another blade because the virtual machine requires additional resources, which are not available on the blade on which it is currently executing. Alternatively, a virtual machine may need to be migrated from one blade to another blade because the blade on which the virtual machine is currently executing is powering down, failing, and/or otherwise suspending operation.

In one embodiment of the invention, at least the bandwidth constraint associated with a virtual machine is preserved across the migration, such that the bandwidth constraint is the same before and after the migration of the virtual machine. Those skilled in the art will appreciate that the bandwidth associated with a given virtual machine is enforced by the VNIC associated with the virtual machine. As the VNIC is located in the host executing on the blade, the host includes functionality to associate the VNIC with the virtual machine and set the bandwidth of the VNIC.

FIGS. 7A-7B show flowcharts of a method for migrating a virtual machine in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, one or more of the steps shown in FIGS. 7A-7B may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of the invention should not be considered limited to the specific arrangement of steps shown in FIG. 7A-7B.

Referring to FIG. 7A, in Step 700, a virtual machine (VM) to migrate is identified. The determination of whether to migrate a given VM may be based on any number of factors, some of which are discussed above. In Step 702, migration criteria for the VM is obtained. In one embodiment of the invention, the migration criteria corresponds to the bandwidth constraint of the VM (e.g., the minimum bandwidth and/or maximum bandwidth available to the VM), a hardware constraint (e.g., minimum amount of computing resources required by the VM), a software constraint (e.g., version of host operating system required by the VM), and/or any other constraint required by the VM. In one embodiment of the invention, the migration criteria may be obtained from the VM, the host on which the VM is executing, the control operating system, or any combination thereof.

In Step 704, the control operating system sends a request including the migration criteria to hosts executing on blades in the blade chassis. In one embodiment of the invention, the control operating system uses a multicast message to send the request. In Step 706, the control operating system receives responses from the hosts. The responses may include: (i) a response indicating that the host which sent the response is unable to satisfy the migration criteria or (ii) a response indicating that the host which sent the response is able to satisfy the migration criteria.

In Step 708, a determination is made, using the responses received in Step 706, about whether there are sufficient resources available to migrate the VM. If there are insufficient resources, the method proceeds to FIG. 7B (described below). Alternatively, if there are sufficient resources, the method proceeds to Step 710. In Step 710, a target host is selected. The target host corresponds to a host to which the VM will be migrated. This selection is made by the control operating system based on the responses received in Step 706.
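Steps 704-710 amount to collecting yes/no answers from the hosts and picking a target from among the hosts that answered yes. The sketch below models that selection; the response format and the deterministic (alphabetical) tie-break are assumptions.

```python
def select_target_host(responses):
    """Pick a target host from migration-criteria responses.

    `responses` maps host name -> True if that host reported it can
    satisfy the migration criteria. Returns None when no host has
    sufficient resources (the case handled by FIG. 7B)."""
    candidates = [host for host, ok in sorted(responses.items()) if ok]
    return candidates[0] if candidates else None


# Host B lacks resources; Host C can satisfy the criteria.
print(select_target_host({"host_b": False, "host_c": True}))  # host_c
```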

In Step 712, execution on the VM is suspended. In one embodiment of the invention, suspending the VM may also include suspending execution of associated VNICs (discussed below). In Step 714, state information required to migrate the VM is obtained. In one embodiment of the invention, the state information corresponds to information required to resume execution of the VM on the target host from the state of the VM prior to being suspended in Step 712.

In Step 716, the VNIC(s) to migrate with the VM is identified. Identifying the VNIC(s) corresponds to determining which VNIC(s) is associated with the VM. In one embodiment of the invention, a VNIC is associated with the VM if the VNIC is executing on the same host as the VM and the VM receives packets from and/or transmits packets to the VNIC. In Step 718, information required to migrate the VNIC identified in Step 716 is obtained. In one embodiment of the invention, the information corresponds to information required to resume execution of the VNIC on the target host from the state of the VNIC prior to suspending the VM in Step 712.

In Step 720, the VM and VNIC(s) are migrated to the target host. In Step 722, the VM and VNIC(s) are configured on the target host. In one embodiment of the invention, the VM and VNIC(s) are configured such that they operate in the same manner on the target host as they operated on the source host (i.e., the host from which they were migrated). Configuring the VM and VNICs may also include configuring various portions of the target host. In one embodiment of the invention, the VM and VNIC(s) are configured using the information obtained in Steps 714 and 718. In one embodiment of the invention, Step 722 is initiated and monitored by the control operating system. In Step 724, the virtual switching table is updated to reflect that the VNIC(s) identified in Step 716 are on the target host. In Step 726, the execution of the VM is resumed on the target host.

Referring to FIG. 7B, the method proceeds here when, as described above, there are insufficient resources to migrate the VM. In Step 726, the lowest priority active virtual wire operating in the blade chassis is obtained. In one embodiment of the invention, the control operating system maintains a data structure which includes the priorities of the various virtual wires operating in the blade chassis. Further, in one embodiment of the invention, only the control operating system includes functionality to set and change the priorities of the virtual wires.

In Step 728, the lowest priority active virtual wire is suspended. In one embodiment of the invention, suspending the lowest priority active virtual wire includes suspending operation of the VNICs on either end of the virtual wire. In addition, the VMs associated with the VNICs may also be suspended. Further, suspending the VNICs and, optionally, the VMs, results in freeing bandwidth and computing resources on the respective blades on which the suspended VNICs and VMs were executed.

In Step 730, the control operating system sends a request including the migration criteria to hosts executing on blades in the blade chassis. In one embodiment of the invention, the control operating system uses a multicast message to send the request. In Step 732, the control operating system receives responses from the hosts. The responses may include: (i) a response indicating that the host which sent the response is unable to satisfy the migration criteria or (ii) a response indicating that the host which sent the response is able to satisfy the migration criteria.

In Step 734, a determination is made, using the responses received in Step 732, about whether there are sufficient resources available to migrate the VM. If there are insufficient resources, the method proceeds to Step 726. Alternatively, if there are sufficient resources, the method proceeds to Step 710 in FIG. 7A.
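The suspend-and-retry loop of FIG. 7B can be sketched as follows. All names are illustrative; `probe` stands in for the multicast request/response exchange with the hosts, and the numeric priorities (lower number = lower priority) are an assumption for the example.

```python
def migrate_with_fallback(criteria, hosts, virtual_wires, probe):
    """Probe hosts for capacity; while none can satisfy the criteria,
    suspend the lowest-priority active virtual wire (freeing its
    bandwidth and computing resources) and probe again."""
    suspended = []
    wires = sorted(virtual_wires, key=lambda w: w["priority"])  # lowest first
    while True:
        responses = probe(criteria, hosts, suspended)
        able = [h for h, ok in sorted(responses.items()) if ok]
        if able:
            return able[0], suspended  # target host found
        if not wires:
            return None, suspended  # nothing left to suspend
        suspended.append(wires.pop(0)["name"])


# Example modeled loosely on FIGS. 9A-9B: the VM needs 11 GBPS, which no
# host can offer until the lowest-priority wire (5 GBPS) is suspended.
def probe(criteria, hosts, suspended):
    freed = 5 if "vw_a" in suspended else 0
    return {h: cap + freed >= criteria["bandwidth_gbps"]
            for h, cap in hosts.items()}


hosts = {"host_a": 4, "host_c": 7}
wires = [{"name": "vw_b", "priority": 3},
         {"name": "vw_c2", "priority": 2},
         {"name": "vw_a", "priority": 1}]
target, suspended = migrate_with_fallback(
    {"bandwidth_gbps": 11}, hosts, wires, probe)
print(target, suspended)  # host_c ['vw_a']
```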

In one embodiment of the invention, if one or more virtual wires are suspended per Step 728, then the method described in FIGS. 7A and 7B may be used to migrate the VMs associated with the suspended virtual wires. In one embodiment of the invention, the order in which VMs are migrated to resume activity of suspended virtual wires is based on the priority of the suspended virtual wires.

FIGS. 8A-8B show an example of migrating a virtual machine in accordance with one or more embodiments of the invention. FIGS. 8A-8B are provided as examples only, and should not be construed as limiting the scope of the invention.

Referring to FIG. 8A, consider the scenario in which the system includes three blades (800, 802, 804) connected to the chassis interconnect (824) in a blade chassis (not shown). The system is initially configured such that Blade A (800) includes Host A (806), Blade B (802) includes Host B (808) and Blade C (804) includes Host C (810).

As shown in FIG. 8A, Host A (806) includes VNIC A (818) associated with the Control Operating System (OS) (812) and VNIC B (820) associated with Virtual Machine (VM) A (814). Further, Host B (808) includes VNIC C (822) associated with VM B (816). Host C (810) initially does not include any VMs.

As shown in FIG. 8A, VM A (814) communicates with VM B (816) using a virtual wire with a bandwidth limit of 5 gigabits per second (GBPS). The virtual wire connects VNIC B (820) with VNIC C (822). As such, network traffic from VM A (814) is transmitted through VNIC B (820) to receive buffer C (830) where it remains until it is sent to or requested by VNIC C (822). Similarly, network traffic from VM B (816) is transmitted through VNIC C (822) to receive buffer B (828) where it remains until it is sent to or requested by VNIC B (820). The virtual switching table (832) in the network express manager (826) implements the virtual wire.

After the system has been configured as described above and shown in FIG. 8A, a determination is made that VM A (814) requires additional processing resources and that such resources are not available on Host A (806). In accordance with FIGS. 7A and 7B, the control OS determines the migration criteria for VM A (814). The migration criteria includes a hardware constraint defining the processing resources required by VM A (814) as well as a bandwidth constraint (i.e., 5 GBPS).

The control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host B (808) and Host C (810)). Host B (808) responds that it does not have sufficient resources to satisfy the migration criteria. Host C (810) responds that it has sufficient resources to satisfy the migration criteria. At this stage, the control OS selects Host C (810) as the target host. The control OS then initiates the migration of VM A (814) along with VNIC B (820) to Host C (810) in accordance with Steps 712-726.

The result of the migration is shown in FIG. 8B. As shown in FIG. 8B, after the migration, VM A (814) and VNIC B (820) are located on Host C (810). Further, the virtual wire is preserved across the migration.

FIGS. 9A-9B show an example of migrating a virtual machine in accordance with one or more embodiments of the invention. FIGS. 9A-9B are provided as examples only, and should not be construed as limiting the scope of the invention.

Referring to FIG. 9A, consider the scenario in which the system includes three blades (900, 902, 904) connected to the chassis interconnect (not shown) in a blade chassis (not shown). The system is initially configured such that Blade A (900) includes Host A (906), Blade B (902) includes Host B (908) and Blade C (904) includes Host C (910).

As shown in FIG. 9A, Host A (906) includes VNIC A (922) associated with VM A (912) and VNIC B (924) associated with VM B (914). Further, Host B (908) includes VNIC C1 (926) and VNIC C2 (928) both associated with VM C (916). Finally, Host C (910) includes VNIC D (930) associated with VM D (918) and VNIC E (932) associated with VM E (920).

As shown in FIG. 9A, VM A (912) communicates with VM E (920) using virtual wire (VW) A (938) with a bandwidth limit of 5 gigabits per second (GBPS). VW A (938) connects VNIC A (922) with VNIC E (932). Further, VM B (914) communicates with VM C (916) using VW B (934) with a bandwidth limit of 3 GBPS. VW B (934) connects VNIC B (924) with VNIC C1 (926). Finally, VM C (916) communicates with VM D (918) using VW C2 (936) with a bandwidth limit of 8 GBPS. VW C2 (936) connects VNIC C2 (928) with VNIC D (930). Each of the VWs is associated with a priority. The priority of the VWs in FIG. 9A from highest to lowest is: VW B (934), VW C2 (936), and VW A (938).

After the system has been configured as described above and shown in FIG. 9A, a determination is made to power down Blade B (902). As such, VM C (916) must be migrated to another host. In accordance with FIGS. 7A and 7B, the control OS (not shown) determines the migration criteria for VM C (916). The migration criteria includes a bandwidth constraint (i.e., 11 GBPS).

The control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host A (906) and Host C (910)). Both Host A (906) and Host C (910) respond that they do not have sufficient resources to satisfy the migration criteria. At this stage, the control OS, pursuant to FIG. 7B, identifies the lowest priority active VW (i.e., VW A (938)). VW A (938) is subsequently suspended. Suspending VW A (938) includes suspending VM A (912), VNIC A (922), VM E (920), and VNIC E (932).

The control OS then resends the request including the migration criteria to the hosts in the blade chassis (i.e., Host A (906) and Host C (910)). Host A (906) responds that it does not have sufficient resources to satisfy the migration criteria. Host C (910) responds that it has sufficient resources to satisfy the migration criteria. At this stage, the control OS selects Host C (910) as the target host.

The control OS then initiates the migration of VM C (916) along with VNIC C1 (926) and VNIC C2 (928) to Host C (910) in accordance with Steps 712-726. Once the migration is complete, Blade B (902) is powered down. The result of the migration is shown in FIG. 9B. As shown in FIG. 9B, the VWs are preserved across the migration. However, because VNIC C2 (928) and VNIC D (930) are now located on Host C (910), virtual switch (VS) C2 (937) instead of VW C2 (936) is used to connect VNIC C2 (928) and VNIC D (930).

Those skilled in the art will appreciate that while the invention has been described with respect to using blades, the invention may be extended for use with other computer systems, which are not blades. Specifically, the invention may be extended to any computer, which includes at least memory, a processor, and a mechanism to physically connect to and communicate over the chassis bus. Examples of such computers include, but are not limited to, multi-processor servers, network appliances, and light-weight computing devices (e.g., computers that only include memory, a processor, a mechanism to physically connect to and communicate over the chassis bus), and the necessary hardware to enable the aforementioned components to interact.

Further, those skilled in the art will appreciate that if one or more computers that are not blades are used to implement the invention, then an appropriate chassis may be used in place of the blade chassis.

Software instructions to perform embodiments of the invention may be stored on a computer readable medium such as a compact disc (CD), a diskette, a tape, a file, or any other computer readable storage device.

While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.

Classifications
U.S. Classification709/222
International ClassificationG06F15/177
Cooperative ClassificationG06F9/455
European ClassificationG06F9/455
Legal Events
DateCodeEventDescription
Dec 11, 2007ASAssignment
Owner name: SUN MICROSYSTEMS, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TRIPATHI, SUNAY;REEL/FRAME:020229/0239
Effective date: 20071206