|Publication number||US7266637 B1|
|Application number||US 10/821,559|
|Publication date||Sep 4, 2007|
|Filing date||Apr 9, 2004|
|Priority date||May 7, 2002|
|Also published as||US6757778|
|Inventors||Hans F. van Rietschote|
|Original Assignee||Veritas Operating Corporation|
This application is a divisional of U.S. Ser. No. 10/140,614, filed May 7, 2002 now U.S. Pat. No. 6,757,778.
1. Field of the Invention
This invention is related to the field of storage management and, more particularly, to software used in storage management.
2. Description of the Related Art
Computer systems typically include storage devices (or are coupled to access storage devices through, e.g., a network) for storing the software to be executed on the computer system, the data to be operated on by the software, and/or the data resulting from the execution of the software by the computer system. Growth continues in the amount of storage that is included in computer systems or to which computer systems have access. Additionally, different types of storage (e.g. network attached storage (NAS), storage area networks (SAN), etc.) continue to be developed and expanded.
In order to manage the use and operation of the storage devices, various storage management software has been developed. For example, file system software, volume managers, volume replicators, etc. have been developed to help effectively manage storage devices. Typically, such storage management software includes one or more modules of code that are included “inside” the operating system (executing with operating system privileges and/or interfacing closely with other operating system code). Thus, such storage management software requires the support of, or at least the permission of, the operating system vendors who make the operating systems for the computer systems on which the storage management software is to execute.
In many cases, the storage management software may be made available on several different operating system platforms. In these cases, different versions of the software must be maintained for each different operating system. Additionally, each time a given operating system changes, the corresponding version often must be modified and retested (and in some cases, recertified by the operating system vendor).
In the case of open source operating systems (e.g. Linux), the module that is incorporated in the operating system (and sometimes other parts of the storage management software) must be open-sourced for inclusion in the operating system. Thus, at least a portion of the storage management software becomes publicly available.
In the extreme, the operating system vendor may drop support/permission with regard to subsequently developed versions of the operating system. Additionally, the module may have to make use of unpublished application programming interfaces (APIs), which may be changed more freely and/or more often by the operating system vendor (and thus may require more frequent changes to the module and/or the storage management software as a whole).
A storage management system is provided. In one embodiment, the storage management system is configured to provide one or more virtual storage devices for use by an operating system. The storage management system is configured to map files representing the virtual storage devices to a plurality of volumes to be stored on physical storage devices. In various embodiments, the storage management system may include storage management components (e.g. a file system, a volume manager, a volume replicator, or a hierarchical storage manager) which manage the files representing the virtual storage devices.
In one implementation, a storage management system may include one or more storage management components and may be configured to provide one or more virtual storage devices for use by the operating system. The storage management system may support a set of storage commands for the virtual storage devices. The set of storage commands may include: (i) a set of standard commands used by the operating system to communicate with storage devices, and (ii) one or more additional commands for communicating with the storage management components.
In another embodiment, a storage management system may be configured to schedule various applications/operating systems for execution on multiple processing hardware. The storage management system may be configured to present a consistent view of storage for a given application/operating system, independent of which of the multiple processing hardware is executing the application/operating system. In some embodiments, the application/operating system may be executing within a complete virtual machine.
The following detailed description makes reference to the accompanying drawings, which are now briefly described.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
Turning now to
The storage management system 24 is a software layer which operates between the operating system 14 and the physical hardware (e.g. the processing hardware 26, the network hardware 28, and the physical storage devices 30A-30B). The storage management system 24 provides a set of virtual storage devices 22A-22B (and a virtual network 20, in this embodiment) for use by the operating system 14. For example, when the computing system 10 is started, the storage management system 24 may be started first, and then the operating system 14 may be started. During start up, the operating system 14 may attempt to locate the storage devices and other I/O devices in the computer system on which the operating system 14 is executing. The storage management system 24 may respond to these attempts by indicating that one or more storage devices and/or network hardware devices are in the computer system. The storage devices and/or network devices are virtual (the virtual network 20 and the virtual storage devices 22A-22B). That is, the virtual devices are not actual hardware, but are logical software constructs created by the storage management system 24. From the point of view of the operating system 14, the devices appear to be physical devices with which the operating system 14 can interact, but in actuality the storage management system 24 may respond to commands from the operating system 14.
More particularly, in the embodiment of
Thus, in the embodiment of
In one embodiment, the storage management system 24 may implement the virtual storage devices as Small Computer System Interface (SCSI) devices. Thus, in such an embodiment, the storage commands may include standard SCSI commands (e.g. as defined in the SCSI-1, SCSI-2, and/or SCSI-3 standards). Other embodiments may employ other standard storage device types. In one embodiment, the storage management system 24 may implement the virtual network as an Ethernet network interface controller (NIC). Other embodiments may implement other network interface controllers. The virtual storage devices which couple via the virtual network 20 (e.g. the virtual storage device 22A in
The network driver 16 is coded to interact with the network hardware represented by the virtual network 20. That is, the network driver 16 is configured to generate network commands to the network hardware in response to higher level software in the operating system 14. Similarly, the storage driver 18 is coded to interact with the storage device represented by the virtual storage device 22B. That is, the storage driver 18 is configured to generate storage commands to the storage device 22B. In a typical computer system, the network commands and storage commands would be driven out of the processor on which the operating system 14 is executing and would be routed by the computer system hardware to the network hardware or storage device, respectively (e.g. over various interconnect within the computer system such as peripheral buses).
In the illustrated embodiment, by contrast, the network commands and storage commands are received by the storage management system 24 (which provides the virtual network 20 and the virtual storage devices 22A-22B for use by the operating system 14), illustrated in
The network driver 16 and the storage driver 18 generate the network commands and storage commands, respectively, while executing on the underlying processing hardware 26. The processing hardware 26 may include one or more processors, and one of the processors may be executing the network driver 16 or storage driver 18 when a command is generated. The executing processor may trap the network commands and storage commands to the storage management system 24, prior to the commands being transmitted out of the executing processor to other hardware within the processing hardware 26. For example, in one embodiment, the executing processor may support various privilege levels at which software may execute. The storage commands and network commands may be generated by instructions which require the higher privilege levels (the "more privileged" privilege levels) to execute. The storage management system 24 may force the operating system 14 (including the drivers 16 and 18) and the applications 12A-12B to execute at a privilege level which is insufficiently privileged to perform the commands. Thus, when the instructions to generate the commands are encountered, the executing processor may trap. The storage management system 24 may begin execution in response to the trap, and may thus capture the network commands/storage commands. Other mechanisms for trapping the commands may be used in other embodiments.
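The trap-based capture described above can be sketched in a few lines. This is a simplified model, not the patented implementation: the class names, the numeric privilege levels, and the command tuples are all invented for illustration, and a real system would trap in processor hardware rather than in application code.

```python
# Illustrative sketch of the trap-and-capture flow: the guest runs at an
# insufficient privilege level, so its storage commands trap to the
# storage management layer instead of reaching physical hardware.
# All names and levels here are assumptions, not from the patent.

class StorageManagementLayer:
    def __init__(self):
        self.captured = []          # commands intercepted via the trap path

    def on_trap(self, command):
        self.captured.append(command)   # emulate the virtual device here
        return "OK"

class VirtualCPU:
    GUEST_LEVEL = 3                 # level the guest OS/drivers run at
    REQUIRED_LEVEL = 0              # level I/O instructions would require

    def __init__(self, trap_handler):
        self.trap_handler = trap_handler

    def issue_storage_command(self, command):
        # The guest's level is insufficient, so the instruction traps and
        # the handler captures the command before it leaves the processor.
        if self.GUEST_LEVEL > self.REQUIRED_LEVEL:
            return self.trap_handler.on_trap(command)
        raise RuntimeError("command would reach physical hardware")

sml = StorageManagementLayer()
cpu = VirtualCPU(trap_handler=sml)
cpu.issue_storage_command(("READ", 0x10))
```

The guest never observes the interposition: from its perspective the command simply completes, which is why the operating system can run unmodified.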
In one embodiment, the storage management system 24 maps each virtual storage device 22A-22B to one or more files to be stored on the physical storage devices 30A-30B in the computing system 10. The physical storage devices 30A-30B need not be the same type as the virtual storage devices 22A-22B. For example, in one embodiment, each virtual storage device may map to a file or files representing the state of the virtual storage device (e.g. the file or files may be a series of blocks corresponding to the blocks stored on the virtual storage device). Additionally, if desired, writes may be accumulated in a copy-on-write file so that the writes may be discarded if it is later determined that the writes are not to be committed to the persistent state of the virtual storage device. In other cases, the writes may be committed directly to the file representing the state of the virtual storage device. The storage management system 24 may generate storage commands and/or network commands (when executing on the processing hardware 26) to the network hardware 28 and the physical storage devices 30A-30B to access/update the files in response to commands generated by the operating system 14 (or more particularly the driver software 16 and 18). Additionally, as described in more detail below, the storage management system 24 may generate commands in response to the operation of various storage management components within the storage management system 24 or in response to input from the remote management interface 32.
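The file mapping with an optional copy-on-write overlay might look like the following sketch. The block-dictionary representation and the method names are assumptions chosen for brevity; a real implementation would operate on files on the physical storage devices.

```python
# Sketch (assumed design) of a virtual storage device backed by a base
# state plus a copy-on-write overlay, so uncommitted writes can be
# discarded rather than committed to the persistent state.

class VirtualDisk:
    def __init__(self, num_blocks):
        self.base = {i: b"\x00" for i in range(num_blocks)}  # persistent state
        self.cow = {}                                        # uncommitted writes

    def write(self, block, data):
        self.cow[block] = data      # accumulate writes in the COW overlay

    def read(self, block):
        return self.cow.get(block, self.base[block])  # overlay wins on reads

    def commit(self):
        self.base.update(self.cow)  # fold writes into the persistent state
        self.cow.clear()

    def discard(self):
        self.cow.clear()            # drop writes that are not to be committed

disk = VirtualDisk(num_blocks=4)
disk.write(2, b"new")
assert disk.read(2) == b"new" and disk.base[2] == b"\x00"
disk.discard()                      # the write never reaches persistent state
assert disk.read(2) == b"\x00"
```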
As illustrated in
The storage management system 24 may include a set of software components to perform various functions included in the storage management system 24. For example, the embodiment of the storage management system 24 shown in
The file system 36 may provide an organizational structure for the files (e.g. allowing the files to be arranged in a hierarchy of directories) and may map each file to a volume based on the position of the file in the organizational structure, the desired attributes of the file (e.g. the speed of the storage on which the file is stored, whether or not failure protection mechanisms such as redundant array of independent disks (RAID) are to be applied to the file, which failure protection mechanisms to use (e.g. which RAID level), etc.), etc. Additionally, the file system 36 may provide backup and recovery mechanisms (e.g. journaling of file updates), mechanisms to reduce the fragmentation of files among noncontiguous blocks on the underlying storage device, etc. An exemplary file system which may be used in one implementation is the VERITAS File System™ available from VERITAS™ Software Corporation (Mountain View, Calif.).
Generally, volumes may be used to abstract the organizational structure of the file system 36 from the storage of data on the physical storage devices 30A-30B. A “volume” is a defined amount of storage space. The size of the volume may be modified via specific commands to the volume manager 38, but otherwise may remain fixed during operation. A volume may be mapped to storage on one physical storage device 30A-30B, or may be spread across multiple physical storage devices, as desired. Each volume may have a set of volume attributes, which may control the type of storage on which the volume is stored, the RAID level used, etc.
The volume manager 38 manages the storage of the volumes on the physical storage devices 30A-30B, based on the volume attributes and the available storage. The volume manager 38 may change the size of volumes, and may move volumes from physical storage device to physical storage device in response to changes to the volume attributes or changes requested via the remote management interface 32 or other software. Additionally, the volume manager 38 may implement the desired RAID level (if used) in software by managing the mapping of volumes to physical storage devices 30A-30B (e.g. striping, mirroring, generating parity, etc.). The volume manager 38 may further be configured to monitor performance of the storage system and change volume configuration to improve the performance of the storage system. An exemplary volume manager that may be used in one implementation is the VERITAS Volume Manager™ available from VERITAS™ Software Corporation. Additionally, the volume replicator 40 may replicate a given volume to one or more remote physical storage devices (not shown in
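The address arithmetic behind software striping (RAID 0) can be made concrete with a small sketch. The stripe-unit size, device names, and function name are invented for illustration; real volume managers support many layouts beyond this round-robin scheme.

```python
# Minimal sketch (assumed layout) of how a volume manager might map a
# volume block address onto two striped physical devices (RAID 0):
# consecutive stripe units alternate between the devices.

STRIPE_UNIT = 64          # blocks per stripe unit (assumed)
DEVICES = ["30A", "30B"]  # physical storage devices backing the volume

def map_block(volume_block):
    unit = volume_block // STRIPE_UNIT
    device = DEVICES[unit % len(DEVICES)]          # round-robin across devices
    offset = (unit // len(DEVICES)) * STRIPE_UNIT + volume_block % STRIPE_UNIT
    return device, offset

# The first stripe unit lands on 30A, the second on 30B, the third back
# on 30A at the next stripe-unit offset.
assert map_block(0) == ("30A", 0)
assert map_block(64) == ("30B", 0)
assert map_block(128) == ("30A", 64)
```

Because this mapping lives entirely in the volume manager, the RAID level of a volume can be changed without the operating system above ever seeing a different device.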
The hierarchical storage manager 42 may attempt to move files which are infrequently accessed from more expensive (usually faster) storage devices to less expensive (usually slower) storage devices. The hierarchical storage manager 42 may monitor access to the various files in the file system and, if a given file has not been accessed for a defined period of time, may move the file to another storage device. The hierarchical storage manager 42 may retain information in the original file location to locate the moved file. Thus, in the event that the file is accessed, the hierarchical storage manager 42 may automatically retrieve the file and return it to the more expensive storage. For example, the hierarchical storage manager 42 may move the file to a different volume which has volume attributes mapping the volume to a slower storage device. An exemplary hierarchical storage manager that may be used in one implementation is the VERITAS NetBackup Storage Migrator™ available from VERITAS™ Software Corporation.
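The age-based migration policy can be sketched as a simple scan over a file catalog. The threshold, tier names, and catalog shape are assumptions for illustration only; a real hierarchical storage manager would also leave a stub at the original location so access triggers automatic recall, as described above.

```python
# Hedged sketch (all names assumed) of the hierarchical-storage policy:
# files on the fast tier untouched past a threshold are planned for
# migration to a slower, less expensive tier.

AGE_LIMIT = 90 * 86400  # seconds of inactivity before migration (assumed)

def plan_migrations(files, now):
    """files: {name: (last_access_epoch, tier)} -> list of planned moves."""
    moves = []
    for name, (last_access, tier) in files.items():
        if tier == "fast" and now - last_access > AGE_LIMIT:
            moves.append((name, "fast", "slow"))
    return moves

catalog = {
    "vdsk1.dsk": (1_000_000, "fast"),   # long idle -> candidate to migrate
    "vdsk2.dsk": (9_000_000, "fast"),   # recently accessed -> stays put
}
assert plan_migrations(catalog, now=9_500_000) == [("vdsk1.dsk", "fast", "slow")]
```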
The storage management components may not, in some embodiments, require the support/permission of the operating system vendors for their products. A single version of each storage management component (for the storage management system 24) may be used to support multiple operating system platforms. Additionally, in one embodiment, the storage management components need not be open-sourced, free versions need not be negotiated with the operating system vendors, and unpublished APIs of the operating system 14 need not be used by the storage management components.
In one implementation, the operating system 14 may execute unmodified on the computing system 10 (unmodified as compared to versions which execute on computing systems which do not include the storage management system 24). The operating system 14 uses the virtual network 20 and the virtual storage devices 22A-22B in the same fashion as the corresponding physical devices are used. The operating system 14 executes without any recognition that the storage management system 24 is in the computing system 10. However, in some cases, it may be desirable to have an application 12A-12B which is “aware” of the storage management system 24. For example, a backup program may execute as an application, and it may be desirable for the backup program to communicate with one or more of the storage management components to perform the backup. Similarly, if the operating system 14 were subsequently modified to be “aware” of the storage management system 24, it may be desirable for the operating system 14 to be able to communicate with the storage management components.
As mentioned above, the storage commands transmitted by the storage driver 18 to the virtual storage devices may include the standard storage commands according to the type of virtual storage device (e.g. SCSI commands). The storage management system 24 may also support additional storage commands beyond the standard storage commands. The additional storage commands may be used to communicate with the storage management components and, as a group, may form an expanded API for the virtual storage device. For example, if the standard storage commands are SCSI commands, additional input/output control (IOCTL) commands may be defined to communicate with the storage management components. That is, some encodings of the IOCTL commands in the standard SCSI interface are left undefined; these encodings may be assigned to various commands directed to the storage management components.
Any set of additional commands may be defined, depending on the storage management components included in the storage management system. For example, commands may include a create new disk command to create a new virtual storage device, a create snapshot of disk command to create a copy of a virtual storage device at a given point in time, a set RAID level command to change the RAID level used for a volume containing a virtual storage device, a start replication command to cause a volume storing a virtual storage device to be replicated, a move disk command to move a volume storing a virtual storage device to a different physical storage device, etc.
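A decode table for such an expanded API might look like the following. Every opcode value and component name here is invented for illustration (the patent does not specify encodings); the one concrete detail assumed from the SCSI standard is that 0x28 is a standard READ(10) operation code.

```python
# Sketch of an expanded storage-command API layered on otherwise-unused
# command encodings, as described above. Opcodes and component names are
# hypothetical.

EXPANDED_API = {
    0xE0: ("volume_manager", "create_new_disk"),
    0xE1: ("file_system", "create_snapshot"),
    0xE2: ("volume_manager", "set_raid_level"),
    0xE3: ("volume_replicator", "start_replication"),
    0xE4: ("volume_manager", "move_disk"),
}

def decode(opcode, params):
    if opcode in EXPANDED_API:
        component, call = EXPANDED_API[opcode]
        return ("api_call", component, call, params)  # route to a component
    return ("standard", opcode, params)               # ordinary device command

# An expanded opcode is routed to a storage management component...
assert decode(0xE2, {"volume": 1, "raid": 1})[:3] == (
    "api_call", "volume_manager", "set_raid_level")
# ...while a standard opcode (e.g. SCSI READ(10)) is processed normally.
assert decode(0x28, {"lba": 0})[0] == "standard"
```

Because the commands travel over the ordinary storage interface, an "aware" application can reach the storage management components without any new kernel interface, and an unaware operating system simply passes them through.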
The processing hardware 26 includes at least one processor, and may include multiple processors in a multi-processing configuration. The processing hardware 26 may also include additional circuitry (e.g. a memory controller and memory, peripheral bus controllers, etc.) as desired. The processing hardware 26 may be a single blade for a blade server (in which each blade includes processors and memory). In other computing systems, multiple blades may be used (e.g.
The network hardware 28 may comprise any hardware for coupling the processing hardware 26 to a network. For example, the network hardware 28 may comprise a network interface controller for an Ethernet network. Alternatively, the network hardware 28 may interface to a token ring network, a fibre channel interface, etc.
The physical storage devices 30A-30B may be any type of device capable of storing data. For example, physical storage devices may comprise SCSI drives, integrated drive electronics (IDE) drives, drives connected via various external interfaces (e.g. universal serial bus, firewire (IEEE 1394), etc.), tape drives, compact disc read-only-memory (CD-ROM), CD recordable (CD-R), CD read/write (CD-RW), digital versatile disk (DVD), etc. The physical storage devices 30A-30B may be coupled to the processing hardware via a peripheral interface (e.g. physical storage device 30B) or via a network (e.g. NAS or SAN technologies, illustrated in
The operating system 14 may be any operating system, such as any of the Microsoft™ Windows™ operating systems, any Unix variant (e.g. HP-UX, IBM AIX, Sun Solaris, Linux, etc.), or any other operating system.
It is noted that, while virtual and physical network hardware is shown in
While in the above description, the storage management system provides virtual I/O devices for the operating system 14 to use, other embodiments may employ additional functionality. For example, in one embodiment, the storage management system 24 may also include a process manager 48 for scheduling various portions of the storage management system 24, the operating system 14, and the applications 12A-12B for execution on the processing hardware 26. In other embodiments, the storage management system may employ a full virtual machine in which the operating system 14 and applications 12A-12B may execute. Generally, a virtual machine comprises any combination of software, one or more data structures in memory, and/or one or more files stored on various physical storage devices. The virtual machine mimics the hardware used during execution of a given application and/or operating system. For example, the applications 12A-12B are designed to execute within the operating system 14, and both the applications 12A-12B and the operating system 14 are coded with instructions executed by a virtual CPU in the virtual machine. The applications 12A-12B and/or the operating system 14 may make use of various virtual storage and virtual I/O devices (including the virtual network 20, the virtual storage devices 22A-22B, and other virtual I/O devices (not shown) such as modems, audio devices, video devices, universal serial bus (USB) ports, firewire (IEEE 1394) ports, serial ports, parallel ports, etc.).
The virtual machine in which an application/operating system is executing encompasses the entire system state associated with the application. Generally, when a virtual machine is active (i.e. the application/operating system within the virtual machine is executing), the virtual machine may be stored in the memory of the computing system on which the virtual machine is executing and in the files on the physical storage devices. A virtual machine may be suspended, in which case an image of the virtual machine is written to a physical storage device, thus capturing the current state of the executing application. The image may include one or more files written in response to the suspend command, capturing the state of the virtual machine that was in memory in the computing system, as well as the files stored on the physical storage devices 30A-30B that represent the virtual storage devices included in the virtual machine. The state may include not only files written by the application, but also uncommitted changes to files which may still be in the memory within the virtual machine, the state of the hardware within the virtual machine (including the virtual CPU, the memory in the virtual machine, etc.), etc. Thus, the image may be a snapshot of the state of the executing application.
A suspended virtual machine may be resumed. In response to the resume command, the storage management system may read the image of the suspended virtual machine from disk and may activate the virtual machine in the computer system.
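The suspend/resume round trip can be illustrated with a toy serializer. This is an assumption-laden sketch: real virtual machine images capture processor registers, device state, and memory pages in a binary format, not JSON, and the function and field names below are invented.

```python
# Minimal sketch of suspend/resume: suspend serializes the in-memory
# machine state to an image, and resume reconstructs the virtual machine
# from that image (plus its on-disk virtual-disk files, referenced here
# by name only).

import json

def suspend(vm_state, image_store):
    image_store[vm_state["name"] + ".img"] = json.dumps(vm_state)

def resume(name, image_store):
    return json.loads(image_store[name + ".img"])

store = {}
running = {"name": "vm70A", "cpu": {"pc": 0x4000}, "memory": {"0": 7},
           "disks": ["vdsk1.dsk"]}
suspend(running, store)
restored = resume("vm70A", store)
assert restored == running   # state survives the suspend/resume round trip
```

Because the image is ordinary data on a physical storage device, it can be resumed on different processing hardware, which is what makes the scheduling and failover described later possible.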
In one embodiment, the storage virtualizer may be part of the virtual machine software. Any virtual machine software may be used. For example, in one implementation, the ESX server™ product available from VMWare, Inc. (Palo Alto, Calif.) or the GSX server™ product available from VMWare, Inc. may be used.
In some embodiments, the storage management system 24 may further include cluster server software 50 to allow the computing system 10 to be clustered with other computing systems to allow the fail over of an application 12A-12B if a failure occurs on the computing system 10. Generally, the cluster server 50 may monitor each application and the resources it needs for execution (e.g. operating system services, hardware in the computing system 10, etc.) to detect a failure. In embodiments including full virtual machines, the entire virtual machine may be failed over. In still other embodiments, an application may be failed over rather than a virtual machine. For example, the application may be preprocessed to provide information used in the failover (e.g. the Instant Application Switching technology from Ejasent, Inc. (Mountain View, Calif.) may be used to fail over applications).
The remote management interface 32 may be used to directly interface to the storage management system 24. For example, a system administrator responsible for the computing system 10 may use the remote management interface 32 to communicate with the storage management components to reconfigure the physical storage devices 30A-30B for the computing system 10 while keeping the virtual storage unchanged relative to the operating system 14. In embodiments employing virtual machines, the remote management interface 32 may execute in a separate virtual machine from the operating system 14 and the applications 12A-12B.
Turning next to
The file system 36 maps the files corresponding to the virtual disks to a plurality of volumes (volume 1 through volume M in
For example, the volume manager 38 maps volume 1 as stripes to physical storage devices 30A and 30B. The virtual disk 22A is mapped to volume 1, and thus has striping characteristics (RAID level 0). On the other hand, the volume manager 38 maps volume 2 to the physical storage device 30E, and mirrors volume 2 on the physical storage device 30F (RAID level 1). Since vdsk2.dsk and vdskn.dsk are mapped to volume 2, these files have RAID level 1 characteristics. The volume manager maps volume 3 to a slower (and possibly less expensive) physical storage device 30C. The file vdsk3.dsk is mapped to volume 3. The hierarchical storage manager 42 may map the file vdsk3.dsk to volume 3 in response to determining that the file vdsk3.dsk has not been accessed for a long period of time. Alternatively, the file system 36 may map the file vdsk3.dsk to volume 3. Finally, volume M is mapped to the physical storage device 30D in this example.
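The RAID-1 behavior attributed to volume 2 above amounts to fanning each write out to every mirror. The class and device names in this sketch are assumed; a real volume manager would also handle resynchronization after a mirror fails.

```python
# Sketch of mirroring (RAID 1) as described for volume 2: each write to
# the volume is duplicated to both physical devices, so either device
# can serve reads. Device names follow the example above; the design is
# otherwise invented for illustration.

class MirroredVolume:
    def __init__(self, devices):
        self.devices = {d: {} for d in devices}    # per-device block store

    def write(self, block, data):
        for store in self.devices.values():
            store[block] = data                    # duplicate to every mirror

    def read(self, block):
        # Any mirror can serve reads; pick the first for simplicity.
        return next(iter(self.devices.values()))[block]

vol2 = MirroredVolume(["30E", "30F"])
vol2.write(5, b"payload")
assert vol2.devices["30E"][5] == vol2.devices["30F"][5] == b"payload"
```

Since vdsk2.dsk and vdskn.dsk live on this volume, both files inherit the mirroring without the file system or the guest operating system taking any part in it.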
Additionally, the volume replicator 40 may be operating to replicate one or more of the volumes 1 through M to one or more remote physical storage devices (not shown in
It is noted that, while mirroring and striping are illustrated in
Turning now to
The storage management system 24 decodes the storage command and determines if the storage command is part of the expanded API (decision block 60). If not, the storage management system 24 processes the standard storage command to the addressed virtual storage device (block 62). On the other hand, if the storage command is part of the expanded API, the storage management system 24 may further decode the storage command to select the storage management component to which the API call is to be directed (block 64). The storage command may also include one or more parameters for the API call. The storage management system 24 forwards an API call to the selected storage management component (block 66), which then processes the API call. It is noted that the storage commands, when generated by an application such as application 12A or 12B, may pass through the operating system 14 without the operating system 14 operating on the command itself. In other words, the operating system 14 (and particularly the storage driver 18) may transmit the storage command to the virtual storage device without operating on the command itself.
Turning now to
The storage management system 24 is configured to manage the execution of the virtual machines 70A-70P on any of the processing hardware 26A-26N. At different points in time, the storage management system 24 may schedule a given virtual machine (e.g. the virtual machine 70A) for execution on different processing hardware 26A-26N. Independent of which processing hardware 26A-26N is selected, the storage management system 24 ensures that the virtual machine has a consistent view of storage. That is, the same virtual storage devices are made available within the virtual machine independent of which processing hardware 26A-26N is executing the virtual machine.
For example, at a first point in time, the virtual machine 70A may be scheduled by the storage management system 24 for execution on the processing hardware 26A (solid line from the virtual machine 70A to the processing hardware 26A). During execution, the processing hardware 26A accesses the virtual disk 1 in the virtual machine 70A with a storage command. The storage management system 24 traps the storage command, and causes the storage command to be processed to the file vdsk1.dsk on the physical storage device 30A (solid line from the processing hardware 26A to the physical storage device 30A). For example, if the storage command is a read, data is returned from the vdsk1.dsk file. If the storage command is a write, the vdsk1.dsk file is updated. Thus the data stored on the virtual storage device is accessed from the physical storage device 30A.
Subsequently, the virtual machine 70A may be suspended from the processing hardware 26A and may be scheduled for execution on different processing hardware (e.g. the processing hardware 26B—dotted line from the virtual machine 70A to the processing hardware 26B). The suspension of the virtual machine 70A may occur due to the storage management system 24 scheduling another virtual machine 70B-70P for execution on the processing hardware 26A, due to the detection of a failure and the failover of the virtual machine 70A, etc.
During execution, the processing hardware 26B accesses the virtual disk 1 in the virtual machine 70A with a storage command. Similar to the above discussion, the storage management system 24 traps the storage command, and causes the storage command to be processed to the file vdsk1.dsk on the physical storage device 30A (dotted line from the processing hardware 26B to the physical storage device 30A). Thus, the virtual machine 70A has a consistent view of its virtual storage independent of the processing hardware on which the virtual machine 70A executes.
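The consistent-view property described above follows from resolving virtual-disk accesses through a placement-independent mapping. The scheduler sketch below uses invented names; only the vm/disk/file labels echo the example in the text.

```python
# Illustrative sketch (assumed names): wherever a virtual machine is
# scheduled, its virtual-disk commands resolve to the same backing file,
# giving the consistent view of storage described above.

DISK_MAP = {("vm70A", "disk1"): "vdsk1.dsk"}   # VM-relative name -> file

class Scheduler:
    def __init__(self):
        self.placement = {}

    def schedule(self, vm, hardware):
        self.placement[vm] = hardware            # may change over time

    def storage_target(self, vm, disk):
        # Resolution deliberately ignores the current placement.
        return DISK_MAP[(vm, disk)]

sched = Scheduler()
sched.schedule("vm70A", "26A")
first = sched.storage_target("vm70A", "disk1")
sched.schedule("vm70A", "26B")                   # migrate to other hardware
second = sched.storage_target("vm70A", "disk1")
assert first == second == "vdsk1.dsk"
```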
The processing hardware 26A-26N may represent any processing hardware configuration including at least one processor per processing hardware. For example, each processing hardware 26A-26N may be a blade for a blade server; that is, the processing hardware 26A-26N may collectively comprise the blade server. Alternatively, each processing hardware 26A-26N may be an enclosed computer system, with the computer systems networked together.
Turning now to
As illustrated in
It is noted that the operating system 14, the drivers 16, 18, 44, and 46, the applications 12A-12B, the storage management system 24 and the components thereof have been referred to herein as software, programs, code, etc. Generally, software, programs, code, etc. are intended to be synonymous herein, and refer to a sequence of instructions which, when executed, perform the functions assigned to the software, program, code, etc. The instructions may be machine level instructions from an instruction set implemented in a processor, or may be higher level instructions (e.g. shell scripts, interpretive languages, etc.).
Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4912628||Mar 15, 1988||Mar 27, 1990||International Business Machines Corp.||Suspending and resuming processing of tasks running in a virtual machine data processing system|
|US4969092||Sep 30, 1988||Nov 6, 1990||Ibm Corp.||Method for scheduling execution of distributed application programs at preset times in an SNA LU 6.2 network environment|
|US5257386||Apr 3, 1991||Oct 26, 1993||Fujitsu Limited||Data transfer control system for virtual machine system|
|US5408617||Apr 13, 1992||Apr 18, 1995||Fujitsu Limited||Inter-system communication system for communicating between operating systems using virtual machine control program|
|US5546558 *||Jun 7, 1994||Aug 13, 1996||Hewlett-Packard Company||Memory system with hierarchic disk array and memory map store for persistent storage of virtual mapping information|
|US5621912||Dec 29, 1994||Apr 15, 1997||International Business Machines Corporation||Method and apparatus for enabling monitoring of guests and native operating systems|
|US5809285 *||Dec 21, 1995||Sep 15, 1998||Compaq Computer Corporation||Computer system having a virtual drive array controller|
|US5852724||Jun 18, 1996||Dec 22, 1998||Veritas Software Corp.||System and method for "N" primary servers to fail over to "1" secondary server|
|US5872931||Aug 13, 1996||Feb 16, 1999||Veritas Software, Corp.||Management agent automatically executes corrective scripts in accordance with occurrences of specified events regardless of conditions of management interface and management engine|
|US5944782||Oct 16, 1996||Aug 31, 1999||Veritas Software Corporation||Event management system for distributed computing environment|
|US5987565 *||Jun 25, 1997||Nov 16, 1999||Sun Microsystems, Inc.||Method and apparatus for virtual disk simulation|
|US6003065||Apr 24, 1997||Dec 14, 1999||Sun Microsystems, Inc.||Method and system for distributed processing of applications on host and peripheral devices|
|US6029166||Mar 31, 1998||Feb 22, 2000||Emc Corporation||System and method for generating an operating system-independent file map|
|US6075938||Jun 10, 1998||Jun 13, 2000||The Board Of Trustees Of The Leland Stanford Junior University||Virtual machine monitors for scalable multiprocessors|
|US6151618||Jun 18, 1997||Nov 21, 2000||Microsoft Corporation||Safe general purpose virtual machine computing system|
|US6230246||Jan 30, 1998||May 8, 2001||Compaq Computer Corporation||Non-intrusive crash consistent copying in distributed storage systems without client cooperation|
|US6298390||Mar 26, 1996||Oct 2, 2001||Sun Microsystems, Inc.||Method and apparatus for extending traditional operating systems file systems|
|US6298428||Mar 30, 1998||Oct 2, 2001||International Business Machines Corporation||Method and apparatus for shared persistent virtual storage on existing operating systems|
|US6324627||Jun 2, 1999||Nov 27, 2001||Virtual Data Security, Llc||Virtual data storage (VDS) system|
|US6341329||Feb 9, 2000||Jan 22, 2002||Emc Corporation||Virtual tape system|
|US6363462||Jan 15, 1999||Mar 26, 2002||Lsi Logic Corporation||Storage controller providing automatic retention and deletion of synchronous back-up data|
|US6370646||Feb 16, 2000||Apr 9, 2002||Miramar Systems||Method and apparatus for multiplatform migration|
|US6397242||Oct 26, 1998||May 28, 2002||Vmware, Inc.||Virtualization system including a virtual machine monitor for a computer with a segmented architecture|
|US6421739||Jan 30, 1999||Jul 16, 2002||Nortel Networks Limited||Fault-tolerant java virtual machine|
|US6438642||May 18, 1999||Aug 20, 2002||Kom Networks Inc.||File-based virtual storage file system, method and computer program product for automated file management on multiple file system storage devices|
|US6493811||Jan 20, 1999||Dec 10, 2002||Computer Associates Think, Inc.||Intelligent controller accessed through addressable virtual space|
|US6496847||Sep 10, 1998||Dec 17, 2002||Vmware, Inc.||System and method for virtualizing computer systems|
|US6694346||Apr 30, 1999||Feb 17, 2004||International Business Machines Corporation||Long running, reusable, extendible, virtual machine|
|US6704925||Dec 1, 1998||Mar 9, 2004||Vmware, Inc.||Dynamic binary translator with a system and method for updating and maintaining coherency of a translation cache|
|US6711672||Sep 22, 2000||Mar 23, 2004||Vmware, Inc.||Method and system for implementing subroutine calls and returns in binary translation sub-systems of computers|
|US6718538||Aug 31, 2000||Apr 6, 2004||Sun Microsystems, Inc.||Method and apparatus for hybrid checkpointing|
|US6725289||Apr 17, 2002||Apr 20, 2004||Vmware, Inc.||Transparent address remapping for high-speed I/O|
|US6735601||Dec 29, 2000||May 11, 2004||Vmware, Inc.||System and method for remote file access by computer|
|US6757778||May 7, 2002||Jun 29, 2004||Veritas Operating Corporation||Storage management system|
|US6763440||Jun 2, 2000||Jul 13, 2004||Sun Microsystems, Inc.||Garbage collection using nursery regions for new objects in a virtual heap|
|US6763445 *||Sep 19, 1997||Jul 13, 2004||Micron Technology, Inc.||Multi-drive virtual mass storage device and method of operating same|
|US6772231 *||Jun 1, 2001||Aug 3, 2004||Hewlett-Packard Development Company, L.P.||Structure and process for distributing SCSI LUN semantics across parallel distributed components|
|US6785886||Aug 24, 2000||Aug 31, 2004||Vmware, Inc.||Deferred shadowing of segment descriptors in a virtual machine monitor for a segmented computer architecture|
|US6789103||May 5, 2000||Sep 7, 2004||Interland, Inc.||Synchronized server parameter database|
|US6789156||Jul 25, 2001||Sep 7, 2004||Vmware, Inc.||Content-based, transparent sharing of memory units|
|US6961806||Dec 10, 2001||Nov 1, 2005||Vmware, Inc.||System and method for detecting access to shared structures and for maintaining coherence of derived structures in virtualized multiprocessor systems|
|US6961941||Jun 8, 2001||Nov 1, 2005||Vmware, Inc.||Computer configuration for resource management in systems including a virtual machine|
|US6985956 *||Nov 2, 2001||Jan 10, 2006||Sun Microsystems, Inc.||Switching system|
|US7069413||Jan 29, 2003||Jun 27, 2006||Vmware, Inc.||Method and system for performing virtual to physical address translations in a virtual machine monitor|
|US7082598||Jul 17, 2002||Jul 25, 2006||Vmware, Inc.||Dynamic driver substitution|
|US7089377||Sep 6, 2002||Aug 8, 2006||Vmware, Inc.||Virtualization system for computers with a region-based memory architecture|
|US7111086||Feb 22, 2002||Sep 19, 2006||Vmware, Inc.||High-speed packet transfer in computer systems with multiple interfaces|
|US7111145||Mar 25, 2003||Sep 19, 2006||Vmware, Inc.||TLB miss fault handler and method for accessing multiple page tables|
|US7111481||Feb 17, 2004||Sep 26, 2006||The Bradbury Company||Methods and apparatus for controlling flare in roll-forming processes|
|US7203944||Jul 9, 2003||Apr 10, 2007||Veritas Operating Corporation||Migrating virtual machines among computer systems to balance load caused by virtual machines|
|US7213246||Mar 28, 2002||May 1, 2007||Veritas Operating Corporation||Failing over a virtual machine|
|US20010016879||Apr 18, 2001||Aug 23, 2001||Hitachi, Ltd.||Multi OS configuration method and computer system|
|US20020049869||Mar 28, 2001||Apr 25, 2002||Fujitsu Limited||Virtual computer system and method for swapping input/output devices between virtual machines and computer readable storage medium|
|US20020099753||Jan 20, 2001||Jul 25, 2002||Hardin David S.||System and method for concurrently supporting multiple independent virtual machines|
|US20020129078||Mar 7, 2001||Sep 12, 2002||Plaxton Iris M.||Method and device for creating and using pre-internalized program files|
|US20030028861||Jun 28, 2001||Feb 6, 2003||David Wallman||Method and apparatus to facilitate debugging a platform-independent virtual machine|
|US20030033431||Aug 2, 2002||Feb 13, 2003||Nec Corporation||Data transfer between virtual addresses|
|US20030110351 *||Dec 7, 2001||Jun 12, 2003||Dell Products L.P.||System and method supporting virtual local data storage|
|US20040010787||Jul 11, 2002||Jan 15, 2004||Traut Eric P.||Method for forking or migrating a virtual machine|
|1||"About LindowsOS," Lindows.com, http://www.lindows.com/lindows<SUB>-</SUB>products<SUB>-</SUB>lindowsos.php, 2002, 2 pages.|
|2||"BladeFrame(TM) System Overview," Egenera, Inc., 2001, 2 pages.|
|3||"Introduction to Simics Full-System Simulator without Equal," Virtutech, Jul. 8, 2002, 29 pages.|
|4||"Manage Multiple Worlds. From Any Desktop," VMware, Inc., 2000, 2 pages.|
|5||"NeTraverse Win4Lin 4.0-Workstation Edition," Netraverse, Inc, 2002, http://www.netraverse.com/products/win4lin40/, printed from web on Dec. 13, 2002, 1 page.|
|6||"Products," Netraverse, Inc, 2002, http://www.netraverse.com/products/index.php, printed from web on Dec. 13, 2002, 1 page.|
|7||"Savannah: This is a Savannah Admin Documentation," Savannah, Free Software Foundation, Inc. (C) 2000-2002, 31 pages.|
|8||"Simics: A Full System Simulation Platform," Reprinted with permission from Computer, Feb. 2002, (C) The Institute of Electrical and Electronics Engineers, Inc., pp. 50-58.|
|9||"Solution Overview," TrueSAN Networks, Inc., 2002, 7 pages.|
|10||"The Technology of Virtual Machines," A Connectix White Paper, Connectix Corp., 2001, 13 pages.|
|11||"The Technology of Virtual PC," A Connectix White Paper, Connectix Corp., 2000, 12 pages.|
|12||"Virtual PC for Windows," Connectix, Version 5.0, 2002, 2 pages.|
|13||"Virtuozzo Basics," Virtuozzo, http://www.sw-soft.com/en/products/virtuozzo/basics/, (C) 1994-2002 SWsoft, printed from the web on Dec. 13, 2002, 2 pages.|
|14||"VMware ESX Server, The Server Consolidation Solution for High-Performance Environments," VMware, Inc., 2001, 2 pages.|
|15||"VMware GSX Server, The Server Consolidation Solution," VMware, Inc., 2001, 2 pages.|
|16||"What is Virtual Environment (VE)?," SWsoft, http://www.sw-soft/en/products/virtuozzo/we/, (C) 1994-2002 SWsoft, printed from web on Dec. 13, 2002, 2 pages.|
|17||"White Paper, GSX Server," VMware, Inc., Dec. 2000, pp. 1-9.|
|18||"Win4Lin Desktop 4.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/win4lin40/features.php, printed from web on Dec. 13, 2002, 2 pages.|
|19||"Win4Lin Desktop 4.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/win4lin40/requirements.php, printed from web on Dec. 13, 2002, 2 pages.|
|20||"Win4Lin Desktop 4.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/win4lin40/benefits.php, printed from web on Dec. 13, 2002, 1 page.|
|21||"Win4Lin Terminal Server 2.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/wts, printed from web on Dec. 13, 2002, 1 page.|
|22||"Win4Lin Terminal Server 2.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/wts/benefits.php, printed from web on Dec. 13, 2002, 1 page.|
|23||"Win4Lin Terminal Server 2.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/wts/features.php, printed from web on Dec. 13, 2002, 2 pages.|
|24||"Win4Lin Terminal Server 2.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/wts/requirements.php, printed from web on Dec. 13, 2002, 2 pages.|
|25||"Win4Lin Terminal Server 2.0," Netraverse, Inc, 2002, http://www.netraverse.com/products/wts/technology.php, printed from web on Dec. 13, 2002, 1 page.|
|26||"Wine Developer's Guide," www.winehq.org, pp. 1-104.|
|27||"Winelib User's Guide," Winelib, www.winehq.org, 26 pages.|
|28||Barrie Sosinsky, Ph.D., "The Business Value of Virtual Volume Management, In Microsoft Windows NT and Windows 2000 Networks," VERITAS, A white paper for administrators and planners, Oct. 2001, pp. 1-12.|
|29||Dave Gardner, et al., "WINE FAQ," (C) David Gardner 1995-1998, printed from www.winehq.org, 13 pages.|
|30||Dejan S. Milojicic, et al., "Process Migration," Aug. 10, 1999, 49 pages.|
|31||Edouard Bugnion, et al., "Disco: Running Commodity Operating Systems on Scalable Multiprocessors," Computer Systems Laboratory, Stanford, CA, 33 pages.|
|32||Flinn, et al., "Data Staging on Untrusted Surrogates," IRP-TR-03-03, Mar. 2003, Intel Research, To Appear in the Proceedings of the 2<SUP>nd </SUP>USENIX Conference on File and Storage Technologies, San Francisco, 16 pages.|
|33||Helfrich, et al., "Internet Suspend/Resume," ISR Project Home Page, 2003, 4 pages.|
|34||InfoWorld, Robert McMillan, "VMware Launches VMware Control Center," 2003, 2 pages.|
|35||John Abbott, Enterprise Software, "VMware Heads Toward Utility Computing With New Dynamic Management Tools," Jul. 1, 2003, 4 pages.|
|36||John R. Sheets, et al. "Wine User Guide," www.winehq.org, pp. 1-53.|
|37||Kasidit Chanchio et al., "A Protocol Design of Communication State Transfer for Distributed Computing," Publication date unknown, 4 pages.|
|38||Kasidit Chanchio, et al., "Data Collection and Restoration for Heterogeneous Process Migration," 1997, 6 pages.|
|39||Kinshuk Govil, et al., "Cellular Disco: Resource Management Using Virtual Clusters on Shared-Memory Multiprocessors," 17<SUP>th </SUP>ACM Symposium on Operating Systems Principles (SOSP'99), Published as Operating Systems Review 34(5):154-169, Dec. 1999, pp. 154-169.|
|40||Kozuch, et al., "Efficient State Transfer for Internet Suspend/Resume," IRP-02-03, May 2002, Intel Research, 13 pages.|
|41||Kozuch, et al., "Internet Suspend/Resume," IRP-TR-02-01, Apr. 2002, Accepted to the Fourth IEEE Workshop on Mobile Computing Systems and Applications, Callicoon, NY, Jun. 2002, Intel Research, 9 pages.|
|42||Melinda Varian, "VM and the VM Community: Past, Present, and Future," Operating Systems, Computing and Information Technology, Princeton Univ., Aug. 1997, pp. 1-67.|
|43||OpenMosix, "OpenMosix Documentation Wiki-don't," May 7, 2003, 2 pages.|
|44||OpenMosix, "The openMosix HOWTO: Live free() or die ()," May 7, 2003, 3 pages.|
|45||Position Paper, "Taking Control of The Data Center," Egenera, Inc., 2001, 4 pages.|
|46||Position Paper, "The Linux Operating System: How Open Source Software Makes Better Hardware," Egenera, Inc., 2001, 2 pages.|
|47||Sapuntzakis, et al., "Optimizing the Migration of Virtual Computers," Proceedings of the Fifth Symposium on Operating Systems Design and Implementation, Dec. 2002, 14 pages.|
|48||SourceForge(TM), "Project: openMosix: Document Manager: Display Document," 14 pages.|
|49||Tolia, et al., "Opportunistic Use of Content Addressable Storage for Distributed File Systems," IRP-TR-03-02, Jun. 2003, Intel Research, To Appear in the Proceedings of the 2003 USENIX Annual Technical Conference, San Antonio, TX, 16 pages.|
|50||Tolia, et al., "Using Content Addressing to Transfer Virtual Machine State," IRP-TR-02-11, Summer 2002, Intel Research, 11 pages.|
|51||VERITAS, "Comparison: Microsoft Logical Disk Manager and VERITAS Volume Manager for Windows," May 2001, 4 pages.|
|52||VERITAS, "How VERITAS Volume Manager Complements Hardware RAID in Microsoft Server Environments," May 2001, pp. 1-7.|
|53||VERITAS, "VERITAS Volume Manager for Windows NT," Version 27, 2001, 4 pages.|
|54||VERITAS, "VERITAS Volume Manager for Windows, Best Practices," May 2001, pp. 1-7.|
|55||VERITAS, "Executive Overview," Technical Overview, pp. 1-9.|
|56||VMware, Inc., "VMware Control Center," 2003, 3 pages.|
|57||VMware, Inc., "VMware Control Center: Enterprise-class Software to Manage and Control Your Virtual Machines," 2003, 2 pages.|
|58||White Paper, "Emerging Server Architectures," Egenera, Inc., 2001, 12 pages.|
|59||White Paper, "Guidelines for Effective E-Business Infrastructure Management," Egenera, Inc., 2001, 11 pages.|
|60||White Paper, "Improving Data Center Performance," Egenera, Inc., 2001, 19 pages.|
|61||White Paper, "The Egenera(TM) Processing Area Network (PAN) Architecture," Egenera, Inc., 2002, 20 pages.|
|62||White Paper, "The Pros and Cons of Server Clustering in the ASP Environment," Egenera, Inc., 2001, 10 pages.|
|63||Win4Lin, Netraverse, Inc, 2002, http://www.netraverse.com/support/docs/Win4Lin-whitepapers.php, printed from web on Dec. 13, 2002, 5 pages.|
|64||Xian-He Sun, et al., "A Coordinated Approach for Process Migration in Heterogeneous Environments," 1999, 12 pages.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7370164 *||Mar 21, 2006||May 6, 2008||Symantec Operating Corporation||Backup of virtual machines from the base machine|
|US7529972 *||Jan 3, 2006||May 5, 2009||Emc Corporation||Methods and apparatus for reconfiguring a storage system|
|US7603670||Mar 28, 2002||Oct 13, 2009||Symantec Operating Corporation||Virtual machine transfer between computer systems|
|US7802302||Mar 10, 2006||Sep 21, 2010||Symantec Corporation||Single scan for a base machine and all associated virtual machines|
|US7809976 *||Apr 30, 2007||Oct 5, 2010||Netapp, Inc.||System and method for failover of guest operating systems in a virtual machine environment|
|US7810092||Mar 2, 2004||Oct 5, 2010||Symantec Operating Corporation||Central administration and maintenance of workstations using virtual machines, network filesystems, and replication|
|US7899988||Feb 28, 2008||Mar 1, 2011||Harris Corporation||Video media data storage system and related methods|
|US8001342 *||Aug 16, 2011||International Business Machines Corporation||Method for storing and restoring persistent memory content and virtual machine state information|
|US8006079||Feb 22, 2008||Aug 23, 2011||Netapp, Inc.||System and method for fast restart of a guest operating system in a virtual machine environment|
|US8046446 *||Oct 18, 2004||Oct 25, 2011||Symantec Operating Corporation||System and method for providing availability using volume server sets in a storage environment employing distributed block virtualization|
|US8065422 *||Nov 26, 2008||Nov 22, 2011||Netapp, Inc.||Method and/or apparatus for certifying an in-band management application of an external storage array|
|US8255735||May 20, 2010||Aug 28, 2012||Netapp, Inc.||System and method for failover of guest operating systems in a virtual machine environment|
|US8281050 *||Aug 20, 2010||Oct 2, 2012||Hitachi, Ltd.||Method and apparatus of storage array with frame forwarding capability|
|US8291159 *||Mar 12, 2009||Oct 16, 2012||Vmware, Inc.||Monitoring and updating mapping of physical storage allocation of virtual machine without changing identifier of the storage volume assigned to virtual machine|
|US8468521||Oct 26, 2007||Jun 18, 2013||Netapp, Inc.||System and method for utilizing a virtualized compute cluster as an execution engine for a virtual machine of a storage system cluster|
|US8769186 *||Aug 25, 2011||Jul 1, 2014||Amazon Technologies, Inc.||Providing executing programs with reliable access to non-local block data storage|
|US8914610||Nov 6, 2013||Dec 16, 2014||Vmware, Inc.||Configuring object storage system for input/output operations|
|US9047313 *||Apr 21, 2011||Jun 2, 2015||Red Hat Israel, Ltd.||Storing virtual machines on a file system in a distributed environment|
|US9134922||Sep 29, 2012||Sep 15, 2015||Vmware, Inc.||System and method for allocating datastores for virtual machines|
|US9158676||Mar 14, 2013||Oct 13, 2015||Samsung Electronics Co., Ltd.||Nonvolatile memory controller and a nonvolatile memory system|
|US20060282247 *||May 25, 2005||Dec 14, 2006||Brennan James T||Combined hardware and network simulator for testing embedded wireless communication device software and methods|
|US20070174662 *||Jan 3, 2006||Jul 26, 2007||Emc Corporation||Methods and apparatus for reconfiguring a storage system|
|US20070239804 *||Mar 29, 2006||Oct 11, 2007||International Business Machines Corporation||System, method and computer program product for storing multiple types of information|
|US20080270825 *||Apr 30, 2007||Oct 30, 2008||Garth Richard Goodson||System and method for failover of guest operating systems in a virtual machine environment|
|US20090113420 *||Oct 26, 2007||Apr 30, 2009||Brian Pawlowski||System and method for utilizing a virtualized compute cluster as an execution engine for a virtual machine of a storage system cluster|
|US20090217021 *||Feb 22, 2008||Aug 27, 2009||Garth Richard Goodson||System and method for fast restart of a guest operating system in a virtual machine environment|
|US20090222622 *||Feb 28, 2008||Sep 3, 2009||Harris Corporation, Corporation Of The State Of Delaware||Video media data storage system and related methods|
|US20100131581 *||Nov 26, 2008||May 27, 2010||Jibbe Mahmoud K||Method and/or apparatus for certifying an in-band management application of an external storage array|
|US20100235832 *||Mar 12, 2009||Sep 16, 2010||Vmware, Inc.||Storage Virtualization With Virtual Datastores|
|US20120042142 *||Aug 25, 2011||Feb 16, 2012||Amazon Technologies, Inc.||Providing executing programs with reliable access to non-local block data storage|
|US20120272238 *||Oct 25, 2012||Ayal Baron||Mechanism for storing virtual machines on a file system in a distributed environment|
|International Classification||G06F3/06, G06F12/08, G06F13/10|
|Cooperative Classification||G06F3/0607, G06F3/067, G06F3/0664|
|European Classification||G06F3/06A2A4, G06F3/06A4V2, G06F3/06A6D|
|Sep 26, 2007||AS||Assignment|
Owner name: SYMANTEC CORPORATION, CALIFORNIA
Free format text: CHANGE OF NAME;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:019872/0979
Effective date: 20061030
|Mar 4, 2011||FPAY||Fee payment|
Year of fee payment: 4
|Feb 28, 2012||AS||Assignment|
Owner name: SYMANTEC OPERATING CORPORATION, CALIFORNIA
Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE PREVIOUSLY RECORDED ON REEL 019872 FRAME 979. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNEE IS SYMANTEC OPERATING CORPORATION;ASSIGNOR:VERITAS OPERATING CORPORATION;REEL/FRAME:027773/0635
Effective date: 20061030
Owner name: SYMANTEC CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYMANTEC OPERATING CORPORATION;REEL/FRAME:027778/0229
Effective date: 20120228
|May 25, 2012||AS||Assignment|
Owner name: STEC IP, LLC, NORTH CAROLINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYMANTEC;REEL/FRAME:028271/0971
Effective date: 20120228
Owner name: CLOUDING IP, LLC, NORTH CAROLINA
Free format text: CHANGE OF NAME;ASSIGNOR:STEC IP, LLC;REEL/FRAME:028275/0896
Effective date: 20120524
|Jun 25, 2013||AS||Assignment|
Owner name: SYMANTEC CORPORATION, CALIFORNIA
Free format text: CORRECT AN ERROR MADE IN A PREVIOUSLY RECORDED DOCUMENT THAT ERRONEOUSLY AFFECTS THE IDENTIFIED PATENT, PREVIOUSLY RECORDED ON REEL:028271 AND FRAME 0971;ASSIGNOR:SYMANTEC CORPORATION;REEL/FRAME:030722/0400
Effective date: 20120228
|Feb 2, 2015||AS||Assignment|
Owner name: DBD CREDIT FUNDING, LLC, NEW YORK
Free format text: SECURITY INTEREST;ASSIGNORS:MARATHON PATENT GROUP, INC.;CLOUDING CORP.;REEL/FRAME:034873/0001
Effective date: 20150129
|Feb 26, 2015||FPAY||Fee payment|
Year of fee payment: 8