
Publication number: US 20050182910 A1
Publication type: Application
Application number: US 11/051,862
Publication date: Aug 18, 2005
Filing date: Feb 4, 2005
Priority date: Feb 4, 2004
Inventors: Roger Stager, Donald Trimmer, Pawan Saxena, Craig Johnston, Yafen Chang, Rico Blaser
Original Assignee: Alacritus, Inc.
Method and system for adding redundancy to a continuous data protection system
US 20050182910 A1
Abstract
A method for adding redundancy to a continuous data protection system begins by taking a snapshot of a primary volume at a specific point in time, in accordance with a retention policy. The snapshot is stored on a secondary volume, and the snapshot is cloned and stored on a third volume. The cloned snapshot is eventually expired according to a cloning policy.
Claims (33)
1. A method for adding redundancy to a continuous data protection system, comprising the steps of:
taking a snapshot of a primary volume at a specific point in time;
storing the snapshot on a secondary volume;
cloning the snapshot and storing the cloned snapshot on a third volume;
expiring the cloned snapshot.
2. The method according to claim 1, wherein the taking step is performed according to a retention policy.
3. The method according to claim 1, wherein the cloning step is performed according to a cloning policy.
4. The method according to claim 3, wherein the cloning policy is part of a retention policy.
5. The method according to claim 3, wherein the cloning policy specifies at least one of: a number of clones to be made, a frequency at which a clone is made, and a time period for retaining a clone.
6. The method according to claim 5, wherein the expiring step includes deleting the cloned snapshot at the end of the time period specified in the cloning policy.
7. The method according to claim 1, wherein the expiring step includes deleting the cloned snapshot at the same time as the snapshot.
8. The method according to claim 1, wherein the expiring step includes deleting the cloned snapshot at a different time than the snapshot.
9. The method according to claim 1, wherein the primary volume and the secondary volume are located within a first fault zone and the third volume is located in a second fault zone separate from the first fault zone.
10. A system for adding redundancy to a continuous data protection system, comprising:
snapshot means for taking a snapshot of a primary volume at a specific point in time;
storing means for storing said snapshot on a secondary volume;
cloning means for cloning said snapshot and storing said cloned snapshot on a third volume;
expiring means for expiring said cloned snapshot.
11. The system according to claim 10, wherein said snapshot means performs according to a retention policy.
12. The system according to claim 10, wherein said cloning means performs according to a cloning policy.
13. The system according to claim 12, wherein said cloning policy is part of a retention policy.
14. The system according to claim 12, wherein said cloning policy specifies at least one of: a number of clones to be made, a frequency at which a clone is made, and a time period for retaining a clone.
15. The system according to claim 14, wherein said expiring means includes deleting said cloned snapshot at the end of said time period specified in said cloning policy.
16. The system according to claim 10, wherein said expiring means includes deleting said cloned snapshot at the same time as said snapshot.
17. The system according to claim 10, wherein said expiring means includes deleting said cloned snapshot at a different time than said snapshot.
18. The system according to claim 10, wherein said primary volume and said secondary volume are located within a first fault zone and said third volume is located in a second fault zone separate from said first fault zone.
19. A method for managing a recovery point in a continuous data protection system, comprising the steps of:
setting a retention policy;
taking a snapshot of a primary volume according to the retention policy, the snapshot providing a recovery point on the primary volume;
storing the snapshot on a secondary volume;
expiring the snapshot according to the retention policy;
setting a cloning policy;
creating a clone of the snapshot according to the cloning policy;
storing the cloned snapshot on a third volume; and
expiring the cloned snapshot according to the cloning policy.
20. The method according to claim 19, wherein the cloning policy is part of the retention policy.
21. The method according to claim 19, wherein the cloned snapshot is expired at the same time as the snapshot.
22. The method according to claim 19, wherein the cloned snapshot is expired at a different time than the snapshot.
23. A system for managing a recovery point in a continuous data protection system, comprising:
first policy means;
snapshot means for taking a snapshot of a primary volume, said first policy means controlling said snapshot means;
first storing means for storing said snapshot on a secondary volume;
first expiring means for expiring said snapshot, said first policy means controlling said first expiring means;
second policy means;
cloning means for creating a clone of said snapshot, said second policy means controlling said cloning means;
second storing means for storing said cloned snapshot on a third volume; and
second expiring means for expiring said cloned snapshot, said second policy means controlling said second expiring means.
24. The system according to claim 23, wherein:
said first policy means includes a retention policy; and
said second policy means includes a cloning policy.
25. The system according to claim 23, wherein said second policy means is part of said first policy means.
26. A system for continuous data protection, comprising:
a host computer;
a first volume connected to said host computer, said first volume containing data to be protected;
a first data protection system connected to said host computer;
a second volume connected to said first data protection system, said second volume being a protected version of said first volume;
a second data protection system communicating with said first data protection system; and
a third volume connected to said second data protection system, said third volume being a copy of said second volume.
27. The system according to claim 26, wherein said first data protection system and said second data protection system communicate asynchronously.
28. The system according to claim 26, wherein said second data protection system and said third volume are located in a fault zone with said first data protection system and said second volume.
29. The system according to claim 26, wherein said second data protection system and said third volume are located in a fault zone separate from said first data protection system and said second volume.
30. The system according to claim 26, further comprising:
a third data protection system communicating with said second data protection system; and
a fourth volume connected to said third data protection system, said fourth volume being a copy of said third volume.
31. The system according to claim 30, wherein
said second data protection system communicates with said first data protection system synchronously, whereby said third volume is a current copy of said second volume; and
said third data protection system communicates with said second data protection system asynchronously.
32. The system according to claim 30, wherein
said second data protection system and said third volume are located in a first fault zone with said first data protection system and said second volume; and
said third data protection system and said fourth volume are located in a second fault zone, said second fault zone being separate from said first fault zone.
33. The system according to claim 30, wherein
said first data protection system and said second volume are located in a first fault zone;
said second data protection system and said third volume are located in a second fault zone, said second fault zone being separate from said first fault zone; and
said third data protection system and said fourth volume are located in a third fault zone, said third fault zone being separate from said first fault zone and said second fault zone.
Description
    CROSS REFERENCE TO RELATED APPLICATIONS
  • [0001]
    This application claims priority from U.S. Provisional Application No. 60/541,626, filed on Feb. 4, 2004; and U.S. Provisional Application No. 60/542,011, filed on Feb. 5, 2004, which are incorporated by reference as if fully set forth herein.
  • FIELD OF INVENTION
  • [0002]
    The present invention relates generally to continuous data protection, and more particularly, to a method and system for adding redundancy to a continuous data protection system.
  • BACKGROUND
  • [0003]
    Hardware redundancy schemes have traditionally been used in enterprise environments to protect against component failures. Redundant arrays of independent disks (RAID) have been implemented successfully to assure continued access to data even in the event of one or more media failures (depending on the RAID Level). Unfortunately, hardware redundancy schemes are ineffective in dealing with logical data loss or corruption. For example, an accidental file deletion or virus infection is automatically replicated to all of the redundant hardware components and can neither be prevented nor recovered from by such technologies. To overcome this problem, backup technologies have traditionally been deployed to retain multiple versions of a production system over time. This allowed administrators to restore previous versions of data and to recover from data corruption.
  • [0004]
    Backup copies are generally policy-based, are tied to a periodic schedule, and reflect the state of a primary volume (i.e., a protected volume) at the particular point in time that is captured. Because backups are not made on a continuous basis, there will be some data loss during the restoration, resulting from a gap between the time when the backup was performed and the restore point that is required. This gap can be significant in typical environments where backups are only performed once per day. In a mission-critical setting, such a data loss can be catastrophic. Beyond the potential data loss, restoring a primary volume from a backup system can be complicated and often takes many hours to complete. This additional downtime further exacerbates the problems associated with a logical data loss.
  • [0005]
    The traditional process of backing up data to tape media is time driven and time dependent. That is, a backup process typically is run at regular intervals and covers a certain period of time. For example, a full system backup may be run once a week on a weekend, and incremental backups may be run every weekday during an overnight backup window that starts after the close of business and ends before the next business day. These individual backups are then saved for a predetermined period of time, according to a retention policy. In order to conserve tape media and storage space, older backups are gradually faded out and replaced by newer backups. Further to the above example, after a full weekly backup is completed, the daily incremental backups for the preceding week may be discarded, and each weekly backup may be maintained for a few months, to be replaced by monthly backups. The daily backups are typically not all discarded on the same day. Instead, the Monday backup set is overwritten on Monday, the Tuesday backup set is overwritten on Tuesday, and so on. This ensures that a backup set is available that is within eight business hours of any corruption that may have occurred in the past week.
  • [0006]
    Although the backup creation process can be automated despite frequent hardware failures and the need for ongoing maintenance and tuning, restoring data from a backup remains a manual and time-critical process. First, the appropriate backup tapes need to be located, including the latest full backup and any incremental backups made since the last full backup. In the event that only a partial restoration is required, locating the appropriate backup tape can take just as long. Once the backup tapes are located, they must be restored to the primary volume. Even under the best of circumstances, this type of backup and restore process cannot guarantee high availability of data.
  • [0007]
    Another type of data protection involves making point in time (PIT) copies of data. A first type of PIT copy is a hardware-based PIT copy, which is a mirror of the primary volume onto a secondary volume. The main drawbacks to a hardware-based PIT copy are that the data ages quickly and that each copy takes up as much disk space as the primary volume. A software-based PIT, typically called a “snapshot,” is a “picture” of a volume at the block level or a file system at the operating system level. Various types of software-based PITs exist, and most are tied to a particular platform, operating system, or file system. These snapshots also have drawbacks, including occupying additional space on the primary volume, rapid aging, and possible dependencies on data stored on the primary volume wherein data corruption on the primary volume leads to corruption of the snapshot. In addition, snapshot systems generally do not offer the flexibility in scheduling and expiring snapshots that backup software provides.
  • [0008]
    While both hardware-based and software-based PIT techniques reduce the dependency on the backup window, they still require the traditional tape-based backup and restore process to move data from disk to tape media and to manage the different versions of data. This dependency on legacy backup applications and processes is a significant drawback of these technologies. Furthermore, like traditional tape-based backup and restore processes, PIT copies are made at discrete moments in time, thereby limiting any restores that are performed to the points in time at which PIT copies have been made.
  • [0009]
    A need therefore exists for a system that combines the advantages of tape-based systems with the advantages of snapshot systems and eliminates the limitations described above.
  • SUMMARY
  • [0010]
    A method for adding redundancy to a continuous data protection system begins by taking a snapshot of a primary volume at a specific point in time, in accordance with a retention policy. The snapshot is stored on a secondary volume, and the snapshot is cloned and stored on a third volume. The cloned snapshot is eventually expired according to a cloning policy.
  • [0011]
    A system for adding redundancy to a continuous data protection system includes snapshot means, storing means, cloning means, and expiring means. The snapshot means takes a snapshot of a primary volume at a specific point in time. The storing means stores the snapshot on a secondary volume. The cloning means clones the snapshot and stores the clone on a third volume. The expiring means expires the cloned snapshot according to a cloning policy.
  • [0012]
    A method for managing a recovery point in a continuous data protection system begins by setting a retention policy and a cloning policy. A snapshot of a primary volume is taken according to the retention policy, the snapshot providing a recovery point on the primary volume. The snapshot is stored on a secondary volume and is expired according to the retention policy. A clone of the snapshot is created according to the cloning policy and is stored on a third volume. The cloned snapshot is expired according to the cloning policy.
  • [0013]
    A system for managing a recovery point in a continuous data protection system includes snapshot means for taking a snapshot of a primary volume, the snapshot means being controlled by a first policy means. A first storing means stores the snapshot on a secondary volume. A first expiring means expires the snapshot, the first expiring means being controlled by the first policy means. A cloning means creates a clone of the snapshot, the cloning means being controlled by a second policy means. A second storing means stores the cloned snapshot on a third volume. A second expiring means expires the cloned snapshot, the second expiring means being controlled by the second policy means.
  • [0014]
    A system for continuous data protection includes a host computer and a first volume connected to the host computer, the first volume containing data to be protected. A first data protection system is connected to the host computer. A second volume is connected to the first data protection system, the second volume being a protected version of the first volume. A second data protection system communicates with the first data protection system and a third volume is connected to the second data protection system. The third volume is a copy of the second volume.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0015]
    A more detailed understanding of the invention may be had from the following description of a preferred embodiment, given by way of example, and to be understood in conjunction with the accompanying drawings, wherein:
  • [0016]
    FIGS. 1A-1C are block diagrams showing a continuous data protection environment in accordance with the present invention;
  • [0017]
    FIG. 2 is an example of a delta map in accordance with the present invention;
  • [0018]
    FIG. 3 is a diagram illustrating a retention policy for the fading out of snapshots in accordance with the present invention;
  • [0019]
    FIG. 4 is a flowchart showing the operation of a retention policy in accordance with the present invention;
  • [0020]
    FIG. 5 is a flowchart showing the operation of a cloning policy in accordance with the present invention;
  • [0021]
    FIG. 6A is a block diagram of a continuous data protection system including local cloning;
  • [0022]
    FIG. 6B is a block diagram of a continuous data protection system including remote cloning; and
  • [0023]
    FIG. 6C is a block diagram of a continuous data protection system including remote cloning with a bunker appliance.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • [0024]
    In the present invention, data is backed up continuously, allowing system administrators to pause, rewind, and replay live enterprise data streams. This moves the traditional backup methodologies into a continuous background process in which policies automatically manage the lifecycle of many generations of restore images.
  • [0025]
    System Construction
  • [0026]
    FIG. 1A shows a preferred embodiment of a protected computer system 100 constructed in accordance with the present invention. A host computer 102 is connected directly to a primary data volume 104 (the primary data volume may also be referred to as the protected volume) and to a data protection system 106. The data protection system 106 manages a secondary data volume 108. The construction of the system 100 minimizes the lag time by writing directly to the primary data volume 104 and permits the data protection system 106 to focus exclusively on managing the secondary data volume 108. The management of the volumes is preferably performed using a volume manager.
  • [0027]
    A volume manager is a software module that runs on a server or intelligent storage switch to manage storage resources. Typical volume managers have the ability to aggregate blocks from multiple different physical disks into one or more virtual volumes. Applications are not aware that they are actually writing to segments of many different disks because they are presented with one large, contiguous volume. In addition to block aggregation, volume managers usually also offer software RAID functionality. For example, they are able to split the segments of the different volumes into two groups, where one group is a mirror of the other group. In a preferred embodiment, this is the feature that the data protection system takes advantage of when the present invention is implemented as shown in FIG. 1A. In many environments, the volume manager or host-based driver already mirrors the writes to two distinct primary volumes for redundancy in case of a hardware failure. The present invention is configured as a tertiary mirror target in this scenario, such that the volume manager or host-based driver also sends copies of all writes to the data protection system.
  • [0028]
    It is noted that the primary data volume 104 and the secondary data volume 108 can be any type of data storage, including, but not limited to, a single disk, a disk array (such as a RAID), or a storage area network (SAN). The main difference between the primary data volume 104 and the secondary data volume 108 lies in the structure of the data stored at each location, as will be explained in detail below. It is noted that there may also be differences in terms of the technologies that are used. The primary volume 104 is typically an expensive, fast, and highly available storage subsystem, whereas the secondary volume 108 is typically cost-effective, high capacity, and comparatively slow (for example, ATA/SATA disks). Normally, the slower secondary volume cannot be used as a synchronous mirror to the high-performance primary volume, because the slower response time will have an adverse impact on the overall system performance.
  • [0029]
    The data protection system 106, however, is optimized to keep up with high-performance primary volumes. These optimizations are described in more detail below, but at a high level, random writes to the primary volume 104 are processed sequentially on the secondary volume 108. Sequential writes improve both the cache behavior and the actual volume performance of the secondary volume 108. In addition, it is possible to aggregate multiple sequential writes on the secondary volume 108, whereas this is not possible with the random writes to the primary volume 104. The present invention does not require writes to the data protection system 106 to be synchronous. However, even in the case of an asynchronous mirror, minimizing latencies is important.
  • [0030]
    FIG. 1B shows an alternate embodiment of a protected computer system 120 constructed in accordance with the present invention. The host computer 102 is directly connected to the data protection system 106, which manages both the primary data volume 104 and the secondary data volume 108. The system 120 is likely slower than the system 100 described above, because the data protection system 106 must manage both the primary data volume 104 and the secondary data volume 108. This results in a higher latency for writes to the primary volume 104 in the system 120 and lowers the available bandwidth for use. Additionally, the introduction of a new component into the primary data path is undesirable because of reliability concerns.
  • [0031]
    FIG. 1C shows another alternate embodiment of a protected computer system 140 constructed in accordance with the present invention. The host computer 102 is connected to an intelligent switch 142. The switch 142 is connected to the primary data volume 104 and the data protection system 106, which in turn manages the secondary data volume 108. The switch 142 includes the ability to host applications and contains some of the functionality of the data protection system 106 in hardware, to assist in reducing system latency and improving bandwidth.
  • [0032]
    It is noted that the data protection system 106 operates in the same manner, regardless of the particular construction of the protected computer system 100, 120, 140. The major difference between these deployment options is the manner and place in which a copy of each write is obtained. To those skilled in the art it is evident that other embodiments, such as the cooperation between a switch platform and an external server, are also feasible.
  • [0033]
    Conceptual Overview
  • [0034]
    To facilitate further discussion, it is necessary to explain some fundamental concepts associated with a continuous data protection system constructed in accordance with the present invention. In practice, certain applications require continuous data protection with a block-by-block granularity, for example, to rewind individual transactions. However, the period in which such fine granularity is required is generally short (for example, two days), which is why the system can be configured to fade out data over time. The present invention discloses data structures and methods to manage this process automatically.
  • [0035]
    The present invention keeps a log of every write made to a primary volume (a “write log”) by duplicating each write and directing the copy to a cost-effective secondary volume in a sequential fashion. The resulting write log on the secondary volume can then be played back one write at a time to recover the state of the primary volume at any previous point in time. Replaying the write log one write at a time is very time consuming, particularly if a large amount of write activity has occurred since the creation of the write log. In typical recovery scenarios, it is necessary to examine what the primary volume looked like at multiple points in time before deciding which point to recover to. For example, consider a system that was infected by a virus. In order to recover from the virus, it is necessary to examine the primary volume as it was at different points in time to find the latest recovery point where the system was not yet infected by the virus. Additional data structures are needed to efficiently compare multiple potential recovery points.
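For illustration only, the following Python sketch mimics the write log described above: each write is applied to a toy primary volume and duplicated, in order, as a sequential log record, and the log can be replayed one write at a time to rebuild the volume at an earlier point in time. The class and method names are invented here and are not part of the patent.

```python
# Illustrative sketch (not the patented implementation) of a write-logged volume.
import time

class WriteLoggedVolume:
    def __init__(self, size):
        self.primary = bytearray(size)   # the protected (primary) volume
        self.write_log = []              # sequential journal kept on the secondary volume

    def write(self, offset, data):
        # Apply the write to the primary volume and duplicate it, in order,
        # into the write log.
        self.primary[offset:offset + len(data)] = data
        self.write_log.append((time.time(), offset, bytes(data)))

    def recover(self, as_of):
        # Replay the log one write at a time to rebuild the volume as it was
        # at time 'as_of' (the slow path that the delta maps below avoid).
        image = bytearray(len(self.primary))
        for ts, offset, data in self.write_log:
            if ts > as_of:
                break
            image[offset:offset + len(data)] = data
        return image
```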
  • [0036]
    Delta Maps
  • [0037]
    Delta maps provide a mechanism to efficiently recover the primary volume as it was at a particular point in time without the need to replay the write log in its entirety, one write at a time. In particular, delta maps are data structures that keep track of data changes between two points in time. These data structures can then be used to selectively play back portions of the write log such that the resulting point-in-time image is the same as if the log were played back one write at a time, starting at the beginning of the log.
  • [0038]
    FIG. 2 shows a delta map 200 constructed in accordance with the present invention. While the format shown in FIG. 2 is preferred, any format containing similar information may be used. For each write to a primary volume, a duplicate write is made, in sequential order, to a secondary volume. To create a mapping between the two volumes, it is preferable to have an originating entry and a terminating entry for each write. The originating entry includes information regarding the origination of a write, while the terminating entry includes information regarding the termination of the write.
  • [0039]
    As shown in delta map 200, row 210 is an originating entry and row 220 is a terminating entry. Row 210 includes a field 212 for specifying the region of a primary volume where the first block was written, a field 214 for specifying the block offset in the region of the primary volume where the write begins, a field 216 for specifying where on the secondary volume the duplicate write (i.e., the copy of the primary volume write) begins, and a field 218 for specifying the physical device (the physical volume or disk identification) used to initiate the write. Row 220 includes a field 222 for specifying the region of the primary volume where the last block was written, a field 224 for specifying the block offset in the region of the primary volume where the write ends, a field 226 for specifying where on the secondary volume the duplicate write ends, and a field 228. While fields 226 and 228 are provided in a terminating entry such as row 220, it is noted that field 226 is optional because this value can be calculated by subtracting the offsets of the originating entry and the terminating entry (field 226=(field 224−field 214)+field 216), and field 228 is not necessary since there is no physical device usage associated with termination of a write.
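A minimal sketch, assuming a Python representation, of the originating and terminating entries described above. The dataclass fields mirror fields 212 through 228 of FIG. 2, and the derivation of field 226 follows the formula in the text; the names and types are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeltaMapEntry:
    region: int                       # primary volume region (field 212 / 222)
    primary_offset: int               # block offset within the region (field 214 / 224)
    secondary_offset: Optional[int]   # where the duplicate write begins/ends on the secondary (field 216 / 226)
    device: Optional[str] = None      # physical device that initiated the write (field 218; unused in field 228)

def terminating_secondary_offset(originating: DeltaMapEntry, terminating: DeltaMapEntry) -> int:
    # Field 226 is optional because it can be derived:
    # field 226 = (field 224 - field 214) + field 216.
    return (terminating.primary_offset - originating.primary_offset) + originating.secondary_offset
```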
  • [0040]
    In a preferred embodiment, as explained above, each delta map contains a list of all blocks that were changed during the particular time period to which the delta map corresponds. That is, each delta map specifies a block region on the primary volume, the offset on the primary volume, and physical device information. It is noted, however, that other fields or a completely different mapping format may be used while still achieving the same functionality. For example, instead of dividing the primary volume into block regions, a bitmap could be kept, representing every block on the primary volume. Once the retention policy (which is set purely according to operator preference) no longer requires the restore granularity to include a certain time period, corresponding blocks are freed up, with the exception of any blocks that may still be necessary to restore to later recovery points. Once a particular delta map expires, its block list is returned to the appropriate block allocator for re-use.
  • [0041]
    Delta maps are initially created from the write log using a map engine, and can be created in real-time, after a certain number of writes, or according to a time interval. It is noted that these are examples of ways to trigger the creation of a delta map, and that one skilled in the art could devise various other triggers. Additional delta maps may also be created as a result of a merge process (called “merged delta maps”) and may be created to optimize the access and restore process. The delta maps are stored on the secondary volume and contain a mapping of the primary address space to the secondary address space. The mapping is kept in sorted order based on the primary address space.
  • [0042]
    One significant benefit of merging delta maps is a reduction in the number of delta map entries that are required. For example, when there are two writes that are adjacent to each other on the primary volume, the terminating entry for the first write can be eliminated from the merged delta map, since its location is the same as the originating entry for the second write. The delta maps and the structures created by merging maps reduce the amount of overhead required in maintaining the mapping between the primary and secondary volumes.
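The entry-reduction benefit of merging can be sketched as follows, using an assumed interval representation (primary start, primary end, secondary start) kept sorted by primary address: when one write begins where the previous one ended, both on the primary volume and in the sequential journal, the boundary entries between them collapse into one interval.

```python
def merge_adjacent(entries):
    """entries: (primary_start, primary_end, secondary_start) tuples, sorted by primary_start."""
    merged = []
    for start, end, sec in entries:
        if merged:
            p_start, p_end, p_sec = merged[-1]
            # Adjacent on the primary volume and contiguous in the journal:
            # drop the intervening terminating/originating boundary entries.
            if p_end == start and p_sec + (p_end - p_start) == sec:
                merged[-1] = (p_start, end, p_sec)
                continue
        merged.append((start, end, sec))
    return merged

print(merge_adjacent([(0, 8, 100), (8, 16, 108), (32, 40, 116)]))
# -> [(0, 16, 100), (32, 40, 116)]
```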
  • [0043]
    Data Recovery
  • [0044]
    Data is stored in a block format, and delta maps can be merged to reconstruct the full primary volume as it was at a particular point in time. Users need to be able to access this new volume seamlessly from their current servers. There are two ways to accomplish this at a block level. The first way is to mount the new volume (representing the primary volume at a previous point in time) to the server. The problem with this approach is that it can be a relatively complex configuration task, especially since the operation needs to be performed under time pressure and during a crisis situation, i.e., during a system outage. However, some systems now support dynamic addition and removal of volumes, so this may not be a concern in some situations.
  • [0045]
    The second way to access the recovered primary volume is to treat the recovered volume as a piece of removable media (e.g., a CD), that is inserted into a shared removable media drive. In order to properly recover data from the primary volume at a previous point in time, an image of the primary volume is loaded onto a location on the network, each location having a separate identification known as a logical unit number (LUN). This procedure is discussed in U.S. patent application Ser. No. 10/772,017, filed Feb. 4, 2004, which is incorporated by reference as if fully set forth herein.
  • [0046]
    Retention Policy
  • [0047]
    FIG. 3 shows a diagram of a retention policy used in connection with fading out the any-point-in-time (APIT) snapshots over time. The retention policy consists of several parts. One part is used to decide how large the APIT window is and another part decides when to take scheduled snapshots and for how long to retain them. Each scheduled snapshot consists of all the changes up to that point in time; over longer periods of time, each scheduled snapshot will contain the changes covering a correspondingly larger period of time, the granularity of more frequent snapshots being unnecessary.
  • [0048]
    It is noted that outside the APIT window (the left portion of FIG. 3), some data will be phased out (shown by the gaps on the left portion of FIG. 3). Deciding which data to phase out is similar to a typical tape rotation scheme. A policy is entered by the user that decides to retain data that was recorded, for example, at each minute boundary. It is also noted that the present invention provides versioning capabilities with respect to snapshots (i.e., file catalogs, scheduling capabilities, etc.) as well as the ability to establish compound/aggregate policies, etc. when outside an APIT window.
  • [0049]
    Referring now to FIG. 4, there is shown a method 400 for implementing a retention policy outside of an APIT window in a continuous data protection system. The method 400 begins by setting a first time interval to a relatively short period (e.g., one minute) and setting a maximum time interval (e.g., one year; step 402). A snapshot is created for the short time interval (step 404). A short time interval snapshot is one of many snapshots taken at predetermined intervals, to provide a desired level of granularity in the data stored on the secondary volume. Typically the predetermined intervals are set such that there is a high level of granularity (i.e., many snapshots from which to create PIT maps for purposes of a restore) on the secondary volume. The short time interval snapshots are typically used where the data is still relatively fresh and it is likely that changes in the primary volume that occurred between small intervals of time may be needed in the event of a failure.
  • [0050]
    The older the data is, however, the less likely it is that snapshots between small time intervals will be needed (i.e., less granularity is required on the secondary volume). It is then determined whether the short retention time has expired for any of the data (step 406). If the retention time has not expired, the method 400 cycles back to step 404 where additional short time interval snapshots are created. If the retention time for the snapshot has expired (step 406), then longer interval snapshots may be created by merging delta maps for all short interval snapshots (step 408). The retention time is then set to a longer interval (step 410). If the maximum time interval has been reached (step 412), then the method terminates (step 414). If the maximum time interval has not been reached (step 412), then a determination is made whether the longer time interval has expired (step 416). If the longer retention time interval has expired, then the method continues with step 408. If the longer retention time interval has not expired (step 416), then the method waits (step 418) before again checking whether the longer time interval has expired (step 416).
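A simplified sketch of the fading behavior of method 400: snapshots are kept at fine granularity while young and only on coarser interval boundaries as they age, standing in for the delta map merges of step 408. The ladder of intervals and retention windows below, and the boundary test, are illustrative simplifications, not values required by the retention policy.

```python
from datetime import timedelta

# (snapshot interval, how long snapshots at that interval are retained) - illustrative values
LADDER = [
    (timedelta(minutes=1), timedelta(hours=2)),
    (timedelta(hours=1),   timedelta(days=2)),
    (timedelta(days=1),    timedelta(days=60)),
    (timedelta(weeks=1),   timedelta(days=365)),   # maximum interval (e.g., one year)
]

def fade(snapshots, now):
    """snapshots: list of datetime objects; returns the snapshots to keep."""
    keep = []
    for ts in snapshots:
        age = now - ts
        for interval, retain_for in LADDER:
            if age <= retain_for:
                # Keep the snapshot only if it lies on this tier's interval
                # boundary; in the real system the intervening delta maps
                # would be merged (step 408) rather than simply dropped.
                if int(ts.timestamp()) % int(interval.total_seconds()) == 0:
                    keep.append(ts)
                break
    return keep
```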
  • [0051]
    A similar method (not shown) uses a number-based policy that states how many snapshots are kept for each retention time frame (e.g., one minute, one hour, etc.). For example, instead of stating for how long the hourly snapshots are retained or how much disk space should be used to store hourly snapshots, the number-based policy states that at least ten hourly snapshots are kept at any given time. Under this type of policy, the oldest snapshot is discarded when a new snapshot is taken, creating a sliding window of the snapshot coverage in terms of time. The size of the window is determined by the policy settings made by the user.
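The number-based variant can be sketched even more simply, assuming ten snapshots per tier as in the example above: a bounded deque discards the oldest snapshot whenever a new one is appended, producing the sliding window described. The class name and structure are assumptions for illustration.

```python
from collections import deque

class NumberBasedTier:
    """Keeps a fixed count of snapshots; the oldest drops off when a new one arrives."""
    def __init__(self, max_snapshots=10):
        self.snapshots = deque(maxlen=max_snapshots)

    def take_snapshot(self, snapshot):
        self.snapshots.append(snapshot)   # sliding window: the oldest snapshot is discarded automatically
```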
  • [0052]
    Duplicating Snapshots
  • [0053]
    The present invention also supports snapshot clones (including both single clones and double clones) and fault zones. These features extend the data retention policies in an important way. In addition to defining the retention period of each snapshot, cloning allows users to specify the number of physical copies of the data blocks that make up each snapshot that are retained. In other words, a cloning policy defines the amount of redundancy that is used to store each snapshot. Fault zones relate to a group of storage devices that share a common point of failure, for example, all of the volumes connected to a single RAID controller. Fault zones will be discussed in greater detail in connection with FIG. 6.
  • [0054]
    The benefit of retaining multiple copies of certain snapshots is related to the continuous data protection system's journaling structures. Since each write is only retained in one physical location, multiple snapshots typically depend on the same physical data blocks. Future snapshots always depend upon previous data blocks, so if a block has not changed, it will always be in the snapshot. A hardware failure leading to the corruption of a single data block may result in the partial corruption of an entire series of snapshots (from the time the data block becomes corrupted and forward). This behavior is undesirable, and can affect systems in which only metadata is used to create the snapshot and where there are not multiple copies of the same data.
  • [0055]
    One way to eliminate such failure conditions is to duplicate (clone) every write on the secondary volume to two or more independent physical devices. This approach is effective, but also inefficient because it requires multiple times the storage space. A more efficient alternative is to duplicate data selectively in a trade-off between the recovery granularity in the case of a failure and the required storage capacity. It is desirable to only move metadata structures for purposes of duplication, but to also occasionally make multiple copies of the same data blocks as additional insurance against media failure.
  • [0056]
    For example, a cloning policy can be configured where hourly snapshots are not duplicated, but daily snapshots are duplicated to an independent physical disk subsystem. In the unlikely event of a hardware failure causing a corruption of a data block on the secondary disk that consequently impacts a chain of hourly snapshots, the cloned daily snapshots can be used to bound the window of error from both sides to a 24-hour period. Based upon the user's preferences in setting the cloning policy, the user can choose a point between the two extremes (moving only metadata structures and retaining multiple copies of every snapshot), to set the desired level of redundancy. This permits the user to independently manage the recovery points and the redundancy with which each recovery point is kept.
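One way such a policy might be written down is sketched below; the configuration format, key names, and retention value are assumptions of this sketch, since the patent does not prescribe any particular representation.

```python
# Illustrative cloning policy: hourly snapshots carry no extra copies, while
# each daily snapshot is cloned once to an independent disk subsystem, so a
# corrupted block on the secondary disk costs at most a 24-hour window.
cloning_policy = {
    "hourly": {"clones": 0},
    "daily":  {"clones": 1, "target": "independent-subsystem-1", "retention": "90d"},
}
```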
  • [0057]
    The user can select the number of clones to be made, the frequency of the cloning, and the granularity for retaining the clones. This is conceptually different from existing data protection systems, in which the user is bound by the policies predetermined by the data protection system with minimal (if any) input from the user regarding the number of the backups or the frequency of their creation. The retention policy incorporates the cloning policy, so that from an overall perspective, the user selects which points in time to take snapshots of, and for each snapshot, how many copies are to be cloned onto independent disks.
  • [0058]
    A method 500 for implementing a cloning policy in accordance with the present invention is shown in FIG. 5. The method 500 begins by setting the cloning parameters, including the number of clones to be made, the frequency of the cloning, and the granularity for retaining the clones (step 502). A snapshot is then created, as set out in the retention policy and as described above (step 504). A determination is made whether a clone of the current snapshot is to be created (step 506). It is noted that the method 500 operates in the same manner whether one clone or multiple clones are created.
  • [0059]
    If no clones of the current snapshot are to be created, then the method returns to step 504. If a clone of the current snapshot is to be created (step 506), then the clone is created and stored on a separate volume (step 508). After the clone has been stored, a determination at some later time is made whether the clone has expired (step 510). If the clone has not expired, then the method returns to step 504. If the clone has expired, then the clone is deleted (step 512) and the method terminates (step 514).
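The flow of method 500 can be sketched as a simple loop, with the snapshot, cloning, storage, and expiry operations passed in as callables. All of the names, the loop structure, and the hourly default are assumptions made here for illustration, not the patented implementation.

```python
import time

def run_cloning_policy(take_snapshot, should_clone, store_clone,
                       clone_expired, delete_clone,
                       snapshot_interval_s=3600, iterations=24):
    clones = []
    for _ in range(iterations):
        snap = take_snapshot()                  # step 504: snapshot per the retention policy
        if should_clone(snap):                  # step 506: does the cloning policy call for a clone?
            clones.append(store_clone(snap))    # step 508: store the clone on a separate volume
        for clone in list(clones):
            if clone_expired(clone):            # step 510: has the clone expired?
                delete_clone(clone)             # step 512: delete the expired clone
                clones.remove(clone)
        time.sleep(snapshot_interval_s)
```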
  • [0060]
    This redundancy is used to store each snapshot, which, as previously described, is an access point to the secondary storage. If the access point (i.e., the snapshot) becomes corrupted, then a restore to that PIT cannot occur due to the corruption of the snapshot. Cloning alleviates this problem by copying the data blocks of a snapshot and the metadata relating to that snapshot. If the primary snapshot becomes corrupted, the user can still restore to that same PIT by accessing the cloned snapshot. Cloning does not create additional points in time to restore to, but makes a specific PIT more reliable for restoring to by storing multiple redundant copies of the data as it was at a specific PIT.
  • [0061]
    Clones of all the snapshots are generally not stored, because doing so would require too much disk space. Clones can expire at the same time as the original snapshot, or can expire at times unrelated to the time of the original snapshot. Expiring the clones at the same time as the original snapshot is related to the granularity for retaining the original snapshots; there is no need to keep a clone of a snapshot that has been phased out based upon the granularity set in the retention policy.
  • [0062]
    The redundant data blocks are kept on separate disk subsystems or LUNs. Because only a subset of snapshots are duplicated, it is noted that the corresponding delta maps for the duplicated snapshots are different as well. For example, if a given retention policy specifies that M hourly snapshots and N daily snapshots are retained during a certain time period (where M>N>0), and the data blocks making up the N daily snapshots are cloned, then the differences in the delta maps are quite apparent. In the original snapshot sequence, the delta maps (and the corresponding blocks) are kept between each hourly snapshot, whereas the cloned snapshots only contain delta maps and data blocks that specify the changes between the daily snapshots, which is essentially a merged view of the original delta map chain. Because the hourly snapshots are stored without redundancy, multiple delta maps exist between them, while the daily snapshot is cloned only once a day. The data blocks and delta maps that are copied correspond to what would result if all of the shorter-interval delta maps were merged together; the cloned delta map describes the changes between successive clones, so it is a single large delta map that includes all of those changes.
  • [0063]
    This difference, however, can be useful in the case where a delta map structure becomes corrupted, because it is possible to fix the corrupted structure from the cloned instance and vice versa. The ability to fix corrupted structures depends upon the relative granularity that is stored normally and the granularity of the clones. If the granularity is the same, then any corrupted structures can be fixed in either direction. But where the granularity differs, it is only possible to make corrections from the finer granularity structure to the wider granularity structure. So in the example above, only the cloned daily snapshots can be repaired, because the normally retained hourly snapshots provide the finer granularity.
  • [0064]
    Fault Zones and Remote Clones
  • [0065]
    A fault zone is a group of storage devices that share a common point of failure. As used in the present invention, fault zones are arranged in a hierarchical structure, for example (from smaller to larger fault zones), RAID controller/disk, chassis/appliance, data center, and campus. It is noted that these fault zones are exemplary, and that one skilled in the art can create fault zones of finer or wider granularity. If an event occurs to disrupt the data protection system, all volumes within the fault zone will be similarly affected. For example, if the fault zone is a data center and there is a power failure, all devices in the data center will be inoperable.
  • [0066]
    In order to improve system tolerance to potential failures, one of the redundant clones should be stored outside of the fault zone of the secondary data volume, in as distant a location as possible from a fault zone perspective. Continuing the above example, if the fault zone is a data center, then one of the clones should be stored in a different data center or a different campus. There is a trade-off involved in creating a system with remote clones, between the desire to retain remote clones and the costs associated with establishing a remote site and transferring the clones to it. The remote site should be selected such that there is some level of isolation, in terms of fault zones, between the secondary data volume and any clones.
  • [0067]
    When a snapshot is copied to a remote site, the delta maps are copied from the local secondary data volume to the remote secondary data volume. If this transfer fails (e.g., a system interruption occurs before the transfer is completed), it can be restarted by resending the delta maps and associated data. In addition, the entire write log, including time stamps, can be transferred to the remote site. In this case, the transfer can be performed asynchronously (i.e., not in real time), which is a benefit since the write log can be a fairly large file.
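A hedged sketch of the restartable transfer described above. The `send` callable stands in for whatever transport the deployment actually uses, and the retry count and backoff are arbitrary illustrative values.

```python
import time

def transfer_to_remote(delta_maps, send, max_attempts=5, backoff_s=30):
    for _ in range(max_attempts):
        try:
            for dm in delta_maps:
                send(dm)          # copy each delta map (and its associated data blocks)
            return True
        except ConnectionError:
            # If the transfer is interrupted, it is simply restarted by
            # resending the delta maps and associated data.
            time.sleep(backoff_s)
    return False
```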
  • [0068]
    FIGS. 6A-6C show different embodiments of a continuous data protection system including storage for clones. FIG. 6A shows a system 600 which provides local clone storage. It is noted that the parts of the system 600 that correspond to the system 100 described above have been given like reference numerals. The system 600 includes the host computer 102 connected directly to the primary data volume 104 and to the data protection system 106. The data protection system 106 manages the secondary data volume 108 and a copy of the secondary data volume 602 (hereinafter referred to as the “copy volume”). In operation, the data protection system 106 performs writes to the secondary data volume 108 as described above, and writes snapshot clones to the copy volume 602. The fault zone isolation between the secondary data volume 108 and the copy volume 602 is at a disk level.
  • [0069]
    FIG. 6B shows a system 610 that provides remote clone storage. The system 610 includes the host computer 102 connected directly to the primary data volume 104 and to the data protection system 106. The data protection system 106 manages the secondary data volume 108. A second data protection system 612 communicates directly and asynchronously with the data protection system 106 to receive the clones. The second data protection system 612 stores the clones on a third data volume 614. Both the second data protection system 612 and the third data volume 614 are located in a different fault zone from the rest of the system 610; the fault zone isolation in the system 610 is at the appliance level or the data center level.
  • [0070]
    FIG. 6C shows a system 620 that provides remote clone storage via a bunker appliance. The system 620 includes the host computer 102 connected directly to the primary data volume 104 and to the data protection system 106. The data protection system 106 manages the secondary data volume 108. A second data protection system 622 communicates directly with the data protection system 106 in a synchronous manner to receive the clones.
  • [0071]
    The second data protection system 622 stores the clones on a third data volume 624. The second data protection system 622 and the third data volume 624 comprise a bunker appliance 626, which is located in the same fault zone as the data protection system 106 and the secondary data volume 108. The purpose of the bunker appliance 626 is to provide a persistent buffer of data (the write log) that is guaranteed to eventually be copied to the remote node. Alternatively, the bunker appliance 626 can be located in a different fault zone from the data protection system 106 and the secondary data volume 108.
  • [0072]
    A third data protection system 630 communicates with the second data protection system 622 in an asynchronous manner. The third data protection system 630 stores clones received from the second data protection system 622 on a fourth data volume 632. The third data protection system 630 and the fourth data volume 632 comprise a remote node 634, which is located in a different fault zone from the rest of the system 620. The second data protection system 622 and the third data protection system 630 can communicate asynchronously because as long as the third data volume 624 remains intact, it is not critical that the data be transferred to the fourth data volume 632 within a specific time frame. The key point is that the data will be copied to the fourth data volume 632. It is noted that any copies from a secondary volume to a tertiary volume (in this instance, either the third data volume 624 or the fourth data volume 632) can be performed asynchronously. The writes are preferably performed asynchronously so that the multiple writes do not affect the writes to the primary volume (i.e., they add no latency to the writes), which must be synchronous.
  • [0073]
    While specific embodiments of the present invention have been shown and described, many modifications and variations could be made by one skilled in the art without departing from the scope of the invention. The above description serves to illustrate and not limit the particular invention in any way.
Patent Citations
Cited PatentFiling datePublication dateApplicantTitle
US4635145 *Feb 21, 1985Jan 6, 1987Sharp Kabushiki KaishaFloppy disk drive with stand-by mode
US4727512 *Dec 6, 1984Feb 23, 1988Computer Design & Applications, Inc.Interface adaptor emulating magnetic tape drive
US5297124 *Apr 24, 1992Mar 22, 1994Miltope CorporationTape drive emulation system for a disk drive
US5485321 *Dec 29, 1993Jan 16, 1996Storage Technology CorporationFormat and method for recording optimization
US5638509 *Jun 13, 1996Jun 10, 1997Exabyte CorporationData storage and protection system
US5745748 *Dec 9, 1994Apr 28, 1998Sprint Communication Co. L.P.System and method for direct accessing of remote data
US5774292 *Apr 13, 1995Jun 30, 1998International Business Machines CorporationDisk drive power management system and method
US5774643 *Oct 13, 1995Jun 30, 1998Digital Equipment CorporationEnhanced raid write hole protection and recovery
US5774715 *Mar 27, 1996Jun 30, 1998Sun Microsystems, Inc.File system level compression using holes
US5857208 *May 31, 1996Jan 5, 1999Emc CorporationMethod and apparatus for performing point in time backup operation in a computer system
US5864346 *Sep 6, 1995Jan 26, 1999Nintendo Co., Ltd.Picture display unit and image display system
US5872669 *Jun 7, 1995Feb 16, 1999Seagate Technology, Inc.Disk drive apparatus with power conservation capability
US5875479 *Jan 7, 1997Feb 23, 1999International Business Machines CorporationMethod and means for making a dual volume level copy in a DASD storage subsystem subject to updating during the copy interval
US5911779 *Mar 31, 1997Jun 15, 1999Emc CorporationStorage device array architecture with copyback cache
US6012698 *Dec 6, 1996Jan 11, 2000Krinner GmbhMethod and apparatus for clamping the trunk of a Christmas tree
US6021408 *Sep 12, 1996Feb 1, 2000Veritas Software Corp.Methods for operating a log device
US6023709 *Dec 15, 1997Feb 8, 2000International Business Machines CorporationAutomated file error classification and correction in a hierarchical storage management system
US6029179 *Dec 18, 1997Feb 22, 2000International Business Machines CorporationAutomated read-only volume processing in a virtual tape server
US6041329 *May 29, 1997Mar 21, 2000International Business Machines CorporationAutomated message processing system configured to automatically manage introduction of removable data storage media into media library
US6044442 *Nov 21, 1997Mar 28, 2000International Business Machines CorporationExternal partitioning of an automated data storage library into multiple virtual libraries for access by a plurality of hosts
US6049848 *Jul 15, 1998Apr 11, 2000Sutmyn Storage CorporationSystem and method for performing high-speed tape positioning operations
US6061309 *Dec 17, 1997May 9, 2000International Business Machines CorporationMethod and apparatus for maintaining states of an operator panel and convenience input/output station of a dual library manager/dual accessor controller system in the event of a failure to one controller
US6067587 *Jun 17, 1998May 23, 2000Sutmyn Storage CorporationMethod for serializing and synchronizing data packets by utilizing a physical lock system and a control data structure for mutual exclusion lock
US6070224 *Apr 2, 1998May 30, 2000Emc CorporationVirtual tape system
US6173293 *Mar 13, 1998Jan 9, 2001Digital Equipment CorporationScalable distributed file system
US6173359 *Aug 27, 1997Jan 9, 2001International Business Machines Corp.Storage and access to scratch mounts in VTS system
US6195730 *Jul 24, 1998Feb 27, 2001Storage Technology CorporationComputer system with storage device mapping input/output processor
US6225709 *Sep 10, 1999May 1, 2001Kabushiki Kaisha ToshibaPower supply circuit for electric device
US6247096 *Nov 2, 1998Jun 12, 2001International Business Machines CorporationHandling eject requests of logical volumes in a data storage subsystem
US6336163 *Jul 30, 1999Jan 1, 2002International Business Machines CorporationMethod and article of manufacture for inserting volumes for import into a virtual tape server
US6336173 *Apr 1, 1999Jan 1, 2002International Business Machines CorporationStoring and tracking multiple copies of data in data storage libraries
US6339778 *Mar 24, 2000Jan 15, 2002International Business Machines CorporationMethod and article for apparatus for performing automated reconcile control in a virtual tape system
US6341329 *Feb 9, 2000Jan 22, 2002Emc CorporationVirtual tape system
US6343342 *Nov 10, 1999Jan 29, 2002International Business Machiness CorporationStorage and access of data using volume trailer
US6353837 *Jun 30, 1998Mar 5, 2002Emc CorporationMethod and apparatus providing mass storage access from systems using different meta-data formats
US6360232 *Jun 2, 1999Mar 19, 2002International Business Machines CorporationDisaster recovery method for a removable media library
US6385706 *Dec 31, 1998May 7, 2002Emx CorporationApparatus and methods for copying a logical object to a primary storage device using a map of storage locations
US6389503 *Mar 23, 1998May 14, 2002Exabyte CorporationTape drive emulation by removable disk drive and media formatted therefor
US6397307 *Feb 23, 1999May 28, 2002Legato Systems, Inc.Method and system for mirroring and archiving mass storage
US6408359 *Nov 8, 1999Jun 18, 2002Matsushita Electric Industrial Co., Ltd.Storage device management system and method for distributively storing data in a plurality of storage devices
US6546384 *Feb 8, 2002Apr 8, 2003Kom Networks Inc.Method of determining and storing indexing data on a sequential data storage medium for supporting random access of data files stored on the medium
US6557073 *Feb 1, 1999Apr 29, 2003Fujitsu LimitedStorage apparatus having a virtual storage area
US6557089 *Nov 28, 2000Apr 29, 2003International Business Machines CorporationBackup by ID-suppressed instant virtual copy then physical backup copy with ID reintroduced
US6578120 *Jun 24, 1997Jun 10, 2003International Business Machines CorporationSynchronization and resynchronization of loosely-coupled copy operations between a primary and a remote secondary DASD volume under concurrent updating
US6694447 *Sep 29, 2000Feb 17, 2004Sun Microsystems, Inc.Apparatus and method for increasing application availability during a disaster fail-back
US6725331 *Jan 7, 1998Apr 20, 2004Emc CorporationMethod and apparatus for managing the dynamic assignment of resources in a data storage system
US6733520 *Apr 9, 2001May 11, 2004Scimed Life Systems, Inc.Sandwich striped sleeve for stent delivery
US6850964 *Jun 26, 2001Feb 1, 2005Novell, Inc.Methods for increasing cache capacity utilizing delta data
US6877016 *Sep 13, 2001Apr 5, 2005Unisys CorporationMethod of capturing a physically consistent mirrored snapshot of an online database
US6898600 *May 16, 2002May 24, 2005International Business Machines CorporationMethod, system, and program for managing database operations
US6988109 *Dec 6, 2001Jan 17, 2006Io Informatics, Inc.System, method, software architecture, and business model for an intelligent object based information technology platform
US7007043 *Dec 23, 2002Feb 28, 2006Storage Technology CorporationStorage backup system that creates mountable representations of past contents of storage volumes
US7020779 *Aug 22, 2000Mar 28, 2006Sun Microsystems, Inc.Secure, distributed e-mail system
US7032126 *Jul 8, 2003Apr 18, 2006Softek Storage Solutions CorporationMethod and apparatus for creating a storage pool by dynamically mapping replication schema to provisioned storage volumes
US7055009 *Mar 21, 2003May 30, 2006International Business Machines CorporationMethod, system, and program for establishing and maintaining a point-in-time copy
US7200546 *Sep 5, 2003Apr 3, 2007Ultera Systems, Inc.Tape storage emulator
US7200726 *Oct 24, 2003Apr 3, 2007Network Appliance, Inc.Method and apparatus for reducing network traffic during mass storage synchronization phase of synchronous data mirroring
US7203726 *Nov 7, 2001Apr 10, 2007Yamaha CorporationSystem and method for appending advertisement to music card, and storage medium storing program for realizing such method
US7346623 *Sep 30, 2002Mar 18, 2008Commvault Systems, Inc.System and method for generating and managing quick recovery volumes
US20020004835 *Jun 1, 2001Jan 10, 2002Inrange Technologies CorporationMessage queue server system
US20020016827 *Jun 2, 2001Feb 7, 2002Mccabe RonFlexible remote data mirroring
US20020026595 *Aug 30, 2001Feb 28, 2002Nec CorporationPower supply control system and power supply control method capable of reducing electric power consumption
US20030004980 *Jun 27, 2001Jan 2, 2003International Business Machines CorporationPreferential caching of uncopied logical volumes in a peer-to-peer virtual tape server
US20030005313 *Jul 18, 2002Jan 2, 2003Berndt GammelMicroprocessor configuration with encryption
US20030014568 *Jul 13, 2001Jan 16, 2003International Business Machines CorporationMethod, system, and program for transferring data between storage devices
US20030025800 *Jul 15, 2002Feb 6, 2003Hunter Andrew ArthurControl of multiple image capture devices
US20030037211 *Aug 8, 2001Feb 20, 2003Alexander WinokurData backup method and system using snapshot and virtual tape
US20030044834 *Sep 11, 2002Mar 6, 2003Daly Roger JohnGDU, a novel signalling protein
US20030046260 *Aug 30, 2001Mar 6, 2003Mahadev SatyanarayananMethod and system for asynchronous transmission, backup, distribution of data and file sharing
US20030120476 *Nov 8, 2002Jun 26, 2003Neville YatesInterfaces for an open systems server providing tape drive emulation
US20030120676 *Dec 21, 2001Jun 26, 2003Sanrise Group, Inc.Methods and apparatus for pass-through data block movement with virtual storage appliances
US20040015731 *Jul 16, 2002Jan 22, 2004International Business Machines CorporationIntelligent data management for hard disk drive
US20040098244 *Nov 14, 2002May 20, 2004Imation Corp.Method and system for emulating tape storage format using a non-tape storage medium
US20040103147 *Jun 10, 2003May 27, 2004Flesher Kevin E.System for enabling collaboration and protecting sensitive data
US20050010529 *Jul 8, 2003Jan 13, 2005Zalewski Stephen H.Method and apparatus for building a complete data protection scheme
US20050044162 *Aug 22, 2003Feb 24, 2005Rui LiangMulti-protocol sharable virtual storage objects
US20050063374 *Mar 12, 2004Mar 24, 2005Revivio, Inc.Method for identifying the time at which data was written to a data store
US20050065762 *Sep 3, 2004Mar 24, 2005Hirokazu HayashiESD protection device modeling method and ESD simulation method
US20050065962 *Feb 17, 2004Mar 24, 2005Revivio, Inc.Virtual data store creation and use
US20050066118 *Aug 24, 2004Mar 24, 2005Robert PerryMethods and apparatus for recording write requests directed to a data store
US20050066222 *Sep 23, 2003Mar 24, 2005Revivio, Inc.Systems and methods for time dependent data storage and recovery
US20050066225 *Aug 24, 2004Mar 24, 2005Michael RowanData storage system
US20050076070 *Dec 24, 2003Apr 7, 2005Shougo MikamiMethod, apparatus, and computer readable medium for managing replication of back-up object
US20050076261 *Feb 13, 2004Apr 7, 2005Revivio, Inc.Method and system for obtaining data stored in a data store
US20050076262 *Feb 13, 2004Apr 7, 2005Revivio, Inc.Storage management device
US20050076264 *Aug 24, 2004Apr 7, 2005Michael RowanMethods and devices for restoring a portion of a data store
US20050097260 *Nov 3, 2003May 5, 2005Mcgovern William P.System and method for record retention date in a write once read many storage system
US20050108302 *Apr 10, 2003May 19, 2005Rand David L.Recovery of data on a primary data volume
US20050144407 *Dec 31, 2003Jun 30, 2005Colgrove John A.Coordinated storage management operations in replication environment
US20060010177 *Jul 9, 2004Jan 12, 2006Shoji KodamaFile server for long term data archive
US20060047895 *Aug 24, 2004Mar 2, 2006Michael RowanSystems and methods for providing a modification history for a location within a data store
US20060047902 *Aug 24, 2004Mar 2, 2006Ron PasseriniProcessing storage-related I/O requests using binary tree data structures
US20060047903 *Aug 24, 2004Mar 2, 2006Ron PasseriniSystems, apparatus, and methods for processing I/O requests
US20060047905 *Sep 17, 2004Mar 2, 2006Matze John ETape emulating disk based storage system and method with automatically resized emulated tape capacity
US20060047925 *Aug 24, 2004Mar 2, 2006Robert PerryRecovering from storage transaction failures using checkpoints
US20060047989 *Aug 24, 2004Mar 2, 2006Diane DelgadoSystems and methods for synchronizing the internal clocks of a plurality of processor modules
US20060047998 *Aug 24, 2004Mar 2, 2006Jeff DarcyMethods and apparatus for optimally selecting a storage buffer for the storage of data
US20060047999 *Aug 24, 2004Mar 2, 2006Ron PasseriniGeneration and use of a time map for accessing a prior image of a storage device
US20060143376 *Aug 30, 2005Jun 29, 2006Matze John ETape emulating disk based storage system and method
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US7593973Nov 15, 2006Sep 22, 2009Dot Hill Systems Corp.Method and apparatus for transferring snapshot data
US7716183Apr 11, 2007May 11, 2010Dot Hill Systems CorporationSnapshot preserved data cloning
US7716435 *Mar 30, 2007May 11, 2010Emc CorporationProtection of point-in-time application data using snapshot copies of a logical volume
US7720817Feb 4, 2005May 18, 2010Netapp, Inc.Method and system for browsing objects on a protected volume in a continuous data protection system
US7752401Jan 25, 2006Jul 6, 2010Netapp, Inc.Method and apparatus to automatically commit files to WORM status
US7769723 *Apr 28, 2006Aug 3, 2010Netapp, Inc.System and method for providing continuous data protection
US7774610Aug 10, 2010Netapp, Inc.Method and apparatus for verifiably migrating WORM data
US7783603May 10, 2007Aug 24, 2010Dot Hill Systems CorporationBacking store re-initialization method and apparatus
US7783606Aug 24, 2010Netapp, Inc.Method and system for remote data recovery
US7783850Aug 24, 2010Dot Hill Systems CorporationMethod and apparatus for master volume access during volume copy
US7797582Aug 3, 2007Sep 14, 2010Netapp, Inc.Method and system for storing data using a continuous data protection system
US7831565Nov 9, 2010Dot Hill Systems CorporationDeletion of rollback snapshot partition
US7904679Mar 8, 2011Netapp, Inc.Method and apparatus for managing backup data
US7913116 *Mar 22, 2011Red Hat, Inc.Systems and methods for incremental restore
US7933987Sep 22, 2006Apr 26, 2011Lockheed Martin CorporationApplication of virtual servers to high availability and disaster recovery solutions
US7975115Jul 5, 2011Dot Hill Systems CorporationMethod and apparatus for separating snapshot preserved and write data
US7979654Jul 12, 2011Netapp, Inc.Method and system for restoring a volume in a continuous data protection system
US8001345May 10, 2007Aug 16, 2011Dot Hill Systems CorporationAutomatic triggering of backing store re-initialization
US8010496 *Aug 30, 2011Hitachi, Ltd.Backup management method in a remote copy environment
US8028135 *Sep 1, 2004Sep 27, 2011Netapp, Inc.Method and apparatus for maintaining compliant storage
US8078585 *Dec 13, 2011Emc CorporationReactive file recovery based on file naming and access information
US8095751Jan 10, 2012International Business Machines CorporationManaging set of target storage volumes for snapshot and tape backups
US8108635 *Jun 27, 2008Jan 31, 2012International Business Machines CorporationSystem, method and computer program product for copying data
US8200631Jun 12, 2012Dot Hill Systems CorporationSnapshot reset method and apparatus
US8204858Jun 25, 2007Jun 19, 2012Dot Hill Systems CorporationSnapshot reset method and apparatus
US8255660Aug 22, 2011Aug 28, 2012American Megatrends, Inc.Data migration between multiple tiers in a storage system using pivot tables
US8271447 *Sep 18, 2012Emc International CompanyMirroring metadata in a continuous data protection environment
US8386432Jul 20, 2011Feb 26, 2013Hitachi, Ltd.Backup management method in a remote copy environment
US8402209Apr 16, 2009Mar 19, 2013American Megatrends, Inc.Provisioning space in a data storage system
US8438135 *May 7, 2013Emc International CompanyMirroring metadata in a continuous data protection environment
US8458134 *Jun 4, 2013International Business Machines CorporationNear continuous space-efficient data protection
US8473777 *Apr 29, 2010Jun 25, 2013Netapp, Inc.Method and system for performing recovery in a storage system
US8554734 *Jul 15, 2008Oct 8, 2013American Megatrends, Inc.Continuous data protection journaling in data storage systems
US8572040 *Apr 27, 2007Oct 29, 2013International Business Machines CorporationMethods and infrastructure for performing repetitive data protection and a corresponding restore of data
US8600948 *Feb 28, 2006Dec 3, 2013Emc CorporationAvoiding duplicative storage of managed content
US8656123Aug 12, 2009Feb 18, 2014Dot Hill Systems CorporationSnapshot preserved data cloning
US8688936 *Oct 20, 2009Apr 1, 2014International Business Machines CorporationPoint-in-time copies in a cascade using maps and fdisks
US8706694May 27, 2009Apr 22, 2014American Megatrends, Inc.Continuous data protection of files stored on a remote storage device
US8713272May 17, 2012Apr 29, 2014International Business Machines CorporationPoint-in-time copies in a cascade using maps and fdisks
US8751467Jan 18, 2007Jun 10, 2014Dot Hill Systems CorporationMethod and apparatus for quickly accessing backing store metadata
US8812811Aug 10, 2012Aug 19, 2014American Megatrends, Inc.Data migration between multiple tiers in a storage system using pivot tables
US8818936 *Jun 29, 2007Aug 26, 2014Emc CorporationMethods, systems, and computer program products for processing read requests received during a protected restore operation
US8954789Jun 7, 2013Feb 10, 2015Netapp, Inc.Method and system for performing recovery in a storage system
US8990153Nov 20, 2006Mar 24, 2015Dot Hill Systems CorporationPull data replication model
US9323750Nov 26, 2012Apr 26, 2016Emc CorporationStorage array snapshots for logged access replication in a continuous data protection system
US9323760 *Mar 15, 2013Apr 26, 2016Emc CorporationIntelligent snapshot based backups
US9361243Jul 31, 2012Jun 7, 2016Kom Networks Inc.Method and system for providing restricted access to a storage medium
US20070061359 *Feb 28, 2006Mar 15, 2007Emc CorporationOrganizing managed content for efficient storage and management
US20070061373 *Feb 28, 2006Mar 15, 2007Emc CorporationAvoiding duplicative storage of managed content
US20070078982 *Sep 22, 2006Apr 5, 2007Mehrdad AidunApplication of virtual servers to high availability and disaster recovery solutions
US20070185973 *Nov 20, 2006Aug 9, 2007Dot Hill Systems, Corp.Pull data replication model
US20070186001 *Nov 20, 2006Aug 9, 2007Dot Hill Systems Corp.Data replication method and apparatus
US20070260645 *Apr 27, 2007Nov 8, 2007Oliver AugensteinMethods and infrastructure for performing repetitive data protection and a corresponding restore of data
US20070276878 *Apr 28, 2006Nov 29, 2007Ling ZhengSystem and method for providing continuous data protection
US20080005198 *Jun 29, 2006Jan 3, 2008Emc CorporationReactive file recovery based on file naming and access information
US20080072003 *Nov 27, 2007Mar 20, 2008Dot Hill Systems Corp.Method and apparatus for master volume access during volume copy
US20080114951 *Nov 15, 2006May 15, 2008Dot Hill Systems Corp.Method and apparatus for transferring snapshot data
US20080147756 *Feb 29, 2008Jun 19, 2008Network Appliance, Inc.Method and system for restoring a volume in a continuous data protection system
US20080177957 *Jan 18, 2007Jul 24, 2008Dot Hill Systems Corp.Deletion of rollback snapshot partition
US20080256141 *Jul 19, 2007Oct 16, 2008Dot Hill Systems Corp.Method and apparatus for separating snapshot preserved and write data
US20080256311 *Apr 11, 2007Oct 16, 2008Dot Hill Systems Corp.Snapshot preserved data cloning
US20080281875 *May 10, 2007Nov 13, 2008Dot Hill Systems Corp.Automatic triggering of backing store re-initialization
US20080281877 *May 10, 2007Nov 13, 2008Dot Hill Systems Corp.Backing store re-initialization method and apparatus
US20080320258 *Jun 25, 2007Dec 25, 2008Dot Hill Systems Corp.Snapshot reset method and apparatus
US20090217085 *Feb 27, 2008Aug 27, 2009Van Riel Henri HSystems and methods for incremental restore
US20090248759 *May 30, 2008Oct 1, 2009Hitachi, Ltd.Backup management method in a remote copy environment
US20090307450 *Aug 12, 2009Dec 10, 2009Dot Hill Systems CorporationSnapshot Preserved Data Cloning
US20090327627 *Dec 31, 2009International Business Machines CorporationSystem, method and computer program product for copying data
US20090328229 *Dec 31, 2009International Business Machines CorporationSystem, method and computer program product for performing a data protection operation
US20100017444 *May 27, 2009Jan 21, 2010Paresh ChatterjeeContinuous Data Protection of Files Stored on a Remote Storage Device
US20110072104 *Nov 20, 2006Mar 24, 2011Dot Hill Systems CorporationPull data replication model
US20110087792 *Nov 20, 2006Apr 14, 2011Dot Hill Systems CorporationData replication method and apparatus
US20110208932 *Oct 20, 2009Aug 25, 2011International Business Machines CorporationFlashcopy handling
US20120254122 *Oct 4, 2012International Business Machines CorporationNear continuous space-efficient data protection
WO2008127831A1 *Mar 18, 2008Oct 23, 2008Dot Hill Systems Corp.Snapshot preserved data cloning
Classifications
U.S. Classification: 711/162, 711/159
International Classification: G06F12/16
Cooperative Classification: G06F11/1471, G06F11/2076, G06F11/1461, G06F11/1451, G06F11/1456
Legal Events
Date | Code | Event | Description
Oct 11, 2005 | AS | Assignment
Owner name: ALACRITUS, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:STAGER, ROGER KEITH;TRIMMER, DONALD ALVIN;SAXENA, PAWAN;AND OTHERS;REEL/FRAME:016873/0908
Effective date: 20050422
Feb 17, 2006 | AS | Assignment
Owner name: ALACRITUS, INC., CALIFORNIA
Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ITEM 4. PATENT APPLICATION NO. WAS INCORRECTLY LISTED AS 11/051,882. SHOULD BE 11/051,862. PREVIOUSLY RECORDED ON REEL 016873 FRAME 0908;ASSIGNORS:STAGER, ROGER KEITH;TRIMMER, DONALD ALVIN;SAXENA, PAWAN;AND OTHERS;REEL/FRAME:017182/0934
Effective date: 20050422
Oct 27, 2008 | AS | Assignment
Owner name: NETAPP, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALACRITUS, INC.;REEL/FRAME:021744/0001
Effective date: 20081024