WO2009141752A2 - Resilient data storage in the presence of replication faults and rolling disasters - Google Patents
- Publication number
- WO2009141752A2 (PCT application PCT/IB2009/051919)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data items
- storage device
- disaster
- operational mode
- processor
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/1443—Transmit or communication errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2082—Data synchronisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2071—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
- G06F11/2074—Asynchronous techniques
Definitions
- the present invention relates generally to data protection, and particularly to methods and systems for resilient data storage using disaster-proof storage devices.
- PCT International Publication WO 2006/111958 whose disclosure is incorporated herein by reference, describes a method for data protection, which includes accepting data for storage from one or more data sources.
- the data is sent for storage in a primary storage device and in a secondary storage device.
- a record associated with the data is temporarily stored in a disaster-proof storage unit adjacent to the primary storage device.
- the data is reconstructed using the record stored in the disaster-proof storage unit and at least part of the data stored in the secondary storage device.
- An embodiment of the present invention provides a method for data protection, including: in a first operational mode, sending data items for storage in a primary storage device and in a secondary storage device, while temporarily caching the data items in a disaster-proof storage unit and subsequently deleting the data items from the disaster-proof storage unit, wherein each data item is deleted from the disaster-proof storage unit upon successful storage of the data item in the secondary storage device; receiving an indication of a fault related to storage of the data in the secondary storage device; and responsively to the indication, switching to operating in a second operational mode in which the data items are sent for storage at least in the primary storage device and are cached and retained in the disaster-proof storage unit irrespective of the successful storage of the data items in the secondary storage device.
- the data items include Input/Output (I/O) transactions that are received from one or more applications, and operating in the second operational mode includes continuing to receive the I/O transactions from the applications while the fault is present.
- switching to operating in the second operational mode includes operating in the second operational mode for a predefined time duration following the indication, and switching back to the first operational mode after the predefined time duration.
- a notification related to memory unavailability in the disaster-proof storage unit is received, and an action is performed responsively to the notification. Performing the action may include switching back to the first operational mode responsively to the notification, and/or prompting a user responsively to the notification.
- performing the action includes selecting the action from a group of actions including refusing to accept subsequent data items following the notification, and storing the subsequent data items only in the primary storage device. Selecting the action may include choosing the action responsively to a predefined configuration parameter.
- operating in the second operational mode includes allowing acceptance of subsequent data items for a predefined time period following the indication, responsively to determining that the fault is associated with an unidentifiable software failure.
- operation in the first operational mode continues irrespective of the indication, responsively to determining that the fault is associated with an identifiable software failure.
- the method includes refusing to accept subsequent data items for a predefined time period responsively to detecting a failure in caching the data items in a disaster-proof storage unit.
- the data items are accepted from one or more software applications, and refusing to accept the subsequent data items includes initially refusing to accept the subsequent data items for a first time period that does not disrupt the software applications, and, responsively to an assessed cause of the failure, continuing to refuse to accept the subsequent data items for a second time period that is longer than the first time period.
- a data protection apparatus including: an interface, which is configured to receive data items from one or more data sources; and a processor, which is configured to operate in a first operational mode by sending the data items for storage in a primary storage device and in a secondary storage device, while temporarily caching the data items in a disaster-proof storage unit and subsequently deleting the data items from the disaster-proof storage unit, wherein each data item is deleted from the disaster-proof storage unit upon successful storage of the data item in the secondary storage device, and which is configured, responsively to receiving an indication of a fault related to storage of the data in the secondary storage device, to switch to operating in a second operational mode, in which the data items are sent for storage at least in the primary storage device and are cached and retained in the disaster-proof storage unit, by instructing the disaster-proof storage unit to retain the data items irrespective of the successful storage of the data items in the secondary storage device.
- Fig. 1 is a block diagram that schematically illustrates a data protection system, in accordance with an embodiment of the present invention.
- Fig. 2 is a flow chart that schematically illustrates a method for data protection, in accordance with an embodiment of the present invention.
- data replication in the secondary storage device may fail for various reasons, such as communication link failures, failures in the replication software or other system components, or because of a rolling disaster that caused the replication fault but has not yet hit the primary storage device.
- reaction to a replication fault may be to refuse to accept new data for storage. This sort of reaction, however, typically suspends the applications that produce the data. Another possible reaction is to continue accepting data, and to store the data only in the primary storage device without replication. A reaction of this sort does not suspend the applications, but on the other hand does not offer data protection.
- Embodiments of the present invention that are described hereinbelow provide improved methods and systems for data protection in the presence of replication faults.
- data items (e.g., I/O transactions) are sent for storage in a primary storage device and a secondary storage device.
- the data items are cached temporarily in a disaster-proof storage unit, which is typically adjacent to the primary storage device.
- the data items are cached in the disaster-proof storage unit until they have been stored successfully in the secondary storage device. In other words, when a given data item is replicated successfully, it is deleted from the disaster-proof storage unit. Once a replication fault is detected, however, incoming data items are retained in the disaster-proof storage unit regardless of whether or not they have been replicated successfully in the secondary storage device. This mode of operation typically continues until the memory of the disaster-proof storage unit is full, typically on the order of fifteen minutes. If caching in the disaster-proof storage unit fails for some reason, acceptance of additional data items is refused for a short time period that does not yet suspend the applications (e.g., 45 seconds).
- This short blockage period is used for assessing whether the failure is caused by a rolling disaster in the vicinity of the primary site, or by equipment failure in or around the disaster-proof storage unit.
- the methods and systems described herein are highly-effective in mitigating temporary replication faults, such as communication or equipment faults.
- data items received during a temporary replication fault are cached in the disaster-proof storage unit, and are replicated in the secondary storage device after the fault is corrected. This caching and replication process is performed without suspending the applications and without loss of data protection. If the replication fault is caused by a permanent fault or a rolling disaster, on the other hand, the data items can later be recovered from the disaster-proof storage unit and the secondary storage device.
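- The two-mode caching behavior described above can be sketched as follows. This is an illustrative sketch only; the class and method names (DisasterProofCache, on_secondary_ack, and so on) are assumptions and do not come from the patent.

```python
class DisasterProofCache:
    """Sketch of the caching policy: items are deleted on replication
    acknowledgement only in normal mode, and retained once a fault is seen."""
    NORMAL, FAULT = "normal", "fault"

    def __init__(self):
        self.mode = self.NORMAL
        self.cached = {}  # item_id -> data item awaiting secondary storage

    def store(self, item_id, data):
        # Every incoming data item is cached, in both operational modes.
        self.cached[item_id] = data

    def on_secondary_ack(self, item_id):
        # In normal mode, a successfully replicated item is no longer needed.
        # In fault mode, items are retained irrespective of replication status.
        if self.mode == self.NORMAL:
            self.cached.pop(item_id, None)

    def on_replication_fault(self):
        # Replication fault detected: stop deleting cached items.
        self.mode = self.FAULT
```

Under this sketch, an acknowledged item disappears from the cache during normal operation, while after a fault indication the same acknowledgement leaves the cached copy in place.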
- Fig. 1 is a block diagram that schematically illustrates a data protection system 20, in accordance with an embodiment of the present invention.
- System 20 accepts data items from one or more data sources, and stores the data items in a mirrored configuration in order to protect them against disaster events.
- the data sources comprise one or more application servers 24, and the data items comprise Input/Output (I/O) transactions, which are produced by applications running on servers 24. More particularly, the data items may comprise I/O write operations.
- System 20 stores the accepted transactions in a primary storage device 28, and replicates the transactions in a secondary storage device 32 via a communication link 36.
- the primary and secondary storage devices are typically distant from one another, such that replication of the transactions protects the data against disaster events that may hit the primary storage device.
- the primary and secondary storage devices may comprise disks, magnetic tapes, computer memory devices and/or devices based on any other suitable storage technology.
- the storage devices comprise internal processors that perform local data storage and retrieval-related functions.
- the primary and secondary storage devices comprise Direct Access Storage Devices (DASDs). Alternatively, however, the storage devices may comprise any other suitable storage device type.
- system 20 comprises an administrator terminal 40, using which an administrator or other user can control and configure the operation of the system.
- System 20 further comprises a disaster-proof storage unit 44, which is used for temporary caching of transactions that are in the process of being replicated and stored in secondary storage device 32.
- Unit 44 is typically adjacent to primary storage device 28, and is constructed so as to withstand and survive disaster events that may hit the primary storage device.
- Disaster-proof unit 44 is controlled by a protection processor 48.
- the protection processor accepts from the primary storage device transactions for caching, and sends them to unit 44.
- the protection processor also receives from the primary storage device instructions to delete certain transactions from unit 44, for example when these transactions are known to be stored successfully in the secondary storage device.
- Protection processor 48 is connected to the primary storage device using a communication link 52, and to the disaster-proof storage unit using a communication link 56.
- Primary storage device 28 comprises an interface 58 for interacting with application servers 24, and a storage processor 60 that carries out the methods described herein.
- interface 58 accepts transactions for storage from the application servers, and sends the application servers acknowledgements indicating that the transactions have been stored successfully.
- Processor 60 comprises a mirroring application 64 and an I/O status module 68.
- the mirroring application manages the data replication functions of system 20. For example, the mirroring application receives transactions for storage via interface 58 and sends copies of the transactions to the primary storage device, the secondary storage device and the protection processor. The mirroring application also receives acknowledgements from the primary storage device, the secondary storage device and/or the protection processor, indicating that the transactions have been stored successfully. In response to these acknowledgements, the mirroring application sends acknowledgements to the application servers via interface 58, and/or instructs the protection processor to delete certain transactions from the disaster-proof storage unit.
- I/O status module 68 controls the behavior of the applications running on servers 24, during normal operation and in the presence of replication faults. In particular, module 68 controls whether the applications stop or continue sending transactions for storage. I/O module 68 receives completion status indications from mirroring application 64, from primary storage device 28 and from protection processor 48, notifying the I/O module whether a given transaction was stored successfully in the secondary storage device, the primary storage device and the disaster-proof storage unit, respectively. Based on the received status indications, module 68 sends completion status indications to the applications. Using these indications, the I/O status module can cause the applications to stop or continue sending transactions for storage. The functions of module 68 are described in greater detail further below.
- system 20 may comprise multiple primary storage devices, secondary storage devices and/or disaster-proof storage units.
- processor 60 may be implemented on a processor that is external to primary storage device 28.
- the functions of processor 60 and of processor 48 may be implemented in a common processor.
- protection processor 48 may be located between the data sources and the primary storage device, or between the primary storage device and the secondary storage device.
- system 20 may accept and store any other suitable kind of data items accepted from any other suitable kind of data source, such as, for example, communication transactions produced by a telephony system such as a Private Automatic Branch Exchange (PABX) or a telephony switch, and/or information produced by a surveillance system, security system or access control system such as a closed-circuit television (CCTV) system.
- processors 48 and 60 comprise general-purpose processors, which are programmed in software to carry out the functions described herein.
- the software may be downloaded to the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on tangible media, such as magnetic, optical, or electronic memory.
- system 20 stores each incoming I/O transaction in primary storage device 28, sends the transaction for storage in secondary storage device 32, and caches the transaction in disaster-proof storage unit 44 until the transaction is stored successfully in the secondary storage device.
- the system deletes the copy of this transaction that is cached in disaster-proof storage unit 44.
- the disaster-proof storage unit caches only the transactions that are not yet stored successfully in the secondary storage device.
- each transaction is backed-up in secondary storage device 32, in disaster-proof storage unit 44, or in both. If a disaster event damages the data stored in primary storage device 28, the data can be recovered based on the data stored in unit 44 and in secondary storage device 32.
- the term "replication fault" means any fault that affects the storage of data items in secondary storage device 32, in disaster-proof storage unit 44, or in both.
- a replication fault may be caused by a rolling disaster event, i.e., a disaster event that evolves over time and gradually affects additional elements of system 20.
- a rolling disaster event may affect the replication functionality of the system before it hits the primary storage device. During this interim time period, such an event manifests itself as a replication fault.
- When a rolling disaster of this sort finally hits the primary site, the data of write operations that occurred during this interim period is typically lost (assuming the fault disrupted replication to the remote site and the applications running on application server 24 were not stopped). The techniques described herein prevent this data loss.
- replication faults may be caused by various kinds of equipment faults in system 20, such as communication faults in communication links 36, 52 and 56, and/or software or hardware faults in the various elements of system 20.
- These kinds of replication faults may sometimes be associated with a rolling disaster, but in many cases it is difficult to determine the cause when the fault is detected. In these scenarios, if replication is disrupted but applications are not stopped, and assuming the fault is later corrected and replication is resumed, data will not be lost since no disaster will typically follow the detection of the fault.
- the delay from the time at which a rolling disaster event damages the replication functionality to the time at which the primary storage device is hit does not exceed fifteen minutes.
- the time interval from detection of a first fault in the primary site until all components fail is typically on the order of 45 seconds. This sort of event is referred to as a local rolling disaster.
- When a replication fault that disrupts replication to secondary storage device 32 occurs, the primary storage device does not immediately refuse to receive incoming transactions from application servers 24. Instead, the primary storage device continues to receive transactions for storage, but retains these transactions in disaster-proof storage unit 44.
- processor 60 in the primary storage device instructs protection processor 48 to stop deleting transactions from unit 44, irrespective of whether or not these transactions are stored successfully in the secondary storage device.
- this mode of operation continues until the memory of disaster-proof storage unit 44 fills up (e.g., on the order of fifteen minutes), by which time, if a disaster has not hit the primary site, it is likely that the fault is not associated with a disaster. In this latter case, the replication may be stopped and applications can continue writing data to primary DASD 28 without protection to secondary DASD 32.
- Fig. 2 is a flow chart that schematically illustrates a method for data protection, in accordance with an embodiment of the present invention.
- the method begins with system 20 operating in a normal operational mode, at a normal operation step 70.
- storage processor 60 in primary storage device 28 accepts transactions from application servers 24.
- Processor 60 stores a first copy of each transaction in the primary storage device, and sends a second copy of each transaction for mirrored storage in the secondary storage device.
- Processor 60 instructs protection processor 48 to cache a third copy of each transaction in disaster-proof storage unit 44.
- Upon successful storage of a given transaction in the secondary storage device (e.g., in response to an acknowledgement received from the secondary storage device), processor 60 instructs protection processor 48 to delete the copy of this transaction from unit 44.
- Processor 60 checks whether a replication fault is present, at a fault checking step 74. If replication functions properly, i.e., if the transactions are stored successfully in the secondary storage device, the method loops back to step 70 above, and system 20 continues to operate in the normal operational mode.
- If processor 60 receives an indication of a replication fault, it switches to operate in a fault operational mode, at a fault operation step 78. In this mode of operation, processor 60 continues to accept transactions from application servers 24, to store them in the primary storage device and to cache them in disaster-proof unit 44. Unlike the normal operational mode, however, in the fault operational mode there is no guarantee that the transactions are replicated in the secondary storage device. Therefore, processor 60 stops instructing the protection processor to delete transactions from disaster-proof storage unit 44. In other words, processor 60 retains the transactions cached in unit 44 regardless of whether or not they are stored successfully in the secondary storage device.
- Processor 60 may switch back from the fault operational mode to the normal operational mode according to various conditions, such as after a predefined time period (e.g., fifteen minutes), in response to receiving an indication that the replication fault has been corrected, or using any other suitable condition.
- processor 60 checks whether the replication fault has been corrected or whether a predefined time out expired, at a return checking step 82. If the fault is no longer present, the method loops back to step 70 above, and processor 60 returns to operate in the normal operational mode.
- processor 60 checks whether the memory of the disaster-proof storage unit is full or about to become full, at a memory full checking step 86. If the disaster-proof storage unit still has free memory space to store subsequent transactions, the method loops back to step 78 above, and processor 60 continues to operate in the fault operational mode.
- If the memory of disaster-proof unit 44 is full, processor 60 cannot continue to cache transactions in unit 44. Replication in the secondary site is also not available because of the replication fault. At this stage, processor 60 begins to operate in a protection-less operational mode 90.
- the functionality of this mode may vary, for example, depending on the importance of mirroring the transactions.
- system 20 has a global configuration parameter denoted "mirroring criticality.” When the mirroring criticality is high, processor 60 refuses to accept incoming transactions when operating in the protection-less mode, since it is not able to protect these transactions. This refusal, however, typically suspends the applications. In this sort of operation, if a disaster hits the primary site, the transactions can be recovered using the disaster-proof unit and the secondary storage device without data loss, even if the fault is not associated with a rolling disaster.
- When the mirroring criticality is low, processor 60 continues to accept transactions without mirroring them in the secondary storage device or caching them in the disaster-proof storage unit.
- the transactions are stored in the primary storage device, typically without protection. This sort of operation, however, allows the applications to continue running. This sort of operation may be based on an assumption that if a disaster did not hit the primary site up to this point in time, the replication fault is probably not associated with a disaster event.
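- The decision loop of Fig. 2, together with the "mirroring criticality" behavior in the protection-less mode, can be sketched as follows. The mode names, function names, and return values are illustrative assumptions, not taken from the patent.

```python
# Illustrative mode names for the three operational modes described above.
NORMAL, FAULT_MODE, PROTECTION_LESS = "normal", "fault", "protection-less"

def next_mode(mode, replication_fault, fault_corrected, dpu_memory_full):
    """One pass through the checks at steps 74, 82 and 86 of Fig. 2."""
    if mode == NORMAL:
        # Step 74: switch to the fault mode on a replication fault indication.
        return FAULT_MODE if replication_fault else NORMAL
    if mode == FAULT_MODE:
        if fault_corrected:
            return NORMAL           # step 82: fault corrected or timeout expired
        if dpu_memory_full:
            return PROTECTION_LESS  # step 86: unit 44 has no free memory left
        return FAULT_MODE           # step 78: keep caching in unit 44
    return mode

def protection_less_action(mirroring_criticality):
    """Behavior at step 90, keyed on the 'mirroring criticality' parameter."""
    if mirroring_criticality == "high":
        return "reject-transactions"  # suspends applications; no data loss
    return "store-primary-only"       # applications keep running, unprotected
```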
- the specification of system 20 allows a certain maximum amount of data to be lost in the event of a disaster.
- the allowed value is referred to as "LAG” and may be expressed, for example, as a maximum number of transactions that may be lost, a maximum data size (e.g. in MB) that may be lost, or as a maximum time period following the disaster in which transactions may be lost.
- the actual amount of data that is lost following a given disaster event is referred to as a Recovery Point Objective (RPO), and is typically expressed in the same units as the LAG.
- the memory size of disaster-proof unit 44 is typically designed to enable caching of transactions during a time period that is characteristic of evolution of a rolling disaster.
- processor 60 is configured to bypass replication mode after fifteen minutes.
- the memory in unit 44 should typically be large enough to cache the transaction volume accepted over a period of fifteen minutes plus the specified LAG.
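- The sizing rule above can be illustrated with a back-of-envelope calculation. The write rate and LAG figures below are assumed purely for illustration and do not come from the patent; only the fifteen-minute window is taken from the text.

```python
# Assumed figures (not from the patent): a sustained write rate of 50 MB/s
# and a LAG of 60 seconds, expressed as a time period.
write_rate_mb_s = 50
disaster_window_s = 15 * 60  # characteristic rolling-disaster evolution time
lag_s = 60                   # specified LAG, as a time period

# Memory of unit 44 should cover the transaction volume accepted over the
# rolling-disaster window plus the specified LAG:
required_mb = write_rate_mb_s * (disaster_window_s + lag_s)
print(required_mb)  # 48000 MB, i.e., about 48 GB under these assumptions
```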
- the amount of data that can be cached in unit 44 following a replication fault may be user-configurable (within the limits of the memory size of unit 44).
- processor 60 checks whether the memory of unit 44 is filled up to the maximum specified level.
- Processor 60 may take various measures when the memory of unit 44 is about to become full (or when the cached transactions are about to reach the maximum specified level). For example, processor 60 may issue an alert to the administrator using terminal 40. The administrator may perform various actions in response to such an alert. For example, the administrator may specify that the applications are allowed to continue running without caching transactions in unit 44. Alternatively, the administrator may specify that the subsequent transactions are rejected, and that the applications are therefore suspended.
- Identifiable faults typically comprise bugs that can be detected and reported by software logic running on the processor in question before it is reset. Typically although not necessarily, identifiable faults tend to occur in user space. When an identifiable software fault is detected, processor 60 typically causes the application server to continue operating even when mirroring is stopped, for a period of time needed for recovering from the fault. Unidentifiable faults typically comprise bugs that halt the processor in question and affect other software that may run on the processor. Typically although not necessarily, unidentifiable faults tend to occur in kernel space. In many cases, unidentifiable software faults are indistinguishable from hardware failures and are treated accordingly, as described above.
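- The distinction above between identifiable and unidentifiable software faults can be sketched as a simple decision. The function name and return values are illustrative assumptions.

```python
def on_software_fault(identifiable):
    """Sketch of the reaction to a software fault affecting replication."""
    if identifiable:
        # The fault was detected and reported by software logic (typically in
        # user space): keep the application server running while mirroring is
        # stopped, for the time needed to recover from the fault.
        return "continue-without-mirroring"
    # Unidentifiable faults (typically in kernel space) halt the processor and
    # are indistinguishable from hardware failures, so they are handled like
    # hardware failures, i.e., by the fault operational mode described above.
    return "treat-as-hardware-failure"
```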
- caching of transactions in disaster-proof storage unit 44 may fail. Such failure may be caused by a rolling disaster that hit the disaster-proof storage unit or its surroundings and is likely to hit the primary storage device a short time afterwards.
- caching in unit 44 may fail due to equipment failures that are not associated with a rolling disaster, e.g., failures in the disaster-proof storage unit or other system component (e.g., protection processor 48 or link 56). Caching may also sometimes fail due to a rolling disaster, but in many cases it is difficult to determine the cause when the caching failure is detected.
- processor 60 attempts to assess the cause for the caching failure and take appropriate measures.
- When processor 60 detects a failure to cache transactions in unit 44, the processor refuses to accept new transactions for storage from application servers 24 for a short blockage period.
- the length of this blockage period is chosen so that, if the caching failure is caused by a rolling disaster that hit unit 44, the disaster is likely to reach the primary storage device during this period.
- the blockage period is chosen to be sufficiently short, so that refusing to accept new transactions during this period will not cause suspension of the applications. This criterion is important, since recovering the applications after suspension takes a relatively long time and is noticeable to the application users.
- the short blockage period may be on the order of forty-five seconds long, although any other suitable length can also be used.
- the use of the short blockage period described above provides an interim step before blocking transactions for longer time periods (e.g., fifteen minutes) when replication to secondary storage device 32 fails too. As such, suspension of the applications is avoided unless absolutely necessary.
- the mechanism described above can be applied using the following logic:
- If caching in the disaster-proof storage unit fails but replication to the secondary storage device is functional, it is sufficient to block transaction acceptance for a short time period (e.g., 45 seconds) in order to assess whether the caching failure is caused by a local rolling disaster or by equipment failure.
- If both replication and caching fail, revert to the longer blockage period (e.g., fifteen minutes).
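- The blockage logic above can be sketched as follows. The function name is an assumption; the durations are the examples given in the text.

```python
def blockage_period_s(caching_failed, replication_failed):
    """Choose how long to refuse new transactions, in seconds."""
    if caching_failed and not replication_failed:
        # Short block: long enough for a local rolling disaster that hit
        # unit 44 to reach the primary site, short enough not to suspend
        # the applications.
        return 45
    if caching_failed and replication_failed:
        # Both protection paths are down: revert to the longer blockage
        # period used when replication to device 32 fails.
        return 15 * 60
    return 0  # no caching failure: no blockage needed
```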
In some embodiments, the acceptance or refusal of subsequent transactions from the application servers is managed by I/O status module 68. The I/O status module typically receives completion status indications from mirroring application 64, primary storage device 28 and protection processor 48, processes these indications according to certain logic, and generates a composite status indication that is sent to the application.
In the present context, a "positive status" means that the transaction is stored successfully, whereas a "negative status" means that it is not. A lack of response from a certain storage device within a certain time period is also considered a negative status.
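The composite status generation might be sketched as follows. This is a hypothetical illustration: the combination rule is inferred from the overall scheme (a transaction is positive if it reached primary storage and is protected by either the secondary device or the disaster-proof unit), and `None` models a lack of response within the allotted period:

```python
# Sketch of how module 68 could combine the three completion statuses
# into a single composite status for the application.

from typing import Optional

def composite_status(primary: Optional[bool],
                     secondary: Optional[bool],
                     protection: Optional[bool]) -> bool:
    """True = positive status, False = negative status."""
    def ok(status: Optional[bool]) -> bool:
        return status is True      # a missing response (None) is negative

    # Positive only if stored in primary storage AND backed up either in
    # the secondary device or in the disaster-proof cache.
    return ok(primary) and (ok(secondary) or ok(protection))
```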
In some embodiments, the method of Fig. 2 above can be implemented by operating the I/O status module according to the following logic:
- Module 68 receives from protection processor 48 a status indication specifying the success of caching the transaction in disaster-proof storage unit 44.
- In the presence of a replication fault, module 68 typically refrains from generating negative status indications to the application for a certain configurable time period (e.g., fifteen minutes, or any other suitable time period that is typically needed for identifying a rolling disaster).
- The I/O status module also receives indications of software faults, which affect the health of the replication functionality. If replication fails due to a software fault, module 68 generates a positive status indication to the application until the replication functionality recovers.
- When module 68 identifies a connectivity failure or other replication fault, it instructs the protection processor to stop deleting data from the disaster-proof storage unit.
- When module 68 receives an indication from the protection processor that the disaster-proof storage unit memory is about to become full, module 68 prompts the administrator via terminal 40. The administrator is requested to specify the appropriate action. When the memory of the disaster-proof storage unit is full, I/O status module 68 generates an appropriate status indication, as explained above.
- When module 68 identifies a failure to cache transactions in the disaster-proof storage unit, it generates a negative status (i.e., blocks transaction acceptance) for a short blockage period, e.g., 45 seconds, during which the cause of the caching failure is assessed.
- Following a negative status indication from the mirroring application in the presence of a failure to cache transactions in the disaster-proof storage unit, module 68 typically generates a negative status (i.e., blocks transaction acceptance) for a long blockage period, e.g., 15 minutes, during which the causes of the failures are assessed.
In some embodiments, the logic above can be implemented using the following software components:
- I/O Main_loop: A component representing the logic applied to each transaction. The main loop receives a transaction (e.g., an I/O write operation) from a certain application and forwards the transaction to the primary storage device, the mirroring application and the protection processor. The main loop then forwards the three resulting status indications to the I/O Status_generator component (below), so as to generate a composite status to the application.
- I/O Status_generator: A component that generates the composite status indication that is returned to the application.
- Allocate_Buffer: A component that allocates memory space within disaster-proof storage unit 44. The Allocate_Buffer component stops deleting data from unit 44 when instructed by the I/O status generation component.
- Watchdog: An example of a mechanism for detecting and reporting software malfunction.
The four software components use the following variables:
- DASD_status: Status indication returned by the primary storage device.
- Protection_status: Status indication returned by the protection processor.
- Replication_status: Status indication returned by the mirroring application.
- Store_Flag: A global flag shared by the I/O Status_generator and Allocate_Buffer components. When this flag is set, the Allocate_Buffer component stops deleting data from unit 44.
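A minimal sketch of the Store_Flag interaction between the two components; apart from Store_Flag itself, the class and method names below are hypothetical:

```python
# Sketch of how Store_Flag couples the I/O Status_generator and
# Allocate_Buffer components described above.

class AllocateBuffer:
    """Manages memory space within disaster-proof storage unit 44."""

    def __init__(self) -> None:
        self.cached: list = []     # transactions currently cached in unit 44
        self.store_flag = False    # set by the status generator on a fault

    def cache(self, txn: str) -> None:
        self.cached.append(txn)

    def on_replicated(self, txn: str) -> None:
        # Normally a successfully replicated transaction is deleted;
        # once Store_Flag is set, cached data is retained instead.
        if not self.store_flag and txn in self.cached:
            self.cached.remove(txn)

buf = AllocateBuffer()
buf.cache("t1")
buf.on_replicated("t1")    # normal mode: deleted after replication
buf.store_flag = True      # status generator detected a replication fault
buf.cache("t2")
buf.on_replicated("t2")    # fault mode: retained despite the replication ack
```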
I/O_Status_generator (DASD_status, Protection_status, Replication_status)
{
    If bad DASD_status or a DASD_status time-out occurred Then /* primary storage is bad */
        ...
    Protection = True /* indicate that data is cached successfully in disaster-proof unit */
    If Protection_status times out and a Protection_status SW error was not reported
    in the last 3 minutes Then /* not a time-out on account of a reboot */
        If not after the 45-second block for this error Then
            /* detected for the first time and suspected as a disaster */
            ...
    If Protection_status indicates a memory-near-full condition Then
        /* how to proceed when the memory of the disaster-proof unit fills */
        ...
}
Abstract
A method for data protection includes, in a first operational mode, sending data items for storage in a primary storage device (28) and in a secondary storage device (32), while temporarily caching the data items in a disaster-proof storage unit (44) and subsequently deleting the data items from the disaster-proof storage unit, wherein each data item is deleted from the disaster-proof storage unit upon successful storage of the data item in the secondary storage device. An indication of a fault related to storage of the data in the secondary storage device is received. Responsively to the indication, operation is switched to a second operational mode in which the data items are sent for storage at least in the primary storage device and are cached and retained in the disaster-proof storage unit irrespective of the successful storage of the data items in the secondary storage device.
Description
RESILIENT DATA STORAGE IN THE PRESENCE OF REPLICATION FAULTS AND
ROLLING DISASTERS
FIELD OF THE INVENTION
The present invention relates generally to data protection, and particularly to methods and systems for resilient data storage using disaster-proof storage devices.
BACKGROUND OF THE INVENTION
Various techniques for protecting data against disaster events are known in the art. For example, PCT International Publication WO 2006/111958, whose disclosure is incorporated herein by reference, describes a method for data protection, which includes accepting data for storage from one or more data sources. The data is sent for storage in a primary storage device and in a secondary storage device. While awaiting an indication of successful storage of the data in the secondary storage device, a record associated with the data is temporarily stored in a disaster-proof storage unit adjacent to the primary storage device. When an event damaging at least some of the data in the primary storage device occurs, the data is reconstructed using the record stored in the disaster-proof storage unit and at least part of the data stored in the secondary storage device.
SUMMARY OF THE INVENTION
An embodiment of the present invention provides a method for data protection, including: in a first operational mode, sending data items for storage in a primary storage device and in a secondary storage device, while temporarily caching the data items in a disaster-proof storage unit and subsequently deleting the data items from the disaster-proof storage unit, wherein each data item is deleted from the disaster-proof storage unit upon successful storage of the data item in the secondary storage device; receiving an indication of a fault related to storage of the data in the secondary storage device; and responsively to the indication, switching to operating in a second operational mode in which the data items are sent for storage at least in the primary storage device and are cached and retained in the disaster-proof storage unit irrespective of the successful storage of the data items in the secondary storage device.
In some embodiments, the data items include Input/Output (I/O) transactions that are received from one or more applications, and operating in the second operational mode
includes continuing to receive the I/O transactions from the applications while the fault is present. In an embodiment, switching to operating in the second operational mode includes operating in the second operational mode for a predefined time duration following the indication, and switching back to the first operational mode after the predefined time duration. In a disclosed embodiment, after switching to operating in the second operational mode, a notification related to memory unavailability in the disaster-proof storage device is received, and an action is performed responsively to the notification. Performing the action may include switching back to the first operational mode responsively to the notification, and/or prompting a user responsively to the notification. In a disclosed embodiment, performing the action includes selecting the action from a group of actions including refusing to accept subsequent data items following the notification, and storing the subsequent data items only in the primary storage device. Selecting the action may include choosing the action responsively to a predefined configuration parameter.
In some embodiments, the data items retained in the disaster-proof storage device are sent for storage in the secondary storage device following correction of the fault. In another embodiment, operating in the second operational mode includes allowing acceptance of subsequent data items for a predefined time period following the indication, responsively to determining that the fault is associated with an unidentifiable software failure. In yet another embodiment, operation in the first operational mode continues irrespective of the indication, responsively to determining that the fault is associated with an identifiable software failure.
In still another embodiment, the method includes refusing to accept subsequent data items for a predefined time period responsively to detecting a failure in caching the data items in a disaster-proof storage unit. In an embodiment, the data items are accepted from one or more software applications, and refusing to accept the subsequent data items includes initially refusing to accept the subsequent data items for a first time period that does not disrupt the software applications, and, responsively to an assessed cause of the failure, continuing to refuse to accept the subsequent data items for a second time period that is longer than the first time period.
There is additionally provided, in accordance with an embodiment of the present invention, a data protection apparatus, including: an interface, which is configured to receive data items from one or more data sources; and a processor, which is configured to operate in a first operational mode by sending the data items for storage in a primary storage device and in a secondary storage device, while
temporarily caching the data items in a disaster-proof storage unit and subsequently deleting the data items from the disaster-proof storage unit, wherein each data item is deleted from the disaster-proof storage unit upon successful storage of the data item in the secondary storage device, and which is configured, responsively to receiving an indication of a fault related to storage of the data in the secondary storage device, to switch to operating in a second operational mode, in which the data items are sent for storage at least in the primary storage device and are cached and retained in the disaster-proof storage unit, by instructing the disaster-proof storage unit to retain the data items irrespective of the successful storage of the data items in the secondary storage device. The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram that schematically illustrates a data protection system, in accordance with an embodiment of the present invention; and Fig. 2 is a flow chart that schematically illustrates a method for data protection, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
OVERVIEW
Many data protection systems replicate data in primary and secondary storage devices in order to protect the data against disaster events. In such a system, data replication in the secondary storage device may fail for various reasons, such as communication link failures, failures in the replication software or other system components, or because of a rolling disaster that caused the replication fault but has not yet hit the primary storage device.
One possible reaction to a replication fault may be to refuse to accept new data for storage. This sort of reaction, however, typically suspends the applications that produce the data. Another possible reaction is to continue accepting data, and to store the data only in the primary storage device without replication. A reaction of this sort does not suspend the applications, but on the other hand does not offer data protection.
Embodiments of the present invention that are described hereinbelow provide improved methods and systems for data protection in the presence of replication faults. In some embodiments, data items (e.g., I/O transactions) are accepted from one or more data sources. The data items are sent for storage in a primary storage device and a secondary
storage device. The data items are cached temporarily in a disaster-proof storage unit, which is typically adjacent to the primary storage device.
Under normal conditions, the data items are cached in the disaster-proof storage unit until they have been stored successfully in the secondary storage device. In other words, when a given data item is replicated successfully, it is deleted from the disaster-proof storage unit. Once a replication fault is detected, however, incoming data items are retained in the disaster-proof storage unit regardless of whether or not they have been replicated successfully in the secondary storage device. This mode of operation typically continues until the memory of the disaster-proof storage unit is full, typically on the order of fifteen minutes.

If caching in the disaster-proof storage unit fails for some reason, acceptance of additional data items is refused for a short time period that does not yet suspend the applications (e.g., 45 seconds). This short blockage period is used for assessing whether the failure is caused by a rolling disaster in the vicinity of the primary site, or by an equipment failure in or around the disaster-proof storage unit.

The methods and systems described herein are highly effective in mitigating temporary replication faults, such as communication or equipment faults. When using the disclosed techniques, data items received during a temporary replication fault are cached in the disaster-proof storage unit, and are replicated in the secondary storage device after the fault is corrected. This caching and replication process is performed without suspending the applications and without loss of data protection. If the replication fault is caused by a permanent fault or a rolling disaster, on the other hand, the data items can later be recovered from the disaster-proof storage unit and the secondary storage device.
SYSTEM DESCRIPTION
Fig. 1 is a block diagram that schematically illustrates a data protection system 20, in accordance with an embodiment of the present invention. System 20 accepts data items from one or more data sources, and stores the data items in a mirrored configuration in order to protect them against disaster events. In the present example, the data sources comprise one or more application servers 24, and the data items comprise Input/Output (I/O) transactions, which are produced by applications running on servers 24. More particularly, the data items may comprise I/O write operations.
System 20 stores the accepted transactions in a primary storage device 28, and replicates the transactions in a secondary storage device 32 via a communication link 36. The primary and secondary storage devices are typically distant from one another, such that
replication of the transactions protects the data against disaster events that may hit the primary storage device. The primary and secondary storage devices may comprise disks, magnetic tapes, computer memory devices and/or devices based on any other suitable storage technology. In some embodiments, the storage devices comprise internal processors that perform local data storage and retrieval-related functions. In the present embodiment, the primary and secondary storage devices comprise Direct Access Storage Devices (DASDs). Alternatively, however, the storage devices may comprise any other suitable storage device type.

In some embodiments, system 20 comprises an administrator terminal 40, using which an administrator or other user can control and configure the operation of the system. System 20 further comprises a disaster-proof storage unit 44, which is used for temporary caching of transactions that are in the process of being replicated and stored in secondary storage device 32. Unit 44 is typically adjacent to primary storage device 28, and is constructed so as to withstand and survive disaster events that may hit the primary storage device. Disaster-proof unit 44 is controlled by a protection processor 48. For example, the protection processor accepts from the primary storage device transactions for caching, and sends them to unit 44. The protection processor also receives from the primary storage device instructions to delete certain transactions from unit 44, for example when these transactions are known to be stored successfully in the secondary storage device. Protection processor 48 is connected to the primary storage device using a communication link 52, and to the disaster-proof storage unit using a communication link 56.
Primary storage device 28 comprises an interface 58 for interacting with application servers 24, and a storage processor 60 that carries out the methods described herein. In particular, interface 58 accepts transactions for storage from the application servers, and sends the application servers acknowledgements indicating that the transactions have been stored successfully.
Processor 60 comprises a mirroring application 64 and an I/O status module 68. The mirroring application manages the data replication functions of system 20. For example, the mirroring application receives transactions for storage via interface 58 and sends copies of the transactions to the primary storage device, the secondary storage device and the protection processor. The mirroring application also receives acknowledgements from the primary storage device, the secondary storage device and/or the protection processor, indicating that the transactions have been stored successfully. In response to these acknowledgements, the mirroring application sends acknowledgements to the application servers via interface 58,
and/or instructs the protection processor to delete certain transactions from the disaster-proof storage unit.
I/O status module 68 controls the behavior of the applications running on servers 24, during normal operation and in the presence of replication faults. In particular, module 68 controls whether the applications stop or continue sending transactions for storage. I/O module 68 receives completion status indications from mirroring application 64, from primary storage device 28 and from protection processor 48, notifying the I/O module whether a given transaction was stored successfully in the secondary storage device, the primary storage device and the disaster-proof storage unit, respectively. Based on the received status indications, module 68 sends completion status indications to the applications. Using these indications, the I/O status module can cause the applications to stop or continue sending transactions for storage. The functions of module 68 are described in greater detail further below.
The configuration shown in Fig. 1 is an example configuration, which is chosen purely for the sake of conceptual clarity. In alternative embodiments, various other suitable system configurations can be used. For example, system 20 may comprise multiple primary storage devices, secondary storage devices and/or disaster-proof storage units. As another example, some or all of the functions of processor 60 may be implemented on a processor that is external to primary storage device 28. In one embodiment, the functions of processor 60 and of processor 48 may be implemented in a common processor. As yet another example, protection processor 48 may be located between the data sources and the primary storage device, or between the primary storage device and the secondary storage device.
In alternative embodiments, system 20 may accept and store any other suitable kind of data items accepted from any other suitable kind of data source, such as, for example, communication transactions produced by a telephony system such as a Private Automatic Branch Exchange (PABX) or a telephony switch, and/or information produced by a surveillance system, security system or access control system such as a closed-circuit television (CCTV) system.
Additional aspects of the disaster-proof storage unit, mirroring application and protection processor, as well as methods and system configurations for data storage using such elements, are addressed in PCT International Publication 2006/111958, cited above, and in U.S. Patent Application Serial Number 12/228,315, which is assigned to the assignee of the present patent application and whose disclosure is incorporated herein by reference. Typically, processors 48 and 60 comprise general-purpose processors, which are programmed in software to carry out the functions described herein. The software may be downloaded to the processors
in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on tangible media, such as magnetic, optical, or electronic memory.
DATA PROTECTION IN THE PRESENCE OF REPLICATION FAULTS
Under normal conditions, system 20 stores each incoming I/O transaction in primary storage device 28, sends the transaction for storage in secondary storage device 32, and caches the transaction in disaster-proof storage unit 44 until the transaction is stored successfully in the secondary storage device. When a given transaction is stored successfully in the secondary storage device, the system deletes the copy of this transaction that is cached in disaster-proof storage unit 44. In other words, the disaster-proof storage unit caches only the transactions that are not yet stored successfully in the secondary storage device. Thus, at any given time, each transaction is backed-up in secondary storage device 32, in disaster-proof storage unit 44, or in both. If a disaster event damages the data stored in primary storage device 28, the data can be recovered based on the data stored in unit 44 and in secondary storage device 32.
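The invariant stated above — every acknowledged transaction is backed up in the secondary device, in the disaster-proof unit, or in both — can be sketched as follows. The container and function names are illustrative, not from the patent:

```python
# Sketch of the normal-condition caching scheme: a transaction is cached in
# the disaster-proof unit until the secondary device acknowledges it, so it
# always remains recoverable without the primary device.

secondary: set = set()    # transactions stored in secondary storage device 32
dp_cache: set = set()     # transactions cached in disaster-proof unit 44

def write(txn: str) -> None:
    dp_cache.add(txn)          # cache before replication completes

def on_secondary_ack(txn: str) -> None:
    secondary.add(txn)
    dp_cache.discard(txn)      # delete the cached copy only after the ack

def recoverable(txn: str) -> bool:
    return txn in secondary or txn in dp_cache

write("txn-1")
assert recoverable("txn-1")    # held by the disaster-proof unit
on_secondary_ack("txn-1")
assert recoverable("txn-1")    # now held by the secondary device
```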
The above-described replication scheme can sometimes be disrupted by various kinds of replication faults that occur in system 20. In the present context, the term "replication fault" means any fault that affects the storage of data items in secondary storage device 32, in the disaster-proof storage unit 44, or in both.
In some cases, a replication fault may be caused by a rolling disaster event, i.e., a disaster event that evolves over time and gradually affects additional elements of system 20. In some scenarios, a rolling disaster event may affect the replication functionality of the system before it hits the primary storage device. During this interim time period, such an event manifests itself as a replication fault. When a rolling disaster of this sort finally hits the primary site, the data of write operations that occurred in this interim period are typically lost (assuming the fault disrupted replication to the remote site and the applications running on application server 24 were not stopped). The techniques described herein prevent this data loss.
In other scenarios, however, replication faults may be caused by various kinds of equipment faults in system 20, such as communication faults in communication links 36, 52 and 56, and/or software or hardware faults in the various elements of system 20. These kinds of replication faults may sometimes be associated with a rolling disaster, but in many cases it is difficult to determine the cause when the fault is detected. In these scenarios, if replication is disrupted but applications are not stopped, and assuming the fault is later corrected and
replication is resumed, data will not be lost since no disaster will typically follow the detection of the fault.
In many cases, if the disaster originates outside the premises of the primary site
(sometimes referred to as a regional rolling disaster), the delay from the time at which a rolling disaster event damages the replication functionality to the time at which the primary storage device is hit does not exceed fifteen minutes. When a disaster finally hits the primary site, the time interval from detection of a first fault in the primary site until all components fail is typically on the order of 45 seconds. This sort of event is referred to as a local rolling disaster.
In some embodiments of the present invention, when a replication fault that disrupts replication to secondary storage device 32 occurs, the primary storage device does not immediately refuse to receive incoming transactions from application servers 24. Instead, the primary storage device continues to receive transactions for storage, but retains these transactions in disaster-proof storage unit 44. In other words, upon detection of a replication fault, processor 60 in the primary storage device instructs protection processor 48 to stop deleting transactions from unit 44, irrespective of whether or not these transactions are stored successfully in the secondary storage device. In some embodiments, this mode of operation continues until the memory of disaster-proof storage unit 44 fills up (e.g., on the order of fifteen minutes), by which time, if a disaster did not hit the primary site, it is likely that the fault is not associated with a disaster. In this latter case, replication may be stopped and the applications can continue writing data to primary DASD 28 without protection to secondary DASD 32.
When using this technique, temporary replication faults do not cause suspension of the applications running on servers 24. The applications do not suffer from unnecessary outages and may continue to operate during such faults. Since the incoming transactions are backed-up in the disaster-proof storage device for the period of time suspected to be a rolling disaster period, data protection is not compromised.
Fig. 2 is a flow chart that schematically illustrates a method for data protection, in accordance with an embodiment of the present invention. The method begins with system 20 operating in a normal operational mode, at a normal operation step 70. In this mode of operation, storage processor 60 in primary storage device 28 accepts transactions from application servers 24. Processor 60 stores a first copy of each transaction in the primary storage device, and sends a second copy of each transaction for mirrored storage in the secondary storage device. Processor 60 instructs protection processor 48 to cache a third copy of each transaction in disaster-proof storage unit 44. Upon successful storage of a given
transaction in the secondary storage device (e.g., in response to an acknowledgement received from the secondary storage device), processor 60 instructs protection processor 48 to delete the copy of this transaction from unit 44.
Processor 60 checks whether a replication fault is present, at a fault checking step 74. If replication functions properly, i.e., if the transactions are stored successfully in the secondary storage device, the method loops back to step 70 above, and system 20 continues to operate in the normal operational mode.
If processor 60 receives an indication of a replication fault, it switches to operate in a fault operational mode, at a fault operation step 78. In this mode of operation, processor 60 continues to accept transactions from application servers 24, to store them in the primary storage device and to cache them in disaster-proof unit 44. Unlike the normal operational mode, however, in the fault operational mode there is no guarantee that the transactions are replicated in the secondary storage device. Therefore, processor 60 stops instructing the protection processor to delete transactions from disaster-proof storage unit 44. In other words, processor 60 retains the transactions cached in unit 44 regardless of whether or not they are stored successfully in the secondary storage device.
Processor 60 may switch back from the fault operational mode to the normal operational mode according to various conditions, such as after a predefined time period (e.g., fifteen minutes), in response to receiving an indication that the replication fault has been corrected, or using any other suitable condition. In the present example, processor 60 checks whether the replication fault has been corrected or whether a predefined time out expired, at a return checking step 82. If the fault is no longer present, the method loops back to step 70 above, and processor 60 returns to operate in the normal operational mode.
If, on the other hand, the replication fault persists, processor 60 checks whether the memory of the disaster-proof storage unit is full or about to become full, at a memory full checking step 86. If the disaster-proof storage unit still has free memory space to store subsequent transactions, the method loops back to step 78 above, and processor 60 continues to operate in the fault operational mode.
If the memory of disaster-proof unit 44 is full, processor 60 cannot continue to cache transactions in unit 44. Replication to the secondary site is also not available because of the replication fault. At this stage, processor 60 begins to operate in a protection-less operational mode 90. The functionality of this mode may vary, for example, depending on the importance of mirroring the transactions. In some embodiments, system 20 has a global configuration parameter denoted "mirroring criticality."
When the mirroring criticality is high, processor 60 refuses to accept incoming transactions when operating in the protection-less mode, since it is not able to protect these transactions. This refusal, however, typically suspends the applications. In this sort of operation, if a disaster hits the primary site, the transactions can be recovered using the disaster-proof unit and the secondary storage device without data loss, even if the fault is not associated with a rolling disaster.
When the mirroring criticality is low, processor 60 continues to accept transactions without mirroring them in the secondary storage device or caching them in the disaster-proof storage unit. The transactions are stored in the primary storage device, typically without protection. This sort of operation, however, allows the applications to continue running. This sort of operation may be based on an assumption that if a disaster did not hit the primary site up to this point in time, the replication fault is probably not associated with a disaster event.
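Taken together, the Fig. 2 flow can be sketched as a small state machine. This is a hedged illustration: the state names, function signatures and string values are assumptions, while the transitions follow steps 70-90 as described above:

```python
# Sketch of the operational-mode transitions of Fig. 2.

NORMAL, FAULT, PROTECTIONLESS = "normal", "fault", "protection-less"

def next_mode(mode: str, replication_ok: bool, fault_corrected: bool,
              timeout_expired: bool, memory_full: bool) -> str:
    if mode == NORMAL:
        # Step 74: stay in normal mode while replication functions properly.
        return NORMAL if replication_ok else FAULT
    if mode == FAULT:
        # Step 82: return to normal mode once the fault is corrected
        # (or after a predefined timeout, e.g., fifteen minutes).
        if fault_corrected or timeout_expired:
            return NORMAL
        # Step 86: disaster-proof memory exhausted, so neither caching
        # nor replication is available.
        if memory_full:
            return PROTECTIONLESS
        return FAULT                # step 78: keep retaining cached data
    return PROTECTIONLESS

def accept_transaction(mode: str, mirroring_criticality: str) -> bool:
    # In protection-less mode 90 the behavior depends on the global
    # "mirroring criticality" configuration parameter.
    if mode == PROTECTIONLESS:
        return mirroring_criticality == "low"   # high: refuse, apps suspend
    return True
```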
In some embodiments, the specification of system 20 allows a certain maximum amount of data to be lost in the event of a disaster. The allowed value is referred to as "LAG" and may be expressed, for example, as a maximum number of transactions that may be lost, a maximum data size (e.g. in MB) that may be lost, or as a maximum time period following the disaster in which transactions may be lost. The actual amount of data that is lost following a given disaster event is referred to as a Recovery Point Objective (RPO), and is typically expressed in the same units as the LAG. When implementing the method of Fig. 2, the memory size of disaster-proof unit 44 is typically designed to enable caching of transactions during a time period that is characteristic of evolution of a rolling disaster. Consider, for example, an embodiment in which processor 60 is configured to bypass replication mode after fifteen minutes. In this embodiment, the memory in unit 44 should typically be large enough to cache the transaction volume accepted over a period of fifteen minutes plus the specified LAG.
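As a worked example of this sizing rule — all numeric values below are illustrative assumptions; only the fifteen-minute window and the LAG concept come from the description:

```python
# Memory sizing for the disaster-proof unit: cache capacity must cover the
# transaction volume accepted over the rolling-disaster window plus the LAG.

def required_cache_mb(write_rate_mb_per_s: float,
                      disaster_window_s: float,
                      lag_mb: float) -> float:
    """Memory (in MB) needed to cache the transaction volume accepted over
    the rolling-disaster window, plus the specified LAG."""
    return write_rate_mb_per_s * disaster_window_s + lag_mb

# E.g., 10 MB/s of incoming writes, a 15-minute window and a 100 MB LAG
# call for 10 * 900 + 100 = 9100 MB of disaster-proof memory.
size_mb = required_cache_mb(10.0, 15 * 60, 100.0)
```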
In some embodiments, the amount of data that can be cached in unit 44 following a replication fault may be user-configurable (within the limits of the memory size of unit 44).
For example, an administrator may set this value using terminal 40. In these embodiments, at step 86 of the method of Fig. 2, processor 60 checks whether the memory of unit 44 is filled up to the maximum specified level.
Processor 60 may take various measures when the memory of unit 44 is about to become full (or when the cached transactions are about to reach the maximum specified level). For example, processor 60 may issue an alert to the administrator using terminal 40. The administrator may perform various actions in response to such an alert. For example, the
administrator may specify that the applications are allowed to continue running without caching transactions in unit 44. Alternatively, the administrator may specify that the subsequent transactions are rejected, and that the applications are therefore suspended.
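The two administrator responses described above amount to a simple policy choice. A minimal sketch, in which the policy names and returned fields are illustrative assumptions:

```python
def on_cache_limit_reached(policy: str) -> dict:
    """Administrator-selectable responses when the disaster-proof unit
    reaches its configured fill level. 'continue' lets the applications
    keep running without caching further transactions; 'reject' refuses
    subsequent transactions, which suspends the applications.
    Policy names are illustrative, not taken from the patent."""
    if policy == "continue":
        return {"accept_transactions": True, "cache_in_unit": False}
    if policy == "reject":
        return {"accept_transactions": False, "cache_in_unit": False}
    raise ValueError(f"unknown policy: {policy}")
```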
Software faults can often be classified into identifiable and unidentifiable faults. Identifiable faults typically comprise bugs that can be detected and reported by software logic running on the processor in question before it is reset. Typically although not necessarily, identifiable faults tend to occur in user space. When an identifiable software fault is detected, processor 60 typically causes the application server to continue operating even when mirroring is stopped, for a period of time needed for recovering from the fault. Unidentifiable faults typically comprise bugs that halt the processor in question and affect other software that may run on the processor. Typically although not necessarily, unidentifiable faults tend to occur in kernel space. In many cases, unidentifiable software faults are indistinguishable from hardware failures and are treated accordingly, as described above.
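The fault classification above reduces to a two-way dispatch. A minimal sketch, with illustrative action names:

```python
def handle_software_fault(identifiable: bool) -> str:
    """Two-way dispatch for software faults, per the classification above.
    Identifiable faults (typically user-space bugs that software logic can
    detect and report before a reset) let the application server keep
    operating while the fault is recovered; unidentifiable faults
    (typically kernel-space halts) are indistinguishable from hardware
    failures and are treated as such. Action names are illustrative."""
    if identifiable:
        return "continue_during_recovery"
    return "treat_as_hardware_failure"
```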
In some cases, caching of transactions in disaster-proof storage unit 44 may fail. Such a failure may be caused by a rolling disaster that has hit the disaster-proof storage unit or its surroundings and is likely to hit the primary storage device a short time afterwards. On the other hand, caching in unit 44 may fail due to equipment failures that are not associated with a rolling disaster, e.g., failures in the disaster-proof storage unit or in another system component (e.g., protection processor 48 or link 56). In many cases, it is difficult to determine the cause at the time the caching failure is detected.
In some embodiments, processor 60 attempts to assess the cause of the caching failure and take appropriate measures. In an example embodiment, when processor 60 detects a failure to cache transactions in unit 44, the processor refuses to accept new transactions for storage from application servers 24 for a short blockage period. The length of this blockage period is chosen so that, if the caching failure is caused by a rolling disaster that hit unit 44, the disaster is likely to reach the primary storage device during this period. On the other hand, the blockage period is chosen to be sufficiently short, so that refusing to accept new transactions during this period will not cause suspension of the applications. This criterion is important, since recovering the applications after suspension takes a relatively long time and is noticeable to the application users.
In a typical implementation, the short blockage period may be on the order of forty-five seconds, although any other suitable length can also be used. The use of the short blockage period described above provides an interim step before blocking transactions for
longer time periods (e.g., fifteen minutes) when replication to secondary storage device 32 fails too. As such, suspension of the applications is avoided unless absolutely necessary. The mechanism described above can be applied using the following logic:
■ If both replication to the secondary storage device and caching in the disaster-proof storage unit function properly, operate in the normal operational mode.
■ If caching in the disaster-proof storage unit fails, and as long as replication to the secondary storage device is functional, it is sufficient to block transaction acceptance for a short time period (e.g., 45 seconds) in order to assess whether the caching failure is caused by a local rolling disaster or by equipment failure.
■ If both replication and caching fail, revert to the longer blockage period (e.g., fifteen minutes).
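The decision logic in the bullets above can be sketched as a small dispatch function. The mode names are illustrative, and the fourth case (replication fails while caching is healthy) is drawn from the earlier description of the method of Fig. 2:

```python
def choose_mode(caching_ok: bool, replication_ok: bool) -> str:
    """Map the health of the two protection paths to an operating mode.
    Block lengths follow the example figures in the text
    (45 seconds / fifteen minutes); all names are illustrative."""
    if caching_ok and replication_ok:
        return "normal"               # both protection paths healthy
    if replication_ok:
        return "block_45s"            # caching failed: short block to assess
                                      # a possible local rolling disaster
    if caching_ok:
        return "cache_and_continue"   # replication failed: retain data in
                                      # the disaster-proof unit (Fig. 2)
    return "block_15min"              # both failed: longer blockage period
```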
I/O STATUS MODULE FUNCTIONALITY
As noted above, the acceptance or refusal of subsequent transactions from the application servers is managed by I/O status module 68. For a given transaction, the I/O status module typically receives completion status indications from mirroring application 64, primary storage device 28 and protection processor 48, processes these indications according to certain logic, and generates a composite status indication that is sent to the application. For both received and generated status indications, "positive status" means that the transaction is stored successfully, and "negative status" means it is not. Typically, a lack of response from a certain storage device for a certain time period is also considered a negative status. In some embodiments, the method of Fig. 2 above can be implemented by operating the I/O status module according to the following logic:
■ Module 68 receives from protection processor 48 a status indication specifying the success of caching the transaction in disaster-proof storage unit 44.
■ As long as the status indication received from the protection processor is positive, a negative status indication from the mirroring application will not trigger rejection of subsequent transactions from the applications.
■ Following a negative status indication from the mirroring application, module 68 typically refrains from generating negative status indications to the application for a certain configurable time period (e.g., fifteen minutes, or any other suitable time period that is typically needed for identifying a rolling disaster).
■ The I/O status module also receives indications of software faults, which affect the health of the replication functionality. If replication fails due to a software fault, module 68
generates a positive status indication to the application until the replication functionality recovers.
■ When module 68 identifies a connectivity failure or other replication fault, it instructs the protection processor to stop deleting data from the disaster-proof storage unit.
■ When module 68 receives an indication from the protection processor that the disaster-proof storage unit memory is about to become full, module 68 prompts the administrator via terminal 40. The administrator is requested to specify the appropriate action.
■ When the memory of the disaster-proof storage unit is full, I/O status module 68 generates an appropriate status indication, as explained above.
■ When module 68 identifies a failure to cache transactions in the disaster-proof storage unit, it generates a negative status (i.e., blocks transaction acceptance) for a short blockage period, e.g., 45 seconds, during which the cause of the caching failure is assessed.
■ Following a negative status indication from the mirroring application in the presence of a failure to cache transactions in the disaster-proof storage unit, module 68 typically generates a negative status (i.e., blocks transaction acceptance) for a long blockage period, e.g., 15 minutes, during which the causes for the failures are assessed.
EXAMPLE PSEUDO-CODE
This section provides example pseudo-code of four software components, which can be used to implement the methods described herein. The main functions of the four software components are as follows:
■ I/O Main_loop: A component representing the logic applied to each transaction. The main loop receives a transaction (e.g., an I/O write operation) from a certain application and forwards the transaction to the primary storage device, the mirroring application and the protection processor. The main loop forwards the three resulting status indications to the I/O status generator component (below), so as to generate a composite status to the application.
■ I/O Status_generator: A component that generates a composite status indication to the application, based on the status indications received from the primary storage device, the mirroring application and the protection processor.
■ Allocate_Buffer: A component that allocates memory space within disaster-proof storage unit 44. The Allocate_Buffer component stops deleting data from unit 44 when instructed by the I/O status generator component.
■ Watchdog: An example of a mechanism for detecting and reporting software malfunction.
The four software components use the following variables:
■ DASD_status: Status indication returned by the primary storage device.
■ Protection_status: Status indication returned by the protection processor.
■ Replication_status: Status indication returned by the mirroring application.
■ Store_Flag: A global flag shared by the I/O Status_generator and Allocate_Buffer components. When this flag is set, the Allocate_Buffer component stops deleting data from unit 44.
■ Continue_with_black_box_full: A global flag, which specifies the status to be returned to the application when the memory of the disaster-proof storage unit is full.
■ Protection: A global flag, which specifies whether or not caching in unit 44 functions properly.
I/O_Main_loop
    Set timer = false
    Do forever
    {
        Accept transaction from application server
        Forward transaction to mirroring application, wait for replication_status
        Forward transaction to primary DASD, wait for DASD_status
        Forward transaction to protection processor, wait for protection_status
        Wait until all status indications are received or until time-out expires
        status = I/O_Status_Generator(DASD_status, protection_status, replication_status)
        Forward status to application
    }
    End Do
I/O_Status_Generator(DASD_status, protection_status, replication_status)
{
    If bad DASD_status or DASD_status time-out occurred Then /* primary storage is bad */
        Return check_status
    If in block period Then /* wait and see if rolling disaster is in progress */
        Return check_status
    protection = True /* indicate that data is cached successfully in disaster-proof unit */
    If protection_status times out and protection_status SW error was not reported in last 3 minutes Then /* not a time-out on account of reboot */
        If not after 45-second block for this error Then /* detected for the first time and suspected as disaster */
        {
            Block all writes for 45 seconds
            Return check_status
        }
        Else
            protection = False /* failure in disaster-proof unit, not disaster */
    If protection_status indicates HW error or SW error Then
        If not after 45-second block for this error Then
        {
            Block all writes for 45 seconds
            Return check_status
        }
        Else
            protection = False
    If protection_status indicates memory-near-full condition Then /* how to proceed when memory of disaster-proof unit fills */
        Prompt user to set Continue_with_black_box_full to true or false
    If protection_status indicates memory-full condition and Continue_with_black_box_full = false Then
        Return check_status
    If replication_status indicates any error Then
        If protection = True Then /* regardless of the type of error (HW, SW or time-out), return good status and stop discarding data from disaster-proof unit */
        {
            Set Store_Flag to Do_not_discard
            Return good_status
        }
        Else
        {
            If not after 15-minute block for this error Then
            {
                Block all writes for 15 minutes /* to account for propagation of disaster from remote site */
                Return check_status
            }
            Else /* bypass replication and continue unprotected; 15 minutes have passed and no disaster has occurred */
            {
                Set Store_Flag to Do_not_discard
                Return good_status
            }
        }
    Return good_status
}
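For readers who prefer executable code, the status-generator pseudo-code can be approximated by the following Python sketch. The 45-second and 15-minute block bookkeeping is reduced to boolean flags in a state dictionary (a real implementation would track clocks), and all identifiers are illustrative:

```python
GOOD, CHECK = "good_status", "check_status"

def io_status_generator(dasd_ok: bool, protection_status: str,
                        replication_ok: bool, state: dict) -> str:
    """Approximation of I/O_Status_Generator. protection_status is one
    of: "ok", "timeout", "hw_error", "sw_error", "near_full", "full"."""
    if not dasd_ok:
        return CHECK                        # primary storage is bad
    if state.get("in_block_period"):
        return CHECK                        # wait and see whether a rolling
                                            # disaster is in progress
    protection = True                       # assume caching succeeded
    if protection_status in ("timeout", "hw_error", "sw_error"):
        if not state.get("after_45s_block"):
            state["in_block_period"] = True     # block writes for 45 seconds
            return CHECK
        protection = False                  # unit failure, not a disaster
    if protection_status == "full" and not state.get("continue_when_full"):
        return CHECK
    if not replication_ok:
        if protection:
            state["store_flag"] = "do_not_discard"
            return GOOD                     # data is safe in the unit
        if not state.get("after_15min_block"):
            state["in_block_period"] = True     # block writes for 15 minutes
            return CHECK
        state["store_flag"] = "do_not_discard"
        return GOOD                         # bypass replication, unprotected
    return GOOD
```

As in the pseudo-code, a positive caching status masks a replication failure (good status is returned while the unit retains the data), whereas a caching failure triggers the short block before the longer one.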
Although the embodiments described herein mainly address data protection schemes for mitigating replication faults, the methods and systems described herein can also be used in other applications, such as in database log protection and in various other data protection schemes.
It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
Claims
1. A method for data protection, comprising: in a first operational mode, sending data items for storage in a primary storage device and in a secondary storage device, while temporarily caching the data items in a disaster-proof storage unit and subsequently deleting the data items from the disaster-proof storage unit, wherein each data item is deleted from the disaster-proof storage unit upon successful storage of the data item in the secondary storage device; receiving an indication of a fault related to storage of the data in the secondary storage device; and responsively to the indication, switching to operating in a second operational mode in which the data items are sent for storage at least in the primary storage device and are cached and retained in the disaster-proof storage unit irrespective of the successful storage of the data items in the secondary storage device.
2. The method according to claim 1, wherein the data items comprise Input/Output (I/O) transactions that are received from one or more applications, and wherein operating in the second operational mode comprises continuing to receive the I/O transactions from the applications while the fault is present.
3. The method according to claim 1, wherein switching to operating in the second operational mode comprises operating in the second operational mode for a predefined time duration following the indication, and switching back to the first operational mode after the predefined time duration.
4. The method according to claim 1, and comprising, after switching to operating in the second operational mode, receiving a notification related to memory unavailability in the disaster-proof storage device, and performing an action responsively to the notification.
5. The method according to any of claims 1-4, wherein performing the action comprises switching back to the first operational mode responsively to the notification.
6. The method according to any of claims 1-4, wherein performing the action comprises prompting a user responsively to the notification.
7. The method according to any of claims 1-4, wherein performing the action comprises selecting the action from a group of actions comprising: refusing to accept subsequent data items following the notification; and storing the subsequent data items only in the primary storage device.
8. The method according to claim 7, wherein selecting the action comprises choosing the action responsively to a predefined configuration parameter.
9. The method according to any of claims 1-4, and comprising, following correction of the fault, sending the data items retained in the disaster-proof storage device for storage in the secondary storage device.
10. The method according to any of claims 1-4, wherein operating in the second operational mode comprises allowing acceptance of subsequent data items for a predefined time period following the indication, responsively to determining that the fault is associated with an unidentifiable software failure.
11. The method according to any of claims 1-4, and comprising continuing to operate in the first operational mode irrespective of the indication responsively to determining that the fault is associated with an identifiable software failure.
12. The method according to any of claims 1-4, and comprising refusing to accept subsequent data items for a predefined time period responsively to detecting a failure in caching the data items in a disaster-proof storage unit.
13. The method according to claim 12, wherein the data items are accepted from one or more software applications, and wherein refusing to accept the subsequent data items comprises: initially refusing to accept the subsequent data items for a first time period that does not disrupt the software applications; and responsively to an assessed cause of the failure, continuing to refuse to accept the subsequent data items for a second time period that is longer than the first time period.
14. A data protection apparatus, comprising: an interface, which is configured to receive data items from one or more data sources; and a processor, which is configured to operate in a first operational mode by sending the data items for storage in a primary storage device and in a secondary storage device, while temporarily caching the data items in a disaster-proof storage unit and subsequently deleting the data items from the disaster-proof storage unit, wherein each data item is deleted from the disaster-proof storage unit upon successful storage of the data item in the secondary storage device, and which is configured, responsively to receiving an indication of a fault related to storage of the data in the secondary storage device, to switch to operating in a second operational mode, in which the data items are sent for storage at least in the primary storage device and are cached and retained in the disaster-proof storage unit, by instructing the disaster-proof storage unit to retain the data items irrespective of the successful storage of the data items in the secondary storage device.
15. The apparatus according to claim 14, wherein the data items comprise Input/Output (I/O) transactions, wherein the interface is configured to receive the I/O transactions from one or more applications, and wherein the processor is configured to continue to receive the I/O transactions from the applications when operating in the second operational mode, while the fault is present.
16. The apparatus according to claim 14, wherein the processor is configured to operate in the second operational mode for a predefined time duration following the indication, and to switch back to the first operational mode after the predefined time duration.
17. The apparatus according to claim 14, wherein the processor is configured, after switching to operating in the second operational mode, to receive a notification related to memory unavailability in the disaster-proof storage device, and to perform an action responsively to the notification.
18. The apparatus according to any of claims 14-17, wherein the processor is configured to switch back to the first operational mode responsively to the notification.
19. The apparatus according to any of claims 14-17, wherein the processor is configured to prompt a user responsively to the notification.
20. The apparatus according to any of claims 14-17, wherein the processor is configured to select the action from a group of actions comprising: refusing to accept subsequent data items following the notification; and storing the subsequent data items only in the primary storage device.
21. The apparatus according to claim 20, wherein the processor is configured to select the action responsively to a predefined configuration parameter.
22. The apparatus according to any of claims 14-17, wherein the processor is configured to send the data items retained in the disaster-proof storage device for storage in the secondary storage device following correction of the fault.
23. The apparatus according to any of claims 14-17, wherein the processor is configured to allow acceptance of subsequent data items for a predefined time period following the indication, responsively to determining that the fault is associated with an unidentifiable software failure.
24. The apparatus according to any of claims 14-17, wherein the processor is configured to continue to operate in the first operational mode irrespective of the indication responsively to determining that the fault is associated with an identifiable software failure.
25. The apparatus according to any of claims 14-17, wherein the processor is configured to refuse to accept subsequent data items for a predefined time period responsively to detecting a failure in caching the data items in the disaster-proof storage unit.
26. The apparatus according to claim 25, wherein the data sources comprise one or more software applications, and wherein the processor is configured to initially refuse to accept the subsequent data items for a first time period that does not disrupt the software applications, and, responsively to an assessed cause of the failure, to continue to refuse to accept the subsequent data items for a second time period that is longer than the first time period.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP09750209A EP2286343A4 (en) | 2008-05-19 | 2009-05-10 | Resilient data storage in the presence of replication faults and rolling disasters |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12828308P | 2008-05-19 | 2008-05-19 | |
US61/128,283 | 2008-05-19 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2009141752A2 true WO2009141752A2 (en) | 2009-11-26 |
WO2009141752A3 WO2009141752A3 (en) | 2010-01-14 |
Family
ID=41317297
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2009/051919 WO2009141752A2 (en) | 2008-05-19 | 2009-05-10 | Resilient data storage in the presence of replication faults and rolling disasters |
Country Status (3)
Country | Link |
---|---|
US (1) | US8015436B2 (en) |
EP (1) | EP2286343A4 (en) |
WO (1) | WO2009141752A2 (en) |
Family Cites Families (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5170480A (en) | 1989-09-25 | 1992-12-08 | International Business Machines Corporation | Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time |
US5027104A (en) * | 1990-02-21 | 1991-06-25 | Reid Donald J | Vehicle security device |
US5544347A (en) * | 1990-09-24 | 1996-08-06 | Emc Corporation | Data storage system controlled remote data mirroring with respectively maintained data indices |
GB2273180A (en) * | 1992-12-02 | 1994-06-08 | Ibm | Database backup and recovery. |
JP2840511B2 (en) * | 1992-12-10 | 1998-12-24 | 富士通株式会社 | Error recovery processing apparatus and method for subsystem using magnetic tape device |
US5799141A (en) * | 1995-06-09 | 1998-08-25 | Qualix Group, Inc. | Real-time data protection system and method |
US5623597A (en) * | 1995-06-15 | 1997-04-22 | Elonex Ip Holdings Ltd. | Secure data storage system for a computer wherein a heat transfer apparatus cools a data storage unit in a fireproof safe in absence of a fire and ceases transfer in the event of a fire |
US5841768A (en) * | 1996-06-27 | 1998-11-24 | Interdigital Technology Corporation | Method of controlling initial power ramp-up in CDMA systems by using short codes |
US5724501A (en) * | 1996-03-29 | 1998-03-03 | Emc Corporation | Quick recovery of write cache in a fault tolerant I/O system |
US5889935A (en) * | 1996-05-28 | 1999-03-30 | Emc Corporation | Disaster control features for remote data mirroring |
RU2128854C1 (en) | 1996-08-30 | 1999-04-10 | Летно-исследовательский институт им.М.М.Громова | System of crew support in risky situations |
US6105078A (en) * | 1997-12-18 | 2000-08-15 | International Business Machines Corporation | Extended remote copying system for reporting both active and idle conditions wherein the idle condition indicates no updates to the system for a predetermined time period |
US6226651B1 (en) * | 1998-03-27 | 2001-05-01 | International Business Machines Corporation | Database disaster remote site recovery |
US6324654B1 (en) * | 1998-03-30 | 2001-11-27 | Legato Systems, Inc. | Computer network remote data mirroring system |
US6144999A (en) * | 1998-05-29 | 2000-11-07 | Sun Microsystems, Incorporated | Method and apparatus for file system disaster recovery |
US6260125B1 (en) * | 1998-12-09 | 2001-07-10 | Ncr Corporation | Asynchronous write queues, reconstruction and check-pointing in disk-mirroring applications |
US6389552B1 (en) * | 1998-12-31 | 2002-05-14 | At&T Corp | Methods and systems for remote electronic vaulting |
US6158833A (en) * | 1999-09-11 | 2000-12-12 | Schwab Corporation | Fire-resistant computer storage apparatus |
TW454120B (en) * | 1999-11-11 | 2001-09-11 | Miralink Corp | Flexible remote data mirroring |
US6629264B1 (en) * | 2000-03-30 | 2003-09-30 | Hewlett-Packard Development Company, L.P. | Controller-based remote copy system with logical unit grouping |
US6658590B1 (en) * | 2000-03-30 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Controller-based transaction logging system for data recovery in a storage area network |
US7111189B1 (en) * | 2000-03-30 | 2006-09-19 | Hewlett-Packard Development Company, L.P. | Method for transaction log failover merging during asynchronous operations in a data storage network |
US6658540B1 (en) * | 2000-03-31 | 2003-12-02 | Hewlett-Packard Development Company, L.P. | Method for transaction command ordering in a remote data replication system |
US20010047412A1 (en) * | 2000-05-08 | 2001-11-29 | Weinman Joseph B. | Method and apparatus for maximizing distance of data mirrors |
MXPA02012065A (en) | 2000-06-05 | 2003-04-25 | Miralink Corp | Flexible remote data mirroring. |
US7232197B2 (en) * | 2000-08-16 | 2007-06-19 | Davis William P | Fire-safe electronic data storage protection device |
US20020162112A1 (en) * | 2001-02-21 | 2002-10-31 | Vesta Broadband Services, Inc. | PC-based virtual set-top box for internet-based distribution of video and other data |
AU2002245678C1 (en) * | 2001-03-12 | 2005-10-13 | Honeywell International, Inc. | Method of recovering a flight critical computer after a radiation event |
US7103586B2 (en) * | 2001-03-16 | 2006-09-05 | Gravic, Inc. | Collision avoidance in database replication systems |
US7478266B2 (en) * | 2001-05-21 | 2009-01-13 | Mudalla Technology, Inc. | Method and apparatus for fast transaction commit over unreliable networks |
US20030014523A1 (en) * | 2001-07-13 | 2003-01-16 | John Teloh | Storage network data replicator |
RU2221177C2 (en) | 2001-09-03 | 2004-01-10 | Тихомиров Александр Григорьевич | Device to protect objects from impact loads |
US6859865B2 (en) * | 2001-11-09 | 2005-02-22 | Nortel Networks Limited | System and method for removing latency effects in acknowledged data transfers |
US7055056B2 (en) * | 2001-11-21 | 2006-05-30 | Hewlett-Packard Development Company, L.P. | System and method for ensuring the availability of a storage system |
US7482928B2 (en) * | 2001-12-28 | 2009-01-27 | Private Pallet Security Systems, Llc | Mini pallet-box moving container |
JP2003316522A (en) * | 2002-04-26 | 2003-11-07 | Hitachi Ltd | Computer system and method for controlling the same system |
US6842825B2 (en) * | 2002-08-07 | 2005-01-11 | International Business Machines Corporation | Adjusting timestamps to preserve update timing information for cached data objects |
US6976186B1 (en) * | 2002-08-27 | 2005-12-13 | At&T Corp. | Asymmetric data mirroring |
JP2004086721A (en) * | 2002-08-28 | 2004-03-18 | Nec Corp | Data reproducing system, relay system, data transmission/receiving method, and program for reproducing data in storage |
US20040059844A1 (en) * | 2002-09-20 | 2004-03-25 | Woodhead Industries, Inc. | Network active I/O module with removable memory unit |
US7039829B2 (en) * | 2002-11-07 | 2006-05-02 | Lsi Logic Corporation | Apparatus and method for enhancing data availability by implementing inter-storage-unit communication |
US7781172B2 (en) * | 2003-11-21 | 2010-08-24 | Kimberly-Clark Worldwide, Inc. | Method for extending the dynamic detection range of assay devices |
JP4322511B2 (en) * | 2003-01-27 | 2009-09-02 | 株式会社日立製作所 | Information processing system control method and information processing system |
US7020743B2 (en) * | 2003-02-24 | 2006-03-28 | Sun Microsystems, Inc. | Atomic remote memory operations in cache mirroring storage systems |
US7380082B2 (en) * | 2003-03-25 | 2008-05-27 | Emc Corporation | Reading virtual ordered writes at local storage device |
JP2004302512A (en) * | 2003-03-28 | 2004-10-28 | Hitachi Ltd | Cluster computing system and fail-over method for the same |
US7293203B1 (en) * | 2003-04-23 | 2007-11-06 | Network Appliance, Inc. | System and method for logging disk failure analysis in disk nonvolatile memory |
JP2005018510A (en) * | 2003-06-27 | 2005-01-20 | Hitachi Ltd | Data center system and its control method |
CN1701243A (en) * | 2003-08-27 | 2005-11-23 | 恩益禧慕百霖株式会社 | Earthquake prediction method and system thereof |
US7188292B2 (en) * | 2003-09-26 | 2007-03-06 | Nortel Networks Limited | Data mirroring system |
US7148802B2 (en) * | 2003-10-14 | 2006-12-12 | Paul Abbruscato | Direction finder and locator |
ATE394142T1 (en) * | 2004-01-27 | 2008-05-15 | Goldfire Sprl | FLEXIBLE WALL WITH FIRE RESISTANT PROPERTIES |
US7315965B2 (en) * | 2004-02-04 | 2008-01-01 | Network Appliance, Inc. | Method and system for storing data using a continuous data protection system |
US7370163B2 (en) * | 2004-05-03 | 2008-05-06 | Gemini Storage | Adaptive cache engine for storage area network including systems and methods related thereto |
US7698401B2 (en) * | 2004-06-01 | 2010-04-13 | Inmage Systems, Inc. | Secondary data storage and recovery system |
US7383405B2 (en) * | 2004-06-30 | 2008-06-03 | Microsoft Corporation | Systems and methods for voluntary migration of a virtual machine between hosts with common storage connectivity |
US7089099B2 (en) * | 2004-07-30 | 2006-08-08 | Automotive Technologies International, Inc. | Sensor assemblies |
US7139782B2 (en) * | 2004-09-21 | 2006-11-21 | Hitachi, Ltd. | Method of and system for testing remote storage |
ATE502334T1 (en) * | 2005-04-20 | 2011-04-15 | Axxana Israel Ltd | REMOTE DATA MIRROR SYSTEM |
GB0507912D0 (en) * | 2005-04-20 | 2005-05-25 | Ibm | Disk drive and method for protecting data writes in a disk drive |
EP2328089B1 (en) * | 2005-04-20 | 2014-07-09 | Axxana (Israel) Ltd. | Remote data mirroring system |
JP4378335B2 (en) * | 2005-09-09 | 2009-12-02 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Device for dynamically switching transaction / data writing method to disk, switching method, and switching program |
JP5036158B2 (en) * | 2005-10-05 | 2012-09-26 | 株式会社日立製作所 | Information processing system and control method of information processing system |
JP4668763B2 (en) * | 2005-10-20 | 2011-04-13 | 株式会社日立製作所 | Storage device restore method and storage device |
US20070124789A1 (en) * | 2005-10-26 | 2007-05-31 | Sachson Thomas I | Wireless interactive communication system |
US7747579B2 (en) * | 2005-11-28 | 2010-06-29 | Commvault Systems, Inc. | Metabase for facilitating data classification |
JP4810210B2 (en) * | 2005-12-06 | 2011-11-09 | 日本電気株式会社 | Storage system, master storage device, remote storage device, data copy method, data copy program |
US7376805B2 (en) * | 2006-04-21 | 2008-05-20 | Hewlett-Packard Development Company, L.P. | Distributed storage array |
US9820658B2 (en) * | 2006-06-30 | 2017-11-21 | Bao Q. Tran | Systems and methods for providing interoperability among healthcare devices |
US7978065B2 (en) * | 2006-09-13 | 2011-07-12 | Trackpoint Systems, Llc | Device, system and method for tracking mobile assets |
EP1916021A1 (en) | 2006-10-26 | 2008-04-30 | Goldfire Sprl | Fire blanket |
JP5244332B2 (en) * | 2006-10-30 | 2013-07-24 | 株式会社日立製作所 | Information system, data transfer method, and data protection method |
EP2122900A4 (en) * | 2007-01-22 | 2014-07-23 | Spyrus Inc | Portable data encryption device with configurable security functionality and method for file encryption |
JP5042644B2 (en) * | 2007-01-24 | 2012-10-03 | 株式会社日立製作所 | Remote copy system |
US8190572B2 (en) * | 2007-02-15 | 2012-05-29 | Yahoo! Inc. | High-availability and data protection of OLTP databases |
US20090007192A1 (en) * | 2007-06-28 | 2009-01-01 | Gajendra Prasad Singh | On board wireless digital entertainment, communication and information system for mass transportation medium |
WO2009047751A2 (en) * | 2007-10-08 | 2009-04-16 | Axxana (Israel) Ltd. | Fast data recovery system |
- 2009
  - 2009-05-10 EP EP09750209A patent/EP2286343A4/en not_active Withdrawn
  - 2009-05-10 WO PCT/IB2009/051919 patent/WO2009141752A2/en active Application Filing
  - 2009-05-11 US US12/463,438 patent/US8015436B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See references of EP2286343A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20090287967A1 (en) | 2009-11-19 |
WO2009141752A3 (en) | 2010-01-14 |
US8015436B2 (en) | 2011-09-06 |
EP2286343A2 (en) | 2011-02-23 |
EP2286343A4 (en) | 2012-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8015436B2 (en) | Resilient data storage in the presence of replication faults and rolling disasters | |
US7707453B2 (en) | Remote data mirroring system | |
RU2439691C2 (en) | Method of data protection | |
US8341121B1 (en) | Imminent failure prioritized backup | |
US7793060B2 (en) | System method and circuit for differential mirroring of data | |
US7734949B2 (en) | Information error recovery apparatus and methods | |
US7698593B2 (en) | Data protection management on a clustered server | |
CN103563336B (en) | Method and apparatus for facilitating processing in a network | |
CN110750213A (en) | Hard disk management method and device | |
CN104734895B (en) | Service monitoring system and business monitoring method | |
JP6744545B2 (en) | Information processing apparatus, information processing program, and information processing system | |
CN103297264B (en) | Cloud platform failure recovery method and system | |
Bagchi et al. | A framework for database audit and control flow checking for a wireless telephone network controller | |
CN106682040A (en) | Data management method and device | |
JP3070453B2 (en) | Memory failure recovery method and recovery system for computer system | |
CN113440784B (en) | Fire protection system | |
JP2004078437A (en) | Method and system for duplexing file system management information | |
JP3479288B2 (en) | Remote diagnostic maintenance system, method, and program | |
JPH07248970A (en) | Cache memory device | |
Koerner et al. | The z990 first error data capture concept | |
JP2022007301A (en) | Recovery control device and recovery control method | |
CN114116132A (en) | Freezing method and device for virtual machine | |
CN116860515A (en) | Virtual machine backup method, computing device and computer storage medium | |
Mugoh et al. | Intelli-Restore as an Instantaneous Approach for Reduced Data Recovery Time. | |
JPH04355844A (en) | File trouble recovery system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09750209; Country of ref document: EP; Kind code of ref document: A2 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 2009750209; Country of ref document: EP |