US20030014432A1 - Storage network data replicator - Google Patents

Storage network data replicator

Info

Publication number
US20030014432A1
US20030014432A1 (application US09/988,853, also referenced as US98885301A)
Authority
US
United States
Prior art keywords
data
electronic device
replica
storage
replication facility
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/988,853
Inventor
John Teloh
Philip Newton
Simon Crosland
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Microsystems Inc
Original Assignee
Sun Microsystems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems Inc filed Critical Sun Microsystems Inc
Priority to US09/988,853
Assigned to SUN MICROSYSTEMS, INC. (assignment of assignors interest; see document for details). Assignors: TELOH, JOHN; CROSLAND, SIMON; NEWTON, PHILIP
Publication of US20030014432A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2071Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F11/2074Asynchronous techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2058Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using more than 2 mirrored copies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2082Data synchronisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2066Optimisation of the communication load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2071Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F11/2079Bidirectional techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/855Details of asynchronous mirroring using a journal to transfer not-yet-mirrored changes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99953Recoverability
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99955Archiving or backup

Definitions

  • The illustrative embodiment of the present invention transmits data to perform the remote data mirroring using the TCP/IP protocol suite.
  • The replicated data is able to share a transmission path with other IP traffic from unrelated applications.
  • The illustrative data replication facility is able to replicate data from different applications operating on different hosts that utilize distinct storage devices. By properly provisioning the common transmission path, the data replication facility can route IP traffic from each application over the same link to a remote storage device.
  • The data replication facility of the illustrative embodiment supports synchronous data replication and asynchronous data replication. With synchronous data replication, the local site waits for confirmation from the remote site before initiating the next write to the local storage device.
  • FIG. 1 illustrates an exemplary system 10 suitable for practicing the asynchronous and the synchronous data replication techniques of the illustrative embodiment. Synchronous data replication by the exemplary system 10 will be discussed below in detail with reference to FIG. 2. Asynchronous data replication by the exemplary system 10 will be discussed below in detail with reference to FIG. 3. Moreover, one skilled in the art will recognize the illustrative data replication facility can replicate data to locations that are within a few hundred feet of the data replication facility as well as replicate data to locations that are hundreds or thousands of miles away.
  • The local site 12 includes a host 16 that supports the data replication facility 20 and is in communication with the storage device 24.
  • The remote site 14 includes a host 18 that supports the data replication facility 20′ and is in communication with the remote storage device 26.
  • The hosts 16 and 18 may each be a workstation, a PC, a mainframe, a server, or any combination thereof.
  • The local site 12 and the remote site 14 communicate with each other via the communication link 28.
  • The communication link 28 can be any communication link, wired or wireless, that is suitable for transmission of information in accordance with the TCP/IP protocol suite.
  • The local storage device 24 and the remote storage device 26 may each be, but are not limited to, an optical disk storage device, a magnetic disk storage device, or any combination thereof.
  • The data replication facilities 20 and 20′ coordinate with their respective hosts to provide data replication operation and control. In this manner, each data replication facility interfaces with an application performing a write operation on its host to control operation of the storage device local to that host, and interfaces with the remote data replication facility for replication of the just-written data to the remote storage device.
  • The data replication facilities 20 and 20′ can replicate data in a bi-directional manner. That is, in the event of a disruption in the remote mirroring process, either the data replication facility 20 or the data replication facility 20′ can be instructed to log each local write to its respective local storage device 24 or 26. Upon restoration of the remote mirroring process, data resynchronization can occur from the site selected to maintain a write log during the outage.
  • The local site 12 typically continues to write to the primary volumes on the local storage device 24, while the remote site 14 ceases all writes and awaits the reestablishment of the remote mirroring process.
  • Nevertheless, the local site 12 as well as the remote site 14 can be instructed to log all local writes in the event of a remote mirroring outage.
  • In that case, someone, such as a system administrator, would decide the direction in which the resynchronization occurs: from the remote site 14 to the local site 12, or from the local site 12 to the remote site 14.
  • For clarity, the illustrative embodiment of the present invention will be discussed relative to data replication from the local site 12 to the remote site 14.
  • FIG. 2 illustrates in more detail the operation of the illustrative data replication facility of the present invention in a synchronous replication mode.
  • An application running on the host 16 first issues a write to the local storage device 24 (step 30).
  • The write request first goes to the local data replication facility 20 operating on the host 16 of the local site 12, where the local data replication facility 20 sets a bit in a bitmap that represents the storage region of the storage device corresponding to where the data is written (step 32).
  • The local data replication facility 20 then writes the data to the local storage device 24 (step 34). Once the proper bit is set in the bitmap, the write occurs on the storage device of the local site.
  • The bitmap is used to track data awaiting replication and is discussed in more detail below with reference to FIG. 4.
  • The local data replication facility 20 then replicates the data and forwards it to the remote data replication facility 20′ operating on the host 18 of the remote site 14 for remote data mirroring (step 36).
  • The local data replication facility 20 forwards, as part of the replicated data package, information that identifies a storage location, such as a volume path, for the replicated data at the remote site 14.
  • The local data replication facility 20 can forward the data directly to the remote data replication facility 20′ and can also forward the data through one or more intermediary mechanisms, such as a switch, a router, a network interface apparatus, a forwarding mechanism, or the like.
  • The data is received by the remote data replication facility 20′ at the remote site 14 (step 38), at which time the remote data replication facility 20′ issues a write request for the received data (step 40).
  • The received data is then written to the remote storage device 26 of the remote site 14 (step 42).
  • The remote data replication facility 20′ receives an acknowledgement from the host 18 of the remote site 14 (step 44) and forwards the acknowledgement to the local data replication facility 20 (step 46).
  • When the local data replication facility 20 receives the acknowledgement (step 48), it clears the bit set in the bitmap (step 50).
  • The local data replication facility 20 informs the application operating on the local site host 16 that the write is complete (step 52), and the application issues the next write for the local storage device 24 (step 54).
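  • To make the synchronous sequence concrete, the following is a minimal Python sketch of the write path in steps 30 through 54. The class and method names (SyncReplicator, recv_ack, and so on) are illustrative assumptions rather than anything named in the patent, and a real facility would sit in the host's driver stack rather than in application code.

    class SyncReplicator:
        """Sketch of the synchronous write path (steps 30-54); names assumed."""

        REGION_SIZE = 64 * 1024  # bitmap granularity; configurable in the facility

        def __init__(self, local_device, remote_link):
            self.local_device = local_device   # local storage device 24
            self.remote_link = remote_link     # TCP/IP link to remote facility 20'
            self.bitmap = set()                # regions awaiting replication

        def write(self, offset, data, remote_volume):
            region = offset // self.REGION_SIZE
            self.bitmap.add(region)                # step 32: set the bit first
            self.local_device.write(offset, data)  # step 34: local write
            # Steps 36-46: forward the data, tagged with the target volume
            # path, and block until the remote facility acknowledges.
            self.remote_link.send({"volume": remote_volume,
                                   "offset": offset, "data": data})
            ack = self.remote_link.recv_ack()      # step 48: acknowledgement received
            if ack:
                self.bitmap.discard(region)        # step 50: clear the bit
            return ack                             # step 52: app may issue the next write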
  • FIG. 3 illustrates a typical asynchronous data replication technique for mirroring data to a remote site.
  • The local host 16 confirms write completion to the local storage device 24 before the remote storage device 26 of the remote site 14 is written to. However, since the time needed for the data to cross the communication link 28 is significantly longer than the time needed for the local write to occur to the local storage device 24, the host 16 at the local site 12 queues the remote data writes for transmission at a later time.
  • The asynchronous operation of the illustrative data replication facility is as follows.
  • An application operating on the host 16 of the local site 12 wishing to write data to the local storage device 24 first issues a write request to the local storage device 24 (step 60).
  • The write first goes to the local data replication facility 20 operating on the host 16 of the local site 12, which, upon receipt of the write request, sets a bit in a bitmap that corresponds to the data for the issued write request (step 62).
  • The data is then written to the local storage device 24 of the local site 12 (step 64).
  • The local data replication facility 20 copies the data into a queue to await forwarding to the remote storage device 26 of the remote site 14 (step 66).
  • The local data replication facility 20 can forward the data directly from the queue to the remote data replication facility 20′ and can also forward the data through one or more intermediary mechanisms, such as a switch, a router, a network interface apparatus, a forwarding mechanism, or the like.
  • The local data replication facility 20 notifies the application operating on the local host 16 that the write is complete and that the local storage device 24 is now ready for the next write (step 68).
  • The data from the queue is forwarded on a first in, first out (FIFO) basis from the local site 12 to the remote site 14 (step 70).
  • The data forwarded from the queue is packaged to include information that identifies a storage location at the remote host 18, such as a volume data path.
  • Data is received at the remote site 14 by the remote data replication facility 20′ operating on the remote host 18 (step 72).
  • The remote data replication facility 20′ issues a write request for the received data (step 74).
  • The received data is then written to the remote storage device 26 of the remote site 14 (step 76).
  • The remote data replication facility 20′ sends an acknowledgement to the local data replication facility 20 (step 78).
  • Upon receipt of the acknowledgment from the remote data replication facility 20′, the local data replication facility 20 removes the corresponding bit from the bitmap to signify that the remote write of the data completed and that the remote asynchronous data replication is complete for that data set (step 79).
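  • A corresponding sketch of the asynchronous path (steps 60 through 79): the write completes as soon as it is on disk locally and queued, and a background sender drains the FIFO queue to the remote facility, clearing bitmap bits as acknowledgments arrive. As before, the names are assumptions made for illustration.

    import queue
    import threading

    class AsyncReplicator:
        """Sketch of the asynchronous write path (steps 60-79); names assumed."""

        REGION_SIZE = 64 * 1024  # bitmap granularity; configurable in the facility

        def __init__(self, local_device, remote_link):
            self.local_device = local_device
            self.remote_link = remote_link
            self.bitmap = set()
            self.fifo = queue.Queue()  # step 66: writes queued for later transmission
            threading.Thread(target=self._drain, daemon=True).start()

        def write(self, offset, data, remote_volume):
            region = offset // self.REGION_SIZE
            self.bitmap.add(region)                 # step 62: set the bit
            self.local_device.write(offset, data)   # step 64: local write
            self.fifo.put((region, {"volume": remote_volume,
                                    "offset": offset, "data": data}))
            return True                             # step 68: write complete as far as the app knows

        def _drain(self):
            while True:
                region, packet = self.fifo.get()    # step 70: forward in FIFO order
                self.remote_link.send(packet)       # steps 72-76 occur at the remote site
                if self.remote_link.recv_ack():     # step 78: remote acknowledgement
                    self.bitmap.discard(region)     # step 79: clear the bit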
  • The illustrative embodiment of the present invention logs all local writes during the remote mirroring outage period.
  • The data replication facility of the illustrative embodiment generally utilizes a bit vector scoreboard 80, illustrated in FIG. 4, to keep track of local storage device locations that change during the remote mirroring outage period.
  • The exemplary scoreboard 80 of FIG. 4 provides a way to log changes during a remote mirroring outage.
  • The exemplary scoreboard 80 holds bits that represent regions, such as tracks and sections, of the local storage device that have been modified during the outage period.
  • The exemplary scoreboard 80 can be configured to support different levels of granularity, for example, one bit for every 64 kbits or one bit for every 128 kbits of memory.
  • The utilization of a scoreboard allows only the last update to a local storage device location to be resynchronized when the outage period ends, rather than all the preceding updates. Consequently, the time required for the resynchronization process is significantly reduced.
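  • The scoreboard can be pictured as a plain bit vector over fixed-size regions of the device. The sketch below (with assumed names and one of the granularities mentioned above) shows why resynchronization cost depends on how many regions changed rather than on how many writes occurred: marking the same region twice still sets a single bit.

    class Scoreboard:
        """Bit-vector log of regions modified during a mirroring outage."""

        def __init__(self, device_size, region_size=8 * 1024):
            # Granularity is configurable; one bit per 64 kbits (8 KB) here.
            self.region_size = region_size
            self.bits = bytearray(device_size // region_size + 1)

        def mark(self, offset, length):
            # Set one bit per region touched by the write.  Repeated writes
            # to a region leave the bit set, so only the final image of that
            # region is copied at resynchronization time.
            first = offset // self.region_size
            last = (offset + length - 1) // self.region_size
            for region in range(first, last + 1):
                self.bits[region] = 1

        def dirty_regions(self):
            return [i for i, bit in enumerate(self.bits) if bit]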
  • The illustrative data replication facility of the present invention is able to group together one structure, such as a write-ahead log, and another structure, such as a corresponding table entry, into a single data set while preserving the write ordering for each asynchronous writer.
  • Two separate processes or threads can run asynchronously to each other and can copy or mirror their respective volumes to remote storage devices while preserving their respective write order.
  • The data replication facility operator can instruct the illustrative data replication facility to select a number of identified volumes to form a group.
  • The data replication facility operator supplies the illustrative data replication facility with the indicia to identify the volumes that should be grouped together when instructed to do so. In this manner, an application utilizing multiple volumes for write-order-sensitive data can be replicated as a group, or single entity, while preserving the write ordering of the data.
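  • One way to picture such a group is a consistency group that tags every write with a single group-wide sequence number, so writes to several volumes can be shipped and replayed as one entity without reordering. The patent specifies the property (grouped replication with write ordering preserved), not this mechanism; the names below are invented for the sketch.

    import itertools

    class ConsistencyGroup:
        """Groups writes to several volumes into one ordered data set."""

        def __init__(self, volume_names):
            self.volumes = set(volume_names)
            self.seq = itertools.count()  # one ordering shared by the whole group
            self.pending = []

        def record_write(self, volume, offset, data):
            assert volume in self.volumes
            self.pending.append((next(self.seq), volume, offset, data))

        def as_single_entity(self):
            # Replaying the grouped writes in sequence order preserves the
            # write ordering within each member volume.
            return sorted(self.pending)

    group = ConsistencyGroup({"wal", "table"})
    group.record_write("wal", 0, b"begin txn")     # write-ahead-log entry
    group.record_write("table", 512, b"row data")  # corresponding table entry
    group.record_write("wal", 64, b"commit txn")
    for seq, volume, offset, _ in group.as_single_entity():
        print(seq, volume, offset)  # the log entry replays before the table update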
  • The illustrative data replication facility of the present invention can automatically switch from replicating mode to data logging mode using the scoreboard 80 when a remote mirroring failure is detected (step 90).
  • The operator of the illustrative embodiment can select, on a data structure basis, which data the illustrative data replication facility should log into the scoreboard 80 when remote mirroring is not available.
  • The local data replication facility can automatically resynchronize the logged data with the remote storage device upon the removal of the remote data mirroring interruption.
  • The local data replication facility may detect the reestablishment on its own or may receive notification from the local host (step 92). Consequently, a point in time can be easily identified for a data group across multiple volumes in the event that a remote mirroring failure occurs.
  • The auto-resynchronization technique supports the concept of grouping, in which the illustrative data replication facility is able to group multiple data sets into a single entity through the use of a queue (step 94) and to resynchronize the data (step 96).
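  • The failure handling of steps 90 through 96 reduces to a small mode switch: replicate while the link is up, log to the scoreboard while it is down, and resynchronize the grouped updates when the link returns. The sketch below reuses the Scoreboard sketched above; next_local_write, read_region, and the link methods are assumed helpers, not patent terminology.

    def replication_loop(facility, scoreboard, link):
        """Sketch of steps 90-96: replicate, log on failure, resync on recovery."""
        logging_mode = False
        while True:
            offset, data = facility.next_local_write()
            if logging_mode:
                scoreboard.mark(offset, len(data))      # outage: log the write only
                if link.is_up():                        # step 92: link reestablished
                    resynchronize(facility, scoreboard, link)  # steps 94-96
                    logging_mode = False
            elif not link.send({"offset": offset, "data": data}):
                scoreboard.mark(offset, len(data))      # step 90: failure detected
                logging_mode = True

    def resynchronize(facility, scoreboard, link):
        # The dirty regions form one data set; only the last image of each
        # region crosses the link, however many times it changed.
        for region in scoreboard.dirty_regions():
            offset = region * scoreboard.region_size
            link.send({"offset": offset, "data": facility.read_region(region)})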
  • FIGS. 6 and 7 illustrate that the illustrative data replication facility is able to replicate a primary volume 100 from the local storage device 24 to multiple mirrored volumes 102 and 104 on one or more remote storage devices.
  • FIG. 6 illustrates the situation in which the multiple mirrored volumes 102 and 104 are located on the same remote storage device 26 while FIG. 7 illustrates the situation in which the multiple mirrored volumes 102 and 104 are located on multiple remote storage devices 26 and 26 ′ respectively.
  • The local host 16 identifies to the local data replication facility 20 the data volumes from the local storage device 24 that are to be mirrored to the remote storage device 26 (step 110 in FIG. 8). Once the data volumes are identified, the local data replication facility 20 enters the data logging mode using the scoreboard 80 to track the disk areas that are being mirrored to the remote storage device 26 (step 112 in FIG. 8). The local data replication facility 20 logs the selected data to a local queue as a single group (step 112 in FIG. 8).
  • The local host 16 sends the appropriate command to the remote host 18, and correspondingly the remote data replication facility 20′, to initiate the routine to receive and store the selected data volumes on the remote storage device 26 (step 114 in FIG. 8).
  • The local data replication facility 20 stops placing the selected data into the local queue and waits for all the selected data to be written from the local queue to the remote storage device 26.
  • The data volume is assigned some type of indicia, such as a volume name or volume number, by the volume owner, for example, an application, the data owner, or the data replication facility operator.
  • The remote data replication facility 20′ writes the replicated data volume to a location on the remote storage device 26 based on the volume name and the file allocation table of the remote storage device 26.
  • The remote data replication facility 20′ signals to the local data replication facility 20 that the mirroring to the remote storage device 26 is complete (step 116).
  • The ability to copy data from the local storage device 24 to the remote storage device 26 to ensure data uniformity in an asynchronous data mirroring environment does not impede any writes that occur to the local storage device 24 when remote mirroring is halted.
  • The local data replication facility 20 is able to attend to the writes to the local storage device 24 by performing the local write and logging the local write to the scoreboard 80. Consequently, when the volume copying is complete, the local data replication facility 20 can resynchronize with the remote data replication facility 20′, using the scoreboard 80 to update the remote storage device 26 with the local writes that occurred during the copy operation.
  • The local data replication facility 20 preserves the write ordering of all volumes copied during the copy operation.
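  • The fan-out of FIGS. 6 and 7 reduces to a short sketch: a single local write is packaged once and forwarded to every mirrored volume, whether the target volumes share one remote storage device or sit on separate devices. The link objects and packet layout are, again, assumptions for illustration.

    def mirror_write(local_device, mirrors, offset, data):
        """Replicate one primary-volume write to multiple mirrored volumes.

        `mirrors` maps a remote volume name to a link for the remote facility
        holding that volume; entries may point at one remote storage device
        (FIG. 6) or at several different devices (FIG. 7).
        """
        local_device.write(offset, data)
        for volume, link in mirrors.items():
            # One replication operation updates every mirrored volume.
            link.send({"volume": volume, "offset": offset, "data": data})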
  • The illustrative data replication facility of the present invention is able to remotely mirror data across multiple remote hosts and remote sites that have distinct characteristics, as illustrated in FIG. 9.
  • The illustrative data replication facility 20 may replicate to a remote host 14 operating on the Solaris® operating system available from Sun Microsystems, Inc. of Palo Alto, Calif., and the data replication facility 20′ at the remote host 14, in turn, replicates the same data to the remote host 14′ operating on the Unix® operating system.
  • The transmission medium interconnecting each remote site may have a different bandwidth characteristic that affects the rate at which replicated data can be transmitted from site to site. As FIG. 9 illustrates, each intermediate host can further mirror the data from the originating local site to as many remote sites as necessary while overcoming the incompatibility issues previously associated with the remote mirroring of data across multiple remote sites.

Abstract

A method and apparatus for performing remote data replication. The method and apparatus can detect an interruption in the remote data replication process and begin local logging of all local data writes that occur while the remote data replication process is unavailable. The method and apparatus can perform remote data replication across multiple remote storage devices or the method and apparatus can replicate a data structure from a first storage device to multiple locations on one or more remote storage devices. In addition, the method and apparatus can halt the remote data replication and copy data from the local storage device to the remote storage device to ensure data uniformity on all storage devices.

Description

    RELATED APPLICATIONS
  • The current application claims priority from Nonprovisional Patent Application Ser. No. 09/905,436, entitled STORAGE NETWORK DATA REPLICATOR, which was filed on Jul. 13, 2001, names the same inventors and the same assignee as this application, and is hereby incorporated by reference herein. [0001]
  • TECHNICAL FIELD OF THE INVENTION
  • The present invention generally relates to data networks, and more particularly, to network data replication. [0002]
  • BACKGROUND OF THE INVENTION
  • With accelerated business practices and the globalization of the marketplace, there is an ever increasing need for around the clock business communications and operations. As such, corporate data repositories must be able to provide critical business data at any moment in time in the face of interruptions caused by hardware failure, software failure, geographical disaster, or the like. To achieve the necessary data continuity and resilience for the present global marketplace, businesses utilize remote data repositories to backup and store critical business data. [0003]
  • One conventional method of data backup and storage is magnetic tape backup. At a business center, an amount of data, such as a day or a week, is transferred to a magnetic tape medium that is then stored remotely offsite. However, the magnetic tape medium is cumbersome to fetch in the event of a disaster and often requires significant amount of business center down time to restore the lost data. [0004]
  • Another conventional method utilized to avoid down time and provide disaster protection is database replication. With database replication, the database management system can make informed decisions on whether to write data to multiple local storage devices or to both a local storage device and a remote storage device, but such synchronization comes at a significant performance penalty. The technique of writing data to multiple storage devices simultaneously is known as mirroring. In this manner, critical data can be accessible at all times. However, ensuring transaction and record consistency often results in data transmission latency when a large number of data transmissions to remote sites are necessary with each database update. Consequently, application performance is slowed to unacceptable levels. In addition, database replication replicates only data in the database and not data in user files and system files. A separate remote copy facility is utilized to replicate such user files or system files. [0005]
  • In yet another data replication technique known as redundant array of independent disks (RAID), a host, such as a server or workstation, writes data to two duplicate storage devices simultaneously. In this manner, if one of the storage devices fails, the host can instantly switch to the other storage device without any loss of data or service. Nevertheless, to write to two duplicate storage devices simultaneously when one storage device is local and the other is remote is burdensome. [0006]
  • The mirroring of data to a distant location often faces remote data transmission limitations. For example, data transmission using a small computer system interface (SCSI) is limited to twenty-five meters. Typically, a SCSI parallel interface is used for attaching peripheral devices, such as a printer or an external storage device, to a computer. Thus, by utilizing its SCSI port, a computer can perform data mirroring by simultaneously writing to an internal storage device and an external storage device. Although discrete SCSI extenders are available, they become cumbersome and expensive beyond one or two remote connections. [0007]
  • One data transmission connection that is used for remote mirroring of data with an external storage device is the enterprise systems connection (ESCON) used in mainframe systems. Unfortunately, ESCON has a maximum range of sixty kilometers. A further example of a remote data transmission connection that is used for distant mirroring of data is Fibre Channel Arbitrated Loop (FCAL), which can distribute loop connections over 100 kilometers when properly equipped. Nevertheless, these data transmission connections do not provide the necessary long distance separation between an operational work center and the data repository to overcome regional disasters such as earthquakes, tornadoes, floods, and the like. [0008]
  • The above shortcomings can be overcome by use of a dedicated transmission medium between two sites, such as a high-speed fiber optic cable. However, most high speed transmission mediums are dedicated to telecommunications traffic. Moreover, the cost for a dedicated high-speed link makes such a choice prohibitive. [0009]
  • Another obstacle associated with long distance data mirroring is latency: the round-trip delay required to write data to the distant location and to wait for the remote storage device to be updated before mirroring the next data block. Typically, the latency is proportional to the distance between the two sites and can be heightened by intermediate extenders and communication protocol overhead. Consequently, application response slows to an unacceptable level. [0010]
  • A further obstacle to long distance data mirroring is compatibility among remote storage devices involved in the mirroring. As a result, a host having a data replication facility may replicate a data structure to one volume of the remote storage device at a time. Moreover, the host may not replicate the data further than the first remote storage device due to compatibility issues surrounding data transmission rates or host platform compatibility. These burdens place significant limitations on data protection schemes that require multiple remote storage devices. [0011]
  • SUMMARY OF THE INVENTION
  • The present invention addresses the above-described limitations of conventional data backup and storage operations. The present invention provides an approach to enable remote data mirroring amongst multiple remote storage devices across data transmission paths having various transmission capabilities and remote mirroring sites operating on various operating platforms. [0012]
  • In a first embodiment of the present invention, a method is practiced for replicating a data volume from a first computer to multiple remote data volumes on one or more remote computers. The first computer replicates the data volume and forwards the replicated data volume to the multiple remote data volumes on the one or more remote computers. The remote data volumes can reside on a storage device of a single remote computer or on multiple remote storage devices of multiple remote computers, or both. The first computer can forward the replicated data volume to the one or more remote computers in either a synchronous manner or an asynchronous manner. With either the asynchronous communication manner or the synchronous communication manner, the first computer forwards the replicated data volume using the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. [0013]
  • The above-described approach benefits a data storage network having data repositories dispersed over a large geographical area. Consequently, a central data center, such as a corporation's headquarters, a government agency or the like, can replicate data from a central repository to multiple remote repositories for purposes of data protection and to support data utilization by multiple employees and clients across a diverse geographical area. [0014]
  • In another embodiment of the present invention, a method is performed in a computer network wherein each of the computers in the network hosts a data replication facility for remote mirroring of data between each of the network computers. Each data replication facility is able to receive data from its host, write the data to a local storage device, and subsequently mirror the data to each of the other network computers hosting a data replication facility for storage on a storage device of the remote computer host. Each data replication facility can replicate a logical data volume or a physical data volume to each of the other computers in the computer network or to multiple volumes on a single computer in the computer network. The computer network can be a local area network, a wide area network, a virtual private network, the Internet, or another network type. [0015]
  • The above-described approach benefits geographically remote data repositories that form a computer network in that as each remote data repository receives new data from its local host each of the other remote data repositories in the computer network can be updated in a simultaneous manner. In this manner, a network data replicator can replicate data to multiple storage devices with a single data replication operation. [0016]
  • In yet another aspect of the present invention, a computer readable medium holding computer executable instructions is provided that allows a first computer to replicate a data volume to multiple remote data volumes on one or more remote computers. The first computer replicates the data volume and in turn forwards the replicated data volume to the multiple remote data volumes on the one or more remote computers. [0017]
  • In accordance with another aspect of the present invention, a method for remote data mirroring is performed in a computer network. At a first network location, data is replicated to a remote network location within the computer network. At the remote location, the data is again replicated to a second remote network location. The network data transmission capability between the first network location and the first remote network location can be different from the network data transmission capability between the first remote location and the second remote network location. In addition, the first network location may replicate the data to the first remote network location in a synchronous manner, while the first remote network location replicates the data to the second remote network location in an asynchronous manner. [0018]
  • The above-described approach benefits a computer network, such as storage network that replicates data to multiple storage devices in the data network. As a result, a remote data repository can act as a remote storage location for some data and a local data replicator for other data. Moreover, the originating location in the storage network is no longer burdened with data transmission latency issues commonly associated with mirroring data to a remote location via a long haul network. [0019]
  • In yet another aspect of the present invention, a method for data replication from a first location to multiple remote locations is practiced. At the first location, a selected data structure is replicated and transmitted to a first remote location for replication to a second remote location. The first remote location replicates the received replicated data and forwards the replication of the received data to the second remote location. Transmission between the originating location and each of the remote locations occurs in a stateless manner using the TCP/IP protocol suite. The transmission rate between the originating location and the first remote location can differ from the transmission rate between the first remote location and the second remote location. Moreover, the operating platform of the originating location can differ from the operating platform of the first remote location, which can differ from the operating platform of the second remote location. [0020]
  • The above-described approach benefits an enterprise having multiple geographically remote data centers operating on various platforms. In this manner, data produced at each data center can be replicated and transmitted to each other data center in the network regardless of operating platform and transmission line capability that connects one data center to another. [0021]
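  • As a rough illustration of the cascade described in the two preceding paragraphs, each site can be configured independently as the replication target of the hop before it and the replication source of the hop after it, with the mode chosen per hop. The configuration format and helper names below are invented for this sketch; nothing in the patent prescribes them.

    # Hypothetical cascade configuration: each site names its downstream site
    # and the replication mode used for that hop.  Every hop runs over TCP/IP,
    # so per-hop link speeds and host operating platforms are free to differ.
    CASCADE = {
        "headquarters": {"downstream": "regional-dc", "mode": "sync"},
        "regional-dc":  {"downstream": "backup-dc",   "mode": "async"},
        "backup-dc":    {"downstream": None,          "mode": None},
    }

    STORAGE = {}  # stand-in for each site's local storage device

    def store_locally(site, packet):
        STORAGE.setdefault(site, []).append(packet)

    def on_replicated_write(site, packet, send):
        # A site stores an incoming write locally and then acts as the local
        # replicator for the next hop, so the originating site never waits
        # on the long-haul latency of the downstream hops.
        store_locally(site, packet)
        hop = CASCADE[site]
        if hop["downstream"] is not None:
            send(hop["downstream"], packet, mode=hop["mode"])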
  • In still another aspect of the present invention, a computer readable medium holding computer executable instructions for replicating data from a first location to multiple remote locations is provided. The computer readable medium allows a computer at the first location to replicate a data structure from the first location and forward the replicated data structure to a first remote location for replication to the second remote location. The replicated data is transmitted to each remote location using the TCP/IP protocol suite. [0022]
  • In yet another aspect of the present invention, a method for remote mirroring of data in a computer network is practiced. The method allows for updating of one or more data structures of a remote storage device using a single data set. The one or more data structures are identified and selected from a local storage device. The data structures selected are more current than their corresponding data structure counterparts on a remote storage device. The selected data structures are grouped together as a single entity, while preserving the write ordering within each structure. The single data entity is then mirrored to the remote storage device to update the one or more corresponding data structure counterparts at the remote storage device. [0023]
  • In still another aspect of the present invention, a method is practiced in a computer network for remote mirroring of data from a first networked computer to one or more networked computers. The method provides for a first networked computer to log all local disk updates during a period of time when the remote mirroring of data cannot be accomplished. The first networked computer determines when remote mirroring of data can be re-established and groups all of its disk updates into a single data set. The first networked computer restarts the remote mirroring of data to one or more remote network computers when the remote mirroring of data is re-established.[0024]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • An illustrative embodiment of the present invention will be described below relative to the following drawings. [0025]
  • FIG. 1 depicts a block diagram of an exemplary system suitable for practicing the illustrative embodiment of the invention. [0026]
  • FIG. 2 is a flow chart that illustrates the steps taken by the illustrative embodiment of the invention to replicate data in a synchronous manner. [0027]
  • FIG. 3 is a block diagram of the steps taken by the illustrative embodiment of the present invention to replicate data in an asynchronous manner. [0028]
  • FIG. 4 depicts a log suitable for use by the illustrative embodiment of the present invention. [0029]
  • FIG. 5 is a block diagram illustrating the steps taken by the illustrative embodiment of the invention to group data. [0030]
  • FIG. 6 depicts a block diagram of a system suitable for replicating data to multiple volumes using the illustrative embodiment of the present invention. [0031]
  • FIG. 7 depicts a block diagram of an exemplary system suitable for practicing the illustrative embodiment of the invention to replicate data to multiple volumes. [0032]
  • FIG. 8 is a flow diagram illustrating steps taken by the illustrative embodiment of the invention to perform selected mirroring of data. [0033]
  • FIG. 9 depicts a block diagram of an exemplary system suitable for practicing the illustrative embodiment of the invention across multiple hosts.[0034]
  • DETAILED DESCRIPTION
  • Before beginning the discussion below, it is helpful to define a few terms. [0035]
  • The term “host” refers to a computer system, such as a PC, a workstation, a server, or the like, that is capable of supporting a data replication facility. [0036]
  • The term “volume” refers to an identifiable unit of data on a storage device, such as a disk or tape. It is possible for a storage device to contain more than one volume or for a volume to span more than one storage device. [0037]
  • The illustrative embodiment of the present invention provides an approach for mirroring data from a local storage device to a remote storage device that overcomes the burdens commonly associated with mirroring data to a physically remote location. The illustrative embodiment of the present invention allows a local host with a data replication facility to replicate a volume of data to multiple secondary volumes on one or more remote hosts that also have a data replication facility. The local host, along with each remote host, can replicate data volumes without the use of a volume manager. In addition, the illustrative embodiment allows each remote host to further replicate the data to additional remote hosts. In this manner, the local host can replicate data to one or more distant storage devices without concern for data transmission latency that would degrade application performance on the local host. The illustrative embodiment of the present invention improves the fault tolerance of a distributed system and improves data availability and load balancing by replicating data to multiple hosts. [0038]
  • Those skilled in the art will recognize that the remote mirroring operation in the illustrative embodiment of the present invention can be occasionally interrupted, either intentionally or by unplanned outages. In such instances, if, for example, the primary member of the volume pair, that is, the local volume, continues to be updated during the outage period, then the volume image pair (local and remote) is no longer considered synchronized. As such, the term synchronize refers to the process of updating one or more replica volumes to reflect changes made in the primary volume. Hence, the term resynchronization refers to the process of reestablishing the mirroring process and replicating all primary volume images that changed during the outage period to the one or more replica volumes, so that all changes to the primary volume are reflected in the one or more replica volumes. [0039]
  • The illustrative embodiment of the present invention also improves the resynchronization of the remote mirroring process in the event that a failure occurs. The illustrative embodiment of the present invention tracks changes to disk regions that occur during the remote mirroring outage period. This allows changes that occur during the outage period to be easily identified and allows only the last change to a particular disk region to be replicated during the resynchronization. In this manner, multiple changed volumes of a data structure can be grouped into a single data set for resynchronization upon reestablishment of communications with the remote storage device. In addition, the illustrative embodiment of the present invention is able to halt the remote data mirroring to verify proper data replication on a remote storage device. In this manner, a data volume or a group of volumes on the local storage device of the local host provide a content baseline for determining if the content of the remote storage device of the remote host matches the content of the local storage device. [0040]
  • The illustrative embodiment of the present invention transmits data to perform the remote data mirroring using the TCP/IP protocol suite. As a result, the replicated data is able to share a transmission path with other IP traffic from unrelated applications. In this manner, the illustrative data replication facility is able to replicate data from different applications operating on different hosts that utilize distinct storage devices. By properly provisioning the common transmission path, the data replication facility can route IP traffic from each application over the same link to a remote storage device. [0041]
  • The data replication facility of the illustrative embodiment supports synchronous data replication and asynchronous data replication. With synchronous data replication, the local site waits for confirmation from the remote site before initiating the next write to the local storage device. FIG. 1 illustrates an exemplary system 10 suitable for practicing the asynchronous and the synchronous data replication techniques of the illustrative embodiment. Synchronous data replication by the exemplary system 10 will be discussed below in detail with reference to FIG. 2. Asynchronous data replication by the exemplary system 10 will be discussed below in detail with reference to FIG. 3. Moreover, one skilled in the art will recognize that the illustrative data replication facility can replicate data to locations that are within a few hundred feet of the data replication facility as well as to locations that are hundreds or thousands of miles away. [0042]
  • As shown in FIG. 1, the local site 12 includes a host 16 that supports the data replication facility 20 and is in communication with the storage device 24. Similarly, the remote site 14 includes a host 18 that supports the data replication facility 20′ and is in communication with the remote storage device 26. Those skilled in the art will appreciate that the hosts 16 and 18 may each be a workstation, a PC, a mainframe, a server, or any combination thereof. The local site 12 and the remote site 14 communicate with each other via the communication link 28. The communication link 28 can be any communication link, wired or wireless, that is suitable for transmission of information in accordance with the TCP/IP protocol suite. In addition, the local storage device 24 and the remote storage device 26 may each be, but are not limited to, an optical disk storage device, a magnetic disk storage device, or any combination thereof. [0043]
  • The data replication facilities 20 and 20′ coordinate with their hosts to provide data replication operation and control. In this manner, each data replication facility 20 and 20′ interfaces with an application performing a write operation on its respective host to control operation of the storage device local to that host and interfaces with the remote data replication facility for replication of the just written data to the remote storage device. Those skilled in the art will recognize that the data replication facilities 20 and 20′ can replicate data in a bi-directional manner. That is, in the event of a disruption in the remote mirroring process, either the data replication facility 20 or the data replication facility 20′ can be instructed to log each local write to its respective local storage device 24 or 26. Upon restoration of the remote mirroring process, data resynchronization can occur from the site selected to maintain a write log during the outage. [0044]
  • Typically when a remote mirroring outage occurs, the local site 12 continues to write to the primary volumes on the local storage device 24, while the remote site 14 ceases all writes and awaits the reestablishment of the remote mirroring process. In certain instances, the local site 12 as well as the remote site 14 can be instructed to log all local writes in the event of a remote mirroring outage. In this instance, upon reestablishment of the remote mirroring process, someone, such as a system administrator, would decide in which direction the resynchronization should occur: that is, from the remote site 14 to the local site 12, or from the local site 12 to the remote site 14. For ease of the discussion below, the illustrative embodiment of the present invention will be discussed relative to data replication from the local site 12 to the remote site 14. [0045]
  • FIG. 2 illustrates in more detail the operation of the illustrative data replication facility of the present invention in a synchronous replication mode. At the local site 12, an application running on the host 16 first issues a write to the local storage device 24 (step 30). The write request first goes to the local data replication facility 20 operating on the host 16 of the local site 12, where the local data replication facility 20 sets a bit in a bitmap that represents the storage region of the storage device corresponding to where the data is written (step 32). The local data replication facility 20 then writes the data to the local storage device 24 (step 34). Once the proper bit is set in the bitmap, the write occurs on the storage device of the local site. The bitmap is used to track data awaiting replication and is discussed in more detail below with reference to FIG. 4. [0046]
  • The local data replication facility 20 then replicates the data and forwards the data to the remote data replication facility 20′ operating on the host 18 of the remote site 14 for remote data mirroring (step 36). The local data replication facility 20 forwards, as part of the replicated data package, information that identifies a storage location, such as a volume path, for the replicated data at the remote site 14. Those skilled in the art will recognize that the local data replication facility 20 can forward the data directly to the remote data replication facility 20′ and can also forward the data through one or more intermediary mechanisms, such as a switch, a router, a network interface apparatus, a forwarding mechanism or the like. The data is received by the remote data replication facility 20′ at the remote site 14 (step 38), at which time the remote data replication facility 20′ issues a write request for the received data (step 40). The received data is then written to the remote storage device 26 of the remote site 14 (step 42). After the data is written to the remote storage device 26, the remote data replication facility 20′ receives an acknowledgement from the host 18 of the remote site 14 (step 44) and forwards the acknowledgement to the local data replication facility 20 (step 46). When the local data replication facility 20 receives the acknowledgement (step 48), it clears the bit set in the bitmap (step 50). At this point the local data replication facility 20 informs the application operating on the local site host 16 that the write is complete (step 52), and the application issues the next write for the local storage device 24 (step 54). [0047]
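For illustration only, the synchronous write path of FIG. 2 can be approximated with a short sketch. The Python below is not part of the specification; DirtyBitmap, Disk, and sync_write are hypothetical names, and in-memory objects stand in for real storage devices and the TCP/IP link:

```python
class DirtyBitmap:
    """One bit per storage region; a set bit marks data awaiting replication."""
    def __init__(self, num_regions):
        self.bits = [False] * num_regions
    def set(self, region):
        self.bits[region] = True
    def clear(self, region):
        self.bits[region] = False

class Disk:
    """Toy storage device mapping a region number to its data."""
    def __init__(self):
        self.regions = {}
    def write(self, region, data):
        self.regions[region] = data

def sync_write(data, region, bitmap, local_disk, remote_disk):
    bitmap.set(region)               # step 32: mark the region as pending
    local_disk.write(region, data)   # step 34: write locally
    remote_disk.write(region, data)  # steps 36-42: replicate to the remote site
    bitmap.clear(region)             # steps 44-50: acknowledgement clears the bit
    return True                      # step 52: the app may issue its next write

local, remote = Disk(), Disk()
bitmap = DirtyBitmap(num_regions=1024)
sync_write(b"payload", 7, bitmap, local, remote)
assert local.regions[7] == remote.regions[7]
```

The assertion at the end reflects the synchronous guarantee: control returns to the application only after the replica has reached the remote device.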
  • FIG. 3 illustrates a typical asynchronous data replication technique for mirroring data to a remote site. With reference to FIG. 1, the local host 16 confirms write completion to the local storage device 24 before the remote storage device 26 of the remote site 14 is written to. However, since the time needed for the data to cross the communication link 28 is significantly longer than the time needed for the local write to occur to the local storage device 24, the host 16 at the local site 12 queues the remote data writes for transmission at a later time. [0048]
  • The asynchronous operation of the illustrative data replication facility is as follows. An application operating on the host 16 of the local site 12 that wishes to write data to the local storage device 24 first issues a write request to the local storage device 24 (step 60). The write first goes to the local data replication facility 20 operating on the host 16 of the local site 12, which, upon receipt of the write request, sets a bit in a bitmap that corresponds to the data for the issued write request (step 62). The data is then written to the local storage device 24 of the local site 12 (step 64). At this point, the local data replication facility 20 copies the data into a queue to await forwarding to the remote storage device 26 of the remote site 14 (step 66). Those skilled in the art will recognize that the local data replication facility 20 can forward the data directly from the queue to the remote data replication facility 20′ and can also forward the data through one or more intermediary mechanisms, such as a switch, a router, a network interface apparatus, a forwarding mechanism or the like. The local data replication facility 20 notifies the application operating on the local host 16 that the write is complete and that the local storage device 24 is now ready for the next write (step 68). [0049]
  • The data in the queue is forwarded on a first in first out (FIFO) basis from the local site 12 to the remote site 14 (step 70). The data forwarded from the queue is packaged to include information that identifies a storage location at the remote host 18, such as a volume data path. Data is received at the remote site 14 by the remote data replication facility 20′ operating on the remote host 18 (step 72). When data is received, the remote data replication facility 20′ issues a write request for the received data (step 74). The received data is then written to the remote storage device 26 of the remote site 14 (step 76). Upon completion of the write at the remote site 14, the remote data replication facility 20′ sends an acknowledgement to the local data replication facility 20 (step 78). Upon receipt of the acknowledgment from the remote data replication facility 20′, the local data replication facility 20 clears the corresponding bit in the bitmap to signify that the remote write of the data has completed and that the remote asynchronous data replication is complete for that data set (step 79). [0050]
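A corresponding sketch of the asynchronous path, reusing the Disk and DirtyBitmap toys above, might queue writes and drain them in FIFO order on a background thread. The thread, queue, and all names here are illustrative assumptions, not the patent's implementation:

```python
import queue
import threading

def make_async_writer(pending, bitmap, local_disk):
    """Return a write entry point that completes as soon as the local write does."""
    def write(region, data):
        bitmap.set(region)               # step 62: mark the region as pending
        local_disk.write(region, data)   # step 64: write locally
        pending.put((region, data))      # step 66: queue for remote mirroring
        return True                      # step 68: app sees the write as complete
    return write

def drain_queue(pending, bitmap, remote_disk):
    """Forward queued writes in FIFO order (steps 70-79)."""
    while True:
        region, data = pending.get()     # step 70: oldest queued write first
        remote_disk.write(region, data)  # steps 72-76: remote write
        bitmap.clear(region)             # steps 78-79: acknowledgement clears the bit
        pending.task_done()

pending = queue.Queue()
write = make_async_writer(pending, bitmap, local)
threading.Thread(target=drain_queue, args=(pending, bitmap, remote),
                 daemon=True).start()
write(3, b"async payload")               # returns immediately after the local write
pending.join()                           # wait until the replica reaches `remote`
```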
  • To reduce the time necessary to resynchronize one or more remote storage devices with the local storage device after a remote mirroring outage, the illustrative embodiment of the present invention logs all local writes during the remote mirroring outage period. The data replication facility of the illustrative embodiment generally utilizes a bit vector scoreboard 80, illustrated in FIG. 4, to keep track of local storage device locations that change during the remote mirroring outage period. [0051]
  • The exemplary scoreboard 80 of FIG. 4 provides a way to log changes during a remote mirroring outage. The exemplary scoreboard 80 holds bits that represent regions, such as tracks and sectors of the local storage device, that have been modified during the outage period. The exemplary scoreboard 80 can be configured to support different levels of granularity, for example, one bit for every 64 kbits or one bit for every 128 kbits of memory. The utilization of a scoreboard allows only the last update to a local storage device location to be resynchronized when the outage period ends, rather than all the preceding updates. Consequently, the time required for the resynchronization process is significantly reduced. [0052]
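One plausible reading of the scoreboard can be sketched as a mapping from byte offsets to region indices. The 64 KB region size below is an assumption chosen for illustration (the specification speaks of kbits of memory per bit), and regions_touched is a hypothetical helper:

```python
REGION_SIZE = 64 * 1024   # assumed granularity: one scoreboard bit per 64 KB region

def regions_touched(offset, length, region_size=REGION_SIZE):
    """Return every region index that a write of `length` bytes at `offset` dirties."""
    first = offset // region_size
    last = (offset + length - 1) // region_size
    return range(first, last + 1)

scoreboard = set()   # indices of dirty regions
for offset, length in [(0, 100), (10, 50), (70_000, 10)]:
    scoreboard.update(regions_touched(offset, length))

# Three writes, two of them to the same region: only two regions need resync,
# no matter how many times each region was overwritten during the outage.
assert scoreboard == {0, 1}
```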
  • An important consideration in asynchronous data replication is ensuring that the remote writes are applied in the order in which they were posted by the local host application. For example, some data structures utilize a write ahead logging protocol to recover, from a write ahead log, any table updates that were not captured on disk due to media or other failure. Unfortunately, in remote mirroring the write ahead log and the table updates are typically deposited on different disk volumes. Consequently, when two separate applications are performing asynchronous writes, any attempt to replicate the data volumes to the remote storage device independently fails to preserve the correct write ordering. The illustrative data replication facility of the present invention is able to group together one structure, such as a write ahead log, and another structure, such as a corresponding table entry, into a single data set while preserving the write ordering for each asynchronous writer. In this manner, two separate processes or threads can run asynchronously to each other and can copy or mirror their respective volumes to remote storage devices while preserving their respective write order. Moreover, the data replication facility operator can instruct the illustrative data replication facility to select a number of identified volumes to form a group. The data replication facility operator supplies the illustrative data replication facility with the indicia that identify the volumes to be grouped together when instructed to do so. In this manner, an application utilizing multiple volumes for write order sensitive data can be replicated as a group, or single entity, while preserving the write ordering of the data. [0053]
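A sketch of such grouping, under the assumption that group membership is a simple set of volume names and the single entity an ordered list, might look as follows (VolumeGroup and the volume names are hypothetical, and Disk is the toy from the earlier sketches):

```python
class VolumeGroup:
    """Group several volumes so their writes replicate as one ordered entity."""
    def __init__(self, volume_names):
        self.members = set(volume_names)
        self.entity = []   # the single data set, kept in arrival order

    def record(self, volume, region, data):
        if volume in self.members:
            self.entity.append((volume, region, data))

    def replay(self, remote_disks):
        # Replaying in list order preserves write ordering across the whole
        # group, so the log entry always lands before its table update.
        for volume, region, data in self.entity:
            remote_disks[volume].write(region, data)

group = VolumeGroup({"wal_volume", "table_volume"})
group.record("wal_volume", 0, b"log: row 42 will change")   # log entry first...
group.record("table_volume", 9, b"row 42 = new value")      # ...then the table write
group.replay({"wal_volume": Disk(), "table_volume": Disk()})
```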
  • As depicted in FIG. 5, the illustrative data replication facility of the present invention can automatically switch from replicating mode to data logging mode, using the scoreboard 80, when a remote mirroring failure is detected (step 90). The operator of the illustrative embodiment can select, on a data structure basis, which data the illustrative data replication facility should log into the scoreboard 80 when remote mirroring is not available. Moreover, the local data replication facility can automatically resynchronize the logged data with the remote storage device upon the removal of the remote data mirroring interruption. [0054]
  • The local data replication facility may detect the reestablishment on its own or may receive notification from the local host (step 92). Consequently, a point in time can be easily identified for a data group across multiple volumes in the event that a remote mirroring failure occurs. In addition, the auto resynchronization technique supports the concept of grouping, in which the illustrative data replication facility is able to group multiple data sets into a single entity through the use of a queue (step 94) and to resynchronize the data (step 96). [0055]
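A sketch of this automatic mode switch and resynchronization, with hypothetical ReplicationMode and Replicator names and the in-memory Disk and scoreboard toys from the earlier sketches, might be:

```python
from enum import Enum

class ReplicationMode(Enum):
    REPLICATING = 1   # normal remote mirroring
    LOGGING = 2       # outage: local writes recorded in the scoreboard only

def resynchronize(scoreboard, local_disk, remote_disk):
    """Replay only the current contents of each dirty region (steps 94-96)."""
    for region in sorted(scoreboard):
        data = local_disk.regions.get(region)   # read the region as it is *now*
        if data is not None:
            remote_disk.write(region, data)     # one transfer per dirty region
    scoreboard.clear()                          # the volume pair is in sync again

class Replicator:
    def __init__(self, scoreboard):
        self.mode = ReplicationMode.REPLICATING
        self.scoreboard = scoreboard

    def on_mirror_failure(self):                # step 90: failure detected
        self.mode = ReplicationMode.LOGGING

    def on_mirror_restored(self, local_disk, remote_disk):   # step 92
        resynchronize(self.scoreboard, local_disk, remote_disk)
        self.mode = ReplicationMode.REPLICATING
```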
  • FIGS. 6 and 7 illustrate that the illustrative data replication facility is able to replicate a primary volume 100 from the local storage device 24 to multiple mirrored volumes 102 and 104 on one or more remote storage devices. FIG. 6 illustrates the situation in which the multiple mirrored volumes 102 and 104 are located on the same remote storage device 26, while FIG. 7 illustrates the situation in which the multiple mirrored volumes 102 and 104 are located on multiple remote storage devices 26 and 26′, respectively. [0056]
  • To ensure that the data on the remote storage device 26 matches the data on the local storage device 24, the local host 16 identifies to the local data replication facility 20 the data volumes from the local storage device 24 that are to be mirrored to the remote storage device 26 (step 110 in FIG. 8). Once the data volumes are identified, the local data replication facility 20 enters the data logging mode, using the scoreboard 80 to track the disk areas that are being mirrored to the remote storage device 26, and logs the selected data to a local queue as a single group (step 112 in FIG. 8). The local host 16 sends the appropriate command to the remote host 18, and correspondingly the remote data replication facility 20′, to initiate the routine to receive and store the selected data volumes on the remote storage device 26 (step 114 in FIG. 8). The local data replication facility 20 then stops placing the selected data into the local queue and waits for all the selected data to be written from the local queue to the remote storage device 26. The data volume is assigned some type of indicia, such as a volume name or volume number, by the volume owner, for example, an application, the data owner, or the data replication facility operator. The remote data replication facility 20′ writes the replicated data volume to a location on the remote storage device 26 based on the volume name and the file allocation table of the remote storage device 26. At this point, the remote data replication facility 20′ signals to the local data replication facility 20 that the mirroring to the remote storage device 26 is complete (step 116). [0057]
  • The ability to copy data from the local storage device 24 to the remote storage device 26 to ensure data uniformity in an asynchronous data mirroring environment does not impede any writes that occur to the local storage device 24 while remote mirroring is halted. The local data replication facility 20 is able to attend to such writes by performing the local write and logging it to the scoreboard 80. Consequently, when the volume copying is complete, the local data replication facility 20 can resynchronize with the remote data replication facility 20′, using the scoreboard 80 to update the remote storage device 26 with the local writes that occurred during the copy operation. Moreover, the local data replication facility 20 preserves the write ordering of all volumes copied during the copy operation. [0058]
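Under the same assumptions, this copy-then-catch-up behavior might be sketched as a bulk copy followed by one resynchronize pass (defined in the previous sketch) over the regions dirtied in the meantime; copy_volume is a hypothetical name:

```python
def copy_volume(local_disk, remote_disk, scoreboard):
    """Bulk-copy a volume, then replay the writes that arrived during the copy."""
    for region, data in list(local_disk.regions.items()):
        remote_disk.write(region, data)          # bulk copy of the whole volume
    # Any local write that arrived during the copy was performed locally and
    # logged to the scoreboard, so one resynchronize pass replays just those
    # regions to bring the remote copy fully up to date:
    resynchronize(scoreboard, local_disk, remote_disk)
```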
  • The illustrative data replication facility of the present invention is able to remotely mirror data across multiple remote hosts and remote sites that have distinct characteristics, as illustrated in FIG. 9. For example, the illustrative data replication facility 20 may replicate to a remote site 14 operating on the Solaris® operating system available from Sun Microsystems, Inc. of Palo Alto, Calif., and the data replication facility 20′ at the remote site 14, in turn, replicates the same data to the remote site 14′ operating on the Unix® operating system. In similar fashion, the transmission medium interconnecting each remote site may have a different bandwidth characteristic that affects the rate at which replicated data can be transmitted from site to site. As FIG. 9 illustrates, the illustrative data replication facility 20 operating at the local site 12 is able to remotely mirror data to multiple remote sites 14 and 14′. As illustrated, the remote data replication facility 20′ operating on the host 18 of the first remote site 14 becomes the local data replication facility with respect to the remote data replication facility 20″ operating on the host 18′ of the second remote site 14′. In this manner, each intermediate host can further mirror the data from the originating local site to as many remote sites as necessary while overcoming the incompatibility issues previously associated with the remote mirroring of data across multiple remote sites. [0059]
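A cascaded hop can be sketched as each site writing locally and then acting as the origin for the next hop. The loop below is a stand-in for what would, in practice, be independent data replication facilities on separate hosts and links, each with its own platform and bandwidth; cascade is a hypothetical name and Disk is the earlier toy:

```python
def cascade(sites, region, data):
    """Write at each site in order; every hop is an independent transfer."""
    for site in sites:
        site.write(region, data)
        # In a real deployment the next iteration would run on the next host,
        # driven by that host's own data replication facility over its own link,
        # so hop-to-hop transfer rates and operating platforms may all differ.

chain = [Disk(), Disk(), Disk()]   # originating site -> first remote -> second remote
cascade(chain, region=3, data=b"replicated once per hop")
assert all(site.regions[3] == b"replicated once per hop" for site in chain)
```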
  • While the present invention has been described with reference to a preferred embodiment thereof, one skilled in the art will appreciate that various changes in form and detail may be made without departing from the intended scope of the present invention as defined in the pending claims. [0060]

Claims (51)

What is claimed is:
1. In a storage network, a method for replicating data in said storage network, said method comprising the steps of:
identifying to a first data replication facility at a first programmable electronic device in said storage network a first structure and a second structure held by a storage device locally accessible to said first programmable electronic device;
instructing said first data replication facility to logically group said first structure and said second structure from said storage device to create a group;
generating a replica of said group at said first data replication facility; and
forwarding said replica in accordance with a communication protocol from said first data replication facility at said first programmable electronic device to a second data replication facility at a second programmable electronic device in said storage network for storage by a second storage device.
2. The method of claim 1, further comprising the step of, forwarding from said first data replication facility at said first programmable electronic device to said second data replication facility at said second programmable electronic device information identifying a storage location at said second storage device at which to store said replica.
3. The method of claim 1, wherein said first programmable electronic device forwards said replica to said second programmable electronic device in a synchronous manner.
4. The method of claim 1, wherein said first programmable electronic device forwards said replica to said second programmable electronic device in an asynchronous manner.
5. The method of claim 1, wherein said communication protocol comprises the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.
6. The method of claim 1, wherein said first programmable electronic device and said second programmable electronic device in said storage network operate without a volume manager facility.
7. The method of claim 1, wherein said first structure comprises a first logical volume and said second structure comprises a second logical volume.
8. A method for replicating data in a storage network to update one or more data structures of a remote storage device, said method comprising the steps of:
instructing a first data replication facility of a first electronic device in said storage network to logically associate a first data structure and a second data structure held by a locally accessible storage device, wherein said logical association defines a group;
generating a replica of said first data structure and said second data structure as said group; and
outputting said replica in accordance with a data communications protocol from said first replication facility of said first electronic device to a second replication facility of a second electronic device in said storage network to update said one or more data structures of said remote storage device.
9. The method of claim 8, further comprising the steps of, packaging with said replica, information identifying one or more storage locations for storage of said replica on said remote storage device.
10. The method of claim 8, further comprising the steps of, instructing said first data replication facility to preserve a write ordering of said first data structure and said second data structure in said group.
11. The method of claim 8, wherein said communication protocol comprises the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.
12. The method of claim 8, wherein said first electronic device and said second electronic device in said storage network perform said replicating of data without a volume manager.
13. A readable medium holding programmable electronic device readable instructions for executing a method for replicating data in a storage network, said method comprising the steps of:
identifying to a first data replication facility at a first programmable electronic device in said storage network a first structure and a second structure held by a storage device locally accessible to said first programmable electronic device;
instructing said first data replication facility to group said first structure and said second structure from said storage device;
generating a replica of said first structure and said second structure as a group at said first data replication facility; and
asserting said replica in accordance with a communication protocol from said first data replication facility at said first programmable electronic device to a second data replication facility at a second programmable electronic device in said storage network for storage by a second storage device locally accessible to said second programmable electronic device.
14. The readable medium of claim 13, further comprising the step of, forwarding from said first data replication facility at said first programmable electronic device to said second data replication facility at said second programmable electronic device information identifying a storage location for said second storage device to store said replica.
15. The readable medium of claim 13, wherein said first programmable electronic device forwards said replica to said second programmable electronic device in a synchronous manner.
16. The readable medium of claim 13, wherein said first programmable electronic device forwards said replica to said second programmable electronic device in an asynchronous manner.
17. The readable medium of claim 13, wherein said communication protocol comprises the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.
18. The readable medium of claim 13, wherein said first programmable electronic device and said second programmable electronic device in said storage network operate without a volume manager facility.
19. The readable medium of claim 13, wherein said first structure comprises a first group of records and said second structure comprises a second group of records.
20. The readable medium of claim 13, wherein said first structure comprises a first logical volume and said second structure comprises a second logical volume.
21. In a storage network, a method to create a replica of selected data in said storage network, said method comprising the steps of:
instructing a first data replication facility at a first electronic device in said storage network to track changes to one or more storage locations of a first storage medium that correspond to said selected data;
instructing said first data replication facility to generate said replica of said selected data based on said tracked changes to said one or more locations of said first storage medium;
placing said replica of said selected data in a data structure; and
forwarding said replica of said selected data in accordance with a communication protocol from said data structure to a second data replication facility at a second electronic device in said storage network for storage of said replica on a second storage medium by said second electronic device.
22. The method of claim 21, further comprising the step of, sending an instruction from said first data replication facility at said first electronic device to said second data replication facility at said second electronic device to initiate a process for receiving and storing said replica of said selected data.
23. The method of claim 21, further comprising the step of, halting said generation of said replica of said selected data until said replica held by said data structure is forwarded from said data structure to the second data replication facility at the second electronic device in said storage network.
24. The method of claim 21, further comprising the step of, packaging with said replica of said selected data information that identifies a storage location for storage of said replica of said selected data on said second storage medium.
25. The method of claim 21, further comprising the step of, identifying to said first data replication facility at said first electronic device in said storage network said selected data held by said first storage medium in communication with said first electronic device.
26. The method of claim 21, wherein said data structure comprises a queue.
27. The method of claim 21, wherein said first electronic device performs said forwarding of said replica of said selected data from said data structure to said second data replication facility at said second electronic device in a first in first out (FIFO) manner.
28. The method of claim 27, wherein said first electronic device performs said forwarding of said replica of said selected data from said data structure to said second data replication facility at said second electronic device in a synchronous manner.
29. The method of claim 27, wherein said first electronic device performs said forwarding of said replica of said selected data from said data structure to said second data replication facility of said second electronic device in an asynchronous manner.
30. The method of claim 21, wherein said communication protocol comprises the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.
31. The method of claim 21, wherein said first electronic device and said second electronic device operate without a volume manager facility.
32. The method of claim 21, wherein said one or more locations of said first storage medium comprise one of a track, a sector, a logical volume and a logical offset into said first storage medium.
33. A readable medium holding programmable electronic device readable instructions for executing a method to create a replica of selected data in a storage network, said method comprising the steps of:
instructing a first data replication facility at a first programmable electronic device in said network to track changes to one or more areas of a first storage device in communication with said first programmable electronic device, wherein the one or more areas correspond to said selected data;
instructing said first data replication facility to generate said replica of said selected data based on said tracked changes to said one or more areas of said first storage device;
placing said replica of said selected data in a data structure; and
forwarding said replica of said selected data in accordance with a communication protocol from said data structure to a second data replication facility at a second programmable electronic device in said storage network for storage of said replica on a second storage device in communication with said second programmable electronic device.
34. The readable medium of claim 33, further comprising the step of, sending an instruction from said first data replication facility at said first programmable electronic device to said second data replication facility at said second programmable electronic device to initiate a process for receiving and storing said replica of said selected data.
35. The readable medium of claim 33, further comprising the step of, halting said generation of said replica of said selected data until said replica held by said data structure is forwarded from said data structure to the second data replication facility at the second electronic device in said storage network.
36. The readable medium of claim 33, further comprising the step of, packaging with said replica of said selected data information that identifies a storage location for said replica of said selected data in said second storage device in communication with said second programmable electronic device.
37. The readable medium of claim 33, further comprising the step of, identifying to said first data replication facility at said first programmable electronic device in said storage network said selected data held by said first storage device in communication with said first programmable electronic device.
38. The readable medium of claim 33, wherein said data structure comprises a queue.
39. The readable medium of claim 33, wherein said first programmable electronic device forwards said replica of said selected data from said data structure to said second data replication facility at said second programmable electronic device in a first in first out (FIFO) manner.
40. The readable medium of claim 39, wherein said first programmable electronic device forwards said replica of said selected data from said data structure to said second data replication facility at said second programmable electronic device in a synchronous manner.
41. The readable medium of claim 39, wherein said first programmable electronic device forwards said replica of said selected data from said data structure to said second data replication facility of said second programmable electronic device in an asynchronous manner.
42. The readable medium of claim 33, wherein said communication protocol comprises the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite.
43. The readable medium of claim 33, wherein said first programmable electronic device and said second programmable electronic device operate without a volume manager facility.
44. The readable medium of claim 33, wherein said one or more areas of said first storage device comprise one of a track, a sector, a logical volume and a logical offset into said first storage device.
45. A method for replicating data in a distributed network to update a remote storage device with data from a local storage device, said method comprising the steps of:
instructing a first data replication facility of a first electronic device in said distributed network to track one or more locations of a local storage device that correspond to one or more identified volumes;
grouping each of said one or more identified volumes into a group by said first data replication facility;
generating a replica of said group at said first data replication facility; and
asserting said replica in accordance with a communication protocol toward a second replication facility of a second electronic device in said distributed network to update said remote storage device.
46. The method of claim 45, further comprising the step of, sending a command from said first data replication facility to said second data replication facility of said second electronic device to initiate receipt of said replica.
47. The method of claim 45, further comprising the step of, packaging with said replica information that indicates a storage location for each volume in said replica for storage on said remote storage device.
48. The method of claim 45, further comprising the step of, sending from said second data replication facility to said first data replication facility an indication that said update to said remote storage device completed.
49. The method of claim 45, further comprising the step of, writing the replica to a local queue for temporary storage before said asserting of said replica in accordance with said communication protocol toward said second replication facility of said second electronic device occurs.
50. The method of claim 45, further comprising the step of, identifying to said first data replication facility of said first electronic device in said distributed network said one or more volumes of said data for said replicating of data to update said remote storage device.
51. The method of claim 47, wherein said information comprises one of a volume name and a volume number.
US09/988,853 2001-07-13 2001-11-19 Storage network data replicator Abandoned US20030014432A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/988,853 US20030014432A1 (en) 2001-07-13 2001-11-19 Storage network data replicator

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/905,436 US20030014523A1 (en) 2001-07-13 2001-07-13 Storage network data replicator
US09/988,853 US20030014432A1 (en) 2001-07-13 2001-11-19 Storage network data replicator

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/905,436 Continuation US20030014523A1 (en) 2001-07-13 2001-07-13 Storage network data replicator

Publications (1)

Publication Number Publication Date
US20030014432A1 true US20030014432A1 (en) 2003-01-16

Family

ID=25420811

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/905,436 Abandoned US20030014523A1 (en) 2001-07-13 2001-07-13 Storage network data replicator
US09/988,854 Active 2024-08-04 US7340490B2 (en) 2001-07-13 2001-11-19 Storage network data replicator
US09/988,853 Abandoned US20030014432A1 (en) 2001-07-13 2001-11-19 Storage network data replicator

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/905,436 Abandoned US20030014523A1 (en) 2001-07-13 2001-07-13 Storage network data replicator
US09/988,854 Active 2024-08-04 US7340490B2 (en) 2001-07-13 2001-11-19 Storage network data replicator

Country Status (1)

Country Link
US (3) US20030014523A1 (en)

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030033327A1 (en) * 2001-07-27 2003-02-13 Chhandomay Mandal Method and apparatus for managing remote data replication in a distributed computer system
US20030061399A1 (en) * 2001-09-27 2003-03-27 Sun Microsystems, Inc. Method and apparatus for managing data volumes in a distributed computer system
US20030084198A1 (en) * 2001-10-30 2003-05-01 Sun Microsystems Inc. Method and apparatus for managing data services in a distributed computer system
US20030088713A1 (en) * 2001-10-11 2003-05-08 Sun Microsystems, Inc. Method and apparatus for managing data caching in a distributed computer system
US20030105840A1 (en) * 2001-11-16 2003-06-05 Sun Microsystems, Inc. Method and apparatus for managing configuration information in a distributed computer system
US20040006555A1 (en) * 2002-06-06 2004-01-08 Kensaku Yamamoto Full-text search device performing merge processing by using full-text index-for-registration/deletion storage part with performing registration/deletion processing by using other full-text index-for-registration/deletion storage part
US20040044643A1 (en) * 2002-04-11 2004-03-04 Devries David A. Managing multiple virtual machines
US20040111390A1 (en) * 2002-12-09 2004-06-10 Yasushi Saito Replication and replica management in a wide area file system
US20040133752A1 (en) * 2002-12-18 2004-07-08 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US20040139129A1 (en) * 2002-10-16 2004-07-15 Takashi Okuyama Migrating a database
US20040267829A1 (en) * 2003-06-27 2004-12-30 Hitachi, Ltd. Storage system
WO2005001713A1 (en) 2003-06-30 2005-01-06 International Business Machines Corporation Retrieving a replica of an electronic document in a computer network
US20050004925A1 (en) * 2003-05-07 2005-01-06 Nathaniel Stahl Copy-on-write mapping file system
EP1505504A2 (en) * 2003-08-04 2005-02-09 Hitachi, Ltd. Remote copy system
US20050050115A1 (en) * 2003-08-29 2005-03-03 Kekre Anand A. Method and system of providing cascaded replication
US20050055523A1 (en) * 2003-06-27 2005-03-10 Hitachi, Ltd. Data processing system
US20050060505A1 (en) * 2003-09-17 2005-03-17 Hitachi, Ltd. Remote storage disk control device and method for controlling the same
WO2005029333A1 (en) * 2003-09-12 2005-03-31 Levanta, Inc. Tracking and replicating file system changes
EP1562118A2 (en) * 2004-01-30 2005-08-10 Hitachi, Ltd. Replicating log data in a data processing system
US20060020754A1 (en) * 2004-07-21 2006-01-26 Susumu Suzuki Storage system
US20060031647A1 (en) * 2004-08-04 2006-02-09 Hitachi, Ltd. Storage system and data processing system
EP1647891A2 (en) * 2004-10-14 2006-04-19 Hitachi Ltd. Computer system performing remote copying via an intermediate site
US20060090048A1 (en) * 2004-10-27 2006-04-27 Katsuhiro Okumoto Storage system and storage control device
US7054890B2 (en) 2001-09-21 2006-05-30 Sun Microsystems, Inc. Method and apparatus for managing data imaging in a distributed computer system
US20060117154A1 (en) * 2003-09-09 2006-06-01 Hitachi, Ltd. Data processing system
US20060235863A1 (en) * 2005-04-14 2006-10-19 Akmal Khan Enterprise computer management
US20060277384A1 (en) * 2005-06-01 2006-12-07 Hitachi, Ltd. Method and apparatus for auditing remote copy systems
US7197519B2 (en) 2002-11-14 2007-03-27 Hitachi, Ltd. Database system including center server and local servers
US20070100909A1 (en) * 2005-10-31 2007-05-03 Michael Padovano Data mirroring using a virtual connection
US7216209B2 (en) 2003-12-15 2007-05-08 Hitachi, Ltd. Data processing system having a plurality of storage systems
US7370025B1 (en) * 2002-12-17 2008-05-06 Symantec Operating Corporation System and method for providing access to replicated data
US20080114815A1 (en) * 2003-03-27 2008-05-15 Atsushi Sutoh Data control method for duplicating data between computer systems
US7529898B2 (en) 2004-07-09 2009-05-05 International Business Machines Corporation Method for backing up and restoring data
US7549080B1 (en) * 2002-08-27 2009-06-16 At&T Corp Asymmetric data mirroring
US7600087B2 (en) 2004-01-15 2009-10-06 Hitachi, Ltd. Distributed remote copy system
US20100199038A1 (en) * 2003-06-27 2010-08-05 Hitachi, Ltd. Remote copy method and remote copy system
US20110251999A1 (en) * 2010-04-07 2011-10-13 Hitachi, Ltd. Asynchronous remote copy system and storage control method
US8694538B1 (en) * 2004-06-18 2014-04-08 Symantec Operating Corporation Method and apparatus for logging write requests to a storage volume in a network data switch
US20140164323A1 (en) * 2012-12-10 2014-06-12 Transparent Io, Inc. Synchronous/Asynchronous Storage System
US8799211B1 (en) 2005-09-06 2014-08-05 Symantec Operating Corporation Cascaded replication system with remote site resynchronization after intermediate site failure
CN107257391A (en) * 2017-06-09 2017-10-17 济南中维世纪科技有限公司 The method of monitoring device locality protection
US11269679B2 (en) 2018-05-04 2022-03-08 Microsoft Technology Licensing, Llc Resource-governed protocol and runtime for distributed databases with consistency models
US11386122B2 (en) * 2019-12-13 2022-07-12 EMC IP Holding Company LLC Self healing fast sync any point in time replication systems using augmented Merkle trees
US20220245171A1 (en) * 2017-03-30 2022-08-04 Amazon Technologies, Inc. Selectively replicating changes to hierarchial data structures
US11675812B1 (en) 2022-09-29 2023-06-13 Fmr Llc Synchronization of metadata between databases in a cloud computing environment
US11775558B1 (en) * 2022-04-11 2023-10-03 Fmr Llc Systems and methods for automatic management of database data replication processes
US11928085B2 (en) 2019-12-13 2024-03-12 EMC IP Holding Company LLC Using merkle trees in any point in time replication

Families Citing this family (159)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9603582D0 (en) 1996-02-20 1996-04-17 Hewlett Packard Co Method of accessing service resource items that are for use in a telecommunications system
US6418478B1 (en) * 1997-10-30 2002-07-09 Commvault Systems, Inc. Pipelined high speed data transfer mechanism
US7581077B2 (en) * 1997-10-30 2009-08-25 Commvault Systems, Inc. Method and system for transferring data in a storage operation
JP4689137B2 (en) * 2001-08-08 2011-05-25 株式会社日立製作所 Remote copy control method and storage system
US20060195667A1 (en) * 2001-05-10 2006-08-31 Hitachi, Ltd. Remote copy for a storage controller with consistent write order
JP2003141006A (en) * 2001-07-17 2003-05-16 Canon Inc Communication system, communication device, communication method, storage medium and program
US6968369B2 (en) * 2001-09-27 2005-11-22 Emc Corporation Remote data facility over an IP network
US7076690B1 (en) 2002-04-15 2006-07-11 Emc Corporation Method and apparatus for managing access to volumes of storage
US10489449B2 (en) * 2002-05-23 2019-11-26 Gula Consulting Limited Liability Company Computer accepting voice input and/or generating audible output
US8611919B2 (en) 2002-05-23 2013-12-17 Wounder Gmbh., Llc System, method, and computer program product for providing location based services and mobile e-commerce
US7096330B1 (en) 2002-07-29 2006-08-22 Veritas Operating Corporation Symmetrical data change tracking
US7620666B1 (en) 2002-07-29 2009-11-17 Symantec Operating Company Maintaining persistent data change maps for fast data synchronization and restoration
US7103727B2 (en) * 2002-07-30 2006-09-05 Hitachi, Ltd. Storage system for multi-site remote copy
US7613741B2 (en) * 2002-08-01 2009-11-03 Oracle International Corporation Utilizing rules in a distributed information sharing system
US7707151B1 (en) 2002-08-02 2010-04-27 Emc Corporation Method and apparatus for migrating data
US7103796B1 (en) 2002-09-03 2006-09-05 Veritas Operating Corporation Parallel data change tracking for maintaining mirrored data consistency
GB2410106B (en) 2002-09-09 2006-09-13 Commvault Systems Inc Dynamic storage device pooling in a computer system
US8370542B2 (en) 2002-09-16 2013-02-05 Commvault Systems, Inc. Combined stream auxiliary copy system and method
US7546482B2 (en) 2002-10-28 2009-06-09 Emc Corporation Method and apparatus for monitoring the storage of data in a computer system
US7036040B2 (en) * 2002-11-26 2006-04-25 Microsoft Corporation Reliability of diskless network-bootable computers using non-volatile memory cache
JP2004178254A (en) * 2002-11-27 2004-06-24 Hitachi Ltd Information processing system, storage system, storage device controller and program
AU2003300946A1 (en) * 2002-12-18 2004-07-29 Emc Corporation Automated media library configuration
US7752403B1 (en) * 2003-04-01 2010-07-06 At&T Corp. Methods and systems for secure dispersed redundant data storage
WO2004090740A1 (en) 2003-04-03 2004-10-21 Commvault Systems, Inc. System and method for dynamically sharing media in a computer network
US7263590B1 (en) 2003-04-23 2007-08-28 Emc Corporation Method and apparatus for migrating data in a computer system
US7415591B1 (en) 2003-04-23 2008-08-19 Emc Corporation Method and apparatus for migrating data and automatically provisioning a target for the migration
US7805583B1 (en) * 2003-04-23 2010-09-28 Emc Corporation Method and apparatus for migrating data in a clustered computer system environment
US7266665B2 (en) 2003-06-17 2007-09-04 International Business Machines Corporation Method, system, and article of manufacture for remote copying of data
US7467168B2 (en) * 2003-06-18 2008-12-16 International Business Machines Corporation Method for mirroring data at storage locations
US7043665B2 (en) * 2003-06-18 2006-05-09 International Business Machines Corporation Method, system, and program for handling a failover to a remote storage location
US8136025B1 (en) 2003-07-03 2012-03-13 Google Inc. Assigning document identification tags
US7568034B1 (en) * 2003-07-03 2009-07-28 Google Inc. System and method for data distribution
US7797571B2 (en) * 2003-07-15 2010-09-14 International Business Machines Corporation System, method and circuit for mirroring data
US7552294B1 (en) 2003-08-07 2009-06-23 Crossroads Systems, Inc. System and method for processing multiple concurrent extended copy commands to a single destination device
US7251708B1 (en) 2003-08-07 2007-07-31 Crossroads Systems, Inc. System and method for maintaining and reporting a log of multi-threaded backups
US7447852B1 (en) 2003-08-07 2008-11-04 Crossroads Systems, Inc. System and method for message and error reporting for multiple concurrent extended copy commands to a single destination device
US20050256971A1 (en) * 2003-08-14 2005-11-17 Oracle International Corporation Runtime load balancing of work across a clustered computing system using current service performance levels
US7516221B2 (en) * 2003-08-14 2009-04-07 Oracle International Corporation Hierarchical management of the dynamic allocation of resources in a multi-node system
US7437459B2 (en) * 2003-08-14 2008-10-14 Oracle International Corporation Calculation of service performance grades in a multi-node environment that hosts the services
US8365193B2 (en) * 2003-08-14 2013-01-29 Oracle International Corporation Recoverable asynchronous message driven processing in a multi-node system
US7937493B2 (en) * 2003-08-14 2011-05-03 Oracle International Corporation Connection pool use of runtime load balancing service performance advisories
US7490113B2 (en) * 2003-08-27 2009-02-10 International Business Machines Corporation Database log capture that publishes transactions to multiple targets to handle unavailable targets by separating the publishing of subscriptions and subsequently recombining the publishing
JP4021823B2 (en) * 2003-09-01 2007-12-12 株式会社日立製作所 Remote copy system and remote copy method
US7376859B2 (en) * 2003-10-20 2008-05-20 International Business Machines Corporation Method, system, and article of manufacture for data replication
US7693960B1 (en) * 2003-10-22 2010-04-06 Sprint Communications Company L.P. Asynchronous data storage system with geographic diversity
US7200726B1 (en) 2003-10-24 2007-04-03 Network Appliance, Inc. Method and apparatus for reducing network traffic during mass storage synchronization phase of synchronous data mirroring
US7596672B1 (en) 2003-10-24 2009-09-29 Network Appliance, Inc. Synchronous mirroring including writing image updates to a file
US7203796B1 (en) * 2003-10-24 2007-04-10 Network Appliance, Inc. Method and apparatus for synchronous data mirroring
US7325109B1 (en) * 2003-10-24 2008-01-29 Network Appliance, Inc. Method and apparatus to mirror data at two separate sites without comparing the data at the two sites
US7143122B2 (en) * 2003-10-28 2006-11-28 Pillar Data Systems, Inc. Data replication in data storage systems
US7836014B2 (en) 2003-11-04 2010-11-16 Bakbone Software, Inc. Hybrid real-time data replication
US7870354B2 (en) 2003-11-04 2011-01-11 Bakbone Software, Inc. Data replication from one-to-one or one-to-many heterogeneous devices
US8060619B1 (en) * 2003-11-07 2011-11-15 Symantec Operating Corporation Direct connections to a plurality of storage object replicas in a computer network
WO2005065084A2 (en) * 2003-11-13 2005-07-21 Commvault Systems, Inc. System and method for providing encryption in pipelined storage operations in a storage network
US7299378B2 (en) * 2004-01-15 2007-11-20 Oracle International Corporation Geographically distributed clusters
US8108483B2 (en) * 2004-01-30 2012-01-31 Microsoft Corporation System and method for generating a consistent user namespace on networked devices
US7277997B2 (en) 2004-03-16 2007-10-02 International Business Machines Corporation Data consistency for mirroring updatable source data storage
WO2005096186A2 (en) * 2004-03-18 2005-10-13 Thomson Licensing Automatic mirroring of information
JP4476683B2 (en) * 2004-04-28 2010-06-09 株式会社日立製作所 Data processing system
US8554806B2 (en) * 2004-05-14 2013-10-08 Oracle International Corporation Cross platform transportable tablespaces
US7571173B2 (en) * 2004-05-14 2009-08-04 Oracle International Corporation Cross-platform transportable database
US7747760B2 (en) * 2004-07-29 2010-06-29 International Business Machines Corporation Near real-time data center switching for client requests
US7502824B2 (en) * 2004-08-12 2009-03-10 Oracle International Corporation Database shutdown with session migration
JP4475079B2 (en) * 2004-09-29 2010-06-09 株式会社日立製作所 Computer system configuration management method
JP2006106985A (en) * 2004-10-01 2006-04-20 Hitachi Ltd Computer system, storage device and method of managing storage
WO2006053084A2 (en) 2004-11-05 2006-05-18 Commvault Systems, Inc. Method and system of pooling storage devices
US7490207B2 (en) * 2004-11-08 2009-02-10 Commvault Systems, Inc. System and method for performing auxillary storage operations
US8346843B2 (en) 2004-12-10 2013-01-01 Google Inc. System and method for scalable data distribution
US9489424B2 (en) * 2004-12-20 2016-11-08 Oracle International Corporation Cursor pre-fetching
US7475281B2 (en) * 2005-03-10 2009-01-06 International Business Machines Corporation Method for synchronizing replicas of a database
EP2328089B1 (en) 2005-04-20 2014-07-09 Axxana (Israel) Ltd. Remote data mirroring system
US9195397B2 (en) 2005-04-20 2015-11-24 Axxana (Israel) Ltd. Disaster-proof data recovery
US7610314B2 (en) * 2005-10-07 2009-10-27 Oracle International Corporation Online tablespace recovery for export
CN101025741A (en) * 2006-02-17 2007-08-29 鸿富锦精密工业(深圳)有限公司 Database back up system and method
US8843783B2 (en) * 2006-03-31 2014-09-23 Emc Corporation Failover to backup site in connection with triangular asynchronous replication
US20070234105A1 (en) * 2006-03-31 2007-10-04 Quinn Brett A Failover to asynchronous backup site in connection with triangular asynchronous replication
US8909599B2 (en) * 2006-11-16 2014-12-09 Oracle International Corporation Efficient migration of binary XML across databases
US8312323B2 (en) 2006-12-22 2012-11-13 Commvault Systems, Inc. Systems and methods for remote monitoring in a computer network and reporting a failed migration operation without accessing the data being moved
US20080209145A1 (en) * 2007-02-27 2008-08-28 Shyamsundar Ranganathan Techniques for asynchronous data replication
US8271648B2 (en) * 2007-04-03 2012-09-18 Cinedigm Digital Cinema Corp. Method and apparatus for media duplication
US9027025B2 (en) 2007-04-17 2015-05-05 Oracle International Corporation Real-time database exception monitoring tool using instance eviction data
US8001307B1 (en) 2007-04-27 2011-08-16 Network Appliance, Inc. Apparatus and a method to eliminate deadlock in a bi-directionally mirrored data storage system
US7685182B2 (en) * 2007-05-08 2010-03-23 Microsoft Corporation Interleaved garbage collections
JP2008299481A (en) * 2007-05-30 2008-12-11 Hitachi Ltd Storage system and method for copying data between multiple sites
US8073922B2 (en) * 2007-07-27 2011-12-06 TwinStrata, Inc. System and method for remote asynchronous data replication
WO2009047751A2 (en) * 2007-10-08 2009-04-16 Axxana (Israel) Ltd. Fast data recovery system
US8862689B2 (en) * 2007-10-24 2014-10-14 International Business Machines Corporation Local flash memory and remote server hybrid continuous data protection
US9143559B2 (en) * 2007-12-05 2015-09-22 International Business Machines Corporation Directory server replication
US20090158284A1 (en) * 2007-12-18 2009-06-18 Inventec Corporation System and method of processing sender requests for remote replication
EP2286343A4 (en) * 2008-05-19 2012-02-15 Axxana (Israel) Ltd. Resilient data storage in the presence of replication faults and rolling disasters
US8601526B2 (en) 2008-06-13 2013-12-03 United Video Properties, Inc. Systems and methods for displaying media content and media guidance information
US8230185B2 (en) 2008-10-08 2012-07-24 International Business Machines Corporation Method for optimizing cleaning of maps in FlashCopy cascades containing incremental maps
WO2010076755A2 (en) * 2009-01-05 2010-07-08 Axxana (Israel) Ltd. Disaster-proof storage unit having transmission capabilities
JP5387015B2 (en) * 2009-02-02 2014-01-15 Ricoh Co., Ltd. Information processing apparatus and information processing method of the same
EP2394225B1 (en) * 2009-02-05 2019-01-09 Wwpass Corporation Centralized authentication system with safe private data storage and method
US8752153B2 (en) 2009-02-05 2014-06-10 Wwpass Corporation Accessing data based on authenticated user, provider and system
US8839391B2 (en) 2009-02-05 2014-09-16 Wwpass Corporation Single token authentication
US8713661B2 (en) 2009-02-05 2014-04-29 Wwpass Corporation Authentication service
US8751829B2 (en) 2009-02-05 2014-06-10 Wwpass Corporation Dispersed secure data storage and retrieval
US9128895B2 (en) * 2009-02-19 2015-09-08 Oracle International Corporation Intelligent flood control management
WO2010114933A1 (en) * 2009-03-31 2010-10-07 Napera Networks Using in-the-cloud storage for computer health data
US8238538B2 (en) 2009-05-28 2012-08-07 Comcast Cable Communications, Llc Stateful home phone service
US9014546B2 (en) 2009-09-23 2015-04-21 Rovi Guides, Inc. Systems and methods for automatically detecting users within detection regions of media devices
US20110078343A1 (en) * 2009-09-29 2011-03-31 Cleversafe, Inc. Distributed storage network including memory diversity
US9454325B2 (en) * 2009-11-04 2016-09-27 Broadcom Corporation Method and system for offline data access on computer systems
WO2011067702A1 (en) 2009-12-02 2011-06-09 Axxana (Israel) Ltd. Distributed intelligent network
US20110164175A1 (en) * 2010-01-05 2011-07-07 Rovi Technologies Corporation Systems and methods for providing subtitles on a wireless communications device
US9165086B2 (en) 2010-01-20 2015-10-20 Oracle International Corporation Hybrid binary XML storage model for efficient XML processing
US8862816B2 (en) 2010-01-28 2014-10-14 International Business Machines Corporation Mirroring multiple writeable storage arrays
US10296428B2 (en) * 2010-05-17 2019-05-21 Veritas Technologies Llc Continuous replication in a distributed computer system environment
US8954669B2 (en) 2010-07-07 2015-02-10 Nexenta Systems, Inc. Method and system for heterogeneous data volume
US8984241B2 (en) 2010-07-07 2015-03-17 Nexenta Systems, Inc. Heterogeneous redundant storage array
US8458530B2 (en) 2010-09-21 2013-06-04 Oracle International Corporation Continuous system health indicator for managing computer system alerts
US10303357B2 (en) 2010-11-19 2019-05-28 TIVO SOLUTIONS INC. Flick to send or display content
US20120179874A1 (en) * 2011-01-07 2012-07-12 International Business Machines Corporation Scalable cloud storage architecture
US9767098B2 (en) 2012-08-08 2017-09-19 Amazon Technologies, Inc. Archival data storage system
US9563681B1 (en) 2012-08-08 2017-02-07 Amazon Technologies, Inc. Archival data flow management
US8812566B2 (en) 2011-05-13 2014-08-19 Nexenta Systems, Inc. Scalable storage for virtual machines
US9292588B1 (en) 2011-07-20 2016-03-22 Jpmorgan Chase Bank, N.A. Safe storing data for disaster recovery
US8555079B2 (en) 2011-12-06 2013-10-08 Wwpass Corporation Token management
US9372910B2 (en) 2012-01-04 2016-06-21 International Business Machines Corporation Managing remote data replication
US10120579B1 (en) 2012-08-08 2018-11-06 Amazon Technologies, Inc. Data storage management for sequentially written media
US9779035B1 (en) 2012-08-08 2017-10-03 Amazon Technologies, Inc. Log-based data storage on sequentially written media
US9652487B1 (en) 2012-08-08 2017-05-16 Amazon Technologies, Inc. Programmable checksum calculations on data storage devices
US9225675B2 (en) 2012-08-08 2015-12-29 Amazon Technologies, Inc. Data storage application programming interface
US8805793B2 (en) 2012-08-08 2014-08-12 Amazon Technologies, Inc. Data storage integrity validation
US9830111B1 (en) 2012-08-08 2017-11-28 Amazon Technologies, Inc. Data storage space management
US8959067B1 (en) 2012-08-08 2015-02-17 Amazon Technologies, Inc. Data storage inventory indexing
US9904788B2 (en) 2012-08-08 2018-02-27 Amazon Technologies, Inc. Redundant key management
US10430391B2 (en) 2012-09-28 2019-10-01 Oracle International Corporation Techniques for activity tracking, data classification, and in database archiving
US10379988B2 (en) 2012-12-21 2019-08-13 Commvault Systems, Inc. Systems and methods for performance monitoring
US9448948B2 (en) * 2013-01-10 2016-09-20 Dell Products L.P. Efficient replica cleanup during resynchronization
US10558581B1 (en) * 2013-02-19 2020-02-11 Amazon Technologies, Inc. Systems and techniques for data recovery in a keymapless data storage system
US9378261B1 (en) * 2013-09-30 2016-06-28 Emc Corporation Unified synchronous replication for block and file objects
US9330155B1 (en) * 2013-09-30 2016-05-03 Emc Corporation Unified management of sync and async replication for block and file objects
WO2015056169A1 (en) 2013-10-16 2015-04-23 Axxana (Israel) Ltd. Zero-transaction-loss recovery for database systems
US9674563B2 (en) 2013-11-04 2017-06-06 Rovi Guides, Inc. Systems and methods for recommending content
KR20150082812A (en) * 2014-01-08 2015-07-16 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
US9304865B2 (en) 2014-03-26 2016-04-05 International Business Machines Corporation Efficient handling of semi-asynchronous RAID write failures
US9898213B2 (en) 2015-01-23 2018-02-20 Commvault Systems, Inc. Scalable auxiliary copy processing using media agent resources
US9904481B2 (en) 2015-01-23 2018-02-27 Commvault Systems, Inc. Scalable auxiliary copy processing in a storage management system using media agent resources
US10379958B2 (en) 2015-06-03 2019-08-13 Axxana (Israel) Ltd. Fast archiving for database systems
US11386060B1 (en) 2015-09-23 2022-07-12 Amazon Technologies, Inc. Techniques for verifiably processing data in distributed computing systems
US10496320B2 (en) 2015-12-28 2019-12-03 Netapp Inc. Synchronous replication
US10474653B2 (en) 2016-09-30 2019-11-12 Oracle International Corporation Flexible in-memory column store placement
CN110703985B (en) * 2016-10-25 2021-05-18 Huawei Technologies Co., Ltd. Data synchronization method and out-of-band management equipment
US10909097B2 (en) 2017-02-05 2021-02-02 Veritas Technologies Llc Method and system for dependency analysis of workloads for orchestration
US11310137B2 (en) 2017-02-05 2022-04-19 Veritas Technologies Llc System and method to propagate information across a connected set of entities irrespective of the specific entity type
US10592326B2 (en) 2017-03-08 2020-03-17 Axxana (Israel) Ltd. Method and apparatus for data loss assessment
US11010261B2 (en) 2017-03-31 2021-05-18 Commvault Systems, Inc. Dynamically allocating streams during restoration of data
US11853575B1 (en) 2019-06-08 2023-12-26 Veritas Technologies Llc Method and system for data consistency across failure and recovery of infrastructure
US11429640B2 (en) 2020-02-28 2022-08-30 Veritas Technologies Llc Methods and systems for data resynchronization in a replication environment
US11531604B2 (en) 2020-02-28 2022-12-20 Veritas Technologies Llc Methods and systems for data resynchronization in a replication environment
US11928030B2 (en) 2020-03-31 2024-03-12 Veritas Technologies Llc Optimize backup from universal share
US10855660B1 (en) 2020-04-30 2020-12-01 Snowflake Inc. Private virtual network replication of cloud databases
US11442652B1 (en) 2020-07-23 2022-09-13 Pure Storage, Inc. Replication handling during storage system transportation
US11349917B2 (en) 2020-07-23 2022-05-31 Pure Storage, Inc. Replication handling among distinct networks
US11500701B1 (en) * 2020-12-11 2022-11-15 Amazon Technologies, Inc. Providing a global queue through replication
US11593223B1 (en) 2021-09-02 2023-02-28 Commvault Systems, Inc. Using resource pool administrative entities in a data storage management system to provide shared infrastructure to tenants

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5423037A (en) * 1992-03-17 1995-06-06 Teleserve Transaction Technology AS Continuously available database server having multiple groups of nodes, each group maintaining a database copy with fragments stored on multiple nodes
EP0678812A1 (en) * 1994-04-20 1995-10-25 Microsoft Corporation Replication verification
US5502443A (en) * 1994-06-27 1996-03-26 Newberry; Robert S. Transponder for interactive data exchange between individually user-controlled computer-steered systems
AU6678096A (en) * 1995-07-20 1997-02-18 Novell, Inc. Transaction synchronization in a disconnectable computer and network
US5870537A (en) * 1996-03-13 1999-02-09 International Business Machines Corporation Concurrent switch to shadowed device for storage controller and device errors
US5832222A (en) * 1996-06-19 1998-11-03 Ncr Corporation Apparatus for providing a single image of an I/O subsystem in a geographically dispersed computer system
US5812793A (en) * 1996-06-26 1998-09-22 Microsoft Corporation System and method for asynchronous store and forward data replication
US5909540A (en) * 1996-11-22 1999-06-01 Mangosoft Corporation System and method for providing highly available data storage using globally addressable memory
US6324571B1 (en) * 1998-09-21 2001-11-27 Microsoft Corporation Floating single master operation
US6718347B1 (en) * 1999-01-05 2004-04-06 Emc Corporation Method and apparatus for maintaining coherence among copies of a database shared by multiple computers
US6401120B1 (en) * 1999-03-26 2002-06-04 Microsoft Corporation Method and system for consistent cluster operational data in a server cluster using a quorum of replicas
JP3901883B2 (en) * 1999-09-07 2007-04-04 Fujitsu Limited Data backup method, data backup system and recording medium
US6728751B1 (en) * 2000-03-16 2004-04-27 International Business Machines Corporation Distributed back up of data on a network
US6587970B1 (en) * 2000-03-22 2003-07-01 Emc Corporation Method and apparatus for performing site failover
US6658590B1 (en) * 2000-03-30 2003-12-02 Hewlett-Packard Development Company, L.P. Controller-based transaction logging system for data recovery in a storage area network
US6691139B2 (en) * 2001-01-31 2004-02-10 Hewlett-Packard Development Company, L.P. Recreation of archives at a disaster recovery site
US7340505B2 (en) * 2001-04-02 2008-03-04 Akamai Technologies, Inc. Content storage and replication in a managed internet content storage environment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544347A (en) * 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US6324654B1 (en) * 1998-03-30 2001-11-27 Legato Systems, Inc. Computer network remote data mirroring system
US6209002B1 (en) * 1999-02-17 2001-03-27 Emc Corporation Method and apparatus for cascading data through redundant data storage units
US6629264B1 (en) * 2000-03-30 2003-09-30 Hewlett-Packard Development Company, L.P. Controller-based remote copy system with logical unit grouping
US6662198B2 (en) * 2001-08-30 2003-12-09 Zoteca Inc. Method and system for asynchronous transmission, backup, distribution of data and file sharing

Cited By (126)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030033327A1 (en) * 2001-07-27 2003-02-13 Chhandomay Mandal Method and apparatus for managing remote data replication in a distributed computer system
US6772178B2 (en) * 2001-07-27 2004-08-03 Sun Microsystems, Inc. Method and apparatus for managing remote data replication in a distributed computer system
US7054890B2 (en) 2001-09-21 2006-05-30 Sun Microsystems, Inc. Method and apparatus for managing data imaging in a distributed computer system
US20030061399A1 (en) * 2001-09-27 2003-03-27 Sun Microsystems, Inc. Method and apparatus for managing data volumes in a distributed computer system
US6996587B2 (en) 2001-09-27 2006-02-07 Sun Microsystems, Inc. Method and apparatus for managing data volumes in a distributed computer system
US20030088713A1 (en) * 2001-10-11 2003-05-08 Sun Microsystems, Inc. Method and apparatus for managing data caching in a distributed computer system
US20030084198A1 (en) * 2001-10-30 2003-05-01 Sun Microsystems Inc. Method and apparatus for managing data services in a distributed computer system
US7555541B2 (en) 2001-11-16 2009-06-30 Sun Microsystems, Inc. Method and apparatus for managing configuration information in a distributed computer system
US20030105840A1 (en) * 2001-11-16 2003-06-05 Sun Microsystems, Inc. Method and apparatus for managing configuration information in a distributed computer system
US7159015B2 (en) 2001-11-16 2007-01-02 Sun Microsystems, Inc. Method and apparatus for managing configuration information in a distributed computer system
US20040044643A1 (en) * 2002-04-11 2004-03-04 Devries David A. Managing multiple virtual machines
US20070118543A1 (en) * 2002-06-06 2007-05-24 Kensaku Yamamoto Full-text search device performing merge processing by using full-text index-for-registration/deletion storage part with performing registration/deletion processing by using other full-text index-for-registration/deletion storage part
US7644097B2 (en) 2002-06-06 2010-01-05 Ricoh Company, Ltd. Full-text search device performing merge processing by using full-text index-for-registration/deletion storage part with performing registration/deletion processing by using other full-text index-for-registration/deletion storage part
US20070136258A1 (en) * 2002-06-06 2007-06-14 Kensaku Yamamoto Full-text search device performing merge processing by using full-text index-for-registration/deletion storage part with performing registration/deletion processing by using other full-text index-for-registration/deletion storage part
US7730069B2 (en) 2002-06-06 2010-06-01 Ricoh Company, Ltd. Full-text search device performing merge processing by using full-text index-for-registration/deletion storage part with performing registration/deletion processing by using other full-text index-for-registration/deletion storage part
US7702666B2 (en) * 2002-06-06 2010-04-20 Ricoh Company, Ltd. Full-text search device performing merge processing by using full-text index-for-registration/deletion storage part with performing registration/deletion processing by using other full-text index-for-registration/deletion storage part
US20040006555A1 (en) * 2002-06-06 2004-01-08 Kensaku Yamamoto Full-text search device performing merge processing by using full-text index-for-registration/deletion storage part with performing registration/deletion processing by using other full-text index-for-registration/deletion storage part
US10887391B2 (en) 2002-08-27 2021-01-05 At&T Intellectual Property Ii, L.P. Remote cloud backup of data
US8819479B2 (en) 2002-08-27 2014-08-26 At&T Intellectual Property Ii, L.P. Asymmetric data mirroring
US20110179243A1 (en) * 2002-08-27 2011-07-21 At&T Intellectual Property Ii, L.P. Asymmetric Data Mirroring
US8443229B2 (en) * 2002-08-27 2013-05-14 At&T Intellectual Property I, L.P. Asymmetric data mirroring
US10044805B2 (en) 2002-08-27 2018-08-07 At&T Intellectual Property Ii, L.P. Asymmetric data mirroring
US9344502B2 (en) 2002-08-27 2016-05-17 At&T Intellectual Property Ii, L.P. Asymmetric data mirroring
US7549080B1 (en) * 2002-08-27 2009-06-16 At&T Corp Asymmetric data mirroring
US20040139129A1 (en) * 2002-10-16 2004-07-15 Takashi Okuyama Migrating a database
US7197519B2 (en) 2002-11-14 2007-03-27 Hitachi, Ltd. Database system including center server and local servers
US20070143362A1 (en) * 2002-11-14 2007-06-21 Norifumi Nishikawa Database system including center server and local servers
US7693879B2 (en) 2002-11-14 2010-04-06 Hitachi, Ltd. Database system including center server and local servers
US7739240B2 (en) * 2002-12-09 2010-06-15 Hewlett-Packard Development Company, L.P. Replication and replica management in a wide area file system
US20040111390A1 (en) * 2002-12-09 2004-06-10 Yasushi Saito Replication and replica management in a wide area file system
US7370025B1 (en) * 2002-12-17 2008-05-06 Symantec Operating Corporation System and method for providing access to replicated data
US20080288733A1 (en) * 2002-12-18 2008-11-20 Susumu Suzuki Method for controlling storage device controller, storage device controller, and program
US7962712B2 (en) 2002-12-18 2011-06-14 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US7418563B2 (en) 2002-12-18 2008-08-26 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US20060168412A1 (en) * 2002-12-18 2006-07-27 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US7089386B2 (en) 2002-12-18 2006-08-08 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US7093087B2 (en) 2002-12-18 2006-08-15 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US20050251636A1 (en) * 2002-12-18 2005-11-10 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US20040133752A1 (en) * 2002-12-18 2004-07-08 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US20070028066A1 (en) * 2002-12-18 2007-02-01 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US7334097B2 (en) 2002-12-18 2008-02-19 Hitachi, Ltd. Method for controlling storage device controller, storage device controller, and program
US8396830B2 (en) * 2003-03-27 2013-03-12 Hitachi, Ltd. Data control method for duplicating data between computer systems
US20080114815A1 (en) * 2003-03-27 2008-05-15 Atsushi Sutoh Data control method for duplicating data between computer systems
US20050004925A1 (en) * 2003-05-07 2005-01-06 Nathaniel Stahl Copy-on-write mapping file system
US20050004886A1 (en) * 2003-05-07 2005-01-06 Nathaniel Stahl Detection and reporting of computer viruses
US20070168362A1 (en) * 2003-06-27 2007-07-19 Hitachi, Ltd. Data replication among storage systems
US20050055523A1 (en) * 2003-06-27 2005-03-10 Hitachi, Ltd. Data processing system
US20040267829A1 (en) * 2003-06-27 2004-12-30 Hitachi, Ltd. Storage system
US8234471B2 (en) 2003-06-27 2012-07-31 Hitachi, Ltd. Remote copy method and remote copy system
US8135671B2 (en) 2003-06-27 2012-03-13 Hitachi, Ltd. Data replication among storage systems
US20050073887A1 (en) * 2003-06-27 2005-04-07 Hitachi, Ltd. Storage system
US7152079B2 (en) 2003-06-27 2006-12-19 Hitachi, Ltd. Data replication among storage systems
US8239344B2 (en) 2003-06-27 2012-08-07 Hitachi, Ltd. Data replication among storage systems
US7725445B2 (en) 2003-06-27 2010-05-25 Hitachi, Ltd. Data replication among storage systems
US7130975B2 (en) * 2003-06-27 2006-10-31 Hitachi, Ltd. Data processing system
US20070168361A1 (en) * 2003-06-27 2007-07-19 Hitachi, Ltd. Data replication among storage systems
US9058305B2 (en) 2003-06-27 2015-06-16 Hitachi, Ltd. Remote copy method and remote copy system
US8943025B2 (en) 2003-06-27 2015-01-27 Hitachi, Ltd. Data replication among storage systems
US20100199038A1 (en) * 2003-06-27 2010-08-05 Hitachi, Ltd. Remote copy method and remote copy system
US8566284B2 (en) 2003-06-27 2013-10-22 Hitachi, Ltd. Data replication among storage systems
WO2005001713A1 (en) 2003-06-30 2005-01-06 International Business Machines Corporation Retrieving a replica of an electronic document in a computer network
EP1505504A3 (en) * 2003-08-04 2009-04-29 Hitachi, Ltd. Remote copy system
EP1505504A2 (en) * 2003-08-04 2005-02-09 Hitachi, Ltd. Remote copy system
US20050050115A1 (en) * 2003-08-29 2005-03-03 Kekre Anand A. Method and system of providing cascaded replication
US7447855B2 (en) 2003-09-09 2008-11-04 Hitachi, Ltd. Data processing method providing remote copy in a system having first, second, and third storage systems
US8074036B2 (en) 2003-09-09 2011-12-06 Hitachi, Ltd. Data processing system having first, second and third storage systems that are arranged to be changeable between a first status and a second status in response to a predetermined condition at the first storage system
US8495319B2 (en) 2003-09-09 2013-07-23 Hitachi, Ltd. Data processing system providing remote copy in a system having first, second and third storage systems and establishing first, second and third copy pairs
US7143254B2 (en) 2003-09-09 2006-11-28 Hitachi, Ltd. Remote copy system
US20090037436A1 (en) * 2003-09-09 2009-02-05 Kazuhito Suishu Data processing system
US20060117154A1 (en) * 2003-09-09 2006-06-01 Hitachi, Ltd. Data processing system
US20070038824A1 (en) * 2003-09-09 2007-02-15 Kazuhito Suishu Data processing system
WO2005029333A1 (en) * 2003-09-12 2005-03-31 Levanta, Inc. Tracking and replicating file system changes
US20050091286A1 (en) * 2003-09-12 2005-04-28 Adam Fineberg Tracking and replicating file system changes
US20050060505A1 (en) * 2003-09-17 2005-03-17 Hitachi, Ltd. Remote storage disk control device and method for controlling the same
US7216209B2 (en) 2003-12-15 2007-05-08 Hitachi, Ltd. Data processing system having a plurality of storage systems
US7600087B2 (en) 2004-01-15 2009-10-06 Hitachi, Ltd. Distributed remote copy system
US7328373B2 (en) 2004-01-30 2008-02-05 Hitachi, Ltd. Data processing system
US7802137B2 (en) 2004-01-30 2010-09-21 Hitachi, Ltd. Journaling system switching to another logical volume to store subsequently received update history
EP1562118A2 (en) * 2004-01-30 2005-08-10 Hitachi, Ltd. Replicating log data in a data processing system
EP1562118A3 (en) * 2004-01-30 2008-12-31 Hitachi, Ltd. Replicating log data in a data processing system
US20080104135A1 (en) * 2004-01-30 2008-05-01 Hitachi, Ltd. Data Processing System
US20070033437A1 (en) * 2004-01-30 2007-02-08 Hitachi, Ltd. Data processing system
US8694538B1 (en) * 2004-06-18 2014-04-08 Symantec Operating Corporation Method and apparatus for logging write requests to a storage volume in a network data switch
US7529898B2 (en) 2004-07-09 2009-05-05 International Business Machines Corporation Method for backing up and restoring data
US7523148B2 (en) 2004-07-21 2009-04-21 Hitachi, Ltd. Storage system
US20060020754A1 (en) * 2004-07-21 2006-01-26 Susumu Suzuki Storage system
US20060020640A1 (en) * 2004-07-21 2006-01-26 Susumu Suzuki Storage system
EP1628200A1 (en) * 2004-07-21 2006-02-22 Hitachi, Ltd. Storage system
US7243116B2 (en) 2004-07-21 2007-07-10 Hitachi, Ltd. Storage system
US7296126B2 (en) 2004-08-04 2007-11-13 Hitachi, Ltd. Storage system and data processing system
US7313663B2 (en) 2004-08-04 2007-12-25 Hitachi, Ltd. Storage system and data processing system
US7529901B2 (en) 2004-08-04 2009-05-05 Hitachi, Ltd. Storage system and data processing system
US20060031647A1 (en) * 2004-08-04 2006-02-09 Hitachi, Ltd. Storage system and data processing system
US20070168630A1 (en) * 2004-08-04 2007-07-19 Hitachi, Ltd. Storage system and data processing system
US20060242373A1 (en) * 2004-08-04 2006-10-26 Hitachi, Ltd. Storage system and data processing system
US7159088B2 (en) 2004-08-04 2007-01-02 Hitachi, Ltd. Storage system and data processing system
US7765370B2 (en) 2004-10-14 2010-07-27 Hitachi, Ltd. Computer system storing data on multiple storage systems
US7512755B2 (en) 2004-10-14 2009-03-31 Hitachi, Ltd. Computer system storing data on multiple storage systems
US20090187722A1 (en) * 2004-10-14 2009-07-23 Hitachi, Ltd. Computer System Storing Data on Multiple Storage Systems
EP1647891A2 (en) * 2004-10-14 2006-04-19 Hitachi Ltd. Computer system performing remote copying via an intermediate site
US7934065B2 (en) 2004-10-14 2011-04-26 Hitachi, Ltd. Computer system storing data on multiple storage systems
US20060085610A1 (en) * 2004-10-14 2006-04-20 Hitachi, Ltd. Computer system
EP1647891A3 (en) * 2004-10-14 2008-06-04 Hitachi Ltd. Computer system performing remote copying via an intermediate site
US20100281229A1 (en) * 2004-10-14 2010-11-04 Hitachi, Ltd. Computer System Storing Data On Multiple Storage Systems
US7673107B2 (en) 2004-10-27 2010-03-02 Hitachi, Ltd. Storage system and storage control device
US20060090048A1 (en) * 2004-10-27 2006-04-27 Katsuhiro Okumoto Storage system and storage control device
US20080016303A1 (en) * 2004-10-27 2008-01-17 Katsuhiro Okumoto Storage system and storage control device
US20060235863A1 (en) * 2005-04-14 2006-10-19 Akmal Khan Enterprise computer management
US20060277384A1 (en) * 2005-06-01 2006-12-07 Hitachi, Ltd. Method and apparatus for auditing remote copy systems
US8799211B1 (en) 2005-09-06 2014-08-05 Symantec Operating Corporation Cascaded replication system with remote site resynchronization after intermediate site failure
US20070100909A1 (en) * 2005-10-31 2007-05-03 Michael Padovano Data mirroring using a virtual connection
US8903766B2 (en) * 2005-10-31 2014-12-02 Hewlett-Packard Development Company, L.P. Data mirroring using a virtual connection
US8495014B2 (en) 2010-04-07 2013-07-23 Hitachi, Ltd. Asynchronous remote copy system and storage control method
US8271438B2 (en) * 2010-04-07 2012-09-18 Hitachi, Ltd. Asynchronous remote copy system and storage control method
US20110251999A1 (en) * 2010-04-07 2011-10-13 Hitachi, Ltd. Asynchronous remote copy system and storage control method
US20140164323A1 (en) * 2012-12-10 2014-06-12 Transparent Io, Inc. Synchronous/Asynchronous Storage System
US11860895B2 (en) * 2017-03-30 2024-01-02 Amazon Technologies, Inc. Selectively replicating changes to hierarchical data structures
US20220245171A1 (en) * 2017-03-30 2022-08-04 Amazon Technologies, Inc. Selectively replicating changes to hierarchical data structures
CN107257391A (en) * 2017-06-09 2017-10-17 Jinan Zhongwei Shiji Technology Co., Ltd. Method for locality protection of a monitoring device
US11269679B2 (en) 2018-05-04 2022-03-08 Microsoft Technology Licensing, Llc Resource-governed protocol and runtime for distributed databases with consistency models
US11386122B2 (en) * 2019-12-13 2022-07-12 EMC IP Holding Company LLC Self healing fast sync any point in time replication systems using augmented Merkle trees
US11921747B2 (en) 2019-12-13 2024-03-05 EMC IP Holding Company LLC Self healing fast sync any point in time replication systems using augmented Merkle trees
US11928085B2 (en) 2019-12-13 2024-03-12 EMC IP Holding Company LLC Using merkle trees in any point in time replication
US11775558B1 (en) * 2022-04-11 2023-10-03 Fmr Llc Systems and methods for automatic management of database data replication processes
US20230325405A1 (en) * 2022-04-11 2023-10-12 Fmr Llc Systems and methods for automatic management of database data replication processes
US11675812B1 (en) 2022-09-29 2023-06-13 Fmr Llc Synchronization of metadata between databases in a cloud computing environment

Also Published As

Publication number Publication date
US7340490B2 (en) 2008-03-04
US20030014433A1 (en) 2003-01-16
US20030014523A1 (en) 2003-01-16

Similar Documents

Publication Publication Date Title
US7340490B2 (en) Storage network data replicator
US6363462B1 (en) Storage controller providing automatic retention and deletion of synchronous back-up data
US7603581B2 (en) Remote copying of updates to primary and secondary storage locations subject to a copy relationship
US6446176B1 (en) Method and system for transferring data between primary storage and secondary storage using a bridge volume and an internal snapshot copy of the data being transferred
US7577868B2 (en) No data loss IT disaster recovery over extended distances
US7251743B2 (en) Method, system, and program for transmitting input/output requests from a primary controller to a secondary controller
US7206911B2 (en) Method, system, and program for a system architecture for an arbitrary number of backup components
US8108486B2 (en) Remote copy system
US7600087B2 (en) Distributed remote copy system
US6535967B1 (en) Method and apparatus for transferring data between a primary storage system and a secondary storage system using a bridge volume
US20010047412A1 (en) Method and apparatus for maximizing distance of data mirrors
US7831550B1 (en) Propagating results of a volume-changing operation to replicated nodes
JP2005327283A (en) Mirroring storage interface
US9921764B2 (en) Using inactive copy relationships to resynchronize data between storages
US20030154305A1 (en) High availability lightweight directory access protocol service
KR20070059095A (en) A system, a method and a device for updating a data set through a communication network
US8903766B2 (en) Data mirroring using a virtual connection
US20090024812A1 (en) Copying writes from primary storages to secondary storages across different networks
US9582384B2 (en) Method and system for data replication
US9436653B2 (en) Shared-bandwidth multiple target remote copy
Pandey et al. A survey of storage remote replication software
US10331358B1 (en) High performance and low-latency replication using storage mirroring
Atwood et al. Storage Solutions with the IBM TotalStorage™ Enterprise Storage Server and VERITAS Software

Legal Events

Date Code Title Description
AS Assignment

Owner name: SUN MICROSYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TELOH, JOHN;CROSLAND, SIMON;NEWTON, PHILIP;REEL/FRAME:012318/0040;SIGNING DATES FROM 20011101 TO 20011114

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION