US20070177413A1 - Storage system and storage control device - Google Patents

Storage system and storage control device

Info

Publication number
US20070177413A1
US20070177413A1 (application US11/697,777; US69777707A)
Authority
US
United States
Prior art keywords
storage
storage device
volume
data
copying
Prior art date
Legal status
Abandoned
Application number
US11/697,777
Inventor
Katsuhiro Okumoto
Yoshihito Nakagawa
Hisao Honma
Keishi Tamura
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to US11/697,777
Publication of US20070177413A1
Status: Abandoned (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0605 Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • G06F 3/0607 Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/065 Replication mechanisms
    • G06F 3/0662 Virtualisation aspects
    • G06F 3/0665 Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/1658 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • G06F 11/1662 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit the resynchronized component or unit being a persistent storage device
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2071 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F 11/2082 Data synchronisation

Definitions

  • the present invention relates to a storage system and a storage control device.
  • data is controlled using relatively large-scale storage systems in order to handle large quantities of various types of data in government organizations, public offices, autonomous regional bodies, business enterprises, educational organizations and the like.
  • storage systems are constructed from disk array devices or the like.
  • Disk array devices are constructed by disposing numerous storage devices in the form of an array; for example, a storage region based on a RAID (redundant array of independent disks) is provided.
  • One or more logical volumes are formed in a physical storage region provided by a storage device group, and these logical volumes are provided to a host computer (more specifically, to a data base program operating in a host computer).
  • the host computer (hereafter abbreviated to “host”) can perform the reading and writing of data with respect to the logical volumes by transmitting specified commands.
  • such cases include cases in which the old type of storage device is a device that involves mechanical operations (such as head seeking or the like), so that the mechanical operating time is long, cases in which the capacity of the data transfer buffer of the old type of storage device is small, and the like.
  • the present invention was devised in light of the above problems.
  • One object of the present invention is to provide a storage system and storage control device which are devised so that different types of storage control devices such as new and old storage control devices can be caused to cooperate, thus allowing effective utilization of memory resources.
  • Another object of the present invention is to provide a storage system and storage control device which allow the utilization of an old type of storage control device as a new type of storage control device.
  • Another object of the present invention is to provide a storage system and storage control device which are devised so that new functions can be added while utilizing the advantages of an old type of storage device.
  • Another object of the present invention is to provide a storage system and storage control device which are devised so that the memory resources of a second storage control device can be incorporated into a first storage control device as a first virtual volume, and the storage contents of the first real volume of the first storage control device and this first virtual volume can be synchronized.
  • the storage system of the present invention is a storage system which is constructed by communicably connecting a first storage control device and a second storage control device, and which performs data processing in accordance with requests from a higher device
  • the abovementioned first storage control device comprises a first real volume, a first virtual volume that can form a copying pair with the first real volume, a first control part that respectively controls data communications between the first real volume and first virtual volume, and the higher device and second storage control device, and a synchronizing part that synchronizes the storage contents of the first real volume and the storage contents of the first virtual volume
  • the second storage control device comprises a second real volume that is associated with the first virtual volume, and a second control part that respectively controls data communications between the second real volume, and the higher device and first storage control device.
  • the first storage control device respectively comprises a first real volume and a first virtual volume.
  • the first real volume is constructed on the basis of a first storage device belonging to the first storage control device
  • the first virtual volume is constructed on the basis of a second storage device belonging to the second storage control device.
  • the first storage control part incorporates the memory resources of the second storage control device as though these memory resources were its own memory resources, and provides these memory resources to the higher device. Furthermore, the synchronizing part synchronizes the storage contents of the first real volume and first virtual volume. Accordingly, a backup of the first real volume can be formed in the first virtual volume, and conversely, a backup of the first virtual volume can be formed in the first real volume.
  • the synchronization modes can be divided into two main types: namely, a full copying mode in which all of the storage contents are copied, and a differential copying mode in which only the differential data is copied.
  • the first storage control device has a first storage device
  • the second storage control device has a second storage device; furthermore, the first real volume is connected to the first storage device via an intermediate storage device, and the first virtual volume is connected to the second storage device via a virtually constructed virtual intermediate storage device.
  • the intermediate storage device is a storage hierarchy which logically connects the first storage device that provides a physical storage region, and the first virtual volume.
  • the virtual intermediate storage device is a storage hierarchy which logically connects the second storage device that provides a physical storage region, and the first virtual volume.
  • the virtual intermediate storage device is associated with the storage region of the second storage device of the second storage control device. Specifically, by mapping the second storage device in the virtual intermediate storage device, it is possible to vary the storage capacity, or to employ a stripe structure or the like.
  • the synchronizing part can copy the entire storage contents stored in the first real volume into the first virtual volume. Conversely, the synchronizing part can also copy the entire storage contents stored in the first virtual volume into the first real volume.
  • the synchronizing part can also copy the differential data between the storage contents of the first real volume and the storage contents of the first virtual volume into the first virtual volume.
  • the copying pair consisting of the two volumes is temporarily released (split). Then, in cases where a change occurs in the storage contents of the first virtual volume as a result of a write request from the higher device, the storage contents of the two volumes can again be caused to coincide by separately controlling this changed differential data, and copying only this differential data into the first real volume.
  • the synchronizing part can copy the differential data into the first virtual volume. As a result, the storage contents of both volumes can be matched.
  • the system further comprises a managing device which is communicably connected to the first storage control device and second storage control device, respectively. Furthermore, in cases where the access attribute of “write prohibited” is set in the first real volume by the managing device, the synchronizing part copies the differential data into the first virtual volume, and when the copying of the differential data is completed, the managing device can set the access attribute of the first real volume as “read and write possible”.
  • the function of the managing device can be constructed from a computer program. Accordingly, for example, the managing device can be constructed as a computer device that is separate from the higher device, or can be installed inside the higher device.
  • the term “access attribute” refers to information that is used to control whether or not a given volume can be accessed. Examples of access attributes include “write prohibited (read only)” which prohibits the updating of data, “read/write possible” which allows both the reading and writing of data, “hidden” which does not respond to inquiry responses, “empty capacity 0 ” which responds that the state is full in the case of inquiries for empty capacity, and the like.
  • the synchronizing part can also copy differential data between the storage contents of the first real volume and the storage contents of the first virtual volume into the first real volume. Furthermore, in this case, the synchronizing part can acquire differential control information relating to the differential data from the second storage control device, and can read out differential data from the second storage control device and copy this data into the first real volume on the basis of this differential control information.
  • the synchronizing part can maintain the matching of data by copying the differential data into the first real volume.
  • the synchronizing part can copy the differential data into the first real volume, and when the copying of this differential data has been completed, the managing device can also set the access attribute of the second real volume as “read and write possible”.
  • the storage system is a storage system in which a first storage control device and a second storage control device are communicably connected, this storage system comprising a higher device that can respectively issue access requests to the first storage control device and second storage control device, and a managing device that can communicate with the first storage control device and second storage control device
  • the first storage control device comprises a first storage device that stores data, an intermediate storage device that is disposed in the storage region of this first storage device, a first real volume that is disposed in the storage region of this intermediate storage device, a virtual intermediate storage device that is disposed in the storage region of the second storage device of the second storage control device, a first virtual volume that is disposed in the storage region of this virtual intermediate storage device, a higher communications control part that respectively controls data communications between the higher device, and the second storage control device and managing device, a lower communications control part that controls data communications with the first storage device, a memory part that is shared by the higher communications control part and lower communications control part, and a mapping table that is stored in the memory part.
  • the higher communications control part refers to the mapping table and reads out all of the data from the second real volume, and the lower communications control part stores all of this read-out data in the first storage device.
  • the lower communications control part reads out all of the data of the first real volume from the first storage device, and the higher communications control part refers to the mapping table and writes this read-out data into the second real volume.
  • the first storage control device and second storage control device can respectively hold differential control information that controls the differential data between the storage contents of the first real volume and the storage contents of the first virtual volume.
  • the lower communications control part reads out the differential data from the first storage device, and the higher communications control part refers to the mapping table and writes this read-out differential data into the second real volume.
  • the higher communications control part reads out the differential control information controlled by the second storage control device, refers to this read-out differential control information and the mapping table, and reads out the differential data from the second real volume, and the lower communications control part stores this read-out differential data in the first storage device.
  • the present invention may also be understood as the invention of a storage control device.
  • the present invention may also be understood as a copying control method for a storage control device.
  • this copying control method can be constructed so as to comprise the steps of mapping the second real volume of a second storage control device into the first virtual volume of a first storage control device, setting the abovementioned first virtual volume and the first real volume of the abovementioned first storage control device as a copying pair, and causing the storage contents of the abovementioned first virtual volume and the abovementioned first real volume to coincide.
  • FIG. 1 is a block diagram which shows the overall construction of a storage system constituting an embodiment of the present invention
  • FIG. 2 is a block diagram of the storage system
  • FIG. 3 is an explanatory diagram which shows an outline of the functional construction of the storage system
  • FIG. 4 is an explanatory diagram which shows the storage structure in model form
  • FIG. 5 is an explanatory diagram which shows an example of the construction of the mapping table
  • FIG. 6 is an explanatory diagram which shows the conditions of address conversion in a case where data is written into an external volume incorporated as a virtual internal volume
  • FIG. 7 is an explanatory diagram respectively showing the differential bit map T 4 and saving destination address control map T 5 used to control the differential data;
  • FIG. 8 is an explanatory diagram showing an example of the construction of the copying pair control table
  • FIG. 9 is an explanatory diagram showing an example of the construction of the access attribute control table
  • FIG. 10 is an explanatory diagram showing the flow of the processing that is used to construct the mapping table
  • FIG. 11 is a schematic diagram showing a case in which data is written into an external storage device used as a virtual internal volume
  • FIG. 12 is a schematic diagram showing a case in which data is read out from an external storage device used as a virtual internal volume
  • FIG. 13 is a sequence flow chart showing the flow of the first full copying mode
  • FIG. 14 is a sequence flow chart showing the flow of the second full copying mode
  • FIG. 15 is a sequence flow chart showing the flow of the first differential copying mode
  • FIG. 16 is a sequence flow chart showing the flow of the second differential copying mode
  • FIG. 17 is an explanatory diagram showing the storage structure of a storage system according to a second embodiment.
  • FIG. 18 is an explanatory diagram showing the storage structure of a storage system according to a third embodiment.
  • FIG. 1 is a structural explanatory diagram which shows an overall outline of an embodiment of the present invention.
  • the storage system maps a storage device present on the outside into its own intermediate storage device (VDEV), thus incorporating this external storage device as though this device were its own internal volume, and provides this volume to a host.
  • the storage system of the present embodiment can comprise a first storage device 1 which is one example of a first storage control device, a second storage device 2 which is one example of a second storage control device, a host 3 which acts as a higher device, and a managing device 4 .
  • the first storage device 1 is constructed as a disk array device.
  • the first storage device 1 comprises three communications ports 1 A through 1 C; the host 3 , managing device 4 and second storage device 2 are communicably connected by means of these respective communications ports.
  • data communications can be performed between the respective storage devices 1 and 2 , and the respective storage devices 1 and 2 and the host 3 , on the basis of a fiber channel protocol.
  • data communications can be performed between the respective storage devices 1 and 2 and the managing device 4 on the basis of a TCP/IP (transmission control protocol/internet protocol).
  • the first storage device 1 can comprise a control part 5 , an internal volume 6 used as a first real volume, and a virtual internal volume 7 used as a first virtual volume.
  • the control part 5 respectively controls the exchange of data inside the first storage device and the exchange of data with the outside.
  • the internal volume 6 is disposed on the basis of a physical storage device (e. g., a disk drive) disposed inside the first storage device 1 .
  • the virtual internal volume 7 has a virtual existence; the entity that stores data is present inside the second storage device 2 .
  • the virtual internal volume 7 is constructed by mapping an external volume 9 of the second storage device 2 into a specified level of the storage hierarchy of the first storage device 1 .
  • the control part 5 comprises a differential bit map 5 A and a mapping table 5 B.
  • the differential bit map 5 A comprises information that is used to control the differential between the storage contents of the internal volume 6 and the storage contents of the virtual internal volume 7 (external volume 9 ).
  • differential data 6 A is generated by this updating.
  • the differential bit map 5 A comprises information that is used to control this differential data 6 A.
  • the mapping table 5 B comprises information that is used to associate the external volume 9 with the virtual internal volume 7 ; for example, this information includes path information or the like that is used to access the external volume 9 .
  • the second storage device 2 is communicably connected with the host 3 , managing device 4 and first storage device 1 , respectively via respective communications ports 2 A through 2 C.
  • the second storage device 2 can be constructed so that this device comprises a control part 8 and an external volume 9 .
  • the control part 8 respectively controls the exchange of data within the second storage device 2 and the exchange of data with the outside.
  • the external volume 9 is disposed on the basis of a physical storage device disposed inside the second storage device 2 . Since the volumes of the second storage device 2 are present on the outside as seen from the first storage device 1 , these volumes are called external volumes here.
  • the control part 8 comprises a differential bit map 8 A that is used to control the differential data 9 A that is generated in the external volume 9 .
  • the internal volume 6 and virtual internal volume 7 form a copying pair. Either of these volumes may be the copying source, and either of the volumes may be the copying destination.
  • as the method used to synchronize the storage contents, there is full copying, in which all of the storage contents of the copying source volume are copied into the copying destination volume, and differential copying, in which only the differential data between the copying source volume and copying destination volume is copied; either of these methods may be employed.
  • the control part 5 refers to the mapping table 5 B, acquires path information relating to the path to the external volume 9 which is the entity of the virtual internal volume 7 , and transfers data to the external volume 9 .
  • the control part 5 refers to the mapping table 5 B, acquires path information relating to the path to the external volume 9 , and writes data read out from the external volume 9 into the internal volume 6 .
  • the data of the internal volume 6 and the data of the virtual internal volume 7 can be synchronized.
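  • As a rough illustration of the mechanism just outlined, the following sketch (hypothetical Python; names such as FirstStorageDevice and diff_bitmap are invented for illustration, not taken from the patent) models a virtual internal volume backed by an external volume through a mapping table, with a differential bit map recording which tracks must still be copied to keep the two volumes synchronized.

      # Minimal sketch only; models the idea of FIG. 1, not an actual product interface.
      class FirstStorageDevice:
          def __init__(self, external_path):
              self.mapping_table = {"virtual_vol_7": external_path}  # mapping table 5B
              self.internal_vol = {}      # internal volume 6: track number -> data
              self.external_vol = {}      # stands in for external volume 9 (entity of volume 7)
              self.diff_bitmap = set()    # differential bit map 5A: updated track numbers

          def host_write(self, track, data):
              self.internal_vol[track] = data
              self.diff_bitmap.add(track)  # remember differential data 6A

          def differential_copy_to_virtual(self):
              # copy only the updated tracks into the virtual internal volume 7,
              # i.e. over the path recorded in the mapping table to external volume 9
              for track in sorted(self.diff_bitmap):
                  self.external_vol[track] = self.internal_vol[track]
              self.diff_bitmap.clear()

      dev = FirstStorageDevice(external_path={"wwn": "50:06:0e:80:00:00:00:01", "lun": 0})
      dev.host_write(0, b"data")
      dev.differential_copy_to_virtual()   # internal volume 6 and virtual volume 7 now match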
  • the present embodiment will be described in greater detail below.
  • FIG. 2 is a block diagram which shows the construction of the essential parts of the storage system of the present embodiment.
  • the hosts 10 A and 10 B are computer devices comprising information processing resources such as a CPU (central processing unit), memory and the like; for instance, these hosts are constructed as personal computers, workstations, main frames or the like.
  • a communications network such as an LAN (local area network), an SAN (storage area network), the internet, a dedicated circuit, a public circuit or the like can be used for the connections between the hosts 10 and the respective storage devices.
  • data communications via an LAN are performed according to a TCP/IP protocol.
  • In cases where the hosts 10 are connected to the first storage device 100 and second storage device 200 via an LAN, the hosts 10 request data input and output in file units by designating file names.
  • the hosts 10 request data input and output with blocks (which are the units of data control of the storage regions provided by a plurality of disk storage devices (disk drives)) in accordance with a fiber channel protocol.
  • the HBA 11 is, for example, a network card corresponding to the LAN in the case of an LAN connection, or a host bus adapter in the case of an SAN connection.
  • the managing device 20 is a computer device which is used to control the construction of the storage system and the like. For example, this device is operated by a user such as a system manager or the like.
  • the managing device 20 is respectively connected to the respective storage devices 100 and 200 via a communications network CN 2 .
  • the managing device 20 issues instructions relating to the formation of copying pairs, access attributes and the like to the respective storage devices 100 and 200 .
  • the first storage device 100 is constructed as a disk array subsystem.
  • the present invention is not limited to this; the first storage device 100 can also be constructed as a highly functionalized intelligent type fiber channel switch.
  • the first storage device 100 can provide the memory resources of the second storage device 200 to the host 10 as its own logical volume (logical unit).
  • the first storage device 100 can be divided into two main parts, i. e., a controller and a storage part 160 .
  • the controller comprises a plurality of channel adapters (hereafter referred to as “CHAs”) 110 , a plurality of disk adapters (hereafter referred to as “DKAs”) 120 , a cache memory 130 , a shared memory 140 , and a connection control part 150 .
  • Each CHA 110 performs data communications with a host 10 .
  • Each CHA 110 comprises a communications port 111 for performing communications with this host 10 .
  • the respective CHAs 110 are constructed as microcomputer systems comprising a CPU, memory and the like; these CHAs 110 interpret and execute various types of commands received from the hosts 10 .
  • Network addresses (e. g., IP addresses or WWN) used to discriminate the respective CHAs 110 are assigned, so that each CHA 110 can behave separately as an NAS (network attached storage). In cases where a plurality of hosts 10 are present, the respective CHAs 110 separately receive and process requests from the respective hosts 10 .
  • Each DKA 120 exchanges data with a disk drive 161 of the storage part 160 .
  • each DKA 120 is constructed as a microcomputer system comprising a CPU, memory and the like.
  • the respective DKAs 120 write data received from the host 10 or read out from the second storage device 200 by the CHAs 110 into a specified address of a specified disk drive 161 .
  • each DKA 120 reads out data from a specified address of a specified disk drive 161 , and transmits this data to a host 10 or the second storage device 200 .
  • each DKA 120 converts the logical address into a physical address.
  • each DKA 120 accesses data according to the RAID construction. For example, each DKA 120 writes the same data into different disk drive groups (RAID groups), or performs parity calculations and writes the data and parity into the disk drive groups.
  • the cache memory 130 stores data received from the host 10 or second storage device 200 , or stores data read out from the disk drive 161 .
  • a virtual intermediate storage device is constructed utilizing the storage space of the cache memory 130 .
  • control information used in the operation of the first storage device 100 is stored in the shared memory (also called a control memory in some cases) 140 . Furthermore, in addition to the setting of a work region, various types of tables such as the mapping table and the like described later are also stored in the shared memory 140 .
  • the connection control part 150 connects the respective CHAs 110 , the respective DKAs 120 , the cache memory 130 and the shared memory 140 to each other.
  • the connection control part 150 can be constructed as a high-speed bus such as an ultra-high-speed cross-bar switch that performs data transfer by means of a high-speed switching operation.
  • the storage part 160 comprises a plurality of disk drives 161 .
  • various types of storage disks such as hard disk drives, flexible disk drives, magnetic disk drives, semiconductor memory drives, optical disk drives or the like, or the equivalents of such drives, can be used as the disk drives 161 .
  • different types of disks may be mixed inside the storage part 160 , as in FC (fiber channel) disks, SATA (serial AT attachment) disks or the like.
  • a virtual internal volume 191 based on a disk drive 220 of the second storage device 200 can be formed in the first storage device 100 .
  • This virtual internal volume 191 can be provided to the host 10 A in the same manner as the internal volume 190 based on the disk drive 161 .
  • the second storage device 200 and host 10 B are connected via the communications network CN 1 .
  • the second storage device 200 and managing device 20 are connected via the communications network CN 2 .
  • the second storage device 200 and first storage device 100 are connected via the communications network CN 3 .
  • the communications networks CN 2 and CN 3 can be constructed from SAN, LAN or the like.
  • the second storage device 200 may have substantially the same construction as the first storage device, or may have a simpler construction than the first storage device 100 .
  • the disk drives 220 of the second storage device 200 may be handled as internal storage devices of the first storage device 100 .
  • FIG. 3 is a structural explanatory diagram focusing on the functional construction of the present embodiment.
  • the controller 101 of the first storage device 100 is constructed from the CHAs 110 , respective DKAs 120 , cache memory 130 , shared memory 140 and the like.
  • this controller 101 comprises (for example) a first full copying control part 102 , a second full copying control part 103 , a first differential copying control part 104 , and a second differential copying control part 105 . Furthermore, various types of tables such as a mapping table T 1 , differential bit map T 4 and the like are stored inside the shared memory 140 of the controller 101 .
  • the first full copying control part 102 is a function that copies all of the storage contents of the virtual internal volume 191 into the internal volume 190 .
  • the second full copying control part 103 is a function that copies all of the storage contents of the internal volume 190 into the virtual internal volume 191 .
  • the first differential copying control part 104 is a function that copies the differential data 192 of the internal volume 190 into the virtual internal volume 191 .
  • the second differential copying control part 105 is a function that copies the differential data 261 of the virtual internal volume 191 into the internal volume 190 .
  • the internal volume 190 and virtual internal volume 191 are respectively disposed in the first storage device 100 .
  • the internal volume 190 is a volume that is set on the basis of the storage regions of the respective disk drives 161 that are directly governed by the first storage device 100 .
  • the virtual internal volume 191 is a volume that is set on the basis of the storage regions of the respective disk drives 220 of the second storage device 200 .
  • the controller 210 of the second storage device 200 stores the differential bit map T 4 ( 2 ) in a memory (not shown in the figures).
  • This differential bit map T 4 ( 2 ) is used to control the differential data 261 that is generated for the external volume 260 of the second storage device 200 .
  • the external volume 260 is based on the storage region of the disk drive 220 , and is an internal volume with respect to the second storage device 200 . However, since this volume 260 is mapped into the virtual internal volume 191 and incorporated into the first storage device 100 , this volume is called the external volume 260 in the present embodiment.
  • the managing device 20 comprises an access attribute setting part 21 .
  • This access attribute setting part 21 is used to set access attributes for the internal volume 190 or external volume 260 .
  • the setting of access attributes can be performed manually by the user, or can be performed automatically on the basis of some type of trigger signal. The types of access attributes will be further described later.
  • FIG. 4 is a structural explanatory diagram which focuses on the storage structure of the first storage device 100 and second storage device 200 . The construction of the first storage device 100 will be described first.
  • the storage structure of the first storage device 100 can be roughly divided into a physical storage hierarchy and a logical storage hierarchy.
  • the physical storage hierarchy is constructed by a PDEV (physical device) 161 which is a physical disk.
  • the PDEV corresponds to a disk drive.
  • the logical storage hierarchy can be constructed from a plurality (e. g., two types) of hierarchies.
  • One logical hierarchy can be constructed from VDEVs (virtual devices) 162 and virtual VDEVs (hereafter called “V-VOLs”) 163 which can be handled as VDEVs 162 .
  • the other logical hierarchy can be constructed from LDEVs (logical devices) 164 .
  • the VDEVs 162 can be constructed by forming a specified number of PDEVs 161 into a group, e. g., four units as one set ( 3 D+ 1 P), eight units as one set ( 7 D+ 1 P) or the like.
  • One RAID storage region is formed by the aggregate of the storage regions provided by the respective PDEVs 161 belonging to a group. This RAID storage region constitutes a VDEV 162 .
  • the V-VOL 163 is a virtual intermediate storage device which requires no physical storage region.
  • the V-VOL 163 is not directly associated with a physical storage region, but is a receiver for the mapping of LUs (logical units) of the second storage device 200 .
  • One or more LDEVs 164 can be respectively disposed in the VDEV 162 or V-VOL 163 .
  • the LDEVs 164 can be constructed by splitting a VDEV 162 into specified lengths.
  • in cases where the host 10 is an open type host, the host 10 can recognize the LDEV 164 as a single physical disk by mapping the LDEV 164 in the LU 165 .
  • the open type host can access a desired LDEV 164 by designating the LUN (logical unit number) or logical block address.
  • the LDEV 164 can be directly accessed.
  • the LU 165 is a device that can be recognized as an SCSI logical unit.
  • the respective LUs 165 are connected to the host 10 via a target port 111 A.
  • One or more LDEVs 164 can be respectively associated with each LU 165 . It is also possible to expand the LU size virtually by associating a plurality of LDEVs 164 with one LU 165 .
  • the CMD (command device) 166 is a special LU that is used to transfer commands and status [information] between the I/O control program operating in the host 10 and the controller 101 (CHAs 110 , DKAs 120 ) of the storage device 100 .
  • Commands from the host 10 are written into the CMD 166 .
  • the controller 101 of the storage device 100 executes processing corresponding to the commands that are written into the CMD 166 , and writes the results of this execution into the CMD 166 as status [information].
  • the host 10 reads out and confirms the status [information] that is written into the CMD 166 , and then writes the processing contents that are to be executed next into the CMD 166 .
  • the host 10 can issue various types of instructions to the storage device 100 via the CMD 166 .
  • the commands received from the host 10 can also be processed without being stored in the CMD 166 .
  • the CMD can also be formed as a virtual device without defining the actual device (LU), and can be constructed so as to receive and process commands from the host 10 .
  • the CHAs 110 write the commands received from the host 10 into the shared memory 140
  • the CHAs 110 or DKAs 120 process the commands stored in this shared memory 140 .
  • the processing results are written into the shared memory 140 , and are transmitted to the host 10 from the CHAs 110 .
  • the second storage device 200 is connected to the external initiator port (external port) 111 B of the first storage device 100 via the communications network CN 3 .
  • the LUs 250 (i. e., the LDEVs 240 ) of the second storage device 200 are mapped into the V-VOL 163 , which is a virtual intermediate storage device, so that these LUs 250 can also be used from the first storage device 100 .
  • the “LDEV 1 ” and “LDEV 2 ” of the second storage device 200 are respectively mapped into the “V-VOL 1 ” and “V-VOL 2 ” of the first storage device 100 via the “LU 1 ” and “LU 2 ” of the second storage device 200 .
  • the “V-VOL 1 ” and “V-VOL 2 ” are respectively mapped into the “LDEV 3 ” and “LDEV 4 ”, and can be utilized via the “LU 3 ” and “LU 4 ”.
  • VDEVs 162 and V-VOLs 163 can use a RAID construction. Specifically, one disk drive 161 can be divided into a plurality of VDEVs 162 and V-VOLs 163 (slicing), or one VDEV 162 or V-VOL 163 can be formed from a plurality of disk drives 161 (striping).
  • the “LDEV 1 ” or “LDEV 2 ” of the first storage device 100 corresponds to the internal volume 190 in FIG. 3 .
  • the “LDEV 3 ” or “LDEV 4 ” of the first storage device 100 corresponds to the virtual internal volume 191 .
  • the “LDEV 1 ” or “LDEV 2 ” of the second storage device 200 corresponds to the external volume 260 in FIG. 3 .
  • FIG. 5 shows one example of the mapping table T 1 that is used to map the external volume 260 into the virtual internal volume 191 .
  • mapping table T 1 can be constructed by respectively establishing a correspondence between the VDEV numbers used to discriminate the VDEVs 162 and V-VOLs 163 and information relating to the external disk drives 220 .
  • the external device information can be constructed so that this information includes device discriminating information, storage capacities of the disk drives 220 , information indicating the type of device (tape type devices, disk type devices or the like) and path information indicating the paths to the disk drives 220 .
  • This path information can be constructed so as to include discriminating information (WWN) specific to the respective communications ports 211 , and LUN numbers used to discriminate the LUs 250 .
  • the values of the device discriminating information, WWN and the like shown in FIG. 5 are values used for convenience of description, and do not have any particular meaning.
  • three items of path information are associated with the VDEV 101 having the VDEV number of “ 3 ” shown on the lower side in FIG. 5 .
  • the external disk drive 220 that is mapped into this VDEV (# 3 ) has an alternate path structure which has three paths inside, and this alternate path structure is deliberately mapped into the VDEV (# 3 ). It is seen that the same storage region can be accessed via any of these three paths; accordingly, even in cases where one or two of the paths are obstructed, the desired data can be accessed via the remaining normal path or paths.
  • by using a mapping table T 1 such as that shown in FIG. 5 , it is possible to map one or a plurality of external disk drives 220 into the V-VOL 163 inside the first storage device 100 .
  • volume numbers and the like shown in the table are examples used to illustrate the table construction; these values do not particularly correspond to the other constructions shown in FIG. 4 or the like.
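  • The following is a minimal sketch, in Python, of how a mapping table like T 1 might be represented; the field names and WWN values are illustrative assumptions, not values from the patent. The entry with VDEV number 3 shows the alternate path structure described above, in which the same external storage region can be reached via any of three paths.

      # Hypothetical representation of mapping table T1 (VDEV # -> external device information).
      mapping_table = {
          1: {
              "device_id": "EXAMPLE-DISK-01",   # device discriminating information
              "capacity_gb": 100,
              "device_type": "disk",            # disk type / tape type, etc.
              "paths": [
                  {"wwn": "50:06:0e:80:00:00:00:01", "lun": 0},
              ],
          },
          3: {
              "device_id": "EXAMPLE-DISK-02",
              "capacity_gb": 200,
              "device_type": "disk",
              # alternate path structure: three paths to the same storage region, so
              # the data stays accessible even if one or two paths are obstructed
              "paths": [
                  {"wwn": "50:06:0e:80:00:00:00:11", "lun": 0},
                  {"wwn": "50:06:0e:80:00:00:00:12", "lun": 0},
                  {"wwn": "50:06:0e:80:00:00:00:13", "lun": 0},
              ],
          },
      }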
  • the host 10 transmits data to a specified communications port 111 with the LUN number (LUN #) and logical block address (LBA) being designated.
  • the first storage device 100 converts the data that is input for LDEV use (LUN #+LBA) into data for VDEV use on the basis of the first conversion table T 2 shown in FIG. 6 ( a ).
  • the first conversion table T 2 is an LUN-LDEV-VDEV conversion table that is used to convert data that designates LUNs in the first storage device 100 into VDEV data.
  • this first conversion table T 2 is constructed by associating LUN numbers (LUN #), LDEV numbers (LDEV #) and maximum slot numbers that correspond to these LUNs, VDEV (including V-VOL) numbers (VDEV #) and maximum slot numbers that correspond to these LDEVs, and the like.
  • the first storage device 100 refers to the second conversion table T 3 shown in FIG. 6 ( b ), and converts the VDEV data into data that is used for transmission and storage for the LUNs of the second storage device 200 .
  • in the second conversion table T 3 , VDEV numbers (VDEV #), WWN used to specify the communications ports that are the data transfer destinations, and LUNs that can be accessed via these communications ports are associated.
  • the first storage device 100 converts the address information of the data that is to be stored into the format of initiator port number#+WWN+LUN#+LBA.
  • the data whose address information has thus been altered reaches the designated communications port 211 from the designated initiator port via the communications network CN 3 . Then, the data is stored in a specified place in the LDEV.
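  • A compact sketch of this two-stage address conversion is shown below (Python; the table layouts and the helper function convert_host_address are assumptions made for illustration). Data addressed by the host with a LUN number and LBA is first mapped through a T 2-style table to a VDEV, and then through a T 3-style table to the initiator port number, WWN, LUN and LBA used toward the second storage device.

      # T2-style table: host LUN -> LDEV # and VDEV # inside the first storage device (FIG. 6(a))
      T2 = {0: {"ldev": 3, "vdev": 3}}
      # T3-style table: VDEV # -> path toward the second storage device (FIG. 6(b))
      T3 = {3: {"initiator_port": 1, "wwn": "50:06:0e:80:00:00:00:11", "lun": 0}}

      def convert_host_address(lun, lba):
          """Convert host-supplied LUN # + LBA into initiator port # + WWN + LUN # + LBA."""
          vdev = T2[lun]["vdev"]           # LUN-LDEV-VDEV conversion
          path = T3[vdev]                  # VDEV-to-external-LU conversion
          return {"initiator_port": path["initiator_port"], "wwn": path["wwn"],
                  "lun": path["lun"], "lba": lba}

      # Example: a write addressed to LUN 0, LBA 2048 is re-addressed for the external port.
      print(convert_host_address(0, 2048))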
  • FIG. 6 ( c ) shows another second conversion table T 3 a .
  • This conversion table T 3 a is used in cases where a stripe or RAID is applied to VDEVs (i. e., V-VOLs) originating in an external disk drive 220 .
  • the conversion table T 3 a is constructed by associating VDEV numbers (VDEV #), stripe sizes, RAID levels, numbers used to discriminate the second storage device 200 (SS # (storage system numbers)), initiator port numbers and WWN and LUN numbers of the communications ports 211 .
  • one VDEV constructs a RAID 1 utilizing a total of four external storage control devices specified by SS # ( 1 , 4 , 6 , 7 ). Furthermore, the three LUNs (# 0 , # 0 and # 4 ) assigned to SS # 1 are set in the same device (LDEV #). Moreover, the volumes of LUN # 0 comprise an alternate path structure which has two access data paths. Thus, logical volumes (LDEVs) belonging respectively to a plurality of external storage devices can be respectively mapped in a single V-VOL inside the first storage device 100 , and can be utilized as a virtual internal volume 191 .
  • FIG. 7 respectively shows a differential bit map T 4 and saving destination address control table T 5 that are used to control the differential data 192 . Furthermore, in the second storage device 200 as well, differential data 261 is controlled by the same method as in FIG. 7 .
  • the differential bit map T 4 can be constructed by associating updating flag information indicating the status as to whether or not updating has been performed with each logical track of the disk drives 161 constituting the internal volume 190 .
  • One logical track corresponds to three cache segments, and has a size of 48 kB or 64 kB.
  • the saving destination address control table can be constructed by associating with each logical track unit a saving destination address which indicates where the data stored on this track is saved.
  • the control units are not limited to track units.
  • other control units such as slot units, LBA units or the like can also be used.
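  • The sketch below illustrates, under the assumption of track-unit control, how a differential bit map such as T 4 and a saving destination address table such as T 5 could be kept together; the helper names are invented for illustration.

      # Hypothetical per-track differential control (differential bit map + saving destination table).
      diff_bitmap = {}    # logical track number -> updating flag (True once the track is updated)
      saving_dest = {}    # logical track number -> address where the saved data is located

      def record_update(track, save_address):
          diff_bitmap[track] = True          # mark the track as holding differential data
          saving_dest[track] = save_address  # where the corresponding data was saved

      def tracks_to_copy():
          # the synchronizing part copies only the tracks whose updating flag is set
          return [t for t, updated in diff_bitmap.items() if updated]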
  • FIG. 8 is an explanatory diagram which shows one example of the copying pair control table T 6 .
  • the copying pair control table T 6 can be constructed by associating information that specifies the copying source LU, information that specifies the copying destination LU and the current pair status. Examples of copying pair status include “pair form (paircreate)”, “pair split (pairsplit)”, “resynchronize (resync)” and the like.
  • the “pair form” status is a state in which initial copying (full copying) from the copying source volume to the copying destination volume has been performed, so that a copying pair is formed.
  • the “pair split” status is a state in which the copying source volume and copying destination volume are separated after the copying pair has been forcibly synchronized.
  • the “resynchronize” status is a state in which the storage contents of the copying source volume and copying destination volume are resynchronized and a copying pair is formed after the two volumes have been separated.
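  • A minimal sketch of a copying pair control table along the lines of T 6 follows (Python; the structure is an assumption, and only the three status names are taken from the description above).

      from enum import Enum

      class PairStatus(Enum):
          PAIRCREATE = "pair form"       # initial (full) copying performed, pair formed
          PAIRSPLIT = "pair split"       # source and destination separated after synchronization
          RESYNC = "resynchronize"       # contents re-matched and pair re-formed after a split

      # copying source LU, copying destination LU, and the current pair status
      copy_pair_table = [
          {"source_lu": 3, "dest_lu": 4, "status": PairStatus.PAIRCREATE},
      ]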
  • FIG. 9 is an explanatory diagram showing one example of the access attribute control table T 7 .
  • the term “access attribute” refers to information that controls the possibility of access to the volumes or the like.
  • the access attribute control table T 7 can be constructed by associating access attributes with each LU number (LUN).
  • access attributes include “read/write possible”, “write prohibited (read only)”, “read/write impossible”, “empty capacity 0 ”, “copying destination setting impossible” and “hidden”.
  • read/write possible indicates a state in which reading and writing from and into the volume in question are possible.
  • Write prohibited indicates a state in which writing into the volume in question is prohibited, so that only read-out is permitted.
  • Read/write impossible indicates a state in which writing and reading into and from the volume are prohibited.
  • Empty capacity 0 indicates a state in which a response of remaining capacity 0 (full) is given in reply to inquiries regarding the remaining capacity of the volume even in cases where there is actually some remaining capacity.
  • Copying destination setting impossible indicates a state in which the volume in question cannot be set as the copying destination volume (secondary volume).
  • Hidden indicates a state in which the volume in question cannot be recognized from the initiator.
  • the LUNs in the table are numbers used for purposes of description; these numbers in themselves have no particular significance.
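  • The following sketch shows one hypothetical way an access attribute control table like T 7 could be held and consulted before serving a request; the attribute strings follow the list above, while the LUN values and helper functions are invented for illustration.

      # Hypothetical access attribute control table (LUN -> access attribute).
      access_attributes = {
          0: "read/write possible",
          1: "write prohibited",       # read only
          2: "read/write impossible",
          3: "empty capacity 0",       # reports "full" to remaining-capacity inquiries
          4: "hidden",                 # not recognizable from the initiator
      }

      def allow_write(lun):
          return access_attributes.get(lun) == "read/write possible"

      def allow_read(lun):
          # "write prohibited" still permits read-out
          return access_attributes.get(lun) in ("read/write possible", "write prohibited")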
  • FIG. 10 is a flow chart illustrating the mapping method that is used in order to utilize the external volume 260 of the second storage device 200 as a virtual internal volume 191 of the first storage device 100 . This processing is performed between the first storage device 100 and second storage device 200 when the mapping of the volumes is performed.
  • the first storage device 100 logs into the second storage device 200 via the initiator port of the CHA 110 (S 1 ). Logging in is completed by the second storage device 200 sending back a response to the logging in of the first storage device 100 (S 2 ). Next, for example, the first storage device 100 transmits an inquiry command determined by the SCSI (small computer system interface) to the second storage device 200 , and requests a response regarding details of the disk drives 220 belonging to the second storage device 200 (S 3 ).
  • the inquiry command is used to clarify the type and construction of the inquiry destination device; this makes it possible to pass through the hierarchy of the inquiry destination device and grasp the physical structure of this inquiry destination device.
  • the first storage device 100 can acquire information such as the device name, device type, manufacturing serial number (product ID), LDEV number, various types of version information, vendor ID and the like from the second storage device 200 (S 4 ).
  • the second storage device 200 responds by transmitting the information for which an inquiry was made to the first storage device 100 (S 5 ).
  • the first storage device 100 registers the information acquired from the second storage device 200 in the mapping table T 1 (S 6 ).
  • the first storage device 100 reads out the storage capacity of the disk drive 220 from the second storage device 200 (S 7 ).
  • the second storage device 200 sends back the storage capacity of the disk drive 220 (S 8 ), and returns a response (S 9 ).
  • the first storage device 100 registers the storage capacity of the disk drive 220 in a specified place in the mapping table T 1 (S 10 ).
  • the mapping table T 1 can be constructed by performing the above processing. In cases where the input and output of data are performed with the external disk drive 220 (external LUN, i.e., external volume 260 ) mapped into the V-VOL of the first storage device 100 , address conversion and the like are performed with reference to the other conversion tables T 2 and T 3 described with reference to FIG. 6 .
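  • The mapping sequence of FIG. 10 can be summarized by the sketch below (Python; the classes and method names are placeholders standing in for the login, inquiry and capacity read-out exchanges, not a real SCSI or product API).

      class SecondStorage:
          """Placeholder for the external (second) storage device 200."""
          def login(self):
              return "login response"                                        # S2
          def inquiry(self):
              # S5: device name, device type, product ID, LDEV number, vendor ID, etc.
              return {"ldev": 1, "product_id": "EXT-01", "device_type": "disk"}
          def read_capacity(self):
              return 100 * 2 ** 30                                           # S8/S9: capacity in bytes

      class FirstStorage:
          def __init__(self):
              self.mapping_table = {}                                        # mapping table T1

          def map_external_volume(self, second, vdev_no):
              second.login()                                                 # S1: log in via the initiator port
              info = second.inquiry()                                        # S3/S4: inquiry command
              self.mapping_table[vdev_no] = dict(info)                       # S6: register device information in T1
              self.mapping_table[vdev_no]["capacity"] = second.read_capacity()   # S7-S10: register capacity

      first = FirstStorage()
      first.map_external_volume(SecondStorage(), vdev_no=3)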
  • FIG. 11 is a model diagram which shows the processing that is performed when data is written.
  • the host 10 can write data into a logical volume (LDEV) that has access authorization. For example, by using procedures such as zoning that sets a virtual SAN subnet in the SAN or LUN masking in which the host 10 holds a list of accessible LUNs, it is possible to set the host 10 so that the host 10 can access only specified LDEVs.
  • in cases where the LDEV into which the host 10 is to write data is connected via a VDEV to a disk drive 161 which is an internal storage device, data is written by ordinary processing.
  • the data from the host 10 is temporarily stored in the cache memory 130 , and is then stored in a specified address of a specified disk drive 161 from the cache memory 130 via the DKA 120 .
  • the DKA 120 converts the logical address into a physical address.
  • the same data is stored in a plurality of disk drives 161 or the like.
  • FIG. 11 ( a ) is a flow chart centering on the storage hierarchy
  • FIG. 11 ( b ) is a flow chart centering on the manner of use of the cache memory 130 .
  • the host 10 indicates an LDEV number that specifies the LDEV that is the object of writing and a WWN that specifies the communications port that is used to access this LDEV, and issues a write command (write) (S 21 ).
  • when the first storage device 100 receives a write command from the host 10 , the first storage device 100 produces a write command for transmission to the second storage device 200 , and transmits this command to the second storage device 200 (S 22 ).
  • the first storage device 100 alters the address information and the like contained in the write command received from the host 10 so as to match the external volume 260 , thus producing a new write command.
  • the host 10 transmits the write data to the first storage device 100 (S 23 ).
  • the write data received by the first storage device 100 is transferred to the second storage device 200 (S 26 ) from the LDEV via the V-VOL (S 24 ).
  • the first storage device 100 sends back a response (good) indicating the completion of writing to the host 10 .
  • the second storage device 200 transmits a writing completion report to the first storage device 100 (S 26 ). Specifically, the time at which the completion of writing is reported to the host 10 by the first storage device 100 (S 25 ) and the time at which the data is actually stored in the disk drive 220 are different (asynchronous system). Accordingly, the host 10 is released from data write processing before the write data is actually stored in the disk drive 220 , so that the host 10 can perform other processing.
  • the first storage device 100 converts the logical block addresses designated by the host 10 into sub-block addresses, and stores data in specified locations in the cache memory 130 (S 24 ).
  • the V-VOLs and VDEVs have a logical presence installed in the storage space of the cache memory 130 .
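  • The asynchronous character of this write path can be sketched as follows (Python; the queue-based staging is an illustrative assumption rather than the actual cache implementation): completion is reported to the host as soon as the data is staged in cache, and the transfer to the second storage device is carried out afterwards.

      import queue

      cache = {}                        # stands in for cache memory 130
      destage_queue = queue.Queue()     # writes waiting to be sent to the external volume

      def host_write(lun, lba, data):
          cache[(lun, lba)] = data              # S24: store in cache (V-VOL/VDEV space)
          destage_queue.put((lun, lba, data))   # schedule transfer to the second storage device
          return "good"                         # S25: report completion to the host immediately

      def destage_once(write_to_external):
          # performed later, independently of the host (asynchronous system)
          lun, lba, data = destage_queue.get()
          write_to_external(lun, lba, data)     # data actually reaches disk drive 220 here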
  • the host 10 designates a communications port 111 and transmits a data read-out command to the first storage device 100 (S 31 ).
  • the first storage device 100 receives a read command
  • the first storage device 100 produces a read command in order to read out the requested data from the second storage device 200 .
  • the first storage device 100 transmits the produced read command to the second storage device 200 (S 32 ).
  • the second storage device 200 reads out the requested data from the disk drive 220 , transmits this read-out data to the first storage device 100 (S 33 ), and reports that read-out was normally completed (S 35 ).
  • the first storage device 100 stores the data received from the second storage device 200 in a specified location in the cache memory 130 (S 34 ).
  • the first storage device 100 reads out the data stored in the cache memory 130 , performs address conversion, transmits the data to the host 10 via the LUN 103 or the like (S 36 ), and issues a read-out completion report (S 37 ). In the series of processing performed in these data read-outs, the conversion operation described with reference to FIG. 6 is performed in reverse.
  • the operation is shown as if data is read out from the second storage device 200 and stored in the cache memory 130 in accordance with the request from the host 10 .
  • the operation is not limited to this; it would also be possible to store all or part of the data stored in the external volume 260 in the cache memory 130 beforehand. In this case, in response to a command from the host 10 , data can be immediately read out from the cache memory 130 and transmitted to the host 10 .
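A rough Python sketch of this read path, including the optional pre-staging of external data into the cache, is given below. The class and method names are assumptions for illustration; the real devices work with commands and cache slots rather than Python dictionaries.

```python
# Hypothetical sketch of the read path: serve from cache when the external data was
# staged beforehand, otherwise read through to the second storage device.

class ReadFrontEnd:
    def __init__(self, read_from_external):
        self.cache = {}                          # stands in for cache memory 130
        self.read_from_external = read_from_external

    def prestage(self, address, data):
        """Optionally copy external data into cache in advance (as noted above)."""
        self.cache[address] = data

    def read(self, address):
        if address in self.cache:                # already staged: answer immediately
            return self.cache[address]
        data = self.read_from_external(address)  # S32/S33: read command to the second device
        self.cache[address] = data               # S34: store in a specified cache location
        return data                              # S36: transmit to the host
```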
  • FIGS. 13 and 14 show the full copying mode in which all of the storage contents of the copying source volume are copied into the copying destination volume
  • FIGS. 15 and 16 show the differential copying mode in which only the differential data generated in the copying source volume following the completion of full copying is copied into the copying destination volume.
  • data is transferred directly between the first storage device and second storage device; the host 10 does not participate.
  • the managing device 20 instructs the first storage device 100 to execute the first full copying mode (S 41 ).
  • the CHA 110 that receives this instruction refers to the mapping table T 1 stored in the shared memory 140 (S 42 ), and acquires path information for the external volume 260 which is the copying source volume.
  • the CHA 110 issues a read command to the second storage device 200 (S 43 ), and requests the read-out of the data that is stored in the external volume 260 .
  • in response to the read command from the first storage device 100 , the second storage device 200 reads out data from the external volume 260 (S 44 ), and transmits this read-out data to the first storage device 100 (S 45 ).
  • when the CHA 110 receives the data from the second storage device 200 , the CHA 110 stores this received data in the cache memory 130 (S 46 ). Furthermore, for example, the CHA 110 requests the execution of destage processing from the DKA 120 by writing a write command into the shared memory 140 (S 47 ).
  • the DKA 120 occasionally refers to the shared memory 140 , and when the DKA 120 discovers an unprocessed write command, the DKA 120 reads out the data stored in the cache memory 130 , performs processing such as address conversion and the like, and writes this data into a specified disk drive 161 (S 48 ).
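The first full copying mode (FIG. 13) amounts to reading the whole external volume over the mapped path and destaging it into the internal disk drives. A minimal sketch, assuming chunked transfers and the hypothetical helper callables shown, might look like this:

```python
# Hypothetical sketch of the first full copying mode (FIG. 13): the external volume 260
# is read via the mapped path and destaged into the internal disk drives.

def first_full_copy(mapping_entry, read_external, write_internal, chunk_blocks=1024):
    path = mapping_entry["path"]                      # S42: path info taken from mapping table T1
    total = mapping_entry["capacity"]                 # treated here as a block count for simplicity
    offset = 0
    while offset < total:
        length = min(chunk_blocks, total - offset)
        data = read_external(path, offset, length)    # S43-S45: read command to the second device
        write_internal(offset, data)                  # S46-S48: cache and destage to disk drive 161
        offset += length
```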
  • FIG. 14 shows the processing of the second full copying mode.
  • the managing device 20 instructs the first storage device 100 to execute the second full copying mode (S 51 ).
  • the CHA 110 that receives this instruction refers to the mapping table T 1 stored in the shared memory 140 (S 52 ), and acquires path information for the external volume 260 which is the copying destination volume. Furthermore, the CHA 110 requests that the DKA 120 perform staging (processing that transfers the data to a cache) of the data stored in the internal volume 190 (S 53 ).
  • the DKA 120 reads out the data of the internal volume 190 from the disk drive 161 , and stores this data in the cache memory 130 (S 54 ). Furthermore, the DKA 120 requests that the CHA 110 issue a write command (S 55 ).
  • the CHA 110 issues a write command to the second storage device 200 (S 56 ).
  • the CHA 110 transmits write data to the second storage device 200 (S 57 ).
  • the second storage device 200 receives the write data from the first storage device 100 (S 58 ), and stores this data in a specified disk drive 220 (S 59 ).
  • the storage contents of the internal volume 190 which is the copying source volume can be copied into the external volume 260 which is the copying destination volume, so that the storage contents of both volumes can be caused to coincide.
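The second full copying mode (FIG. 14) runs in the opposite direction: stage the internal volume into the cache and write it out over the mapped path. Again a minimal sketch with assumed helper callables:

```python
# Hypothetical sketch of the second full copying mode (FIG. 14): the internal volume 190
# is staged into cache and written out to the external volume 260 over the mapped path.

def second_full_copy(mapping_entry, read_internal, write_external, total_blocks,
                     chunk_blocks=1024):
    path = mapping_entry["path"]                      # S52: path info taken from mapping table T1
    offset = 0
    while offset < total_blocks:
        length = min(chunk_blocks, total_blocks - offset)
        data = read_internal(offset, length)          # S53/S54: staging from disk drive 161
        write_external(path, offset, data)            # S56-S59: write command to the second device
        offset += length
```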
  • FIG. 15 shows the processing of the first differential copying mode.
  • the managing device 20 requests the first storage device 100 to split the copying pair (S 61 ).
  • the CHA 110 that receives the splitting instruction updates the copying pair control table T 6 stored in the shared memory 140 , and alters the status of the copying pair to a split state (S 62 ).
  • the pair state of the internal volume 190 and virtual internal volume 191 is dissolved.
  • the host 10 A executes updating I/O for the internal volume 190 (S 63 ).
  • the CHA 110 stores the write data received from the host 10 A in the cache memory 130 (S 64 ), and sends a response to the host 10 A indicating that processing of the write command has been completed (S 65 ).
  • the CHA 110 respectively updates the differential bit map T 4 and the differential data 192 (S 66 ), and requests that the DKA 120 execute destage processing (S 67 ).
  • the DKA 120 stores the write data generated by the updating I/O in the disk drive 161 (S 68 ).
  • the updating I/O from the host 10 A is stopped (S 69 ). For example, this stopping of the I/O can be accomplished manually by the user. Furthermore, the managing device 20 alters the access attribute of the internal volume 190 from “read/write possible” to “write prohibited” (S 70 ). Although the issuing of updating I/O by the host 10 A is already stopped, further variation of the storage contents of the internal volume 190 can be prevented in advance by altering the access attribute to “write prohibited”.
  • the managing device 20 instructs the first storage device 100 to execute first differential copying (S 71 ).
  • the CHA 110 that receives this instruction refers to the mapping table T 1 (S 72 ), and acquires path information for the external volume 260 . Furthermore, the CHA 110 refers to the differential bit map T 4 (S 73 ), and requests that the DKA 120 perform destaging of the differential data 192 (S 74 ).
  • the DKA 120 reads out the differential data 192 produced for the internal volume 190 from the disk drive 161 , and stores this data in the cache memory 130 (S 75 ). Then, the DKA 120 requests that the CHA 110 issue a write command (S 76 ).
  • the CHA 110 issues a write command to the second storage device 200 (S 77 ), and transmits write data (the differential data 192 ) to the second storage device 200 (S 78 ).
  • the second storage device 200 stores the received write data in the external volume 260 .
  • the storage contents of the external volume 260 and internal volume 190 coincide.
  • the managing device 20 alters the access attribute of the internal volume 190 from “write prohibited” to “read/write possible” (S 79 ).
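The first differential copying mode (FIG. 15) can be pictured as follows: after the split, updated blocks of the internal volume are marked in a differential bit map, and once the copying source is set to "write prohibited" only the marked blocks are copied to the external volume. The class below is an illustrative sketch; the names and the in-memory bit map are assumptions.

```python
# Hypothetical sketch of the first differential copying mode (FIG. 15).

class DifferentialCopySource:
    def __init__(self, n_blocks):
        self.diff_bitmap = [False] * n_blocks    # stands in for differential bit map T4
        self.access_attribute = "read/write possible"

    def host_write(self, block, data, write_internal):
        if self.access_attribute == "write prohibited":
            raise PermissionError("updating of the copying source volume is prohibited")
        write_internal(block, data)              # S64-S68: cache, respond to the host, destage
        self.diff_bitmap[block] = True           # S66: record the differential

    def first_differential_copy(self, read_internal, write_external):
        self.access_attribute = "write prohibited"           # S70: fix the storage contents
        for block, dirty in enumerate(self.diff_bitmap):     # S73: refer to the bit map
            if dirty:
                write_external(block, read_internal(block))  # S75-S78: copy only differential data
                self.diff_bitmap[block] = False
        self.access_attribute = "read/write possible"        # S79: restore the attribute
```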
  • FIG. 16 shows the processing of the second differential copying mode.
  • prior to the initiation of differential copying, the managing device 20 first instructs the first storage device 100 to split the copying pair (S 81 ).
  • the CHA 110 that receives this instruction updates the copying pair control table T 6 , and dissolves the pair state (S 82 ).
  • when updating I/O is issued to the external volume 260 , the second storage device 200 writes the write data into the disk drive 220 (S 84 ), and respectively updates the differential data 261 and differential bit map T 4 ( 2 ) (S 85 ).
  • the managing device 20 alters the access attribute of the external volume 260 from “read/write possible” to “write prohibited” (S 86 ), thus prohibiting updating of the external volume 260 ; then, the managing device 20 instructs the first storage device 100 to initiate second differential copying (S 87 ).
  • the CHA 110 that receives the instruction to initiate differential copying requests that the second storage device 200 transfer the differential bit map T 4 ( 2 ) (S 88 ). Since the contents of the differential data 261 generated in the external volume 260 are controlled by the second storage device 200 , the first storage device 100 acquires the differential bit map T 4 ( 2 ) from the second storage device 200 (S 89 ).
  • a construction is used in which commands and data are directly exchanged between the first storage device 100 and second storage device 200 .
  • the present invention is not limited to this; for example, it would also be possible to exchange data such as the differential bit map and the like between the respective storage devices 100 and 200 via the managing device 20 .
  • the CHA 110 refers to the mapping table T 1 (S 90 ), and acquires path information indicating the path to the external volume 260 . Then, the CHA 110 requests the transfer of the differential data 261 by issuing a read command to the second storage device 200 (S 91 ).
  • the second storage device 200 transmits the differential data 261 to the first storage device 100 (S 92 ). Then, the CHA 110 that receives this differential data 261 stores the differential data 261 in the cache memory 130 (S 93 ), and requests that the DKA 120 perform destage processing of the differential data 261 (S 94 ). Then, the DKA 120 reads out the differential data 261 stored in the cache memory 130 , and writes this data into the disk drive 161 constituting the internal volume 190 (S 95 ). As a result, the storage contents of the external volume 260 and internal volume 190 coincide.
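The second differential copying mode (FIG. 16) differs in that the differential bit map is held by the second storage device, so the first storage device must fetch it before pulling the marked blocks. A minimal sketch with assumed helper callables:

```python
# Hypothetical sketch of the second differential copying mode (FIG. 16): fetch the
# remote differential bit map, then copy only the marked blocks into the internal volume.

def second_differential_copy(get_remote_bitmap, read_external, write_internal,
                             mapping_entry):
    remote_bitmap = get_remote_bitmap()              # S88/S89: transfer of bit map T4(2)
    path = mapping_entry["path"]                     # S90: path info taken from mapping table T1
    for block, dirty in enumerate(remote_bitmap):
        if dirty:
            data = read_external(path, block)        # S91/S92: read differential data 261
            write_internal(block, data)              # S93-S95: cache and destage to disk drive 161
```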
  • the external volume 260 can be handled as though this volume were a logical volume inside the first storage device 100 by mapping the external disk drive 220 into the V-VOL. Accordingly, even in cases where the second storage device 200 is an old type device that cannot be directly connected to the host 10 , the memory resources of the old type device can be reutilized as memory resources of the first storage device 100 , and can be provided to the host 10 , by interposing a new type first storage device 100 . As a result, the old type storage device 200 can be connected to a new type storage device 100 , and the memory resources can be effectively utilized.
  • the low performance of the second storage device can be hidden by the high-performance computer resources (cache capacity, CPU processing speed and the like) of the first storage device 100 , so that high-performance services can be provided to the host 10 using a virtual internal volume that utilizes the disk drive 220 .
  • functions such as (for example) striping, expansion, splitting, RAID and the like can be added to an external volume 260 constructed in the disk drive 220 , and can be used. Accordingly, compared to cases in which an external volume is directly mapped into an LUN, the degree of freedom of utilization is increased so that convenience of use is improved.
  • the storage contents can be synchronized between the internal volume 190 and virtual internal volume 191 (external volume 260 ). Accordingly, a backup of the internal volume 190 can be formed in the virtual internal volume 191 , or conversely, a backup of the virtual internal volume 191 can be formed in the internal volume 190 , so that the convenience is even further improved.
  • a construction is used in which the storage contents of the copying source volume are fixed by altering the access attribute to “write prohibited”. Accordingly, volume copying can be executed without particularly altering the processing contents in the host 10 .
  • FIG. 17 is an explanatory diagram showing the storage structure of a storage system constituting a second embodiment of the present invention.
  • in this embodiment, a third storage device 300 is connected to the first storage device 100 in addition to the second storage device 200 .
  • this third storage device 300 is a device that is externally connected to the first storage device 100 .
  • the third storage device 300 comprises (for example) PDEVs 320 , VDEVs 330 , LDEVs 340 , LUs 350 , targets 311 and the like.
  • for the construction of the third storage device 300 , the same construction as that of the second storage device 200 can be employed; since this construction is not the gist of the present invention, details will be omitted.
  • the second storage device 200 and third storage device 300 need not have the same structure.
  • the first storage device 100 does not comprise PDEVs 161 which are physical storage devices, and does not comprise real volumes (internal volumes).
  • the first storage device 100 comprises only "LDEV 1 " and "LDEV 2 ", which are virtual internal volumes. Accordingly, the first storage device 100 need not be a disk array device; for example, this first storage device 100 may be an intelligent type switch comprising a computer system.
  • the first virtual internal volume "LDEV 1 " 164 is connected to "LDEV 1 " 240 , which is a real volume of the second storage device 200 , via "V-VOL 1 " 163 .
  • the second virtual internal volume “LDEV 2 ” 164 is connected to “LDEV 1 ” 340 , which is a real volume of the third storage device 300 , via “V-VOL 2 ” 163 .
  • the system is devised so that full copying and differential copying are performed between the first virtual internal volume “LDEV 1 ” and the second virtual internal volume “LDEV 2 ” inside the first storage device 100 .
  • FIG. 18 is an explanatory diagram which shows one example of the control screen used by the storage system. The control screen of this embodiment can be used in any of the respective embodiments described above.
  • the user logs into the managing device 20 , and calls up a control screen such as that shown in FIG. 18 .
  • the managing device 20 sends instructions for alteration of the construction to one or both of the storage devices 100 and 200 . Receiving these instructions, the respective storage devices 100 and 200 alter their internal construction.
  • control menus M 1 through M 3 can be set on the control screen.
  • these control menus M 1 through M 3 can be constructed as tab type switching menus.
  • the menu M 1 is a menu that is used to perform various types of LU operations such as production of volumes or the like.
  • the menu M 2 is a menu that is used to perform communications port operations.
  • the menu M 3 is a menu that is used to perform volume copying operations between the storage devices described in the abovementioned embodiments.
  • the menu M 3 can be constructed so that this menu includes a plurality of screen regions G 1 through G 5 .
  • the screen region G 1 is used to select the storage device (subsystem) that performs the setting of copying pairs.
  • the conditions of the set copying pairs are displayed in the screen region G 2 .
  • the copying source volume (P-VOL), copying destination volume (S-VOL), emulation type, capacity, copying status, progression, copying speed and the like can be displayed in the screen region G 2 .
  • the user can select two copying pairs displayed in the screen region G 2 ; furthermore, the user can display the submenu M 4 by right clicking [with the mouse].
  • the user can designate the synchronization of volumes or dissolution of pairs by means of the submenu M 4 .
  • either internal volumes inside the first storage device 100 or external volumes inside the second storage device 200 can be exclusively selected as the volumes of the copying pair.
  • an internal volume is selected as the copying source volume.
  • An internal volume or external volume can be designated as either the copying source volume or copying destination volume.
  • Preset values can be displayed in the screen region G 4 .
  • Operation states can be displayed in the screen region G 5 .
  • the user can cause alterations in the construction to be reflected by operating an application button B 1 .
  • the user operates a cancel button B 2 .
  • the abovementioned screen construction is an example; the present invention is not limited to this construction.

Abstract

Virtualization arrangements, including: splitting a relationship between a first and second virtual volume; receiving a differential copying request; if the differential copying request indicates to copy differential data from one of the first and second virtual volume to the other of the first and second virtual volume, (1) controlling to copy the data to the one of the first and second virtual volume based on the differential information, and (2) transferring the data to the other of the first and second logical volume of the storage systems, so that the storage system can write the data of the write request to a storage area of the disk drives.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This is a continuation of U.S. application Ser. No. 11/016,806, filed Dec. 21, 2004. This application relates to and claims priority from Japanese Patent Application No. 2004-312358, filed on Oct. 27, 2004. The entirety of the contents and subject matter of all of the above is incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a storage system and a storage control device.
  • 2. Description of the Related Art
  • For example, data is controlled using relatively large-scale storage systems in order to handle large quantities of various types of data in government organizations, public offices, autonomous regional bodies, business enterprises, educational organizations and the like. For instance, such storage systems are constructed from disk array devices or the like. Disk array devices are constructed by disposing numerous storage devices in the form of an array; for example, a storage region based on an RAID (redundant array of independent disks) is provided. One or more logical volumes (logical units) are formed in a physical storage region provided by a storage device group, and these logical volumes are provided to a host computer (more specifically, to a data base program operating in a host computer). The host computer (hereafter abbreviated to “host”) can perform the reading and writing of data with respect to the logical volumes by transmitting specified commands.
  • With the development of an informationized society and the like, there has been a continual increase in the amount of data that must be managed. Consequently, there is a demand for storage control devices that offer higher performance and a larger capacity, and new types of storage control devices have been developed one after another in order to meet this market demand. There are two conceivable methods for introducing new types of storage control devices as storage systems. One is a method in which an old type of storage control device and a new type of storage control device are completely interchanged, so that a storage system is constructed from a completely new type of storage control device (Japanese Patent Publication No. 10-508967). The other method is a method in which a new type of storage control device is added to a storage system consisting of an old type of storage device, so that new and old types of storage devices are caused to coexist.
  • Furthermore, a technique in which the storage region of a physical device is controlled in sector units, and a logical device is dynamically constructed in sector units, is also known (Japanese Patent Application Laid-Open No. 2001-337850).
  • Moreover, a technique is also known which is devised so that when a logical device is constructed from a plurality of storage devices with different capacities, an area is formed in accordance with the storage device that has the smallest capacity, and an area is formed in accordance with the smallest capacity in the case of the remaining capacity as well (Japanese Patent Application Laid-Open No. 9-288547).
  • In cases where a complete transition is made from an old type of storage control device to a new type of storage control device, the function and performance of the new type of storage control device can be utilized; however, the old type of storage control device cannot be effectively utilized, and the introduction costs are also increased. On the other hand, in cases where an old type of storage control device and a new type of storage control device are used together, the number of storage control devices that construct the storage system is increased, and considerable effort is required in order to control and operate both the new and old storage control devices.
  • Furthermore, in cases where the response of the storage device in which the old type of storage control device is installed is slow, the performance of the overall system drops as a result of this old type of storage control device being connected to the storage system. For example, such cases include cases in which the old type of storage device is a device that involves mechanical operations (such as head seeking or the like), so that the mechanical operating time is long, cases in which the capacity of the data transfer buffer of the old type of storage device is small, and the like.
  • Furthermore, there may also be cases in which an old type of storage device cannot be utilized “as is”, as in combinations of open type storage devices and main frames, or servers to which only storage devices with specified functions can be connected.
  • SUMMARY OF THE INVENTION
  • The present invention was devised in light of the above problems. One object of the present invention is to provide a storage system and storage control device which are devised so that different types of storage control devices such as new and old storage control devices can be caused to cooperate, thus allowing effective utilization of memory resources. Another object of the present invention is to provide a storage system and storage control device which allow the utilization of an old type of storage control device as a new type of storage control device. Another object of the present invention is to provide a storage system and storage control device which are devised so that new functions can be added while utilizing the advantages of an old type of storage device. Another object of the present invention is to provide a storage system and storage control device which are devised so that the memory resources of a second storage control device can be incorporated into a first storage control device as a first virtual volume, and the storage contents of the first real volume of the first storage control device and this first virtual volume can be synchronized. Other objects of the present invention will become clear from the following description of embodiments.
  • In order to solve the abovementioned problems, the storage system of the present invention is a storage system which is constructed by communicably connecting a first storage control device and a second storage control device, and which performs data processing in accordance with requests from a higher device, wherein the abovementioned first storage control device comprises a first real volume, a first virtual volume that can form a copying pair with the first real volume, a first control part that respectively controls data communications between the first real volume and first virtual volume, and the higher device and second storage control device, and a synchronizing part that synchronizes the storage contents of the first real volume and the storage contents of the first virtual volume, and the second storage control device comprises a second real volume that is associated with the first virtual volume, and a second control part that respectively controls data communications between the second real volume, and the higher device and first storage control device.
  • For example, storage devices such as disk array devices or the like, or highly functionalized switches (fiber channel switches or the like) can be used as the storage control devices. The first storage control device respectively comprises a first real volume and a first virtual volume. The first real volume is constructed on the basis of a first storage device belonging to the first storage control device, and the first virtual volume is constructed on the basis of a second storage device belonging to the second storage control device.
  • Specifically, the first storage control part incorporates the memory resources of the second storage control device as though these memory resources were its own memory resources, and provides these memory resources to the higher device. Furthermore, the synchronizing part synchronizes the storage contents of the first real volume and first virtual volume. Accordingly, a backup of the first real volume can be formed in the first virtual volume, and conversely, a backup of the first virtual volume can be formed in the first real volume. Here, the synchronization modes can be divided into two main types: namely, a full copying mode in which all of the storage contents are copied, and a differential copying mode in which only the differential data is copied.
  • In an embodiment of the present invention, the first storage control device has a first storage device, and the second storage control device has a second storage device; furthermore, the first real volume is connected to the first storage device via an intermediate storage device, and the first virtual volume is connected to the second storage device via a virtually constructed virtual intermediate storage device. Here, the intermediate storage device is a storage hierarchy which logically connects the first storage device that provides a physical storage region, and the first real volume. Similarly, the virtual intermediate storage device is a storage hierarchy which logically connects the second storage device that provides a physical storage region, and the first virtual volume. Furthermore, while the intermediate storage device is set in the storage region of the first storage device of the first storage control device, the virtual intermediate storage device is associated with the storage region of the second storage device of the second storage control device. Specifically, by mapping the second storage device in the virtual intermediate storage device, it is possible to vary the storage capacity, or to employ a stripe structure or the like.
  • The synchronizing part can copy the entire storage contents stored in the first real volume into the first virtual volume. Conversely, the synchronizing part can also copy the entire storage contents stored in the first virtual volume into the first real volume.
  • Alternatively, the synchronizing part can also copy the differential data between the storage contents of the first real volume and the storage contents of the first virtual volume into the first virtual volume. For example, after the first real volume and first virtual volume are synchronized by full copying, the copying pair consisting of the two volumes is temporarily released (split). Then, in cases where a change occurs in the storage contents of the first real volume as a result of a write request from the higher device, the storage contents of the two volumes can again be caused to coincide by separately controlling this changed differential data, and copying only this differential data into the first virtual volume.
  • Here, in cases where write requests to the first real volume from the higher device are stopped, the synchronizing part can copy the differential data into the first virtual volume. As a result, the storage contents of both volumes can be matched.
  • In an embodiment of the present invention, the system further comprises a managing device which is communicably connected to the first storage control device and second storage control device, respectively. Furthermore, in cases where the access attribute of “write prohibited” is set in the first real volume by the managing device, the synchronizing part copies the differential data into the first virtual volume, and when the copying of the differential data is completed, the managing device can set the access attribute of the first real volume as “read and write possible”.
  • The function of the managing device can be constructed from a computer program. Accordingly, for example, the managing device can be constructed as a computer device that is separate from the higher device, or can be installed inside the higher device. The term “access attribute” refers to information that is used to control whether or not a given volume can be accessed. Examples of access attributes include “write prohibited (read only)” which prohibits the updating of data, “read/write possible” which allows both the reading and writing of data, “hidden” which does not respond to inquiry responses, “empty capacity 0” which responds that the state is full in the case of inquiries for empty capacity, and the like.
  • By starting differential copying after setting the access attribute of the volume as “write prohibited”, it is possible to prohibit updating requests (write requests) from the higher device, and to match the storage contents of the copying source volume (the first real volume in this case) and the copying destination volume (the first virtual volume in this case). Furthermore, since it is sufficient to alter only the access attribute inside the storage control device without any need to alter the setting of the higher device, data matching can be ensured by means of comparatively simple construction.
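The access attributes named above can be pictured as a small enumeration together with the checks a controller might apply before servicing a request. This is an illustrative sketch; only the attribute names come from the description, and the checking functions are assumptions.

```python
# Hypothetical sketch of access-attribute handling using the attributes named above.
from enum import Enum

class AccessAttribute(Enum):
    READ_WRITE_POSSIBLE = "read/write possible"
    WRITE_PROHIBITED = "write prohibited (read only)"
    HIDDEN = "hidden"                          # does not respond to inquiry requests
    EMPTY_CAPACITY_ZERO = "empty capacity 0"   # reports a full state for empty-capacity inquiries

def is_write_allowed(attribute: AccessAttribute) -> bool:
    return attribute is AccessAttribute.READ_WRITE_POSSIBLE

def is_visible_to_inquiry(attribute: AccessAttribute) -> bool:
    return attribute is not AccessAttribute.HIDDEN
```

For example, before starting differential copying the attribute of the copying source volume would be switched to WRITE_PROHIBITED, making is_write_allowed return False until the copying completes.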
  • The synchronizing part can also copy differential data between the storage contents of the first real volume and the storage contents of the first virtual volume into the first real volume. Furthermore, in this case, the synchronizing part can acquire differential control information relating to the differential data from the second storage control device, and can read out differential data from the second storage control device and copy this data into the first real volume on the basis of this differential control information.
  • Furthermore, in cases where write requests to the second real volume from the higher device are prohibited, the synchronizing part can maintain the matching of data by copying the differential data into the first real volume.
  • Moreover, in cases where a managing device that is communicably connected to the first storage control device and second storage control device, respectively is provided, and the access attribute of “write prohibited” is set in the second real volume by the managing device, the synchronizing part can copy the differential data into the first real volume, and when the copying of this differential data has been completed, the managing device can also set the access attribute of the second real volume as “read and write possible”.
  • In an embodiment of the present invention, the storage system is a storage system in which a first storage control device and a second storage control device are communicably connected, this storage system comprising a higher device that can respectively issue access requests to the first storage control device and second storage control device, and a managing device that can communicate with the first storage control device and second storage control device, wherein the first storage control device comprises a first storage device that stores data, an intermediate storage device that is disposed in the storage region of this first storage device, a first real volume that is disposed in the storage region of this intermediate storage device, a virtual intermediate storage device that is disposed in the storage region of the second storage device of the second storage control device, a first virtual volume that is disposed in the storage region of this virtual intermediate storage device, a higher communications control part that respectively controls data communications between the higher device, and the second storage control device and managing device, a lower communications control part that controls data communications with the first storage device, a memory part that is shared by the higher communications control part and lower communications control part, and a mapping table that is stored in the memory part and that is used to map the second storage device in the virtual intermediate storage device. Furthermore, in cases where the first full copying mode that copies all of the storage contents stored in the first virtual volume into the first real volume is designated by the managing device, the higher communications control part refers to the mapping table and reads out all of the data from the second real volume, and the lower communications control part stores all of this read-out data in the first storage device. On the other hand, in cases where the second full copying mode that copies all of the storage contents stored in the first real volume into the first virtual volume is designated by the managing device, the lower communications control part reads out all of the data of the first real volume from the first storage device, and the higher communications control part refers to the mapping table and writes this read-out data into the second real volume.
  • Furthermore, the first storage control device and second storage control device can respectively hold differential control information that controls the differential data between the storage contents of the first real volume and the storage contents of the first virtual volume. Moreover, in cases where the first differential copying mode that copies the differential data into the first virtual volume is designated by the managing device, the lower communications control part reads out the differential data from the first storage device, and the higher communications control part refers to the mapping table and writes this read-out differential data into the second real volume. On the other hand, in cases where the second differential copying mode that copies the differential data into the first real volume is designated by the managing device, the higher communications control part reads out the differential control information controlled by the second storage control device, refers to this read-out differential control information and the mapping table, and reads out the differential data from the second real volume, and the lower communications control part stores this read-out differential data in the first storage device.
  • The present invention may also be understood as the invention of a storage control device. Moreover, the present invention may also be understood as a copying control method for a storage control device. Specifically, for example, this copying control method can be constructed so as to comprise the steps of mapping the second real volume of a second storage control device into the first virtual volume of a first storage control device, setting the abovementioned first virtual volume and the first real volume of the abovementioned first storage control device as a copying pair, and causing the storage contents of the abovementioned first virtual volume and the abovementioned first real volume to coincide.
  • There may be cases in which all or part of the means, functions and steps of the present invention can be constructed as computer programs that are executed by a computer system. In cases where all or part of the construction of the present invention is constructed from computer programs, these computer programs can be fixed (for example) on various types of storage media and distributed (or the like); alternatively, these computer programs can also be transmitted via communications networks.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram which shows the overall construction of a storage system constituting an embodiment of the present invention;
  • FIG. 2 is a block diagram of the storage system;
  • FIG. 3 is an explanatory diagram which shows an outline of the functional construction of the storage system;
  • FIG. 4 is an explanatory diagram which shows the storage structure in model form;
  • FIG. 5 is an explanatory diagram which shows an example of the construction of the mapping table;
  • FIG. 6 is an explanatory diagram which shows the conditions of address conversion in a case where data is written into an external volume incorporated as a virtual internal volume;
  • FIG. 7 is an explanatory diagram respectively showing the differential bit map T4 and saving destination address control map T5 used to control the differential data;
  • FIG. 8 is an explanatory diagram showing an example of the construction of the copying pair control table;
  • FIG. 9 is an explanatory diagram showing an example of the construction of the access attribute control table;
  • FIG. 10 is an explanatory diagram showing the flow of the processing that is used to construct the mapping table;
  • FIG. 11 is a schematic diagram showing a case in which data is written into an external storage device used as a virtual internal volume;
  • FIG. 12 is a schematic diagram showing a case in which data is read out from an external storage device used as a virtual internal volume;
  • FIG. 13 is a sequence flow chart showing the flow of the first full copying mode;
  • FIG. 14 is a sequence flow chart showing the flow of the second full copying mode;
  • FIG. 15 is a sequence flow chart showing the flow of the first differential copying mode;
  • FIG. 16 is a sequence flow chart showing the flow of the second differential copying mode;
  • FIG. 17 is an explanatory diagram showing the storage structure of a storage system according to a second embodiment; and
  • FIG. 18 is an explanatory diagram showing the storage structure of a storage system according to a third embodiment.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIG. 1 is a structural explanatory diagram which shows an overall outline of an embodiment of the present invention. In this embodiment, as will be described later, [the storage system] maps a storage device present on the outside into its own intermediate storage device (VDEV), thus incorporating this external storage device as though this device were its own internal volume, and provides this volume to a host.
  • For example, the storage system of the present embodiment can comprise a first storage device 1 which is one example of a first storage control device, a second storage device 2 which is one example of a second storage control device, a host 3 which acts as a higher device, and a managing device 4.
  • For example, the first storage device 1 is constructed as a disk array device. The first storage device 1 comprises three communications ports 1A through 1C; the host 3, managing device 4 and second storage device 2 are communicably connected by means of these respective communications ports. Here, for example, data communications can be performed between the respective storage devices 1 and 2, and the respective storage devices 1 and 2 and the host 3, on the basis of a fiber channel protocol. Furthermore, for example, data communications can be performed between the respective storage devices 1 and 2 and the managing device 4 on the basis of a TCP/IP (transmission control protocol/internet protocol). However, the above are examples; the present invention is not restricted in terms of the type of protocol used.
  • The first storage device 1 can comprise a control part 5, an internal volume 6 used as a first real volume, and a virtual internal volume 7 used as a first virtual volume. The control part 5 respectively controls the exchange of data inside the first storage device and the exchange of data with the outside. The internal volume 6 is disposed on the basis of a physical storage device (e. g., a disk drive) disposed inside the first storage device 1. The virtual internal volume 7 has a virtual existence; the entity that stores data is present inside the second storage device 2. Specifically, the virtual internal volume 7 is constructed by mapping an external volume 9 of the second storage device 2 into a specified level of the storage hierarchy of the first storage device 1.
  • The control part 5 comprises a differential bit map 5A and a mapping table 5B. The differential bit map 5A comprises information that is used to control the differential between the storage contents of the internal volume 6 and the storage contents of the virtual internal volume 7 (external volume 9). When the host 3 updates the storage contents of the internal volume 6 after the internal volume 6 and virtual internal volume 7 have been synchronized, differential data 6A is generated by this updating. The differential bit map 5A comprises information that is used to control this differential data 6A. The mapping table 5B comprises information that is used to associate the external volume 9 with the virtual internal volume 7; for example, this information includes path information or the like that is used to access the external volume 9.
  • The second storage device 2 is communicably connected with the host 3, managing device 4 and first storage device 1, respectively via respective communications ports 2A through 2C. For example, the second storage device 2 can be constructed so that this device comprises a control part 8 and an external volume 9. The control part 8 respectively controls the exchange of data within the second storage device 2 and the exchange of data with the outside. The external volume 9 is disposed on the basis of a physical storage device disposed inside the second storage device 2. Since the volumes of the second storage device 2 are present on the outside as seen from the first storage device 1, these volumes are called external volumes here. Furthermore, the control part 8 comprises a differential bit map 8A that is used to control the differential data 9A that is generated in the external volume 9.
  • In the present embodiment, the internal volume 6 and virtual internal volume 7 form a copying pair. Either of these volumes may be the copying source, and either of the volumes may be the copying destination. In regard to the method used to synchronize the storage contents, there is full copying in which all of the storage contents of the copying source volume are copied into the copying destination volume, and differential copying in which only the differential data between the copying source volume and copying destination volume is copied; either of these methods may be employed.
  • In cases where data is copied from the internal volume 6 into the virtual internal volume 7, the control part 5 refers to the mapping table 5B, acquires path information relating to the path to the external volume 9 which is the entity of the virtual internal volume 7, and transfers data to the external volume 9. Similarly, furthermore, in cases where data is copied from the virtual internal volume 7 into the internal volume 6, the control part 5 refers to the mapping table 5B, acquires path information relating to the path to the external volume 9, and writes data read out from the external volume 9 into the internal volume 6.
  • In the present embodiment, even in cases where the first storage device 1 incorporates the external volume 9 of the second storage device 2 as its own virtual internal volume 7, the data of the internal volume 6 and the data of the virtual internal volume 7 can be synchronized. The present embodiment will be described in greater detail below.
  • 1. FIRST EMBODIMENT
  • FIG. 2 is a block diagram which shows the construction of the essential parts of the storage system of the present embodiment. For example, the hosts 10A and 10B are computer devices comprising information processing resources such as a CPU (central processing unit), memory and the like; for instance, these hosts are constructed as personal computers, workstations, main frames or the like.
  • The host 10A comprises an HBA (host bus adapter) 11A that is used to access a first storage device 100 via a communications network CN1, and (for example) an application program 12A such as data base software or the like. Similarly, the host 10B also comprises an HBA 11B that is used to access a second storage device 200, and an application program 12B. Below, in cases where no particular distinction is made between the respective hosts 10A and 10B, these parts will be referred to simply as hosts 10, HBA 11 and application programs 12.
  • For example, depending on the case, an LAN (local area network), an SAN (storage area network), the internet, a dedicated circuit, a public circuit or the like can be appropriately used as the communications network CN1. For example, data communications via an LAN are performed according to a TCP/IP protocol. In cases where the hosts 10 are connected to the first storage device 100 [and second storage device] 200 via an LAN, the hosts 10 request data input and output in file units by designating file names.
  • On the other hand, in cases where the hosts 10 are connected to the first storage device 100 [and second storage device] 200 via an SAN, the hosts 10 request data input and output with blocks (which are the units of data control of the storage regions provided by a plurality of disk storage devices (disk drives)) in accordance with a fiber channel protocol. In cases where the communications network CN1 is an LAN, the HBA 11 is (for example) a network card corresponding to this LAN. In cases where the communications network CN1 is an SAN, the HBA 11 is (for example) a host bus adapter.
  • The managing device 20 is a computer device which is used to control the construction of the storage system and the like. For example, this device is operated by a user such as a system manager or the like. The managing device 20 is respectively connected to the respective storage devices 100 and 200 via a communications network CN2. As will be described later, the managing device 20 issues instructions relating to the formation of copying pairs, access attributes and the like to the respective storage devices 100 and 200.
  • For example, the first storage device 100 is constructed as a disk array subsystem. However, the present invention is not limited to this; the first storage device 100 can also be constructed as a highly functionalized intelligent type fiber channel switch. As will be described later, the first storage device 100 can provide the memory resources of the second storage device 200 to the host 10 as its own logical volume (logical unit).
  • The first storage device 100 can be divided into two main parts, i. e., a controller and a storage part 160. For example, the controller comprises a plurality of channel adapters (hereafter referred to as “CHAs”) 110, a plurality of disk adapters (hereafter referred to as “DKAs”) 120, a cache memory 130, a shared memory 140, and a connection control part 150.
  • Each CHA 110 performs data communications with a host 10. Each CHA 110 comprises a communications port 111 for performing communications with this host 10. The respective CHAs 110 are constructed as microcomputer systems comprising a CPU, memory and the like; these CHAs 110 interpret and execute various types of commands received from the hosts 10. Network addresses used to discriminate the respective CHAs 110 (e. g., IP addresses or WWN) are assigned to these CHAs 110, and each CHA 110 can behave separately as an NAS (network attached storage). In cases where a plurality of hosts 10 are present, the respective CHAs 110 separately receive and process requests from the respective hosts 10.
  • Each DKA 120 exchanges data with a disk drive 161 of the storage part 160. Like the CHAs 110, each DKA 120 is constructed as a microcomputer system comprising a CPU, memory and the like. For example, the respective DKAs 120 write data received from the host 10 or read out from the second storage device 200 by the CHAs 110 into a specified address of a specified disk drive 161. Furthermore, each DKA 120 reads out data from a specified address of a specified disk drive 161, and transmits this data to a host 10 or the second storage device 200. In cases where the input or output of data is performed with the disk drive 161, each DKA 120 converts the logical address into a physical address. In cases where the disk drive 161 is controlled in accordance with an RAID, each DKA 120 accesses data according to the RAID construction. For example, each DKA 120 writes the same data into different disk drive groups (RAID groups), or performs parity calculations and writes the data and parity into the disk drive groups.
  • The cache memory 130 stores data received from the host 10 or second storage device 200, or stores data read out from the disk drive 161. As will be described later, a virtual intermediate storage device is constructed utilizing the storage space of the cache memory 130.
  • Various types of control information used in the operation of the first storage device 100 are stored in the shared memory (also called a control memory in some cases) 140. Furthermore, in addition to the setting of a work region, various types of tables such as the mapping table and the like described later are also stored in the shared memory 140.
  • Moreover, one or a plurality of disk drives 161 can also be used as cache disks. Furthermore, the cache memory 130 and shared memory 140 can be constructed as respectively separate memories, or some of the storage regions of the same memory can be used as cache regions, and other storage regions can be used as control regions.
  • The connection control part 150 connects the respective CHAs 110, the respective DKAs 120, the cache memory 130 and the shared memory 140 to each other. For example, the connection control part 150 can be constructed as a high-speed bus such as an ultra-high-speed cross-bar switch that performs data transfer by means of a high-speed switching operation.
  • The storage part 160 comprises a plurality of disk drives 161. For example, various types of storage disks such as hard disk drives, flexible disk drives, magnetic disk drives, semiconductor memory drives, optical disk drives or the like, or the equivalents of such drives, can be used as the disk drives 161. Furthermore, for example, different types of disks may be mixed inside the storage part 160, as in FC (fiber channel) disks, SATA (serial AT attachment) disks or the like.
  • Furthermore, as will be described later, a virtual internal volume 191 based on a disk drive 220 of the second storage device 200 can be formed in the first storage device 100. This virtual internal volume 191 can be provided to the host 10A in the same manner as the internal volume 190 based on the disk drive 161.
  • For example, the second storage device 200 comprises a controller 210 and a plurality of disk drives 220. The second storage device 200 is communicably connected with the host 10B, managing device 20 and first storage device 100, respectively via the communications port 211.
  • The second storage device 200 and host 10B are connected via the communications network CN1. The second storage device 200 and managing device 20 are connected via the communications network CN2. The second storage device 200 and first storage device 100 are connected via the communications network CN3. For example, the communications networks CN2 and CN3 can be constructed from SAN, LAN or the like.
  • The second storage device 200 may have substantially the same construction as the first storage device, or may have a simpler construction than the first storage device 100. The disk drives 220 of the second storage device 200 may be handled as internal storage devices of the first storage device 100.
  • Reference is now made to FIG. 3. FIG. 3 is a structural explanatory diagram focusing on the functional construction of the present embodiment. The controller 101 of the first storage device 100 is constructed from the CHAs 110, respective DKAs 120, cache memory 130, shared memory 140 and the like.
  • As internal functions, this controller 101 comprises (for example) a first full copying control part 102, a second full copying control part 103, a first differential copying control part 104, and a second differential copying control part 105. Furthermore, various types of tables such as a mapping table T1, differential bit map T4 and the like are stored inside the shared memory 140 of the controller 101.
  • The first full copying control part 102 is a function that copies all of the storage contents of the virtual internal volume 191 into the internal volume 190. Conversely, the second full copying control part 103 is a function that copies all of the storage contents of the internal volume 190 into the virtual internal volume 191. Furthermore, the first differential copying control part 104 is a function that copies the differential data 192 of the internal volume 190 into the virtual internal volume 191. The second differential copying control part 105 is a function that copies the differential data 261 of the virtual internal volume 191 into the internal volume 190.
  • The internal volume 190 and virtual internal volume 191 are respectively disposed in the first storage device 100. The internal volume 190 is a volume that is set on the basis of the storage regions of the respective disk drives 161 that are directly governed by the first storage device 100. The virtual internal volume 191 is a volume that is set on the basis of the storage regions of the respective disk drives 220 of the second storage device 200.
  • The controller 210 of the second storage device 200 stores the differential bit map T4 (2) in a memory (not shown in the figures). This differential bit map T4 (2) is used to control the differential data 261 that is generated for the external volume 260 of the second storage device 200. Here, as was described above, the external volume 260 is based on the storage region of the disk drive 220, and is an internal volume with respect to the second storage device 200. However, since this volume 260 is mapped into the virtual internal volume 191 and incorporated into the first storage device 100, this volume is called the external volume 260 in the present embodiment.
  • The managing device 20 comprises an access attribute setting part 21. This access attribute setting part 21 is used to set access attributes for the internal volume 190 or external volume 260. The setting of access attributes can be performed manually by the user, or can be performed automatically on the basis of some type of trigger signal. The types of access attributes will be further described later.
  • Reference is now made to FIG. 4. FIG. 4 is a structural explanatory diagram which focuses on the storage structure of the first storage device 100 and second storage device 200. The construction of the first storage device 100 will be described first.
  • For example, the storage structure of the first storage device 100 can be roughly divided into a physical storage hierarchy and a logical storage hierarchy. The physical storage hierarchy is constructed by a PDEV (physical device) 161 which is a physical disk. The PDEV corresponds to a disk drive.
  • The logical storage hierarchy can be constructed from a plurality (e. g., two types) of hierarchies. One logical hierarchy can be constructed from VDEVs (virtual devices) 162 and virtual VDEVs (hereafter called “V-VOLs”) 163 which can be handled as VDEVs 162. The other logical hierarchy can be constructed from LDEVs (logical devices) 164.
  • For example, the VDEVs 162 can be constructed by forming a specified number of PDEVs 161 into a group, e. g., four units as one set (3D+1P), eight units as one set (7D+1P) or the like. One RAID storage region is formed by the aggregate of the storage regions provided by the respective PDEVs 161 belonging to a group. This RAID storage region constitutes a VDEV 162.
  • In contrast to the construction of a VDEV 162 in a physical storage region, the V-VOL 163 is a virtual intermediate storage device which requires no physical storage region. The V-VOL 163 is not directly associated with a physical storage region, but is a receiver for the mapping of LUs (logical units) of the second storage device 200.
  • One or more LDEVs 164 can be respectively disposed in the VDEV 162 or V-VOL 163. For example, the LDEVs 164 can be constructed by splitting a VDEV 162 into specified lengths. In cases where the host 10 [involved] is an open type host, the host 10 can recognize the LDEV 164 as a single physical disk by mapping the LDEV 164 in the LU 165. The open type host can access a desired LDEV 164 by designating the LUN (logical unit number) or logical block address. Furthermore, in the case of a main frame type host, the LDEV 164 can be directly accessed.
  • The LU 165 is a device that can be recognized as an SCSI logical unit. The respective LUs 165 are connected to the host 10 via a target port 111A. One or more LDEVs 164 can be respectively associated with each LU 165. It is also possible to expand the LU size virtually by associating a plurality of LDEVs 164 with one LU 165.
  • The CMD (command device) 166 is a special LU that is used to transfer commands and status [information] between the I/O control program operating in the host 10 and the controller 101 (CHAs 110, DKAs 120) of the storage device 100. Commands from the host 10 are written into the CMD 166. The controller 101 of the storage device 100 executes processing corresponding to the commands that are written into the CMD 166, and writes the results of this execution into the CMD 166 as status [information]. The host 10 reads out and confirms the status [information] that is written into the CMD 166, and then writes the processing contents that are to be executed next into the CMD 166. Thus, the host 10 can issue various types of instructions to the storage device 100 via the CMD 166.
  • Furthermore, the commands received from the host 10 can also be processed without being stored in the CMD 166. Moreover, the CMD can also be formed as a virtual device without defining the actual device (LU), and can be constructed so as to receive and process commands from the host 10. Specifically, for example, the CHAs 110 write the commands received from the host 10 into the shared memory 140, and the CHAs 110 or DKAs 120 process the commands stored in this shared memory 140. The processing results are written into the shared memory 140, and are transmitted to the host 10 from the CHAs 110.
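  • A minimal sketch of this command-device style of in-band control is shown below. The class and method names (CommandDevice, Controller, HostIOControlProgram) are illustrative assumptions, not the actual controller interface; the sketch only illustrates the write-command / execute / write-status / read-status cycle described above.

```python
# Hypothetical sketch of the CMD (command device) exchange described above.
# All names are illustrative; this is not the real firmware interface.

class CommandDevice:
    """A special LU used as a mailbox between the host I/O control program
    and the storage controller."""
    def __init__(self):
        self.command = None
        self.status = None

class Controller:
    def poll(self, cmd_dev: CommandDevice):
        # Execute whatever the host wrote into the CMD ...
        if cmd_dev.command is not None:
            op, args = cmd_dev.command
            # ... and write the execution result back as status information.
            cmd_dev.status = {"op": op, "result": "good", "args": args}
            cmd_dev.command = None

class HostIOControlProgram:
    def issue(self, cmd_dev: CommandDevice, controller: Controller, op, **args):
        cmd_dev.command = (op, args)      # host writes the command into the CMD
        controller.poll(cmd_dev)          # controller processes it
        return cmd_dev.status             # host reads out and confirms the status

host = HostIOControlProgram()
print(host.issue(CommandDevice(), Controller(), "paircreate", source="LU1", destination="LU3"))
```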
  • The second storage device 200 is connected to the external initiator port (external port) 111B of the first storage device 100 via the communications network CN3.
  • The second storage device 200 comprises a plurality of PDEVs 220, VDEVs 230 that are set in storage regions provided by the PDEVs 220, and one or more LDEVs 240 that can be set in the VDEVs 230. Each LDEV 240 is respectively associated with an LU 250.
  • Furthermore, in the present embodiment, the LUs 250 (i. e., the LDEVs 240) of the second storage device 200 are mapped into a V-VOL 163 which is a virtual intermediate storage device so that these LUs 250 can also be used from the first storage device 100.
  • For example, in FIG. 4, the “LDEV 1” and “LDEV 2” of the second storage device 200 are respectively mapped into the “V-VOL 1” and “V-VOL 2” of the first storage device 100 via the “LU 1” and “LU 2” of the second storage device 200. Furthermore, the “V-VOL 1” and “V-VOL 2” are respectively mapped into the “LDEV 3” and “LDEV 4”, and can be utilized via the “LU 3” and “LU 4”.
  • Furthermore, the VDEVs 162 and V-VOLs 163 can use a RAID construction. Specifically, one disk drive 161 can be divided into a plurality of VDEVs 162 and V-VOLs 163 (slicing), or one VDEV 162 or V-VOL 163 can be formed from a plurality of disk drives 161 (striping).
  • Furthermore, the “LDEV 1” or “LDEV 2” of the first storage device 100 corresponds to the internal volume 190 in FIG. 3. The “LDEV 3” or “LDEV 4” of the first storage device 100 corresponds to the virtual internal volume 191. The “LDEV 1” or “LDEV 2” of the second storage device 200 corresponds to the external volume 260 in FIG. 3.
  • Reference is now made to FIG. 5. FIG. 5 shows one example of the mapping table T1 that is used to map the external volume 260 into the virtual internal volume 191.
  • For example, the mapping table T1 can be constructed by respectively establishing a correspondence between the VDEV numbers used to discriminate the VDEVs 162 and V-VOLs 163 and information relating to the external disk drives 220.
  • For example, the external device information can be constructed so that this information includes device discriminating information, storage capacities of the disk drives 220, information indicating the type of device (tape type devices, disk type devices or the like) and path information indicating the paths to the disk drives 220. This path information can be constructed so as to include discriminating information (WWN) specific to the respective communications ports 211, and LUN numbers used to discriminate the LUs 250.
  • Furthermore, the values of the device discriminating information, WWN and the like shown in FIG. 5 are values used for convenience of description, and do not have any particular meaning. Moreover, three items of path information are associated with the VDEV having the VDEV number of “3” shown on the lower side in FIG. 5. Specifically, the external disk drive 220 that is mapped into this VDEV (#3) has an alternate path structure which has three paths inside, and this alternate path structure is deliberately mapped into the VDEV (#3). The same storage region can be accessed via any of these three paths; accordingly, even in cases where one or two of the paths are obstructed, the desired data can be accessed via the remaining normal path or paths.
  • By using a mapping table T1 such as that shown in FIG. 5, it is possible to map one or a plurality of external disk drives 220 into the V-VOL 163 inside the first storage device 100.
  • Furthermore, as is also true of the other tables shown below, the volume numbers and the like shown in the table are examples used to illustrate the table construction; these values do not particularly correspond to the other constructions shown in FIG. 4 or the like.
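  • The sketch below shows one way a mapping table such as T1 could be held in memory, with one entry per VDEV number carrying the external device information and its alternate paths. The field names (wwn, lun, capacity_gb and so on) and the path-selection helper are assumptions made only for illustration.

```python
# Illustrative in-memory form of a mapping table such as T1.
# Field names and values are assumptions for the sketch, not the patent's layout.

from dataclasses import dataclass, field

@dataclass
class ExternalPath:
    wwn: str            # discriminating information of the external communications port
    lun: int            # LUN used to reach the external LU over that port
    usable: bool = True

@dataclass
class ExternalDeviceInfo:
    device_id: str
    capacity_gb: int
    device_type: str    # e.g. "disk" or "tape"
    paths: list = field(default_factory=list)

# VDEV number -> information on the external disk drive mapped into it
mapping_table_t1 = {
    3: ExternalDeviceInfo(
        device_id="EX-0003", capacity_gb=100, device_type="disk",
        paths=[ExternalPath("28:E3", 0), ExternalPath("28:E4", 0), ExternalPath("28:E5", 0)],
    ),
}

def pick_path(vdev_no: int) -> ExternalPath:
    """Return any usable path; with an alternate path structure the same storage
    region remains reachable even if one or two paths are obstructed."""
    for path in mapping_table_t1[vdev_no].paths:
        if path.usable:
            return path
    raise IOError("no usable path to the external volume")

print(pick_path(3).wwn)
```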
  • The conditions of data conversion using these various types of tables will be described with reference to FIG. 6. As is shown in the upper part of FIG. 6, the host 10 transmits data to a specified communications port 111 with the LUN number (LUN #) and logical block address (LBA) being designated.
  • The first storage device 100 converts the data that is input for LDEV use (LUN #+LBA) into data for VDEV use on the basis of the first conversion table T2 shown in FIG. 6(a). The first conversion table T2 is an LUN-LDEV-VDEV conversion table that is used to convert data that designates LUNs in the first storage device 100 into VDEV data.
  • For example, this first conversion table T2 is constructed by associating LUN numbers (LUN #), LDEV numbers (LDEV #) and maximum slot numbers that correspond to these LUNs, VDEV (including V-VOL) numbers (VDEV #) and maximum slot numbers that correspond to these LDEVs and the like. As a result of reference being made to this first conversion table T2, the data from the host 10 (LUN #+LBA) is converted into VDEV data (VDEV #+SLOT #+SUBLOCK #).
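  • The arithmetic side of this conversion can be sketched as below. The slot and sub-block geometry and the table contents are assumptions chosen only to make the example concrete; the patent does not fix these values here.

```python
# Sketch of the LUN-LDEV-VDEV conversion performed with table T2.
# Geometry and table entries are assumed values for illustration only.

BLOCKS_PER_SUBBLOCK = 1      # assumed: one logical block per sub-block
SUBBLOCKS_PER_SLOT = 96      # assumed slot geometry

# LUN# -> (LDEV#, VDEV#)  (simplified view of table T2)
t2 = {0: (0, 1), 1: (1, 1), 2: (2, 2)}

def host_to_vdev(lun: int, lba: int):
    """Convert host-visible (LUN#, LBA) into (VDEV#, SLOT#, SUBBLOCK#)."""
    ldev_no, vdev_no = t2[lun]
    subblock = lba // BLOCKS_PER_SUBBLOCK
    slot_no, subblock_no = divmod(subblock, SUBBLOCKS_PER_SLOT)
    return vdev_no, slot_no, subblock_no

print(host_to_vdev(lun=1, lba=12345))   # -> (1, 128, 57)
```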
  • Next, the first storage device 100 refers to the second conversion table T3 shown in FIG. 6(b), and converts the VDEV data into data that is used for transmission and storage for the LUNs of the second storage device 200.
  • In the second conversion table T3, for example, VDEV numbers (VDEV #), the numbers of initiator ports used to transmit data from the VDEVs to the second storage device 200, WWN used to specify the communications ports that are the data transfer destinations and LUNs that can be accessed via these communications ports are associated.
  • On the basis of this second conversion table T3, the first storage device 100 converts the address information of the data that is to be stored into the format of initiator port number#+WWN+LUN#+LBA. The data whose address information has thus been altered reaches the designated communications port 211 from the designated initiator port via the communications network CN3. Then, the data is stored in a specified place in the LDEV.
  • FIG. 6(c) shows another second conversion table T3 a. This conversion table T3 a is used in cases where a stripe or RAID is applied to VDEVs (i. e., V-VOLs) originating in an external disk drive 220. The conversion table T3 a is constructed by associating VDEV numbers (VDEV #), stripe sizes, RAID levels, numbers used to discriminate the second storage device 200 (SS # (storage system numbers)), initiator port numbers and WWN and LUN numbers of the communications ports 211.
  • In the example shown in FIG. 6(c), one VDEV (V-VOL) constructs a RAID 1 utilizing a total of four external storage control devices specified by SS # (1, 4, 6, 7). Furthermore, the three LUNs (#0, #0 and #4) assigned to SS # 1 are set in the same device (LDEV #). Moreover, the volumes of LUN # 0 comprise an alternate path structure which has two access data paths. Thus, logical volumes (LDEVs) belonging respectively to a plurality of external storage devices can be respectively mapped in a single V-VOL inside the first storage device 100, and can be utilized as a virtual internal volume 191. As a result, in the present embodiment, by constructing a VDEV (V-VOL) from a plurality of logical volumes (LDEV) present on the outside, it is possible to add functions such as striping, RAID or the like, and to provide these functions to the host 10.
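  • The sketch below illustrates how a table in the spirit of T3/T3a could route a VDEV slot to one of several external LUs when striping is applied across external storage devices. The stripe layout, entry values and the routing helper are assumptions made for the example only.

```python
# Sketch of the VDEV-to-external-LU conversion of tables T3/T3a.
# Entry values and the striping rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class T3aEntry:
    stripe_size_slots: int   # stripe unit expressed in slots (assumed unit)
    targets: list            # each: storage system #, initiator port, WWN, LUN

t3a = {
    1: T3aEntry(
        stripe_size_slots=4,
        targets=[
            {"ss": 1, "initiator_port": 1, "wwn": "28:E3", "lun": 0},
            {"ss": 4, "initiator_port": 1, "wwn": "3C:01", "lun": 2},
            {"ss": 6, "initiator_port": 2, "wwn": "3C:0A", "lun": 0},
            {"ss": 7, "initiator_port": 2, "wwn": "3C:0B", "lun": 5},
        ],
    )
}

def route(vdev_no: int, slot_no: int) -> dict:
    """Pick the external LU that holds a given slot of a striped V-VOL."""
    entry = t3a[vdev_no]
    stripe_index = slot_no // entry.stripe_size_slots
    target = entry.targets[stripe_index % len(entry.targets)]
    # The address information is reformatted as initiator port# + WWN + LUN# (+ LBA)
    return {**target, "slot": slot_no}

print(route(vdev_no=1, slot_no=10))
```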
  • FIG. 7 respectively shows a differential bit map T4 and saving destination address control table T5 that are used to control the differential data 192. Furthermore, in the second storage device 200 as well, differential data 261 is controlled by the same method as in FIG. 7.
  • For example, the differential bit map T4 can be constructed by associating updating flag information indicating the status as to whether or not updating has been performed with each logical track of the disk drives 161 constituting the internal volume 190. One logical track corresponds to three cache segments, and has a size of 48 kB or 64 kB.
  • For example, the saving destination address control table T5 can be constructed by associating, with each logical track, a saving destination address which indicates where the data stored on this track is saved. Furthermore, in the tables T4 and T5, the control units are not limited to track units. For example, other control units such as slot units, LBA units or the like can also be used.
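  • A minimal sketch of these two per-track structures follows. The track count, address values and helper names are assumptions used only to show how an update flag and a saving destination address could be kept together.

```python
# Sketch of the differential bit map T4 and saving destination address table T5,
# kept per logical track.  Sizes and layout are illustrative assumptions.

TRACKS = 1024

differential_bitmap_t4 = [False] * TRACKS   # True = track updated since the split
saving_dest_table_t5 = {}                   # track -> address where its data was saved

def record_update(track, saved_to=None):
    """Mark a track as updated; optionally remember where its data was saved."""
    differential_bitmap_t4[track] = True
    if saved_to is not None:
        saving_dest_table_t5[track] = saved_to

def tracks_to_copy():
    """Tracks whose data must be transferred during differential copying."""
    return [t for t, updated in enumerate(differential_bitmap_t4) if updated]

record_update(7)
record_update(42, saved_to=0x5000)
print(tracks_to_copy())    # -> [7, 42]
```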
  • FIG. 8 is an explanatory diagram which shows one example of the copying pair control table T6. For example, the copying pair control table T6 can be constructed by associating information that specifies the copying source LU, information that specifies the copying destination LU and the current pair status. Examples of copying pair status include “pair form (paircreate)”, “pair split (pairsplit)”, “resynchronize (resync)” and the like.
  • Here, the “pair form” status is a state in which initial copying (full copying) from the copying source volume to the copying destination volume has been performed, so that a copying pair is formed. The “pair split” status is a state in which the copying source volume and copying destination volume are separated after the copying pair has been forcibly synchronized. The “resynchronize” status is a state in which the storage contents of the copying source volume and copying destination volume are resynchronized and a copying pair is formed after the two volumes have been separated.
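  • The copying pair control table can be sketched as below, using the status values named above. The data layout and the split helper are assumptions; they only illustrate how the status of a copying pair might be recorded and altered.

```python
# Sketch of a copying pair control table such as T6.  Layout is assumed.

from enum import Enum

class PairStatus(Enum):
    PAIRCREATE = "pair form"      # initial full copying done, pair formed
    PAIRSPLIT = "pair split"      # volumes separated after being synchronized
    RESYNC = "resynchronize"      # contents re-matched after a split

copy_pair_table_t6 = [
    {"source_lu": "LU1", "destination_lu": "LU3", "status": PairStatus.PAIRCREATE},
]

def split(pair):
    # The splitting instruction of FIG. 15 (S62) alters the table in this fashion.
    pair["status"] = PairStatus.PAIRSPLIT

split(copy_pair_table_t6[0])
print(copy_pair_table_t6[0]["status"].value)   # -> "pair split"
```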
  • FIG. 9 is an explanatory diagram showing one example of the access attribute control table T7. The term “access attribute” refers to information that controls the possibility of access to the volumes or the like. For example, the access attribute control table T7 can be constructed by associating access attributes with each LU number (LUN).
  • Examples of access attributes include “read/write possible”, “write prohibited (read only)”, “read/write impossible”, “empty capacity 0”, “copying destination setting impossible” and “hidden”.
  • Here, “read/write possible” indicates a state in which reading and writing from and into the volume in question are possible. “Write prohibited” indicates a state in which writing into the volume in question is prohibited, so that only read-out is permitted. “Read/write impossible” indicates a state in which writing and reading into and from the volume are prohibited. “Empty capacity 0” indicates a state in which a response of remaining capacity 0 (full) is given in reply to inquiries regarding the remaining capacity of the volume even in cases where there is actually some remaining capacity. “Copying destination setting impossible” indicates a state in which the volume in question cannot be set as the copying destination volume (secondary volume). “Hidden” indicates a state in which the volume in question cannot be recognized from the initiator. Furthermore, as was already mentioned above, the LUNs in the table are numbers used for purposes of description; these numbers in themselves have no particular significance.
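  • As an illustration of how such attributes could gate I/O, a short sketch follows. The attribute strings follow the list above; the checking logic and the free-capacity report are assumptions made for the example.

```python
# Sketch of how access attributes such as those in table T7 could gate I/O.
# The enforcement logic below is an assumption, not the patent's implementation.

ATTRIBUTES = {
    "LU1": "read/write possible",
    "LU2": "write prohibited",
    "LU3": "read/write impossible",
    "LU4": "empty capacity 0",
}

def check_io(lun: str, op: str) -> bool:
    attr = ATTRIBUTES.get(lun, "read/write possible")
    if attr == "read/write impossible":
        return False
    if attr == "write prohibited" and op == "write":
        return False
    return True

def report_free_capacity(lun: str, real_free: int) -> int:
    # "empty capacity 0" answers 'full' even if capacity actually remains
    return 0 if ATTRIBUTES.get(lun) == "empty capacity 0" else real_free

print(check_io("LU2", "write"), report_free_capacity("LU4", 500))   # -> False 0
```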
  • Next, the operation of the present embodiment will be described. First, FIG. 10 is a flow chart illustrating the mapping method that is used in order to utilize the external volume 260 of the second storage device 200 as a virtual internal volume 191 of the first storage device 100. This processing is performed between the first storage device 100 and second storage device 200 when the mapping of the volumes is performed.
  • The first storage device 100 logs into the second storage device 200 via the initiator port of the CHA 110 (S1). Logging in is completed by the second storage device 200 sending back a response to the logging in of the first storage device 100 (S2). Next, for example, the first storage device 100 transmits an inquiry command determined by the SCSI (small computer system interface) to the second storage device 200, and requests a response regarding details of the disk drives 220 belonging to the second storage device 200 (S3).
  • The inquiry command is used to clarify the type and construction of the inquiry destination device; this makes it possible to pass through the hierarchy of the inquiry destination device and grasp the physical structure of this inquiry destination device. By using such an inquiry command, for example, the first storage device 100 can acquire information such as the device name, device type, manufacturing serial number (product ID), LDEV number, various types of version information, vendor ID and the like from the second storage device 200 (S4). The second storage device 200 responds by transmitting the information for which an inquiry was made to the first storage device 100 (S5).
  • The first storage device 100 registers the information acquired from the second storage device 200 in the mapping table T1 (S6). The first storage device 100 reads out the storage capacity of the disk drive 220 from the second storage device 200 (S7). In response to an inquiry from the first storage device 100, the second storage device 200 sends back the storage capacity of the disk drive 220 (S8), and returns a response (S9). The first storage device 100 registers the storage capacity of the disk drive 220 in a specified place in the mapping table T1 (S10).
  • The mapping table T1 can be constructed by performing the above processing. In cases where the input and output of data are performed with the external disk drive 220 (external LUN, i.e., external volume 260) mapped into the V-VOL of the first storage device 100, address conversion and the like are performed with reference to the other conversion tables T2 and T3 described with reference to FIG. 6.
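  • The discovery and registration sequence of FIG. 10 (S1 through S10) can be summarized as in the sketch below. The ExternalStorage interface is hypothetical and stands in for the second storage device; it is not a real SCSI library.

```python
# Sketch of the mapping procedure of FIG. 10: log in, issue an inquiry,
# read the capacity, and register the result in table T1.  Interfaces are assumed.

class ExternalStorage:                      # stands in for the second storage device
    def login(self):
        return "ok"                         # S2: response to the log-in
    def inquiry(self):
        # S5: device details returned in response to the inquiry command
        return {"vendor": "X", "product_id": "Y", "device_type": "disk", "ldev": 1}
    def read_capacity_gb(self):
        return 100                          # S8/S9: storage capacity of drive 220

def map_external_volume(external: ExternalStorage, vdev_no: int, table: dict):
    assert external.login() == "ok"         # S1: log in via the initiator port
    info = external.inquiry()               # S3/S4: acquire device details
    table[vdev_no] = dict(info)             # S6: register in mapping table T1
    table[vdev_no]["capacity_gb"] = external.read_capacity_gb()   # S7-S10
    return table[vdev_no]

t1 = {}
print(map_external_volume(ExternalStorage(), vdev_no=3, table=t1))
```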
  • Next, the input and output of data between the first storage device 100 and second storage device 200 will be described. FIG. 11 is a model diagram which shows the processing that is performed when data is written.
  • The host 10 can write data into a logical volume (LDEV) that has access authorization. For example, by using procedures such as zoning that sets a virtual SAN subnet in the SAN or LUN masking in which the host 10 holds a list of accessible LUNs, it is possible to set the host 10 so that the host 10 can access only specified LDEVs.
  • In cases where the LDEV into which the host 10 is to write data is connected via a VDEV to a disk drive 161 which is an internal storage device, data is written by ordinary processing. Specifically, the data from the host 10 is temporarily stored in the cache memory 130, and is then stored in a specified address of a specified disk drive 161 from the cache memory 130 via the DKA 120. In this case, the DKA 120 converts the logical address into a physical address. Furthermore, in the case of a RAID construction, the same data is stored in a plurality of disk drives 161 or the like.
  • On the other hand, in cases where the LDEV into which the host 10 is to write data is connected to an external disk drive 220 via a V-VOL, the flow is as shown in FIG. 11. FIG. 11(a) is a flow chart centering on the storage hierarchy, and FIG. 11(b) is a flow chart centering on the manner of use of the cache memory 130.
  • The host 10 indicates an LDEV number that specifies the LDEV that is the object of writing and a WWN that specifies the communications port that is used to access this LDEV, and issues a write command (write) (S21). When the first storage device 100 receives a write command from the host 10, the first storage device 100 produces a write command for transmission to the second storage device 200, and transmits this command to the second storage device 200 (S22). The first storage device 100 alters the address information and the like contained in the write command received from the host 10 so as to match the external volume 260, thus producing a new write command.
  • The host 10 transmits the write data to the first storage device 100 (S23). The write data received by the first storage device 100 is transferred to the second storage device 200 (S26) from the LDEV via the V-VOL (S24). Here, at the point in time at which the data from the host 10 is stored in the cache memory 130, the first storage device 100 sends back a response (good) indicating the completion of writing to the host 10.
  • At the point in time at which the write data is received from the first storage device 100 (or the point in time at which writing into the disk drive 220 is completed), the second storage device 200 transmits a writing completion report to the first storage device 100 (S26). Specifically, the time at which the completion of writing is reported to the host 10 by the first storage device 100 (S25) and the time at which the data is actually stored in the disk drive 220 are different (asynchronous system). Accordingly, the host 10 is released from data write processing before the write data is actually stored in the disk drive 220, so that the host 10 can perform other processing.
  • Reference will now be made to FIG. 11(b). Numerous sub-blocks are set in the cache memory 130. The first storage device 100 converts the logical block addresses designated by the host 10 into sub-block addresses, and stores data in specified locations in the cache memory 130 (S24). In other words, the V-VOLs and VDEVs are logical entities constructed in the storage space of the cache memory 130.
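  • The asynchronous character of this write path can be sketched as follows: the host is answered as soon as the data reaches the cache, and the transfer to the external volume happens later. The queue, worker and function names are assumptions made only for the illustration.

```python
# Sketch of the asynchronous write flow of FIG. 11 (names are illustrative).

import queue, threading

cache = {}
destage_queue = queue.Queue()

def host_write(ldev: int, lba: int, data: bytes) -> str:
    cache[(ldev, lba)] = data               # data lands in cache memory 130 (S24)
    destage_queue.put((ldev, lba))          # remember that it still has to go out
    return "good"                           # completion reported to the host (S25)

def destage_worker(send_to_external):
    while True:
        ldev, lba = destage_queue.get()
        send_to_external(ldev, lba, cache[(ldev, lba)])   # reaches drive 220 later (S26)
        destage_queue.task_done()

sent = []
threading.Thread(target=destage_worker,
                 args=(lambda l, a, d: sent.append((l, a)),), daemon=True).start()
print(host_write(3, 100, b"abc"))           # -> "good", before the data is on disk
destage_queue.join()
print(sent)                                 # -> [(3, 100)]
```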
  • The flow in cases where data is read out from the external volume 260 of the second storage device 200 will be described with reference to FIG. 12.
  • First, the host 10 designates a communications port 111 and transmits a data read-out command to the first storage device 100 (S31). When the first storage device 100 receives a read command, the first storage device 100 produces a read command in order to read out the requested data from the second storage device 200.
  • The first storage device 100 transmits the produced read command to the second storage device 200 (S32). In accordance with the read command received from the first storage device 100, the second storage device 200 reads out the requested data from the disk drive 220, transmits this read-out data to the first storage device 100 (S33), and reports that read-out was normally completed (S35). As is shown in FIG. 12(b), the first storage device 100 stores the data received from the second storage device 200 in a specified location in the cache memory 130 (S34).
  • The first storage device 100 reads out the data stored in the cache memory 130, performs address conversion, transmits the data to the host 10 via the LUN 103 or the like (S36), and issues a read-out completion report (S37). In the series of processing performed in these data read-outs, the conversion operation described with reference to FIG. 6 is performed in reverse.
  • In FIG. 12, the operation is shown as if data is read out from the second storage device 200 and stored in the cache memory 130 in accordance with the request from the host 10. However, the operation is not limited to this; it would also be possible to store all or part of the data stored in the external volume 260 in the cache memory 130 beforehand. In this case, in response to a command from the host 10, data can be immediately read out from the cache memory 130 and transmitted to the host 10.
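  • The read flow, including the cache-hit variant just described, can be sketched as below. The helper names are assumptions; read_from_external stands in for steps S32 through S35.

```python
# Sketch of the read flow of FIG. 12, with a simple read cache (illustrative names).

cache = {}

def read_from_external(lba: int) -> bytes:
    # stands in for S32-S35: read command issued to the second storage device
    return b"data-from-external-volume"

def host_read(lba: int) -> bytes:
    if lba in cache:                        # pre-staged data: answer immediately
        return cache[lba]
    data = read_from_external(lba)          # otherwise pull it from the external volume 260
    cache[lba] = data                       # store in cache memory 130 (S34)
    return data                             # then transmit to the host (S36/S37)

print(host_read(10))    # first access goes out to the external volume
print(host_read(10))    # second access is served from the cache
```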
  • Next, the method used to synchronize the storage contents between the internal volume 190 and virtual internal volume 191 (whose substance is the external volume 260) will be described. FIGS. 13 and 14 show the full copying mode in which all of the storage contents of the copying source volume are copied into the copying destination volume, and FIGS. 15 and 16 show the differential copying mode in which only the differential data generated in the copying source volume following the completion of full copying is copied into the copying destination volume. In the case of both copying modes, data is transferred directly between the first storage device 100 and second storage device 200; the host 10 does not participate.
  • The managing device 20 instructs the first storage device 100 to execute the first full copying mode (S41). The CHA 110 that receives this instruction refers to the mapping table T1 stored in the shared memory 140 (S42), and acquires path information for the external volume 260 which is the copying destination volume. The CHA 110 issues a read command to the second storage device 200 (S43), and requests the read-out of the data that is stored in the external volume 260.
  • In response to the read command from the first storage device 100, the second storage device 200 reads out data from the external volume 260 (S44), and transmits this read-out data to the first storage device 100 (S45).
  • When the CHA 110 receives the data from the second storage device 200, the CHA 110 stores this received data in the cache memory 130 (S46). Furthermore, for example, the CHA 110 requests the execution of destage processing from the DKA 120 by writing a write command into shared memory 140 (S47).
  • The DKA 120 occasionally refers to the shared memory 140, and when the DKA 120 discovers an unprocessed write command, the DKA 120 reads out the data stored in the cache memory 130, performs processing such as address conversion and the like, and writes this data into a specified disk drive 161 (S48).
  • Thus, all of the storage contents of the external volume 260 which is the copying source volume can be copied into the internal volume 190 which is the copying destination volume, so that the storage contents of both volumes are caused to coincide.
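  • A condensed sketch of this first full copying mode is given below: every block of the external (copying source) volume is pulled through the cache and destaged to the internal (copying destination) volume. The volume and cache objects are assumptions used for illustration.

```python
# Sketch of the first full copying mode (FIG. 13).  Data structures are assumed.

def full_copy_external_to_internal(external: dict, internal: dict, cache: dict):
    for block, data in external.items():    # S43-S45: read out via the acquired path
        cache[block] = data                  # S46: store in cache memory 130
    for block, data in cache.items():        # S47/S48: DKA 120 destages to drives 161
        internal[block] = data
    return internal

external_volume_260 = {0: b"a", 1: b"b", 2: b"c"}
internal_volume_190 = {}
print(full_copy_external_to_internal(external_volume_260, internal_volume_190, {}))
```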
  • FIG. 14 shows the processing of the second full copying mode. The managing device 20 instructs the first storage device 100 to execute the second full copying mode (S51). The CHA 110 that receives this instruction refers to the mapping table T1 stored in the shared memory 140 (S52), and acquires path information for the external volume 260 which is the copying destination volume. Furthermore, the CHA 110 requests that the DKA 120 perform staging (processing that transfers the data to a cache) of the data stored in the internal volume 190 (S53).
  • In response to this staging request, the DKA 120 reads out the data of the internal volume 190 from the disk drive 161, and stores this data in the cache memory 130 (S54). Furthermore, the DKA 120 requests that the CHA 110 issue a write command (S55).
  • On the basis of the path information acquired in S52, the CHA 110 issues a write command to the second storage device 200 (S56). Next, the CHA 110 transmits write data to the second storage device 200 (S57).
  • The second storage device 200 receives the write data from the first storage device 100 (S58), and stores this data in a specified disk drive 220 (S59). Thus, the storage contents of the internal volume 190 which is the copying source volume can be copied into the external volume 260 which is the copying destination volume, so that the storage contents of both volumes can be caused to coincide.
  • FIG. 15 shows the processing of the first differential copying mode. First, prior to the initiation of differential copying, the managing device 20 requests the first storage device 100 to split the copying pair (S61). The CHA 110 that receives the splitting instruction updates the copying pair control table T6 stored in the shared memory 140, and alters the status of the copying pair to a split state (S62). As a result, the pair state of the internal volume 190 and virtual internal volume 191 (external volume 260) is dissolved.
  • The host 10A executes updating I/O for the internal volume 190 (S63). The CHA 110 stores the write data received from the host 10A in the cache memory 130 (S64), and sends a response to the host 10A indicating that processing of the write command has been completed (S65).
  • Furthermore, the CHA 110 respectively updates the differential bit map T4 and the differential data 192 (S66), and requests that the DKA 120 execute destage processing (S67). The DKA 120 stores the write data generated by the updating I/O in the disk drive 161 (S68).
  • Prior to the initiation of differential copying, the updating I/O from the host 10A is stopped (S69). For example, this stopping of the I/O can be accomplished manually by the user. Furthermore, the managing device 20 alters the access attribute of the internal volume 190 from “read/write possible” to “write prohibited” (S70). Although the issuing of updating I/O by the host 10A is already stopped, further variation of the storage contents of the internal volume 190 can be prevented in advance by altering the access attribute to “write prohibited”.
  • Then, the managing device 20 instructs the first storage device 100 to execute first differential copying (S71). The CHA 110 that receives this instruction refers to the mapping table T1 (S72), and acquires path information for the external volume 260. Furthermore, the CHA 110 refers to the differential bit map T4 (S73), and requests that the DKA 120 perform destaging of the differential data 192 (S74).
  • The DKA 120 reads out the differential data 192 produced for the internal volume 190 from the disk drive 161, and stores this data in the cache memory 130 (S75). Then, the DKA 120 requests that the CHA 110 issue a write command (S76).
  • The CHA 110 issues a write command to the second storage device 200 (S77), and transmits write data (the differential data 192) to the second storage device 200 (S78). The second storage device 200 stores the received write data in the external volume 260. As a result, the storage contents of the external volume 260 and internal volume 190 coincide. Then, the managing device 20 alters the access attribute of the internal volume 190 from “write prohibited” to “read/write possible” (S79).
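  • The first differential copying mode can be condensed into the sketch below: after the pair is split, only the tracks flagged in bit map T4 are transferred, and the write-prohibit attribute is set and released around the copy as in steps S70 and S79. The data layout is an assumption made for the example.

```python
# Sketch of the first differential copying mode (FIG. 15).  Structures are assumed.

def first_differential_copy(internal: dict, external: dict, t4: dict, attributes: dict):
    attributes["internal"] = "write prohibited"       # S70: fix the source contents
    for track, updated in t4.items():
        if updated:                                   # S73: consult differential bit map T4
            external[track] = internal[track]         # S75-S78: transfer the differential data
            t4[track] = False
    attributes["internal"] = "read/write possible"    # S79: release the volume again
    return external

internal_190 = {0: b"a", 1: b"b2", 2: b"c"}
external_260 = {0: b"a", 1: b"b", 2: b"c"}
t4 = {0: False, 1: True, 2: False}
print(first_differential_copy(internal_190, external_260, t4, {}))   # track 1 re-copied
```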
  • FIG. 16 shows the processing of the second differential copying mode. Prior to the initiation of differential copying, the managing device 20 first instructs the first storage device 100 to split the copying pair (S81). The CHA 110 that receives this instruction updates the copying pair control table T6, and dissolves the pair state (S82).
  • Then, when the host 10B accesses the external volume 260 and issues updating I/O (S83), the second storage device 200 writes the write data into the disk drive 220 (S84), and respectively updates the differential data 261 and differential bit map T4 (2) (S85).
  • When differential copying is initiated, the managing device 20 alters the access attribute of the external volume 260 from “read/write possible” to “write prohibited” (S86), thus prohibiting updating of the external volume 260; then, the managing device 20 instructs the first storage device 100 to initiate second differential copying (S87).
  • The CHA 110 that receives the instruction to initiate differential copying requests that the second storage device 200 transfer the differential bit map T4 (2) (S88). Since the contents of the differential data 261 generated in the external volume 260 are controlled by the second storage device 200, the first storage device 100 acquires the differential bit map T4 (2) from the second storage device 200 (S89).
  • Furthermore, in this embodiment, a construction is used in which commands and data are directly exchanged between the first storage device 100 and second storage device 200. However, the present invention is not limited to this; for example, it would also be possible to exchange data such as the differential bit map and the like between the respective storage devices 100 and 200 via the managing device 20.
  • The CHA 110 refers to the mapping table T1 (S90), and acquires path information indicating the path to the external volume 260. Then, the CHA 110 requests the transfer of the differential data 261 by issuing a read command to the second storage device 200 (S91).
  • In response to the read command from the first storage device 100, the second storage device 200 transmits the differential data 261 to the first storage device 100 (S92). Then, the CHA 110 that receives this differential data 261 stores the differential data 261 in the cache memory 130 (S93). The CHA 110 requests that the DKA 120 perform destage processing of the differential data 261 (S94). Then, the DKA 120 reads out the differential data 261 stored in the cache memory 130, and writes this data into the disk drive 161 constituting the internal volume 190 (S95). As a result, the storage contents of the external volume 260 and internal volume 190 coincide.
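  • In outline, the second differential copying mode proceeds as in the sketch below: the first storage device first obtains bit map T4 (2) from the second storage device and then reads back only the flagged tracks. The SecondStorageDevice interface is an assumption made for the illustration.

```python
# Sketch of the second differential copying mode (FIG. 16).  Interfaces are assumed.

class SecondStorageDevice:
    def __init__(self, volume: dict, t4_2: dict):
        self.volume, self.t4_2 = volume, t4_2
    def transfer_bitmap(self) -> dict:          # S88/S89: hand over bit map T4 (2)
        return dict(self.t4_2)
    def read_track(self, track: int) -> bytes:  # S91/S92: return the requested track
        return self.volume[track]

def second_differential_copy(second: SecondStorageDevice, internal: dict) -> dict:
    bitmap = second.transfer_bitmap()
    for track, updated in bitmap.items():
        if updated:
            data = second.read_track(track)      # differential data 261 arrives (S92)
            internal[track] = data               # cached (S93) then destaged to 161 (S94/S95)
    return internal

second = SecondStorageDevice({0: b"a", 1: b"b2"}, {0: False, 1: True})
print(second_differential_copy(second, {0: b"a", 1: b"b"}))
```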
  • In the present embodiment, as was described in detail above, the external volume 260 can be handled as though this volume were a logical volume inside the first storage device 100 by mapping the external disk drive 220 into the V-VOL. Accordingly, even in cases where the second storage device 200 is an old type device that cannot be directly connected to the host 10, the memory resources of the old type device can be reutilized as memory resources of the first storage device 100, and can be provided to the host 10, by interposing a new type first storage device 100. As a result, the old type storage device 200 can be connected to a new type storage device 100, and the memory resources can be effectively utilized.
  • Furthermore, in cases where the first storage device 100 is a high-performance, highly functional new type device, the low performance of the second storage device can be hidden by the high-performance computer resources (cache capacity, CPU processing speed and the like) of the first storage device 100, so that high-performance services can be provided to the host 10 using a virtual internal volume that utilizes the disk drive 220. Furthermore, functions such as (for example) striping, expansion, splitting, RAID and the like can be added to an external volume 260 constructed in the disk drive 220, and can be used. Accordingly, compared to cases in which an external volume is directly mapped into an LUN, the degree of freedom of utilization is increased so that convenience of use is improved.
  • In the present embodiment, in addition to these effects, the storage contents can be synchronized between the internal volume 190 and virtual internal volume 191 (external volume 260). Accordingly, a backup of the internal volume 190 can be formed in the virtual internal volume 191, or conversely, a backup of the virtual internal volume 191 can be formed in the internal volume 190, so that the convenience is even further improved.
  • Furthermore, in the present embodiment, since both a full copying mode and a differential copying mode can be performed, efficient copying can be performed in accordance with the conditions.
  • Furthermore, in the present embodiment, a construction is used in which the storage contents of the copying source volume are fixed by altering the access attribute to “write prohibited”. Accordingly, volume copying can be executed without particularly altering the processing contents in the host 10.
  • 2. SECOND EMBODIMENT
  • A second embodiment of the present invention will be described with reference to FIG. 17. The following embodiments including this embodiment correspond to modifications of the abovementioned first embodiment. In the present embodiment, copying is performed among a plurality of virtual internal volumes inside the first storage device 100. Furthermore, in the present embodiment, the first storage device 100 does not comprise any internal volumes. FIG. 17 is an explanatory diagram showing the storage structure of a storage system constituting a second embodiment of the present invention.
  • In the present embodiment, the storage system comprises a third storage device 300 in addition to the second storage device 200. Like the second storage device 200, this third storage device 300 is a device that is externally connected to the first storage device 100. Like the second storage device 200, the third storage device 300 comprises (for example) PDEVs 320, VDEVs 330, LDEVs 340, LUs 350, targets 311 and the like. In regard to the construction of the third storage device 300, the construction of the second storage device 200 can be employed; since this construction is not the gist of the present invention, details will be omitted. However, the second storage device 200 and third storage device 300 need not have the same structure.
  • The first storage device 100 does not comprise PDEVs 161, which are physical storage devices, and does not comprise real volumes (internal volumes). The first storage device 100 comprises only “LDEV 1” and “LDEV 2”, which are virtual internal volumes. Accordingly, the first storage device 100 need not be a disk array device; for example, this first storage device 100 may be an intelligent type switch comprising a computer system.
  • The first virtual internal volume “LDEV 1” 164 is connected to “LDEV 1” 240, which is a real volume of the second storage device 200, via “V-VOL 1” 163. The second virtual internal volume “LDEV 2” 164 is connected to “LDEV 1” 340, which is a real volume of the third storage device 300, via “V-VOL 2” 163.
  • Furthermore, in the present embodiment, the system is devised so that full copying and differential copying are performed between the first virtual internal volume “LDEV 1” and the second virtual internal volume “LDEV 2” inside the first storage device 100.
  • 3. THIRD EMBODIMENT
  • A third embodiment will be described with reference to FIG. 18. FIG. 18 is an explanatory diagram which shows one example of the control screen used by the storage system. This embodiment can be used in any of the respective embodiments described above.
  • For example, in cases where copying pairs are set in the storage system, the user logs into the managing device 20, and calls up a control screen such as that shown in FIG. 18. When the construction of a copying pair or the like is set on this control screen, the managing device 20 sends instructions for alteration of the construction to one or both of the storage devices 100 and 200. Receiving these instructions, the respective storage devices 100 and 200 alter their internal construction.
  • A plurality of different types of control menus M1 through M3 can be set on the control screen. For example, these control menus M1 through M3 can be constructed as tab type switching menus. For instance, the menu M1 is a menu that is used to perform various types of LU operations such as production of volumes or the like. The menu M2 is a menu that is used to perform communications port operations. The menu M3 is a menu that is used to perform volume copying operations between the storage devices described in the abovementioned embodiments.
  • For example, the menu M3 can be constructed so that this menu includes a plurality of screen regions G1 through G5. The screen region G1 is used to select the storage device (subsystem) that performs the setting of copying pairs. The conditions of the set copying pairs are displayed in the screen region G2. For instance, the copying source volume (P-VOL), copying destination volume (S-VOL), emulation type, capacity, copying status, progression, copying speed and the like can be displayed in the screen region G2.
  • For instance, using a pointing device such as a mouse or the like, the user can select two copying pairs displayed in the screen region G2; furthermore, the user can display the submenu M4 by right clicking [with the mouse]. The user can designate the synchronization of volumes or dissolution of pairs by means of the submenu M4.
  • In the screen region G3, either internal volumes inside the first storage device 100 or external volumes inside the second storage device 200 can be exclusively selected as the volumes of the copying pair. In the figures, a case is shown in which an internal volume is selected as the copying source volume. An internal volume or external volume can be designated as either the copying source volume or copying destination volume.
  • Preset values can be displayed in the screen region G4. Operation states can be displayed in the screen region G5. When the setting of the copying pair has been completed, the user can cause alterations in the construction to be reflected by operating an application button B1. In cases where the content of this setting is to be canceled, the user operates a cancel button B2. The abovementioned screen construction is an example; the present invention is not limited to this construction.
  • Furthermore, the present invention is not limited to the respective embodiments described above. A person skilled in the art can make various additions, alterations and the like within the scope of the present invention.

Claims (1)

1. A virtualization system, comprising:
at least one first port coupled to at least one host system;
at least one second port coupled to a plurality of storage systems, each of the storage systems comprising at least one logical volume related to at least a portion of a plurality of disk drives; and
at least one controller forming a first virtual volume and a second virtual volume;
wherein the virtualization system is capable to control to perform processes of
splitting a relationship between the first virtual volume and the second virtual volume;
receiving a first write request, the first write request being sent from the host system for writing data to the first virtual volume;
storing first differential information identifying data of the first write request;
transferring the data of the first write request to a first logical volume of a first storage system of the storage systems, the first logical volume being related to the first virtual volume, so that the first storage system can write the data of the first write request to a storage area of the disk drives related to the first logical volume;
storing second differential information identifying data of a second write request, the data of the second request being written after the splitting step, the second differential information related to the second virtual volume, the second virtual volume being related to a second logical volume of a second storage system of the storage systems;
receiving a differential copying request;
if the differential copying request indicates to copy differential data from one of the first and second virtual volume to the other of the first and second virtual volume, (1) controlling to copy the data to the one of the first and second virtual volume based on the differential information, and (2) transferring the data to the other of the first and second logical volume of the storage systems, so that the storage system can write the data of the write request to a storage area of the disk drives.
US11/697,777 2004-10-27 2007-04-09 Storage system and storage control device Abandoned US20070177413A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/697,777 US20070177413A1 (en) 2004-10-27 2007-04-09 Storage system and storage control device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2004-312358 2004-10-27
JP2004312358A JP2006127028A (en) 2004-10-27 2004-10-27 Memory system and storage controller
US11/016,806 US20060090048A1 (en) 2004-10-27 2004-12-21 Storage system and storage control device
US11/697,777 US20070177413A1 (en) 2004-10-27 2007-04-09 Storage system and storage control device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/016,806 Continuation US20060090048A1 (en) 2004-10-27 2004-12-21 Storage system and storage control device

Publications (1)

Publication Number Publication Date
US20070177413A1 true US20070177413A1 (en) 2007-08-02

Family

ID=34941557

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/016,806 Abandoned US20060090048A1 (en) 2004-10-27 2004-12-21 Storage system and storage control device
US11/697,777 Abandoned US20070177413A1 (en) 2004-10-27 2007-04-09 Storage system and storage control device
US11/773,081 Expired - Fee Related US7673107B2 (en) 2004-10-27 2007-07-03 Storage system and storage control device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/016,806 Abandoned US20060090048A1 (en) 2004-10-27 2004-12-21 Storage system and storage control device

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/773,081 Expired - Fee Related US7673107B2 (en) 2004-10-27 2007-07-03 Storage system and storage control device

Country Status (3)

Country Link
US (3) US20060090048A1 (en)
EP (2) EP1777616A3 (en)
JP (1) JP2006127028A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8612703B2 (en) 2007-08-22 2013-12-17 Hitachi, Ltd. Storage system performing virtual volume backup and method thereof

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006127028A (en) * 2004-10-27 2006-05-18 Hitachi Ltd Memory system and storage controller
US7130960B1 (en) * 2005-04-21 2006-10-31 Hitachi, Ltd. System and method for managing disk space in a thin-provisioned storage subsystem
JP4741304B2 (en) * 2005-07-11 2011-08-03 株式会社日立製作所 Data migration method or data migration system
US7165158B1 (en) * 2005-08-17 2007-01-16 Hitachi, Ltd. System and method for migrating a replication system
JP5222469B2 (en) 2006-10-11 2013-06-26 株式会社日立製作所 Storage system and data management method
JP2008108145A (en) * 2006-10-26 2008-05-08 Hitachi Ltd Computer system, and management method of data using the same
US8949383B1 (en) * 2006-11-21 2015-02-03 Cisco Technology, Inc. Volume hierarchy download in a storage area network
JP5031341B2 (en) * 2006-11-30 2012-09-19 株式会社日立製作所 Storage system and data management method
US8443134B2 (en) 2006-12-06 2013-05-14 Fusion-Io, Inc. Apparatus, system, and method for graceful cache device degradation
US8719501B2 (en) 2009-09-08 2014-05-06 Fusion-Io Apparatus, system, and method for caching data on a solid-state storage device
US9104599B2 (en) 2007-12-06 2015-08-11 Intelligent Intellectual Property Holdings 2 Llc Apparatus, system, and method for destaging cached data
US8489817B2 (en) 2007-12-06 2013-07-16 Fusion-Io, Inc. Apparatus, system, and method for caching data
US8706968B2 (en) 2007-12-06 2014-04-22 Fusion-Io, Inc. Apparatus, system, and method for redundant write caching
US8495292B2 (en) 2006-12-06 2013-07-23 Fusion-Io, Inc. Apparatus, system, and method for an in-server storage area network
JP4961997B2 (en) * 2006-12-22 2012-06-27 富士通株式会社 Storage device, storage device control method, and storage device control program
JP2009088739A (en) * 2007-09-28 2009-04-23 Hitachi Ltd Data transfer unit
US9519540B2 (en) 2007-12-06 2016-12-13 Sandisk Technologies Llc Apparatus, system, and method for destaging cached data
US7836226B2 (en) 2007-12-06 2010-11-16 Fusion-Io, Inc. Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment
JP2009163542A (en) * 2008-01-08 2009-07-23 Hitachi Ltd Control device for controlling setting for logic volume
JP2009205333A (en) * 2008-02-27 2009-09-10 Hitachi Ltd Computer system, storage device, and data management method
JP5317495B2 (en) 2008-02-27 2013-10-16 株式会社日立製作所 Storage system, copy method, and primary storage apparatus
JP4576449B2 (en) * 2008-08-29 2010-11-10 富士通株式会社 Switch device and copy control method
JP5138530B2 (en) * 2008-10-08 2013-02-06 株式会社日立製作所 Fault management method in storage capacity virtualization technology
US8396835B2 (en) 2009-05-25 2013-03-12 Hitachi, Ltd. Computer system and its data control method
JP2011008548A (en) 2009-06-25 2011-01-13 Fujitsu Ltd Data repeater system and storage system
WO2011024221A1 (en) * 2009-08-26 2011-03-03 Hitachi,Ltd. Remote copy system
WO2012106362A2 (en) 2011-01-31 2012-08-09 Fusion-Io, Inc. Apparatus, system, and method for managing eviction of data
US8874823B2 (en) 2011-02-15 2014-10-28 Intellectual Property Holdings 2 Llc Systems and methods for managing data input/output operations
US9003104B2 (en) 2011-02-15 2015-04-07 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for a file-level cache
US9201677B2 (en) 2011-05-23 2015-12-01 Intelligent Intellectual Property Holdings 2 Llc Managing data input/output operations
WO2012116369A2 (en) 2011-02-25 2012-08-30 Fusion-Io, Inc. Apparatus, system, and method for managing contents of a cache
US8301812B1 (en) * 2011-03-24 2012-10-30 Emc Corporation Techniques for performing host path detection verification
US8751762B2 (en) * 2011-03-30 2014-06-10 International Business Machines Corporation Prevention of overlay of production data by point in time copy operations in a host based asynchronous mirroring environment
JP5439435B2 (en) * 2011-06-14 2014-03-12 株式会社日立製作所 Computer system and disk sharing method in the computer system
US8965856B2 (en) * 2011-08-29 2015-02-24 Hitachi, Ltd. Increase in deduplication efficiency for hierarchical storage system
US9251052B2 (en) 2012-01-12 2016-02-02 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for profiling a non-volatile cache having a logical-to-physical translation layer
US10102117B2 (en) 2012-01-12 2018-10-16 Sandisk Technologies Llc Systems and methods for cache and storage device coordination
US9767032B2 (en) 2012-01-12 2017-09-19 Sandisk Technologies Llc Systems and methods for cache endurance
US9251086B2 (en) 2012-01-24 2016-02-02 SanDisk Technologies, Inc. Apparatus, system, and method for managing a cache
US9116812B2 (en) 2012-01-27 2015-08-25 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for a de-duplication cache
US10359972B2 (en) 2012-08-31 2019-07-23 Sandisk Technologies Llc Systems, methods, and interfaces for adaptive persistence
US10019353B2 (en) 2012-03-02 2018-07-10 Longitude Enterprise Flash S.A.R.L. Systems and methods for referencing data on a storage medium
JP5942511B2 (en) * 2012-03-19 2016-06-29 富士通株式会社 Backup device, backup method, and backup program
US9612966B2 (en) 2012-07-03 2017-04-04 Sandisk Technologies Llc Systems, methods and apparatus for a virtual machine cache
US10339056B2 (en) 2012-07-03 2019-07-02 Sandisk Technologies Llc Systems, methods and apparatus for cache transfers
US9842053B2 (en) 2013-03-15 2017-12-12 Sandisk Technologies Llc Systems and methods for persistent cache logging
JP5903524B2 (en) * 2013-03-15 2016-04-13 株式会社日立製作所 Computer switching method, computer system, and management computer
GB2516435A (en) 2013-04-05 2015-01-28 Continental Automotive Systems Embedded memory management scheme for real-time applications
KR20200034360A (en) * 2018-09-21 2020-03-31 에스케이하이닉스 주식회사 A data processing system comprising a plurality of memory systems coupled to each other by an internal channel

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6457109B1 (en) * 2000-08-18 2002-09-24 Storage Technology Corporation Method and apparatus for copying data from one storage system to another storage system
US20030208511A1 (en) * 2002-05-02 2003-11-06 Earl Leroy D. Database replication system
US20040103261A1 (en) * 2002-11-25 2004-05-27 Hitachi, Ltd. Virtualization controller and data transfer control method

Family Cites Families (206)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3771137A (en) 1971-09-10 1973-11-06 Ibm Memory control in a multipurpose system utilizing a broadcast
US4025904A (en) * 1973-10-19 1977-05-24 Texas Instruments Incorporated Programmed allocation of computer memory workspace
JPH0658646B2 (en) 1982-12-30 1994-08-03 インタ−ナショナル・ビジネス・マシ−ンズ・コ−ポレ−ション Virtual memory address translation mechanism with controlled data persistence
US4710868A (en) 1984-06-29 1987-12-01 International Business Machines Corporation Interconnect scheme for shared memory local networks
US5170480A (en) 1989-09-25 1992-12-08 International Business Machines Corporation Concurrently applying redo records to backup database in a log sequence using single queue server per queue at a time
US5307481A (en) * 1990-02-28 1994-04-26 Hitachi, Ltd. Highly reliable online system
US5155845A (en) 1990-06-15 1992-10-13 Storage Technology Corporation Data storage system for providing redundant copies of data on different disk drives
US5544347A (en) * 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US5459857A (en) 1992-05-15 1995-10-17 Storage Technology Corporation Fault tolerant disk array data storage subsystem
US5555371A (en) 1992-12-17 1996-09-10 International Business Machines Corporation Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage
US5408465A (en) * 1993-06-21 1995-04-18 Hewlett-Packard Company Flexible scheme for admission control of multimedia streams on integrated networks
KR0128271B1 (en) * 1994-02-22 1998-04-15 윌리암 티. 엘리스 Remote data duplexing
US5574950A (en) 1994-03-01 1996-11-12 International Business Machines Corporation Remote data shadowing using a multimode interface to dynamically reconfigure control link-level and communication link-level
US5504882A (en) * 1994-06-20 1996-04-02 International Business Machines Corporation Fault tolerant data storage subsystem employing hierarchically arranged controllers
US5835953A (en) 1994-10-13 1998-11-10 Vinca Corporation Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US5548712A (en) 1995-01-19 1996-08-20 Hewlett-Packard Company Data storage system and method for managing asynchronous attachment and detachment of storage disks
US5799323A (en) 1995-01-24 1998-08-25 Tandem Computers, Inc. Remote duplicate databased facility with triple contingency protection
US5680580A (en) 1995-02-28 1997-10-21 International Business Machines Corporation Remote copy system for setting request interconnect bit in each adapter within storage controller and initiating request connect frame in response to the setting bit
US5917723A (en) * 1995-05-22 1999-06-29 Lsi Logic Corporation Method and apparatus for transferring data between two devices with reduced microprocessor overhead
US5799141A (en) 1995-06-09 1998-08-25 Qualix Group, Inc. Real-time data protection system and method
US5720029A (en) * 1995-07-25 1998-02-17 International Business Machines Corporation Asynchronously shadowing record updates in a remote copy session using track arrays
US5680640A (en) 1995-09-01 1997-10-21 Emc Corporation System for migrating data by selecting a first or second transfer means based on the status of a data element map initialized to a predetermined state
US5819020A (en) 1995-10-16 1998-10-06 Network Specialists, Inc. Real time backup system
US5758118A (en) * 1995-12-08 1998-05-26 International Business Machines Corporation Methods and data storage devices for RAID expansion by on-line addition of new DASDs
US5809285A (en) 1995-12-21 1998-09-15 Compaq Computer Corporation Computer system having a virtual drive array controller
JP3287203B2 (en) 1996-01-10 2002-06-04 株式会社日立製作所 External storage controller and data transfer method between external storage controllers
US5870537A (en) * 1996-03-13 1999-02-09 International Business Machines Corporation Concurrent switch to shadowed device for storage controller and device errors
JP3641872B2 (en) 1996-04-08 2005-04-27 株式会社日立製作所 Storage system
GB2312319B (en) 1996-04-15 1998-12-09 Discreet Logic Inc Video storage
US5901327A (en) * 1996-05-28 1999-05-04 Emc Corporation Bundling of write data from channel commands in a command chain for transmission over a data link between data storage systems for remote data mirroring
US6477627B1 (en) 1996-05-31 2002-11-05 Emc Corporation Method and apparatus for mirroring data in a remote data storage system
US6101497A (en) 1996-05-31 2000-08-08 Emc Corporation Method and apparatus for independent and simultaneous access to a common data set
US6092066A (en) 1996-05-31 2000-07-18 Emc Corporation Method and apparatus for independent operation of a remote data facility
US5933653A (en) 1996-05-31 1999-08-03 Emc Corporation Method and apparatus for mirroring data in a remote data storage system
US5995980A (en) 1996-07-23 1999-11-30 Olson; Jack E. System and method for database update replication
US5835954A (en) 1996-09-12 1998-11-10 International Business Machines Corporation Target DASD controlled data migration move
JP3193880B2 (en) * 1996-12-11 2001-07-30 株式会社日立製作所 Data migration method
JP3410010B2 (en) 1997-12-24 2003-05-26 株式会社日立製作所 Subsystem migration method and information processing system
US6583797B1 (en) * 1997-01-21 2003-06-24 International Business Machines Corporation Menu management mechanism that displays menu items based on multiple heuristic factors
US5895485A (en) * 1997-02-24 1999-04-20 Eccs, Inc. Method and device using a redundant cache for preventing the loss of dirty data
US5895495A (en) * 1997-03-13 1999-04-20 International Business Machines Corporation Demand-based larx-reserve protocol for SMP system buses
US6073209A (en) 1997-03-31 2000-06-06 Ark Research Corporation Data storage controller providing multiple hosts with access to multiple storage subsystems
JP3671595B2 (en) * 1997-04-01 2005-07-13 株式会社日立製作所 Compound computer system and compound I / O system
JP3228182B2 (en) * 1997-05-29 2001-11-12 株式会社日立製作所 Storage system and method for accessing storage system
US6012123A (en) * 1997-06-10 2000-01-04 Adaptec Inc External I/O controller system for an independent access parity disk array
JP3414218B2 (en) 1997-09-12 2003-06-09 株式会社日立製作所 Storage controller
US6052758A (en) * 1997-12-22 2000-04-18 International Business Machines Corporation Interface error detection and isolation in a direct access storage device DASD system
US6247103B1 (en) * 1998-01-06 2001-06-12 International Business Machines Corporation Host storage management control of outboard data movement using push-pull operations
US6230246B1 (en) * 1998-01-30 2001-05-08 Compaq Computer Corporation Non-intrusive crash consistent copying in distributed storage systems without client cooperation
US6173374B1 (en) * 1998-02-11 2001-01-09 Lsi Logic Corporation System and method for peer-to-peer accelerated I/O shipping between host bus adapters in clustered computer network
JPH11242566A (en) * 1998-02-26 1999-09-07 Hitachi Ltd Multiplex data storage system
US6324654B1 (en) 1998-03-30 2001-11-27 Legato Systems, Inc. Computer network remote data mirroring system
US6157991A (en) 1998-04-01 2000-12-05 Emc Corporation Method and apparatus for asynchronously updating a mirror of a source device
US6070224A (en) * 1998-04-02 2000-05-30 Emc Corporation Virtual tape system
US6178427B1 (en) * 1998-05-07 2001-01-23 Platinum Technology Ip, Inc. Method of mirroring log datasets using both log file data and live log data including gaps between the two data logs
US6260120B1 (en) 1998-06-29 2001-07-10 Emc Corporation Storage mapping and partitioning among multiple host processors in the presence of login state changes and host controller replacement
US6421711B1 (en) * 1998-06-29 2002-07-16 Emc Corporation Virtual ports for data transferring of a data storage system
US6148383A (en) 1998-07-09 2000-11-14 International Business Machines Corporation Storage system employing universal timer for peer-to-peer asynchronous maintenance of consistent mirrored storage
US6237008B1 (en) * 1998-07-20 2001-05-22 International Business Machines Corporation System and method for enabling pair-pair remote copy storage volumes to mirror data in another storage volume
US6253295B1 (en) * 1998-07-20 2001-06-26 International Business Machines Corporation System and method for enabling pair-pair remote copy storage volumes to mirror data in another pair of storage volumes
US6195730B1 (en) * 1998-07-24 2001-02-27 Storage Technology Corporation Computer system with storage device mapping input/output processor
DE69938378T2 (en) * 1998-08-20 2009-04-30 Hitachi, Ltd. Copy data to storage systems
SE515084C2 (en) 1998-08-26 2001-06-05 Ericsson Telefon Ab L M Procedure and device in an IP network
JP4689137B2 (en) * 2001-08-08 2011-05-25 株式会社日立製作所 Remote copy control method and storage system
JP2000099277A (en) * 1998-09-18 2000-04-07 Fujitsu Ltd Method for remote transfer between file units
JP4412685B2 (en) 1998-09-28 2010-02-10 株式会社日立製作所 Storage controller and method of handling data storage system using the same
US6718457B2 (en) * 1998-12-03 2004-04-06 Sun Microsystems, Inc. Multiple-thread processor for threaded software applications
US6542961B1 (en) 1998-12-22 2003-04-01 Hitachi, Ltd. Disk storage system including a switch
US6457139B1 (en) 1998-12-30 2002-09-24 Emc Corporation Method and apparatus for providing a host computer with information relating to the mapping of logical volumes within an intelligent storage system
US7107395B1 (en) * 1998-12-31 2006-09-12 Emc Corporation Apparatus and methods for operating a computer storage system
US6209002B1 (en) * 1999-02-17 2001-03-27 Emc Corporation Method and apparatus for cascading data through redundant data storage units
US6397307B2 (en) * 1999-02-23 2002-05-28 Legato Systems, Inc. Method and system for mirroring and archiving mass storage
US6453354B1 (en) 1999-03-03 2002-09-17 Emc Corporation File server system using connection-oriented protocol and sharing data sets among data movers
US6370605B1 (en) 1999-03-04 2002-04-09 Sun Microsystems, Inc. Switch based scalable performance storage architecture
JP3780732B2 (en) 1999-03-10 2006-05-31 株式会社日立製作所 Distributed control system
US6640278B1 (en) 1999-03-25 2003-10-28 Dell Products L.P. Method for configuration and management of storage resources in a storage network
US6553408B1 (en) * 1999-03-25 2003-04-22 Dell Products L.P. Virtual device architecture having memory for storing lists of driver modules
US6446141B1 (en) 1999-03-25 2002-09-03 Dell Products, L.P. Storage server system including ranking of data source
US6654830B1 (en) 1999-03-25 2003-11-25 Dell Products L.P. Method and system for managing data migration for a storage system
JP2000276304A (en) 1999-03-26 2000-10-06 Nec Corp Data shifting method and information processing system
JP3764016B2 (en) * 1999-05-10 2006-04-05 財団法人流通システム開発センター Integrated IP transfer network
US6247099B1 (en) * 1999-06-03 2001-06-12 International Business Machines Corporation System and method for maintaining cache coherency and data synchronization in a computer system having multiple active controllers
US6219753B1 (en) * 1999-06-04 2001-04-17 International Business Machines Corporation Fiber channel topological structure and method including structure and method for raid devices and controllers
US6697387B1 (en) * 1999-06-07 2004-02-24 Micron Technology, Inc. Apparatus for multiplexing signals through I/O pins
US6662197B1 (en) 1999-06-25 2003-12-09 Emc Corporation Method and apparatus for monitoring update activity in a data storage facility
JP3853540B2 (en) 1999-06-30 2006-12-06 日本電気株式会社 Fiber channel-connected magnetic disk device and fiber channel-connected magnetic disk controller
US6446175B1 (en) 1999-07-28 2002-09-03 Storage Technology Corporation Storing and retrieving data on tape backup system located at remote storage system site
US6804676B1 (en) 1999-08-31 2004-10-12 International Business Machines Corporation System and method in a data processing system for generating compressed affinity records from data records
US6463501B1 (en) 1999-10-21 2002-10-08 International Business Machines Corporation Method, system and program for maintaining data consistency among updates across groups of storage areas using update times
US6338126B1 (en) * 1999-12-06 2002-01-08 Legato Systems, Inc. Crash recovery without complete remirror
US6625623B1 (en) 1999-12-16 2003-09-23 Livevault Corporation Systems and methods for backing up data files
US6526418B1 (en) * 1999-12-16 2003-02-25 Livevault Corporation Systems and methods for backing up data files
US6460055B1 (en) 1999-12-16 2002-10-01 Livevault Corporation Systems and methods for backing up data files
US6484173B1 (en) 2000-02-07 2002-11-19 Emc Corporation Controlling access to a storage device
US20020103889A1 (en) 2000-02-11 2002-08-01 Thomas Markson Virtual storage layer approach for dynamically associating computer storage with processing hosts
JP3866894B2 (en) * 2000-02-29 2007-01-10 東芝プラントシステム株式会社 Method for pyrolyzing plastics and pyrolysis products obtained by this method
US20020065864A1 (en) * 2000-03-03 2002-05-30 Hartsell Neal D. Systems and method for resource tracking in information management environments
JP3918394B2 (en) 2000-03-03 2007-05-23 株式会社日立製作所 Data migration method
US6487645B1 (en) 2000-03-06 2002-11-26 International Business Machines Corporation Data storage subsystem with fairness-driven update blocking
US6654831B1 (en) 2000-03-07 2003-11-25 International Business Machines Corporation Using multiple controllers together to create data spans
US6446176B1 (en) 2000-03-09 2002-09-03 Storage Technology Corporation Method and system for transferring data between primary storage and secondary storage using a bridge volume and an internal snapshot copy of the data being transferred
US6490659B1 (en) 2000-03-31 2002-12-03 International Business Machines Corporation Warm start cache recovery in a dual active controller with cache coherency using stripe locks for implied storage volume reservations
JP2001356945A (en) * 2000-04-12 2001-12-26 Anetsukusu Syst Kk Data backup recovery system
US6647387B1 (en) 2000-04-27 2003-11-11 International Business Machines Corporation System, apparatus, and method for enhancing storage management in a storage area network
US6622152B1 (en) 2000-05-09 2003-09-16 International Business Machines Corporation Remote log based replication solution
US6596113B2 (en) * 2000-05-16 2003-07-22 Kimberly-Clark Worldwide, Inc. Presentation and bonding of garment side panels
JP4175764B2 (en) 2000-05-18 2008-11-05 株式会社日立製作所 Computer system
JP2001337790A (en) 2000-05-24 2001-12-07 Hitachi Ltd Storage unit and its hierarchical management control method
DE60039033D1 (en) 2000-05-25 2008-07-10 Hitachi Ltd Storage system for confirming data synchronization during asynchronous remote copying
US6718404B2 (en) * 2000-06-02 2004-04-06 Hewlett-Packard Development Company, L.P. Data migration using parallel, distributed table driven I/O mapping
US6745207B2 (en) * 2000-06-02 2004-06-01 Hewlett-Packard Development Company, L.P. System and method for managing virtual storage
JP4776804B2 (en) 2000-06-12 2011-09-21 キヤノン株式会社 Network device, control method therefor, and computer program
US6697367B1 (en) 2000-06-12 2004-02-24 Emc Corporation Multihop system calls
US6804755B2 (en) 2000-06-19 2004-10-12 Storage Technology Corporation Apparatus and method for performing an instant copy of data based on a dynamically changeable virtual mapping scheme
US6912537B2 (en) 2000-06-20 2005-06-28 Storage Technology Corporation Dynamically changeable virtual mapping scheme
JP2002014777A (en) * 2000-06-29 2002-01-18 Hitachi Ltd Data moving method and protocol converting device, and switching device using the same
US6675258B1 (en) * 2000-06-30 2004-01-06 Lsi Logic Corporation Methods and apparatus for seamless firmware update and propagation in a dual raid controller system
US6766430B2 (en) * 2000-07-06 2004-07-20 Hitachi, Ltd. Data reallocation among storage systems
JP3998405B2 (en) * 2000-07-28 2007-10-24 富士通株式会社 Access control method and storage device using the same
EP1332444A4 (en) * 2000-10-09 2005-10-12 Maximum Availability Ltd Method and apparatus for data processing
US6675268B1 (en) * 2000-12-11 2004-01-06 Lsi Logic Corporation Method and apparatus for handling transfers of data volumes between controllers in a storage environment having multiple paths to the data volumes
US6976174B2 (en) 2001-01-04 2005-12-13 Troika Networks, Inc. Secure multiprotocol interface
WO2002065275A1 (en) 2001-01-11 2002-08-22 Yottayotta, Inc. Storage virtualization system and methods
US6681339B2 (en) * 2001-01-16 2004-01-20 International Business Machines Corporation System and method for efficient failover/failback techniques for fault-tolerant data storage system
US6990547B2 (en) 2001-01-29 2006-01-24 Adaptec, Inc. Replacing file system processors by hot swapping
US6560673B2 (en) * 2001-01-31 2003-05-06 Hewlett Packard Development Company, L.P. Fibre channel upgrade path
EP1370945B1 (en) 2001-02-13 2010-09-08 Candera, Inc. Failover processing in a storage system
US6606690B2 (en) 2001-02-20 2003-08-12 Hewlett-Packard Development Company, L.P. System and method for accessing a storage area network as network attached storage
JP4041656B2 (en) * 2001-03-02 2008-01-30 株式会社日立製作所 Storage system and data transmission/reception method in storage system
US6728736B2 (en) * 2001-03-14 2004-04-27 Storage Technology Corporation System and method for synchronizing a data copy using an accumulation remote copy trio
US6622220B2 (en) 2001-03-15 2003-09-16 Hewlett-Packard Development Company, L.P. Security-enhanced network attached storage device
US20020194428A1 (en) 2001-03-30 2002-12-19 Intransa, Inc., A Delaware Corporation Method and apparatus for distributing raid processing over a network link
US7340505B2 (en) 2001-04-02 2008-03-04 Akamai Technologies, Inc. Content storage and replication in a managed internet content storage environment
JP4009434B2 (en) 2001-04-18 2007-11-14 株式会社日立製作所 Magnetic disk unit coupling device
US6496908B1 (en) 2001-05-18 2002-12-17 Emc Corporation Remote mirroring
JP2003044230A (en) 2001-05-23 2003-02-14 Hitachi Ltd Storage system
US6772315B1 (en) 2001-05-24 2004-08-03 Rambus Inc Translation lookaside buffer extended to provide physical and main-memory addresses
US20020188592A1 (en) 2001-06-11 2002-12-12 Storage Technology Corporation Outboard data storage management system and method
US6876656B2 (en) * 2001-06-15 2005-04-05 Broadcom Corporation Switch assisted frame aliasing for storage virtualization
JP4032670B2 (en) 2001-06-21 2008-01-16 株式会社日立製作所 Storage device system for authenticating host computer
US6718447B2 (en) * 2001-06-28 2004-04-06 Hewlett-Packard Development Company, L.P. Method and system for providing logically consistent logical unit backup snapshots within one or more data storage devices
US6735637B2 (en) * 2001-06-28 2004-05-11 Hewlett-Packard Development Company, L.P. Method and system for providing advanced warning to a data stage device in order to decrease the time for a mirror split operation without starving host I/O request processing
US7024477B2 (en) * 2001-06-29 2006-04-04 International Business Machines Corporation Service time analysis methods for the WSM QOS monitor
US6647460B2 (en) * 2001-07-13 2003-11-11 Hitachi, Ltd. Storage device with I/O counter for partial data reallocation
US20030014523A1 (en) * 2001-07-13 2003-01-16 John Teloh Storage network data replicator
US6883122B2 (en) * 2001-07-31 2005-04-19 Hewlett-Packard Development Company, L.P. Write pass error detection
US6816945B2 (en) * 2001-08-03 2004-11-09 International Business Machines Corporation Quiesce system storage device and method in a dual active controller with cache coherency using stripe locks for implied storage volume reservations
US6640291B2 (en) 2001-08-10 2003-10-28 Hitachi, Ltd. Apparatus and method for online data migration with remote copy
US7421509B2 (en) * 2001-09-28 2008-09-02 Emc Corporation Enforcing quality of service in a storage network
US6976134B1 (en) 2001-09-28 2005-12-13 Emc Corporation Pooling and provisioning storage resources in a storage network
US7404000B2 (en) * 2001-09-28 2008-07-22 Emc Corporation Protocol translation in a storage system
US20030079018A1 (en) * 2001-09-28 2003-04-24 Lolayekar Santosh C. Load balancing in a storage network
US7185062B2 (en) * 2001-09-28 2007-02-27 Emc Corporation Switch-based storage services
US6910098B2 (en) 2001-10-16 2005-06-21 Emc Corporation Method and apparatus for maintaining data coherency
US7200144B2 (en) 2001-10-18 2007-04-03 Qlogic, Corp. Router and methods using network addresses for virtualization
JP2003140837A (en) * 2001-10-30 2003-05-16 Hitachi Ltd Disk array control device
NZ532773A (en) * 2001-11-01 2005-11-25 Verisign Inc Transactional memory manager
US7107320B2 (en) * 2001-11-02 2006-09-12 Dot Hill Systems Corp. Data mirroring between controllers in an active-active controller pair
US7055056B2 (en) * 2001-11-21 2006-05-30 Hewlett-Packard Development Company, L.P. System and method for ensuring the availability of a storage system
US20030105931A1 (en) 2001-11-30 2003-06-05 Weber Bret S. Architecture for transparent mirroring
US6973549B1 (en) 2001-12-10 2005-12-06 Incipient, Inc. Locking technique for control and synchronization
US6948039B2 (en) 2001-12-14 2005-09-20 Voom Technologies, Inc. Data backup and restoration using dynamic virtual storage
US7024427B2 (en) 2001-12-19 2006-04-04 Emc Corporation Virtual file system
US7139885B2 (en) 2001-12-27 2006-11-21 Hitachi, Ltd. Method and apparatus for managing storage based replication
US7007152B2 (en) 2001-12-28 2006-02-28 Storage Technology Corporation Volume translation apparatus and method
US6912669B2 (en) 2002-02-21 2005-06-28 International Business Machines Corporation Method and apparatus for maintaining cache coherency in a storage system
JP2003248605A (en) 2002-02-26 2003-09-05 Hitachi Ltd Storage system, main storing system, sub-storing system, and its data copying method
JP4219602B2 (en) 2002-03-01 2009-02-04 株式会社日立製作所 Storage control device and control method of storage control device
US20030172069A1 (en) * 2002-03-08 2003-09-11 Yasufumi Uchiyama Access management server, disk array system, and access management method thereof
US6922761B2 (en) 2002-03-25 2005-07-26 Emc Corporation Method and system for migrating data
US6941322B2 (en) 2002-04-25 2005-09-06 International Business Machines Corporation Method for efficient recording and management of data changes to an object
JP2003316522A (en) 2002-04-26 2003-11-07 Hitachi Ltd Computer system and method for controlling the same system
JP4704659B2 (en) * 2002-04-26 2011-06-15 株式会社日立製作所 Storage system control method and storage control device
US7185169B2 (en) 2002-04-26 2007-02-27 Voom Technologies, Inc. Virtual physical drives
US6968349B2 (en) 2002-05-16 2005-11-22 International Business Machines Corporation Apparatus and method for validating a database record before applying journal data
US20030220935A1 (en) 2002-05-21 2003-11-27 Vivian Stephen J. Method of logical database snapshot for log-based replication
JP2004013367A (en) 2002-06-05 2004-01-15 Hitachi Ltd Data storage subsystem
JP4100968B2 (en) 2002-06-06 2008-06-11 株式会社日立製作所 Data mapping management device
JP2004030254A (en) * 2002-06-26 2004-01-29 Hitachi Ltd Remote si (storage interface) control system
US20040003022A1 (en) * 2002-06-27 2004-01-01 International Business Machines Corporation Method and system for using modulo arithmetic to distribute processing over multiple processors
US7752361B2 (en) 2002-06-28 2010-07-06 Brocade Communications Systems, Inc. Apparatus and method for data migration in a storage processing device
US7353305B2 (en) * 2002-06-28 2008-04-01 Brocade Communications Systems, Inc. Apparatus and method for data virtualization in a storage processing device
US20040028043A1 (en) * 2002-07-31 2004-02-12 Brocade Communications Systems, Inc. Method and apparatus for virtualizing storage devices inside a storage area network fabric
US7076508B2 (en) 2002-08-12 2006-07-11 International Business Machines Corporation Method, system, and program for merging log entries from multiple recovery log files
JP2003157152A (en) * 2002-08-22 2003-05-30 Fujitsu Ltd File control unit and filing system
JP2004102374A (en) * 2002-09-05 2004-04-02 Hitachi Ltd Information processing system having data transition device
US7020758B2 (en) * 2002-09-18 2006-03-28 Ortera Inc. Context sensitive storage management
ITMI20020441U1 (en) * 2002-09-26 2004-03-27 Elesa Spa RETURN HANDLE WITH BUILT-IN CONTROL BUTTON ASSEMBLY AND QUICK DISASSEMBLY
US6857057B2 (en) * 2002-10-03 2005-02-15 Hewlett-Packard Development Company, L.P. Virtual storage systems and virtual storage system operational methods
JP4139675B2 (en) * 2002-11-14 2008-08-27 株式会社日立製作所 Virtual volume storage area allocation method, apparatus and program thereof
JP2004192105A (en) 2002-12-09 2004-07-08 Hitachi Ltd Connection device of storage device and computer system including it
JP4325843B2 (en) 2002-12-20 2009-09-02 株式会社日立製作所 Logical volume copy destination performance adjustment method and apparatus
JP2004220450A (en) 2003-01-16 2004-08-05 Hitachi Ltd Storage device, its introduction method and its introduction program
JP4322511B2 (en) 2003-01-27 2009-09-02 株式会社日立製作所 Information processing system control method and information processing system
JP4387116B2 (en) 2003-02-28 2009-12-16 株式会社日立製作所 Storage system control method and storage system
US6959369B1 (en) 2003-03-06 2005-10-25 International Business Machines Corporation Method, system, and program for data backup
JP4165747B2 (en) 2003-03-20 2008-10-15 株式会社日立製作所 Storage system, control device, and control device program
US7111146B1 (en) 2003-06-27 2006-09-19 Transmeta Corporation Method and system for providing hardware support for memory protection and virtual memory address translation for a virtual machine
JP4537022B2 (en) * 2003-07-09 2010-09-01 株式会社日立製作所 A data processing method, a storage area control method, and a data processing system that limit data arrangement.
JP2005062928A (en) * 2003-08-11 2005-03-10 Hitachi Ltd Remote copy system using two or more sites
US20050050115A1 (en) * 2003-08-29 2005-03-03 Kekre Anand A. Method and system of providing cascaded replication
US7484050B2 (en) * 2003-09-08 2009-01-27 Copan Systems Inc. High-density storage systems using hierarchical interconnect
US8788764B2 (en) * 2003-10-08 2014-07-22 Oracle International Corporation Access controller for storage devices
US20050138184A1 (en) 2003-12-19 2005-06-23 Sanrad Ltd. Efficient method for sharing data between independent clusters of virtualization switches
US7373472B2 (en) * 2004-08-31 2008-05-13 Emc Corporation Storage switch asynchronous replication
JP2006127028A (en) * 2004-10-27 2006-05-18 Hitachi Ltd Memory system and storage controller

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6457109B1 (en) * 2000-08-18 2002-09-24 Storage Technology Corporation Method and apparatus for copying data from one storage system to another storage system
US20030208511A1 (en) * 2002-05-02 2003-11-06 Earl Leroy D. Database replication system
US20040103261A1 (en) * 2002-11-25 2004-05-27 Hitachi, Ltd. Virtualization controller and data transfer control method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8612703B2 (en) 2007-08-22 2013-12-17 Hitachi, Ltd. Storage system performing virtual volume backup and method thereof

Also Published As

Publication number Publication date
US20080016303A1 (en) 2008-01-17
US20060090048A1 (en) 2006-04-27
EP1777616A3 (en) 2007-07-25
US7673107B2 (en) 2010-03-02
EP1657631A1 (en) 2006-05-17
JP2006127028A (en) 2006-05-18
EP1777616A2 (en) 2007-04-25

Similar Documents

Publication Publication Date Title
US7673107B2 (en) Storage system and storage control device
US8635424B2 (en) Storage system and control method for the same
US8484425B2 (en) Storage system and operation method of storage system including first and second virtualization devices
JP4648751B2 (en) Storage control system and storage control method
US8683157B2 (en) Storage system and virtualization method
JP4575059B2 (en) Storage device
US7434017B2 (en) Storage system with virtual allocation and virtual relocation of volumes
US7415593B2 (en) Storage control system
US9619171B2 (en) Storage system and virtualization method
US20060095709A1 (en) Storage system management method and device
US20060143332A1 (en) Storage system and method of storage system path control
US7526627B2 (en) Storage system and storage system construction control method
US7243196B2 (en) Disk array apparatus, and method for avoiding data corruption by simultaneous access by local and remote host device
JP5052257B2 (en) Storage system and storage control device
JP2008171032A (en) Storage device and storage control method using the same
JP4604068B2 (en) Storage device
JP2006072440A (en) Storage device and data transfer method
EP1632842B1 (en) Storage device system

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION