WO1999026143A1 - Computer system transparent data migration - Google Patents
- Publication number
- WO1999026143A1 (PCT/US1998/024187)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- volume
- data
- migration
- tdmf
- module
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0605—Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Definitions
- the present invention relates to the management and maintenance of large computer systems and particularly to automated methods and apparatus for the movement of data (migration) from one location in the system to another.
- a data migration facility provides the ability to "relocate" data from one device to another device.
- a logical relationship is defined between a device (the source) and another device (the target).
- the logical relationship between a source and target volume provides the framework for a migration.
- the data migration facility controls multiple concurrent migrations in a single group that is called a session.
- a migration is the process that causes the data on the source to be copied to the target.
- the facility is completely transparent to the end user and the application program. No application program changes are required and the facility is dynamically and nondisruptively activated. • The facility provides for full data access during the data migration. The data on the source volume is available to the end user for read and write access.
- the facility provides for a dynamic and nondisruptive takeover of the target volume when the source and target volumes are synchronized. • The migration facility must ensure complete data integrity.
- the migration facility should NOT be restricted to any control unit model type or device type. All devices in the data center should be able to participate in a migration and the facility should support a multiple vendor environment.
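The requirements above form the checklist against which the existing facilities are evaluated in the discussion that follows. A minimal sketch of that evaluation; the requirement names and the scoring of XRC are paraphrased from the surrounding text, and the function names are invented for illustration:

```python
# Hypothetical checklist of the transparent-migration requirements listed
# above. Each facility discussed below can be scored against it; the XRC
# scores here follow the text (everything but nondisruptive takeover).

REQUIREMENTS = (
    "transparent to applications",
    "full read/write access during migration",
    "nondisruptive takeover of the target",
    "complete data integrity",
    "multiple vendor environment",
)

def meets_all(facility):
    """A facility qualifies only if it satisfies every requirement."""
    return all(facility[r] for r in REQUIREMENTS)

xrc = {
    "transparent to applications": True,
    "full read/write access during migration": True,
    "nondisruptive takeover of the target": False,  # takeover is disruptive
    "complete data integrity": True,
    "multiple vendor environment": True,
}
assert not meets_all(xrc)
```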
- Migration facilities that exist today were primarily designed for disaster recovery or the facilities were meant to address single volume failures. However, these facilities can also be used as data migration tools. These facilities differ in implementation and use a combination of host software and/or control unit firmware and hardware in order to provide the foundation for the data migration capability.
- the IBM 3990 host extended feature IBM Dual Copy and the EMC Symmetrix mirroring feature are two examples of local mirrors.
- a source volume and a target volume are identified as a mirrored pair and, at the creation of the mirrored pair, data is transparently copied or migrated to the secondary volume. Continuous read and write access by applications is allowed during the data migration process and all data updates are reflected to the secondary volume.
- the mirror is under the complete control of the system operator. For example, through the use of system commands a mirrored pair can be created. At create time, the data will be copied to a secondary device. At the completion of this copy, the operator can then disconnect the pair and assign the secondary device to be the primary. This is called Transient Dual Copy and is an example of Dual Copy being used as a migration facility.
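The Transient Dual Copy sequence just described can be sketched as a toy model: create the pair, mirror updates, then split and promote the secondary. The class and method names are hypothetical and not part of any IBM interface:

```python
# Illustrative sketch of Transient Dual Copy used as a migration tool:
# create a mirrored pair (data is copied to the secondary), reflect
# updates to both devices, then disconnect the pair and assign the
# secondary to be the primary. All names are invented.

class MirroredPair:
    def __init__(self, primary_data):
        self.primary = list(primary_data)   # source volume contents
        self.secondary = None               # target volume, empty until create

    def create(self):
        """At create time the data is copied to the secondary device."""
        self.secondary = list(self.primary)

    def write(self, track, data):
        """While mirrored, every update is reflected to the secondary."""
        self.primary[track] = data
        if self.secondary is not None:
            self.secondary[track] = data

    def split_and_promote(self):
        """Disconnect the pair and make the secondary the new primary."""
        new_primary = self.secondary
        self.secondary = None
        return new_primary

pair = MirroredPair(["a", "b", "c"])
pair.create()
pair.write(1, "B")                  # update lands on both devices
migrated = pair.split_and_promote()
assert migrated == ["a", "B", "c"]  # secondary holds the migrated data
```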
- the function of the EMC mirroring feature is to maximize data availability.
- the EMC subsystem will disconnect the mirror in the event of a failure of one of the paired devices.
- the mirror will be automatically reconstructed when the failing device is repaired by EMC field engineers.
- the EMC subsystem does not provide an external control mechanism to enable the user to selectively initiate and disconnect mirrored pairs. Therefore, the EMC mirroring feature cannot be used as a migration facility.
- Standard mirroring has a major restriction that prevents its universal utilization as a transparent data migration tool.
- the source and target volumes must be attached to the same logical control unit and therefore data can only be relocated within a single control unit. Although limited, this capability is an important tool to the storage administrator.
- IBM 3990-6 and EMC Symmetrix features support remote mirroring.
- a remote mirror function exists when paired devices can exist on different control units and subsystems. The primary objective of this function is to provide a disaster recovery capability. However, a remote mirroring function can also be used as a data migrator.
- DF/SMS eXtended Remote Copy (XRC) is a host-assisted remote mirror method that uses components of DF/SMS and DFP.
- the major component is the System Data Mover (SDM). This component is also used for Concurrent Copy.
- SDM System Data Mover
- an IBM 3990-6 (or compatible) control unit is required as the primary or sending control unit.
- the secondary or receiving control unit can be an IBM 3990-3 or -6 or compatible control unit.
- XRC is transparent to the end user and the application program. No application program changes are required and the facility is dynamically activated. • The data on the source volume is available for read and write access during the XRC migration.
- XRC causes a disruption because XRC does NOT support a dynamic and nondisruptive takeover of the target volume when the source and target volumes are synchronized. The impact of this disruption can be expensive. All applications with data resident on the source volume must be disabled during the takeover process.
- • XRC ensures complete data integrity through the use of the journaling data sets and a common timer.
- • XRC is a relatively "open" facility and therefore supports a multiple vendor environment. Any vendor that supports the IBM 3990-6 XRC specification can participate as the sending or receiving control unit in an XRC session. Any vendor that supports the IBM 3990-3 or basic mode IBM 3990-6 specification can participate as the receiving control unit in an XRC session.
- the IBM 3990-6 also supports a feature that is called Peer-to-Peer Remote Copy (PPRC).
- PPRC Peer-to-Peer Remote Copy
- PPRC is host independent and therefore differs from XRC in several ways.
- PPRC operates as a synchronous process, which means that the MVS host is not informed of I/O completion of a write operation until both the primary and secondary IBM 3990 control units have acknowledged that the write has been processed.
- because this operation is a cache-to-cache transfer, there is a performance impact, which represents a major differentiator of PPRC over XRC.
- the service time to the user on write operations for PPRC is elongated by the time required to send and acknowledge the I/O to the secondary IBM 3990.
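The elongation of write service time under a synchronous remote copy can be illustrated with a back-of-the-envelope model; the timing values below are invented for illustration only, not measurements:

```python
# Sketch of why a synchronous remote copy elongates write service time:
# the host sees I/O complete only after the primary has processed the
# write, sent it across the link, and received the secondary's
# acknowledgment. All timings are hypothetical.

def sync_write_time(primary_ms, link_ms, secondary_ms):
    """Host-visible service time for one PPRC-style synchronous write."""
    # primary processing + outbound link + secondary processing + ack return
    return primary_ms + link_ms + secondary_ms + link_ms

local_only = 2.0                       # write acknowledged by primary alone
remote = sync_write_time(2.0, 1.5, 2.0)
assert remote == 7.0
assert remote > local_only             # the elongation the text describes
```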
- the PPRC link between the control units can use ESCON fiber but does require an IBM proprietary protocol for this cross-controller communication.
- this proprietary link restricts the use of PPRC to IBM 3990-6 controllers only and therefore does not support a multiple vendor environment.
- PPRC can also be used as a migration facility.
- PPRC requires a series of commands to be issued to initiate and control the migration and is therefore resource intensive.
- IBM has a marketing tool called the PPRC Migration Manager that is used to streamline a migration process with the use of ISPF panels and REXX execs.
- a migration using PPRC does not support an automatic takeover to the secondary device.
- IBM announced an enhancement to PPRC called P/DAS, PPRC Dynamic Address Switch, which, when available, apparently eliminates the requirement to bring down the applications in order to perform the takeover of the target device. Therefore, P/DAS may allow I/O to be dynamically redirected to the target volume when all source data has been copied to that device.
- Use of P/DAS is restricted to IBM 3990-6 controllers and is supported only in an MVS/ESA 5.1 and DFSMS/MVS 1.2 environment. Therefore the enhancement offered by P/DAS is achieved at the cost of prerequisite software. Furthermore, the dynamic switch capability is based on the PPRC platform and therefore supports only an IBM 3990-6 environment. Although PPRC does have some of the requirements for transparent migration, PPRC does not have all of them.
- PPRC is transparent to the end user and the application program. No application program changes are required and the facility is dynamically activated. • The data on the source volume is available for read and write access during a PPRC migration.
- • P/DAS apparently supports a dynamic and nondisruptive takeover of the target volume when the source and target volumes are synchronized.
- • PPRC ensures complete data integrity because a write operation will not be signaled complete until the primary and the secondary IBM 3990 control units have acknowledged the update request. This methodology will elongate the time required to perform update operations.
- • PPRC requires a proprietary link between two control units manufactured by the same vendor. For example, only IBM 3990-6 control units can participate in an IBM PPRC migration. Therefore PPRC does NOT support a multiple vendor environment.
- • PPRC is complex to use and therefore is operationally expensive and resource intensive.
- EMC Corporation's remote mirroring facility is called Symmetrix Remote Data Facility (SRDF). The SRDF link is proprietary and therefore can only be used to connect dual Symmetrix 5000 subsystems.
- SRDF Symmetrix Remote Data Facility
- SRDF has two modes of operation. The first is a PPRC-like synchronous mode of operation and the second is a "semi-synchronous" mode of operation.
- the semi-synchronous mode is meant to address the performance impact of the synchronous process.
- in the synchronous mode, the host is signaled that the operation is complete only when both the primary and the secondary controllers have acknowledged a successful I/O operation.
- in the semi-synchronous mode, the host is signaled that the operation is complete when the primary controller has successfully completed the I/O operation.
- the secondary controller will be sent the update asynchronously by the primary controller. No additional requests will be accepted by the primary controller for this volume until the secondary controller has acknowledged a successful I/O operation. Therefore, in the SRDF semi-synchronous mode, there may be at most one outstanding request for each volume pair in the subsystem.
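The semi-synchronous behavior, with at most one outstanding update per volume pair, can be sketched as follows. This is a simplified toy model of the protocol as the text describes it, not EMC's implementation:

```python
# Sketch of SRDF-style semi-synchronous mode: the host is signaled
# complete when the primary finishes, the update drains to the secondary
# asynchronously, and at most one update per volume pair may be
# outstanding at a time. All names are invented.

class SemiSyncVolumePair:
    def __init__(self):
        self.primary = {}
        self.secondary = {}
        self.in_flight = None      # at most one outstanding update

    def write(self, track, data):
        if self.in_flight is not None:
            return False           # rejected until the secondary acknowledges
        self.primary[track] = data # host sees I/O completion here
        self.in_flight = (track, data)
        return True

    def secondary_ack(self):
        """Secondary acknowledges the asynchronous update."""
        track, data = self.in_flight
        self.secondary[track] = data
        self.in_flight = None

pair = SemiSyncVolumePair()
assert pair.write(0, "x")          # accepted, completes at the primary
assert not pair.write(1, "y")      # held off: one request already in flight
pair.secondary_ack()
assert pair.write(1, "y")          # accepted after the acknowledgment
```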
- SRDF data migration facility
- EMC announced a migration capability that is based on the SRDF platform.
- This facility allows a Symmetrix 5000 to directly connect to another vendor's subsystem.
- the objective of the Symmetrix Migration Data Service (SMDS) is to ease the implementation of an EMC subsystem and is not meant to be a general purpose facility.
- SMDS has been called the "data sucker" because it directly reads data off another control unit.
- the data migration must include all of the volumes on a source subsystem and the target is restricted to a Symmetrix 5000.
- An EMC Series 5000 subsystem is configured so that it can emulate the address and control unit type and device types of an existing subsystem (the source). This source subsystem is then disconnected from the host and attached directly to the 5000. The 5000 is then attached to the host processor. This setup is disruptive and therefore does cause an application outage.
- the migration begins when a background copy of the source data is initiated by the 5000 subsystem. Applications are enabled and users have read and write access to data.
- when the target subsystem (the 5000) receives a read request from the host, the data is directly returned if it has already been migrated. If the requested data has not been migrated, the 5000 will immediately retrieve the data from the source device.
- when the target subsystem receives a write request, the update is placed only on the 5000 and is not reflected onto the source subsystem. This operation means that updates will cause the source and target volumes to be out of synchronization. This operation is a potential data integrity exposure because a catastrophic interruption in the migration process will cause a loss of data for any volume in the source subsystem that has been updated.
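The read-through and write-local behavior described above, and the resulting integrity exposure, can be sketched as a toy model; the class and method names are hypothetical:

```python
# Sketch of the SMDS-style "data sucker": reads are served from the
# target if already migrated, otherwise fetched from the source on
# demand; writes land only on the target, so the source falls out of
# synchronization. All names are invented.

class PullMigrationTarget:
    def __init__(self, source):
        self.source = source       # old subsystem, now attached behind us
        self.migrated = {}         # tracks already copied or written

    def read(self, track):
        if track not in self.migrated:
            self.migrated[track] = self.source[track]  # fetch on demand
        return self.migrated[track]

    def write(self, track, data):
        self.migrated[track] = data   # NOT reflected onto the source

source = {0: "a", 1: "b"}
tgt = PullMigrationTarget(source)
assert tgt.read(0) == "a"          # pulled from the source on first read
tgt.write(1, "B")
assert tgt.read(1) == "B"
assert source[1] == "b"            # source is now stale: the exposure
```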
- although SMDS has some of the requirements for transparent migration, SMDS does not have all of them.
- all applications are deactivated so that the 5000 can be installed and attached to the host in place of the source subsystem. Specialized software is loaded into the Symmetrix 5000 to allow it to emulate the source subsystem and initiate the data migration. This disruption can last as long as an hour.
- the data on the source volume is available for read and write access during a SMDS migration.
- SMDS may support a dynamic and nondisruptive takeover of the target volume when the source and target volumes are synchronized.
- the source subsystem must be disconnected and the migration software must be disabled; it is unknown whether this is disruptive and requires an outage.
- SMDS can link to control units manufactured by other vendors.
- the purpose of SMDS is to ease the disruption and simplify the installation of an EMC 5000 subsystem. Data can only be migrated to an EMC subsystem. Therefore SMDS does NOT support a multiple vendor environment.
- the present invention is a data migration facility for managing and maintaining large computer systems and particularly for automatic movement of large amounts of data (migration of data) from one data storage location to another data storage location in the computer system.
- the computer system has a plurality of storage volumes for storing data used in the computer system, one or more storage control units for controlling I/O transfers of data in the computer system from and to the storage volumes, one or more application programs for execution in the computer system using data accessed from and to the storage volumes, one or more operating system programs for execution in the computer system for controlling the storage volumes, the storage control units and the application programs, and a data migration program for migrating data from one of said data volumes designated as a source volume to one of said data volumes designated as a target volume while said application programs are executing using data accessed from and to the storage volumes.
- the data migration program includes a main module to control the start of a migration session when said application programs are using data accessed to and from the source volume, to migrate data from the source volume to the target volume, and to end the migration session whereby said application programs are using data accessed to and from the target volume.
- the data migration program includes a volume module to control said volumes during the migration session.
- the data migration program includes a copy module to control the copying of data from the source volume to the target volume during the migration session.
- the data migration program includes a monitor module for monitoring I/O transfers to the data volumes during the migration sessions.
- the data migration program includes dynamic activation and termination, includes non-disruptive automatic swap processing that does not require operator intervention, is transparent to applications and end-users while providing complete read and write activity to storage volumes during the data migration, permits multiple migrations per session, and permits multiple sessions.
- the installation is non-disruptive (a computer program that can execute as a batch process rather than requiring the addition of a new hardware sub-system to the computer system), requires no IPL of the operating system, and is vendor independent with any-to-any device migration independent of DASD control unit model type or device type.
- the data migration program includes a communication data set (COMMDS) located outside the DASD control unit which helps ensure vendor independence.
- the data migration facility has complete data integrity at all times, provides the ability to introduce new storage subsystems with minimal disruption of service (install is disruptive), allows parallel or ESCON channel connections to ensure vendor independence, and can be implemented as computer software only without need for dependency on hardware microcode assist.
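A toy sketch of how the cooperating parts named above (main control, copy, and I/O monitoring) fit together: a bulk copy runs while a monitor records user updates, and a refresh pass brings the target into synchronization. The class and method names are invented for illustration, not taken from the patent:

```python
# Illustrative skeleton of the migration program's module roles: a main
# module drives the session, a copy module performs the bulk copy, a
# refresh step re-copies updated tracks, and a monitor watches user I/O
# during the migration. All names are hypothetical.

class MigrationSession:
    def __init__(self, source, target):
        self.source, self.target = dict(source), dict(target)
        self.updates = []                    # tracks written during the copy

    def monitor_write(self, track, data):    # monitor module: watch user I/O
        self.source[track] = data
        self.updates.append(track)

    def copy(self):                          # copy module: bulk copy pass
        self.target.update(self.source)

    def refresh(self):                       # re-copy tracks updated meanwhile
        for track in self.updates:
            self.target[track] = self.source[track]
        self.updates.clear()

    def run(self):                           # main module: drive the phases
        self.copy()
        self.refresh()
        return self.target == self.source    # synchronized => ready to swap

s = MigrationSession({0: "a"}, {})
s.monitor_write(1, "b")                      # user writes during migration
assert s.run()
```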
- FIG. 1 depicts a block diagram of an enterprise computer system.
- FIG. 2 depicts further details of the enterprise system of FIG. 1.
- FIG. 3 depicts a single MVS operating system embodiment for migrating data from a source volume to a target volume.
- FIG. 4 depicts a multiple MVS operating system embodiment for migrating data from a source volume to a target volume.
- FIG. 5 depicts a flow diagram of the different states of the migration process.
- FIG. 6 depicts a block diagram of the different software components that control the master/slave operation used to perform the migration process.
- FIG. 7 depicts a block diagram of the copy sub-task components that control the copying of data in the migration process.
- FIG. 8 depicts a block diagram of the customer I/O request components that control the operation of I/O requests during the migration process.
- FIG. 9 depicts a block diagram of the interrelationship of the different software components used to perform the migration process.
- FIG. 10 depicts the source and target volumes before a TDMF migration.
- FIG. 11 depicts the source and target volumes after a TDMF migration.
- FIG. 1
- the Transparent Data Migration Facilities (TDMF) is resident as an application program under the MVS/ESA host operating system of the enterprise system 1 of FIG. 1.
- the Transparent Data Migration Facilities is an application program among other application programs 2.
- the TDMF migration provides model independence, operational flexibility, and a completely transparent migration.
- the enterprise system 1 is a conventional large-scale computer system which includes one or more host computer systems and one or more operating systems such as the MVS operating system.
- the operating systems control the operation of the host computers and the execution of a number of customer applications 2 and a TDMF application.
- the TDMF application is used for migrating data from one location in the enterprise system to another location in the enterprise system.
- the data locations in the enterprise system 1 are located on storage volumes 5. Data is transferred to and from the storage volumes 5 under control of the storage control units 4.
- the architecture of the FIG. 1 system is well-known as represented, for example, by Amdahl and IBM mainframe systems.
- the storage volumes 5 between which the migration occurs may be located at the same place under control of a single operating system or may be located in volumes under the control of different operating systems. Also, some of the volumes may be located geographically remote such as in different cities, states or countries. When located remotely, the remote computers are connected by a high-speed data communication line such as a T3 line.
- the TDMF migration is intended for flexibility, for example, so that multiple MVS/ESA releases are supported (4.2, 4.3, 5.1, 5.2; OS/390 V1.1, V1.2, V1.3, and V2.4), so that shared data system environments are supported, so that CKD/E compatible 388x and 399x control units are supported (the Read Track CCW is required in the disclosed embodiment), so that 3380 and 3390 device geometries are supported, so that flexible device pairing options are possible (device pairs must have equal track sizes and numbers of cylinders, and the target volume must be equal to or greater than the source volume), so that a single TDMF session can support up to 640 concurrent migrations, so that a single TDMF session can support concurrent migrations with differing control unit and device types, and so that an optional point-in-time capability is available.
- the TDMF migration is intended to have an ease of use that includes, for example, easy installation, a simple parameter-driven process, a centralized control and monitoring capability, and the ability to have integrated online help and documentation.
- the TDMF migration is intended to have a minimal impact on performance featuring, for example, a multiple-tasking design, efficient user I/O scanning, asynchronous copy and refresh processes, minimization of device logical quiesce time and integrated performance measurement capability.
- the TDMF migration is intended to have application transparency including, for example, dynamic installation and termination, no requirement for application changes, continuous read and write access, dynamic enablement of MVS intercepts and dynamic and nondisruptive takeover of target devices.
- the TDMF migration is intended to have data integrity including, for example, continuous heartbeat monitoring, dynamic error detection and recovery capability, passive and nondestructive I/O monitoring design, optional data compare capability and maintenance of audit trails.
- FIG. 2
- the multiple operating systems 3 include the MVS operating systems 3-1, 3-2, ..., 3-M. These operating systems 3 are typically running on a plurality of different host computers with one or more operating systems per host computer. Each of the operating systems 3-1, 3-2, ..., 3-M is associated with a plurality of application programs including the application programs 2-1, 2-2, ..., 2-M, respectively.
- the application programs 2-1, 2-2, ..., 2-M are conventional application programs as would be found in a customer enterprise system. In addition to the conventional application programs, the TDMF application program 2-T is also present on each of the operating systems 3-1, 3-2, ..., 3-M that are to be involved in a data migration.
- each of the operating systems 3-1, 3-2, ..., 3-M is associated with a storage control unit complex 4 including the storage control unit complexes 4-1, 4-2, ..., 4-M, respectively.
- Each of the storage control units 4-1 , 4-2, ... , 4-M is associated with one or more volumes.
- the storage control unit 4-1 is associated with the data storage volumes 5-1₁, ..., 5-1ᵥ₁.
- the storage control unit 4-2 is associated with the volumes 5-2₁, ..., 5-2ᵥ₂.
- the storage control unit 4-M is associated with the volumes 5-M₁, ..., 5-Mᵥₘ.
- any one of the applications 2-1, 2-2, ..., 2-M may store or retrieve data from any one of the volumes 5 under control of the operating systems 3-1, 3-2, ..., 3-M through the storage control units 4-1, 4-2, ..., 4-M.
- any one of the volumes 5 may be designated for replacement or otherwise become unavailable so that the data on the volume must be moved to one of the other volumes of FIG. 2.
- the data of a source volume X may be migrated to a target volume Y.
- volume 5-1₁ has been designated as the source volume
- volume 5-M₁ has been designated as the target volume.
- the objective is to move all the data from the source volume X to the target volume Y transparently to the operation of all of the applications 2-1, 2-2, ..., 2-M so that the enterprise system of FIG. 2 continues operating with continuous 24-hour-by-seven-day data availability without significant disruption.
- the single master MVS operating system 3-1 controls the operation of the applications 2-1 on a source volume 5X.
- the enterprise system requires that the source volume 5X be taken out of service, requiring the data on the source volume 5X to be migrated to a target volume 5Y.
- the operating system 3-1 controls transfers to and from the source volume 5X through the control unit 4X.
- the MVS operating system 3-1 controls transfers to and from the target volume 5Y through the storage control unit 4Y. The data migration from the source volume 5X to the target volume 5Y is under control of the TDMF application 2-T₁.
- the MVS operating systems 3-1, 3-2, ..., 3-M are all part of the enterprise system 1 of FIG. 1.
- Each of the operating systems 3-1, 3-2, ..., 3-M is associated with corresponding application programs 2-1, 2-2, ..., 2-M.
- Each of the applications 2-1, 2-2, ..., 2-M is operative under the control of the MVS operating systems 3-1, 3-2, ..., 3-M to access the storage volumes 5 through the storage control units 4-1, 4-2, ..., 4-M.
- the storage control units 4-1, 4-2, ..., 4-M control accesses to and from the volumes 5-1, 5-2, ..., 5-M, respectively.
- the volumes 5-1 include the volumes 5-1₁, ..., 5-1ᵥ₁; the volumes 5-2 include the volumes 5-2₁, ..., 5-2ᵥ₂; and the volumes 5-M include the volumes 5-M₁, ..., 5-Mᵥₘ, respectively.
- the source VOLX is designated 5-1₁ and is controlled through the SCUX designated 4-1, and the target volume VOLY is designated 5-M₁ and is controlled through the SCUY designated 4-M.
- the data migration occurs from the source VOLX to the target VOLY without interruption of the customer applications 2-1, 2-2, ..., 2-M.
- the TDMF application 2-T operates in a distributive manner across each of the operating systems 3-1, 3-2, ..., 3-M.
- the operation of the TDMF migration application is in the form of a master/slave implementation.
- Each of the operating systems 3-1, 3-2, ..., 3-M includes a corresponding instance of the TDMF application, namely, the TDMF applications 2-T.
- One of the applications is designated as the master TDMF application and the others are designated as slave TDMF applications.
- the TDMF application associated with the MVS operating system 3-1 is designated as the master TDMF application 2-Tmas.
- Each of the other operating systems 3-2, ..., 3-M is associated with a slave application 2-Tsl1, ..., 2-TslM, respectively.
- the data migration in the FIG. 3 and FIG. 4 systems is carried out with a TDMF application 2-T (with a separate instance of that application for each MVS operating system) without necessity of any modified hardware connections.
- the migration commences with an INITIALIZATION/ACTIVATION phase 10, followed by a COPY phase 11, followed by a REFRESH phase 12, followed by a QUIESCE phase 13, followed by a SYNCHRONIZE phase 14, followed by a REDIRECT phase 15, followed by a RESUME phase 16, and ending in a TERMINATION phase 17.
- if an error occurs during the INITIALIZATION/ACTIVATION phase 10, the error is detected by the ERROR module 20, which passes the flow to the TERMINATION stage 17.
- if an error is detected during the COPY phase 11, the ERROR module 21 passes the flow to the TERMINATION stage 17. If an error is detected during the REFRESH phase 12, the ERROR module 22 passes the flow to the TERMINATION phase 17.
- if an error occurs during the QUIESCE phase 13 or the SYNCHRONIZE phase 14, the ERROR modules 23 and 24, respectively, pass the flow to the RESUME phase 16. If an error occurs during the REDIRECT phase 15, the ERROR module 25 passes the flow to the BACKOUT module 26, which then returns the flow to the RESUME phase 16.
- the migration phases of FIG. 5 are active in each of the MVS operating systems 3, and the MASTER TDMF application and the MASTER MVS operating system ensure that one phase is completed for all operating systems, both master and all slaves, before the next phase is entered.
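The FIG. 5 phase flow and its error transitions can be summarized as a small table-driven sketch. The phase and module targets follow the text; the control structure itself is illustrative:

```python
# Sketch of the FIG. 5 phase flow, including the error transitions:
# errors through REFRESH terminate the session, QUIESCE/SYNCHRONIZE
# errors resume normal I/O, and a REDIRECT error passes through BACKOUT
# before resuming. The driver function is invented for illustration.

PHASES = ["INIT/ACTIVATE", "COPY", "REFRESH", "QUIESCE",
          "SYNCHRONIZE", "REDIRECT", "RESUME", "TERMINATION"]

ON_ERROR = {
    "INIT/ACTIVATE": "TERMINATION",
    "COPY": "TERMINATION",
    "REFRESH": "TERMINATION",
    "QUIESCE": "RESUME",
    "SYNCHRONIZE": "RESUME",
    "REDIRECT": "BACKOUT",     # BACKOUT then returns the flow to RESUME
}

def run_migration(failing_phase=None):
    """Walk the phases; on an error, follow the FIG. 5 recovery path."""
    for phase in PHASES:
        if phase == failing_phase:
            recovery = ON_ERROR[phase]
            if recovery == "BACKOUT":
                return ["BACKOUT", "RESUME", "TERMINATION"]
            if recovery == "RESUME":
                return ["RESUME", "TERMINATION"]
            return ["TERMINATION"]
    return ["COMPLETED"]

assert run_migration() == ["COMPLETED"]
assert run_migration("COPY") == ["TERMINATION"]
assert run_migration("REDIRECT") == ["BACKOUT", "RESUME", "TERMINATION"]
```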
- the TDMFMAIN module is the main task which calls the other modules for controlling the phase-wise execution of the migration as set forth in FIG. 5.
- the TDMFMAIN module starts the TDMF session of FIG. 5 on the master and slave systems, opens all necessary files, reads and validates control cards, and validates all parameters and volumes to be involved in the migration session.
- the TDMFVOL module controls the migration process of any and all volumes being migrated within the TDMF session of FIG. 5.
- the TDMFICOM module generates channel programs to implement I/O operations being requested to the COMMUNICATIONS DATA SET (COMMDS) via parameters passed by the caller.
- the caller may request that the system initialize the COMMDS or selectively read or write caller-specified TDMF control blocks as required by the controller.
- the TDMFICOM module is called by the TDMFMAIN module only.
- the TDMFIVOL module generates channel programs to implement I/O operations being requested to volumes being migrated in the TDMF session of FIG. 5 via parameters passed by the caller.
- the TDMFSIO module issues a request to the MVS INPUT/OUTPUT SUPERVISOR (IOS) component to perform the I/O operation represented by the INPUT/OUTPUT SUPERVISOR CONTROL BLOCK (IOSB) in conjunction with its SERVICE REQUEST BLOCK (SRB) as requested by the caller.
- IOS INPUT/OUTPUT SUPERVISOR
- SRB SERVICE REQUEST BLOCK
- the TDMFSIO module can be called from the TDMFICOM module and the TDMFIVOL module. Upon completion of the I/O operation requested, control is returned to the calling module.
- the COPY sub-task runs as one instance per volume migration, on the master system only.
- the COPY Sub-task includes the TDMFCOPY module which provides the functions required by the TDMF COPY Sub-task.
- the TDMFCOPY Sub-task is attached as a Sub-task by the module TDMFVOL by means of an ATTACHX macro during its processing.
- the TDMFCOPY module implements three Sub-task phases, namely, the COPY sub-phase, the REFRESH sub-phase, and the SYNCHRONIZATION sub-phase.
- the TDMFIVOL module may also be called by the TDMFCOPY module to generate channel programs to implement I/O operations being requested to volumes being migrated in the TDMF session of FIG. 5 via parameters passed by the caller.
- the TDMFSIO module, when called by the TDMFIVOL module, issues a request to the MVS INPUT/OUTPUT SUPERVISOR (IOS) component to perform the I/O operation represented by the INPUT/OUTPUT SUPERVISOR CONTROL BLOCK (IOSB) in conjunction with its SERVICE REQUEST BLOCK (SRB) as requested by the caller.
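The three sub-phases of the copy sub-task (COPY, REFRESH, SYNCHRONIZATION) can be sketched as a single loop: a bulk pass, refresh passes that drain tracks updated by users mid-copy, and a final pass taken while the source is quiesced. The function, data structures, and refresh limit are invented for illustration:

```python
# Sketch of the copy sub-task's three phases: a bulk COPY pass, REFRESH
# passes that re-copy "dirty" tracks updated during the copy, and a
# final SYNCHRONIZATION pass once the source is quiesced. All names and
# the refresh limit are hypothetical.

def copy_subtask(source, target, dirty, refresh_limit=2):
    target.update(source)                 # COPY: full pass over the volume
    for _ in range(refresh_limit):        # REFRESH: drain dirty tracks
        if not dirty:
            break
        for track in list(dirty):
            target[track] = source[track]
            dirty.discard(track)
    # QUIESCE would stop new updates here; SYNCHRONIZATION copies the rest
    for track in list(dirty):
        target[track] = source[track]
        dirty.discard(track)
    return target == source               # synchronized => takeover possible

src = {0: "a", 1: "b", 2: "c"}
dirty = {1}                               # track 1 was updated mid-copy
tgt = {}
assert copy_subtask(src, tgt, dirty)
assert not dirty                          # all updates drained
```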
- in FIG. 8, a block diagram of the monitoring of I/O operations during a migration session of FIG. 5 is shown.
- the FIG. 8 modules are conventional with the addition of the TDMFIMON module.
- the purpose of the TDMFIMON module is to monitor aU customer I/O operations to the source and target volumes during the life of an active migration session of FIG. 5.
- the TDMFIMON module insures that the primary design objective of insuring data integrity of the target volume by insuring that any update activity by customer I/O operation that changes the data located on the source volume wUl be reflected on the target volume.
- the TDMFIMON module only monitors volumes involved in a migration session of FIG. 5 and aUowing any I/O operations to volumes not involved in the TDMF migration session not to be impacted in any fashion.
- the TDMFMAIN module functions to control systems initialization and system determination and calls the TDMFVOL module, the TDMFICOM module, and the TDMFIVOL module.
- the TDMFVOL module is responsible for volume initialization with the TDMFNVOL module, volume activation with the TDMFAVOL module, volume quiesce with the TDMFQVOL module, volume resume with the TDMFRVOL module, and volume termination with the TDMFTVOL module.
- the TDMFVOL module calls or returns to the TDMFMAIN module, the TDMFICOM module, the TDMFTVOL module, and the TDMFCOPY module.
- the TDMFCOPY module is called by and returns to the TDMFVOL module and the TDMFIVOL module.
- the TDMFICOM module calls and is returned to by the TDMFSIO module.
- the TDMFIVOL module calls and is returned to by the TDMFSIO module.
- Details of the various modules of FIG. 6, FIG. 7, FIG. 8, and FIG. 9 for performing the migration in the phases of FIG. 5 are described in detail in the following LISTING 1.
- a data migration is to occur from the source volume 5-1, through the storage control unit SCU X designated 4-1, to the target volume 5-M, through the storage control unit SCU Y designated 4-M.
- the SOURCE volume has device address A00, serial number SRC001, and is in the ON-LINE status before the TDMF migration.
- the target volume has the device address FC0, serial number TGT001, and an ON-LINE status before the migration.
- all I/O operations are directed to the source volume 5-1.
- the source volume 5-1 has a device address of A00, a serial number of TGT001, and an OFF-LINE status.
- the target volume 5-M has a device address of FC0, a serial number of SRC001, and an ON-LINE status.
- TDMF is initiated as an MVS batch job.
- TDMF's job control language identifies the system type, session control parameters, and the communications data set (COMMDS).
- the session control parameters define the paired source and target volumes which make up the migrations under the control of this session.
- the control parameters also define the master system and all slave systems.
- the COMMDS set is the mechanism that is used to enable all systems to communicate and monitor the health of the migrations in progress.
- the COMMDS set is also used as an event log and message repository.
- All systems that have access to the source volume must have TDMF activated within them. This requirement is critical in order to guarantee data integrity.
- the responsibilities of the master system include:
- the responsibilities of the slave systems include: • Initialization of the slave TDMF environment. • Establishment of communication to the master through the COMMDS.
- the master system initiates and controls all migrations.
- a migration is broken into the major phases as illustrated in FIG. 5.
- the master initiates each phase and all slave systems must acknowledge in order to proceed. If any system is unable to acknowledge or an error is detected, the systems will clean up and terminate the migration.
- TDMF is designed with a passive monitoring capability so that all user I/O operations to the source volume will complete as instructed by the application. This completion means that if any unexpected event occurs, the data on the source volume is whole.
- TDMF will merely clean up and terminate the migration. Standard cleanup and recovery is active for all phases with the exception of the REDIRECT phase.
- a TDMF session begins with an INITIALIZATION phase, immediately foUowed by an ACTIVATION phase.
- all systems confirm the validity of the source and target volumes and enable user I/O activity monitoring.
- the master will begin the COPY phase and initiate a sub-task (COPY Sub-task) to copy data from the source to the target volume.
- this COPY Sub-task is a background activity and will copy data asynchronously from the source volume to the target volume.
- I/O monitoring provided by the TDMFIMON module of FIG. 8 provides the ability to detect updates to the source volume.
- a system that detects a source volume update will note this update in a refresh notification record and send this information to the master system.
- the REFRESH phase is initiated by the master. During this phase, the updates that have been made to the source volume are reflected on to the target volume.
- the REFRESH phase is divided into multiple cycles. A refresh cycle is a pass through the entire source volume processing all updates.
- TDMF measures the time required to process the refresh operations in a single refresh cycle.
- the refresh cycles will continue until the time for a refresh cycle is reduced to a threshold that is based on the collected performance data. When the threshold is reached, this event signals the master to be ready to enter the next phase.
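The convergence of refresh cycles can be modeled with a small simulation. The update rate, copy rate, and units below are illustrative assumptions; only the shape of the loop (repeat cycles until one finishes under the threshold) comes from the text:

```python
def run_refresh_cycles(outstanding_updates, update_rate, copy_rate, threshold):
    """Simulate refresh cycles: each cycle applies the outstanding updates
    to the target while new updates keep arriving on the source.  Cycles
    continue until one cycle's duration drops below the threshold,
    signalling readiness to quiesce.  Returns the cycle durations."""
    durations = []
    while True:
        # Time to apply this cycle's updates to the target volume.
        duration = outstanding_updates / copy_rate
        durations.append(duration)
        if duration < threshold:
            return durations
        # Updates that accumulated on the source while this cycle ran.
        outstanding_updates = duration * update_rate
```

Note the loop only converges when the copy rate exceeds the source's update rate; a source updated faster than it can be copied would never reach the threshold, which is why the real facility measures each cycle against collected performance data.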
- the Master issues a request to the Slave(s) to Quiesce all I/O to the Source volume (from the slave side). At this time the final group of detected updates is collected and applied to the Target volume (SYNCHRONIZATION).
- the Master starts the volume REDIRECT (swap) phase.
- the Target volume has now become the new Source volume.
- the Master initiates the RESUME phase so that user I/O can continue to the new source volume.
- the elapsed time between the last QUIESCE phase and the RESUME phase is approximately four (4) seconds plus the actual SYNCHRONIZATION time (which should always be less than the specified synchronization goal).
- the Synchronization Goal default is five (5) seconds. Synchronization will not occur unless the calculated synchronization time is less than the goal. If the synchronization goal is increased, then the time the user I/O is queued (quiesced) is greater. If the value 999 is used, this equates to synchronizing as soon as possible, no matter how long it takes. This can be a significant amount of time depending on the write activity of the source volume. Therefore, use discretion when changing this value.
- the Synchronization Goal parameter may be specified for each volume migration, allowing the customer to specify the amount of time (in seconds) that he will allow the Synchronization phase to execute. This is the maximum amount of time.
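The goal test described above reduces to a small predicate; the function name is an illustrative stand-in, but the default of five seconds, the strict comparison, and the 999 special case are all taken from the text:

```python
def should_synchronize(calculated_sync_time, goal_seconds=5):
    """Decide whether the SYNCHRONIZATION phase may proceed.

    Synchronization occurs only when the calculated synchronization time
    is less than the goal.  The special value 999 means "synchronize as
    soon as possible", regardless of how long it takes.
    """
    if goal_seconds == 999:
        return True
    return calculated_sync_time < goal_seconds
```

With the default goal, a calculated time of 3 seconds permits synchronization while 7 seconds forces another refresh cycle; a goal of 999 permits it unconditionally.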
- the Master and Slave(s) wake up for processing based upon a variable which is the minimum number of seconds based upon any migration volume's current phase or stage.
- the phases of a volume migration and their associated time intervals are:
- the CPU overhead associated with running TDMF is less than 3 percent on average for the Master system. This is dependent upon the number of volumes within a session and write activity against those source volumes. A Slave system's CPU overhead will be almost non-measurable. For example, if the Master job takes 44 minutes, 22 seconds (2662 seconds) to migrate 16 volumes, and the TCB time is 63.5 seconds, and the SRB time is 2.92 seconds, then the CPU overhead is equal to 2.49 percent ((63.5 + 2.92) / 2662) for that session.
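The worked example above can be checked directly; the helper function is an illustrative wrapper around the patent's own formula:

```python
def cpu_overhead_percent(tcb_seconds, srb_seconds, elapsed_seconds):
    """TDMF CPU overhead as a percentage of the session's elapsed time:
    (TCB time + SRB time) / elapsed time, expressed in percent."""
    return 100.0 * (tcb_seconds + srb_seconds) / elapsed_seconds

# The patent's example: 44 minutes 22 seconds elapsed, TCB 63.5 s, SRB 2.92 s.
elapsed = 44 * 60 + 22                     # 2662 seconds
overhead = cpu_overhead_percent(63.5, 2.92, elapsed)
```

Here `overhead` evaluates to just under 2.5 percent, matching the 2.49 percent figure quoted in the text.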
- the Master should be placed on the system that has the most updates or on the system where the owning application is executing. If multiple TDMF Master sessions are being run on multiple operating systems, then the MVS system(s) must have a global facility like GRS or MIM. This is to prevent inadvertent usage of the same volumes in a multi-session environment. If a GRS-type facility is not available in the complex, then all Master sessions must run on the same operating system.
- the Communications Dataset should be placed on a volume with low activity and the volume must not be involved in any migration pairing.
- Performing refresh operations to the target volume to reflect update activity. • Checking the internal health of the master environment and the health of all slave systems.
- the master issues the quiesce request to all systems so that activity to the source volume will be inhibited.
- this is the indication for the slave system to enter the QUIESCE phase and to quiesce and send to the master the final group of detected updates so that the master may perform the synchronization process or optionally restart an additional REFRESH phase.
- synchronization will begin in the SYNCHRONIZATION phase.
- the device quiesce is transparent to all applications.
- An application can issue I/O requests; however, they will not be executed and will be queued until the RESUME phase.
- the purpose of the collection of performance information during the REFRESH phase is to minimize the device quiesce time.
- the master will initiate the REDIRECT phase.
- the purpose of the REDIRECT phase is to cause I/O activity to be redirected to the target volume so that it can become the primary device.
- the master will request that all systems perform and confirm a successful redirect before the redirect operation is performed by the Master system.
- the master will then initiate the RESUME phase request so that any queued user I/O will begin execution.
- Subsequent I/O operations will then be directed to the target device.
- I/O monitoring will be disabled and the migration will terminate in the TERMINATE phase.
- the TDMF facility simplifies the relocation of user-specified data from existing hardware to new storage subsystems, for example, without loss of access to data, without the imposition of unfamiliar operator procedures, and without dependence upon vendor firmware or hardware release levels. • TDMF is transparent to the end user and the application program. No application program changes are required and the facility is dynamically activated.
- TDMF provides for full data access during the data migration.
- the data on the source volume is available to the end user for read and write access.
- TDMF supports a dynamic and nondisruptive takeover of the target volume when the source and target volumes are synchronized. All applications with data resident on the source volume may remain enabled during the takeover process. • TDMF ensures complete data integrity through the use of passive I/O monitoring.
- TDMF is an "open" facility and therefore supports a multiple vendor environment. Any vendor that supports the IBM 3880 or IBM 3990 ECKD specification can participate as the sending or receiving control unit in a TDMF migration. (Read Track CCW support is required.)
- TDMF operates in both a single CPU and a multiple CPU environment with shared DASD.
- with TDMF, a competitive multiple vendor environment is enhanced because TDMF is designed to be independent of any specific storage subsystem. LISTING 1 TDMF LOGIC © 1998 AMDAHL CORPORATION
- This module is to start the TDMF session on a Master or Slave system, open all necessary files, read and validate control cards, validate all parameters and volumes to be involved in the migration session. Initialize all TDMF control blocks and obtain the necessary storage for these control blocks.
- In the case of a TDMF Master system, the Communications Data Set is initialized.
- This module executes in Supervisor State, Protect Key Zero and is the first part of the main task for an address space.
- This module contains an MVS Extended Set Task Abend Exit (ESTAE) recovery routine. This routine ensures that the TDMFMAIN task receives control in the event that the TDMF main task attempts an abend condition.
- ESTAE MVS Extended Set Task Abend Exit
- CSECT TDMFMAIN Set up TDMF areas to include save areas, termination routines, message table, patch table, work areas and clock.
- Set up recovery environment to include input and output areas, file processing, and initiation/termination.
- routine ADD MESSAGE for inclusion upon return to TDMFMAIN.
- routine ADD SYSTEM MESSAGE for inclusion upon return to TDMFMAIN.
- Call module TDMFICOM requesting that all TDMF main system records to be written for this system. This includes the main TDMF control block, the TDMFTSOM control block, and information related to the TDMF authorization routine. This is done to provide diagnostic information that may be necessary at a later time. Call module TDMFVOL.
- This module will call module TDMFVOL, CSECT TDMFVOL, to complete all volume migration phases.
- TDMFVMSG TDMF volume message control block
- TDMFTSOM TDMF TSO Monitor
- TDMFMMSG embedded TDMF Monitor message
- TDMFAUTH TDMF authorization module
- Validate Format 1 DSCB and CCHHR in order to verify that COMMDS has contiguous cylinders allocated and that the allocation is large enough.
- DRM Dynamic Reconfiguration Manager
- TDMFSIO Load I/O code
- TDMFICOM Load communications I/O code
- TDMFCOPY Load copy code
- Validate all source and target volumes in migration session Verify that all UCBs exist for source and target volumes. Verify that UCBs do NOT exist for new volume serial numbers. Save UCB address information for PIN processing.
- the system will allow target devices to be non-existent (or offline) on a Slave system only, if the type of volume migration being requested is for a Point-In-Time migration.
- TDMFTSOM TDMF TSO Monitor
- TDMFMMSG embedded TDMF Monitor message
- the TDMF design requires that the IOSLEVEL used by the operating system for Dynamic Device Reconfiguration (DDR) be saved for subsequent use by TDMFIVOL.
- DDR Dynamic Device Reconfiguration
- CSECT TDMFINSE Initialize system entries for Master and Slave(s) (if present), TOD value, TDMF ASID, and
- I/O operations are issued to collect information from the volumes involved in the migration session via subsequent calls to module TDMFIVOL.
- I/O operations requested are the miscellaneous requests to Read Device Characteristics, Sense ID, and Sense Subsystem Status. Then the volumes are checked to ensure that the volumes are not involved in a duplexing operation, that the volumes support ECKD CCWs, that the device type is supported, that the track geometries are equal, and that the target device size is equal to or greater than that of the source device.
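The validation list above can be sketched as a single checking routine. The dictionary keys are hypothetical stand-ins for the data returned by the Read Device Characteristics, Sense ID, and Sense Subsystem Status requests; only the checks themselves come from the text:

```python
def validate_pair(source, target):
    """Pre-migration checks for a source/target volume pair, mirroring
    the patent's list.  Returns a list of error strings (empty = valid)."""
    errors = []
    if source["duplexed"] or target["duplexed"]:
        errors.append("volume involved in a duplexing operation")
    if not (source["eckd"] and target["eckd"]):
        errors.append("ECKD CCWs not supported")
    if source["geometry"] != target["geometry"]:
        errors.append("track geometries are not equal")
    if target["cylinders"] < source["cylinders"]:
        errors.append("target device smaller than source")
    return errors
```

A target larger than the source passes the size check, since the text only requires the target to be equal to or greater than the source.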
- TDMFICOM requesting a write/update of system record zero information, followed by a release of the COMMDS.
- TDMFICOM requesting the write of the TDMFVMSG control block for all volumes from this system. If this is a Master system, Dequeue Exclusive for all volumes in this migration session and unpin the UCBs.
- TDMFMMSG Collect and print all TDMF TSO Monitor messages (TDMFMMSG), system messages (TDMFSMSG), and all migration messages (TDMFVMSG) for all volumes. If this is a Master system, collect and print all TDMF TSO Monitor messages (TDMFMMSG), system messages (TDMFSMSG), and all migration messages (TDMFVMSG) for all volumes that were issued on behalf of any Slave system that have been previously written to the COMMDS. This Slave system information is collected via calls to TDMFICOM collecting the appropriate control blocks from the COMMDS.
- A patch area used for fixes.
- TDMFVOL The purpose of this module is to control the migration process of any and all volumes being migrated within the TDMF session. It is called from module TDMFMAIN, CSECT TDMFMAIN, after the initialization of the COMMDS on a Master system or upon completion of all error checking on a Slave system. This module executes as a part of the main task for an address space and executes in Supervisor State, Protect Key Zero. This module contains an MVS Extended Set Task Abend Exit (ESTAE) recovery routine.
- ESTAE MVS Extended Set Task Abend Exit
- This routine ensures that the TDMFVOL task receives control in the event that the TDMFVOL task attempts an abend condition.
- routine ADD MESSAGE for inclusion upon return to TDMFMAIN.
- routine ADD SYSTEM MESSAGE for inclusion upon return to TDMFMAIN.
- phase processing is executed in the following sequence:
- TDMFRVOL Resume processing
- Each sub-phase executing as part of the individual sub-task communicates with the volume processing main task via posting of a sub-phase event control block (ECB).
- ECB sub-phase event control block
- This sub-task provides three sub-phases within the domain of its execution. These are the copy sub-phase, refresh sub-phase, and synchronization sub-phase.
- a copy sub-phase is used to copy all data from the source volume to the target volume.
- a refresh sub-phase is executed at least once copying all data that may have been updated by a Master or Slave system for that volume.
- This design allows customer access to all data on the source volume, including update activity, during the life of the migration process.
- the TDMF I/O Monitor module (TDMFIMON) dynamically detects that a Master or Slave system has updated data located on the source volume and communicates this information to the TDMFVOL main task to be used by CSECT TDMFMVOL.
- Each volume control block contains a word indicating the phases that have been completed or are being requested to be executed upon each of the possible systems.
- Each phase is represented by a word (32 bits), called the MSV_(phase)MASK.
- the MSV_(phase)MASK field is initialized to a value of one in all system-defined bit positions that do not exist in the TDMF session. Processing for an individual phase and its corresponding mask is handled by the appropriate CSECT. For example, swap processing is handled in TDMFSVOL and is controlled with bits in MSV_SMASK.
- the appropriate MSV_(phase)MASK bit zero defines that the Master system is requesting the function.
- a Slave system does not attempt execution of the phase if the Master system has not requested the function (bit zero of the MSV_(phase)MASK is a zero).
- Bits 1 through 31 are set when the respective slave system completes the phase that has been requested by the Master.
- the Master system normally only executes the requested phases when all slave systems have responded that the phase has completed. Thus, all bits are set to one.
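The mask protocol described above can be sketched with plain integer bit operations. Bit numbering here follows Python's shift convention (bit 0 is the request bit, bits 1 through 31 the per-system acknowledgments); the actual MVS word layout and the helper names are illustrative assumptions:

```python
# Sketch of the 32-bit MSV_(phase)MASK protocol: bit 0 is the Master's
# request bit; bits 1-31 acknowledge completion by each Slave system.

def init_mask(active_slave_bits):
    """Bits for systems not present in the session start at one, so the
    Master's all-bits-set completion test ignores them."""
    mask = 0
    for bit in range(1, 32):
        if bit not in active_slave_bits:
            mask |= 1 << bit
    return mask

def master_request(mask):
    """Only the Master may set bit zero, requesting the phase."""
    return mask | 1

def slave_ack(mask, slave_bit):
    """A Slave never executes a phase the Master has not requested."""
    if mask & 1 == 0:
        return mask
    return mask | (1 << slave_bit)

def phase_complete(mask):
    """The Master proceeds only when all 32 bits are set."""
    return mask == 0xFFFFFFFF
```

This models the positive-acknowledgment design: a phase completes only after the Master has requested it and every active Slave has set its own bit.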
- an indicator is set in a volume flag (MSV_VOL_FLAG) indicating that that phase is complete. Processing will continue to the next phase.
- this TDMF design provides a positive acknowledgment environment, i.e., all Slave systems only execute the functions requested by the Master system.
- the Master system only executes the requested phase upon successful completion by all Slave systems for that phase.
- there are two cases in which a Slave system might need to alter the phase processing sequence. Since bit zero can only be set by the Master system, a Slave system cannot request processing of a specific phase directly; the Slave system must instead have a way to notify the Master to alter its phase processing sequence.
- the first case is when a Slave system detects an error via the TDMF I/O Monitor which requires volume termination processing of the migration be invoked.
- the second case is that a Slave system detects an error during swap processing which requires that back-out processing be invoked. Since other Slave systems within the TDMF migration session may be present, and may have successfully executed swap processing for the volume, a swap volume processing failure on one Slave system must be communicated via the Master system, to all Slave systems, to invoke back-out processing.
- a Slave system may turn on its designated system bit in MSV_TMASK or MSV_BMASK requesting the Master system to effectively request these functions on all systems.
- the Slave system's designated bit is reset to zero by the Master system after the requested function's bit zero is set. This allows the Slave system's designated system bit to be used again as an acknowledgment indicator indicating that the Slave system has completed the requested function.
- the Master system examines MSV_TMASK and/or MSV_BMASK at the appropriate points to determine if a Slave system has set its respective bit in MSV_TMASK or MSV_BMASK indicating to the Master to alter its volume processing sequence.
- If volume termination is being requested, go to NEXT VOLUME. If TDMF is executing on a Slave system, go to NEXT VOLUME. If the copy sub-task does not exist, go to NEXT VOLUME. If the copy sub-phase has been posted, go to CHECK COPY ENDED.
- NEXT VOLUME PROCESSING If the volume is not currently quiesced, go to NEXT VOLUME. Else, set request function to quiesce once the refresh end function has completed. If slave systems are involved, the Master will wait until notification from Slave(s) that quiesce request has been processed, before continuing. If automatic migration, quiesce and synchronize without intervention. If prompt or single group option was selected, create addressability to the TDMF Monitor control block (TDMFTSOM), pass indicator to TDMF Monitor that a prompt is required.
- TDMFTSOM TDMF Monitor control block
- a TDMF system's response time is set to be normally the minimum time interval from the above, based upon every volume's phase.
- TDMF was designed to be as responsive as possible without placing a large burden upon the CPU.
- the TDMF system must be extremely responsive to minimize the impact to customer application I/O operations, beginning when the I/O operations have been quiesced and lasting until they have been resumed. This includes the time required to do volume quiesce phase processing, volume synchronize sub-phase processing, volume swap phase processing, and volume resume phase processing. This is why the system's response time is set to one second if any volume's migration is quiesced and not resumed, which encompasses the aforementioned phases and sub-phase.
- TDMF will only be as responsive as required based upon necessity.
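The timer selection described above can be sketched as a minimum over per-volume intervals. The interval values in the table are illustrative assumptions (the patent's list of phase intervals is not reproduced in this excerpt); only the minimum rule and the one-second floor for quiesced-but-not-resumed volumes come from the text:

```python
def wakeup_interval(volume_phases):
    """Pick the Master/Slave wake-up interval as the minimum over every
    volume's current phase, dropping to one second whenever any volume
    is quiesced but not yet resumed."""
    PHASE_INTERVALS = {       # seconds; hypothetical values
        "COPY": 30,
        "REFRESH": 15,
        "QUIESCE": 1,         # quiesced-to-resumed window: be responsive
        "SYNCHRONIZE": 1,
        "SWAP": 1,
        "RESUME": 1,
        "TERMINATE": 30,
    }
    return min(PHASE_INTERVALS[phase] for phase in volume_phases)
```

A session whose volumes are all copying sleeps the longest; the moment any volume enters the quiesce-through-resume window, the whole session polls every second.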
- TDMF I/O communications routine (module TDMFICOM) to execute the requested read or write operation. If TDMF is executing on a system that has been previously determined to have been "dead", the I/O operation is allowed only if it is a write I/O operation and the write I/O operation is not attempting to re-write system record zero, which contains all TDMFMSE, TDMFMSV, and TDMFMSVE information. This will prevent a system that has "died" from re-awakening and destroying system record zero information that may be in use by the recovery process and its subsequent termination of the TDMF system.
- the purpose of this routine is to pin or unpin an MVS Unit Control Block (UCB) as requested by the caller.
- the purpose of the pin or unpin is to prevent MVS from invoking Dynamic Reconfiguration Management (DRM) functions, thereby changing the address of the UCB during the volume migration process.
- DRM Dynamic Reconfiguration Management
- MODIFY UCBDDT Determine each source and target volume's MVS UCB Device Dependent Table (DDT) via the common extension in the UCB. Processing will depend on a caller option of chain or unchain.
- DDT Device Dependent Table
- TDMF Device Dependent Table
- TDMFIMON TDMF I/O Monitor
- module TDMFIMON receives control at the proper time for its processing.
- the chaining or unchaining operation being requested dynamically modifies the standard MVS UCB to point to the TDMFDDTE during a chain operation or reset to point to the standard MVS DDT for an unchain operation.
- SIO Start I/O
- DDT is changed to point to the TDMFIMON main entry point, and its address is saved in the TDMFDDTE control block. This allows the TDMFIMON routine to receive control at the proper time during MVS' Input/Output Supervisor (IOS) processing transparently.
- IOS Input/Output Supervisor
- TDMFIMON Functional Recovery Routine
- IOSB Input/Output Supervisor Block
- IOSDIE Input/Output Supervisor Disable Interrupt Exit
- CLEAN IOSLEVEL A TDMF design consideration requires that TDMF be able to quiesce and prevent customer I/O operations to a source volume.
- TDMF implements this requirement based upon the fact that MVS IOS will not allow an I/O operation, represented by an IOSB, whose IOSLEVEL value is numerically less than the UCBLEVEL field of the UCB to which the I/O operation is being directed.
- Normal I/O operations have an IOSLEVEL of X'01'; the UCB has a UCBLEVEL of X'01'.
- the UCBLEVEL will be changed via an IOSLEVEL macro to the Dynamic Device Reconfiguration (DDR) level of the MVS operating system.
- DDR Dynamic Device Reconfiguration
- the UCBLEVEL is effectively a priority which only allows subsequent I/O operations, represented by an IOSB, of equal or higher priority to be issued against the migrating volume.
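The level-based gate can be sketched as a simple comparison. The numeric value chosen for the DDR level below is a hypothetical stand-in (the patent gives only X'01' for normal I/O); the comparison rule itself comes from the text:

```python
# Sketch of the IOSLEVEL/UCBLEVEL gate: IOS starts an I/O only when the
# request's IOSLEVEL is not numerically less than the device's UCBLEVEL.

NORMAL_IOSLEVEL = 0x01   # ordinary application I/O (X'01', per the text)
DDR_IOSLEVEL = 0x0C      # assumed stand-in for the DDR level

def io_allowed(iosb_level, ucb_level):
    """The priority rule: equal or higher IOSLEVEL passes the gate."""
    return iosb_level >= ucb_level

def quiesce(ucb):
    """Raise the UCBLEVEL to the DDR level so ordinary I/O queues while
    TDMF's own DDR-level requests still pass."""
    ucb["level"] = DDR_IOSLEVEL

def resume(ucb):
    """Restore the normal level so queued application I/O can proceed."""
    ucb["level"] = NORMAL_IOSLEVEL
```

While quiesced, application I/O at the normal level is held; TDMF's own synchronization I/O, issued at the DDR level, still runs against the volume.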
- This routine is to ensure during volume termination processing that a UCB is not inadvertently left with a UCBLEVEL which is not equal to the normal UCBLEVEL of X'01'.
- the TDMF I/O Monitor (module TDMFIMON) examines the TDMFDDTE to determine if the TDMF system still requires the module to monitor I/O operations being executed against the migration volumes. This routine indicates to TDMFIMON that monitoring of I/O operations is no longer required.
- MS ADDRESSABILITY Set up standard system addressing to the system control block for the system (Master or Slave).
- TDMFMSE first system entry control block
- TDMFMSV first volume control block
- TDMFMSVE first volume entry control block
- Provide addressability to the control block used to communicate with the TDMF TSO Monitor.
- This control block provides a communications vehicle between the TDMF main task and the TSO user executing the TDMF Monitor.
- In order to allow multiple systems in a TDMF session to communicate with each other, the status of all volumes and all systems during volume migration processing is recorded in the COMMDS. To protect the data integrity of the data within the COMMDS, the TDMF system logically provides protection via a standard MVS reserve or enqueue macro and physically provides protection with an actual hardware reserve channel command word which will be issued during the read of system record zero. This will prevent multiple TDMF systems from attempting to use the information recorded in the COMMDS at the same instant in time.
- RELEASE COM At the end of the update of system record zero in the COMMDS, a logical MVS dequeue operation is performed to indicate that the COMMDS may be accessed by another TDMF system.
- the actual physical release operation is dependent upon whether the MVS operating system has a physical reserve upon the volume belonging to another application using the same DASD volume containing the COMMDS. MVS will not release the physical protection until no applications still require any physical volume protection.
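The COMMDS serialization described above can be modeled as a mutual-exclusion lock. This is only a toy analogue: real TDMF uses the MVS RESERVE/ENQ/DEQ macros plus a hardware reserve CCW on the read of system record zero, not a threading primitive:

```python
import threading

class CommdsLock:
    """Toy model of COMMDS serialization: a logical enqueue plus a
    physical hardware reserve taken while reading system record zero,
    released after the record-zero update."""

    def __init__(self):
        self._lock = threading.Lock()

    def reserve(self):
        # Logical enqueue + physical reserve: exclusive access to the
        # COMMDS for this system until release().
        self._lock.acquire()

    def release(self):
        # Logical dequeue.  In real MVS the physical release is deferred
        # while any other application still holds a reserve on the volume.
        self._lock.release()
```

While one system holds the reserve, every other system's attempt to read system record zero blocks, which is exactly the property that keeps two TDMF systems from updating the shared state at the same instant.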
- TDMFVMSG TDMF volume message control block
- UCB UNALLOCATE Allow the TDMF system to clear allocation information placed in the UCB during activate volume processing in CSECT TDMFAVOL.
- ESTAE MVS Extended Set Task Abend Exit
- VOLUME PROCESSING SYSTEM DEAD routine located in this CSECT will be invoked on all systems that detect that this system is "dead". These routines will terminate all volume migrations, in a controlled fashion, based upon each volume's current migration phase or sub-phase, for each system, in the TDMF life cycle sequence. These routines will modify the normal volume processing loop described in CSECT TDMFVOL previously.
- if any system detects that a system is "dead", mark the appropriate system's entry that it is considered "dead", and set the MSV_(phase)MASK to the appropriate phase to either terminate the volume migrations, back out the volume migrations, or consider the volume migrations complete if swap processing has completed. Ensure that any volume that has had I/O operations quiesced to it be resumed.
- NEXT VOLUME PROCESSING SYSTEM DEAD Increment all system control blocks to their next respective entries and if all volumes have not been processed, go to VOLUME PROCESSING SYSTEM DEAD. If all volumes have been processed, determine the amount of time that this system should wait before reawakening.
- This phase is to terminate the volume processing for a specific volume.
- volume processing is complete for this volume, go to NEXT VOLUME PROCESSING in CSECT TDMFVOL.
- TDMFIMON will signal the error condition by turning off a bit in the TDMFDDTE control block indicating that the I/O Monitor is no longer active. Therefore, if the I/O Monitor is no longer active, go to MONITOR DEAD.
- if this is a Master system, go to CHECK FOR SLAVE TERMINATION REQUEST. Otherwise, this is a Slave system, which will examine the Master system's entry to determine if a failure occurred upon the Master system and whether the Master system requested system termination of the TDMF session on all Slave systems.
- volume termination processing has completed for this Slave system, go to NEXT VOLUME PROCESSING in CSECT TDMFVOL. If volume termination processing has not completed upon this Slave system, go to
- volume termination request function If the volume termination request function has been set, go to NORMAL TERMINATION CHECK. If any Slave system has requested volume back-out processing for this volume, set that Slave system's back-out volume request bit in MSV_BMASK to zero. Set the request function in MSV_BMASK to one to cause all Slave systems to invoke volume back-out processing. Go to
- NOT BACKOUT_REQUEST Determine if any Slave system has requested termination of this volume migration. If no, go to NORMAL TERMINATION CHECK. If any Slave system has requested volume termination processing for this volume, set that Slave system's volume termination request bit in MSV_TMASK to zero. Set the request function bit in MSV_TMASK to one to cause all Slave systems to invoke volume termination processing. Go to
- volume termination request function If the volume termination request function has been set, go to NORMAL TERMINATION CHECK. Terminate copy sub-task. If the synchronization sub-phase ECB is waiting to be posted, post the sub-phase telling the sub-phase to terminate, then go to TERM QUIESCE CHECK. If the refresh sub-phase ECB is waiting to be posted, post the sub-phase telling the sub-phase to terminate.
- SKIP CUSTOMER TERMINATE Get addressability to the TDMFTSOM control block. If TDMF has already acknowledged a change in the synchronization goal time value, go to TEST CUSTOMER TERMINATE. If the customer has not requested a change in volume synchronization goal time, go to TEST CUSTOMER TERMINATE. Acknowledge volume synchronization goal time change and save the value for the synchronization goal.
- TEST CUSTOMER TERMINATE If TDMF has already acknowledged a volume termination request, go to
- SKIP CUSTOMER TERMINATE If the customer has not requested volume termination, go to SKIP CUSTOMER TERMINATE. If the customer has requested volume termination, acknowledge the volume termination request and go to GIVE_SLAVE_TERM_MSG. SKIP CUSTOMER TERMINATE:
- if volume termination is not requested (MSV_TMASK bit zero not set), then go to TDMFRVOL. If any Slave system has not processed the volume termination request, go to VOLUME TERMINATION SLAVE. If this is a Slave system, go to NEXT VOLUME PROCESSING. VOLUME TERMINATION PROCESSING MASTER:
- CHECK UCB ALLOCATION If the number of systems that are active is zero, go to CHECK UCB ALLOCATION. Decrement the number of systems active and mark this system as inactive. Go to CHECK UCB ALLOCATION.
- if volume migration is not active, via a check of MSV_AMASK on the Slave system, or MSV_VOL_FLAG on a Master system, return to the caller. If the volume is backed out or swapped, return to the caller. If the I/O Monitor is in "quiet mode" and not currently monitoring for I/O operations, return to the caller. If the I/O Monitor is not active, return to the caller signaling that the I/O Monitor is "dead". Invoke termination of volume migration. This check is made for both the source and target volumes in the migration pair. If the TDMFDDTE control block is not properly chained to the MVS UCB, return the same error condition.
- This phase is to resume the volume processing for a specific volume.
- the amount of time that a volume cannot successfully execute I/O operations is kept to a minimum.
- the Master system waits for a positive acknowledgment from all Slave systems prior to executing a function that has been requested.
- the TDMF system design allows the resume operation to take place immediately upon the Master system without waiting for a positive acknowledgment from all Slave systems. This minimizes the impact above.
- the normal next phase is the volume termination phase, and that phase will not be allowed to be requested until the Master system has received a positive acknowledgment from all Slave systems that the volume has successfully resumed.
- resume processing has not been requested via MSV RMASK, go to TDMFBVOL. If this system has already resumed processing for this volume as indicated in MSV RMASK or in MSV VOL FLAG, then go to NEXT VOLUME PROCESSING in CSECT TDMFVOL.
- volume back-out processing is invoked due to one of two possible failures during swap processing.
- the first possible failure can be due to a failure while re-writing the volume serial numbers on the source and target volumes.
- the second possible failure can occur during the actual swap process when the addresses of the UCBs are modified.
- this system does not require back-out processing from the swap function, go to REWRITE VOLUME LABELS.
- the source volume is a JES3 system managed device, issue a Subsystem Function Request to JES3 to validate and indicate that a swap function is starting and that the UCB addresses involved in the back-out processing are correct and that back-out processing may continue. Invoke back -out processing by executing procedures documented in CSECT TDMFSVOL, in order to reverse the original swap process.
- the source volume is a JES3 system managed device, issue a Subsystem
- REWRITE VOLUME LABELS If this is a Slave system, mark back-out volume processing complete in MSV BMASK for this system, and go to NEXT VOLUME PROCESSING in CSECT TDMFVOL. Call TDMFIVOL passing a request of write volume label with the addresses of the source and target UCBs in order to re-write the volume serial numbers. Mark the target volume offline. Set the appropriate bit in MSV VOL FLAG to indicate that back-out processing is complete for the volume. Set the resume request function bit in MSV RMASK. If the number of systems is equal to one, go to TDMFRVOL. LISTING 1 TDMF LOGIC © 1998 AMDAHL CORPORATION
- the purpose of this phase is to perform swap processing for a specific volume.
- swap processing has not been requested via MSV SMASK, go to TDMFQVOL. If this system has already completed swap processing for this volume as indicated in MSV SMASK or in MSV VOL FLAG, then go to NEXT VOLUME PROCESSING in CSECT TDMFVOL.
- Standard MVS processing invokes a module called IOSVSWAP.
- Information located in the UCB Device Class Extension (DCE) for the source and target volumes may not be correct in a standard MVS environment.
- the target volumes information may not be available as the device may be offline.
- the TDMF design requires that the source and target volumes be online during the life of the TDMF migration process. This design, therefore, eliminates the possibility of the UCBs' DCE containing incorrect information.
- TDMF is designed to ensure that the UCB DCE information is properly swapped between the source and target volumes. Thus, prior to the swap process, the UCB DCE information is saved for the source and target volumes.
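The save-and-swap of DCE information described above can be illustrated with a rough Python sketch. The dictionaries stand in for the real MVS UCB and DCE structures (the field names here are assumptions, not the actual control block layout): each UCB's DCE data is saved before the swap, then the two saved copies are exchanged.

```python
def swap_dce(source_ucb: dict, target_ucb: dict) -> None:
    """Exchange the saved DCE information between two UCB records,
    so each UCB ends up describing the other volume's device data."""
    saved_source_dce = dict(source_ucb["dce"])   # saved prior to the swap
    saved_target_dce = dict(target_ucb["dce"])
    source_ucb["dce"] = saved_target_dce
    target_ucb["dce"] = saved_source_dce

# Hypothetical source/target pair of different capacities.
source = {"volser": "SRC001", "dce": {"cylinders": 3339, "device": "3390"}}
target = {"volser": "TGT001", "dce": {"cylinders": 10017, "device": "3390"}}
swap_dce(source, target)
```

Because both volumes are required to be online for the whole migration, both DCE copies are guaranteed to be available at swap time, which is the point the surrounding text makes.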
- the purpose of this phase is to perform quiesce processing for a specific volume.
- TEST DEVICE STATUS Call the IOSGEN ROUTINE to determine if the quiesce function may continue for the source volume. If the return from IOSGEN ROUTINE indicates that quiesce processing may not continue, retry 50 times after waiting 10 milliseconds each time. If quiesce processing can still not continue, go to TDMFMVOL.
- IOSGEN ROUTINE Issue an IOSLEVEL macro to set the IOSLEVEL to the DDR level for the source volume.
- IOSGEN ROUTINE If the IOSGEN ROUTINE indicates that the source volume is not quiesced, because either the source volume is active with a customer I/O operation or the source volume is in a reserve status, then go to TEST DEVICE STATUS.
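The retry discipline of TEST DEVICE STATUS (up to 50 retries, waiting 10 milliseconds each time) can be sketched as a simple polling loop. `device_ready` is a hypothetical stand-in for the IOSGEN ROUTINE check; it is not part of the original listing.

```python
import time

def try_quiesce(device_ready, retries=50, delay_s=0.010):
    """Retry the quiesce check up to `retries` times, waiting 10 ms
    between attempts; return True if quiesce processing may continue,
    False if the caller should give up (and proceed to TDMFMVOL)."""
    for _ in range(retries):
        if device_ready():
            return True
        time.sleep(delay_s)
    return False

# Simulate a volume that becomes quiescable on the third check.
attempts = {"n": 0}
def ready_after_three():
    attempts["n"] += 1
    return attempts["n"] >= 3

ok = try_quiesce(ready_after_three, delay_s=0)
```

The bounded retry count keeps the time a volume cannot execute I/O to a minimum, consistent with the design goal stated earlier for the resume phase.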
- CSECT TDMFAVOL
- the purpose of this phase is to perform activate processing for a specific volume.
- activate processing has not been requested via MSV AMASK, go to TDMFNVOL. If this system has already completed activate processing for this volume as indicated in MSV AMASK or in MSV VOL FLAG, then go to CSECT TDMFMVOL.
- This phase is to prevent the MVS operating system from changing the device status of the source and target volume during a migration. This is done by updating the
- the purpose of this phase is to perform initialization processing for a specific volume.
- initialization processing has not been requested via MSV IMASK, go to NEXT VOLUME PROCESSING. If this system has already completed initialization processing for this volume as indicated in MSV IMASK, and this is a Slave system, then go to NEXT VOLUME PROCESSING. If this Slave system has completed volume initialization processing, set the respective system bit MSV IMASK. On all systems, set an indicator in MSV TDMF ACT MASK that this TDMF system is active with at least one volume migration. Increment TDMF NUM SYS ACT which represents the number of systems that are active. Increment MSE SV NUM ACTIVE which represents the number of volumes that are active on this system. If this is a Slave system, go to
- NEXT VOLUME PROCESSING Set the activate request function bit in MSV AMASK. If the number of volumes is equal to one, go to TDMFAVOL, else go to NEXT VOLUME PROCESSING.
- the TDMF I/O Monitor monitors I/O update activity against the source volume during the life of a normal migration, from the time the volume is activated until the volume has completed quiesce processing and no further customer I/O operations are allowed against the source volume.
- the TDMF design allows the TDMF I/O Monitor (TDMFIMON) to collect information concerning the cylinders and tracks on a volume that have been updated, and this information is saved into the TDMFVRNQ control block.
- TDMFIMON TDMF I/O Monitor
- a Slave system will write the results of its output TDMFVRNQ control block to the COMMDS in a specified location for each volume in the migration process.
- the Master system reads all TDMFVRNQ control blocks for all Slave systems that have been previously written.
- during processing within this CSECT, the TDMF system must convert and merge all cylinder and track information specific to a volume (located within all TDMFVRNQ control blocks for that volume, plus the TDMFVRNQ control block still in memory on the Master system) into a volume refresh bit map (TDMFVRBM) control block.
- TDMFVRBM control block represents the cumulative update requests that have occurred during the last TDMF processing phase. If this is a Slave system, go to SLAVE UPDATE PROCESSING.
- EXCHANGE VRNQ Process all VRNQ entries located in TDMFVRNQ control block being processed, updating the active TDMFVRBM control block. Update the time stamp indicating the time of the control block that has just been processed by the Master system. If the copy sub-phase is not complete, go to NEXT VOLUME PROCESSING.
- NEXT VOLUME PROCESSING Set the volume quiesced flag in MSV VOL FLAG. If the synchronization sub-phase is not waiting to be posted, go to NEXT VOLUME PROCESSING. Call EXCHANGE VRBM. Determine if the number of cylinders and tracks to be updated during the possible synchronization sub-phase can be accomplished within the synchronization time goal value specified by the customer; if so, go to SYNCH VOLUME NOW. If the customer set the synchronization time goal value to 999 (as soon as possible), go to SYNCH VOLUME NOW. If the synchronization time goal cannot be met, go to ANOTHER REFRESH.
- this volume Indicate that this volume is ready for synchronization. If this volume is not part of a single group session and no prompt is required, then post the synchronization sub-phase to begin processing. Go to CHECK SYNCHRONIZATION COMPLETE. If this volume is not part of a single group session and a prompt is required, go to NORMAL PROMPT TESTING. Since this volume is a part of a single group session, go to ANOTHER REFRESH if volumes in the session have not reached a synchronization ready state.
- ANOTHER REFRESH Indicate that this volume requires another refresh pass. Indicate that the previous refresh process has completed. Indicate that the source volume is no longer quiesced. Go to SET_RESUME in CSECT TDMFVOL.
- EXCHANGE VRNQ Go to NEXT VOLUME PROCESSING. EXCHANGE VRNQ:
- This routine exchanges the pointers between active and back-up TDMFVRNQ control blocks.
- the TDMF system keeps a time stamp, by system and by volume, containing the time of the last TDMFVRNQ control block that was processed into the TDMFVRBM control block. If this time stamp is equal to the time stamp in the output TDMFVRNQ control block previously specified for output processing, this indicates that the TDMF Master system has successfully processed the output TDMFVRNQ control block into the TDMFVRBM control block.
- the back-up TDMFVRNQ control block is moved to the output TDMFVRNQ control block for subsequent processing. If the time stamps are not equal, this indicates that the TDMF Master system has not processed the information located in the TDMFVRNQ control block and that the new information located in the back-up TDMFVRNQ control block must be appended to the current output TDMFVRNQ control block, and the time stamp updated in the TDMFVRNQ control block.
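The exchange-or-append decision described in the two paragraphs above can be sketched in Python. The `state` dictionary is a stand-in for the real TDMFVRNQ control block pair and time stamps (the field names are assumptions): if the Master's processed time stamp matches the output block's stamp, the output block was consumed, so the back-up block simply becomes the new output block; otherwise the back-up entries are appended to the still-pending output.

```python
def exchange_vrnq(state, new_stamp):
    """EXCHANGE VRNQ sketch: publish the back-up block as output,
    either replacing a consumed output block or appending to a
    pending one, then leave an empty back-up block for collection."""
    if state["processed_stamp"] == state["output_stamp"]:
        # Master consumed the previous output; back-up becomes output.
        state["output"] = list(state["backup"])
    else:
        # Master has not caught up; append the new entries instead.
        state["output"].extend(state["backup"])
    state["output_stamp"] = new_stamp
    state["backup"] = []        # fresh block for further TDMFIMON collection

st = {"processed_stamp": 1, "output_stamp": 1,
      "output": [(10, 0)], "backup": [(20, 3)]}
exchange_vrnq(st, new_stamp=2)   # stamps match: back-up replaces output
st["backup"] = [(30, 1)]
exchange_vrnq(st, new_stamp=3)   # stamps differ: entries are appended
```

This double-buffering is what lets TDMFIMON keep collecting write locations without ever blocking on the Master's read of the COMMDS.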
- EXCHANGE VRBM EXCHANGE VRBM:
- This routine exchanges the pointers between active and back-up TDMFVRBM control blocks. Additionally, the routine calculates and updates counters within the TDMFVRBM control block header indicating the number of complete cylinders and the number of individual tracks that will be either refreshed or synchronized when this TDMFVRBM is used by the TDMF copy sub-task's respective sub-phase. These counts, in conjunction with the total I/O device service time accumulated by module TDMFSIO when reading or writing from the migration volumes, allow the code at label SET SYNCH above to determine if the synchronization sub-phase can be accomplished within the synchronization time goal value specified by the customer.
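The goal-time decision that these counters feed can be sketched as follows. The function and parameter names are illustrative only; the listing does not give the exact estimation formula, so this assumes the simplest one (outstanding work times average observed service time, compared to the goal), with 999 meaning "as soon as possible" per the text above.

```python
def can_synchronize(full_cylinders, single_tracks,
                    avg_cyl_io_s, avg_trk_io_s, goal_s):
    """Estimate whether the outstanding refresh work (counted in the
    TDMFVRBM header) fits within the customer's synchronization goal,
    using average service times accumulated by TDMFSIO."""
    if goal_s == 999:            # customer asked for ASAP: always proceed
        return True
    estimate = full_cylinders * avg_cyl_io_s + single_tracks * avg_trk_io_s
    return estimate <= goal_s
```

If the estimate exceeds the goal, the volume takes another refresh pass (ANOTHER REFRESH) rather than entering the synchronization sub-phase.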
- the TDMFVRBM control block is used as input to the refresh sub-phase and the synchronization sub-phase that execute as part of the copy sub-task.
- the refresh and synchronization sub-phases use the back-up TDMFVRBM control block to indicate which cylinders or tracks need to be refreshed during that sub-phase. As cylinders and tracks are refreshed, an indication is marked in a cumulative TDMFVRBM control block which indicates all cylinders or tracks that were ever refreshed during those two sub-phases.
- a TDMF system Master or Slave
- BEGIN causes a re-read of the COMMDS and the beginning of the next TDMF processing cycle.
- CSECT TDMFEVOL After setting return codes for module TDMFVOL processing.
- the TDMF design of the COMMDS provides a repository of all messages that may be issued from any TDMF system (Master or Slave) and any messages issued concerning an individual volumes' migration. This provides an audit trail within the TDMF system that may be used during or after completion of a TDMF session using the TDMF TSO Monitor.
- the TDMF design of the COMMDS provides a repository of the status of all control blocks that are used by this TDMF system (Master or Slave). This provides an audit trail within the TDMF system that may be used for diagnostic purposes during or after completion of a TDMF session using the TDMF TSO Monitor. Set up parameters to write this system's diagnostic information located in the
- This module is to monitor all customer I/O operations to the source and target volumes during the life of an active migration session.
- This module implements the primary design objective within the TDMF system, which is to ensure the data integrity of the target volume: any update activity by customer I/O operations that changes the data located on the source volume must be reflected to the target volume.
- the TDMF design ensures that only volumes involved in a migration session are monitored for I/O operations, so that I/O operations to volumes not involved within the TDMF migration session are not impacted in any fashion.
- the TDMF design allows this module to receive control twice during the life of an I/O operation to a source or target volume involved in a migration session. It first receives control via the Device Dependent Start I/O Exit from MVS' IOS when an I/O operation attempts to start upon either the source or target volume.
- This entry point runs as an extension, effectively, of MVS' IOS, running in Supervisor State, Protect Key Zero, and is disabled for external and I/O interrupts while the MVS local and IOS UCB locks are held.
- This entry point examines the starting request to determine if this request is for the source or target volume/device involved in the TDMF migration session.
- Module TDMFVOL
- CSECT TDMFAVOL creates a TDMFDDTE control block with its entry address in the DDT SIO exit location pointing to the SIO main entry point of this module.
- the TDMF design demands that any errors detected by, or that cannot be handled by, the code within this module cause the executing TDMF main task to be notified that it is necessary to terminate this volume's migration due to an error, to prevent loss of data integrity to the customer's data being migrated to the target volume.
- the TDMFIMON communicates this error condition to the TDMF main task via deactivating the I/O Monitor which will be recognized by the main TDMF task in the routine logic of label CHECK IMON in CSECT TDMFTVOL of module TDMFVOL, and by similar logic within the TDMFCOPY module for the copy sub-task.
- the TDMF design requires that any customer I/O operations that would destroy information located on the target volume being migrated be prevented. If the I/O operation is being directed to a source volume, the I/O operation must be allowed to continue without hindrance.
- the second entry point will receive control at this module's Disabled Interrupt Exit (DIE) address. This address is placed into the IOSB in lieu of any other DIE address that may exist, after saving the DIE address if necessary.
- the DIE routine will receive control at the completion of the I/O operation on the source volume or upon receipt of an interrupt due to an intermediate request (Program Control Interrupt, PCI).
- PCI Program Control Interrupt
- the second entry point receives control as stated in the above paragraph and examines the channel program that has just completed upon the source volume. Its purpose is to record, in the case of any write operation, the location (cylinder and track) upon the source volume that was affected by the execution of this channel program.
- the information is collected in an active TDMFVRNQ control block.
- Each executing TDMF system will periodically exchange the pointers between an active and back-up TDMFVRNQ control block, providing an empty TDMFVRNQ control block for additional data collection by TDMFIMON for this source volume's migration.
- These back-up TDMFVRNQ control blocks will then be processed by the TDMF main task, module TDMFVOL, CSECT TDMFMVOL in the routine EXCHANGE VRNQ.
- This TDMF design thus ensures data integrity based upon the architectural structure of channel programs and their channel command words. Therefore, the structure of files, data sets, and/or data bases are not compromised and cannot impact the data integrity of the migrating volume.
- channel programs that may be directed to a source volume that do not affect the data located upon the source volume also cannot impact the data integrity of the migrating volume.
- a third entry point is allocated to be used for potential future implementation of I/O operations using suspend/resume channel operations protocol.
- a fourth entry point is used within the TDMF system to ensure the data integrity of the COMMDS during its reserve processing sequence.
- This entry point provides compatibility with MVS' IOS and physical data integrity regardless of the physical volume protection normally invoked by MVS' Global Resource Serialization (GRS) component or replacement component used by a customer, and properly manages both reserve and reserve pending conditions.
- GRS Global Resource Serialization
- This component executes in 31-bit addressability mode and is protected by a Functional Recovery Routine (FRR) to ensure non-destructive termination of a volume's migration and to prevent abnormal termination of MVS' IOS component. Additionally, the first two entry points always validate all TDMF control blocks upon receipt of control to ensure proper execution of the system. Any validation error of these control blocks results in the termination of the volume's migration.
- FRR Functional Recovery Routine
- This entry point finds via the I/O Queue (IOQ) Element, the address of the IOSB and the address of the UCB, that this I/O request is being directed to. If the TDMFDDTE control block indicates that the TDMF I/O Monitor is in "quiet mode", this code returns to the standard MVS SIO Exit address that has been previously saved in the TDMFDDTE control block.
- IOQ I/O Queue
- the address stored in the IOSDIE field of the IOSB if it exists, is saved in the TDMFDDTE.
- the address in the IOSDIE field of the IOSB is then replaced with an address to provide re-entry into the second TDMFIMON entry point.
- control is then passed to the normal MVS DDT SIO entry point that has been previously saved.
- This entry point is passed the address of the IOSB and the UCB address, that this I/O request is being directed to. If the TDMFDDTE control block indicates that the TDMF I/O Monitor is in "quiet mode", this code returns to the caller which is MVS' IOS if no IOSDIE address was saved in the TDMFDDTE control block or to the address that was saved in the TDMFDDTE control block during the previous entry point processing.
- the IOSDIE pointer in the IOSB is normally used to transfer control to a customer supplied routine to be used in the case of a Program Controlled Interrupt (PCI).
- PCI Program Controlled Interrupt
- the PCI routine receiving control might dynamically modify the CCWs within the channel program based upon application I/O operational requirements.
- the IOSDIE address is temporarily restored to its original value during the life of execution of the PCI/DIE routine specified.
- IOSDIE routine address in the IOSB is again restored to the address of the DISABLED INTERRUPT EXIT ENTRY POINT above. This ensures in the case of a non-terminating or non-ending I/O operation that the TDMFIMON DIE entry point will again receive control during presentation of the next I/O interrupt which may either be another intermediate (PCI) interrupt or the final I/O interrupt signaling completion of the channel program.
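The save-restore-reinstall chaining of the IOSDIE address across the two entry points can be sketched in Python. The dictionaries and function names here stand in for the real IOSB, TDMFDDTE, and entry points (all names are illustrative): the SIO exit saves any customer DIE and installs the TDMFIMON DIE; the DIE entry scans the channel program, drives the saved customer DIE with the original address temporarily restored, then re-installs itself for the next PCI or ending interrupt.

```python
events = []

def customer_die(iosb):
    """Hypothetical customer-supplied PCI/DIE routine."""
    events.append("customer PCI/DIE")

def on_start_io(iosb, ddte, tdmf_die):
    """SIO exit: save any existing IOSDIE address in the TDMFDDTE,
    then replace it with the TDMFIMON DIE entry point."""
    ddte["saved_die"] = iosb.get("iosdie")
    iosb["iosdie"] = tdmf_die

def on_interrupt(iosb, ddte, tdmf_die, scan_channel_program):
    """DIE entry: scan the completed CCWs, temporarily restore a saved
    customer DIE while driving it, then re-install the TDMF DIE so the
    next intermediate (PCI) or final interrupt is also intercepted."""
    scan_channel_program()
    saved = ddte.get("saved_die")
    if saved is not None:
        iosb["iosdie"] = saved       # restore for the customer routine
        saved(iosb)
        iosb["iosdie"] = tdmf_die    # re-install for the next interrupt

iosb, ddte = {"iosdie": customer_die}, {}
on_start_io(iosb, ddte, "TDMFIMON_DIE")
on_interrupt(iosb, ddte, "TDMFIMON_DIE", lambda: events.append("scan CCWs"))
```

The re-install step is the key point of the surrounding text: without it, a non-ending channel program's next interrupt would bypass TDMFIMON entirely.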
- PCI intermediate
- the channel program that has received an I/O interrupt is scanned from the first CCW that has not been previously scanned until the last CCW that has been executed.
- each CCW operation code (op code) is used to index into a translate and test table providing multiple functions.
- the first is the address of a routine to receive control due to the appearance of the specified CCW op code.
- the second is the virtual address of the specified data operand in the CCW after being converted from a real address to a virtual address.
- the third is an indication whether this virtual address may be used by the receiving dependent op code routine.
- If a dependent op code routine requires that the virtual address exist, and it is not available for use by the dependent op code routine, an error indication is signaled and the volume migration termination request is invoked. This may occur if the operating system pages out the MVS Page Table Entries for the address space issuing the I/O operation.
- the dependent op code routine address returned may be the address of a CCW ignore routine, which implies that this operation code does not require further examination. An example of this would be a CCW op code for a read operation of any type, since a read op code does not affect data integrity.
- the dependent op code address returned may be an address of an invalid CCW op code routine, which implies that this op code may not be used during the life of the migration process as it may inadvertently affect the data integrity of the volume in an unknown fashion. An example of this would be the existence of a diagnostic write CCW op code.
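The three routine classes just described (ignore, record-a-write, invalid) can be sketched as a dispatch table. This is only an illustration: the op code values below are hypothetical placeholders, and the real module indexes all 256 possible CCW op codes through a translate-and-test table rather than a Python dictionary.

```python
# Hypothetical op code values, for illustration only.
READ_DATA, WRITE_DATA, DIAG_WRITE = 0x06, 0x05, 0x73

def ignore_ccw(ccw, updates):
    """Reads need no action: a read op code cannot affect data integrity."""

def record_write(ccw, updates):
    """A write on the source volume is noted for later refresh."""
    updates.append((ccw["cyl"], ccw["trk"]))

def invalid_ccw(ccw, updates):
    """E.g. a diagnostic write: not permitted during a migration."""
    raise RuntimeError("invalid CCW op code during migration")

OP_TABLE = {READ_DATA: ignore_ccw, WRITE_DATA: record_write,
            DIAG_WRITE: invalid_ccw}

def scan_channel_program(ccws, updates):
    """Dispatch each CCW to its dependent op code routine."""
    for ccw in ccws:
        OP_TABLE[ccw["op"]](ccw, updates)

updates = []
scan_channel_program([{"op": READ_DATA, "cyl": 7, "trk": 2},
                      {"op": WRITE_DATA, "cyl": 25, "trk": 3}], updates)
```

An invalid op code raises an error here, mirroring the text's statement that such an op code triggers termination of the volume's migration.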
- a normal dependent op code routine will receive control and save any appropriate control information that may be used later in the processing of the channel program and, if the I/O operation indicates that any write operation is occurring on the source volume, calls the routine that builds a VRNQ entry describing the update.
- Each entry consists of a sixteen-bit half-word entry indicating the affected cylinder number of the source volume that is being updated, an eight-bit (one byte) starting track number of that cylinder on the source volume that is being updated, and an eight-bit (one byte) number of tracks of that cylinder that are being updated.
- a VRNQ entry encompassing the complete cylinder is created. For example, an entry of cylinder 25, starting track zero, with the number of tracks equal to 15, which is the maximum number of tracks per cylinder within the current DASD architecture supported.
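The VRNQ entry layout just described (16-bit cylinder, 8-bit starting track, 8-bit track count) can be sketched as bit packing. The exact field order within the word is an assumption for illustration; the field widths follow the text.

```python
def pack_vrnq(cylinder, start_track, num_tracks):
    """Pack one VRNQ entry: 16-bit cylinder number, 8-bit starting
    track, 8-bit number of tracks (field order is illustrative)."""
    assert 0 <= cylinder < 1 << 16
    assert 0 <= start_track < 256 and 0 < num_tracks < 256
    return (cylinder << 16) | (start_track << 8) | num_tracks

def unpack_vrnq(entry):
    """Recover (cylinder, start_track, num_tracks) from a packed entry."""
    return entry >> 16, (entry >> 8) & 0xFF, entry & 0xFF

# The full-cylinder example from the text: cylinder 25, starting track
# zero, 15 tracks (the maximum tracks per cylinder in the supported
# DASD architecture).
full_cylinder_entry = pack_vrnq(25, 0, 15)
```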
- The purpose of this module is to provide the functions required by the TDMF copy sub-task, which is attached as a sub-task by module TDMFVOL, CSECT TDMFAVOL, and which is created with an ATTACHX macro during its processing.
- This module implements three sub-task phases: the copy sub-phase, the refresh sub-phase, and the synchronization sub-phase. Additionally, a preceding sub-phase to the copy sub-phase may be executed if the customer requested the purge option when migrating from a source volume of one size to a target volume of a larger size.
- This module executes in 31-bit mode, Supervisor State, Protect Key Zero.
- This module contains a MVS Extended Set Task Abend Exit (ESTAE) recovery routine. This routine ensures that the TDMFCOPY sub-task receives control in the event that the TDMF copy sub-task attempts an abend condition. If any sub-phase below is posted with a termination request, the sub-phase and the copy sub-task are terminated with an error condition.
- ESTAE MVS Extended Set Task Abend Exit
- each cylinder on the target volume is effectively erased starting with the number of cylinders existing upon the source volume plus one until the last cylinder of the target volume has been erased.
- a Volume Refresh Bit Map (VRBM) entry consists of a sixteen-bit half-word field. Bit 15 of the half-word indicates that all the information located upon that cylinder represented by this entry must be refreshed, meaning all tracks read from the source volume and written to the target volume. If bit 15 is off, and any other bits (zero through 14) are equal to one, those respective bits indicate which selective tracks must be read from the designated cylinder on the source volume and copied to the respective cylinder on the target volume.
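The VRBM entry encoding above can be sketched as follows. Mapping bit t to track t within bits zero through 14 is an assumption for illustration; the bit-15 "whole cylinder" convention follows the text.

```python
FULL_CYLINDER = 1 << 15   # bit 15 of the 16-bit half-word entry

def tracks_to_refresh(vrbm_entry):
    """Decode one VRBM entry: if bit 15 is set, all tracks (0-14) of
    the cylinder must be refreshed; otherwise bits 0-14 name the
    individual tracks to copy from source to target cylinder."""
    if vrbm_entry & FULL_CYLINDER:
        return list(range(15))
    return [t for t in range(15) if vrbm_entry & (1 << t)]
```

With 15 tracks per cylinder in the supported DASD geometry, one half-word per cylinder suffices to describe any mix of full-cylinder and selective-track refresh work.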
- each cylinder on the source volume is read as specified, and written as specified to the target volume, starting with cylinder zero until all VRNQ entries have been processed.
- These I/O operations are requested based upon parameters passed to module TDMFIVOL, CSECT TDMFTVOL.
- this information is propagated to the cumulative TDMFVRBM control block, signifying that these cylinders and/or tracks have been refreshed.
- the refresh sub-phase then waits until it has been posted by the TDMF main task as to whether to continue to the synchronization sub-phase or to begin processing a subsequent VOLUME REFRESH pass.
- VOLUME SYNCHRONIZATION
- a Volume Refresh Bit Map (VRBM) entry consists of a sixteen-bit half-word field. Bit 15 of the half-word indicates that all the information located upon that cylinder represented by this entry must be refreshed, meaning all tracks read from the source volume and written to the target volume. If bit 15 is off, and any other bits (zero through 14) are equal to one, those respective bits indicate which selective tracks must be read from the designated cylinder on the source volume and copied to the respective cylinder on the target volume.
- Each cylinder on the source volume is read as specified, and written as specified to the target volume, starting with cylinder zero until all VRNQ entries have been processed.
- I/O operations are requested based upon parameters passed to module TDMFIVOL, CSECT TDMFIVOL.
- this information is propagated to the cumulative TDMFVRBM control block, signifying that these cylinders and/or tracks have been synchronized.
- CHECK IMON Each cylinder on the source volume is read a cylinder at a time, and each cylinder on the target volume is read a cylinder at a time.
- the data read from both volumes is compared, byte-by-byte, to ensure data integrity has not been compromised. This operation starts with cylinder zero until the last cylinder of the source volume has been read and compared to the target volume.
- the main TDMF task has requested termination of the copy sub-phase, return to the caller. If the I/O Monitor is not active, return to the caller signaling that the I/O Monitor is "dead". Invoke termination of volume migration. This check is made for both the source and target volumes in the migration pair. If the TDMFDDTE control block is not properly chained to the MVS UCB, return the same error condition.
- This module is to generate channel programs consisting of Channel Command Words (CCWs) to implement I/O operations being requested to the COMMDS.
- CCWs Channel Command Words
- the caller may request that the system initialize the COMMDS, or to selectively read or write caller specified TDMF control blocks as required by the caller. This module is called from the TDMF main task only.
- This module contains a MVS Extended Set Task Abend Exit (ESTAE) recovery routine. This routine ensures that the TDMFICOM module receives control in the event that the TDMFICOM module attempts an abend condition. This code executes in 31-bit mode, Supervisor State, Protect Key Zero.
- the TDMF design provides that all control blocks are 4K in length or a logical multiple of 4K in length, or that multiple control blocks may exist evenly into one 4K page.
- Because the COMMDS may reside upon either a 3380 or a 3390 device type, in order to provide compatibility only ten data records, each 4K in length, are used on any track within the COMMDS to contain the TDMF control blocks.
- If the tracks are allocated on a 3390 device type, two additional data records containing dummy data are written on each track.
- TDMF COMMDS Due to the design of the TDMF COMMDS, multiple write operations from multiple TDMF systems may be executed simultaneously to all cylinders within the COMMDS without reserve/release protections, except logical cylinder zero of the COMMDS. This is because each TDMF system is allocated a cylinder into which the respective information about that system may be written without regard to another system writing to the same cylinder. In other words, the Master system (system number one) always writes its data onto relative cylinder one.
- the first Slave system (system number two) writes its data on logical cylinder two, and so on.
- each cylinder allocated to a system provides up to 64 TDMFVMSG control blocks to be written across tracks zero through six of the specified cylinder.
- the last six records of track six on the specified cylinder will remain as dummy formatted records.
- Tracks seven through ten of the specified cylinder will contain all TDMFWORK/TDMFDDTE control block information for up to 64 volumes. The last eight records of track ten on the specified cylinder will remain as dummy formatted records. Track eleven on the specified cylinder will contain all records related to the main TDMF control block for the specified system. This contains information including system messages, messages that may have been received from the TDMF TSO Monitor from the TDMFTSOM control block, plus information from the TDMF authorization checking mechanisms. This information currently requires four records and thus, the additional six records will remain as dummy formatted records. Tracks twelve through fourteen currently contain only formatted dummy records and are available for future expansion.
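The per-system cylinder layout described above (one cylinder per system, ten 4K records per track) can be sketched with two small helpers. These are illustrative only; the real addressing is done in channel programs, and the function names are assumptions.

```python
RECORDS_PER_TRACK = 10   # ten 4K records per track, for 3380/3390 compatibility

def system_cylinder(system_number):
    """Each TDMF system owns one logical cylinder of the COMMDS: the
    Master (system one) writes onto relative cylinder one, the first
    Slave (system two) onto cylinder two, and so on, which is why those
    writes need no reserve/release serialization."""
    return system_number

def record_location(record_index):
    """Map a linear 4K record index within a cylinder to (track, record)."""
    return divmod(record_index, RECORDS_PER_TRACK)
```

As a consistency check against the text: the 64th TDMFVMSG control block (index 63) lands at track six, record three, so records four through nine of track six stay dummy, matching the "last six records of track six" remark above.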
- Logical cylinder zero contains what is referred to as System Record Zero on tracks zero and one of that logical cylinder.
- System Record Zero consists of 20 4K pages containing all the information in the TDMFMSE, TDMFMSV, and TDMFVSVE control blocks, collectively.
- Relative tracks two through 14 on logical cylinder zero contain specified locations to hold the TDMFVRNQ control blocks required by the TDMF session. Since only ten records per track are allowed to contain data, there are thirteen tracks that may contain all possible TDMFVRNQ control blocks within the TDMF session as currently implemented.
- An artificial implementation restriction has been placed into the TDMF design: the number of systems involved in a TDMF migration session times the number of volumes within the migration session may not exceed 128.
- This architectural implementation restriction could and may be removed in future releases of TDMF as required by using additional cylinders to contain the TDMFVRNQ control blocks.
- this artificial implementation restriction should have only a minor impact on customer requirements.
- the TDMF system was designed to support 32 operating systems and 64 volumes within one TDMF session.
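The session limits just stated (32 systems, 64 volumes, and systems times volumes not exceeding 128) reduce to a simple validation, sketched here with an illustrative function name:

```python
MAX_SYSTEMS, MAX_VOLUMES, MAX_PRODUCT = 32, 64, 128

def session_allowed(num_systems, num_volumes):
    """Check the current implementation restriction: the product of
    systems and volumes may not exceed 128, within the designed maxima
    of 32 systems and 64 volumes per TDMF session."""
    return (num_systems <= MAX_SYSTEMS and num_volumes <= MAX_VOLUMES
            and num_systems * num_volumes <= MAX_PRODUCT)
```

For example, two systems with 64 volumes is permitted (product 128), while four systems with 64 volumes is not.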
- All I/O operations to the COMMDS are represented by an IOSB that contains an IOSLEVEL of one, indicating normal priority, and that indicates that a miscellaneous driver is being used, that the IOSB is using command chaining, that the channel program is complete, and that the MVS Input/Output Supervisor should by-pass channel program prefixing. This prevents any manipulation of the constructed channel programs by MVS' IOS component.
- This routine is used to initialize the COMMDS and to generate channel programs consisting of Channel Command Words (CCWs). It may only be called by the TDMF Master system.
- CCWs Channel Command Words
- the information in the TDMFMSE, TDMFMSV, TDMFMSVE, and TDMFWORK control blocks is updated to contain the assigned cylinder, track and record locations.
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- READ COMMDS This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively read caller specified TDMF control blocks from the COMMDS.
- CCWs Channel Command Words
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively update/write user specified TDMF control blocks or a combination thereof, to the COMMDS.
- CCWs Channel Command Words
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This module is to generate channel programs consisting of Channel Command Words (CCWs) to implement I/O operations being requested to volumes being migrated by the TDMF session via parameters passed by the caller.
- CCWs Channel Command Words
- the caller may request that the system:
- Miscellaneous I/O functions include items such as Read Device Characteristics information, Sense ID information, Sense Subsystem Status information, Read
- Configuration Data information Read Volume Label information, Write Volume information, reserve the volume, and release the volume.
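The miscellaneous I/O functions listed above amount to a small dispatch from a requested function to the channel command that implements it. A hedged sketch follows; the function-to-mnemonic mapping is illustrative only (real CCW opcodes are one-byte values defined by the device architecture, not strings).

```python
# Map each miscellaneous I/O function named in the text to an
# illustrative channel-command mnemonic. The mnemonics and the table
# itself are assumptions for illustration.
MISC_FUNCTIONS = {
    "read_device_characteristics": "RDC",
    "sense_id": "SNSID",
    "sense_subsystem_status": "SNSS",
    "read_configuration_data": "RCD",
    "read_volume_label": "READ_LABEL",
    "write_volume": "WRITE_VOL",
    "reserve": "RESERVE",
    "release": "RELEASE",
}


def build_misc_ccw(function):
    """Return a single-command channel program for the requested
    miscellaneous function, or raise for an unknown function."""
    try:
        return [MISC_FUNCTIONS[function]]
    except KeyError:
        raise ValueError(f"unsupported miscellaneous I/O function: {function}")


assert build_misc_ccw("reserve") == ["RESERVE"]
```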
- This module may be called from either the TDMF main task or a TDMF copy sub-task.
- This module contains an MVS Extended Set Task Abend Exit (ESTAE) recovery routine. This routine ensures that the TDMFIVOL module receives control in the event that the TDMFIVOL module encounters an abend condition. This code executes in 31-bit mode, Supervisor State, Protect Key Zero.
- This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively read all tracks on the caller-specified cylinder from the user-specified volume.
- CCWs Channel Command Words
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively read the caller-specified track from the user-specified volume.
- CCWs Channel Command Words
- a routine is called to build the Service Request Block (SRB) and the Input/Output Supervisor control Block (IOSB).
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This SRB/IOSB pair is passed to the TDMFSIO module to actually issue the I/O request and await its completion by the operating system.
- control is returned to the calling module.
- All I/O operations to a requested volume are represented by an IOSB that contains the IOSLEVEL used by Dynamic Device Reconfiguration (DDR), and that indicates that a miscellaneous driver is being used, that the IOSB is using command chaining, that the channel program is complete, and that the MVS Input/Output Supervisor should bypass channel program prefixing. This prevents any manipulation of the constructed channel programs by MVS' IOS component.
- DDR Dynamic Device Reconfiguration
- the setting of the IOSLEVEL to that used by DDR ensures that I/O operations may be requested against a migrating volume even if the volume is logically quiesced.
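The effect of choosing the DDR IOSLEVEL can be modeled as a priority comparison: I/O issued at or above the quiesce level is still accepted while ordinary I/O is held. The numeric levels and the comparison below are a simplified assumption; the actual IOSLEVEL values and quiesce mechanism are internal to MVS.

```python
# Simplified model: a logically quiesced volume holds ordinary I/O
# but still accepts I/O issued at the DDR level. Numeric levels are
# illustrative assumptions, not real MVS values.
IOSLEVEL_NORMAL = 1
IOSLEVEL_DDR = 4  # assumed elevated level for DDR


class Volume:
    def __init__(self):
        self.quiesced = False

    def accepts(self, ioslevel):
        """Return True if an I/O at this level would be started."""
        if self.quiesced:
            return ioslevel >= IOSLEVEL_DDR  # only DDR-level I/O proceeds
        return True


vol = Volume()
vol.quiesced = True
assert not vol.accepts(IOSLEVEL_NORMAL)  # ordinary I/O is held
assert vol.accepts(IOSLEVEL_DDR)         # TDMF's I/O still runs
```

This is why TDMF can continue to read and write a volume during the brief window in which normal application I/O is quiesced.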
- This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively read specific tracks on the caller-specified cylinder from the user-specified volume. Multiple tracks may be read with one request and are not required to be adjacent.
- CCWs Channel Command Words
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
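A channel program that reads several non-adjacent tracks on one cylinder chains a positioning command and a read per track into a single command-chained program. The sketch below models that chaining; the CCW mnemonics and tuple layout are illustrative assumptions, not actual ECKD opcodes.

```python
def build_read_tracks_program(cylinder, tracks):
    """Build a command-chained channel program that reads the given
    tracks (which need not be adjacent) on one cylinder.

    Each track gets its own positioning + read pair; command chaining
    ties the pairs into one channel program, as the text describes.
    Mnemonics are illustrative stand-ins for real CCW opcodes.
    """
    program = []
    for head in sorted(tracks):
        program.append(("SEEK", cylinder, head))        # position to track
        program.append(("READ_TRACK", cylinder, head))  # read full track
    return program


prog = build_read_tracks_program(100, [3, 0, 7])  # non-adjacent tracks
assert prog[0] == ("SEEK", 100, 0)
assert len(prog) == 6  # two CCWs per requested track
```

The same shape, with a write command in place of the read, covers the WRITE-tracks routines described below.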
- This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively write all tracks on the caller-specified cylinder to the user-specified volume.
- CCWs Channel Command Words
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively write the caller-specified track to the user-specified volume.
- CCWs Channel Command Words
- a routine is called to build the Service Request Block (SRB) and the Input/Output Supervisor control Block (IOSB).
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This routine is used to generate a channel program consisting of Channel Command Words (CCWs) and to selectively write specific tracks on the caller-specified cylinder to the user-specified volume. Multiple tracks may be written with one request and are not required to be adjacent.
- a routine is called to build the Service Request Block (SRB) and the Input/Output Supervisor control Block (IOSB).
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This routine is used to generate a channel program consisting of a single Channel Command Word (CCW) to perform the caller-specified I/O operation.
- CCW Channel Command Word
- SRB Service Request Block
- IOSB Input/Output Supervisor control Block
- This module issues a request to MVS' Input/Output Supervisor (IOS) component to perform the I/O operation represented by the Input/Output Supervisor control Block (IOSB), in conjunction with its Service Request Block (SRB), as requested by the caller.
- IOS Input/Output Supervisor
- SRB Service Request Block
- This module may be called from module TDMFICOM and module TDMFIVOL. Upon completion of the I/O operation requested, control will be returned to the calling module.
- This module executes in 31-bit mode, Supervisor State, Protect Key Zero. This module contains an MVS Extended Set Task Abend Exit (ESTAE) recovery routine. This routine ensures that the TDMFSIO module receives control in the event that the TDMFSIO module encounters an abend condition.
- ESTAE Extended Set Task Abend Exit
- Before passing the I/O request to the IOS component, the module ensures that the device containing the volume to which the I/O operation is to be directed is online and has available channel paths. If not, an error indication is returned to the caller. Since the TDMF design requires, in some cases, that I/O operations be attempted to a device and/or volume while I/O operations are logically quiesced, the normal MVS STARTIO macro cannot be used. This is because the MVS STARTIO compatibility interface routine, module IECVSTIO, will reset the IOSLEVEL in the IOSB to a value that may be incompatible with successfully completing the I/O operation. Therefore, the TDMF design requires an implementation that calls module IOSVSSEQ directly, so that the I/O operation may be completed even if the device and/or volume is considered quiesced.
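The TDMFSIO decision flow described above (verify the device, then hand the SRB/IOSB pair directly to the IOS back-end rather than the STARTIO compatibility path) can be sketched as follows. The dictionary layout and the `issue_direct` callable standing in for the IOS service are hypothetical.

```python
# Sketch of the TDMFSIO flow described above. The device dict and
# issue_direct callable are hypothetical stand-ins for MVS device
# state and the directly-called IOS service (the text names IOSVSSEQ).

def tdmfsio(device, srb_iosb_pair, issue_direct):
    """Return the I/O result, or an error indication, mirroring the
    checks the text describes before the I/O is handed to IOS."""
    if not device["online"]:
        return "error: device offline"
    if not device["available_paths"]:
        return "error: no channel paths"
    # Call the IOS back-end directly so the IOSLEVEL in the IOSB is
    # not reset by the STARTIO compatibility routine and quiesced
    # volumes can still be reached.
    return issue_direct(srb_iosb_pair)


dev = {"online": True, "available_paths": ["path0"]}
assert tdmfsio(dev, object(), lambda pair: "ok") == "ok"

bad = {"online": False, "available_paths": []}
assert tdmfsio(bad, None, None).startswith("error")
```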
- the elapsed time of the I/O operation is calculated, which provides device service time information.
- This device service time information is required for use by the TDMF TSO Monitor and the routine called EXCHANGE VRBM in CSECT TDMFMVOL of module TDMFVOL.
- the I/O operation is checked for any error conditions upon its completion and an indication returned to the caller of the success or failure of the I/O operation.
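Measuring device service time as the elapsed time of each I/O, as described above, reduces to timestamping around the operation. A minimal sketch; the choice of a monotonic clock is an implementation assumption.

```python
import time


def timed_io(issue, request):
    """Issue an I/O request and return (result, service_time_seconds).

    Mirrors the elapsed-time measurement the text describes, whose
    result feeds the TDMF TSO Monitor. A monotonic clock is used so
    wall-clock adjustments cannot produce a negative service time.
    """
    start = time.monotonic()
    result = issue(request)
    elapsed = time.monotonic() - start
    return result, elapsed


result, service_time = timed_io(lambda r: "done", "read-track")
assert result == "done"
assert service_time >= 0.0
```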
- the module contains normal and abnormal channel-end appendage routines and a post routine, which are called by MVS' IOS component.
- the initial REXX EXEC sets up an environment under the Interactive System Productivity Facility (ISPF), allocates all library definitions required for use by the TDMF TSO Monitor, and transfers to REXX EXEC TDMFMON.
- ISPF Interactive System Productivity Facility
- TDMFPERF — Calls module TDMFPERF, which displays performance information for all volume migrations within the customer-specified active TDMF session. This information can also be displayed for a TDMF session that has previously completed.
- OPTION 6.3 Calls module TDMFCOMF which displays the control blocks requested by control block name.
- TDMFTMON, which provides an operator interface mechanism that may be used to terminate an active volume migration, respond to the synchronization prompt, or dynamically modify the synchronization goal time value, in seconds.
- OPTION 8 Calls module TDMFUNIQ, which displays customer and site information with the additional capability to dynamically add or delete authorization keys, which allow TDMF to execute on an authorized CPU.
- This module provides the actual authority checking to determine if the CPU is authorized to execute TDMF.
- This module can be called by modules TDMFSECR, TDMFUNIQ, or TDMFICOM, executing as a part of the main task during TDMF execution.
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP98960191A EP1031084A1 (en) | 1997-11-14 | 1998-11-13 | Computer system transparent data migration |
CA002310099A CA2310099A1 (en) | 1997-11-14 | 1998-11-13 | Computer system transparent data migration |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/971,473 | 1997-11-14 | ||
US08/971,473 US6145066A (en) | 1997-11-14 | 1997-11-14 | Computer system with transparent data migration between storage volumes |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1999026143A1 true WO1999026143A1 (en) | 1999-05-27 |
Family
ID=25518433
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1998/024187 WO1999026143A1 (en) | 1997-11-14 | 1998-11-13 | Computer system transparent data migration |
Country Status (4)
Country | Link |
---|---|
US (1) | US6145066A (en) |
EP (1) | EP1031084A1 (en) |
CA (1) | CA2310099A1 (en) |
WO (1) | WO1999026143A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1170657A2 (en) * | 2000-07-06 | 2002-01-09 | Hitachi, Ltd. | Computer system |
US6557089B1 (en) | 2000-11-28 | 2003-04-29 | International Business Machines Corporation | Backup by ID-suppressed instant virtual copy then physical backup copy with ID reintroduced |
Families Citing this family (102)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6279011B1 (en) * | 1998-06-19 | 2001-08-21 | Network Appliance, Inc. | Backup and restore for heterogeneous file server environment |
US6434678B1 (en) * | 1999-02-12 | 2002-08-13 | Gtp, Inc. | Method for data storage organization |
US6698017B1 (en) * | 1999-07-16 | 2004-02-24 | Nortel Networks Limited | Software migration on an active processing element |
US6490666B1 (en) * | 1999-08-20 | 2002-12-03 | Microsoft Corporation | Buffering data in a hierarchical data storage environment |
US6757797B1 (en) * | 1999-09-30 | 2004-06-29 | Fujitsu Limited | Copying method between logical disks, disk-storage system and its storage medium |
JP3918394B2 (en) * | 2000-03-03 | 2007-05-23 | 株式会社日立製作所 | Data migration method |
US6895485B1 (en) * | 2000-12-07 | 2005-05-17 | Lsi Logic Corporation | Configuring and monitoring data volumes in a consolidated storage array using one storage array to configure the other storage arrays |
US6848021B2 (en) * | 2001-08-01 | 2005-01-25 | International Business Machines Corporation | Efficient data backup using a single side file |
US6862632B1 (en) | 2001-11-14 | 2005-03-01 | Emc Corporation | Dynamic RDF system for transferring initial data between source and destination volume wherein data maybe restored to either volume at same time other data is written |
US6976139B2 (en) * | 2001-11-14 | 2005-12-13 | Emc Corporation | Reversing a communication path between storage devices |
US6898688B2 (en) * | 2001-12-28 | 2005-05-24 | Storage Technology Corporation | Data management appliance |
US7065549B2 (en) * | 2002-03-29 | 2006-06-20 | Illinois Institute Of Technology | Communication and process migration protocols for distributed heterogeneous computing |
US7076690B1 (en) | 2002-04-15 | 2006-07-11 | Emc Corporation | Method and apparatus for managing access to volumes of storage |
US6973586B2 (en) * | 2002-04-29 | 2005-12-06 | International Business Machines Corporation | System and method for automatic dynamic address switching |
US7085956B2 (en) * | 2002-04-29 | 2006-08-01 | International Business Machines Corporation | System and method for concurrent logical device swapping |
JP2004013215A (en) * | 2002-06-03 | 2004-01-15 | Hitachi Ltd | Storage system, storage sub-system, and information processing system including them |
US7584131B1 (en) | 2002-07-31 | 2009-09-01 | Ameriprise Financial, Inc. | Method for migrating financial and indicative plan data between computerized record keeping systems without a blackout period |
US7707151B1 (en) | 2002-08-02 | 2010-04-27 | Emc Corporation | Method and apparatus for migrating data |
JP2004102374A (en) * | 2002-09-05 | 2004-04-02 | Hitachi Ltd | Information processing system having data transition device |
US7546482B2 (en) * | 2002-10-28 | 2009-06-09 | Emc Corporation | Method and apparatus for monitoring the storage of data in a computer system |
US6954835B1 (en) | 2002-10-30 | 2005-10-11 | Emc Corporation | Intercepting control of a host I/O process |
US7631359B2 (en) * | 2002-11-06 | 2009-12-08 | Microsoft Corporation | Hidden proactive replication of data |
US7313560B2 (en) * | 2002-12-09 | 2007-12-25 | International Business Machines Corporation | Data migration system and method |
US7376764B1 (en) | 2002-12-10 | 2008-05-20 | Emc Corporation | Method and apparatus for migrating data in a computer system |
US6981117B2 (en) | 2003-01-29 | 2005-12-27 | International Business Machines Corporation | Method, system, and program for transferring data |
US7415591B1 (en) | 2003-04-23 | 2008-08-19 | Emc Corporation | Method and apparatus for migrating data and automatically provisioning a target for the migration |
US7805583B1 (en) | 2003-04-23 | 2010-09-28 | Emc Corporation | Method and apparatus for migrating data in a clustered computer system environment |
US7093088B1 (en) * | 2003-04-23 | 2006-08-15 | Emc Corporation | Method and apparatus for undoing a data migration in a computer system |
US7263590B1 (en) | 2003-04-23 | 2007-08-28 | Emc Corporation | Method and apparatus for migrating data in a computer system |
US7111004B2 (en) * | 2003-06-18 | 2006-09-19 | International Business Machines Corporation | Method, system, and program for mirroring data between sites |
US7305418B2 (en) * | 2003-11-05 | 2007-12-04 | International Business Machines Corporation | Selecting and showing copy pairs in a storage subsystem copy services graphical user interface |
BRPI0415939A (en) * | 2003-11-12 | 2007-01-02 | Pharmacia & Upjohn Co Llc | compositions of a selective cyclooxygenase-2 inhibitor and a neurotrophic factor modulating agent for the treatment of central nervous system mediated disorders |
US7337197B2 (en) * | 2003-11-13 | 2008-02-26 | International Business Machines Corporation | Data migration system, method and program product |
US7146475B2 (en) * | 2003-11-18 | 2006-12-05 | Mainstar Software Corporation | Data set level mirroring to accomplish a volume merge/migrate in a digital data storage system |
US20050114465A1 (en) * | 2003-11-20 | 2005-05-26 | International Business Machines Corporation | Apparatus and method to control access to logical volumes using one or more copy services |
JP4520755B2 (en) * | 2004-02-26 | 2010-08-11 | 株式会社日立製作所 | Data migration method and data migration apparatus |
JP4452557B2 (en) * | 2004-05-27 | 2010-04-21 | 株式会社日立製作所 | Remote copy with WORM guarantee |
US7409510B2 (en) * | 2004-05-27 | 2008-08-05 | International Business Machines Corporation | Instant virtual copy to a primary mirroring portion of data |
US7590706B2 (en) * | 2004-06-04 | 2009-09-15 | International Business Machines Corporation | Method for communicating in a computing system |
US7613889B2 (en) * | 2004-06-10 | 2009-11-03 | International Business Machines Corporation | System, method, and program for determining if write data overlaps source data within a data migration scheme |
US7685129B1 (en) | 2004-06-18 | 2010-03-23 | Emc Corporation | Dynamic data set migration |
US7707186B2 (en) * | 2004-06-18 | 2010-04-27 | Emc Corporation | Method and apparatus for data set migration |
JP4488807B2 (en) * | 2004-06-25 | 2010-06-23 | 株式会社日立製作所 | Volume providing system and method |
US7707372B1 (en) * | 2004-06-30 | 2010-04-27 | Symantec Operating Corporation | Updating a change track map based on a mirror recovery map |
US20060041488A1 (en) * | 2004-08-18 | 2006-02-23 | O'reirdon Michael | Method and system for migrating messaging accounts between vendors |
US7296024B2 (en) | 2004-08-19 | 2007-11-13 | Storage Technology Corporation | Method, apparatus, and computer program product for automatically migrating and managing migrated data transparently to requesting applications |
US8359429B1 (en) | 2004-11-08 | 2013-01-22 | Symantec Operating Corporation | System and method for distributing volume status information in a storage system |
US7437608B2 (en) * | 2004-11-15 | 2008-10-14 | International Business Machines Corporation | Reassigning storage volumes from a failed processing system to a surviving processing system |
US7343467B2 (en) * | 2004-12-20 | 2008-03-11 | Emc Corporation | Method to perform parallel data migration in a clustered storage environment |
US7404039B2 (en) * | 2005-01-13 | 2008-07-22 | International Business Machines Corporation | Data migration with reduced contention and increased speed |
US7877239B2 (en) | 2005-04-08 | 2011-01-25 | Caterpillar Inc | Symmetric random scatter process for probabilistic modeling system for product design |
US7565333B2 (en) | 2005-04-08 | 2009-07-21 | Caterpillar Inc. | Control system and method |
US8364610B2 (en) | 2005-04-08 | 2013-01-29 | Caterpillar Inc. | Process modeling and optimization method and system |
US8209156B2 (en) | 2005-04-08 | 2012-06-26 | Caterpillar Inc. | Asymmetric random scatter process for probabilistic modeling system for product design |
US7640409B1 (en) | 2005-07-29 | 2009-12-29 | International Business Machines Corporation | Method and apparatus for data migration and failover |
US7487134B2 (en) | 2005-10-25 | 2009-02-03 | Caterpillar Inc. | Medical risk stratifying method and system |
JP4550717B2 (en) * | 2005-10-28 | 2010-09-22 | 富士通株式会社 | Virtual storage system control apparatus, virtual storage system control program, and virtual storage system control method |
US7900202B2 (en) * | 2005-10-31 | 2011-03-01 | Microsoft Corporation | Identification of software execution data |
US8006242B2 (en) * | 2005-10-31 | 2011-08-23 | Microsoft Corporation | Identification of software configuration data |
US7499842B2 (en) | 2005-11-18 | 2009-03-03 | Caterpillar Inc. | Process model based virtual sensor and method |
US20070162691A1 (en) * | 2006-01-06 | 2007-07-12 | Bhakta Snehal S | Apparatus and method to store information |
US7505949B2 (en) | 2006-01-31 | 2009-03-17 | Caterpillar Inc. | Process model error correction method and system |
JP4903461B2 (en) * | 2006-03-15 | 2012-03-28 | 株式会社日立製作所 | Storage system, data migration method, and server apparatus |
US8131682B2 (en) * | 2006-05-11 | 2012-03-06 | Hitachi, Ltd. | System and method for replacing contents addressable storage |
US8387038B2 (en) * | 2006-08-14 | 2013-02-26 | Caterpillar Inc. | Method and system for automatic computer and user migration |
US7809912B1 (en) * | 2006-09-29 | 2010-10-05 | Emc Corporation | Methods and systems for managing I/O requests to minimize disruption required for data migration |
US8478506B2 (en) | 2006-09-29 | 2013-07-02 | Caterpillar Inc. | Virtual sensor based engine control system and method |
US7483774B2 (en) | 2006-12-21 | 2009-01-27 | Caterpillar Inc. | Method and system for intelligent maintenance |
US7787969B2 (en) | 2007-06-15 | 2010-08-31 | Caterpillar Inc | Virtual sensor system and method |
US7831416B2 (en) | 2007-07-17 | 2010-11-09 | Caterpillar Inc | Probabilistic modeling system for product design |
US7788070B2 (en) | 2007-07-30 | 2010-08-31 | Caterpillar Inc. | Product design optimization method and system |
JP5149556B2 (en) * | 2007-07-30 | 2013-02-20 | 株式会社日立製作所 | Storage system that migrates system information elements |
US7542879B2 (en) | 2007-08-31 | 2009-06-02 | Caterpillar Inc. | Virtual sensor based control system and method |
US7593804B2 (en) | 2007-10-31 | 2009-09-22 | Caterpillar Inc. | Fixed-point virtual sensor control system and method |
US8036764B2 (en) | 2007-11-02 | 2011-10-11 | Caterpillar Inc. | Virtual sensor network (VSN) system and method |
US8224468B2 (en) | 2007-11-02 | 2012-07-17 | Caterpillar Inc. | Calibration certificate for virtual sensor network (VSN) |
JP5223463B2 (en) * | 2008-05-28 | 2013-06-26 | 富士通株式会社 | Control method, control program, and information processing apparatus for connection device in information processing system |
US8086640B2 (en) | 2008-05-30 | 2011-12-27 | Caterpillar Inc. | System and method for improving data coverage in modeling systems |
US7917333B2 (en) | 2008-08-20 | 2011-03-29 | Caterpillar Inc. | Virtual sensor network (VSN) based control system and method |
US8495021B2 (en) * | 2008-11-26 | 2013-07-23 | Yahoo! Inc. | Distribution data items within geographically distributed databases |
US8498890B2 (en) * | 2009-09-18 | 2013-07-30 | International Business Machines Corporation | Planning and orchestrating workflow instance migration |
US8983902B2 (en) * | 2010-12-10 | 2015-03-17 | Sap Se | Transparent caching of configuration data |
US8819374B1 (en) * | 2011-06-15 | 2014-08-26 | Emc Corporation | Techniques for performing data migration |
US8793004B2 (en) | 2011-06-15 | 2014-07-29 | Caterpillar Inc. | Virtual sensor system and method for generating output parameters |
US9223502B2 (en) | 2011-08-01 | 2015-12-29 | Infinidat Ltd. | Method of migrating stored data and system thereof |
US8856191B2 (en) | 2011-08-01 | 2014-10-07 | Infinidat Ltd. | Method of migrating stored data and system thereof |
US8854872B2 (en) | 2011-12-22 | 2014-10-07 | International Business Machines Corporation | Drift mitigation for multi-bits phase change memory |
US8775861B1 (en) * | 2012-06-28 | 2014-07-08 | Emc Corporation | Non-disruptive storage device migration in failover cluster environment |
GB2504716A (en) * | 2012-08-07 | 2014-02-12 | Ibm | A data migration system and method for migrating data objects |
US9563371B2 (en) * | 2013-07-26 | 2017-02-07 | Globalfoundreis Inc. | Self-adjusting phase change memory storage module |
US9405628B2 (en) * | 2013-09-23 | 2016-08-02 | International Business Machines Corporation | Data migration using multi-storage volume swap |
US9514164B1 (en) | 2013-12-27 | 2016-12-06 | Accenture Global Services Limited | Selectively migrating data between databases based on dependencies of database entities |
US9619331B2 (en) | 2014-01-18 | 2017-04-11 | International Business Machines Corporation | Storage unit replacement using point-in-time snap copy |
CN106462458B (en) | 2014-04-30 | 2019-08-30 | 大连理工大学 | Virtual machine (vm) migration |
US10261943B2 (en) | 2015-05-01 | 2019-04-16 | Microsoft Technology Licensing, Llc | Securely moving data across boundaries |
US10678762B2 (en) | 2015-05-01 | 2020-06-09 | Microsoft Technology Licensing, Llc | Isolating data to be moved across boundaries |
US10229124B2 (en) | 2015-05-01 | 2019-03-12 | Microsoft Technology Licensing, Llc | Re-directing tenants during a data move |
US10216379B2 (en) | 2016-10-25 | 2019-02-26 | Microsoft Technology Licensing, Llc | User interaction processing in an electronic mail system |
US10782893B2 (en) * | 2017-02-22 | 2020-09-22 | International Business Machines Corporation | Inhibiting tracks within a volume of a storage system |
WO2020142025A1 (en) * | 2018-12-31 | 2020-07-09 | Turkiye Garanti Bankasi Anonim Sirketi | A data migration system by tdmf product |
US10809937B2 (en) | 2019-02-25 | 2020-10-20 | International Business Machines Corporation | Increasing the speed of data migration |
CN111273872A (en) * | 2020-02-14 | 2020-06-12 | 北京百度网讯科技有限公司 | Data migration method, device, equipment and medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1994000816A1 (en) * | 1992-06-18 | 1994-01-06 | Andor Systems, Inc. | Remote dual copy of data in computer systems |
WO1997009676A1 (en) * | 1995-09-01 | 1997-03-13 | Emc Corporation | System and method for on-line, real-time, data migration |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4638424A (en) * | 1984-01-12 | 1987-01-20 | International Business Machines Corporation | Managing data storage devices connected to a digital computer |
US5276867A (en) * | 1989-12-19 | 1994-01-04 | Epoch Systems, Inc. | Digital data storage system with improved data migration |
US5564037A (en) * | 1995-03-29 | 1996-10-08 | Cheyenne Software International Sales Corp. | Real time data migration system and method employing sparse files |
US5835954A (en) * | 1996-09-12 | 1998-11-10 | International Business Machines Corporation | Target DASD controlled data migration move |
US5832274A (en) * | 1996-10-09 | 1998-11-03 | Novell, Inc. | Method and system for migrating files from a first environment to a second environment |
-
1997
- 1997-11-14 US US08/971,473 patent/US6145066A/en not_active Expired - Fee Related
-
1998
- 1998-11-13 WO PCT/US1998/024187 patent/WO1999026143A1/en not_active Application Discontinuation
- 1998-11-13 EP EP98960191A patent/EP1031084A1/en not_active Withdrawn
- 1998-11-13 CA CA002310099A patent/CA2310099A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1994000816A1 (en) * | 1992-06-18 | 1994-01-06 | Andor Systems, Inc. | Remote dual copy of data in computer systems |
WO1997009676A1 (en) * | 1995-09-01 | 1997-03-13 | Emc Corporation | System and method for on-line, real-time, data migration |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1170657A2 (en) * | 2000-07-06 | 2002-01-09 | Hitachi, Ltd. | Computer system |
EP1170657A3 (en) * | 2000-07-06 | 2007-04-18 | Hitachi, Ltd. | Computer system |
JP2008152807A (en) * | 2000-07-06 | 2008-07-03 | Hitachi Ltd | Computer system |
US6557089B1 (en) | 2000-11-28 | 2003-04-29 | International Business Machines Corporation | Backup by ID-suppressed instant virtual copy then physical backup copy with ID reintroduced |
Also Published As
Publication number | Publication date |
---|---|
CA2310099A1 (en) | 1999-05-27 |
EP1031084A1 (en) | 2000-08-30 |
US6145066A (en) | 2000-11-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1031084A1 (en) | Computer system transparent data migration | |
EP0902923B1 (en) | Method for independent and simultaneous access to a common data set | |
US6304980B1 (en) | Peer-to-peer backup system with failure-triggered device switching honoring reservation of primary device | |
US7149787B1 (en) | Apparatus and method for mirroring and restoring data | |
US7577788B2 (en) | Disk array apparatus and disk array apparatus control method | |
US8027952B2 (en) | System and article of manufacture for mirroring data at storage locations | |
US6578120B1 (en) | Synchronization and resynchronization of loosely-coupled copy operations between a primary and a remote secondary DASD volume under concurrent updating | |
US7856425B2 (en) | Article of manufacture and system for fast reverse restore | |
US5978565A (en) | Method for rapid recovery from a network file server failure including method for operating co-standby servers | |
US7376651B2 (en) | Virtual storage device that uses volatile memory | |
US20040260899A1 (en) | Method, system, and program for handling a failover to a remote storage location | |
EP0566968A2 (en) | Method and system for concurrent access during backup copying of data | |
US7133983B2 (en) | Method, system, and program for asynchronous copy | |
KR100450400B1 (en) | A High Avaliability Structure of MMDBMS for Diskless Environment and data synchronization control method thereof | |
JPH05210555A (en) | Method and device for zero time data-backup-copy | |
EP1636690B1 (en) | Managing a relationship between one target volume and one source volume | |
EP1668508B1 (en) | Method and apparatus for providing copies of stored data for disasters recovery | |
EP1507207B1 (en) | Hierarchical approach to identifying changing device characteristics | |
EP1684178A2 (en) | Reversing a communication path between storage devices | |
Das et al. | Storage Management for SAP and DB2 UDB: Split Mirror Backup/Recovery With IBM’s Enterprise Storage Server (ESS) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA CN JP |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1998960191 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2310099 Country of ref document: CA Ref country code: CA Ref document number: 2310099 Kind code of ref document: A Format of ref document f/p: F |
|
WWP | Wipo information: published in national office |
Ref document number: 1998960191 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1998960191 Country of ref document: EP |