US20060123312A1 - Method and system for increasing parallelism of disk accesses when restoring data in a disk array system - Google Patents

Method and system for increasing parallelism of disk accesses when restoring data in a disk array system

Info

Publication number
US20060123312A1
US20060123312A1 US10/994,098 US99409804A
Authority
US
United States
Prior art keywords
data
disk array
disks
parity
parity stripe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/994,098
Inventor
Carl Forhan
Robert Galbraith
Adrian Gerhard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US10/994,098 priority Critical patent/US20060123312A1/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GERHARD, ADRIAN CUENIN, FORHAN, CARL EDWARD, GALBRAITH, ROBERT EDWARD
Priority to CN200710100836.9A priority patent/CN101059751B/en
Priority to CNB2005101267241A priority patent/CN100345099C/en
Publication of US20060123312A1 publication Critical patent/US20060123312A1/en
Priority to US11/923,280 priority patent/US7669107B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F11/1092Rebuilding, e.g. when physically replacing a failing disk
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10Indexing scheme relating to G06F11/10
    • G06F2211/1002Indexing scheme relating to G06F11/1076
    • G06F2211/1052RAID padding, i.e. completing a redundancy group with dummy data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2211/00Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F2211/10Indexing scheme relating to G06F11/10
    • G06F2211/1002Indexing scheme relating to G06F11/1076
    • G06F2211/1057Parity-multiple bits-RAID6, i.e. RAID 6 implementations

Definitions

  • the invention addresses these and other problems associated with the prior art through a number of techniques that individually or collectively increase parallelism in terms of accessing the disks in a disk array, and thereby reduce the performance overhead associated with exposed mode operations such as resynchronization, rebuild and exposed mode read operations.
  • accesses to disks in a disk array for the purpose of solving a parity stripe equation may be optimized by selecting only a subset of the possible disks required to solve the parity stripe equation, and thus omitting accesses to one or more disks.
  • utilization of the disks in a disk array typically may be better balanced when a number of such operations are performed over a particular time period, so long as different subsets of disks are selected for different operations.
  • each subset may comprise N-2 disks among the N disks in a disk array.
  • a random selection mechanism may be used such that certain disks are randomly omitted.
  • a disk array of N disks may be accessed such that, for each of a plurality of parity stripes defined in the disk array, a different subset of disks among the N disks to be used to solve a parity stripe equation for such parity stripe is selected. Retrieval of data associated with each parity stripe may then be initiated only from the selected subset of disks for that parity stripe, with such retrieved data used to solve the parity stripe equation for that parity stripe.
  • each selected subset of disks includes at most N-2 disks.
  • parallelism may be increased in a disk array system by overlapping disk accesses associated with different parity stripes when restoring data in a disk array (e.g., to resynchronize parity and data, or to rebuild data for an exposed disk).
  • restoring data to a disk array may include the retrieval of a first set of data associated with a first parity stripe, coupled with concurrent operations of writing to the disk array a result value generated by processing the first set of data, and reading from the disk array a second set of data associated with a second parity stripe.
  • FIG. 1 is a block diagram of an exemplary computer system that can implement a RAID-6 storage controller in accordance with the principles of the present invention.
  • FIG. 2 is a block diagram illustrating the principal components of a RAID controller of FIG. 1 .
  • FIG. 3 depicts a flowchart for performing restoration operations in an overlapping manner to improve utilization of the disk array in a RAID-6 system in accordance with the principles of the present invention.
  • FIG. 4 depicts a flowchart for performing exposed mode read operations with random selection of which disks to access to improve utilization of the disk array in a RAID-6 system in accordance with the principles of the present invention.
  • the embodiments discussed hereinafter utilize one or both of two techniques to increase parallelism and otherwise reduce the overhead associated with restoring data in a disk array environment such as a RAID-6 environment.
  • One technique described hereinafter selects different subsets of disks to access in connection with an operation such as a rebuild or exposed read operation.
  • Another technique described hereinafter overlaps read and write accesses associated with restoration operations performed with respect to multiple parity stripes.
  • the two parity stripe equations take the form d_1⊕d_2⊕ . . . ⊕d_N=0 and (α_1⊗d_1)⊕(α_2⊗d_2)⊕ . . . ⊕(α_N⊗d_N)=0, where α_x is an element of the finite field, d_x is data from the xth disk, ⊕ denotes finite field addition (XOR) and ⊗ denotes finite field multiplication. While the P and Q disks can be any of the N disks for any particular stripe of data, they are often noted as d_P and d_Q. When data on one of the disks (i.e., d_X) is updated, the above two equations resolve to:

d_P(new)=d_P(old)⊕(((α_Q⊕α_X)/(α_Q⊕α_P))⊗(d_X(old)⊕d_X(new)))
d_Q(new)=d_Q(old)⊕(((α_P⊕α_X)/(α_P⊕α_Q))⊗(d_X(old)⊕d_X(new)))
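As an illustration of the parity stripe equations and the delta-update relations, the following is a minimal Python sketch. It assumes GF(2^8) with generator polynomial 0x11D and a particular assignment of α values; neither choice comes from this disclosure, and `gf_div` is a deliberately naive brute-force division.

```python
# Hypothetical sketch: parity stripe equations in GF(2^8) (generator
# polynomial 0x11D is an assumption; the disclosure does not fix a field).

def gf_mul(a, b, poly=0x11D):
    """Multiply two GF(2^8) elements by shift-and-reduce."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly          # reduce modulo the generator polynomial
        b >>= 1
    return p

def gf_div(a, b):
    """Brute-force division, adequate for a sketch (b must be nonzero)."""
    return next(x for x in range(256) if gf_mul(b, x) == a)

def solve_pq(d, alphas, P, Q):
    """Choose d[P] and d[Q] so both parity stripe equations sum to zero."""
    s0 = s1 = 0
    for i, (di, ai) in enumerate(zip(d, alphas)):
        if i not in (P, Q):
            s0 ^= di
            s1 ^= gf_mul(ai, di)
    # Two equations, two unknowns: dP ^ dQ = s0, aP*dP ^ aQ*dQ = s1.
    d[P] = gf_div(gf_mul(alphas[Q], s0) ^ s1, alphas[P] ^ alphas[Q])
    d[Q] = s0 ^ d[P]
    return d

def check(d, alphas):
    """Verify both invariant equations hold for the stripe."""
    s0 = s1 = 0
    for di, ai in zip(d, alphas):
        s0 ^= di
        s1 ^= gf_mul(ai, di)
    return s0 == 0 and s1 == 0

def update(d, alphas, P, Q, X, new):
    """Delta-update d[P] and d[Q] when disk X changes, per the equations."""
    delta = d[X] ^ new
    k = alphas[P] ^ alphas[Q]
    d[P] ^= gf_mul(gf_div(alphas[Q] ^ alphas[X], k), delta)
    d[Q] ^= gf_mul(gf_div(alphas[P] ^ alphas[X], k), delta)
    d[X] = new
    return d

alphas = [1, 2, 4, 8, 16]                       # illustrative coefficients
d = solve_pq([0x11, 0x22, 0, 0x44, 0], alphas, P=2, Q=4)
assert check(d, alphas)
d = update(d, alphas, P=2, Q=4, X=0, new=0x99)
assert check(d, alphas)    # both equations still hold after the delta update
```

The final assertion confirms that the delta update leaves both parity stripe equations satisfied without re-reading the unchanged disks.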
  • FIG. 1 illustrates an exemplary computer system in which a RAID-6, or other disk array, may be implemented.
  • apparatus 10 may represent practically any type of computer, computer system or other programmable electronic device, including a client computer, a server computer, a portable computer, a handheld computer, an embedded controller, etc.
  • apparatus 10 may be implemented using one or more networked computers, e.g., in a cluster or other distributed computing system.
  • Apparatus 10 will hereinafter also be referred to as a “computer”, although it should be appreciated the term “apparatus” may also include other suitable programmable electronic devices consistent with the invention.
  • Computer 10 typically includes at least one processor 12 coupled to a memory 14 .
  • Processor 12 may represent one or more processors (e.g., microprocessors), and memory 14 may represent the random access memory (RAM) devices comprising the main storage of computer 10 , as well as any supplemental levels of memory, e.g., cache memories, non-volatile or backup memories (e.g., programmable or flash memories), read-only memories, etc.
  • memory 14 may be considered to include memory storage physically located elsewhere in computer 10 , e.g., any cache memory in a processor 12 , as well as any storage capacity used as a virtual memory, e.g., as stored on the disk array 34 or on another computer coupled to computer 10 via network 18 (e.g., a client computer 20 ).
  • Computer 10 also typically receives a number of inputs and outputs for communicating information externally.
  • computer 10 For interface with a user or operator, computer 10 typically includes one or more user input devices 22 (e.g., a keyboard, a mouse, a trackball, a joystick, a touchpad, and/or a microphone, among others) and a display 24 (e.g., a CRT monitor, an LCD display panel, and/or a speaker, among others).
  • user input may be received via another computer (e.g., a computer 20 ) interfaced with computer 10 over network 18 , or via a dedicated workstation interface or the like.
  • computer 10 may also include one or more mass storage devices accessed via a storage controller, or adapter, 16 , e.g., removable disk drive, a hard disk drive, a direct access storage device (DASD), an optical drive (e.g., a CD drive, a DVD drive, etc.), and/or a tape drive, among others.
  • computer 10 may include an interface with one or more networks 18 (e.g., a LAN, a WAN, a wireless network, and/or the Internet, among others) to permit the communication of information with other computers coupled to the network.
  • computer 10 typically includes suitable analog and/or digital interfaces between processor 12 and each of components 14 , 16 , 18 , 22 and 24 as is well known in the art.
  • the mass storage controller 16 advantageously implements RAID-6 storage protection within an array of disks 34 .
  • Computer 10 operates under the control of an operating system 30 , and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, etc. (e.g., software applications 32 ). Moreover, various applications, components, programs, objects, modules, etc. may also execute on one or more processors in another computer coupled to computer 10 via a network 18 , e.g., in a distributed or client-server computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers over a network.
  • routines executed to implement the embodiments of the invention will be referred to herein as “computer program code,” or simply “program code.”
  • Program code typically comprises one or more instructions that are resident at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause that computer to perform the steps necessary to execute steps or elements embodying the various aspects of the invention.
  • computer readable signal bearing media include but are not limited to recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, magnetic tape, optical disks (e.g., CD-ROM's, DVD's, etc.), among others, and transmission type media such as digital and analog communication links.
  • FIG. 2 illustrates a block diagram of the control subsystem of a disk array system, e.g., a RAID-6 compatible system.
  • the mass storage controller 16 of FIG. 1 is shown in more detail to include a RAID controller 202 that is coupled through a system bus 208 with the processor 12 and through a storage bus 210 to various disk drives 212 - 218 .
  • these buses may be proprietary in nature or conform to industry standards such as SCSI-1, SCSI-2, etc.
  • the RAID controller includes a microcontroller 204 that executes program code that implements the RAID-6 algorithm for data protection, and that is typically resident in memory located in the RAID controller.
  • data to be stored on the disks 212 - 218 is used to generate parity data and then broken apart and striped across the disks 212 - 218 .
  • the disk drives 212 - 218 can be individual disk drives that are directly coupled to the controller 202 through the bus 210 or may include their own disk drive adapters that permit a string of individual disk drives to be connected to the storage bus 210 .
  • a disk drive 212 may be physically implemented as 4 or 8 separate disk drives coupled to a single controller connected to the bus 210 .
  • buffers 206 are provided to assist in the data transfers.
  • the utilization of the buffers 206 can sometimes produce a bottleneck in data transfers, and the inclusion of numerous buffers may increase the cost, complexity and size of the RAID controller 202 .
  • certain embodiments of the present invention relate to provisioning and utilizing these buffers 206 in an economical and efficient manner.
  • the environment illustrated in FIGS. 1 and 2 is merely exemplary in nature.
  • the invention may be applicable to disk array environments other than RAID-6 environments.
  • a disk array environment consistent with the invention may utilize a completely software-implemented control algorithm resident in the main storage of the computer, or that some functions handled via program code in a computer or controller can be implemented in hardware logic circuits, and vice versa. Therefore, the invention should not be limited to the particular embodiments discussed herein.
  • a restoration operation such as resyncing parity and data, rebuilding a disk, or performing an exposed mode read
  • a number of I/O operations on the different disks must be performed to read the available data, and if appropriate, store restored data back to the disk array.
  • the appropriate calculations may be performed to restore either the data on a disk or the parity information in the RAID array.
  • Embodiments of the present invention include techniques for performing these operations in such a manner as to maximize the parallelism of the various I/O operations and to better balance disk utilization.
  • in conventional implementations, any attempt to restore parity or data for a given disk requires that all other disks in the array be accessed.
  • because RAID-6 implementations do not require the data from all other disks to solve a parity stripe equation, it has been found that a disk may not even need to be accessed in connection with solving such an equation.
  • the flowchart of an exemplary method for accomplishing a restore operation (e.g., a resync or rebuild operation) is depicted in FIG. 3 .
  • accesses for two different parity resync operations are interleaved so that accesses to both the parity and the data disks can occur in parallel and, therefore, reduce the overall idle time of the disks and improve the time it takes to perform rebuilds and resyncs.
  • a rebuild operation for two or more parity stripes proceeds in a similar manner.
  • a set of data distributed across the data disks in a parity stripe A is used to calculate parity values P and Q for parity stripe A.
  • a set of data distributed across the data disks in a parity stripe B is used to calculate different parity values P and Q for parity stripe B.
  • a first set of read operations directed to the data disks, and specifically to the regions thereof located in parity stripe A is performed to retrieve a set of data used to calculate a corresponding parity value P for parity stripe A.
  • a second set of read operations is queued that will retrieve a different set of data from the region allocated to parity stripe B on each of the data disks, which is used to calculate the corresponding parity value P for parity stripe B.
  • the new parity value P may be written to the P parity disk for parity stripe A, in step 304 , while the second set of read operations is being executed by the other disks of the disk array.
  • a third set of read operations is performed—this time to retrieve the data from parity stripe A a second time to generate the parity value Q and, concurrently, the parity value P for parity stripe B is written to the P parity disk.
  • a fourth set of read operations is performed, in step 308 , to read the set of data from parity stripe B, which is used to generate the parity value Q for parity stripe B. While these latter read operations are being performed, the parity value Q for parity stripe A is written to the Q parity disk. Finally, in step 310 , the parity value Q for parity stripe B is written to the Q parity disk.
  • the parity drives and the data drives are more equally utilized which improves the performance of the resync and rebuild functions.
  • One of ordinary skill in the art having the benefit of the instant disclosure will note that the aforementioned algorithm may be applied to overlap operations between any number of parity stripes.
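The interleaved schedule described in steps 300 through 310 of FIG. 3 can be modeled as a simple pipeline. The sketch below is illustrative only; the operation labels and the helper name are invented for the example, and real controllers would issue actual disk I/O rather than strings.

```python
# Hypothetical sketch of the overlapped resync schedule of FIG. 3: each
# pipeline step groups the disk operations that proceed concurrently.

def overlapped_resync_schedule(stripes):
    """Interleave reads and parity writes so each parity write for one
    stripe overlaps the next stripe's read, instead of running serially."""
    # Read order per FIG. 3: all stripes for parity P, then all for parity Q.
    reads = [(s, parity) for parity in ("P", "Q") for s in stripes]
    steps, pending_write = [], None
    for stripe, parity in reads:
        step = [f"read {stripe} for {parity}"]
        if pending_write:
            step.append(pending_write)   # overlap the previous parity write
        steps.append(step)
        pending_write = f"write {parity} of {stripe}"
    steps.append([pending_write])        # final write has no read to overlap
    return steps

sched = overlapped_resync_schedule(["A", "B"])
for step in sched:
    print(" || ".join(step))
```

For two parity stripes A and B this reproduces the five-step sequence of FIG. 3: read A for P; then each subsequent read overlaps the previous stripe's parity write, ending with the lone write of Q for stripe B.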
  • FIG. 4 next illustrates an exemplary method for accomplishing an exposed read operation, e.g., to retrieve data from an exposed disk.
  • accesses for two exposed read operations to two parity stripes are illustrated, with one such access being performed in step 400 , and another access being performed in step 402 .
  • a different subset of N-2 disks is selected randomly from among the N-1 disks containing data from the parity stripe that can be used to solve the parity stripe equation and generate the data for the exposed disk.
  • one disk in the disk array will not be accessed, leaving the disk free to perform other operations (including, for example, handling overlapped accesses such as those described above in connection with FIG. 3 ).
  • rebuild operations may utilize such a technique in a similar manner.
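The random subset selection described for FIG. 4 can be sketched as follows. The field GF(2^8) with generator 0x11D, the α assignment, and all function names are assumptions for illustration; the key point is that the P parity equation never involves the Q disk and the Q equation never involves the P disk, so with one exposed data disk one of the two check disks can be omitted at random and left free for other work.

```python
import random

# Hypothetical sketch of FIG. 4: regenerate an exposed disk's data from a
# randomly selected N-2 disk subset (omit either the P or the Q check disk).

def gf_mul(a, b, poly=0x11D):
    """GF(2^8) multiply by shift-and-reduce (field choice is an assumption)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return p

def gf_div(a, b):
    """Brute-force division, adequate for a sketch."""
    return next(x for x in range(256) if gf_mul(b, x) == a)

def exposed_read(data, alphas, p, q, exposed, use_q=None):
    """Regenerate data[exposed] using only N-2 of the surviving disks."""
    if use_q is None:
        use_q = random.choice([True, False])   # randomly omit P or Q
    others = [i for i in range(len(data)) if i != exposed]
    if not use_q:
        acc = p                       # P equation: plain XOR of survivors
        for i in others:
            acc ^= data[i]
        return acc
    acc = q                           # Q equation: sum of products
    for i in others:
        acc ^= gf_mul(alphas[i], data[i])
    return gf_div(acc, alphas[exposed])

alphas = [1, 2, 4, 8]                 # illustrative coefficients
data = [0x11, 0x22, 0x33, 0x44]
p = q = 0
for a, d in zip(alphas, data):
    p ^= d
    q ^= gf_mul(a, d)

# Either subset (either check equation) regenerates the exposed data.
assert exposed_read(data, alphas, p, q, 2, use_q=False) == 0x33
assert exposed_read(data, alphas, p, q, 2, use_q=True) == 0x33
```

Because both choices yield the same result, selecting the omitted check disk at random statistically balances load across the disks over many exposed reads.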
  • embodiments of the present invention provide a method and system, within a RAID-6 or similar disk array environment, that interleaves different disk access operations and/or selects different disks to be used while performing restore operations to balance disk utilization and decrease latency.
  • Various modifications may be made to the illustrated embodiments without departing from the spirit and scope of the invention. Therefore, the invention lies in the claims hereinafter appended.

Abstract

In a disk array environment such as a RAID-6 environment, the overall performance overhead associated with exposed mode operations such as resynchronization, rebuild and exposed mode read operations is reduced through increased parallelism. By selecting only subsets of the possible disks required to solve a parity stripe equation for a particular parity stripe, accesses to one or more disks in a disk array may be omitted, thus freeing the omitted disks to perform other disk accesses. In addition, disk accesses associated with different parity stripes may be overlapped such that the retrieval of data necessary for restoring data for one parity stripe is performed concurrently with the storage of restored data for another parity stripe.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is related to the following U.S. patent applications all filed on even date herewith by Carl Edward Forhan, Robert Edward Galbraith and Adrian Cuenin Gerhard: Ser. No. ______, entitled “METHOD AND SYSTEM FOR ENHANCED ERROR IDENTIFICATION WITH DISK ARRAY PARITY CHECKING,” Ser. No. ______, entitled “RAID ENVIRONMENT INCORPORATING HARDWARE-BASED FINITE FIELD MULTIPLIER FOR ON-THE-FLY XOR,” Ser. No. ______, entitled “METHOD AND SYSTEM FOR IMPROVED BUFFER UTILIZATION FOR DISK ARRAY PARITY UPDATES,” and Ser. No. ______, entitled “METHOD AND SYSTEM FOR RECOVERING FROM ABNORMAL INTERRUPTION OF A PARITY UPDATE OPERATION IN A DISK ARRAY SYSTEM.” Each of these applications is incorporated by reference herein.
  • FIELD OF THE INVENTION
  • The present invention relates to data protection methods for data storage and, more particularly, to systems implementing RAID-6 and similar data protection and recovery strategies.
  • BACKGROUND OF THE INVENTION
  • RAID stands for Redundant Array of Independent Disks and is a taxonomy of redundant disk array storage schemes which define a number of ways of configuring and using multiple computer disk drives to achieve varying levels of availability, performance, capacity and cost while appearing to the software application as a single large capacity drive. Typical RAID storage subsystems can be implemented in either hardware or software. In the former instance, the RAID algorithms are packaged into separate controller hardware coupled to the computer input/output (“I/O”) bus and, although adding little or no central processing unit (“CPU”) overhead, the additional hardware required nevertheless adds to the overall system cost. On the other hand, software implementations incorporate the RAID algorithms into system software executed by the main processor together with the operating system, obviating the need and cost of a separate hardware controller, yet adding to CPU overhead.
  • Various RAID levels have been defined from RAID-0 to RAID-6, each offering tradeoffs in the previously mentioned factors. RAID-0 is nothing more than traditional striping in which user data is broken into chunks which are stored onto the stripe set by being spread across multiple disks with no data redundancy. RAID-1 is equivalent to conventional “shadowing” or “mirroring” techniques and is the simplest method of achieving data redundancy by having, for each disk, another containing the same data and writing to both disks simultaneously. The combination of RAID-0 and RAID-1 is typically referred to as RAID-0+1 and is implemented by striping shadow sets, resulting in the relative performance advantages of both RAID levels. RAID-2, which utilizes a Hamming Code written across the members of the RAID set, is not now considered to be of significant importance.
  • In RAID-3, data is striped across a set of disks with the addition of a separate dedicated drive to hold parity data. The parity data is calculated dynamically as user data is written to the other disks to allow reconstruction of the original user data if a drive fails without requiring replication of the data bit-for-bit. Error detection and correction codes (“ECC”) such as Exclusive-OR (“XOR”) or more sophisticated Reed-Solomon techniques may be used to perform the necessary mathematical calculations on the binary data to produce the parity information in RAID-3 and higher level implementations. While parity allows the reconstruction of the user data in the event of a drive failure, the speed of such reconstruction is a function of system workload and the particular algorithm used.
  • As with RAID-3, the RAID scheme known as RAID-4 consists of N data disks and one parity disk wherein the parity disk sectors contain the bitwise XOR of the corresponding sectors on each data disk. This allows the contents of the data in the RAID set to survive the failure of any one disk. RAID-5 is a modification of RAID-4 which stripes the parity across all of the disks in the array in order to statistically equalize the load on the disks.
  • The designation of RAID-6 has been used colloquially to describe RAID schemes that can withstand the failure of two disks without losing data through the use of two parity drives (commonly referred to as the “P” and “Q” drives) for redundancy and sophisticated ECC techniques. Although the term “parity” is used to describe the codes used in RAID-6 technologies, the codes are more correctly a type of ECC code rather than simply a parity code. Data and ECC information are striped across all members of the RAID set and write performance is generally lower than with RAID-5 because three separate drives must each be accessed twice during writes. However, the principles of RAID-6 may be used to recover a number of drive failures depending on the number of “parity” drives that are used.
  • Some RAID-6 implementations are based upon Reed-Solomon algorithms, which depend on Galois Field arithmetic. A complete explanation of Galois Field arithmetic and the mathematics behind RAID-6 can be found in a variety of sources and, therefore, only a brief overview is provided below as background. The Galois Field arithmetic used in these RAID-6 implementations takes place in GF(2^N). This is the field of polynomials with coefficients in GF(2), modulo some generator polynomial of degree N. All the polynomials in this field are of degree N-1 or less, and their coefficients are all either 0 or 1, which means they can be represented by a vector of N coefficients all in {0,1}; that is, these polynomials “look” just like N-bit binary numbers. Polynomial addition in this Field is simply N-bit XOR, which has the property that every element of the Field is its own additive inverse, so addition and subtraction are the same operation. Polynomial multiplication in this Field, however, can be performed with table lookup techniques based upon logarithms or with simple combinational logic.
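The GF(2^N) properties described above can be demonstrated for N = 8. The generator polynomial 0x11D is a common Reed-Solomon choice assumed here for illustration; the sketch shows addition as XOR (every element its own additive inverse) and multiplication both by combinational shift-and-reduce logic and by logarithm table lookup.

```python
# Illustrative GF(2^8) arithmetic. The generator polynomial
# x^8 + x^4 + x^3 + x^2 + 1 (0x11D) is an assumed, common choice.

GEN = 0x11D

def gf_add(a, b):
    return a ^ b                # polynomial addition is bitwise XOR

def gf_mul(a, b):
    """Shift-and-reduce multiply, i.e. the 'simple combinational logic'."""
    p = 0
    while b:
        if b & 1:
            p ^= a              # add the current multiple of a
        a <<= 1                 # multiply a by x
        if a & 0x100:
            a ^= GEN            # reduce modulo the generator polynomial
        b >>= 1
    return p

# Every element is its own additive inverse: a + a = 0, so addition and
# subtraction are the same operation.
assert all(gf_add(a, a) == 0 for a in range(256))

# Multiplication via log/antilog table lookup (2, i.e. the polynomial "x",
# generates all 255 nonzero elements of this field).
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = EXP[i + 255] = x
    LOG[x] = i
    x = gf_mul(x, 2)

def gf_mul_table(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

# Both multiplication techniques agree.
assert all(gf_mul(a, b) == gf_mul_table(a, b)
           for a in (0, 1, 2, 0x53) for b in (0, 1, 0xCA, 0xFF))
```

The table form trades a few hundred bytes of lookup memory for constant-time multiplies, which is why both implementations are mentioned as alternatives.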
  • Each RAID-6 check code (i.e., P and Q) expresses an invariant relationship, or equation, between the data on the data disks of the RAID-6 array and the data on one or both of the check disks. If there are C check codes and a set of F disks fail, F≦C, the failed disks can be reconstructed by selecting F of these equations and solving them simultaneously in GF(2N) for the F missing variables. In the RAID-6 systems implemented or contemplated today there are only 2 check disks—check disk P, and check disk Q. It is worth noting that the check disks P and Q change for each stripe of data and parity across the array such that parity data is not written to a dedicated disk but is, instead, striped across all the disks.
  • Even though RAID-6 has been implemented with varying degrees of success in different ways in different systems, there remains an ongoing need to improve the efficiency and reduce the cost of providing RAID-6 protection for data storage. The mathematics of implementing RAID-6 involve calculations that are both complicated and repetitive. Accordingly, efforts to improve the simplicity, cost and efficiency of the circuitry needed to implement RAID-6 remain a priority today and in the future.
  • For example, one limitation of existing RAID-6 designs relates to the performance overhead associated with performing resync (where parity data for a parity stripe is resynchronized with the current data), rebuild (where data from a faulty or missing drive is regenerated based upon the parity data) or other exposed mode operations such as exposed mode reads. A resync operation, for example, requires that, for each parity stripe defined in the disk array, the data be read from all of the disks and used to solve a parity stripe equation by multiplying the data from each disk by an appropriate value and XOR'ing the multiplied data together, as a sum of products, to construct a parity value for the parity stripe. The parity value calculated as the result of solving the parity stripe equation must then be written to the appropriate disk. Moreover, since RAID-6 designs rely on two parity values for each parity stripe, the aforementioned process typically must be performed twice for each parity stripe to generate and write both parity values to the disk array.
  • Likewise, to rebuild an exposed disk, data for each parity stripe must be read from all of the other disks and used to solve a parity stripe equation in a similar multiply-and-XOR manner as is used for resynchronization. The result of solving the parity stripe equation is the data that is written back to the exposed disk. In addition, for other exposed mode operations such as exposed mode read operations, a similar process to a rebuild operation must be performed, albeit without storing back the result of the parity stripe equation to the disk array.
  • In each of these exposed mode operations, however, the requirements of reading data from certain disks and writing data back to certain disks results in substantial performance overhead, specifically with respect to the sequential nature of the various disk access operations on the disk array. A substantial need therefore exists for a manner of improving the performance of a disk array system such as a RAID-6 system to improve performance in connection with resynchronization, rebuild and other exposed mode operations.
  • SUMMARY OF THE INVENTION
  • The invention addresses these and other problems associated with the prior art through a number of techniques that individually or collectively increase parallelism in terms of accessing the disks in a disk array, and thereby reduce the performance overhead associated with exposed mode operations such as resynchronization, rebuild and exposed mode read operations.
  • In one aspect, for example, accesses to disks in a disk array for the purpose of solving a parity stripe equation (e.g., in connection with a rebuild, exposed mode read or other exposed mode operation) may be optimized by selecting only a subset of the possible disks required to solve the parity stripe equation, and thus omitting accesses to one or more disks. By doing so, utilization of the disks in a disk array typically may be better balanced when a number of such operations are performed over a particular time period, so long as different subsets of disks are selected for different operations.
  • While other disk array environments may be used, when implemented in a RAID-6 environment, where the data in a parity stripe is related via two parity stripe equations, each selected subset may comprise N-2 of the N disks in the disk array. Moreover, while other manners of selecting subsets of disks may be used, in one embodiment a random selection mechanism may be used such that certain disks are randomly omitted.
  • Consistent with this aspect of the invention, a disk array of N disks may be accessed such that, for each of a plurality of parity stripes defined in the disk array, a different subset of disks among the N disks to be used to solve a parity stripe equation for such parity stripe is selected. Retrieval of data associated with each parity stripe may then be initiated only from the selected subset of disks for that parity stripe, with such retrieved data used to solve the parity stripe equation for that parity stripe. In addition, each selected subset of disks includes at most N-2 disks.
  • In another aspect, parallelism may be increased in a disk array system by overlapping disk accesses associated with different parity stripes when restoring data in a disk array (e.g., to resynchronize parity and data, or to rebuild data for an exposed disk). Specifically, consistent with this aspect of the invention, restoring data to a disk array may include the retrieval of a first set of data associated with a first parity stripe, coupled with concurrent operations of writing to the disk array a result value generated by processing the first set of data, and reading from the disk array a second set of data associated with a second parity stripe. By overlapping read and write accesses associated with different parity stripes, data associated with multiple parity stripes may be restored with less overhead than if the accesses and operations associated with restoring data to different parity stripes were performed sequentially.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an exemplary computer system that can implement a RAID-6 storage controller in accordance with the principles of the present invention.
  • FIG. 2 is a block diagram illustrating the principal components of a RAID controller of FIG. 1.
  • FIG. 3 depicts a flowchart for performing restoration operations in an overlapping manner to improve utilization of the disk array in a RAID-6 system in accordance with the principles of the present invention.
  • FIG. 4 depicts a flowchart for performing exposed mode read operations with random selection of which disks to access to improve utilization of the disk array in a RAID-6 system in accordance with the principles of the present invention.
  • DETAILED DESCRIPTION
  • The embodiments discussed hereinafter utilize one or both of two techniques to increase parallelism and otherwise reduce the overhead associated with restoring data in a disk array environment such as a RAID-6 environment. One technique described hereinafter selects different subsets of disks to access in connection with an operation such as a rebuild or exposed read operation. Another technique described hereinafter overlaps read and write accesses associated with restoration operations performed with respect to multiple parity stripes.
  • Presented hereinafter are a number of embodiments of a disk array environment implementing the aforementioned techniques. However, prior to discussing such embodiments, a brief background on RAID-6 is provided, followed by a description of an exemplary hardware environment within which the aforementioned techniques may be implemented.
  • General RAID-6 Background
  • The nomenclature used herein to describe RAID-6 storage systems conforms to the most readily accepted standards for this field. In particular, there are N drives of which any two are considered to be the parity drives, P and Q. Using Galois Field arithmetic, two independent equations can be written:
    α^0·d_0 + α^0·d_1 + α^0·d_2 + … + α^0·d_{N-1} = 0   (1)
    α^0·d_0 + α^1·d_1 + α^2·d_2 + … + α^{N-1}·d_{N-1} = 0   (2)
    where the “+” operator used herein represents an Exclusive-OR (XOR) operation.
  • In these equations, α^x is an element of the finite field and d_x is the data from the xth disk. While the P and Q disks can be any of the N disks for any particular stripe of data, they are often denoted as d_P and d_Q. When the data on one of the disks (i.e., d_X) is updated, the above two equations resolve to:
  • Δ = (old d_X) + (new d_X)   (3)
    (new d_P) = (old d_P) + ((α^Q + α^X)/(α^P + α^Q))·Δ   (4)
    (new d_Q) = (old d_Q) + ((α^P + α^X)/(α^P + α^Q))·Δ   (5)
  • In each of the last two equations the term to the right of the addition sign is a constant multiplied by the change in the data (i.e., Δ). These terms in equations (4) and (5) are often denoted as K1Δ and K2Δ, respectively.
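  • As a non-limiting sketch of equations (3)-(5), the following Python fragment computes the constants K1 and K2 and folds a data change into both parity values. The field parameters (GF(2^8), polynomial 0x11D, primitive element α = 0x02) and the helper names are illustrative assumptions, not part of the claimed design.

```python
# Field parameters assumed for illustration only.
ALPHA = 0x02

def gf_mul(a, b, poly=0x11D):
    """Shift-and-add multiplication in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def gf_pow(a, n):
    """Repeated multiplication; gf_pow(ALPHA, i) yields the α^i terms."""
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    """Multiplicative inverse: a^254 == a^-1, since a^255 == 1 for a != 0."""
    return gf_pow(a, 254)

def update_pq(old_x, new_x, old_p, old_q, x, p, q):
    """Equations (3)-(5): fold the change on data disk x into both
    parity values using the constants K1 and K2."""
    delta = old_x ^ new_x                                        # (3)
    denom_inv = gf_inv(gf_pow(ALPHA, p) ^ gf_pow(ALPHA, q))
    k1 = gf_mul(gf_pow(ALPHA, q) ^ gf_pow(ALPHA, x), denom_inv)
    k2 = gf_mul(gf_pow(ALPHA, p) ^ gf_pow(ALPHA, x), denom_inv)
    return old_p ^ gf_mul(k1, delta), old_q ^ gf_mul(k2, delta)  # (4), (5)
```

For a three-disk stripe with data disk 0 and parity disks P = 1 and Q = 2, updating d_0 from zero to any value leaves both invariants (1) and (2) satisfied, which can be checked directly against the definitions above.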
  • In the case of one missing, or unavailable drive, simple XOR'ing can be used to recover the drive's data. For example, if d1 fails then d1 can be restored by
    d_1 = d_0 + d_2 + d_3 + …   (6)
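  • A minimal sketch of equation (6), assuming one symbol per disk, illustrates that single-drive recovery reduces to a plain XOR of the surviving members; the helper name is hypothetical.

```python
from functools import reduce

def xor_recover(stripe, missing):
    """Equation (6): rebuild one missing disk's symbol by XOR'ing the
    symbols of all surviving members of the parity stripe."""
    return reduce(lambda a, b: a ^ b,
                  (d for i, d in enumerate(stripe) if i != missing), 0)

# A stripe satisfying equation (1) XORs to zero across all members,
# so any single member can be regenerated from the rest.
stripe = [0x11, 0x22, 0x33, 0x11 ^ 0x22 ^ 0x33]
assert xor_recover(stripe, missing=1) == 0x22
```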
  • In the case of two drives failing, or being “exposed”, the above equations can be used to restore a drive's data. For example, given drives 0 through X and assuming drives A and B have failed, the data for either drive can be restored from the remaining drives. If for example, drive A was to be restored, the above equations reduce to:
    d_A = ((α^B + α^0)/(α^B + α^A))·d_0 + ((α^B + α^1)/(α^B + α^A))·d_1 + … + ((α^B + α^X)/(α^B + α^A))·d_X   (7)
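  • The two-drive recovery of equation (7) may likewise be sketched as a sum of products over the surviving drives. The field parameters (GF(2^8), polynomial 0x11D, α = 0x02) and helper routines below are assumptions for illustration; a hardware implementation would typically use log/antilog tables instead.

```python
ALPHA = 0x02  # assumed primitive element

def gf_mul(a, b, poly=0x11D):
    """Shift-and-add multiplication in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^254 == a^-1 in GF(2^8)

def restore_drive(surviving, a, b):
    """Equation (7): rebuild failed drive A when drives A and B are both
    exposed. `surviving` maps each surviving drive index to its symbol;
    each term is the symbol scaled by (α^B + α^i)/(α^B + α^A)."""
    denom_inv = gf_inv(gf_pow(ALPHA, b) ^ gf_pow(ALPHA, a))
    acc = 0
    for i, d in surviving.items():
        coeff = gf_mul(gf_pow(ALPHA, b) ^ gf_pow(ALPHA, i), denom_inv)
        acc ^= gf_mul(coeff, d)
    return acc
```

For a three-drive array satisfying equations (1) and (2) with drives 0 and 1 failed, the single surviving symbol d_2 suffices to regenerate either missing drive.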
    Exemplary Hardware Environment
  • With this general background of RAID-6 in mind, attention can be turned to the drawings, wherein like numbers denote like parts throughout the several views. FIG. 1 illustrates an exemplary computer system in which a RAID-6, or other disk array, may be implemented. For the purposes of the invention, apparatus 10 may represent practically any type of computer, computer system or other programmable electronic device, including a client computer, a server computer, a portable computer, a handheld computer, an embedded controller, etc. Moreover, apparatus 10 may be implemented using one or more networked computers, e.g., in a cluster or other distributed computing system. Apparatus 10 will hereinafter also be referred to as a "computer", although it should be appreciated that the term "apparatus" may also include other suitable programmable electronic devices consistent with the invention.
  • Computer 10 typically includes at least one processor 12 coupled to a memory 14. Processor 12 may represent one or more processors (e.g., microprocessors), and memory 14 may represent the random access memory (RAM) devices comprising the main storage of computer 10, as well as any supplemental levels of memory, e.g., cache memories, non-volatile or backup memories (e.g., programmable or flash memories), read-only memories, etc. In addition, memory 14 may be considered to include memory storage physically located elsewhere in computer 10, e.g., any cache memory in a processor 12, as well as any storage capacity used as a virtual memory, e.g., as stored on the disk array 34 or on another computer coupled to computer 10 via network 18 (e.g., a client computer 20).
  • Computer 10 also typically receives a number of inputs and outputs for communicating information externally. For interface with a user or operator, computer 10 typically includes one or more user input devices 22 (e.g., a keyboard, a mouse, a trackball, a joystick, a touchpad, and/or a microphone, among others) and a display 24 (e.g., a CRT monitor, an LCD display panel, and/or a speaker, among others). Otherwise, user input may be received via another computer (e.g., a computer 20) interfaced with computer 10 over network 18, or via a dedicated workstation interface or the like. For additional storage, computer 10 may also include one or more mass storage devices accessed via a storage controller, or adapter, 16, e.g., a removable disk drive, a hard disk drive, a direct access storage device (DASD), an optical drive (e.g., a CD drive, a DVD drive, etc.), and/or a tape drive, among others. Furthermore, computer 10 may include an interface with one or more networks 18 (e.g., a LAN, a WAN, a wireless network, and/or the Internet, among others) to permit the communication of information with other computers coupled to the network. It should be appreciated that computer 10 typically includes suitable analog and/or digital interfaces between processor 12 and each of components 14, 16, 18, 22 and 24 as is well known in the art.
  • In accordance with the principles of the present invention, the mass storage controller 16 advantageously implements RAID-6 storage protection within an array of disks 34.
  • Computer 10 operates under the control of an operating system 30, and executes or otherwise relies upon various computer software applications, components, programs, objects, modules, data structures, etc. (e.g., software applications 32). Moreover, various applications, components, programs, objects, modules, etc. may also execute on one or more processors in another computer coupled to computer 10 via a network 18, e.g., in a distributed or client-server computing environment, whereby the processing required to implement the functions of a computer program may be allocated to multiple computers over a network.
  • In general, the routines executed to implement the embodiments of the invention, whether implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions, or even a subset thereof, will be referred to herein as “computer program code,” or simply “program code.” Program code typically comprises one or more instructions that are resident at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause that computer to perform the steps necessary to execute steps or elements embodying the various aspects of the invention. Moreover, while the invention has and hereinafter will be described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and that the invention applies equally regardless of the particular type of computer readable signal bearing media used to actually carry out the distribution. Examples of computer readable signal bearing media include but are not limited to recordable type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, magnetic tape, optical disks (e.g., CD-ROM's, DVD's, etc.), among others, and transmission type media such as digital and analog communication links.
  • In addition, various program code described hereinafter may be identified based upon the application within which it is implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature. Furthermore, given the typically endless number of manners in which computer programs may be organized into routines, procedures, methods, modules, objects, and the like, as well as the various manners in which program functionality may be allocated among various software layers that are resident within a typical computer (e.g., operating systems, libraries, API's, applications, applets, etc.), it should be appreciated that the invention is not limited to the specific organization and allocation of program functionality described herein.
  • FIG. 2 illustrates a block diagram of the control subsystem of a disk array system, e.g., a RAID-6 compatible system. In particular, the mass storage controller 16 of FIG. 1 is shown in more detail to include a RAID controller 202 that is coupled through a system bus 208 with the processor 12 and through a storage bus 210 to various disk drives 212-218. As known to one of ordinary skill, these buses may be proprietary in nature or conform to industry standards such as SCSI-1, SCSI-2, etc. The RAID controller includes a microcontroller 204 that executes program code that implements the RAID-6 algorithm for data protection, and that is typically resident in memory located in the RAID controller. In particular, data to be stored on the disks 212-218 is used to generate parity data and then broken apart and striped across the disks 212-218. The disk drives 212-218 can be individual disk drives that are directly coupled to the controller 202 through the bus 210 or may include their own disk drive adapters that permit a string of individual disk drives to be connected to the storage bus 210. In other words, a disk drive 212 may be physically implemented as 4 or 8 separate disk drives coupled to a single controller connected to the bus 210. As data is exchanged between the disk drives 212-218 and the RAID controller 202, in either direction, buffers 206 are provided to assist in the data transfers. The utilization of the buffers 206 can sometimes produce a bottleneck in data transfers, and the inclusion of numerous buffers may increase the cost, complexity and size of the RAID controller 202. Thus, certain embodiments of the present invention relate to provisioning and utilizing these buffers 206 in an economical and efficient manner.
  • It will be appreciated that the embodiment illustrated in FIGS. 1 and 2 is merely exemplary in nature. For example, it will be appreciated that the invention may be applicable to disk array environments other than RAID-6 environments. It will also be appreciated that a disk array environment consistent with the invention may utilize a completely software-implemented control algorithm resident in the main storage of the computer, or that some functions handled via program code in a computer or controller can be implemented in hardware logic circuits, and vice versa. Therefore, the invention should not be limited to the particular embodiments discussed herein.
  • Increasing Parallelism in RAID-6 Disk Accesses
  • In a RAID-6 system, when performing a restoration operation such as resyncing parity and data, rebuilding a disk, or performing an exposed mode read, a number of I/O operations on the different disks must be performed to read the available data, and if appropriate, store restored data back to the disk array. After reading the data for a particular parity stripe, the appropriate calculations may be performed to restore either the data on a disk or the parity information in the RAID array. Embodiments of the present invention include techniques for performing these operations in such a manner as to maximize the parallelism of the various I/O operations and to better balance disk utilization.
  • It has been found, for example, that improvements in performance may be obtained by selectively omitting accesses to disks in a disk array in connection with various restoration operations. As mentioned previously, RAID-6 is designed to handle two disk failures and, therefore, equation (7) above may be solved using data from N-2 disks. If two disks have failed, then the data for a disk, from equation (7), is recoverable using the remaining N-2 disks. Even when only one disk has failed, data for that disk is recoverable in accordance with equation (7); in such a circumstance, however, since only N-2 of the N-1 surviving disks are needed, the data from one of the surviving disks may be omitted when solving the equation.
  • In RAID-5 implementations, any attempt to restore parity or data for a given disk (e.g., for resyncing parity and data, rebuilding the disk, or performing an exposed mode read) requires that all other disks in the array be accessed. Given, however, that RAID-6 implementations do not require the data from all other disks to solve a parity stripe equation, it has been found that a disk may not even need to be accessed in connection with solving such an equation. As a result, it may be desirable in embodiments consistent with the invention to omit an access to one or more disks in association with retrieving data used to solve a parity stripe equation, and thereby reduce the overall utilization of such disks.
  • Furthermore, while one particular disk could be omitted in all situations where a parity stripe equation needs to be solved, it is typically desirable to select different subsets of disks to omit when solving a parity stripe equation for different parity stripes, e.g., in connection with a restoration operation such as a disk rebuild or a series of exposed mode read operations. Therefore, instead of one disk consistently being unused during restoration operations, the determination of which disk to not use during a given restoration operation may be performed so as to better balance utilization levels among all of the disks.
  • Various manners of selecting different subsets of disks may be used consistent with the invention. In one embodiment, random selection may be used. In other embodiments, however, other load balancing-type algorithms may be used, e.g., round robin selection. It will be appreciated that the selection of different subsets does not require that each subset be different from every other subset, only that which disks are incorporated into the subsets used in solving parity stripe equations changes from time to time (e.g., for each parity stripe, or for subsets of parity stripes) such that the utilization of the disks in a disk array is better balanced than were the same disk(s) omitted for every parity stripe.
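  • A hypothetical selection routine illustrates the point; the policy names and parameters below are illustrative only, since the invention requires merely that the omitted disk vary from stripe to stripe so that utilization is spread across the array.

```python
import random

def choose_subset(n_disks, stripe_index, failed, policy="round_robin"):
    """Pick the disks to read when solving a parity stripe equation,
    omitting one healthy disk per stripe so that utilization is spread
    across the array rather than concentrated on the same N-2 members."""
    healthy = [d for d in range(n_disks) if d not in failed]
    if policy == "round_robin":
        omit = healthy[stripe_index % len(healthy)]
    else:  # "random": the random-omission mechanism mentioned above
        omit = random.choice(healthy)
    return [d for d in healthy if d != omit]

# With one failed disk in an 8-disk array, each stripe reads N-2 = 6
# disks, and successive stripes omit a different healthy disk.
subsets = [choose_subset(8, s, failed={3}) for s in range(7)]
assert all(len(s) == 6 for s in subsets)
assert len({frozenset(s) for s in subsets}) == 7
```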
  • Additionally, it has also been found that improvements in performance may be obtained by overlapping disk accesses associated with multiple parity stripes in connection with various restoration operations. For example, when a parity stripe is resynchronized, the data drives are first read and then the result of the parity calculations is written to the parity drive. In conventional designs, during the time that the data drives are being read, the parity drives remain idle. During a rebuild, a similar underutilization of the disk(s) being rebuilt occurs as well. Embodiments consistent with the invention address this inefficiency by overlapping the read and write operations associated with restoring data to multiple parity stripes to reduce the idle time of the disks in a given disk array. In addition to RAID-6 and similar environments, overlapped disk accesses as described herein may also be used in other disk array environments, e.g., in RAID-5 environments.
  • The flowchart of an exemplary method for accomplishing a restore operation (e.g., a resync or rebuild operation) is depicted in FIG. 3. In accordance with this method, accesses for two different parity resync operations are interleaved so that accesses to both the parity and the data disks can occur in parallel and, therefore, reduce the overall idle time of the disks and improve the time it takes to perform rebuilds and resyncs. It will be appreciated that a rebuild operation for two or more parity stripes proceeds in a similar manner.
  • In the flowchart of FIG. 3, a set of data distributed across the data disks in a parity stripe A is used to calculate parity values P and Q for parity stripe A. Also, a set of data distributed across the data disks in a parity stripe B is used to calculate different parity values P and Q for parity stripe B. In step 302, a first set of read operations directed to the data disks, and specifically to the regions thereof located in parity stripe A, is performed to retrieve a set of data used to calculate a corresponding parity value P for parity stripe A. Concurrently, a second set of read operations is queued that will retrieve a different set of data from the region allocated to parity stripe B on each of the data disks, which is used to calculate the corresponding parity value P for parity stripe B. Once the first set of read operations is complete, the new parity value P may be written to the P parity disk for parity stripe A, in step 304, while the second set of read operations is being executed by the other disks of the disk array. In step 306, a third set of read operations is performed, this time to retrieve the data from parity stripe A a second time to generate the parity value Q, and, concurrently, the parity value P for parity stripe B is written to the P parity disk. Next, a fourth set of read operations is performed, in step 308, to read the set of data from parity stripe B, which is used to generate the parity value Q for parity stripe B. While these latter read operations are being performed, the parity value Q is written to the Q parity disk for parity stripe A. Finally, in step 310, the parity value Q for parity stripe B is written to the Q parity disk.
  • By overlapping resync and rebuild operations in accordance with this algorithm, the parity drives and the data drives are more equally utilized which improves the performance of the resync and rebuild functions. One of ordinary skill in the art having the benefit of the instant disclosure will note that the aforementioned algorithm may be applied to overlap operations between any number of parity stripes.
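  • The overlapped schedule of steps 302-310 may be modeled abstractly as follows; each inner tuple groups accesses that proceed concurrently, and the operation labels are purely illustrative rather than actual I/O commands.

```python
def interleaved_resync(a="A", b="B"):
    """Return the overlapped schedule of steps 302-310: each inner tuple
    groups disk accesses issued concurrently, so parity writes for one
    stripe proceed while data reads for the other are in flight."""
    return [
        (f"read {a} data for P",),                    # step 302
        (f"write {a} P", f"read {b} data for P"),     # step 304
        (f"read {a} data for Q", f"write {b} P"),     # step 306
        (f"read {b} data for Q", f"write {a} Q"),     # step 308
        (f"write {b} Q",),                            # step 310
    ]

# Four read batches and four parity writes complete in five time slots
# instead of the eight a strictly sequential schedule would need.
schedule = interleaved_resync()
assert len(schedule) == 5
```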
  • FIG. 4 next illustrates an exemplary method for accomplishing an exposed read operation, e.g., to retrieve data from an exposed disk. In accordance with this method, accesses for two exposed read operations to two parity stripes are illustrated, with one such access being performed in step 400, and another access being performed in step 402. In both operations, a different subset of N-2 disks is selected randomly from among the N-1 disks containing data from the parity stripe that can be used to solve the parity stripe equation and generate the data for the exposed disk. As a result, in each operation, one disk in the disk array will not be accessed, leaving the disk free to perform other operations (including, for example, handling overlapped accesses such as those described above in connection with FIG. 3). It will be appreciated that randomly omitting different disks from a series of operations will assist in better balancing disk utilization across the disk array, and thus improve overall system throughput. It will also be appreciated that rebuild operations may utilize such a technique in a similar manner.
  • Thus, embodiments of the present invention provide a method and system, within a RAID-6 or similar disk array environment, that interleaves different disk access operations and/or selects different disks to be used while performing restore operations to balance disk utilization and decrease latency. Various modifications may be made to the illustrated embodiments without departing from the spirit and scope of the invention. Therefore, the invention lies in the claims hereinafter appended.

Claims (38)

1. A method of accessing a disk array comprising N disks, the method comprising the steps of, for each of a plurality of parity stripes defined in the disk array:
selecting a different subset of disks among the N disks to be used to solve a parity stripe equation for such parity stripe, wherein each subset of disks includes at most N-2 disks;
initiating retrieval of data associated with such parity stripe only from the selected subset of disks; and
solving the parity stripe equation using the retrieved data.
2. The method of claim 1, wherein the step of selecting comprises the step of randomly selecting the subset of disks.
3. The method of claim 1, wherein the disk array is of the type wherein the data in each parity stripe is related by multiple parity stripe equations.
4. The method of claim 1, wherein the disk array comprises a RAID-6 system.
5. The method of claim 1, wherein solving the parity stripe equation comprises rebuilding a data value, the method further comprising initiating storage of the data value to one of the disks other than the subset of disks.
6. The method of claim 1, further comprising initiating storage of a result of the parity stripe equation for a first parity stripe concurrently with initiating retrieval of data associated with a second parity stripe.
7. A method of restoring data in a RAID-6 system of N disks, the method comprising the steps of:
identifying a plurality of data values, each to be restored to a respective one of the N disks, wherein each data value is capable of being restored from data retrieved from the other N-1 disks; and
for each of the plurality of data values selecting N-2 disks from the respective other N-1 disks to be used to calculate the data value; and
initiating retrieval of data from the respective selected N-2 disks for each of the plurality of data values, wherein the selection of N-2 disks for each of the plurality of data values balances utilization of the N disks during restoration of the data.
8. The method of claim 7, wherein the step of selecting N-2 disks includes selecting the N-2 disks randomly.
9. A program product comprising:
program code configured upon execution to access a disk array of the type comprising N disks by, for each of a plurality of parity stripes defined in the disk array, selecting a different subset of disks among the N disks to be used to solve a parity stripe equation for such parity stripe, initiating retrieval of data associated with such parity stripe only from the selected subset of disks, and solving the parity stripe equation using the retrieved data, wherein each subset of disks includes at most N-2 disks; and
a computer readable signal bearing medium bearing the program code.
10. An apparatus comprising:
an interface configured to couple to at least N disks in a disk array; and
a disk array controller coupled to the interface, the disk array controller configured to, for each of a plurality of parity stripes defined in the disk array, select a different subset of disks among the N disks to be used to solve a parity stripe equation for such parity stripe, initiate retrieval of data associated with such parity stripe only from the selected subset of disks, and solve the parity stripe equation using the retrieved data, wherein each subset of disks includes at most N-2 disks.
11. The apparatus of claim 10, wherein the disk array controller comprises a RAID-6 controller.
12. The apparatus of claim 10, wherein the disk array controller comprises program code configured to perform at least one of selecting the different subset, initiating retrieval of the data, and solving the parity stripe equation.
13. The apparatus of claim 10, further comprising a plurality of disks coupled to the interface.
14. The apparatus of claim 10, wherein the disk array controller is configured to select the different subset of disks by randomly selecting the subset of disks.
15. The apparatus of claim 10, wherein the disk array controller is configured to solve the parity stripe equation by rebuilding a data value, and to initiate storage of the data value to one of the disks other than the subset of disks.
16. The apparatus of claim 10, wherein the disk array controller is further configured to initiate storage of a result of the parity stripe equation for a first parity stripe concurrently with initiating retrieval of data associated with a second parity stripe.
17. A method of restoring data in a disk array comprising a plurality of disks, the method comprising the steps of:
reading from the disk array a first set of data associated with a first parity stripe;
writing to the disk array a result value generated by processing the first set of data; and
concurrently with writing the result value to the disk array, reading from the disk array a second set of data associated with a second parity stripe.
18. The method of claim 17, further comprising:
writing to the disk array a second result value generated by processing the second set of data; and
concurrently with writing the second result value to the disk array, reading from the disk array a third set of data.
19. The method of claim 18, wherein the first and second result values are generated by processing the first and second sets of data using at least one parity stripe equation.
20. The method of claim 19, wherein the third set of data is associated with the first parity stripe, the method further comprising:
writing to the disk array a third result value generated by processing the third set of data using a second parity stripe equation.
21. The method of claim 20, further comprising:
concurrently with writing the third result value to the disk array, reading from the disk array a fourth set of data associated with the second parity stripe; and
writing to the disk array a fourth result value generated by processing the fourth set of data using the second parity stripe equation.
22. The method of claim 19, wherein the first and second result values each comprise a parity value, and wherein writing the first and second result values is performed to synchronize parity and data for the first and second parity stripes.
23. The method of claim 19, wherein the first and second result values each comprise a data value, and wherein writing the first and second result values is performed to rebuild data for a disk in the disk array.
24. The method of claim 18, wherein the third set of data is associated with a third parity stripe.
25. The method of claim 17, wherein writing the result value to the disk array comprises writing the result value to a first disk in the disk array, and wherein reading the second set of data from the disk array comprises reading the second set of data from a subset of disks in the disk array that excludes the first disk.
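The read/write overlap recited in claims 17-21 amounts to a two-stage pipeline: while the result for one parity stripe is being written, the data for the next stripe is already being read. The sketch below illustrates that scheduling only; `read_stripe`, `process`, and `write_result` are hypothetical callables standing in for the disk operations, and the thread-pool approach is an assumption made for the example, not the controller design described in the patent.

```python
from concurrent.futures import ThreadPoolExecutor

def pipelined_restore(stripes, read_stripe, process, write_result):
    """Restore a sequence of parity stripes, overlapping the write of
    stripe i's result with the read of stripe i+1's data so the array
    is not idle while a write completes."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        data = read_stripe(stripes[0])
        for i in range(1, len(stripes)):
            # Issue the previous stripe's write and the next stripe's
            # read concurrently (claim 17's "concurrently with writing").
            write_f = pool.submit(write_result, stripes[i - 1], process(data))
            read_f = pool.submit(read_stripe, stripes[i])
            data = read_f.result()
            write_f.result()
        # Drain the pipeline: write the final stripe's result.
        write_result(stripes[-1], process(data))
```

Per claim 25, contention is avoided when each read subset excludes the disk currently receiving the concurrent write.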
26. A program product comprising:
program code configured upon execution to restore data in a disk array of the type comprising a plurality of disks by reading from the disk array a first set of data associated with a first parity stripe, writing to the disk array a result value generated by processing the first set of data, and reading from the disk array a second set of data associated with a second parity stripe concurrently with writing the result value to the disk array; and
a computer readable signal bearing medium bearing the program code.
27. An apparatus comprising:
an interface configured to couple to a plurality of disks in a disk array; and
a disk array controller coupled to the interface, the disk array controller configured to restore data in the disk array by reading from the disk array a first set of data associated with a first parity stripe, writing to the disk array a result value generated by processing the first set of data, and reading from the disk array a second set of data associated with a second parity stripe concurrently with writing the result value to the disk array.
28. The apparatus of claim 27, wherein the disk array controller comprises a RAID-6 controller.
29. The apparatus of claim 27, wherein the disk array controller comprises program code configured to perform at least one of selecting the different subset, initiating retrieval of the data, and solving the parity stripe equation.
30. The apparatus of claim 27, further comprising a plurality of disks coupled to the interface.
31. The apparatus of claim 27, wherein the disk array controller is further configured to write to the disk array a second result value generated by processing the second set of data, and read from the disk array a third set of data concurrently with writing the second result value to the disk array.
32. The apparatus of claim 31, wherein the disk array controller is configured to generate the first and second result values by processing the first and second sets of data using at least one parity stripe equation.
33. The apparatus of claim 32, wherein the third set of data is associated with the first parity stripe, and wherein the disk array controller is further configured to write to the disk array a third result value generated by processing the third set of data using a second parity stripe equation.
34. The apparatus of claim 33, wherein the disk array controller is further configured to read from the disk array a fourth set of data associated with the second parity stripe concurrently with writing the third result value to the disk array, and to write to the disk array a fourth result value generated by processing the fourth set of data using the second parity stripe equation.
35. The apparatus of claim 32, wherein the first and second result values each comprise a parity value, and wherein the disk array controller is configured to write the first and second result values to synchronize parity and data for the first and second parity stripes.
36. The apparatus of claim 32, wherein the first and second result values each comprise a data value, and wherein the disk array controller is configured to write the first and second result values to rebuild data for a disk in the disk array.
37. The apparatus of claim 31, wherein the third set of data is associated with a third parity stripe.
38. The apparatus of claim 27, wherein the disk array controller is configured to write the result value to the disk array by writing the result value to a first disk in the disk array, and wherein the disk array controller is configured to read the second set of data from the disk array by reading the second set of data from a subset of disks in the disk array that excludes the first disk.
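As a concrete instance of "solving a parity stripe equation": in the simplest single-parity case the equation is P = d0 XOR d1 XOR ... XOR dn, so a lost data block is rebuilt by XORing the parity block with the surviving data blocks. The sketch below shows only this XOR case; a RAID-6 controller of the type claimed would additionally maintain a second (Q) parity computed over a Galois field, which is omitted here.

```python
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def rebuild_lost_block(surviving_data, parity):
    """Solve P = d0 ^ d1 ^ ... for the single missing data block:
    d_lost = P ^ (XOR of the surviving data blocks). The rebuilt
    value would then be written to a replacement disk, as in the
    rebuild operations of claims 15 and 23."""
    return xor_blocks(surviving_data + [parity])
```

The same structure applies per stripe: read the selected subset, solve the equation, write the result while the next stripe's reads proceed.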
US10/994,098 2004-11-19 2004-11-19 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system Abandoned US20060123312A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US10/994,098 US20060123312A1 (en) 2004-11-19 2004-11-19 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system
CN200710100836.9A CN101059751B (en) 2004-11-19 2005-11-21 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system
CNB2005101267241A CN100345099C (en) 2004-11-19 2005-11-21 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system
US11/923,280 US7669107B2 (en) 2004-11-19 2007-10-24 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/994,098 US20060123312A1 (en) 2004-11-19 2004-11-19 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/923,280 Division US7669107B2 (en) 2004-11-19 2007-10-24 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system

Publications (1)

Publication Number Publication Date
US20060123312A1 (en) 2006-06-08

Family

ID=36575805

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/994,098 Abandoned US20060123312A1 (en) 2004-11-19 2004-11-19 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system
US11/923,280 Expired - Fee Related US7669107B2 (en) 2004-11-19 2007-10-24 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/923,280 Expired - Fee Related US7669107B2 (en) 2004-11-19 2007-10-24 Method and system for increasing parallelism of disk accesses when restoring data in a disk array system

Country Status (2)

Country Link
US (2) US20060123312A1 (en)
CN (2) CN101059751B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060294416A1 (en) * 2005-06-22 2006-12-28 Accusys, Inc. XOR circuit, raid device capable of recovering a plurality of failures and method thereof
US20090228648A1 (en) * 2008-03-04 2009-09-10 International Business Machines Corporation High performance disk array rebuild
US8037391B1 (en) * 2009-05-22 2011-10-11 Nvidia Corporation Raid-6 computation system and method
CN102419697A (en) * 2011-11-02 2012-04-18 华中科技大学 Method for reconstructing single disk in vertical redundant array of independent disks (RAID)-6 coding
US8296515B1 (en) 2009-05-22 2012-10-23 Nvidia Corporation RAID-6 computation system and method
WO2013057764A1 (en) * 2011-10-19 2013-04-25 Hitachi, Ltd. Storage system
EP2924577A1 (en) * 2014-03-28 2015-09-30 Fujitsu Limited Storage control apparatus, storage control program, and storage control method
US20160246518A1 (en) * 2015-02-20 2016-08-25 International Business Machines Corporation Raid array systems and operations using mapping information
US10372366B2 (en) * 2007-03-29 2019-08-06 Violin Systems Llc Memory system with multiple striping of RAID groups and method for performing the same
US11010076B2 (en) 2007-03-29 2021-05-18 Violin Systems Llc Memory system with multiple striping of raid groups and method for performing the same
CN113407122A (en) * 2016-12-21 2021-09-17 伊姆西Ip控股有限责任公司 RAID reconstruction method and equipment
CN114063908A (en) * 2021-10-23 2022-02-18 苏州普福斯信息科技有限公司 Hard disk read-write processing method and device based on RAID and storage medium
US20220100602A1 (en) * 2020-09-29 2022-03-31 Micron Technology, Inc. Apparatuses and methods for cyclic redundancy calculation for semiconductor device

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100403249C (en) * 2006-06-19 2008-07-16 威盛电子股份有限公司 Magnetic disk array data configuration structure and data acces method thereof
US7904619B2 (en) 2006-11-24 2011-03-08 Sandforce, Inc. System, method, and computer program product for reducing memory write operations using difference information
US7904672B2 (en) 2006-12-08 2011-03-08 Sandforce, Inc. System and method for providing data redundancy after reducing memory writes
US7788526B2 (en) * 2007-01-10 2010-08-31 International Business Machines Corporation Providing enhanced tolerance of data loss in a disk array system
US7849275B2 (en) 2007-11-19 2010-12-07 Sandforce, Inc. System, method and a computer program product for writing data to different storage devices based on write frequency
US8510643B2 (en) * 2009-12-23 2013-08-13 Nvidia Corporation Optimizing raid migration performance
CN101923501B (en) * 2010-07-30 2012-01-25 华中科技大学 Disk array multi-level fault tolerance method
US8683296B2 (en) 2011-12-30 2014-03-25 Streamscale, Inc. Accelerated erasure coding system and method
US8914706B2 (en) 2011-12-30 2014-12-16 Streamscale, Inc. Using parity data for concurrent data authentication, correction, compression, and encryption
US9594652B1 (en) * 2013-12-19 2017-03-14 Veritas Technologies Systems and methods for decreasing RAID rebuilding time
US10133630B2 (en) 2016-09-06 2018-11-20 International Business Machines Corporation Disposable subset parities for use in a distributed RAID

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5134619A (en) * 1990-04-06 1992-07-28 Sf2 Corporation Failure-tolerant mass storage system
US5140592A (en) * 1990-03-02 1992-08-18 Sf2 Corporation Disk array system
USRE34100E (en) * 1987-01-12 1992-10-13 Seagate Technology, Inc. Data error correction system
US5390187A (en) * 1990-10-23 1995-02-14 Emc Corporation On-line reconstruction of a failed redundant array system
US5488731A (en) * 1992-08-03 1996-01-30 International Business Machines Corporation Synchronization method for loosely coupled arrays of redundant disk drives
US5499253A (en) * 1994-01-05 1996-03-12 Digital Equipment Corporation System and method for calculating RAID 6 check codes
US5754563A (en) * 1995-09-11 1998-05-19 Ecc Technologies, Inc. Byte-parallel system for implementing reed-solomon error-correcting codes
US5956524A (en) * 1990-04-06 1999-09-21 Micro Technology Inc. System and method for dynamic alignment of associated portions of a code word from a plurality of asynchronous sources
US6092215A (en) * 1997-09-29 2000-07-18 International Business Machines Corporation System and method for reconstructing data in a storage array system
US6101615A (en) * 1998-04-08 2000-08-08 International Business Machines Corporation Method and apparatus for improving sequential writes to RAID-6 devices
US6279050B1 (en) * 1998-12-18 2001-08-21 Emc Corporation Data transfer apparatus having upper, lower, middle state machines, with middle state machine arbitrating among lower state machine side requesters including selective assembly/disassembly requests
US6408400B2 (en) * 1997-11-04 2002-06-18 Fujitsu Limited Disk array device
US20020166078A1 (en) * 2001-03-14 2002-11-07 Oldfield Barry J. Using task description blocks to maintain information regarding operations
US6480944B2 (en) * 2000-03-22 2002-11-12 Interwoven, Inc. Method of and apparatus for recovery of in-progress changes made in a software application
US20020194427A1 (en) * 2001-06-18 2002-12-19 Ebrahim Hashemi System and method for storing data and redundancy information in independent slices of a storage device
US6567891B2 (en) * 2001-03-14 2003-05-20 Hewlett-Packard Development Company, L.P. Methods and arrangements for improved stripe-based processing
US6687872B2 (en) * 2001-03-14 2004-02-03 Hewlett-Packard Development Company, L.P. Methods and systems of using result buffers in parity operations
US20040049632A1 (en) * 2002-09-09 2004-03-11 Chang Albert H. Memory controller interface with XOR operations on memory read to accelerate RAID operations
US20050108613A1 (en) * 2003-11-17 2005-05-19 Nec Corporation Disk array device, parity data generating circuit for raid and galois field multiplying circuit
US6944791B2 (en) * 2002-07-18 2005-09-13 Lsi Logic Corporation Method of handling unreadable blocks during write of a RAID device
US7206946B2 (en) * 2003-10-09 2007-04-17 Hitachi, Ltd. Disk drive system for starting destaging of unwritten cache memory data to disk drive upon detection of DC voltage level falling below predetermined value

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3688265A (en) 1971-03-18 1972-08-29 Ibm Error-free decoding for failure-tolerant memories
US5673412A (en) 1990-07-13 1997-09-30 Hitachi, Ltd. Disk system and power-on sequence for the same
US5274799A (en) * 1991-01-04 1993-12-28 Array Technology Corporation Storage device array architecture with copyback cache
US5537379A (en) 1991-05-10 1996-07-16 Discovision Associates Optical data storage and retrieval system and method
US5448719A (en) * 1992-06-05 1995-09-05 Compaq Computer Corp. Method and apparatus for maintaining and retrieving live data in a posted write cache in case of power failure
JPH08511368A (en) 1993-06-04 1996-11-26 ネットワーク・アプリアンス・コーポレーション Method for forming parity in RAID subsystem using non-volatile memory
US5530948A (en) 1993-12-30 1996-06-25 International Business Machines Corporation System and method for command queuing on raid levels 4 and 5 parity drives
US5537567A (en) * 1994-03-14 1996-07-16 International Business Machines Corporation Parity block configuration in an array of storage devices
CN1124376A (en) * 1994-12-06 1996-06-12 国际商业机器公司 An improved data storage device and method of operation
US5537534A (en) * 1995-02-10 1996-07-16 Hewlett-Packard Company Disk array having redundant storage and methods for incrementally generating redundancy as data is written to the disk array
US5720025A (en) * 1996-01-18 1998-02-17 Hewlett-Packard Company Frequently-redundant array of independent disks
US6161165A (en) * 1996-11-14 2000-12-12 Emc Corporation High performance data path with XOR on the fly
US5950225A (en) * 1997-02-28 1999-09-07 Network Appliance, Inc. Fly-by XOR for generating parity for data gleaned from a bus
US6542960B1 (en) * 1999-12-16 2003-04-01 Adaptec, Inc. System and method for parity caching based on stripe locking in raid data storage
US6370616B1 (en) * 2000-04-04 2002-04-09 Compaq Computer Corporation Memory interface controller for datum raid operations with a datum multiplier
US6836820B1 (en) 2002-02-25 2004-12-28 Network Appliance, Inc. Flexible disabling of disk sets
US7028136B1 (en) 2002-08-10 2006-04-11 Cisco Technology, Inc. Managing idle time and performing lookup operations to adapt to refresh requirements or operational rates of the particular associative memory or other devices used to implement the system
US7082492B2 (en) 2002-08-10 2006-07-25 Cisco Technology, Inc. Associative memory entries with force no-hit and priority indications of particular use in implementing policy maps in communication devices
US7065609B2 (en) 2002-08-10 2006-06-20 Cisco Technology, Inc. Performing lookup operations using associative memories optionally including selectively determining which associative memory blocks to use in identifying a result and possibly propagating error indications
US7426611B1 (en) 2003-08-18 2008-09-16 Symantec Operating Corporation Method and system for improved storage system performance using cloning of cached data

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE34100E (en) * 1987-01-12 1992-10-13 Seagate Technology, Inc. Data error correction system
US5140592A (en) * 1990-03-02 1992-08-18 Sf2 Corporation Disk array system
US5956524A (en) * 1990-04-06 1999-09-21 Micro Technology Inc. System and method for dynamic alignment of associated portions of a code word from a plurality of asynchronous sources
US5134619A (en) * 1990-04-06 1992-07-28 Sf2 Corporation Failure-tolerant mass storage system
US5390187A (en) * 1990-10-23 1995-02-14 Emc Corporation On-line reconstruction of a failed redundant array system
US5488731A (en) * 1992-08-03 1996-01-30 International Business Machines Corporation Synchronization method for loosely coupled arrays of redundant disk drives
US5499253A (en) * 1994-01-05 1996-03-12 Digital Equipment Corporation System and method for calculating RAID 6 check codes
US5754563A (en) * 1995-09-11 1998-05-19 Ecc Technologies, Inc. Byte-parallel system for implementing reed-solomon error-correcting codes
US6092215A (en) * 1997-09-29 2000-07-18 International Business Machines Corporation System and method for reconstructing data in a storage array system
US6408400B2 (en) * 1997-11-04 2002-06-18 Fujitsu Limited Disk array device
US6101615A (en) * 1998-04-08 2000-08-08 International Business Machines Corporation Method and apparatus for improving sequential writes to RAID-6 devices
US6279050B1 (en) * 1998-12-18 2001-08-21 Emc Corporation Data transfer apparatus having upper, lower, middle state machines, with middle state machine arbitrating among lower state machine side requesters including selective assembly/disassembly requests
US6480944B2 (en) * 2000-03-22 2002-11-12 Interwoven, Inc. Method of and apparatus for recovery of in-progress changes made in a software application
US20020166078A1 (en) * 2001-03-14 2002-11-07 Oldfield Barry J. Using task description blocks to maintain information regarding operations
US6567891B2 (en) * 2001-03-14 2003-05-20 Hewlett-Packard Development Company, L.P. Methods and arrangements for improved stripe-based processing
US6687872B2 (en) * 2001-03-14 2004-02-03 Hewlett-Packard Development Company, L.P. Methods and systems of using result buffers in parity operations
US7111227B2 (en) * 2001-03-14 2006-09-19 Hewlett-Packard Development Company, L.P. Methods and systems of using result buffers in parity operations
US20020194427A1 (en) * 2001-06-18 2002-12-19 Ebrahim Hashemi System and method for storing data and redundancy information in independent slices of a storage device
US6944791B2 (en) * 2002-07-18 2005-09-13 Lsi Logic Corporation Method of handling unreadable blocks during write of a RAID device
US20040049632A1 (en) * 2002-09-09 2004-03-11 Chang Albert H. Memory controller interface with XOR operations on memory read to accelerate RAID operations
US6918007B2 (en) * 2002-09-09 2005-07-12 Hewlett-Packard Development Company, L.P. Memory controller interface with XOR operations on memory read to accelerate RAID operations
US7206946B2 (en) * 2003-10-09 2007-04-17 Hitachi, Ltd. Disk drive system for starting destaging of unwritten cache memory data to disk drive upon detection of DC voltage level falling below predetermined value
US20050108613A1 (en) * 2003-11-17 2005-05-19 Nec Corporation Disk array device, parity data generating circuit for raid and galois field multiplying circuit

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8086939B2 (en) 2005-06-22 2011-12-27 Accusys, Inc. XOR circuit, RAID device capable of recovering a plurality of failures and method thereof
US7685499B2 (en) 2005-06-22 2010-03-23 Accusys, Inc. XOR circuit, RAID device capable of recovering a plurality of failures and method thereof
US20100162088A1 (en) * 2005-06-22 2010-06-24 Accusys, Inc. Xor circuit, raid device capable of recovering a plurality of failures and method thereof
US20060294416A1 (en) * 2005-06-22 2006-12-28 Accusys, Inc. XOR circuit, raid device capable of recovering a plurality of failures and method thereof
US11599285B2 (en) 2007-03-29 2023-03-07 Innovations In Memory Llc Memory system with multiple striping of raid groups and method for performing the same
US10372366B2 (en) * 2007-03-29 2019-08-06 Violin Systems Llc Memory system with multiple striping of RAID groups and method for performing the same
US11010076B2 (en) 2007-03-29 2021-05-18 Violin Systems Llc Memory system with multiple striping of raid groups and method for performing the same
US7970994B2 (en) 2008-03-04 2011-06-28 International Business Machines Corporation High performance disk array rebuild
US20090228648A1 (en) * 2008-03-04 2009-09-10 International Business Machines Corporation High performance disk array rebuild
US8296515B1 (en) 2009-05-22 2012-10-23 Nvidia Corporation RAID-6 computation system and method
US8037391B1 (en) * 2009-05-22 2011-10-11 Nvidia Corporation Raid-6 computation system and method
US8707090B2 (en) 2011-10-19 2014-04-22 Hitachi, Ltd. Storage system
WO2013057764A1 (en) * 2011-10-19 2013-04-25 Hitachi, Ltd. Storage system
US9519554B2 (en) 2011-10-19 2016-12-13 Hitachi, Ltd. Storage system with rebuild operations
CN102419697A (en) * 2011-11-02 2012-04-18 华中科技大学 Method for reconstructing single disk in vertical redundant array of independent disks (RAID)-6 coding
US20150278020A1 (en) * 2014-03-28 2015-10-01 Fujitsu Limited Storage control apparatus, recording medium having stored therein storage control program and storage control method
US9524213B2 (en) * 2014-03-28 2016-12-20 Fujitsu Limited Storage control apparatus, recording medium having stored therein storage control program and storage control method
EP2924577A1 (en) * 2014-03-28 2015-09-30 Fujitsu Limited Storage control apparatus, storage control program, and storage control method
US20160246678A1 (en) * 2015-02-20 2016-08-25 International Business Machines Corporation Raid array systems and operations using mapping information
US10528272B2 (en) * 2015-02-20 2020-01-07 International Business Machines Corporation RAID array systems and operations using mapping information
US10628054B2 (en) * 2015-02-20 2020-04-21 International Business Machines Corporation Raid array systems and operations using mapping information
US20160246518A1 (en) * 2015-02-20 2016-08-25 International Business Machines Corporation Raid array systems and operations using mapping information
CN113407122A (en) * 2016-12-21 2021-09-17 伊姆西Ip控股有限责任公司 RAID reconstruction method and equipment
US20220100602A1 (en) * 2020-09-29 2022-03-31 Micron Technology, Inc. Apparatuses and methods for cyclic redundancy calculation for semiconductor device
US11537462B2 (en) * 2020-09-29 2022-12-27 Micron Technology, Inc. Apparatuses and methods for cyclic redundancy calculation for semiconductor device
CN114063908A (en) * 2021-10-23 2022-02-18 苏州普福斯信息科技有限公司 Hard disk read-write processing method and device based on RAID and storage medium

Also Published As

Publication number Publication date
CN101059751A (en) 2007-10-24
CN100345099C (en) 2007-10-24
CN1776599A (en) 2006-05-24
US20080046648A1 (en) 2008-02-21
US7669107B2 (en) 2010-02-23
CN101059751B (en) 2013-06-12

Similar Documents

Publication Publication Date Title
US7669107B2 (en) Method and system for increasing parallelism of disk accesses when restoring data in a disk array system
US7487394B2 (en) Recovering from abnormal interruption of a parity update operation in a disk array system
US7779335B2 (en) Enhanced error identification with disk array parity checking
US20080022150A1 (en) Method and system for improved buffer utilization for disk array parity updates
US20080040415A1 (en) Raid environment incorporating hardware-based finite field multiplier for on-the-fly xor
US7529970B2 (en) System and method for improving the performance of operations requiring parity reads in a storage array system
US8839028B1 (en) Managing data availability in storage systems
US8583984B2 (en) Method and apparatus for increasing data reliability for raid operations
US6282671B1 (en) Method and system for improved efficiency of parity calculation in RAID system
US20090055682A1 (en) Data storage systems and methods having block group error correction for repairing unrecoverable read errors
US20030056142A1 (en) Method and system for leveraging spares in a data storage system including a plurality of disk drives
US20050102548A1 (en) Method and apparatus for enabling high-reliability storage of distributed data on a plurality of independent storage devices
US20120096329A1 (en) Method of, and apparatus for, detection and correction of silent data corruption
US8239625B2 (en) Parity generator for redundant array of independent discs type memory
US20050278612A1 (en) Storage device parity computation
US9645745B2 (en) I/O performance in resilient arrays of computer storage devices
Gao et al. Reliability analysis of declustered-parity raid 6 with disk scrubbing and considering irrecoverable read errors
Wu et al. Code 5-6: An efficient mds array coding scheme to accelerate online raid level migration
US11150988B1 (en) Metadata pattern to detect write loss/inconsistencies of optimized-write-once operations

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIIONAL BUSINESS MACHINES CORPORATION, NEW

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FORHAN, CARL EDWARD;GALBRAITH, ROBERT EDWARD;GERHARD, ADRIAN CUENIN;REEL/FRAME:015425/0574;SIGNING DATES FROM 20041116 TO 20041118

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE