|Publication number||US6272591 B2|
|Application number||US 09/174,580|
|Publication date||Aug 7, 2001|
|Filing date||Oct 19, 1998|
|Priority date||Oct 19, 1998|
|Also published as||CN1169061C, CN1324462A, EP1141843A1, EP1141843A4, US20010002478, WO2000023901A1, WO2000023901A9|
|Inventors||Paul A. Grun|
|Original Assignee||Intel Corporation|
The present invention is directed to RAID devices. More particularly, the present invention is directed to striping data on a RAID device using multiple virtual channels.
Redundant Array of Inexpensive or Independent Disks (“RAID”) devices are an increasingly popular way to store large amounts of computer data. RAID devices typically consist of a RAID controller and multiple low capacity personal computer type disk drives that are bundled together to form a single high capacity drive. A RAID device is usually less expensive than conventional high capacity drives because the personal computer type drives are relatively inexpensive based on their high volume of production.
Because RAID devices include multiple disk drives, the probability that one of the drives will fail at any given time is relatively high. An issue with RAID devices is how to avoid the loss of data when one or more of the drives fail. One solution to this issue is to “stripe” a single data block across multiple disk drives in the RAID device. The data block is striped by breaking the block into multiple pieces or portions and storing each portion on a different disk drive. Frequently, parity information for the entire block is stored on one of the drives. If a single drive fails, the piece of the data block that was stored on the failed drive can be reassembled based on the remaining portions of the data block and the parity information stored on the other drives. U.S. Pat. No. 4,761,785 discloses an example of a RAID device that performs striping.
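The striping-with-parity scheme described above can be illustrated with a short sketch. The helper names below (`make_parity`, `reconstruct`) are hypothetical, and the sketch assumes simple XOR parity over equal-length portions, which is the standard technique rather than anything specific to this patent:

```python
def make_parity(portions):
    """Compute an XOR parity block over equal-length portions of a data block."""
    parity = bytearray(len(portions[0]))
    for portion in portions:
        for i, b in enumerate(portion):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving_portions, parity):
    """Rebuild the portion lost on a failed drive from the survivors and parity.

    XOR-ing the parity with every surviving portion cancels their contributions,
    leaving exactly the missing portion.
    """
    missing = bytearray(parity)
    for portion in surviving_portions:
        for i, b in enumerate(portion):
            missing[i] ^= b
    return bytes(missing)
```

For example, if a block is striped as three portions and the drive holding the second portion fails, `reconstruct` recovers it from the other two portions and the parity block.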
In most RAID devices, a host computer sends an entire data block in one piece to the RAID controller. The RAID controller must then partition the data block into multiple sub-blocks, calculate a parity block, and then write the sub-blocks and parity block to the disk drives. Because the RAID controller is required to perform all of these steps each time a data block is stored, the RAID controller causes some delay when data is stored on a RAID device. The delay can detrimentally slow the process of striping data on a RAID device.
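The conventional write path that the invention improves upon can be sketched as follows. The function name and the drive representation (a Python list per drive) are illustrative assumptions; the sketch assumes N data drives plus one dedicated parity drive:

```python
def conventional_store(block, drives):
    """Conventional RAID write path: the controller receives the whole block,
    partitions it into sub-blocks, computes parity, then writes each piece.
    The last entry of `drives` holds parity in this sketch."""
    n = len(drives) - 1                 # number of data drives
    size = -(-len(block) // n)          # ceiling division: bytes per sub-block
    sub_blocks = [block[i * size:(i + 1) * size].ljust(size, b'\0')
                  for i in range(n)]
    parity = bytearray(size)
    for sb in sub_blocks:
        for i, b in enumerate(sb):
            parity[i] ^= b
    for drive, piece in zip(drives, sub_blocks + [bytes(parity)]):
        drive.append(piece)             # "write" each piece to its drive
```

Every one of these steps runs on the controller for every stored block, which is the source of the delay noted above.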
Based on the foregoing, there is a need for a method and apparatus to more efficiently stripe data on a RAID device.
One embodiment of the present invention is a RAID device for striping a data block across N disk drives. The RAID device receives a storage request from a host computer for the data block, and creates N virtual interface (“VI”) queue pairs. The queue pairs form N virtual channels to the host computer. The RAID device then posts a descriptor to each of the queue pairs, with each descriptor referring to 1/Nth of the data block. Finally, the RAID device receives 1/Nth of the data block over each of the virtual channels and writes each received 1/Nth data block to a different one of the N disk drives.
FIG. 1 is a block diagram of a computer system in accordance with one embodiment of the present invention.
FIG. 2 is a flowchart of the steps executed by a RAID device in one embodiment of the present invention when a request is received from a host computer to store a data block in the RAID device.
One embodiment of the present invention is a RAID device that transfers a data block over multiple virtual channels using a virtual interface. The multiple virtual channels each transfer a portion of the data block, and the portions are striped across multiple disk drives.
FIG. 1 is a block diagram of a computer system in accordance with one embodiment of the present invention. The computer system 100 includes a host computer 10 coupled to a RAID device 40. Host computer 10 is coupled to RAID device 40 in FIG. 1 via a direct connection 30 such as a single wire or multiple wires. However, in other embodiments, host computer 10 can be coupled to RAID device 40 using any known manner to transfer data, including switches, a computer network, and wireless techniques. Further, additional computers and other devices may be coupled to RAID device 40.
Host computer 10 includes a processor 12. Processor 12 executes a software application that includes a driver 14. Host computer 10 further includes a memory 16 and a transport 20. Host computer 10 further includes a network interface card (“NIC”) 25 that couples host computer 10 to RAID device 40.
Host computer 10 communicates with devices coupled to it such as RAID device 40 using a Virtual Interface (“VI”) architecture. A VI architecture provides the illusion of a dedicated network interface to multiple applications and processes simultaneously, thus “virtualizing” the interface. Further, a VI architecture defines a standard interface between a VI consumer and one or more networks. In the present invention, driver 14 can function as a VI consumer.
In one embodiment, the VI architecture used to implement the present invention is disclosed in the Virtual Interface Architecture Specification, Version 1.0, (the “VI Specification”) announced Dec. 19, 1997 by Compaq Corp., Intel Corp., and Microsoft Corp. The VI Specification is available at Web site www.viarch.org/ on the Internet. The VI Specification defines mechanisms for low-latency, high-bandwidth message-passing between interconnected nodes and interconnected storage devices. Low latency and sustained high bandwidth are achieved by avoiding intermediate copies of data and bypassing the operating system when sending and receiving messages. Other architectures that perform a similar function as the VI architecture disclosed in the VI Specification can also be used to implement the present invention, and therefore the present invention is not limited to a single VI architecture.
Transport 20 includes a plurality of VIs 21-24. Each VI 21-24 includes a queue pair (“QP”). In accordance with the VI Specification, a QP includes a send queue and a receive queue.
RAID device 40 includes a plurality of disk drives 60-63. Disk drives 60-63 are coupled to a RAID controller 70. RAID controller 70 executes steps in connection with storing and retrieving data to and from disk drives 60-63. RAID controller 70 includes a memory storage area 45 that includes a number of memory storage locations 46-49.
RAID device 40 further includes a transport 50 coupled to a NIC 42. NIC 42 couples RAID device 40 to host computer 10. Transport 50 includes a number of QPs 51-54. A QP in RAID device 40 and a corresponding VI in host computer 10 form endpoints of a virtual channel between RAID device 40 and host computer 10. In one embodiment, when storing a data block on RAID device 40, the number of disk drives 60-63 (referred to as “N”) equals the number of memory locations 46-49, the number of QPs 51-54 and the number of VIs 21-24. Therefore, if the data block is being striped across N disk drives, RAID controller 70 will have N memory locations, transport 50 will have N QPs, and transport 20 will have N corresponding VIs. The N QPs and the N VIs form endpoints of N virtual channels.
Within computer system 100, driver 14 is referred to as an “initiator” because it initiates requests for storing or retrieving data. In contrast, RAID device 40 is referred to as a “target” because it responds to requests from initiators within computer system 100. RAID device 40 responds to requests by, for example, storing data on drives 60-63 or retrieving data from drives 60-63.
FIG. 2 is a flowchart of the steps executed by RAID device 40 in one embodiment of the present invention when an I/O request is received from host computer 10 to store a data block in RAID device 40. It is assumed that RAID device 40 stripes data blocks across “N” disk drives.
The request is received at step 110 and includes the location in memory 16 of host computer 10 where the data block is stored. Driver 14 stores the I/O request in a location of memory 16. In accordance with the VI Specification, driver 14 posts a descriptor that refers to the I/O request (i.e., specifies the location in memory 16 where the I/O request is stored) to a send queue in transport 20. Driver 14 then rings a doorbell in NIC 25. The doorbell tells NIC 25 to look in the send queue for the descriptor. NIC 25 then fetches the descriptor and performs the specified task, which places an I/O request message on connection 30 for transmission. The receiving device (i.e., RAID device 40) also has a NIC (i.e., NIC 42) that receives the I/O request message from connection 30.
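The descriptor-and-doorbell handshake can be modeled with a minimal sketch. The class and method names below are hypothetical stand-ins, not the VI Specification's actual API; the point illustrated is that a descriptor only refers to a message in memory, so no copy is made until the NIC fetches and transmits it:

```python
from collections import deque

class Nic:
    """Minimal model of the descriptor/doorbell handshake (illustrative names)."""

    def __init__(self):
        self.send_queue = deque()
        self.wire = []                        # stands in for connection 30

    def post_descriptor(self, memory, address, length):
        # A descriptor only *refers* to the message; no data is copied here.
        self.send_queue.append((memory, address, length))

    def ring_doorbell(self):
        # The doorbell tells the NIC to fetch the descriptor from the send
        # queue; only then is the referenced data read and put on the wire.
        memory, address, length = self.send_queue.popleft()
        self.wire.append(bytes(memory[address:address + length]))
```

Because the data stays in place until the doorbell rings, the operating system is bypassed and no intermediate copy is made, which is the source of the low latency described above.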
The I/O request message contains information specifying the location in host memory 16 from which the data is to be moved, and specifies where in RAID device 40 the data is to be stored. The location in host memory 16 is specified with a virtual address/memory handle pair in accordance with the VI Specification. RAID device 40 uses the information contained in the I/O request message to build descriptors to accomplish the actual data movement from host computer 10 to RAID device 40. For example, in response to receiving the request from host computer 10, in one embodiment RAID device 40 initiates a data transfer from host computer 10. The data transfer is initiated using a VI Remote Direct Memory Access (“RDMA”) transfer facility.
At step 120, in one embodiment RAID device 40 generates N virtual channels across direct connection 30. The virtual channels are generated by creating N QPs 51-54 in transport 50, and requesting host computer 10 to create N VIs 21-24 in transport 20. In another embodiment, the N virtual channels are generated before the request at step 110 is received.
At step 130, RAID device 40 posts descriptors to each QP 51-54. The descriptors specify that each QP 51-54 should move 1/Nth of the data block stored in memory 16 across the virtual channel associated with each QP 51-54. The data is then moved across the virtual channels in accordance with the VI Specification.
At step 140, the 1/Nth data blocks moved by each QP 51-54 are stored in memory locations 46-49 of memory 45. Each 1/Nth data block is stored in a separate memory location 46-49.
Finally, at step 150, RAID controller 70 writes each 1/Nth data block stored in memory 45 to a different disk drive 60-63. Therefore, the original data block is striped across disk drives 60-63 because portions of the data block are written to each drive 60-63.
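Steps 120 through 150 can be summarized in a short sketch. The function below is an illustrative model in which the data movement over the N virtual channels is simulated with slicing; it is not the patent's implementation:

```python
def stripe_over_virtual_channels(host_memory, n_drives):
    """Sketch of steps 120-150: create N channels, move 1/Nth of the block
    over each, stage each piece in its own memory location, then write each
    piece to a different drive (illustrative structure only)."""
    n = n_drives
    size = -(-len(host_memory) // n)      # ceiling division: bytes per portion
    # Step 130: one descriptor per queue pair, each referring to 1/Nth of
    # the block in host memory (offset, length).
    descriptors = [(i * size, max(0, min(size, len(host_memory) - i * size)))
                   for i in range(n)]
    # Step 140: each channel moves its portion into a separate memory location.
    memory_locations = [host_memory[off:off + length]
                        for off, length in descriptors]
    # Step 150: the controller writes each staged portion to a different drive.
    drives = [[location] for location in memory_locations]
    return drives
```

Note that the partitioning happens when the descriptors are built, not in a separate pass over the data by the controller.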
Parity data may also be generated and stored in disk drives 60-63. The parity data may be generated by RAID controller 70, or by host computer 10.
As described, the RAID device in accordance with the present invention uses the services of the VI transport to partition a single data block into N sub-blocks at the same time that the data block is being transported from the host computer to the RAID device. Therefore, the RAID controller in the RAID device does not have to partition the data block in order to stripe the data block across N disk drives.
The present invention allows the benefits of RAID (e.g., low-cost disks, high performance, and high reliability) without requiring a sophisticated RAID controller, or without requiring a RAID controller at all (since the responsibilities of RAID controller 70 are merely to write the data from the N memory locations to the N disk drives). Further, the latency that is caused by the RAID controller is reduced, and the entire functionality of the RAID controller can be implemented in software.
The present invention also provides benefits when data is read from RAID device 40 by host computer 10. For example, when reading data, RAID controller 70 does not have to wait to finish reading data off all of disks 60-63 and into memory 45 before it begins writing the 1/Nth blocks into host memory 16. Thus, as soon as any one of disks 60-63 has finished returning its 1/Nth portion to one of the N memory locations in memory 45, RAID controller 70 can begin moving that portion of the block to host memory 16.
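This overlapped read path can be sketched with a thread pool, where each simulated drive read completes after a different delay and each 1/Nth portion is forwarded to the host as soon as it arrives. All names and the sleep-based latency model are illustrative assumptions:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def read_with_overlap(drives):
    """Sketch of the overlapped read: each 1/Nth portion is forwarded to host
    memory as soon as its drive returns it, without waiting for the others."""
    host_memory = {}

    def read_drive(index, data):
        time.sleep(random.uniform(0, 0.01))   # simulate variable disk latency
        return index, data

    with ThreadPoolExecutor(max_workers=len(drives)) as pool:
        futures = [pool.submit(read_drive, i, d) for i, d in enumerate(drives)]
        for future in as_completed(futures):  # handle portions in arrival order
            index, data = future.result()
            host_memory[index] = data         # forward this portion immediately
    return b''.join(host_memory[i] for i in range(len(drives)))
```

The reassembled block is identical regardless of the order in which the drives finish, while the transfers to the host overlap the remaining disk reads.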
Several embodiments of the present invention are specifically illustrated and/or described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.
For example, although memory 45 is located within RAID controller 70, it can be located anywhere it can be coupled to RAID controller 70.
|U.S. Classification||711/114, 710/52, 710/22|
|Cooperative Classification||G06F3/0611, G06F3/0689, G06F3/0659|
|European Classification||G06F3/06A2P2, G06F3/06A4T6, G06F3/06A6L4R|
|Oct 19, 1998||AS||Assignment|
Owner name: INTEL CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GRUN, PAUL A.;REEL/FRAME:009523/0960
Effective date: 19981016
|Feb 7, 2005||FPAY||Fee payment|
Year of fee payment: 4
|Feb 4, 2009||FPAY||Fee payment|
Year of fee payment: 8
|Mar 20, 2013||REMI||Maintenance fee reminder mailed|
|Aug 7, 2013||LAPS||Lapse for failure to pay maintenance fees|
|Sep 24, 2013||FP||Expired due to failure to pay maintenance fee|
Effective date: 20130807