|Publication number||US6934878 B2|
|Application number||US 10/104,839|
|Publication date||Aug 23, 2005|
|Filing date||Mar 22, 2002|
|Priority date||Mar 22, 2002|
|Also published as||US20030182592|
|Publication number||10104839, 104839, US 6934878 B2, US 6934878B2, US-B2-6934878, US6934878 B2, US6934878B2|
|Inventors||Dieter Massa, Otto Lehner|
|Original Assignee||Intel Corporation|
This invention relates generally to detecting and handling failures in a clustered array of mass storage devices such as an array of disk drives.
A redundant array of inexpensive disks (RAID) (called a “RAID array”) is often selected as mass storage for a computer system due to the array's ability to preserve data even if one of the disk drives of the array fails. There are a number of RAID arrangements, but most rely on redundancy to achieve a robust storage system. For example, RAID 1 systems may utilize a mirror disk drive for redundancy. In other RAID systems, such as RAID 2-5 systems, data may be split, or striped, across a plurality of disk drives such that if one disk drive fails, the data may still be recovered using the information contained on the still-working disk drives in the system. For example, in a parity RAID system such as a RAID 2-5 system in which three disks store the data and the associated parity information, if one disk fails, the data may be recovered from the two still-working drives. A system having a single disk drive may be considered a RAID 0 system, even though such a system provides no redundancy.
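The parity-based recovery described above can be illustrated with a short sketch. In a single-parity scheme (as in RAID 4/5), the parity block is the XOR of the data blocks, so any one lost block can be rebuilt by XOR-ing the survivors with the parity. All function names here are illustrative, not part of the patent:

```python
# Illustrative sketch of single-parity recovery (RAID 4/5-style).
# Parity is the byte-wise XOR of the data blocks; XOR-ing the parity
# with all surviving blocks reproduces the single lost block.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def compute_parity(data_blocks: list) -> bytes:
    parity = data_blocks[0]
    for block in data_blocks[1:]:
        parity = xor_blocks(parity, block)
    return parity

def rebuild_lost_block(surviving_blocks: list, parity: bytes) -> bytes:
    # XOR of the parity and all surviving data blocks yields the lost block.
    rebuilt = parity
    for block in surviving_blocks:
        rebuilt = xor_blocks(rebuilt, block)
    return rebuilt

d0, d1, d2 = b"\x01\x02", b"\x0f\x00", b"\xaa\x55"
p = compute_parity([d0, d1, d2])
assert rebuild_lost_block([d0, d2], p) == d1  # d1 recovered from d0, d2 and parity
```

This also shows why a second simultaneous loss is unrecoverable here: with two blocks missing, the single XOR equation no longer determines either one.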
A RAID array may also be part of a cluster environment, an environment in which two or more file servers share one or more RAID arrays. Typically, for purposes of assuring data consistency, only one of these file servers accesses a particular RAID array at a time to modify data. In this manner, when granted exclusive access to the RAID array, a particular file server may perform read and write operations as necessary to modify data contained in the RAID array. After the particular file server finishes its access, then another file server may be granted exclusive access to modify data in a particular RAID array.
For purposes of establishing a logical-to-physical interface between the file servers and the RAID array, one or more RAID controllers typically are used. As examples of the various possible arrangements, a single RAID controller may be contained in the enclosure that houses the RAID array, or alternatively, each file server may have an internal RAID controller. In the latter case, each file server may have an internal RAID controller card that is plugged into a card connector slot of the file server. Alternatively, the server may have the RAID functionality contained on a main printed circuit board.
For the case where the file server has an internal RAID controller, the file server (“Server”) is described herein as accessing the RAID array. However, it is understood that in these cases, it is actually the RAID controller card, or the RAID controller circuits on the main printed circuit board, of the server that is accessing the RAID array.
Before a particular server accesses a RAID array, the file server that currently is accessing the RAID array is responsible for closing all open read and write transactions. Hence, under normal circumstances, whenever a file server is granted access to a RAID array, all data on the shared disk drives of the array are in a consistent state.
In a clustering environment where different storage controllers access the same disk, the cluster operating system needs to guarantee data coherency and failure tolerance. Thus, there is a need for better ways to control the distribution of access rights, and for recovering from network failures, in clustered RAID networks.
Each server 102 communicates with a RAID array 108-111 through a controller 106 that stores a software layer 10. In some embodiments, the controller 106 may be part of a server 102. In other embodiments, the controller 106 may be part of the RAID array 108-111. The controllers 106 may communicate with each other over a communications network. Also, while two controllers 106a and 106b are illustrated as associated with server 102a, a single controller having the ability to control two RAID arrays may be utilized instead.
Coupled to the CDML 14 is an array management layer (“AML”) 12. The cluster network layer (“CNL”) 16 may be interfaced to all the other controllers 106 in the cluster 100. The CNL 16 may maintain login and logout of other controllers 106, intercontroller communication and may handle network failures. The CNL 16 may also provide the CDML 14 with communications services. The communications services may include handling redundant access to other controllers 106 if they are connected by more than one input/output channel.
A Ping Application (“PA”) 28 may also be coupled to the CNL 16. The Ping Application 28 may communicate with one or more neighboring controllers 106 to detect a network failure. For example, the PA may “ping” the neighboring controller. If the proper response to the “ping” is not received, the PA may determine that the neighboring controller has gone inactive due to a failure or other cause. Communications for the PA 28 may be performed by the CNL 16 in some embodiments.
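The ping check above can be sketched as a small retry loop. The transport call is passed in as a function because the patent does not specify the channel the CNL provides; the retry count and delay are likewise assumptions:

```python
# Minimal sketch of the PA's neighbor check. `ping_fn` stands in for
# whatever transport the CNL provides; retries/delay are assumed policy.
import time

def neighbor_alive(ping_fn, neighbor_id, retries=3, delay_s=0.0):
    """Return True if the neighbor answers any of `retries` pings."""
    for _ in range(retries):
        try:
            if ping_fn(neighbor_id):
                return True  # valid reply: neighbor is active
        except Exception:
            pass  # a transport error counts the same as a missed reply
        time.sleep(delay_s)
    return False  # no valid response: report the neighbor as inactive
```

A `False` result corresponds to the PA's conclusion that the neighboring controller has gone inactive, which it would then report (directly or via a flag) to the CNL.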
In the case of a login or a logout network event, the CNL 16 on a controller 106 logging in or out may call the CDML 14 to update its network information. In addition, the CNL may communicate changes to the PA 28. The CDML 14 is installed on every controller 106 in the cluster network 100. The CDML 14 knows all of the available controller 106 identifiers in the cluster network 100. These identifiers are reported through the cluster network layer 16. In addition, the CDML 14 is asynchronously informed of network changes by the cluster network layer 16. In one embodiment, the CDML 14 treats the list of known controllers 106 as a chain, where the local controller where the CDML is installed is always the last controller in the chain.
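The chain arrangement described above (every known controller identifier, with the local controller always last) can be sketched as follows. The ordering of the remote entries is not specified in the text, so the sort used here is an assumption made only for determinism:

```python
# Sketch of the CDML controller chain: remote controller identifiers
# first (sorted here for determinism; the actual ordering rule is an
# assumption), with the local controller always the last entry.

def build_chain(known_ids, local_id):
    remote = sorted(i for i in known_ids if i != local_id)
    return remote + [local_id]

chain = build_chain({"C1", "C3", "C2"}, local_id="C2")
assert chain == ["C1", "C3", "C2"]  # local controller C2 is last
```

On a login or logout event reported by the CNL, the chain would simply be rebuilt from the updated set of known identifiers.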
The generation of an access right called a token is based on a unique identifier in one embodiment of the present invention. This identifier may be the serial number of a requesting controller in one embodiment. For a particular RAID array 108-111, there may be two separate types of access rights generated that belong to the same unique identifier, distinguished by the CDML 14 by a sub-identifier within each access type. One sub-identifier may be reserved for array management (configuration access) and the other sub-identifier may be reserved for user data access.
The CDML 14 of each controller 106 includes two control processes. One is called the token master 20 and the other is called the token requester 24. The master 20 may not be activated on each controller 106 but the capability of operating as a token master may be provided to every controller 106 in some embodiments. In some embodiments, ensuring that each controller 106 may be configured as a master ensures a symmetric flow of CDML 14 commands, whether the master is available on a local or a remote controller 106.
Both the CDML master 20 and the CDML requester 24 handle the tasks for all access tokens needed in the cluster network 100. The administration of the tokens is done in a way that treats every token separately in some embodiments.
A requester 24 from one controller 106 communicates with a master 20 from another controller 106 by exchanging commands. Each command is atomic. For example, a requester 24 may send a command to the master 20 to obtain an access token. The commands are encapsulated, in one embodiment, so that the master 20 only confirms receipt of the command. The master 20 sends a response to the requester 24 providing the token in some cases. Thus, the protocol utilized by the CDML 14 may be independent from that used for transmission of other rights and data.
A CDML command may consist of a small data buffer and may include a token identifier, a subtoken identifier, a request type, a master identifier, a generation index which is an incremented counter and a forward identifier which is the identifier where the token has to be forwarded upon master request. All of the communications are handled by the cluster network layer 16 in one embodiment of the present invention.
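The command buffer above maps naturally onto a small record type. The field names mirror the text; the request types, field widths, and the 0/1 sub-identifier convention (configuration access vs. user data access, per the earlier paragraph) are assumptions for illustration:

```python
# Hypothetical encoding of the CDML command buffer described above.
from dataclasses import dataclass
from enum import IntEnum

class RequestType(IntEnum):
    # Assumed request types; the patent does not enumerate them.
    REQUEST_TOKEN = 1
    RELEASE_TOKEN = 2
    YIELD_TOKEN = 3

@dataclass(frozen=True)
class CdmlCommand:
    token_id: int          # identifies the access token (per array)
    subtoken_id: int       # e.g. 0 = configuration access, 1 = user data access
    request_type: RequestType
    master_id: int         # controller currently acting as token master
    generation_index: int  # incremented counter; guards against stale commands
    forward_id: int        # controller the token must be forwarded to, on master request
```

Because each command is atomic and only acknowledged on receipt, a frozen (immutable) record is a reasonable fit: the master never mutates a command in place, it replies with a new one.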
For each RAID array 108-111, there is a master 20 that controls the distribution of access tokens and which is responsible for general array management. Whenever a controller 106 wants to access a RAID array 104, it requests the corresponding token from the corresponding master of the array being accessed. In some embodiments, any controller may be allowed to read an array identification without an access token. This ability may be helpful for a controller 106 or associated server 102 to recognize what RAID arrays are online.
When access is granted, a controller 106 can access the particular array 108-111 as long as needed. However, in some embodiments, when a request to transfer the access token is received, it should be accommodated as soon as possible. In other embodiments, a token transfer may be accommodated upon the controller having the token completing a minimum number of IO transactions. Upon a dedicated shutdown, each controller 106 may ensure that all tokens have been returned and the logout is completed.
Each controller 106 guarantees that the data is coherent before the token is transferred to another controller. In one embodiment, all of the mechanisms described are based on controller 106 to controller 106 communications. Therefore, each controller 106 advantageously communicates with all of the other controllers in the network 100. Each controller 106 may have a unique identifier in one embodiment to facilitate connections and communications between controllers 106.
A check at diamond 38 determines whether any network errors have occurred. One type of network failure may be the loss of a controller 106 that had logged in but not logged out. If so, a check at diamond 40 determines whether the master is still available. If so, the master is notified of the error because the master may be a remote controller 106. If there is no error, the flow continues.
When a second controller requests access to an array 104 being accessed by a first controller including the requester 24, the requester 24 that was previously granted the token makes a decision whether to yield to the second requester as indicated in block 50. If the requester decides to yield as determined in diamond 52, the requester 24 attempts to complete the transaction, or series of transactions, as soon as possible as indicated in block 48. When the transaction is completed, the requester 24 transfers the access token to the next requester in the queue as indicated in block 54. Otherwise the requester 24 again requests access to complete one or more additional transactions as indicated in block 54.
If at decision tree 364 the neighbor controller is determined to not be functional, for example it did not respond correctly to the “ping”, then the local CNL may be notified 366. This notification may be by direct communication from the PA to the CNL in some embodiments. In other embodiments the PA may set a flag that may be read to determine a network error such as at 38 in
Should a network failure occur, such as in an IO cable between two disks, at least one “ping” function will fail. For example, an IO cable failure between disks 110a and 110b, as shown in
If a network failure is detected, the controller that detected the failure checks which controllers are still available in the cluster and which controller, if any, is the new next neighbor. The PA is called to replace the next neighbor address and the CDML may be called to process the failure. One action taken by the CDML may be to request a disk array analysis from the array management layer.
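Finding the new next neighbor after a controller loss can be sketched as follows, under the assumption (not stated explicitly in the text) that the ping topology is a ring ordered by controller identifier:

```python
# Sketch of replacing the PA's "next neighbor" after a controller loss.
# Assumes a ring topology over the ordered list of controllers.

def next_neighbor(ring, local_id, failed=frozenset()):
    alive = [c for c in ring if c not in failed]
    i = alive.index(local_id)
    return alive[(i + 1) % len(alive)]  # wrap around at the end of the ring

ring = ["C1", "C2", "C3", "C4"]
assert next_neighbor(ring, "C1") == "C2"
assert next_neighbor(ring, "C1", failed={"C2"}) == "C3"  # C2 lost: skip to C3
```

The detecting controller would pass the result to the PA as the replacement ping target, then call the CDML to process the failure.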
Upon the detection of a network failure, the CDML may perform a disk array analysis to determine which disk arrays, if any, are still useable. Each array of drives may be checked by testing access to the member disks. If the network failure caused loss of a disk member of a non-redundant cluster drive, for example a single disk RAID 0 array, the drive is set to offline and any access to this drive is cancelled.
If the network failure caused a loss of more than one member disk of a redundant RAID 4 or RAID 5 disk array, the associated disk array is set to offline and any access to this array is cancelled. This is because data in a RAID 4 or RAID 5 system may not be recoverable in the event of multiple disk failures in the RAID array.
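The array-analysis rule from the two paragraphs above reduces to a per-array check of lost members against the RAID level's tolerance. The function below is a sketch of that rule only; the level set handled is limited to the levels the text discusses (RAID 0 tolerates no loss, single-parity RAID 4/5 tolerate one):

```python
# Sketch of the post-failure disk array analysis: an array stays online
# only if the number of unreachable members is within its RAID level's
# redundancy. Levels beyond those discussed in the text are not handled.

def array_usable(level, total_members, reachable_members):
    lost = total_members - reachable_members
    if level == 0:
        return lost == 0   # non-redundant: any loss takes the array offline
    if level in (4, 5):
        return lost <= 1   # single parity: more than one lost member is fatal
    raise ValueError(f"unhandled RAID level {level}")

assert array_usable(5, total_members=3, reachable_members=2)      # recoverable
assert not array_usable(5, total_members=3, reachable_members=1)  # offline
```

An array failing this check would be set offline and any access to it cancelled, as described above.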
In the situation shown in
A check at diamond 68 determines whether a network error has occurred. Again, one type of network error may be the loss of a controller 106, which is recognized by the PA and reported to and processed by the local CNL. If so, then the local CNL 16 performs a network analysis 403 and reconfigures the network 405. The CNL 16 then provides the PA 28 with the new neighbor 407 and notifies the CDML 14 of the error 409. The CDML 14 also requests the array management layer 12 to perform an array analysis for each affected array. The CDML 14 checks at diamond 70 to determine whether the token user has been lost. If so, a new token is assigned, if possible, as indicated in diamond 72. As discussed above, should a token for an array have been lost and a master for that array not be available due to the network failure, a new token will not be assigned and that array may be set to an offline condition. The local CNL 16 may then notify all other CNLs of the network failure, and they may reconfigure their associated controllers as required. For example, the CNLs may inform their associated PA of the new neighboring controller and detect that the CDML token master may have changed for an array.
If a token was not available, as determined at diamond 62, the request for the token may be queued, as indicated in block 74. The master 20 may then request that the current holder of the token yield to the new requester, as indicated in block 76. A check at diamond 78 determines whether the yield has occurred. If so, the token may then be granted to the requester 24 that has waited in the queue for the longest time, as indicated in block 80.
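The master's side of this flow (grant if free, otherwise queue and ask the holder to yield, then grant to the longest-waiting requester) can be sketched with a FIFO queue. Class and method names are illustrative:

```python
# Sketch of the token master's queueing behavior from the flow above:
# an unavailable token queues the request; when the current holder
# yields, the longest-waiting requester (FIFO head) is granted the token.
from collections import deque

class TokenMaster:
    def __init__(self):
        self.holder = None
        self.queue = deque()  # head = requester that has waited longest

    def request(self, requester_id):
        if self.holder is None:
            self.holder = requester_id
            return True        # token granted immediately
        self.queue.append(requester_id)
        return False           # queued; master asks the holder to yield

    def on_yield(self):
        # Current holder completed its transactions and released the token.
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder     # new holder, or None if no one is waiting
```

Note this models only the happy path; the yield refusal at diamond 78, where the holder finishes additional transactions before releasing, would simply delay the `on_yield` call.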
In some embodiments of the present invention, the server 102 may be a computer, such as exemplary computer 200 that is depicted in FIG. 6. The computer 200 may include a processor (one or more microprocessors, for example) 202, that is coupled to a local bus 204. Also coupled to local bus 204 may be, for example, a memory hub, or north bridge 206. The north bridge 206 provides interfaces to the local bus 204, a memory bus 208, an accelerated graphics port (AGP) bus 212 and a hub link. The AGP bus is described in detail in the Accelerated Graphics Port Interface Specification, Revision 1.0, published Jul. 31, 1996 by Intel Corporation, Santa Clara, Calif. A system memory 210 may be accessed via the memory bus 208, and an AGP device 214 may communicate over the AGP bus 212 and generate signals to drive a display 216. The system memory 210 may store various program instructions such as the instructions described in connection with
The north bridge 206 may communicate with a south bridge 220 over the hub link. In this manner, the south bridge 220 may provide an interface for the input/output (I/O) expansion bus 223 and a peripheral component interconnect (PCI) bus 240. The PCI specification is available from the PCI Special Interest Group, Portland, Oreg. 97214. An I/O controller 230 may be coupled to the I/O expansion bus 223 and may receive inputs from a mouse 232 and a keyboard 234, as well as control operations on a floppy disk drive 238. The south bridge 220 may, for example, control operations of a hard disk drive 225 and a compact disk read only memory (CD-ROM) drive 221.
A RAID controller 250 may be coupled to the bus 240 to establish communication between the RAID array 104 and the computer 200 via bus 252, for example. The RAID controller 250, in some embodiments of the present invention, may be in the form of a PCI circuit card that is inserted into a PCI slot of the computer 200, for example.
In some embodiments of the present invention, the RAID controller 250 includes a processor 300 and a memory 302 that stores instructions 310 such as those related to
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5371882 *||Jan 14, 1992||Dec 6, 1994||Storage Technology Corporation||Spare disk drive replacement scheduling system for a disk drive array data storage subsystem|
|US5615330 *||Oct 16, 1995||Mar 25, 1997||International Computers Limited||Recovery method for a high availability data processing system|
|US5712970 *||Sep 28, 1995||Jan 27, 1998||Emc Corporation||Method and apparatus for reliably storing data to be written to a peripheral device subsystem using plural controllers|
|US5720028 *||Jun 5, 1996||Feb 17, 1998||Hitachi, Ltd.||External storage system|
|US6119244 *||Aug 25, 1998||Sep 12, 2000||Network Appliance, Inc.||Coordinating persistent status information with multiple file servers|
|US6317844 *||Mar 10, 1998||Nov 13, 2001||Network Appliance, Inc.||File server storage arrangement|
|US6330687 *||Nov 13, 1998||Dec 11, 2001||Digi-Data Corporation||System and method to maintain performance among N single raid systems during non-fault conditions while sharing multiple storage devices during conditions of a faulty host computer or faulty storage array controller|
|US6728897 *||Jul 25, 2000||Apr 27, 2004||Network Appliance, Inc.||Negotiating takeover in high availability cluster|
|US20020133735 *||Jan 16, 2001||Sep 19, 2002||International Business Machines Corporation||System and method for efficient failover/failback techniques for fault-tolerant data storage system|
|USRE37038 *||Aug 31, 1995||Jan 30, 2001||International Business Machines Corporation||Method and system for automated termination and resumption in a time zero backup copy process|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7346720||Oct 21, 2005||Mar 18, 2008||Isilon Systems, Inc.||Systems and methods for managing concurrent access requests to a shared resource|
|US7386675||Oct 21, 2005||Jun 10, 2008||Isilon Systems, Inc.||Systems and methods for using excitement values to predict future access to resources|
|US7392336 *||Nov 18, 2005||Jun 24, 2008||Hitachi, Ltd.||Connecting device of storage device and computer system including the same connecting device|
|US7478264 *||Mar 10, 2008||Jan 13, 2009||International Business Machines Corporation||Storage management server communication via storage device servers|
|US7509448||Jan 5, 2007||Mar 24, 2009||Isilon Systems, Inc.||Systems and methods for managing semantic locks|
|US7509524||Aug 11, 2006||Mar 24, 2009||Isilon Systems Inc.||Systems and methods for a distributed file system with data recovery|
|US7551572||Oct 21, 2005||Jun 23, 2009||Isilon Systems, Inc.||Systems and methods for providing variable protection|
|US7552357 *||Apr 29, 2005||Jun 23, 2009||Network Appliance, Inc.||Lost writes detection in a redundancy group based on RAID with multiple parity|
|US7590652||Aug 18, 2006||Sep 15, 2009||Isilon Systems, Inc.||Systems and methods of reverse lookup|
|US7593938||Dec 22, 2006||Sep 22, 2009||Isilon Systems, Inc.||Systems and methods of directory entry encodings|
|US7676691||Mar 9, 2010||Isilon Systems, Inc.||Systems and methods for providing nonlinear journaling|
|US7680836||Mar 16, 2010||Isilon Systems, Inc.||Systems and methods for a snapshot of data|
|US7680842||Aug 18, 2006||Mar 16, 2010||Isilon Systems, Inc.||Systems and methods for a snapshot of data|
|US7685126 *||Mar 23, 2010||Isilon Systems, Inc.||System and methods for providing a distributed file system utilizing metadata to track information about data stored throughout the system|
|US7743033||Jul 19, 2007||Jun 22, 2010||Isilon Systems, Inc.||Systems and methods for providing a distributed file system utilizing metadata to track information about data stored throughout the system|
|US7752402||Aug 18, 2006||Jul 6, 2010||Isilon Systems, Inc.||Systems and methods for allowing incremental journaling|
|US7756898||Jul 13, 2010||Isilon Systems, Inc.||Systems and methods for notifying listeners of events|
|US7779048||Apr 13, 2007||Aug 17, 2010||Isilon Systems, Inc.||Systems and methods of providing possible value ranges|
|US7788303||Oct 21, 2005||Aug 31, 2010||Isilon Systems, Inc.||Systems and methods for distributed system scanning|
|US7797283||Sep 14, 2010||Isilon Systems, Inc.||Systems and methods for maintaining distributed data|
|US7822932||Oct 26, 2010||Isilon Systems, Inc.||Systems and methods for providing nonlinear journaling|
|US7844617||Jun 4, 2010||Nov 30, 2010||Isilon Systems, Inc.||Systems and methods of directory entry encodings|
|US7848261||Dec 7, 2010||Isilon Systems, Inc.||Systems and methods for providing a quiescing protocol|
|US7870345||Jan 11, 2011||Isilon Systems, Inc.||Systems and methods for managing stalled storage devices|
|US7882068||Aug 21, 2007||Feb 1, 2011||Isilon Systems, Inc.||Systems and methods for adaptive copy on write|
|US7882071||Feb 1, 2011||Isilon Systems, Inc.||Systems and methods for a snapshot of data|
|US7899800||Mar 1, 2011||Isilon Systems, Inc.||Systems and methods for providing nonlinear journaling|
|US7900015||Mar 1, 2011||Isilon Systems, Inc.||Systems and methods of quota accounting|
|US7917474||Oct 21, 2005||Mar 29, 2011||Isilon Systems, Inc.||Systems and methods for accessing and updating distributed data|
|US7937421||May 3, 2011||Emc Corporation||Systems and methods for restriping files in a distributed file system|
|US7949636||May 24, 2011||Emc Corporation||Systems and methods for a read only mode for a portion of a storage system|
|US7949692||May 24, 2011||Emc Corporation||Systems and methods for portals into snapshot data|
|US7953704||Aug 18, 2006||May 31, 2011||Emc Corporation||Systems and methods for a snapshot of data|
|US7953709||May 31, 2011||Emc Corporation||Systems and methods for a read only mode for a portion of a storage system|
|US7962779||Jun 9, 2008||Jun 14, 2011||Emc Corporation||Systems and methods for a distributed file system with data recovery|
|US7966289||Jun 21, 2011||Emc Corporation||Systems and methods for reading objects in a file system|
|US7971021||Jun 28, 2011||Emc Corporation||Systems and methods for managing stalled storage devices|
|US7984227 *||Jul 19, 2011||Hitachi, Ltd.||Connecting device of storage device and computer system including the same connecting device|
|US7984324||Jul 19, 2011||Emc Corporation||Systems and methods for managing stalled storage devices|
|US8005865||May 27, 2010||Aug 23, 2011||Emc Corporation||Systems and methods for notifying listeners of events|
|US8010493||Mar 4, 2010||Aug 30, 2011||Emc Corporation||Systems and methods for a snapshot of data|
|US8015156||Sep 6, 2011||Emc Corporation||Systems and methods for a snapshot of data|
|US8015216||Sep 6, 2011||Emc Corporation||Systems and methods of providing possible value ranges|
|US8027984||Sep 4, 2009||Sep 27, 2011||Emc Corporation||Systems and methods of reverse lookup|
|US8051425||Oct 28, 2005||Nov 1, 2011||Emc Corporation||Distributed system with asynchronous execution systems and methods|
|US8054765||Nov 8, 2011||Emc Corporation||Systems and methods for providing variable protection|
|US8055711||Nov 8, 2011||Emc Corporation||Non-blocking commit protocol systems and methods|
|US8060521||Nov 15, 2011||Emc Corporation||Systems and methods of directory entry encodings|
|US8082379||Mar 23, 2009||Dec 20, 2011||Emc Corporation||Systems and methods for managing semantic locks|
|US8112395||Feb 7, 2012||Emc Corporation||Systems and methods for providing a distributed file system utilizing metadata to track information about data stored throughout the system|
|US8140623||Jun 8, 2006||Mar 20, 2012||Emc Corporation||Non-blocking commit protocol systems and methods|
|US8171347 *||Jul 11, 2007||May 1, 2012||Oracle America, Inc.||Method and apparatus for troubleshooting a computer system|
|US8176013||May 8, 2012||Emc Corporation||Systems and methods for accessing and updating distributed data|
|US8181065||Mar 2, 2010||May 15, 2012||Emc Corporation||Systems and methods for providing nonlinear journaling|
|US8195905||Jun 5, 2012||Emc Corporation||Systems and methods of quota accounting|
|US8195978||Jun 5, 2012||Fusion-Io, Inc.||Apparatus, system, and method for detecting and replacing failed data storage|
|US8200632||Jan 14, 2011||Jun 12, 2012||Emc Corporation||Systems and methods for adaptive copy on write|
|US8214334||Jul 3, 2012||Emc Corporation||Systems and methods for distributed system scanning|
|US8214400||Jul 3, 2012||Emc Corporation||Systems and methods for maintaining distributed data|
|US8238350||Oct 28, 2005||Aug 7, 2012||Emc Corporation||Message batching with checkpoints systems and methods|
|US8281227||May 18, 2009||Oct 2, 2012||Fusion-Io, Inc.||Apparatus, system, and method to increase data integrity in a redundant storage system|
|US8286029||Oct 9, 2012||Emc Corporation||Systems and methods for managing unavailable storage devices|
|US8307258||May 18, 2009||Nov 6, 2012||Fusion-Io, Inc.||Apparatus, system, and method for reconfiguring an array to operate with less storage elements|
|US8356013||Jan 15, 2013||Emc Corporation||Systems and methods for a snapshot of data|
|US8356150||Sep 30, 2010||Jan 15, 2013||Emc Corporation||Systems and methods for providing nonlinear journaling|
|US8380689||Feb 19, 2013||Emc Corporation||Systems and methods for providing nonlinear journaling|
|US8412978||May 8, 2012||Apr 2, 2013||Fusion-Io, Inc.||Apparatus, system, and method for managing data storage|
|US8495460||Oct 4, 2012||Jul 23, 2013||Fusion-Io, Inc.||Apparatus, system, and method for reconfiguring an array of storage elements|
|US8539056||Aug 2, 2006||Sep 17, 2013||Emc Corporation||Systems and methods for configuring multiple network interfaces|
|US8625464||Nov 1, 2010||Jan 7, 2014||Emc Corporation||Systems and methods for providing a quiescing protocol|
|US8738991||May 31, 2013||May 27, 2014||Fusion-Io, Inc.||Apparatus, system, and method for reconfiguring an array of storage elements|
|US8832528||May 18, 2010||Sep 9, 2014||Fusion-Io, Inc.||Apparatus, system, and method to increase data integrity in a redundant storage system|
|US8966080||Apr 13, 2007||Feb 24, 2015||Emc Corporation||Systems and methods of managing resource utilization on a threaded computer system|
|US9306599||May 23, 2014||Apr 5, 2016||Intelligent Intellectual Property Holdings 2 Llc||Apparatus, system, and method for reconfiguring an array of storage elements|
|US20030033308 *||Nov 9, 2001||Feb 13, 2003||Patel Sujal M.||System and methods for providing a distributed file system utilizing metadata to track information about data stored throughout the system|
|US20040153479 *||Nov 14, 2003||Aug 5, 2004||Mikesell Paul A.||Systems and methods for restriping files in a distributed file system|
|US20040157639 *||Nov 25, 2003||Aug 12, 2004||Morris Roy D.||Systems and methods of mobile restore|
|US20060095640 *||Nov 18, 2005||May 4, 2006||Yasuyuki Mimatsu||Connecting device of storage device and computer system including the same connecting device|
|US20060248378 *||Apr 29, 2005||Nov 2, 2006||Network Appliance, Inc.||Lost writes detection in a redundancy group based on RAID with multiple parity|
|US20070094431 *||Oct 21, 2005||Apr 26, 2007||Fachan Neal T||Systems and methods for managing concurrent access requests to a shared resource|
|US20080046476 *||Aug 18, 2006||Feb 21, 2008||Anderson Robert J||Systems and methods for a snapshot of data|
|US20080222356 *||May 12, 2008||Sep 11, 2008||Yasuyuki Mimatsu||Connecting device of storage device and computer system including the same connecting device|
|US20090019320 *||Jul 11, 2007||Jan 15, 2009||Sun Microsystems, Inc.||Method and apparatus for troubleshooting a computer system|
|US20090287956 *||Nov 19, 2009||David Flynn||Apparatus, system, and method for detecting and replacing failed data storage|
|US20100293439 *||May 18, 2009||Nov 18, 2010||David Flynn||Apparatus, system, and method for reconfiguring an array to operate with less storage elements|
|US20100293440 *||Nov 18, 2010||Jonathan Thatcher||Apparatus, system, and method to increase data integrity in a redundant storage system|
|US20130332507 *||Jun 6, 2012||Dec 12, 2013||International Business Machines Corporation||Highly available servers|
|U.S. Classification||714/5.11, 714/E11.024|
|International Classification||G06F11/00, G06F11/22, G06F3/06|
|Cooperative Classification||G06F3/0601, G06F11/0727, G06F11/0751, G06F2003/0697, G06F11/2089|
|European Classification||G06F11/07P1F, G06F11/07P2|
|Mar 22, 2002||AS||Assignment|
Owner name: INTEL CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MASSA, DIETER;LEHNER, OTTO;REEL/FRAME:012742/0573
Effective date: 20020321
|Dec 27, 2005||CC||Certificate of correction|
|Feb 18, 2009||FPAY||Fee payment|
Year of fee payment: 4
|Feb 13, 2013||FPAY||Fee payment|
Year of fee payment: 8