US20020069317A1 - E-RAID system and method of operating the same - Google Patents
- Publication number
- US20020069317A1 (application US10/007,410)
- Authority
- US
- United States
- Prior art keywords
- data
- memory
- banks
- group
- raid level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
- G06F3/0611—Improving I/O performance in relation to response time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
- G06F11/108—Parity data distribution in semiconductor storages, e.g. in SSD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/1441—Resetting or repowering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1666—Error detection or correction of the data by redundancy in hardware where the redundant component is memory or memory area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0868—Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0873—Mapping of cache memory to specific storage devices or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0626—Reducing size or complexity of storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0631—Configuration or reconfiguration of storage systems by allocating resources to storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0664—Virtualisation aspects at device level, e.g. emulation of a storage device or system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2002—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant
- G06F11/2007—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant using redundant communication media
- G06F11/201—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where interconnections or communication control functionality are redundant using redundant communication media between storage system components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/21—Employing a record carrier using a specific recording technology
- G06F2212/214—Solid state disk
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/26—Using a specific storage system architecture
- G06F2212/261—Storage comprising a plurality of storage devices
- G06F2212/262—Storage comprising a plurality of storage devices configured as RAID
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/26—Using a specific storage system architecture
- G06F2212/263—Network storage, e.g. SAN or NAS
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99931—Database or file accessing
- Y10S707/99932—Access augmentation or optimizing
Abstract
An apparatus and method for storing, manipulating, processing, and transferring data in a memory subsystem (110) to provide a dynamic-RAID system. Generally, the memory subsystem (110) includes a memory array (255) having a number of memory devices (250) arranged in banks (260), each with a predetermined number of devices, a memory controller (265) coupled to the banks for accessing the devices, and a processor (275) coupled to the controller and, through a network (120), to a data processing system (115). The memory controller (265) is configured to store data to any combination of banks (260) in one or more memory matrix modules (105) simultaneously to provide a dynamic-RAID system. Preferably, the controller (265) is configured to detect and correct errors in data transferred to or stored in the memory devices (250) using a Hamming code.
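The abstract names a Hamming code for detecting and correcting errors but gives no parameters. The following sketch is an assumption, not the patent's implementation: it uses the classic Hamming(7,4) code, which protects four data bits with three parity bits and corrects any single flipped bit.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # Standard bit layout (positions 1..7): p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Return the 4 data bits, correcting any single-bit error."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

A memory controller applying this per nibble could transparently repair a single-bit fault in any stored word, which is the behavior the abstract claims.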
Description
- This application claims priority from U.S. Provisional Patent Application Serial No. 60/250,812 entitled a Memory Matrix and Method of Operating the Same, filed Dec. 1, 2000.
- The present invention relates generally to data storage or memory systems, and more particularly to a memory system having a memory matrix and a method of configuring and operating the memory system to provide an electronic RAID or e-RAID system.
- Computers are widely used for storing, manipulating, processing, and displaying various types of data, including financial, scientific, technical and corporate data, such as names, addresses, and market and product information. Thus, modern data processing systems generally require large, expensive, fault-tolerant memory or data storage systems. This is particularly true for computers interconnected by networks such as the Internet, wide area networks (WANs), and local area networks (LANs). These computer networks already store, manipulate, process, and display unprecedented quantities of various types of data, and the quantity continues to grow at a rapid pace.
- Several attempts have been made to provide a data storage system that meets these demands. One, illustrated in FIG. 1, involves a server attached storage (SAS) architecture 10. Referring to FIG. 1, the SAS architecture 10 typically includes several client computers 12 attached via a network 14 to a server 16 that manages an attached data storage system 18, such as a disk storage system. The client computers 12 access the data storage system 18 through a communications protocol such as, for example, the TCP/IP protocol. SAS architectures have many advantages, including consolidated, centralized data storage for efficient file access and management, and cost-effective shared storage among several client computers 12. In addition, the SAS architecture 10 can provide high data availability and can ensure integrity through redundant components such as a redundant array of independent/inexpensive disks (RAID) in data storage system 18.
- Although an improvement over prior art data storage systems in which data is duplicated and maintained separately on each computer 12, the SAS architecture 10 has serious shortcomings. The SAS architecture 10 is a defined network architecture that tightly couples the data storage system 18 to the operating systems of the server 16 and client computers 12. In this approach the server 16 must perform numerous tasks concurrently, including running applications, manipulating databases in the data storage system 18, file/print sharing, communications, and various overhead or housekeeping functions. Thus, as the number of client computers 12 accessing the data storage system 18 increases, response time deteriorates rapidly. In addition, the SAS architecture 10 has limited scalability and cannot be readily upgraded without shutting down the entire network 14 and all client computers 12. Finally, such an approach provides limited backup capability, since it is very difficult to back up live databases.
- Another related approach is a network attached storage (NAS) architecture 20. Referring to FIG. 2, a typical NAS architecture 20 involves several client computers 22 and a dedicated file server 24 attached via a local area network (LAN) 26. The NAS architecture 20 has many of the same advantages as the SAS architecture 10, including consolidated, centralized data storage for efficient file access and management, shared storage among a number of client computers 22, and separate storage from an application server (not shown). In addition, the NAS architecture 20 is independent of an operating system of the client computers 22, enabling the file server 24 to be shared by heterogeneous client computers and application servers. This approach is also scalable and accessible, enabling additional storage to be easily added without disrupting the rest of the network 26 or application servers.
- A third approach is the storage area network (SAN) architecture 30. Referring to FIG. 3, a typical SAN architecture 30 involves client computers 32 connected to a number of servers 36 through a data network 34. The servers are connected through separate connections 37 to a number of storage devices 38 through a dedicated storage area network 39 and its SAN switches and routers, which typically use the Fibre Channel-Arbitrated Loop protocol. Like NAS, the SAN architecture 30 offers consolidated, centralized storage and storage management, and a high degree of scalability. Importantly, the SAN approach removes storage data traffic from the data network and places it on its own dedicated network, which eases traffic on the data network, thereby improving data network performance considerably.
- Although both the NAS 20 and SAN 30 architectures are an improvement over the SAS architecture 10, they still suffer from significant limitations. Currently, the storage technology most commonly used in SAS 10, NAS 20, and SAN 30 architectures is the hard disk drive. Disk drives include one or more rotating physical disks having magnetic media coated on at least one, and preferably both, sides of each disk. A magnetic read/write head is suspended above each side of each disk and made to move radially across the surface of the disk as it is rotated. Data is magnetically recorded on the disk surfaces in concentric tracks.
- Disk drives are capable of storing large amounts of data, usually on the order of hundreds or thousands of megabytes, at a low cost. However, disk drives are slow relative to the speed of processors and circuits in the client computers.
- As a result of the shortcomings of disk drives, and of advancements in semiconductor fabrication techniques made in recent years, solid-state drives (SSDs) using non-mechanical Random Access Memory (RAM) devices are being introduced to the marketplace. RAM devices have data access times of less than 50 microseconds, much faster than the fastest disk drives. To maintain system compatibility, SSDs are typically configured as disk drive emulators or RAM disks. A RAM disk uses a number of RAM devices and a memory-resident program to emulate a disk drive. Like a disk drive, a RAM disk typically stores data as files in directories that are accessed in a manner similar to that of a disk drive.
- Prior art SSDs are also not wholly satisfactory, for a number of reasons. First, unlike a physical hard disk drive, a RAM disk forgets all stored data when the computer is turned off. The requirement to maintain power to keep data alive is problematic for SSDs that are generally used as disk drive replacements in servers or other computers. Also, SSDs do not presently provide the high densities and large memory capacities that are required for many computer applications. Currently, the largest SSD capacity available is 37.8 gigabytes (GB). SSDs having a 3.5 inch form factor, preferred to make them directly interchangeable with standard hard disk drives, are limited to a mere 3.2 GB. Moreover, existing SSDs operate in a mode emulating a conventional disk controller, typically using a Small Computer System Interface (SCSI) or Advanced Technology Attachment (ATA) standard for interfacing between the SSD and a client computer. Thus, encumbered by the limitations of disk controller emulation, hard disk circuitry, and ATA or SCSI buses, existing SSDs fail to take full advantage of the capabilities of RAM devices.
- Accordingly, there is a need for a data storage system with a network centered architecture that has a large data handling capacity, short access times, and maximum flexibility to accommodate various configurations and application scenarios. It is desirable that such a data storage system is scalable, fault-tolerant, and easily maintained. It is further desirable that the data storage system provide non-volatile backup storage, off-line backup storage, and remote management capabilities. The present invention provides these and other advantages over the prior art.
- The present invention provides a network attached memory system based on volatile memory devices, such as Random Access Memory (RAM) devices, and a method of operating the same to store, manipulate, process, and transfer data.
- It is a principal object of the present invention to provide a memory system that combines both volatile and non-volatile storage technologies to take advantage of the strengths of each type of memory.
- It is a further object of the present invention to provide such memory system for use in a data processing network or data network, the data network based on either physical wire connections or wireless connections, without the need of any significant alteration in the data network, in data processing systems attached thereto, or in the operating system and applications software of either.
- It is still a further object of the present invention to provide a fault-tolerant memory system having real-time streaming backup of data stored in memory without adversely affecting the data network or attached data processing systems.
- It is yet a further object of the present invention to provide a memory system wherein data storage and data retrieval are optimized for different types of data, thereby accelerating the execution of different types of application.
- It is yet another object of the present invention to provide a memory system that can function as a large network main memory resource for data processing systems coupled to the memory system by a data network that require large, flexible, and configurable RAM memory systems in order to execute applications that can take advantage of such memory systems.
- In one aspect, the present invention is directed to a memory matrix module for use in or with a data network. The memory matrix module includes at least one memory array having a number of memory devices arranged in a number of banks, each memory device capable of storing data therein. The memory matrix module further includes a memory controller connected to the memory array and capable of accessing the memory devices, and a cache connected to the memory controller. One or more copies of a file or data allocation table (DAT) stored in the cache are adapted to describe files and directories of data stored in the memory devices. Preferably, each of the banks has multiple ports, and the multiple ports and the DAT in the cache are configured to enable the memory controller to access different memory devices in different banks simultaneously. Also preferably, data stored in the memory devices can be processed by the memory controller using block data manipulation, wherein data stored in blocks of addresses, rather than in individual addresses, is manipulated, yielding an additional performance improvement. More preferably, the memory matrix module is part of a memory system for use in a data network including several data processing systems based on either physical wire or wireless connections. Most preferably, the memory matrix module is configured to enable different data processing systems to read from or write to the memory array simultaneously.
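As a rough illustration of the DAT concept above, the following minimal sketch maps each file to a list of (bank, offset, length) extents spread across banks, so that concurrent requests can land in different banks. The round-robin placement policy, class name, and fields are assumptions for illustration only; the patent does not specify them.

```python
BANKS = 4
BLOCK_SIZE = 16  # bytes per block; arbitrary for this sketch

class MemoryMatrix:
    def __init__(self):
        self.banks = [bytearray() for _ in range(BANKS)]
        self.dat = {}           # filename -> list of (bank, offset, length)
        self.next_bank = 0      # simple round-robin placement policy

    def write_file(self, name, data):
        """Split a file into blocks and spread them across the banks."""
        extents = []
        for i in range(0, len(data), BLOCK_SIZE):
            chunk = data[i:i + BLOCK_SIZE]
            bank = self.next_bank
            self.next_bank = (self.next_bank + 1) % BANKS
            offset = len(self.banks[bank])
            self.banks[bank].extend(chunk)
            extents.append((bank, offset, len(chunk)))
        self.dat[name] = extents          # the cached DAT describes the file

    def read_file(self, name):
        # Each extent touches exactly one bank; with multi-ported banks
        # these per-extent reads could proceed in parallel.
        return b"".join(bytes(self.banks[b][o:o + n])
                        for b, o, n in self.dat[name])
```

Because consecutive blocks land in different banks, a single file read exercises several banks at once, which is the mechanism behind the simultaneous-access claim.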
- Generally, the memory array, memory controller and cache are included within one of a number of memory subsystems within the memory matrix module. The memory subsystem includes, in addition to the memory array, memory controller, and cache, an input and output processor or central processing unit (I/O CPU) connected to the memory controller, a read-only memory (ROM) device connected to the I/O CPU, the ROM device having stored therein an initial boot sequence to boot the memory subsystem, a RAM device connected to the I/O CPU to provide a buffer memory to the I/O CPU, and a switch connected to the I/O CPU through an internal system bus and a network interface controller (NIC). The memory subsystem is further connected through the switch and a local area network (LAN) or data bus to the data network and other memory system modules, which include other memory matrix modules (MMM), memory management modules (MGT), non-volatile storage modules (NVSM), off-line storage modules (OLSM), and uninterruptible power supplies (UPS). This data bus can be in the form of a high-speed data bus such as a high-speed backplane chassis.
- Optionally, the memory matrix module can further include a secondary internal system bus connected to the primary internal system bus by a switch or bridge, additional dedicated function processors each with its own ROM and RAM devices, a wireless network module, a security processor, and one or more expansion slots connected via the internal system buses to connect alternate I/O or peripheral modules to the memory matrix module. Primary and secondary internal system buses can include, for example, a Peripheral Component Interconnect (PCI) bus.
- As noted above, the memory matrix module of the present invention is particularly useful in a memory system further including at least one management module (MGT) connected to one or more memory matrix modules and to the data network to provide an interface between the memory matrix modules and the data network. The management module is connected to the memory matrix modules and other memory system modules by a LAN or data bus and by a power management bus. Generally, the management module contains a NIC connected to an internal system bus, a switch connected to the NIC, and a connection between the switch and the LAN or data bus.
- Optionally, the management module further includes a second switch or bridge connecting the primary and the secondary internal system buses, and additional dedicated function processors each with their own ROM and RAM devices, a wireless network module, a security processor, and one or more expansion slots to connect alternate I/O or peripheral modules to the management module.
- In one embodiment, the memory system further includes one or more non-volatile storage modules (NVSM) to provide backup of data stored in the memory matrix modules. Generally, the non-volatile storage module includes a predetermined combination of one or more magnetic, optical, and/or magnetic-optical disk drives. Preferably, the non-volatile storage module includes a number of hard disk drives. More preferably, the hard disk drives are connected in a RAID configuration to provide a desired storage capacity, data transfer rate, or redundancy. In one version of this embodiment, the hard disk drives are connected in a
RAID Level 1 configuration to provide mirrored copies of data in the memory matrix. Alternatively, the hard disk drives may be connected in a RAID Level 0 configuration to reduce the time to back up data from the memory matrix. The non-volatile storage module also includes an I/O CPU, a non-volatile storage controller connected to the I/O CPU with data storage memory devices connected to the storage controller, a ROM device connected to the I/O CPU, the ROM device having stored therein an initial boot sequence to boot a non-volatile storage module configuration, a RAM device connected to the I/O CPU to provide a buffer memory to the I/O CPU, and a switch connected to the I/O CPU through a NIC, and through the network or data bus to other memory system modules and a number of data processing systems.
- Optionally, the non-volatile storage module further includes a switch or bridge connecting the primary and secondary internal system buses, additional dedicated function processors each with their own ROM and RAM devices, a wireless network module, a security processor, and one or more expansion slots to connect alternate I/O or peripheral modules to the non-volatile storage module.
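The trade-off between the two configurations named above can be sketched as follows. RAID Level 1 writes a full copy of every block to each drive (redundancy), while RAID Level 0 stripes blocks round-robin across drives (speed, since drives write in parallel). The functions are illustrative, not the module's actual firmware.

```python
def raid1_write(drives, blocks):
    """RAID 1: mirror the full block sequence onto every drive."""
    for d in drives:
        d.extend(blocks)

def raid0_write(drives, blocks):
    """RAID 0: stripe blocks round-robin across the drives."""
    for i, blk in enumerate(blocks):
        drives[i % len(drives)].append(blk)

def raid0_read(drives, nblocks):
    """Reassemble a striped sequence in original order."""
    return [drives[i % len(drives)][i // len(drives)]
            for i in range(nblocks)]
```

With N drives, RAID 0 writes roughly 1/N of the data per drive (hence the faster backup the text mentions), at the cost of losing everything if any one drive fails; RAID 1 survives any single drive failure but writes the full data set N times.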
- In one embodiment, the memory system may further include one or more off-line storage modules (OLSM) to provide a non-volatile backup of data stored in the memory matrix modules and non-volatile storage modules on removable media. Generally, the off-line storage module includes a predetermined combination of one or more magnetic tape drives, removable hard disk drives, magnetic-optical disk drives, optical disk drives, or other removable storage technology, which provide off-line storage of data stored in the memory matrix module and/or the non-volatile storage module. In this embodiment, the management module is further configured to back up the memory matrix modules and the non-volatile storage module to the off-line storage module and its removable storage media. The off-line storage module generally includes an I/O CPU, an off-line storage controller connected to the I/O CPU, and data storage memory devices connected to the memory controller. A ROM device having stored therein an initial boot sequence to boot an off-line storage module configuration is connected to the I/O CPU. A RAM device connected to the I/O CPU provides a buffer memory to the I/O CPU. The off-line storage module is further connected through an internal system bus, a NIC, a switch, and the LAN or data bus to other memory system modules and data processing systems.
- Optionally, the off-line storage module further includes a switch or bridge to connect the primary and secondary internal system buses, additional dedicated function processors each with their own ROM and RAM devices, a wireless network module, a security processor, and one or more expansion slots to connect alternate I/O or peripheral modules to the off-line storage module.
- In another embodiment, the memory system includes an uninterruptible power supply (UPS). The UPS supplies power from an electrical power line to the other memory system modules, and in the event of an excessive fluctuation or interruption in power from the electrical power line, provides backup power from a battery. Preferably, the UPS is configured to transmit a signal over the power management bus to the management module on excessive fluctuation or interruption in power from the electrical power line, and the management module is configured to back up the memory matrix to the non-volatile storage module upon receiving the signal. More preferably, the management module is further configured to notify memory system users of the power failure and to perform a controlled shutdown of the memory system.
- Upon restoration of power, the management module is further configured to restore the contents of the primary memory matrix from the most recent backup copy of the memory matrix stored in the non-volatile storage module, reactivate additional memory matrixes if previously configured as secondary backup memories, reactivate the non-volatile storage module as a secondary memory, and return the memory system to normal operating condition. If the non-volatile storage module is unavailable, the management module is further configured to restore the contents of the memory matrix directly from the most recent backup copy of the memory matrix stored in removable storage media in the off-line storage module.
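The power-fail and restore sequence described above can be modeled as a small sketch. The class and its fields are hypothetical stand-ins for the management module's logic; only the module names (NVSM, OLSM) come from the text.

```python
class ManagementModule:
    def __init__(self):
        self.matrix = {}        # volatile primary memory contents
        self.nvsm = None        # most recent non-volatile backup, if any
        self.olsm = None        # removable-media backup, if any
        self.events = []

    def on_power_fail(self):
        """UPS signaled a power failure: back up, notify, shut down."""
        self.nvsm = dict(self.matrix)       # stream matrix to the NVSM
        self.events.append("notified users of power failure")
        self.matrix.clear()                 # controlled shutdown: RAM is lost
        self.events.append("controlled shutdown")

    def on_power_restore(self):
        """Restore from the NVSM, falling back to the OLSM if needed."""
        source = self.nvsm if self.nvsm is not None else self.olsm
        if source is not None:
            self.matrix = dict(source)      # reload most recent backup
        self.events.append("returned to normal operation")
```

The fallback order (NVSM first, then the OLSM's removable media) mirrors the recovery priority the paragraph above describes.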
- In another aspect, the present invention is directed to a memory system having switched multi-channel network interfaces and real-time streaming backup. The memory system includes a memory matrix module and a non-volatile storage module capable of storing data therein, and a management module for coupling a data network to the memory matrix module via a primary network interface and to the non-volatile storage module via a secondary network interface. The management module is configured to enable the data network to access the memory matrix module during normal operation to provide a primary memory, to backup data to a secondary memory module, and to stream data from the secondary memory module to the non-volatile storage module to provide staged backup memory. Alternatively, data can be backed up directly from the primary memory to the non-volatile storage module in situations where the non-volatile storage module can accept data at a sufficiently fast rate from the primary memory, or where the data processing requirements of the primary memory permit backing up data at a rate that can be handled by the non-volatile storage module. Generally, the management module is further configured to detect failure or a non-operating condition of the primary memory, and to reconfigure the secondary network interface to enable the data network to access a secondary memory if the secondary memory is available, or to access the non-volatile storage module if the secondary memory is unavailable. Thus, the failover to the backup memory is completely transparent to a user of the data processing system. Examples of network interface standards that can be used include gigabit Ethernet, ten gigabit Ethernet, Fibre Channel-Arbitrated Loop (FC-AL), Firewire, Small Computer System Interface (SCSI), Advanced Technology Attachment (ATA), InfiniBand, HyperTransport, PCI-X, Direct Access File System (DAFS), IEEE 802.11, or Wireless Application Protocol (WAP).
- In one embodiment, the management module is connected to the memory matrix via a number of network interfaces or data buses connected in parallel, the number of network interfaces configured to provide higher data transfer rates in normal operation and to provide access to the memory matrix at a reduced data transfer rate should one of the network interfaces fail.
- In one aspect of the present invention, a memory system configured in a Solid State Disk (SSD) mode of operation is described. By Solid State Disk it is meant a system that provides basic data storage to and data retrieval from the memory system using one or more memory matrix modules in a configuration analogous to those of standard hard disk drives in a network storage system.
- In yet another aspect, the memory system is configured in a dynamic RAID or an electronic RAID (e-RAID) mode to provide an e-RAID. By e-RAID it is meant a system that provides enhanced capacity, speed, and reliability using one or more memory matrix modules connected in a configuration analogous to those of hard disk drives in a conventional Redundant Array of Independent/Inexpensive Disks (RAID) system. Generally, the memory matrix includes a number of memory devices arranged in a number of banks, and a memory controller capable of accessing the memory devices connected to the banks. The memory controller is configured to store data to any combination of the number of banks simultaneously to provide an e-RAID system. In one embodiment, the memory matrix includes two banks of memory devices and the memory controller is configured to mirror the data stored in a first one of the two banks to a second of the two banks to provide an e-RAID Level 1 system. Alternatively, the memory controller is configured to mirror the data stored in a first group of half of the banks of memory devices into a second group of another half of the banks to provide an e-RAID Level 0+1 system. In yet another embodiment, the memory controller is configured to stripe the data across the banks and to store parity information for each stripe of data in at least one of the banks to provide an e-RAID Level 5 system. In yet another embodiment, to provide scalability, the management module, which includes a memory controller, can likewise configure multiple memory matrix modules where data is stored to any combination of memory matrix modules simultaneously to provide higher capacity e-RAID systems.
- In another aspect, a memory system configured in a caching mode is described. By caching mode it is meant a system that provides a temporary memory buffer to cache data reads, writes, and requests from a data network to a data storage system in order to reduce access times for frequently accessed data, and to improve storage system response to multiple data write requests.
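The striping-with-parity arrangement of the e-RAID Level 5 embodiment described above can be sketched in a few lines. The bank contents, block size, and function names here are illustrative assumptions, not the controller's actual layout:

```python
# Hypothetical sketch of e-RAID Level 5: data is striped across N data banks
# and an XOR parity block is stored in a further bank, so the contents of any
# single failed bank can be reconstructed from the survivors.

def make_stripe(blocks):
    """Given N equal-length data blocks (bytes), return blocks plus a parity block."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return list(blocks) + [parity]

def recover(stripe, lost_index):
    """Rebuild the block at lost_index by XOR-ing all surviving blocks."""
    surviving = [b for i, b in enumerate(stripe) if i != lost_index]
    rebuilt = bytes(len(surviving[0]))
    for b in surviving:
        rebuilt = bytes(x ^ y for x, y in zip(rebuilt, b))
    return rebuilt

banks = make_stripe([b"AAAA", b"BBBB", b"CCCC"])  # three data banks + one parity bank
assert recover(banks, 1) == b"BBBB"               # bank 1 fails; its data is rebuilt
```

The same XOR relation is what allows the memory controller to continue serving reads while a failed bank is reconstructed.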
- In yet another aspect, a memory system configured in a virtual memory paging mode is described. By virtual memory paging it is meant a staged data overflow system that provides swapping of memory pages or predetermined sections of memory in the memory of a network-connected server or other network-connected data processing device out to a memory matrix in the event of a data overflow condition wherein the storage capacity of the server or data processing device is exceeded. The system also provides swapping of memory pages or predetermined sections of memory in the memory matrix out to a non-volatile storage system in the event of a data overflow condition wherein the storage capacity of the memory matrix is exceeded. The virtual memory pages or sections thereby stored in the non-volatile storage system are then read back into the memory matrix as they are needed, and the virtual memory pages or sections stored in the memory matrix are then read back into the memory of the network-connected server or data processing device as they are needed, wherein the memory matrix and the non-volatile storage system function as staged virtual extensions of the capacity of the memory in a network-connected server or data processing device, and the non-volatile storage system also functions as a virtual extension of the capacity of the memory matrix.
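The staged overflow described above can be sketched with a small model. The class, tier capacities, and oldest-first eviction policy are assumptions for illustration; the patent does not specify an eviction order:

```python
# Toy model of staged virtual memory paging: pages evicted from a full server
# memory spill into the memory matrix, pages evicted from a full matrix spill
# into non-volatile storage, and accessing a swapped-out page faults it back in.
from collections import OrderedDict

class StagedPager:
    def __init__(self, server_slots, matrix_slots):
        self.server = OrderedDict()   # fastest tier (server memory)
        self.matrix = OrderedDict()   # memory matrix tier
        self.nvsm = {}                # non-volatile tier (treated as unbounded here)
        self.server_slots = server_slots
        self.matrix_slots = matrix_slots

    def write(self, page, data):
        self.server[page] = data
        if len(self.server) > self.server_slots:       # server overflow:
            old, d = self.server.popitem(last=False)   # swap the oldest page out
            self.matrix[old] = d
            if len(self.matrix) > self.matrix_slots:   # matrix overflow:
                p, v = self.matrix.popitem(last=False)
                self.nvsm[p] = v                       # stage down to non-volatile storage

    def read(self, page):
        if page not in self.server:                    # page fault: stage the page back in
            data = self.matrix.pop(page) if page in self.matrix else self.nvsm.pop(page)
            self.write(page, data)
        return self.server[page]

pager = StagedPager(server_slots=2, matrix_slots=2)
for i in range(5):
    pager.write("page%d" % i, bytes([i]))
assert "page0" in pager.nvsm                 # oldest page staged out through both tiers
assert pager.read("page0") == b"\x00"        # faulted back into server memory on access
```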
- In still another aspect, a memory system configured in a continuous data streaming mode is described. By continuous data streaming it is meant a system that transmits a continuous stream of data over a data network to a recipient data processing system, the data type requiring the transmission to be continuous without any gaps in timing for the entire duration of the transmission. Examples of this type of data include streaming video and streaming audio.
- In another aspect, a memory system configured in a data encryption-decryption mode is described. By encryption-decryption mode it is meant a system that encrypts data and decrypts encrypted data transmitted over a data network on the fly, using one or more publicly known and well defined encryption standards, or one or more private customized encryption-decryption schemes. Data encryption enhances the security of files transmitted over a data network, whereby an encrypted file that falls into unauthorized hands remains undecipherable.
- In yet another aspect, a memory system configured in a data compression-decompression mode is described. By compression-decompression mode it is meant a system that compresses the physical size of data files and decompresses compressed data files transmitted over a data network on the fly, using one or more publicly known and well defined compression standards, or one or more private customized compression-decompression schemes. Data compression reduces the time needed to transmit files over a data network, reducing data access time and network traffic.
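As an illustration of the on-the-fly compression-decompression mode using one publicly known and well defined standard (DEFLATE, here via Python's zlib), with a made-up payload:

```python
# Compress redundant data before transmission and verify a lossless round trip.
import zlib

payload = b"log entry: status OK\n" * 500       # highly redundant example data
compressed = zlib.compress(payload, level=6)    # compressed before it goes on the wire
assert len(compressed) < len(payload)           # smaller transmitted size
assert zlib.decompress(compressed) == payload   # decompression restores the data exactly
```

The reduction in transmitted size is what yields the access-time and network-traffic savings noted above; a private customized scheme would simply replace the codec.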
- In another aspect, a memory system configured in a pattern matching mode is described. By pattern matching it is meant a system that locates, retrieves, and analyzes data stored in the memory, either directly or through a derived index, using a pattern matching search key. The search key can be generated in real time or be previously derived from the stored data using a data indexing algorithm, which may include compression, encryption, and other data manipulation techniques. Data may be of any type, including text, graphics, video, audio, multimedia, binary large objects, and metadata. The pattern matching mode provides for the following functions:
- (1) Generation of search key indexes based on data indexing algorithms;
- (2) Searching by pattern matching using a real time or previously derived key;
- (3) Ability to search and analyze data using compound keys consisting of a plurality of search keys;
- (4) Adjustable degree of accuracy and tolerance in searching;
- (5) Retrieval and validation of data by pattern matching;
- (6) Sorting of data or indexes by pattern matching search keys;
- (7) Automated reindexing and resorting;
- (8) Analysis, manipulation, and transfer of data found through pattern matching; and
- (9) Ability to provide hierarchical data security by restricting user or application access based on pattern matching.
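Functions (2) and (4) above, searching by pattern matching with an adjustable degree of accuracy and tolerance, might be sketched as follows. Python's difflib similarity ratio stands in for the system's matching engine; the records and threshold are made up for the example:

```python
# Fuzzy pattern-matching search over stored records with a tunable tolerance.
from difflib import SequenceMatcher

def pattern_search(records, key, tolerance=0.8):
    """Return records whose similarity to the search key meets the tolerance threshold."""
    hits = []
    for rec in records:
        score = SequenceMatcher(None, key.lower(), rec.lower()).ratio()
        if score >= tolerance:
            hits.append(rec)
    return hits

records = ["John Smith", "Jon Smyth", "Jane Doe"]
assert pattern_search(records, "John Smith", tolerance=0.8) == ["John Smith", "Jon Smyth"]
assert pattern_search(records, "John Smith", tolerance=1.0) == ["John Smith"]
```

Raising the tolerance toward 1.0 demands an exact match; lowering it widens the net, which is the adjustable-accuracy behavior item (4) describes.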
- In still another aspect, the present invention is directed to a real-time application accelerator mode. A memory system for use with a data processing system is provided, the memory system including a management module and memory matrix module configured to interface with the data processing system. The management module has at least one application programming interface (API) configured to store, retrieve, manipulate, or transfer data in the memory matrix based on a property or logical type of the data, whereby time for a program running on the data processing system to access and transfer data stored in the memory system is reduced.
- In application accelerator mode, the present invention analyzes any application that accesses the data stored in the memory system for any reason, including storage, retrieval, analysis, manipulation, internal or external transfer, error correction, and maintenance. The invention provides for dynamically programmable and automated optimization of memory allocation, data access, data manipulation, and data transfer based on analysis of application characteristics, behavior, and treatment of data, memory system configuration, external network and server characteristics, and user behavior. Examples of situations in which optimization can be applied include:
- (1) Access to the memory system by a single or multiple concurrently running applications;
- (2) Access to the memory system by a single or multiple networks, servers, and users that exhibit diverse access requirements and patterns; and
- (3) Self-diagnostic, self-auditing, self-reporting, error correction, and maintenance applications.
- In one embodiment, the memory system is compatible with Extensible Markup Language (XML) format structured documents, and the management and memory matrix modules are configured to parse and store data from XML compliant documents according to data type, and to format XML documents into multiple presentation formats using Extensible Stylesheet Language (XSL) templates. Preferably, the memory matrix is further configured to provide real-time information on data and data handling processes as data is stored in the memory matrix. For example, a running total of a specified field could be calculated as the data is being stored. More preferably, the memory system is capable of being synchronized with another XML enabled storage device or data processing system.
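The running-total example above can be illustrated with the standard-library ElementTree parser. The element names, the totaled field, and the in-memory store are assumptions for the sketch, not the memory matrix's actual XML machinery:

```python
# Parse an XML document and compute a running total of a specified field
# as each record is stored, mirroring the real-time reporting described above.
import xml.etree.ElementTree as ET

doc = """<orders>
  <order><id>1</id><amount>19.95</amount></order>
  <order><id>2</id><amount>5.00</amount></order>
  <order><id>3</id><amount>12.50</amount></order>
</orders>"""

running_total = 0.0
store = []                                        # stand-in for the memory matrix
for order in ET.fromstring(doc).iter("order"):
    record = {child.tag: child.text for child in order}
    running_total += float(record["amount"])      # computed as the data is stored
    store.append(record)

assert len(store) == 3
assert round(running_total, 2) == 37.45
```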
- In another embodiment, the memory system is SQL enabled to create, update, and query a component of a database or a relational database stored in the memory matrix. Preferably, the management module is configured to provide custom partitioning, bit-level locking, and manipulation of data written to the memory matrix modules. More preferably, the management module and the memory matrix module are configured to provide on-demand random access to data stored in the memory matrix.
- In another aspect, the present invention is directed to the memory matrix module having real-time local and remote management of the memory matrix module. As described above, the memory matrix contained in the memory matrix module includes a number of memory devices, each capable of storing data, arranged in a number of banks, and a memory controller capable of accessing the memory devices connected to each of the banks. The memory matrix further includes a cache connected to the memory controller, the cache having stored therein a DAT adapted to describe files and directories of data stored in the memory devices. In accordance with the present invention, the memory controller is configured to provide local status reporting and management of the memory matrix independent of a data processing system connected to the memory matrix module, and remote status reporting and management of the memory matrix through a data network based on physical wire connections, such as a LAN, WAN, or the Internet, connected to the memory matrix module. Alternatively, remote status reporting and management of the memory matrix can be accomplished through a wireless network connection compatible with the memory matrix module's wireless network module.
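A hypothetical sketch of such a DAT, with each file stored as a chain of clusters and a sentinel value marking free clusters, in the style of FAT-like allocation tables. The class, sentinel values, and cluster counts are assumptions for illustration:

```python
# Toy Data Allocation Table: each cluster is either free or chained to the
# next cluster of its file; a directory maps file names to starting clusters.
FREE, END = -1, -2

class AllocationTable:
    def __init__(self, clusters):
        self.table = [FREE] * clusters          # every cluster starts out free
        self.directory = {}                     # file name -> starting cluster

    def allocate(self, name, n):
        free = [i for i, v in enumerate(self.table) if v == FREE][:n]
        if len(free) < n:
            raise MemoryError("not enough free clusters")
        for cur, nxt in zip(free, free[1:] + [END]):
            self.table[cur] = nxt               # chain the clusters together
        self.directory[name] = free[0]

    def clusters_of(self, name):
        chain, cur = [], self.directory[name]
        while cur != END:
            chain.append(cur)
            cur = self.table[cur]
        return chain

dat = AllocationTable(8)
dat.allocate("readme.txt", 3)
assert dat.clusters_of("readme.txt") == [0, 1, 2]
assert dat.table.count(FREE) == 5               # remaining clusters stay free
```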
- In yet another aspect, the present invention is directed to the management module's ability to be administered in real time locally and remotely, and to perform real-time local and remote management of other management modules as well as one or more memory matrix modules coupled to the management module through a LAN, data network, or data bus. As described above, the memory matrix in the management module, in a fashion similar to the memory matrix contained in a memory matrix module, includes a number of memory devices, each capable of storing data, arranged in a number of banks, and a memory controller capable of accessing the memory devices connected to each of the banks. The memory matrix further includes a cache connected to the memory controller, the cache having stored therein a DAT adapted to describe files and directories of data stored in the memory devices. In accordance with the present invention, the memory controller is configured to provide local status reporting and management of the memory matrix independent of a data processing system connected to the management module, and remote status reporting and management of the memory matrix through a data network based on physical wire connections, such as a LAN, WAN, or the Internet, connected to the management module. Alternatively, remote status reporting and management of the management module can be accomplished through a wireless data network connection compatible with the management module's wireless network module, and independent of any other physically connected data network. In addition to management functions related to the management module, the management module is configured to provide management capabilities for other management modules and memory matrix modules coupled to the management module through a data network or data bus, the data network or data bus based on either physical wire connections or wireless connections.
- In one embodiment, the memory controller is configured to detect and correct errors in data transmitted to or stored in the memory devices using, for example, ECC or a Hamming code.
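A Hamming(7,4) toy example of the kind of single-bit correction such a memory controller could apply. The bit layout and helper names are assumptions for illustration, not the controller's actual ECC implementation:

```python
# Hamming(7,4): three parity bits protect four data bits; recomputing parity
# on read yields a syndrome that points at any single flipped bit.
def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute parity; the syndrome is the 1-based position of a flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1             # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]      # extract the corrected data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                             # simulate a single-bit memory error
assert hamming74_correct(word) == [1, 0, 1, 1]
```

Production ECC memory typically uses wider SECDED codes over 64-bit words, but the syndrome mechanism is the same.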
- In another embodiment, the system is configured to defragment data stored in memory space defined by the memory devices. Preferably, the system is configured to perform the defragmentation in a way that is substantially transparent to users of the data processing system.
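The defragmentation described above, consolidating occupied clusters and free space into contiguous runs, can be sketched as follows. Modeling clusters as a Python list with None marking free space is an assumption for illustration:

```python
# Compact occupied clusters to the front of the array so that free space
# forms a single contiguous run at the end.
def defragment(clusters):
    """clusters: list of (owner, data) tuples, or None for a free cluster."""
    occupied = [c for c in clusters if c is not None]
    return occupied + [None] * (len(clusters) - len(occupied))

fragmented = [("a", b"x"), None, ("b", b"y"), None, ("a", b"z")]
compacted = defragment(fragmented)
assert compacted == [("a", b"x"), ("b", b"y"), ("a", b"z"), None, None]
```

An in-place implementation working bank by bank would let the operation remain substantially transparent to users, as the embodiment requires.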
- In yet another embodiment, the system is configured to calculate statistics related to operation of the memory matrix and to provide the statistics to an administrator of the data processing system. The statistics can include, for example, information related to the available capacity of the memory matrix, throughput of data transferred between the memory matrix and the data processing system, or a rate at which memory matrix resources are being consumed.
- In still another embodiment, the memory matrix module is part of a memory system that further includes a management module and a non-volatile storage module. The management module is configured to couple the memory matrix module to the data processing system to provide a primary memory, and to couple the non-volatile storage module to the memory matrix to provide a backup memory. Preferably, the memory controller and I/O CPU of the memory matrix module are configured to physically defragment, arrange, and optimize the data in the memory matrix prior to the data being written to the non-volatile storage module.
- The advantages of a memory system of the present invention include:
- (i) short data access times;
- (ii) RAM block data manipulation and simultaneous parallel access capabilities resulting in fast data manipulation;
- (iii) high reliability and data security;
- (iv) modular, network-centric architecture that is readily expandable, scalable, and compatible with multiple network storage architectures such as NAS and SAN;
- (v) real-time local and remote management that optimizes maintenance and backup operations while reducing overhead on a host server or data processing system;
- (vi) ability to be flexibly configured in different low level modes of operation, some of which can run concurrently: SSD, e-RAID, caching, virtual memory paging, continuous data streaming, data encryption and decryption, data compression and decompression, application acceleration, and others; and
- (vii) while in application acceleration mode, the further ability to be flexibly configured to accelerate different applications, some of which can run concurrently: SQL database processing, XML processing, streaming multimedia, high capacity webserving, computationally intensive applications (such as air traffic control or weather mapping), technical and scientific modeling, video and graphics acquisition and processing (accelerating applications such as Adobe Photoshop® and Adobe Premiere®), real-time multi-user network gaming and simulation, voice recognition and analysis, voice-over-IP (VOIP) processing, biometric processing, artificial intelligence and pattern matching, and others.
- These and various other features and advantages of the present invention will be apparent upon reading of the following detailed description in conjunction with the accompanying drawings, where:
- FIG. 1 (prior art) is a block diagram of a conventional memory system having a server attached storage (SAS) architecture;
- FIG. 2 (prior art) is a block diagram of a conventional memory system having a network attached storage (NAS) architecture;
- FIG. 3 (prior art) is a block diagram of a conventional memory system having a storage area network (SAN) architecture;
- FIG. 4 is a block diagram of a memory system according to an embodiment of the present invention having a network attached storage (NAS) architecture;
- FIG. 5 is a block diagram of a memory system according to an embodiment of the present invention having a storage area network (SAN) architecture;
- FIG. 6 is a partial block diagram of the memory system of FIG. 4 showing a memory matrix module (MMM) with several memory subsystems therein according to an embodiment of the present invention;
- FIG. 7 is a block diagram of an embodiment of a memory subsystem according to an embodiment of the present invention;
- FIG. 8 is a block diagram of an embodiment of a memory controller suitable for use in the memory subsystem of FIG. 7;
- FIG. 9 is a block diagram of an e-RAID Level 0 system according to an embodiment of the present invention;
- FIG. 10 is a block diagram of an e-RAID Level 1 system according to an embodiment of the present invention;
- FIG. 11 is a block diagram of an e-RAID Level 5 system according to an embodiment of the present invention;
- FIG. 12 is a block diagram of an e-RAID Level 0+1 system according to an embodiment of the present invention;
- FIG. 13 is a block diagram of a management module (MGT) of the memory system of FIG. 4 according to an embodiment of the present invention;
- FIG. 14 is a block diagram of a non-volatile storage module (NVSM) of the memory system of FIG. 4 according to an embodiment of the present invention;
- FIG. 15 is a block diagram of an off-line storage module (OLSM) of the memory system of FIG. 4 according to an embodiment of the present invention; and
- FIG. 16 is a flowchart showing an overview of a process for operating a memory system having a memory matrix module according to an embodiment of the present invention.
- An improved data storage or memory system having a memory matrix and a method of operating the same are provided.
- An exemplary embodiment of a memory system 100 including one or more memory matrix modules (MMM) 105 or units, each having one or more memory subsystems 110 according to the present invention for storing data therein, will now be described with reference to FIG. 4. FIG. 4 is a block diagram of a memory system (100) having a network attached storage (NAS) architecture. Although memory system 100 is shown as having only two memory matrix modules 105, each with a single memory subsystem 110 (shown in phantom), it will be appreciated that the memory system can be scaled to include any number of memory matrix modules having any number of memory subsystems depending on the memory capacity desired. In addition, memory system 100 can be used with a single data processing system 115, such as a computer or PC, or can be coupled to a data processing network or data network 120 to which several data processing systems are connected. Data network 120 can be based on either a physical connection or a wireless connection as described infra. By physical connection it is meant any link or communication pathway, such as wires, twisted pairs, coaxial cable, or fiber optic line or cable, that connects between memory system 100 and data network 120 or data processing system 115. For purposes of clarity, many of the details of data processing systems 115 and data networks 120 that are widely known and are not relevant to the present invention have been omitted.
- In addition to memory matrix modules 105 with memory subsystems 110, memory system 100 typically includes one or more management modules (MGT) 125 or units to interface between the memory subsystems and data network 120; one or more non-volatile storage modules (NVSM) 130 or units to backup data stored in the memory matrix modules; one or more off-line storage modules (OLSM) 135 or units having removable storage media (not shown) to provide an additional backup of data; and an uninterruptible power supply (UPS) 140 to supply power from an electrical power line to the memory matrix modules 105 and to the other modules through a power bus 145. The modules of memory system 100 are coupled to one another and to data processing systems 115 or the data network 120 via a local area network (LAN) or data bus 150. To provide increased reliability and throughput, the memory system 100 can include any number of management modules (MGT) 125, non-volatile storage modules (NVSM) 130, and off-line storage modules (OLSM) 135. Operation of memory matrix modules 105, UPS 140 and the other modules is controlled and coordinated by management module 125 via primary and secondary internal system buses (not shown in this figure) and via a power management bus 155. - Although
memory system 100 and method of the present invention are described in the context of a memory system having a NAS architecture, it will be appreciated that the memory system and method of the present invention can also be used with memory systems having a storage area network (SAN) architecture using expansion cards 156 and coupled to the data network 120 via, for example, a Fibre Channel-Arbitrated Loop connection 158, as shown in FIG. 5. - The various components, modules and subsystems of
memory system 100 will now be described in more detail with reference to FIGS. 6 through 15. - FIG. 6 is a partial block diagram of a portion of
memory system 100 showing the memory matrix module 105 according to an embodiment of the present invention. Referring to FIG. 6, memory matrix module 105 contains a primary internal system bus 160 that is coupled through a bridge or switch 165 to a secondary internal system bus 170. The memory matrix module 105 is coupled to management module 125, non-volatile storage module 130 and off-line storage module 135, and to data processing system 115 or data network 120 (not shown in this figure), through a network interface card or controller (NIC) 175, a switch 180, a number of physical links 185 such as Gigabit Interface Converters (GBICs), and one or more individual connections on the LAN or data bus 150. The redundant paths taken by connections to the LAN or data bus 150 between the switches 180 of the modules of memory system 100 form a ‘mesh’ or fabric type of network architecture that provides increased fault tolerance through path redundancy, and higher throughput during normal operation when all paths are operating correctly. -
Switch 180 enables management module 125, non-volatile storage module 130, off-line storage module 135 and data processing systems (not shown in this figure) connected to any of the connections on the LAN or data bus 150 to access any memory subsystem 110 in memory matrix module 105. Switch 180 can be a switching fabric or a cross-bar type switch capable of wire-speed operation running at full gigabit speeds, and having dynamic packet buffer memory allocation, multi-layer switching and filtering (Layer 2 and Layer 3 switching and Layer 4-7 filtering), and integrated support for the class of service priorities required by multimedia applications. One example is the BCM5680 8-Port Gigabit Switch from Broadcom Corporation of Irvine, Calif., USA. - In the embodiment shown,
memory matrix module 105 further includes security processor 200 for specific additional data processing and manipulation, and UPS power management interface 205 to enable the memory matrix module to interface with uninterruptible power supply 140. Security processor 200 can be any commercially available device that integrates a high-performance IPSec engine handling DES, 3DES, HMAC-SHA-1, and HMAC-MD5, a public key processor, a true random number generator, context buffer memory, and a PCI or equivalent interface. One example is a BCM5805 Security Processor from Broadcom Corporation of Irvine, Calif., USA. - Optionally,
memory matrix module 105 can further include additional dedicated function processors on the secondary internal system bus 170, connected to the primary internal system bus 160 via switch 165, for specific additional data processing and manipulation. The dedicated function processors are each provided with their own ROM and RAM devices and are coupled to memory subsystems 110. - Expansion slot or
slots 240, coupled to memory subsystems 110 via switch 165 and the primary and secondary internal system buses 160 and 170, enable alternate I/O or peripheral modules to be connected to memory system 100. -
Wireless module 245, also coupled to memory subsystems 110 through switch 165 and the primary and secondary internal system buses 160 and 170, connects memory system 100 to additional data processing systems or data networks via a wireless connection. - An exemplary embodiment of
memory subsystem 110 will now be described with reference to FIG. 7. As shown in FIG. 7, memory subsystem 110 generally includes a number of memory devices 250, each capable of storing data therein, arranged in a memory array 255 having a plurality of banks 260, each bank having a predetermined number of memory devices. Memory subsystem 110 can include any number of memory devices 250 arranged in any number of banks 260 depending on the data storage capacity needed. - Typically,
memory devices 250 include Random Access Memory (RAM) devices. RAM devices are integrated circuit memory chips that have a number of memory cells for storing data, each memory cell capable of being identified by a unique physical address including a row and column number. Some of the more commonly used RAM devices include dynamic RAM (DRAM), fast page mode (FPM) DRAM, extended data out RAM (EDO RAM), burst EDO RAM, static RAM (SRAM), synchronous DRAM (SDRAM), Rambus DRAM (RDRAM), double data rate SDRAM (DDR SDRAM), and future RAM technologies as they become commercially available. Of these, SDRAM is currently preferred because it is faster than EDO RAM and less expensive than SRAM. - Alternatively,
memory devices 250 can include devices, components or systems using holography, atomic resolution storage or molecular memory technology to store data. Holographic data storage systems (HDSS) split a laser beam into two components. A ‘page’ of data is then impressed on one of the beams using a mask or Spatial Light Modulator (SLM), and the components of the split beam are aimed so that they cross. The beams are directed so that they intersect to form an interference pattern of light and dark areas within a special optical material that reacts to light and retains the pattern to store the data. To read stored data, the optical material is illuminated with a reference beam, which interacts with the interference pattern to reproduce the recorded page of data. This image is then transferred to a data processing system using a Charge-Coupled Device (CCD).
- Atomic resolution storage or ARS systems use an array of atom-size probe tips to read and write data on a storage media consisting of a material having two distinct physical states, or phases, that are stable at room temperature. One phase is amorphous, and the other is crystalline. Data is recorded or stored in the media by heating portions spots of the media to change them from one phase to the other. ARS systems can provide memory devices with data densities greater than about 1 terabyte per cubic centimeter.
- In addition to
array 255,memory subsystem 110 generally includes amemory controller 265 for accessing data in the memory devices of the memory matrix, and acache 270 connected to the memory controller having one or more copies of a file or Data Allocation Table (DAT) stored therein for organizing data in thememory subsystem 110 orarray 255. In accordance with the present invention, the DAT is adapted to provide one of several possible methods for organizing data inmemory subsystem 110. Under onemethod memory subsystem 110 is partitioned and each partition divided into clusters. Each cluster is either allocated to a file or directory or it is free (unused). A directory lists the name, size, modification time, access rights, and starting cluster of each file or directory it contains. A special value for “not allocated” indicates a free cluster or the beginning of a series of free clusters. - Under another method for organizing data in
memory subsystem 110, the DAT may set aside customized partition and cluster configurations to achieve particular optimizations in data access. An analogous example of this method from hard disk drive based databases is the creation of nonstandard partitions on hard disk drives to store certain data types, such as large multimedia files or small Boolean fields, in such a way that data queries, updates, manipulation, and retrieval are optimized. However, customized partition and cluster configurations are generally not available with conventional hard disk controllers, which are generically optimized for the most common data types. - I/O CPU 275 and memory controller 265 generally include hardware and software to interface between management module 125 and banks 260 of memory devices 250 in memory array 255. The hardware and/or software include a protocol to translate logical addresses used by a data processing system 115 into physical addresses or locations in memory devices 250. Optionally, memory controller 265 and memory devices 250 also include logic for implementing an error detection and correction scheme for detecting and correcting errors in data transferred to or stored in memory subsystem 110. The error detection and correction can be accomplished, for example, using a Hamming code. Hamming codes add extra or redundant bits, such as parity bits, to stored or transmitted data for the purposes of error detection and correction. Hamming codes are described in, for example, U.S. Pat. No. 5,490,155, which is incorporated herein by reference. Alternatively, memory devices 250 can include a technology, such as Chipkill, developed by IBM Corporation, that enables the memory devices themselves to automatically and transparently detect and correct multi-bit errors and selectively disable problematic parts of the memory. - In one embodiment,
memory controller 265 can be any suitable, commercially available controller for controlling a data storage device, such as a hard disk drive controller. A suitable memory controller should be able to address from about 2 GB to about 48 GB of memory devices 250 arranged in from about eight to about forty-eight banks 260, have at least a 133 MHz local bus, and one or more Direct Memory Access (DMA) channels. One example would be the V340HPC PCI System Controller from V3 Semiconductor Corporation of North York, Ontario, Canada. I/O CPU 275 receives memory requests from primary internal system bus 160 and passes the requests to memory controller 265 through local bus 300. I/O CPU 275 serves to manage the reading and writing of data to banks 260 of memory devices 250 as well as manipulate data within the banks of memory devices. - By “manipulate data” it is meant defragmenting the
memory array 255, encryption and/or decryption of data to be stored in or read from the array, and data optimization for specific applications. Defragmenting physically consolidates files and free space in the array 255 into a continuous group of sectors, making storage faster and more efficient. Encryption refers to any cryptographic procedure used to convert plaintext into ciphertext in order to prevent any but the intended recipient from reading that data. Data optimization entails special handling of specific types of data or data for specific applications. For example, some data structures commonly used in scientific applications, such as global climate modeling and satellite image processing, require periodic or infrequent processing of very large amounts of streaming data. By streaming data it is meant data arrays or sequential data that are accessed once by the data processing system 115 and then not accessed again for a relatively long time. - A read-only memory (ROM)
device 280 having an initial boot sequence stored therein is coupled to I/O CPU 275 to boot memory subsystem 110. A RAM device 285 coupled to I/O CPU 275 provides a buffer memory to the I/O CPU. The I/O CPU 275 can be any commercially available device having a speed of at least 600 MHz and the capability of addressing at least 4 GB of memory. Suitable examples include a 2 GHz Pentium® 4 processor commercially available from Intel Corporation of Santa Clara, Calif., USA, and an Athlon® 1.5 GHz processor commercially available from Advanced Micro Devices, Inc. of Sunnyvale, Calif., USA. - Preferably,
ROM device 280 is an electrically erasable or flash programmable ROM (EEPROM) that can be programmed to enable the management module 125 to operate according to the present invention. More preferably, ROM device 280 has from about 32 to about 128 Mbits of memory. One suitable EEPROM, for example, is a 28F6408W30 Wireless Flash Memory with SRAM from Intel Corporation of Santa Clara, Calif., USA. - After data access has been initiated through I/
O CPU 275, data in memory array 255 is passed through memory controller 265 directly to the primary internal system bus 160 via a dedicated bus or communications pathway 290. Optionally, memory controller 265 can include multiple controllers or parallel input ports (not shown) to enable another CPU, such as the dedicated function CPUs, to access communications pathway 290 in the event of a failure of I/O CPU 275. - Referring to FIG. 8,
memory controller 265 typically includes a local bus interface 305 to connect via local bus 300 to I/O CPU 275, and a PCI or equivalent system bus interface 310 to connect to primary internal system bus 160 via communications pathway 290. Although not shown in this figure, it will be appreciated that memory controller 265 may be connected to more than one local bus 300 or I/O CPU 275, and, similarly, to more than one PCI or equivalent primary internal system bus 160 to provide added redundancy and high availability. Memory controller 265 also generally includes a first in, first out (FIFO) storage memory buffer 315, one or more direct memory access (DMA) channels 320, a serial EEPROM controller 325, an interrupt controller 330, and timers 335. In addition, memory controller 265 includes a memory array controller 340 that interfaces with memory array 255 managed by memory controller 265. Optionally, memory controller 265 can include a plurality of memory array controllers (not shown) connected in parallel to provide increased reliability. - In a preferred embodiment,
memory controller 265 is a Redundant Array of Independent/Inexpensive Disks (RAID) type controller such as is used in a conventional RAID system. At least one RAID type memory controller used in conjunction with at least one memory matrix module 105 and at least one management module 125 of the present invention provides an e-RAID or a dynamic-RAID system in which data is written or stored to and read from any combination of the plurality of banks 260 simultaneously. - Like conventional RAID, e-RAID is a technology used to improve the I/O performance and reliability of data storage devices, here
memory matrix modules 105. Data is stored across multiple banks 260 of memory devices 250 in order to provide immediate access to the data despite one or more device failures. e-RAID provides an access time of less than 25 microseconds and consequently is from about fifteen to about twenty times faster than conventional RAID technology. In addition, as described above, memory controller 265 applies an Error Checking and Correcting (ECC) scheme at the memory device level, thereby providing a reliability unprecedented in conventional RAID systems. - As with conventional disk-based RAID systems, in e-RAID there are several strategies for storing data to
memory matrix modules 105, each referred to as an e-RAID Level. There are a plurality of e-RAID Levels, each having its own benefits and disadvantages, a number of which are described below. Unlike conventional RAID systems, however, an e-RAID system provides for the dynamic allocation and reallocation of memory devices in real time for the various functional partitions in an e-RAID system, which may change the existence, size, properties, and e-RAID level of the e-RAID system. Dynamic e-RAID management is under the control of one or more memory controllers under direction of at least one memory management module. The descriptions below apply to a single memory matrix module 105, but it will be appreciated that e-RAID can be applied over a plurality of memory matrix modules 105 using their contained banks 260 of memory devices 250. Multi-module e-RAID is configured with multiple virtual partitions each comprised of one or more banks 260 of memory devices 250, each virtual partition capable of spanning one or more memory matrix modules 105. - An e-RAID Level 0, or striping without fault tolerance, is an I/O performance oriented striped data mapping technique. A block diagram illustrating an e-RAID Level 0 is shown in FIG. 9.
Memory matrix module 105 contains banks of memory devices 250, which are divided into a plurality of RAM partitions. Blocks of data are assigned in regular sequence to the RAM partitions. e-RAID Level 0 provides high I/O performance by accessing the plurality of RAM partitions in memory matrix module 105 simultaneously. The reliability of e-RAID Level 0, however, is less than that of other e-RAID Levels due to its lack of redundancy. e-RAID Level 0 requires a minimum of two partitions. - An
e-RAID Level 1, also called mirroring and duplexing, is a redundancy or data safety oriented data mapping technique. Memory matrix module 105 is configured with its banks 260 of memory devices 250 divided into at least two identical partitions, each of which holds an identical image of data. A block diagram illustrating an e-RAID Level 1 is shown in FIG. 10. An e-RAID Level 1 memory matrix module may use parallel access to achieve higher transfer rates when reading data. e-RAID Level 1 requires a minimum of two partitions. - An e-RAID Level 2 (not shown), also called Hamming code ECC striping, is configured like e-RAID Level 0, except that the Hamming code ECC for each data word is generated and stored to a second e-RAID Level 0 array of
banks 260. Data error correction is provided in real-time. e-RAID Level 2 provides very high data transfer rates and high data security, but also has high cost, requiring additional partitions to store ECC information. e-RAID Level 2 requires a minimum of four partitions. - An e-RAID Level 3 (not shown), also called parallel transfer with parity, is configured like e-RAID Level 0, except that the stripe parity bit is generated for each stripe of data written to the e-RAID Level 0 array of
banks 260 and stored to another partition of banks. Data correction is provided in real-time. e-RAID Level 3 provides high data transfer rates and high data security, with higher cost efficiency because fewer ECC partitions are required relative to the number of data partitions. e-RAID Level 3 requires a minimum of three partitions. - An e-RAID Level 4 (not shown), also called independent partitions with shared parity partition, stores entire blocks of data in successive partitions of
banks 260. The parity for blocks located on the same rank or relative order in the partitions is generated and stored to another partition of banks. Data correction is provided in real-time. e-RAID Level 4 provides very high read data transfer rates, and is relatively cost-effective because the ratio of ECC to data partitions is low. e-RAID Level 4 requires a minimum of three partitions. - An
e-RAID Level 5, also called independent partitions with distributed parity blocks, shown in FIG. 11, adds ECC information to a parallel access striped memory matrix module 105, e-RAID Level 0. Each stripe of data includes ECC information permitting regeneration and rebuilding of lost or corrupted data in the event of a memory device 250 or bank 260 failure. The ECC information is distributed across some or all of the banks 260 of memory array 255. The ECC information can include redundant or parity bits. For example, the ECC information can include a 64-bit modified Hamming code. An e-RAID Level 5 provides for extremely high read data transfer rates, moderately high write data transfer rates, and high data security, at a lower cost than mirroring. e-RAID Level 5 requires a minimum of three partitions. - An e-RAID Level 6 (not shown), also called independent partitions with multiple independent distributed parity schemes, is configured like
e-RAID Level 5 but adds additional fault tolerance by integrating one or more additional distributed parity schemes that write additional series of parity bits across some or all of the banks 260 of memory array 255. e-RAID Level 6 has poor data write performance, but provides an extremely high level of fault tolerance and is suitable for mission-critical applications; it is more costly, however, because additional RAM memory space is needed to store the second parity scheme information. e-RAID Level 6 requires a minimum of four partitions. - An e-RAID Level 7 (not shown), also called asynchronous e-RAID, is configured like
e-RAID Level 3, except that all data reads and writes are cached centrally, independently, and asynchronously, and parity data are generated within the cache. e-RAID Level 7 provides high data transfer rates depending on the number of partitions, with successful cache hits resulting in near instantaneous data access. - An e-RAID Level 10 (not shown), also called striping of
e-RAID Level 1 partitions, divides the banks 260 into a series of partitions. Data is striped across the series of RAM partitions, each of which is configured as an e-RAID Level 1 mirrored partition. e-RAID Level 10 provides very high reliability combined with high I/O performance. It has the same fault tolerance as e-RAID Level 1. e-RAID Level 10 requires a minimum of four partitions. - An e-RAID Level 0+3 or Level 53 (not shown), also called striping of
e-RAID Level 3 partitions, is configured like e-RAID Level 0, except that its striped segments are e-RAID Level 3 partitions. e-RAID Level 0+3 provides high I/O and data transfer rates due to its striping plus e-RAID Level 3 configuration, and the same level of data security as e-RAID Level 3, but is costly because more memory space is needed. e-RAID Level 0+3 or Level 53 requires a minimum of five partitions. - An e-RAID Level 0+1, also called mirroring of e-RAID Level 0 partitions, shown in FIG. 12, divides the
banks 260 into first and second mirrored groups, combining the data security of an e-RAID Level 1 system with the performance of an e-RAID Level 0 system. e-RAID Level 0+1 provides high I/O and data transfer rates and the same level of data security as e-RAID Level 1, but also has high cost, requiring twice the data storage capacity of the anticipated storage needs. e-RAID Level 0+1 requires a minimum of four partitions. -
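The striping and parity mechanics shared by the e-RAID Levels described above can be illustrated with a short sketch. This code is illustrative only and is not taken from the patent: partitions are modeled as Python lists of equal-length byte blocks, striping follows the Level 0 round-robin assignment, and the Level 3-style parity is a byte-wise XOR that allows the stripe contents of a failed partition to be regenerated in real time from the survivors.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks (the stripe parity)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def stripe(data_blocks, n_partitions):
    """e-RAID Level 0 style: assign blocks in regular sequence (round robin)
    across the RAM partitions."""
    partitions = [[] for _ in range(n_partitions)]
    for i, block in enumerate(data_blocks):
        partitions[i % n_partitions].append(block)
    return partitions

def stripe_with_parity(data_blocks, n_data_partitions):
    """e-RAID Level 3 style: stripe the data, then store a byte-wise XOR
    parity block for each stripe in a separate parity partition."""
    partitions = stripe(data_blocks, n_data_partitions)
    parity = [xor_blocks(row) for row in zip(*partitions)]
    return partitions, parity

def rebuild(partitions, parity, lost):
    """Regenerate the contents of a failed partition from the surviving
    partitions and the parity partition."""
    survivors = [p for i, p in enumerate(partitions) if i != lost]
    return [xor_blocks(list(row) + [p]) for row, p in zip(zip(*survivors), parity)]
```

Under these assumptions, an e-RAID Level 5 layout would differ only in rotating the parity blocks across the partitions instead of dedicating one partition to parity, and an e-RAID Level 1 configuration would instead hold an identical copy of every block in each partition.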
Management module 125 will now be described in detail with reference to FIG. 13. As noted above, memory system 100 can include one or more management modules 125 to provide increased reliability and high availability of data through redundancy, and/or to increase data throughput by partitioning the memory available in memory matrix modules 105 and dedicating each management module to a portion of memory or to a special function. For example, one management module 125 may be dedicated to handling streaming data such as video or audio files. -
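The dedication of management modules to special functions, as in the streaming example above, amounts to a dispatch by data type. The sketch below is a hypothetical illustration only; the module names and data types are placeholders, not identifiers from the patent.

```python
# Hypothetical sketch: requests are routed to a management module dedicated
# to their data type, with a general-purpose module as the fallback.
DEDICATED_MODULES = {
    "video": "streaming management module",  # dedicated to streaming data
    "audio": "streaming management module",
}

def route_request(data_type):
    """Pick the management module that should handle a request."""
    return DEDICATED_MODULES.get(data_type, "general management module")
```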
Management module 125 generally includes I/O CPUs 275 coupled to memory controllers 265 in each memory subsystem 110 (not shown in this figure), each I/O CPU 275 having ROM device 280 and RAM device 285. In memory systems 100 having multiple management modules 125, ROM device 280 can have stored therein an initial boot sequence to boot the management module as a controlling management module 125. - Referring to FIG. 13,
management module 125 is also coupled to memory matrix module(s) 105, non-volatile storage module 130, and off-line storage module 135, and to data processing system 115 or data network 120 (not shown in this figure), through a network interface card or controller (NIC) 350, a switch 355, a number of physical links 360 such as Gigabit Interface Converters (GBICs), and one or more individual connections on LAN or data bus 150. -
Switch 355 enables management module 125 to couple data processing systems connected to data network 120 (not shown in this figure) to non-volatile storage module 130, off-line storage module 135, and any memory subsystem 110 in any memory matrix module 105. As with switch 180 described above, switch 355 can be a switching fabric or a cross-bar type switch capable of wire-speed operation running at full gigabit speeds, and having dynamic packet buffer memory allocation, multi-layer switching and filtering (Layer 2 and Layer 3 switching and Layer 4-7 filtering), and integrated support for the class of service priorities required by multimedia applications. One example is the BCM5680 8-Port Gigabit Switch from Broadcom Corporation of Irvine, Calif., USA. - In the embodiment shown,
management module 125 further includes security processor 370 for specific additional data processing and manipulation, and UPS power management interface 375 to enable the management module to interface with uninterruptible power supply 140. Security processor 370 can be any commercially available device that integrates a high-performance IPSec engine handling DES, 3DES, HMAC-SHA-1, and HMAC-MD5, a public key processor, a true random number generator, context buffer memory, and a PCI or equivalent interface. One example is a BCM5805 Security Processor from Broadcom Corporation of Irvine, Calif., USA. - Optionally,
management module 125 can further include additional dedicated function processors on a secondary internal system bus 170 connected to primary internal system bus 160 via bridge 365 for specific additional data processing and manipulation. The dedicated function processors each have a ROM to enable booting as part of management module 125, and RAM to provide buffer memory. - Expansion slot or
slots 415 can be used to connect additional I/O or peripheral modules, such as ten gigabit Ethernet, Fibre Channel-Arbitrated Loop, and serial I/O, to management module 125. -
Wireless module 420 can be used to couple management module 125 to additional data processing systems or data networks via a wireless connection. - In a preferred embodiment, both the
management module 125 and memory matrix module 105 further include one or more Application Programming Interfaces (APIs) (not shown) to configure the modules to store, manipulate, and retrieve data based on a property of the data, thereby reducing the time for a program running on the data processing system to access data stored in the memory system 100. Properties of the data used include the logical type of the data, such as numeric or Boolean, and the organization of the data, for example, in a string, an array, or as a pointer. Locating data of a particular type, such as video to be streamed to users, in contiguous or sequential addresses or locations in the memory matrix can reduce the time required to store and retrieve the data because fragmented data increases search time, and therefore slows down data streaming or delivery. In addition, locating the video stream data across multiple banks 260 allows multiple simultaneous access points, which increases multiple user capacity and performance. In another example, certain manipulations of the data, such as summation or searching, can be performed by the I/O CPU, a dedicated function CPU or processor, or the memory controller 265 itself, thereby reducing overhead or demands on the data processing system and enhancing or accelerating execution of an application by the data processing system. - In one embodiment, the
memory system 100 is enabled with Extensible Markup Language (XML) format structured documents, and the management module 125 is configured to parse and store data from XML compliant documents according to data type, and to format XML documents into multiple presentation formats using Extensible Stylesheet Language (XSL) templates. For example, an XML metadata tag describing a particular quantity of data as an audio file might cause the XML enabled management module to place that data in a contiguous series of memory addresses to optimize playback, similar to the video example given above. Preferably, the management module 125 is further configured to provide a running total of a specified type of data written to the memory matrix module 105. More preferably, the memory system 100 is capable of being synchronized with another XML enabled storage device or data processing system (not shown). This would allow fast real-time XML translation wherein the management module parses, stores, and forwards XML data based on XML metadata tags. One example is where a management module serves as an intermediary translator between two XML enabled data processing systems or storage devices. - In another embodiment,
memory system 100 is SQL enabled to create, update, and query SQL databases stored in memory matrix module 105. Preferably, management module 125 or memory matrix module 105 can be configured to provide bit-level locking and conventional and bit block manipulation of data written to memory matrix module 105. Data can also be stored in custom SQL partitions tailored to data type to optimize the speed and efficiency of data storage to and retrieval from the memory matrix module 105. More preferably, management module 125 and the memory matrix module 105 are configured to provide on-demand random access to data stored in the memory matrix. - An exemplary embodiment of
non-volatile storage module 130 will now be described in detail with reference to FIG. 14. In general, non-volatile storage module 130 includes one or more non-volatile storage devices 425, such as hard disk drives, controller 430 to operate the non-volatile storage devices, and RAM device 435 to provide a buffer memory to the controller. The data stored in non-volatile storage devices 425 can be backed up directly from memory matrix module 105 or streamed from data network 120 in a manner described below. - Generally,
non-volatile storage devices 425 can include magnetic, optical, or magneto-optical disk drives. Alternatively, non-volatile storage devices 425 can include devices or systems using holographic, molecular memory, or atomic resolution storage technology as described above. Preferably, non-volatile storage module 130 includes a number of hard disk drives as shown. More preferably, the hard disk drives are connected in a RAID configuration to provide higher data transfer rates between memory matrix module 105 and non-volatile storage module 130 and/or to provide increased reliability. - There are six basic RAID levels, each possessing different advantages and disadvantages. These levels are described in, for example, an article titled “A Case for Redundant Arrays of Inexpensive Disks (RAID)” by David A. Patterson, Garth Gibson, and Randy H. Katz; University of California Report No. UCB/CSD 87/391, December 1987, which is incorporated herein by reference.
RAID level 2 uses non-standard disks and as such is not normally commercially feasible. - RAID level 0 employs “striping,” where the data is broken into a number of stripes which are stored across the disks in the array. This technique provides higher performance in accessing the data but provides no redundancy, which is needed in the event of a disk failure.
-
RAID level 1 employs “mirroring” where each unit of data is duplicated or “mirrored” onto another disk drive. Mirroring requires two or more disk drives. For read operations, this technique is advantageous since the read operations can be performed in parallel. A drawback with mirroring is that it achieves a storage efficiency of only 50%. - In
RAID level 3, a data block is partitioned into stripes which are striped across a set of drives. A separate parity drive is used to store the parity bytes associated with the data block. The parity is used for data redundancy. Data can be regenerated when there is a single drive failure from the data on the remaining drives and the parity drive. This type of data management is advantageous since it requires less space than mirroring and only a single parity drive. In addition, the data is accessed in parallel from each drive which is beneficial for large file transfers. However, performance is poor for high input/output request (I/O) transaction applications since it requires access to each drive in the array. - In
RAID level 4, an entire data block is written to a disk drive. Parity for each data block is stored on a single parity drive. Since each disk is accessed independently, this technique is beneficial for high I/O transaction applications. A drawback with this technique is the single parity disk which becomes a bottleneck since the single parity drive needs to be accessed for each write operation. This is especially burdensome when there are a number of small I/O operations scattered randomly across the disks in the array. - In
RAID level 5, a data block is partitioned into stripes which are striped across the disk drives. Parity for the data blocks is distributed across the drives, thereby reducing the bottleneck inherent to level 4, which stores the parity on a single disk drive. This technique offers fast throughput for small data files but performs poorly for large data files. Other somewhat non-standard RAID levels or configurations have been proposed and are in use. Some of these combine features of RAID configuration levels already described. - Thus, for example,
non-volatile storage module 130 can comprise hard disk drives connected in a RAID Level 0 configuration to provide the highest possible data transfer rates, or in a RAID Level 1 configuration to provide multiple mirrored copies of data in memory matrix module 105. - An I/
O CPU 440 is coupled to controller 430 for managing the reading, writing, and manipulation of data to non-volatile storage devices 425. A read-only memory (ROM) device 445 having an initial boot sequence stored therein is coupled to I/O CPU 440 to boot non-volatile storage module 130. A RAM device 450 coupled to I/O CPU 440 provides a buffer memory to the I/O CPU. - As with I/
O CPU 275 described above, I/O CPU 440 in non-volatile storage module 130 can be any commercially available device having a speed of at least 600 MHz and the capability of addressing at least 4 GB of memory. Suitable examples include a 2 GHz Pentium® 4 processor commercially available from Intel Corporation of Santa Clara, Calif., USA, and an Athlon® 1.5 GHz processor commercially available from Advanced Micro Devices, Inc. of Sunnyvale, Calif., USA. - Preferably,
ROM device 445 is an electrically erasable or flash programmable ROM (EEPROM) that can be programmed to enable non-volatile storage module 130 to operate according to the present invention. More preferably, ROM device 445 has from about 32 to about 128 Mbits of memory. One suitable EEPROM, for example, is a 28F6408W30 Wireless Flash Memory with SRAM from Intel Corporation of Santa Clara, Calif., USA. -
Non-volatile storage module 130 is coupled to management module 125, memory matrix module(s) 105, off-line storage module 135, and to data processing system 115 or data network 120 (not shown in this figure), through a network interface card or controller (NIC) 455, a switch 460, a number of physical links 465 such as Gigabit Interface Converters (GBICs), and one or more individual connections on LAN or data bus 150. -
Switch 460 enables management module 125, memory matrix module 105, off-line storage module 135, and data processing systems (not shown in this figure) connected to any of the connections on LAN or data bus 150, to access any non-volatile storage device 425 in non-volatile storage module 130. As with the switches described above, switch 460 can be a switching fabric or a cross-bar type switch capable of wire-speed operation running at full gigabit speeds, and having dynamic packet buffer memory allocation, multi-layer switching and filtering (Layer 2 and Layer 3 switching and Layer 4-7 filtering), and integrated support for the class of service priorities required by multimedia applications. One example is the BCM5680 8-Port Gigabit Switch from Broadcom Corporation of Irvine, Calif., USA. - In the embodiment shown,
non-volatile storage module 130 further includes security processor 470 for specific additional data processing and manipulation, and UPS power management interface 475 to enable the non-volatile storage module to interface with uninterruptible power supply 140. Security processor 470 can be any commercially available device that integrates a high-performance IPSec engine handling DES, 3DES, HMAC-SHA-1, and HMAC-MD5, a public key processor, a true random number generator, context buffer memory, and a PCI or equivalent interface. One example is a BCM5805 Security Processor from Broadcom Corporation of Irvine, Calif., USA. - Optionally,
non-volatile storage module 130 can further include additional dedicated function processors on a secondary internal system bus 170 connected to primary internal system bus 160 via bridge 487 for specific additional data processing and manipulation. The dedicated function processors each have a ROM to enable booting as part of non-volatile storage module 130, and RAM to provide buffer memory. - Expansion slot or
slots 510 can be used to connect additional I/O or peripheral modules, such as ten gigabit Ethernet, Fibre Channel-Arbitrated Loop, and serial I/O, to non-volatile storage module 130. -
Wireless module 515 can be used to couple non-volatile storage module 130 to additional data processing systems or data networks via a wireless connection. - An exemplary embodiment of off-
line storage module 135 will now be described in detail with reference to FIG. 15. Off-line storage module 135 includes one or more removable media drives 520, each with a removable storage media such as magnetic tape or removable magnetic or optical disks, to provide additional non-volatile backup of data in memory matrix module 105. Removable media drive controller 525 operates removable media drives 520, and RAM device 530 provides a buffer memory to the controller. - Off-
line storage module 135 has the advantage of providing a permanent “snapshot” image of data in memory matrix module 105 that will not be overwritten by subsequent data written to the memory matrix module from data network 120. Preferably, because of the long time necessary to write data to the removable storage media relative to the rapidity with which data in memory matrix module 105 can change, the data is copied from non-volatile storage module 130 to the removable storage media in off-line storage module 135 on a regular, periodic basis. Alternatively, the data can be copied directly from memory matrix module 105. - An I/
O CPU 535 is coupled to controller 525 for managing the reading and writing of data to removable media drives 520. ROM device 540 having an initial boot sequence stored therein is coupled to I/O CPU 535 to boot off-line storage module 135. RAM device 545 coupled to I/O CPU 535 provides a buffer memory to the I/O CPU. - As with I/
O CPUs 275 and 440 described above, I/O CPU 535 in off-line storage module 135 can be any commercially available device having a speed of at least 600 MHz and the capability of addressing at least 4 GB of memory. Suitable examples include a 2 GHz Pentium® 4 processor commercially available from Intel Corporation of Santa Clara, Calif., USA, and an Athlon® 1.5 GHz processor commercially available from Advanced Micro Devices, Inc. of Sunnyvale, Calif., USA. - Preferably,
ROM device 540 is an electrically erasable or flash programmable ROM (EEPROM) that can be programmed to enable off-line storage module 135 to operate according to the present invention. More preferably, ROM device 540 has from about 32 to about 128 Mbits of memory. One suitable EEPROM, for example, is a 28F6408W30 Wireless Flash Memory with SRAM from Intel Corporation of Santa Clara, Calif., USA. - Off-
line storage module 135 is coupled to management module 125, memory matrix module(s) 105, non-volatile storage module 130, and to data processing system 115 or data network 120 (not shown in this figure), through a network interface card or controller (NIC) 550, a switch 555, a number of physical links 560 such as Gigabit Interface Converters (GBICs), and one or more individual connections on LAN or data bus 150. -
Switch 555 enables management module 125, memory matrix module 105, non-volatile storage module 130, and data processing systems (not shown in this figure) connected to any of the connections on LAN or data bus 150, to access data in any removable media drive 520 in off-line storage module 135. As with the switches described above, switch 555 can be a switching fabric or a cross-bar type switch capable of wire-speed operation running at full gigabit speeds, and having dynamic packet buffer memory allocation, multi-layer switching and filtering (Layer 2 and Layer 3 switching and Layer 4-7 filtering), and integrated support for the class of service priorities required by multimedia applications. One example is the BCM5680 8-Port Gigabit Switch from Broadcom Corporation of Irvine, Calif., USA. - In the embodiment shown, off-
line storage module 135 further includes security processor 570 for specific additional data processing and manipulation, and UPS power management interface 575 to enable the off-line storage module to interface with uninterruptible power supply 140. Security processor 570 can be any commercially available device that integrates a high-performance IPSec engine handling DES, 3DES, HMAC-SHA-1, and HMAC-MD5, a public key processor, a true random number generator, context buffer memory, and a PCI or equivalent interface. One example is a BCM5805 Security Processor from Broadcom Corporation of Irvine, Calif., USA. - Optionally, off-
line storage module 135 can further include additional dedicated function processors on a secondary internal system bus 170 connected to primary internal system bus 160 via bridge 565 for specific additional data processing and manipulation. The dedicated function processors each have a ROM to enable booting as part of off-line storage module 135, and RAM to provide buffer memory. - Expansion slot or
slots 610 can be used to connect additional I/O or peripheral modules such as ten gigabit Ethernet, Fibre Channel-Arbitrated Loop, and serial I/O to off-line storage module 135. -
Wireless module 615 can be used to couple off-line storage module 135 to additional data processing systems or data networks via a wireless connection. -
Uninterruptible power supply 140 supplies power from the electrical power line (not shown) to management module 125, memory matrix modules 105, non-volatile storage module 130, and off-line storage module 135 through power bus 145. In the event of an excessive fluctuation or interruption in power from the electrical power line, UPS 140 supplies backup power from a battery (not shown). Preferably, because the backup power from a battery is limited, uninterruptible power supply 140 is configured to transmit a signal to management module 125 on excessive fluctuation or interruption in power from the electrical power line, and the management module is configured to back up the memory matrix module 105 to non-volatile storage module 130 and/or off-line storage module 135 upon receiving the signal. More preferably, management module 125 is further configured to notify users of memory system 100 of the power failure and to perform a controlled shutdown of the memory system. Optionally, if uninterruptible power supply 140 has a longer-term alternate power source such as a diesel generator, management module 125 can be configured to continue to use memory matrix modules 105 or to switch to non-volatile storage module 130 for greater data safety, thereby allowing users of mission-critical applications to continue their work without interruption. - Some of the important aspects of the present invention will now be repeated to further emphasize their structure, function and advantages.
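For illustration, the power-handling sequence just described (UPS signal, backup to non-volatile storage, user notification, controlled shutdown) can be sketched in a few lines of Python. The function name, event strings, and list-based stores are assumptions made for the sketch, not part of the disclosed system.

```python
def handle_power_event(event, matrix_blocks, nvsm_blocks, notifications):
    """Minimal sketch of management-module behavior on a UPS signal.

    `event` models the signal sent by uninterruptible power supply 140
    over power bus 145; all names here are illustrative assumptions.
    """
    if event == "fluctuation":
        # Battery power is limited: back up the memory matrix to the
        # non-volatile storage module immediately, notify users, and
        # request a controlled shutdown.
        nvsm_blocks.extend(matrix_blocks)
        notifications.append("power failure: controlled shutdown pending")
        return "controlled_shutdown"
    if event == "alternate_power_online":
        # A longer-term source (e.g. a diesel generator) lets
        # mission-critical work continue without interruption.
        return "continue"
    return "normal"

# Example: a fluctuation triggers backup, notification, and shutdown.
matrix, nvsm, notes = [b"block-0", b"block-1"], [], []
state = handle_power_event("fluctuation", matrix, nvsm, notes)
```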
- In one aspect, multiple links connect or
couple management module 125 to data network 120, memory matrix modules 105, non-volatile storage module 130, and off-line storage module 135. This ‘mesh’ or fabric-type redundancy provides a higher data transfer rate during normal operations and the ability to continue operations on a reduced number of buses in a failover mode. These multiple links typically include a set of one or more conductors and a network interface (not shown) using an interface standard such as gigabit Ethernet, ten gigabit Ethernet, Fibre Channel-Arbitrated Loop (FC-AL), FireWire, Small Computer System Interface (SCSI), Advanced Technology Attachment (ATA), InfiniBand, HyperTransport, PCI-X, Direct Access File System (DAFS), IEEE 802.11, or Wireless Application Protocol (WAP). - In one embodiment,
management module 125 intermediates between data network 120 and memory matrix modules 105, non-volatile storage modules 130, and off-line storage modules 135. During normal operation, memory matrix module 105 is accessed by data network 120 through management module 125 over primary internal system bus 160 to serve as a primary memory system. At the same time, the same data and data transactions are mirrored to a second memory matrix module 105 to provide a backup memory system. The data in the second memory matrix module 105 is then backed up to a non-volatile storage module on an incremental basis, whereby only changed data is backed up. This arrangement has the advantage that, in the event of an impending power failure, only data in buffer memory or RAM 285 in memory subsystems 110 needs to be written to non-volatile storage module 130 to provide a complete backup of data in memory arrays 255. This shortens the backup time and the power demand placed on the battery of uninterruptible power supply module 140. It should be noted that data can be written to off-line storage module 135 in a similar manner. - In addition, in one version of this embodiment,
management module 125 is further configured to detect failure or a non-operating condition of the primary memory, and to reconfigure memory system 100 to enable data network 120 to access data in secondary backup memory matrix modules 105, or non-volatile storage module 130 if the memory matrix modules are unavailable. Thus, the failover to a backup memory is completely transparent to a user of data processing system 115 attached to data network 120. - Optionally, the
management module 125 is further configured to provide a failback capability, in which restoration of the primary memory matrix module 105 is detected and the contents of the memory matrix module are automatically restored from the backup memory matrix modules or non-volatile storage module 130. Preferably, the management module 125 is configured to reactivate the memory matrix 105 as the primary memory. More preferably, the management module 125 is also configured to reactivate other memory matrixes as secondary or backup memories, thereby returning the memory system to normal operating condition. - Similarly, in another optional embodiment, the
memory system 100 has several memory matrix modules 105, each configured to couple directly to the data network 120 in case of failure of the management module 125, thereby providing backup or failover capability for the management module. The memory matrix modules 105 can be coupled to the data network 120 in a master-slave arrangement in which one of the memory matrix modules, for example a primary memory matrix module, functions as the management module 125, coupling all of the remaining memory matrix modules to the data network. Alternatively, all of the memory matrix modules 105 can be configured to couple to the data network 120, thereby providing a peer-to-peer network of memory matrix modules. Thus, the memory system 100 of the present invention provides complete and redundant backup or failover capability for all components of the memory system. That is, in case of failure of a primary memory matrix module 105, the management module 125 is configured to couple a secondary memory matrix module to the data network 120 to provide a backup of data in the primary memory matrix module. In case of subsequent failure of the secondary memory matrix module, the management module 125 is configured to couple the non-volatile storage module or off-line storage module to the data network 120. It will be appreciated that this unparalleled redundancy is achieved through the use of substantially identical programmable components, such as the controllers, which can be quickly reconfigured through alteration of their programming to function in other capacities. - A method for operating
memory system 100 will now be described with reference to FIG. 16. FIG. 16 is a flowchart showing a process for operating a memory system having at least one memory matrix module 105 according to an embodiment of the present invention. In the method, data from data network 120 is received in management module 125 (Step 620) and transferred to memory controller 265 of a memory subsystem 110 via primary internal system bus 160 (Step 625). The DAT associated with memory subsystem 110 is checked to determine an address or location in memory array 255 in which to store the data (Step 630). The data is then stored to memory array 255 at a specified address (Step 635). Typically, this involves the sub-steps (not shown) of applying a row address and a column address, and applying data to one or more ports on one or more memory devices 250. Optionally, the method includes the further steps of mirroring the same data to a second memory subsystem or memory matrix module 105 (Step 640), which is then backed up by streaming its data to non-volatile storage module 130 (Step 645). If failure or a non-operating condition of the primary memory, that is, the first memory subsystem 110, is detected by the management module (Step 650), the management module reconfigures the memory system 100 to enable data network 120 to directly access the data in the second memory subsystem, secondary memory matrix module or non-volatile storage module 130 (Step 655). This last step, step 655, allows the memory system to continue operation in a manner transparent to the user of the system. - In one embodiment, not shown, the step of storing data to the
memory array 255 at a specified address, step 635, involves storing data to at least two of the banks of memory devices simultaneously to provide a dynamic or e-RAID system. This can be accomplished by storing uniformly sized blocks of data, in regular sequence, to all of the plurality of banks to provide an e-RAID Level 0 system; mirroring data stored in a first of two banks of memory devices to a second of two banks of memory devices to provide an e-RAID Level 1 system; mirroring data stored in a first group of half of the plurality of banks into a second group of another half of the plurality of banks to provide an e-RAID Level 0+1 system; or striping data across the plurality of banks and storing parity information for each stripe of data in at least one of the plurality of banks to provide an e-RAID Level 5 system. - In another embodiment, not shown, the method includes the additional step of, prior to storing data to the
memory array 255, step 635, determining properties of the data, such as which one of a number of logical types the data is, and step 635 involves storing the data in a predetermined location in the memory matrix based on its properties. - The following examples illustrate advantages of a memory system and method according to the present invention for storing data in a network-attached configuration. The examples are provided to illustrate certain embodiments of the present invention, and are not intended to limit the scope of the invention in any way.
- In these examples, performance characteristics of 1.5 gigabytes (GB) of RAM memory configured to model an active storage memory system according to the present invention were compared with the performance of an IBM DeskStar® 43 GB, 7200 rpm hard disk drive operating on an ATA 66 bus, and a
Maxtor 20 GB, 7200 rpm hard disk drive operating on an ATA 100 bus, using the industry-standard Intel IOMeter software program to generate storage I/O benchmarks. - In a first example, a typical database configuration was used. Multiple data files of 2048 bytes each were written to and subsequently read from each of the three memory systems, i.e., the active storage memory system and the two hard drives. The read operations comprised 67% of all operations, the write operations comprised 33% of all operations, and the order in which files were accessed was completely random. In this example, the active storage memory system averaged 26,552.242 I/O operations per second (IOps). The DeskStar and Maxtor hard drives averaged 79.723 and 89.610 IOps, respectively. Thus, the active memory system was 333 times faster than the DeskStar and 296 times faster than the Maxtor in the rate at which it was able to perform I/O operations.
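The quoted speed-up factors follow directly from the averaged IOps figures; as a quick check (an illustrative computation, not part of the original disclosure):

```python
# Ratio of active-storage IOps to each hard drive's IOps (first example).
active = 26552.242
deskstar, maxtor = 79.723, 89.610

speedup_deskstar = active / deskstar   # ≈ 333
speedup_maxtor = active / maxtor       # ≈ 296
```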
- In a second example, a typical data streaming configuration was used. Large files of 65,536 bytes were read in sequential order from each of the three memory systems. No writes were performed. The active storage memory system averaged 4,513.751 IOps. The DeskStar and Maxtor hard drives averaged 343.459 and 421.942 IOps, respectively. Thus, the active memory system was 13.14 and 10.70 times faster than the DeskStar and the Maxtor, respectively.
- In a third example, multiple files of 512 bytes each were read from each of the three memory systems. The read operations comprised 100% of all operations, and the order of the files was strictly sequential, thereby minimizing or eliminating the effect of seek time and rotational latency on hard disk drive performance. In this example, the active storage memory system averaged 5,432.898 IOps. The DeskStar and Maxtor hard drives averaged 4,888.884 and 5,017.892 IOps, respectively. Thus, the active memory system was 1.11 and 1.08 times faster than the DeskStar and the Maxtor, respectively.
- In a fourth example, the conditions of the third test were repeated, with the exception that the order in which files were read or accessed was completely random, which is more typical of real-world conditions. The active storage memory system averaged 30,272.041 IOps. The DeskStar and Maxtor hard drives averaged 83.807 and 82.957 IOps, and were thus 361.21 and 364.91 times slower, respectively.
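Tying the method and failover embodiments above together, the FIG. 16 flow (receive data, consult the DAT for an address, store to the memory array, mirror to a second matrix, stream the copy to non-volatile storage, and fail over transparently) can be condensed into a short sketch. The dictionary-based DAT and module interfaces below are simplifying assumptions, not the patented implementation.

```python
def operate(dat, primary, mirror, nvsm, incoming):
    """Sketch of Steps 620-645 of FIG. 16 (interfaces are assumptions)."""
    for key, data in incoming:                  # Step 620: receive from network
        addr = dat.setdefault(key, len(dat))    # Step 630: check DAT for address
        primary[addr] = data                    # Step 635: store to memory array
        mirror[addr] = data                     # Step 640: mirror to 2nd matrix
        nvsm[addr] = data                       # Step 645: stream to NVSM

def read(dat, primary, mirror, key, primary_ok=True):
    """Steps 650-655: on primary failure, serve the mirror transparently."""
    addr = dat[key]
    return primary[addr] if primary_ok else mirror[addr]

# Example: a write survives failure of the primary memory.
dat, primary, mirror, nvsm = {}, {}, {}, {}
operate(dat, primary, mirror, nvsm, [("file-a", b"payload")])
recovered = read(dat, primary, mirror, "file-a", primary_ok=False)
```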
- It is to be understood that even though numerous characteristics and advantages of certain embodiments of the present invention have been set forth in the foregoing description, together with details of the structure and function of various embodiments of the invention, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts, within the principles of the present invention, to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed.
Claims (26)
1. A memory system having a matrix unit comprising:
a plurality of memory devices each capable of storing data therein, the memory devices arranged in a plurality of banks each having a predetermined number of memory devices; and
a memory controller coupled to the banks for accessing the memory devices, the memory controller configured to store data simultaneously in any combination of the banks to provide an e-RAID system.
2. A memory system according to claim 1 , wherein the memory controller is configured to store blocks of data, in regular sequence, to all of the banks of memory devices to provide an e-RAID Level 0 system.
3. A memory system according to claim 1 , wherein the memory controller is configured to mirror the data stored in a first group of half of the banks to a second group of another half of the banks to provide an e-RAID Level 1 system.
4. A memory system according to claim 1 , wherein the memory controller is configured to mirror the data stored in a first group of half of the banks, the first group configured as an e-RAID Level 0 system, into a second group of another half of the banks, the second group also configured as an e-RAID Level 0 system, to provide an e-RAID Level 0+1 system.
5. A memory system according to claim 1 , wherein the memory controller is configured to store data to a first group of half of the banks, the first group configured as an e-RAID Level 0 system, generating the Hamming code ECC for each data word stored to the first group, and storing the Hamming code ECC to a second group of another half of the banks, the second group also configured as an e-RAID Level 0 system, to provide an e-RAID Level 2 system.
6. A memory system according to claim 1 , wherein the memory controller is configured to store data to a first group of the banks, the first group configured as an e-RAID Level 0 system, and storing the parity data for each sequence or stripe of data spanning the first group of banks to a second group of the banks, to provide an e-RAID Level 3 system.
7. A memory system according to claim 1 , wherein the memory controller is configured to store data, in regular sequence, to a plurality of equally sized partitions of the banks, each partition configured as an e-RAID Level 3 system, to provide an e-RAID Level 0+3 or Level 53 system.
8. A memory system according to claim 1 , wherein the memory controller is configured to store data as entire blocks to multiple independent partitions of a first group of the banks, generating the parity data for same rank blocks, and storing the parity data to a second group of the banks, to provide an e-RAID Level 4 system.
9. A memory system according to claim 1 , wherein the memory controller is configured to stripe data across the banks and storing parity data for each stripe of data in at least one of the banks to provide an e-RAID Level 5 system.
10. A memory system according to claim 1 , wherein the memory controller is configured to stripe data across the banks, generating parity data for each stripe of data using two independent parity schemes, and storing the two parity data in at least one of the banks to provide an e-RAID Level 6 system.
11. A memory system according to claim 1 , wherein the memory controller is configured to store data to a plurality of banks configured as an e-RAID Level 3 system, except that all data reads and writes are cached independently and asynchronously in a memory location external to the banks, and parity data are generated within the cache, to provide an e-RAID Level 7 system.
12. A memory system according to claim 1 , wherein the memory controller is configured to use an error checking code to detect and correct errors in data stored in each of the memory devices.
13. A memory system according to claim 12 , wherein the error checking code is a Hamming code.
14. A memory system according to claim 1 , further comprising a cache coupled to the memory controller, the cache having stored therein one or more copies of a Data Allocation Table (DAT) adapted to describe data stored in the memory devices.
15. A memory system for use in a data network, the memory system comprising at least one memory matrix unit according to claim 1 , the memory system further comprising a management unit coupled to the memory matrix unit and to the data network to interface between the memory matrix unit and the data network.
16. A method of storing data in one or more memory matrix units having a plurality of memory devices arranged in a plurality of banks each having a predetermined number of memory devices, the method comprising the step of simultaneously writing blocks of data to a predetermined combination of the banks contained in one or more of the memory matrix units to provide an e-RAID system.
17. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the step of writing blocks of data, in regular sequence, to all of the banks of memory devices to provide an e-RAID Level 0 system.
18. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the step of writing the same data to a first group of half of the banks and a second group of another half of the banks to provide an e-RAID Level 1 system.
19. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data, in a regular sequence, to a first group of half of the banks; and
writing the same data stored in the first group to a second group of half of the banks to provide an e-RAID Level 0+1 system.
20. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data, in a regular sequence, to a first group of half of the banks;
generating the Hamming code ECC for each data word stored to the first group; and
writing the Hamming code ECC, in a regular sequence, to a second group of half of the banks to provide an e-RAID Level 2 system.
21. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data, in a regular sequence, to a first group of the banks;
generating the parity data for each sequence or stripe of data spanning the first group of banks; and
writing the parity data, in a regular sequence, to a second group of the banks to provide an e-RAID Level 3 system.
22. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data, in a regular sequence, to a plurality of equally sized partitions of the banks, each partition configured as an e-RAID Level 3 system, wherein writing data to each partition comprises the steps of
writing blocks of data, in a regular sequence, to a first group of banks in the partition;
generating the parity data for each sequence or stripe of data spanning the first group of banks; and writing the parity data, in a regular sequence, to a second group of banks in the partition;
to provide an e-RAID Level 0+3 or Level 53 system.
23. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data to a plurality of independent partitions of a first group of the banks;
generating the parity data for same rank blocks; and
writing the parity data to a second group of the banks to provide an e-RAID Level 4 system.
24. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data, in regular sequence, to the banks to provide a stripe of data; and
writing parity data for each stripe of data in at least one of the banks to provide an e-RAID Level 5 system.
25. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data, in regular sequence, to the banks to provide a stripe of data;
generating parity data for each stripe of data using two independent parity schemes; and
writing the two parity data for each stripe of data in at least one of the banks to provide an e-RAID Level 6 system.
26. A method according to claim 16 , wherein the step of writing blocks of data to the banks comprises the steps of:
writing blocks of data, in a regular sequence, to a plurality of equally sized partitions of the banks, each partition configured as an e-RAID Level 3 system, wherein writing data to each partition comprises the steps of
writing blocks of data, in a regular sequence, to a first group of banks in the partition;
generating the parity data for each sequence or stripe of data spanning the first group of banks; and
writing the parity data, in a regular sequence, to a second group of banks in the partition by caching all data reads and writes independently and asynchronously in a memory location external to the banks, and
generating parity data within the cache, to provide an e-RAID Level 7 system.
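As a companion to the claims above, the bank-placement schemes for e-RAID Levels 0, 1/0+1, and 5 can be sketched with Python lists standing in for banks of memory devices. This is a simplified illustration under the assumption of uniformly sized blocks; it is not the claimed memory controller.

```python
from functools import reduce

def level0(blocks, nbanks):
    """Level 0 (claim 2): blocks written in regular sequence across all banks."""
    banks = [[] for _ in range(nbanks)]
    for i, blk in enumerate(blocks):
        banks[i % nbanks].append(blk)
    return banks

def level1(blocks, nbanks):
    """Level 0+1 (claim 4): stripe across the first half of the banks and
    mirror that stripe into the second half; with one bank per half this
    degenerates to plain Level 1 mirroring (claim 3)."""
    first_half = level0(blocks, nbanks // 2)
    return first_half + [list(b) for b in first_half]

def level5(blocks, nbanks):
    """Level 5 (claim 9): stripe data across the banks with rotating
    XOR parity per stripe. Assumes uniformly sized blocks (bytes)."""
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    banks = [[] for _ in range(nbanks)]
    per_stripe = nbanks - 1                  # one bank per stripe holds parity
    for s, start in enumerate(range(0, len(blocks), per_stripe)):
        stripe = blocks[start:start + per_stripe]
        parity_bank = s % nbanks             # rotate the parity bank per stripe
        data = iter(stripe)
        for bank in range(nbanks):
            if bank == parity_bank:
                banks[bank].append(reduce(xor, stripe))
            else:
                blk = next(data, None)       # tail stripes may be short
                if blk is not None:
                    banks[bank].append(blk)
    return banks
```

A lost bank in a Level 5 stripe can be rebuilt by XOR-ing the surviving blocks of that stripe, which is what makes single-bank failures recoverable.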
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/007,413 US6745310B2 (en) | 2000-12-01 | 2001-11-30 | Real time local and remote management of data files and directories and method of operating the same |
US10/007,410 US20020069317A1 (en) | 2000-12-01 | 2001-11-30 | E-RAID system and method of operating the same |
US10/007,415 US6754785B2 (en) | 2000-12-01 | 2001-11-30 | Switched multi-channel network interfaces and real-time streaming backup |
US10/007,418 US6957313B2 (en) | 2000-12-01 | 2001-11-30 | Memory matrix and method of operating the same |
US10/007,436 US20020069318A1 (en) | 2000-12-01 | 2001-11-30 | Real time application accelerator and method of operating the same |
AU2001297837A AU2001297837A1 (en) | 2000-12-01 | 2001-12-03 | A memory matrix and method of operating the same |
PCT/US2001/047594 WO2002091382A2 (en) | 2000-12-01 | 2001-12-03 | A memory matrix and method of operating the same |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US25081200P | 2000-12-01 | 2000-12-01 | |
US10/007,413 US6745310B2 (en) | 2000-12-01 | 2001-11-30 | Real time local and remote management of data files and directories and method of operating the same |
US10/007,410 US20020069317A1 (en) | 2000-12-01 | 2001-11-30 | E-RAID system and method of operating the same |
US10/007,415 US6754785B2 (en) | 2000-12-01 | 2001-11-30 | Switched multi-channel network interfaces and real-time streaming backup |
US10/007,418 US6957313B2 (en) | 2000-12-01 | 2001-11-30 | Memory matrix and method of operating the same |
US10/007,436 US20020069318A1 (en) | 2000-12-01 | 2001-11-30 | Real time application accelerator and method of operating the same |
Publications (1)
Publication Number | Publication Date |
---|---|
US20020069317A1 true US20020069317A1 (en) | 2002-06-06 |
Family
ID=27555577
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/007,413 Expired - Fee Related US6745310B2 (en) | 2000-12-01 | 2001-11-30 | Real time local and remote management of data files and directories and method of operating the same |
US10/007,418 Expired - Fee Related US6957313B2 (en) | 2000-12-01 | 2001-11-30 | Memory matrix and method of operating the same |
US10/007,436 Abandoned US20020069318A1 (en) | 2000-12-01 | 2001-11-30 | Real time application accelerator and method of operating the same |
US10/007,415 Expired - Fee Related US6754785B2 (en) | 2000-12-01 | 2001-11-30 | Switched multi-channel network interfaces and real-time streaming backup |
US10/007,410 Abandoned US20020069317A1 (en) | 2000-12-01 | 2001-11-30 | E-RAID system and method of operating the same |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/007,413 Expired - Fee Related US6745310B2 (en) | 2000-12-01 | 2001-11-30 | Real time local and remote management of data files and directories and method of operating the same |
US10/007,418 Expired - Fee Related US6957313B2 (en) | 2000-12-01 | 2001-11-30 | Memory matrix and method of operating the same |
US10/007,436 Abandoned US20020069318A1 (en) | 2000-12-01 | 2001-11-30 | Real time application accelerator and method of operating the same |
US10/007,415 Expired - Fee Related US6754785B2 (en) | 2000-12-01 | 2001-11-30 | Switched multi-channel network interfaces and real-time streaming backup |
Country Status (3)
Country | Link |
---|---|
US (5) | US6745310B2 (en) |
AU (1) | AU2001297837A1 (en) |
WO (1) | WO2002091382A2 (en) |
Cited By (116)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030065836A1 (en) * | 2001-09-28 | 2003-04-03 | Pecone Victor Key | Controller data sharing using a modular DMA architecture |
US6643758B2 (en) * | 2001-04-26 | 2003-11-04 | Fujitsu Limited | Flash memory capable of changing bank configuration |
US20030217300A1 (en) * | 2002-04-30 | 2003-11-20 | Hitachi, Ltd. | Method for backing up power supply of disk array device and system thereof |
US20040158675A1 (en) * | 2002-12-02 | 2004-08-12 | Elpida Memory, Inc. | Memory system and control method therefor |
US20040162940A1 (en) * | 2003-02-17 | 2004-08-19 | Ikuya Yagisawa | Storage system |
US6785788B1 (en) * | 2001-07-03 | 2004-08-31 | Unisys Corporation | System and method for implementing an enhanced raid disk storage system |
US20040177126A1 (en) * | 2003-02-18 | 2004-09-09 | Chaparral Network Storage, Inc. | Broadcast bridge apparatus for transferring data to redundant memory subsystems in a storage controller |
US20040186931A1 (en) * | 2001-11-09 | 2004-09-23 | Gene Maine | Transferring data using direct memory access |
US20040201755A1 (en) * | 2001-12-06 | 2004-10-14 | Norskog Allen C. | Apparatus and method for generating multi-image scenes with a camera |
US20040230869A1 (en) * | 2003-03-19 | 2004-11-18 | Stmicroelectronics S.R.I. | Integrated memory system |
US20040236908A1 (en) * | 2003-05-22 | 2004-11-25 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US20050068417A1 (en) * | 2003-09-30 | 2005-03-31 | Kreiner Barrett Morris | Video recorder |
US20050068429A1 (en) * | 2003-09-30 | 2005-03-31 | Kreiner Barrett Morris | Video recorder |
US20050078186A1 (en) * | 2003-09-30 | 2005-04-14 | Kreiner Barrett Morris | Video recorder |
US20050102557A1 (en) * | 2001-09-28 | 2005-05-12 | Dot Hill Systems Corporation | Apparatus and method for adopting an orphan I/O port in a redundant storage controller |
US20050120263A1 (en) * | 2003-11-28 | 2005-06-02 | Azuma Kano | Disk array system and method for controlling disk array system |
US20050141184A1 (en) * | 2003-12-25 | 2005-06-30 | Hiroshi Suzuki | Storage system |
EP1577774A2 (en) | 2004-02-19 | 2005-09-21 | Nec Corporation | Semiconductor storage data striping |
US20050210323A1 (en) * | 2004-03-05 | 2005-09-22 | Batchelor Gary W | Scanning modified data during power loss |
US20050262388A1 (en) * | 2002-11-08 | 2005-11-24 | Dahlen Eric J | Memory controllers with interleaved mirrored memory modes |
US6993635B1 (en) * | 2002-03-29 | 2006-01-31 | Intransa, Inc. | Synchronizing a distributed mirror |
US20060053236A1 (en) * | 2004-09-08 | 2006-03-09 | Sonksen Bradley S | Method and system for optimizing DMA channel selection |
US20060074988A1 (en) * | 2003-06-30 | 2006-04-06 | Yuko Imanishi | Garbage collection system |
US20060106982A1 (en) * | 2001-09-28 | 2006-05-18 | Dot Hill Systems Corporation | Certified memory-to-memory data transfer between active-active raid controllers |
WO2006036809A3 (en) * | 2004-09-22 | 2006-06-01 | Xyratex Technnology Ltd | System and method for customization of network controller behavior, based on application -specific inputs |
US20060129781A1 (en) * | 2004-12-15 | 2006-06-15 | Gellai Andrew P | Offline configuration simulator |
US20060161709A1 (en) * | 2005-01-20 | 2006-07-20 | Dot Hill Systems Corporation | Safe message transfers on PCI-Express link from RAID controller to receiver-programmable window of partner RAID controller CPU memory |
US20060161702A1 (en) * | 2005-01-20 | 2006-07-20 | Bowlby Gavin J | Method and system for testing host bus adapters |
US20060161707A1 (en) * | 2005-01-20 | 2006-07-20 | Dot Hill Systems Corporation | Method for efficient inter-processor communication in an active-active RAID system using PCI-express links |
US20060206660A1 (en) * | 2003-05-22 | 2006-09-14 | Hiromi Matsushige | Storage unit and circuit for shaping communication signal |
US20060230215A1 (en) * | 2005-04-06 | 2006-10-12 | Woodral David E | Elastic buffer module for PCI express devices |
US7130229B2 (en) * | 2002-11-08 | 2006-10-31 | Intel Corporation | Interleaved mirrored memory systems |
US20060253731A1 (en) * | 2004-11-16 | 2006-11-09 | Petruzzo Stephen E | Data Backup System and Method |
US20060259723A1 (en) * | 2004-11-16 | 2006-11-16 | Petruzzo Stephen E | System and method for backing up data |
US20060255409A1 (en) * | 2004-02-04 | 2006-11-16 | Seiki Morita | Anomaly notification control in disk array |
US20060271605A1 (en) * | 2004-11-16 | 2006-11-30 | Petruzzo Stephen E | Data Mirroring System and Method |
US20060277347A1 (en) * | 2001-09-28 | 2006-12-07 | Dot Hill Systems Corporation | RAID system for performing efficient mirrored posted-write operations |
US7200603B1 (en) * | 2004-01-08 | 2007-04-03 | Network Appliance, Inc. | In a data storage server, for each subsets which does not contain compressed data after the compression, a predetermined value is stored in the corresponding entry of the corresponding compression group to indicate that corresponding data is compressed |
US7234101B1 (en) * | 2003-08-27 | 2007-06-19 | Qlogic, Corporation | Method and system for providing data integrity in storage systems |
US20070234115A1 (en) * | 2006-04-04 | 2007-10-04 | Nobuyuki Saika | Backup system and backup method |
US7293138B1 (en) * | 2002-06-27 | 2007-11-06 | Adaptec, Inc. | Method and apparatus for raid on memory |
US20080005470A1 (en) * | 2006-06-30 | 2008-01-03 | Dot Hill Systems Corporation | System and method for sharing sata drives in active-active raid controller system |
KR100802666B1 (en) | 2004-08-27 | 2008-02-12 | 인피니언 테크놀로지스 아게 | Circuit arrangement and method for operating such a circuit arrangement |
US20080140910A1 (en) * | 2006-12-06 | 2008-06-12 | David Flynn | Apparatus, system, and method for managing data in a storage device with an empty data token directive |
US20080201616A1 (en) * | 2007-02-20 | 2008-08-21 | Dot Hill Systems Corporation | Redundant storage controller system with enhanced failure analysis capability |
US7437493B2 (en) | 2001-09-28 | 2008-10-14 | Dot Hill Systems Corp. | Modular architecture for a network storage controller |
US20090070651A1 (en) * | 2007-09-06 | 2009-03-12 | Siliconsystems, Inc. | Storage subsystem capable of adjusting ecc settings based on monitored conditions |
US20090094406A1 (en) * | 2007-10-05 | 2009-04-09 | Joseph Ashwood | Scalable mass data storage device |
US20090125671A1 (en) * | 2006-12-06 | 2009-05-14 | David Flynn | Apparatus, system, and method for storage space recovery after reaching a read count limit |
US20090150641A1 (en) * | 2007-12-06 | 2009-06-11 | David Flynn | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
US20090150744A1 (en) * | 2007-12-06 | 2009-06-11 | David Flynn | Apparatus, system, and method for ensuring data validity in a data storage process |
US20090220073A1 (en) * | 2008-02-28 | 2009-09-03 | Nortel Networks Limited | Transparent protocol independent data compression and encryption |
US7594134B1 (en) * | 2006-08-14 | 2009-09-22 | Network Appliance, Inc. | Dual access pathways to serially-connected mass data storage units |
US20090240912A1 (en) * | 2008-03-18 | 2009-09-24 | Apple Inc. | System and method for selectively storing and updating primary storage |
US20090287956A1 (en) * | 2008-05-16 | 2009-11-19 | David Flynn | Apparatus, system, and method for detecting and replacing failed data storage |
US20090307423A1 (en) * | 2008-06-06 | 2009-12-10 | Pivot3 | Method and system for initializing storage in a storage system |
US20100037002A1 (en) * | 2008-08-05 | 2010-02-11 | Broadcom Corporation | Mixed technology storage device |
US20100037019A1 (en) * | 2008-08-06 | 2010-02-11 | Sundrani Kapil | Methods and devices for high performance consistency check |
US7669190B2 (en) | 2004-05-18 | 2010-02-23 | Qlogic, Corporation | Method and system for efficiently recording processor events in host bus adapters |
US20100106906A1 (en) * | 2008-10-28 | 2010-04-29 | Pivot3 | Method and system for protecting against multiple failures in a raid system |
US7778020B2 (en) | 2006-12-06 | 2010-08-17 | Fusion Multisystems, Inc. | Apparatus, system, and method for a modular blade |
US20100293439A1 (en) * | 2009-05-18 | 2010-11-18 | David Flynn | Apparatus, system, and method for reconfiguring an array to operate with less storage elements |
US20100293440A1 (en) * | 2009-05-18 | 2010-11-18 | Jonathan Thatcher | Apparatus, system, and method to increase data integrity in a redundant storage system |
US20110022801A1 (en) * | 2007-12-06 | 2011-01-27 | David Flynn | Apparatus, system, and method for redundant write caching |
US20110040936A1 (en) * | 2008-06-30 | 2011-02-17 | Pivot3 | Method and system for execution of applications in conjunction with raid |
US20110047437A1 (en) * | 2006-12-06 | 2011-02-24 | Fusion-Io, Inc. | Apparatus, system, and method for graceful cache device degradation |
US20110078496A1 (en) * | 2009-09-29 | 2011-03-31 | Micron Technology, Inc. | Stripe based memory operation |
US20110153798A1 (en) * | 2009-12-22 | 2011-06-23 | Groenendaal Johan Van De | Method and apparatus for providing a remotely managed expandable computer system |
US20110182119A1 (en) * | 2010-01-27 | 2011-07-28 | Fusion-Io, Inc. | Apparatus, system, and method for determining a read voltage threshold for solid-state storage media |
US20110307689A1 (en) * | 2010-06-11 | 2011-12-15 | Jaewoong Chung | Processor support for hardware transactional memory |
US20120008506A1 (en) * | 2010-07-12 | 2012-01-12 | International Business Machines Corporation | Detecting intermittent network link failures |
US20120079313A1 (en) * | 2010-09-24 | 2012-03-29 | Honeywell International Inc. | Distributed memory array supporting random access and file storage operations |
US20120239865A1 (en) * | 2006-08-09 | 2012-09-20 | Hitachi Ulsi Systems Co., Ltd. | Storage device |
US8380915B2 (en) | 2010-01-27 | 2013-02-19 | Fusion-Io, Inc. | Apparatus, system, and method for managing solid-state storage media |
US8489817B2 (en) | 2007-12-06 | 2013-07-16 | Fusion-Io, Inc. | Apparatus, system, and method for caching data |
US8527699B2 (en) | 2011-04-25 | 2013-09-03 | Pivot3, Inc. | Method and system for distributed RAID implementation |
US8527841B2 (en) | 2009-03-13 | 2013-09-03 | Fusion-Io, Inc. | Apparatus, system, and method for using multi-level cell solid-state storage as reduced-level cell solid-state storage |
US20130279500A1 (en) * | 2010-11-03 | 2013-10-24 | Broadcom Corporation | Switch module |
US8645816B1 (en) * | 2006-08-08 | 2014-02-04 | Emc Corporation | Customizing user documentation |
US8661184B2 (en) | 2010-01-27 | 2014-02-25 | Fusion-Io, Inc. | Managing non-volatile media |
US8719501B2 (en) | 2009-09-08 | 2014-05-06 | Fusion-Io | Apparatus, system, and method for caching data on a solid-state storage device |
US8782013B1 (en) * | 2002-10-08 | 2014-07-15 | Symantec Operating Corporation | System and method for archiving data |
US8804415B2 (en) | 2012-06-19 | 2014-08-12 | Fusion-Io, Inc. | Adaptive voltage range management in non-volatile memory |
US8825937B2 (en) | 2011-02-25 | 2014-09-02 | Fusion-Io, Inc. | Writing cached data forward on read |
US8854882B2 (en) | 2010-01-27 | 2014-10-07 | Intelligent Intellectual Property Holdings 2 Llc | Configuring storage cells |
US8874823B2 (en) | 2011-02-15 | 2014-10-28 | Intellectual Property Holdings 2 Llc | Systems and methods for managing data input/output operations |
US8949653B1 (en) * | 2012-08-03 | 2015-02-03 | Symantec Corporation | Evaluating high-availability configuration |
US8966184B2 (en) | 2011-01-31 | 2015-02-24 | Intelligent Intellectual Property Holdings 2, LLC. | Apparatus, system, and method for managing eviction of data |
US20150082122A1 (en) * | 2012-05-31 | 2015-03-19 | Aniruddha Nagendran Udipi | Local error detection and global error correction |
US9003104B2 (en) | 2011-02-15 | 2015-04-07 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for a file-level cache |
US9058123B2 (en) | 2012-08-31 | 2015-06-16 | Intelligent Intellectual Property Holdings 2 Llc | Systems, methods, and interfaces for adaptive persistence |
US9104599B2 (en) | 2007-12-06 | 2015-08-11 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for destaging cached data |
US9116812B2 (en) | 2012-01-27 | 2015-08-25 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for a de-duplication cache |
US9116823B2 (en) | 2006-12-06 | 2015-08-25 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for adaptive error-correction coding |
US9170754B2 (en) | 2007-12-06 | 2015-10-27 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment |
US9201677B2 (en) | 2011-05-23 | 2015-12-01 | Intelligent Intellectual Property Holdings 2 Llc | Managing data input/output operations |
US9245653B2 (en) | 2010-03-15 | 2016-01-26 | Intelligent Intellectual Property Holdings 2 Llc | Reduced level cell mode for non-volatile memory |
US9251086B2 (en) | 2012-01-24 | 2016-02-02 | SanDisk Technologies, Inc. | Apparatus, system, and method for managing a cache |
US9430508B2 (en) | 2013-12-30 | 2016-08-30 | Microsoft Technology Licensing, Llc | Disk optimized paging for column oriented databases |
US9495241B2 (en) | 2006-12-06 | 2016-11-15 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for adaptive data storage |
US9519540B2 (en) | 2007-12-06 | 2016-12-13 | Sandisk Technologies Llc | Apparatus, system, and method for destaging cached data |
US9612966B2 (en) | 2012-07-03 | 2017-04-04 | Sandisk Technologies Llc | Systems, methods and apparatus for a virtual machine cache |
US9723054B2 (en) | 2013-12-30 | 2017-08-01 | Microsoft Technology Licensing, Llc | Hierarchical organization for scale-out cluster |
US9842053B2 (en) | 2013-03-15 | 2017-12-12 | Sandisk Technologies Llc | Systems and methods for persistent cache logging |
US20180024764A1 (en) * | 2016-07-22 | 2018-01-25 | Intel Corporation | Technologies for accelerating data writes |
CN107634865A (en) * | 2017-10-26 | 2018-01-26 | 郑州云海信息技术有限公司 | Novel storage system and management system |
US9898398B2 (en) | 2013-12-30 | 2018-02-20 | Microsoft Technology Licensing, Llc | Re-use of invalidated data in buffers |
US9910777B2 (en) | 2010-07-28 | 2018-03-06 | Sandisk Technologies Llc | Enhanced integrity through atomic writes in cache |
CN108008914A (en) * | 2016-10-27 | 2018-05-08 | 华为技术有限公司 | Method, apparatus and ARM device for disk management in an ARM device |
US9983993B2 (en) | 2009-09-09 | 2018-05-29 | Sandisk Technologies Llc | Apparatus, system, and method for conditional and atomic storage operations |
US10133663B2 (en) | 2010-12-17 | 2018-11-20 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for persistent address space management |
US10339056B2 (en) | 2012-07-03 | 2019-07-02 | Sandisk Technologies Llc | Systems, methods and apparatus for cache transfers |
US20190205206A1 (en) * | 2017-12-28 | 2019-07-04 | Micron Technology, Inc. | Memory controller implemented error correction code memory |
US10592173B2 (en) * | 2018-01-10 | 2020-03-17 | International Business Machines Corporation | Increasing storage efficiency of a data protection technique |
US10715596B2 (en) * | 2016-07-12 | 2020-07-14 | Wiwynn Corporation | Server system and control method for storage unit |
US11055233B2 (en) * | 2015-10-27 | 2021-07-06 | Medallia, Inc. | Predictive memory management |
Families Citing this family (173)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040073681A1 (en) * | 2000-02-01 | 2004-04-15 | Fald Flemming Danhild | Method for parallel data transmission from computer in a network and backup system therefor |
WO2002017553A2 (en) * | 2000-08-18 | 2002-02-28 | United States Postal Service | Apparatus and methods for the secure transfer of electronic data |
US7302582B2 (en) | 2000-08-21 | 2007-11-27 | United States Postal Service | Delivery point validation system |
US7225467B2 (en) * | 2000-11-15 | 2007-05-29 | Lockheed Martin Corporation | Active intrusion resistant environment of layered object and compartment keys (airelock) |
US7213265B2 (en) * | 2000-11-15 | 2007-05-01 | Lockheed Martin Corporation | Real time active network compartmentalization |
EP1229433A1 (en) * | 2001-01-31 | 2002-08-07 | Hewlett-Packard Company | File sort for backup |
US7171453B2 (en) * | 2001-04-19 | 2007-01-30 | Hitachi, Ltd. | Virtual private volume method and system |
US20030030540A1 (en) * | 2001-08-09 | 2003-02-13 | Hom Wayne C. | Method and apparatus for updating security control system operating parameters |
JP2003177963A (en) * | 2001-12-12 | 2003-06-27 | Hitachi Ltd | Storage device |
US7109985B2 (en) * | 2001-12-14 | 2006-09-19 | Liquidpixels, Inc. | System and method for dynamically generating on-demand digital images |
US7372828B2 (en) * | 2001-12-21 | 2008-05-13 | Broadcom Corporation | Wireless access point management in a campus environment |
US7453839B2 (en) * | 2001-12-21 | 2008-11-18 | Broadcom Corporation | Wireless local area network channel resource management |
JP2003208347A (en) * | 2002-01-16 | 2003-07-25 | Fujitsu Ltd | Access controller, access control program, host device and host control program |
US7389315B1 (en) * | 2002-02-28 | 2008-06-17 | Network Appliance, Inc. | System and method for byte swapping file access data structures |
US7143307B1 (en) * | 2002-03-15 | 2006-11-28 | Network Appliance, Inc. | Remote disaster recovery and data migration using virtual appliance migration |
JP3743509B2 (en) * | 2002-03-20 | 2006-02-08 | セイコーエプソン株式会社 | Data transfer control device and electronic device |
WO2003081440A1 (en) * | 2002-03-21 | 2003-10-02 | Snapp Robert F | Method and system for storing and retrieving data using hash-accessed multiple data stores |
US7664731B2 (en) * | 2002-03-21 | 2010-02-16 | United States Postal Service | Method and system for storing and retrieving data using hash-accessed multiple data stores |
US7228338B2 (en) * | 2002-03-27 | 2007-06-05 | Motorola, Inc. | Multi-service platform module |
US20030188153A1 (en) * | 2002-04-02 | 2003-10-02 | Demoff Jeff S. | System and method for mirroring data using a server |
US7609718B2 (en) * | 2002-05-15 | 2009-10-27 | Broadcom Corporation | Packet data service over hyper transport link(s) |
US20040029593A1 (en) * | 2002-08-09 | 2004-02-12 | Skinner Davey N. | Global positioning system receiver with high density memory storage |
US7159119B2 (en) * | 2002-09-06 | 2007-01-02 | United States Postal Service | Method and system for efficiently retrieving secured data by securely pre-processing provided access information |
WO2004028155A2 (en) * | 2002-09-19 | 2004-04-01 | Image Stream Medical, Llc | Streaming digital recording system |
US7080094B2 (en) * | 2002-10-29 | 2006-07-18 | Lockheed Martin Corporation | Hardware accelerated validating parser |
US7146643B2 (en) * | 2002-10-29 | 2006-12-05 | Lockheed Martin Corporation | Intrusion detection accelerator |
US20040083466A1 (en) * | 2002-10-29 | 2004-04-29 | Dapp Michael C. | Hardware parser accelerator |
US20070061884A1 (en) * | 2002-10-29 | 2007-03-15 | Dapp Michael C | Intrusion detection accelerator |
US7240119B2 (en) * | 2002-11-04 | 2007-07-03 | Ge Fanuc Automation North America, Inc. | Method for configuring a programmable logic controller using an extensible markup language schema |
US7406481B2 (en) * | 2002-12-17 | 2008-07-29 | Oracle International Corporation | Using direct memory access for performing database operations between two or more machines |
US7386639B2 (en) * | 2003-01-15 | 2008-06-10 | Avago Technologies Fiber Ip (Singapore) Pte. Ltd. | Switch for coupling one bus to another bus |
US20040148547A1 (en) * | 2003-01-28 | 2004-07-29 | Jim Thompson | UPS-based file data storage apparatus and computer program products |
CA2521576A1 (en) * | 2003-02-28 | 2004-09-16 | Lockheed Martin Corporation | Hardware accelerator state table compiler |
US7200206B1 (en) * | 2003-03-27 | 2007-04-03 | At&T Corp. | Method and apparatus for testing a subscriber loop-based service |
JP2004302512A (en) * | 2003-03-28 | 2004-10-28 | Hitachi Ltd | Cluster computing system and fail-over method for the same |
GB2417360B (en) * | 2003-05-20 | 2007-03-28 | Kagutech Ltd | Digital backplane |
JP4100256B2 (en) * | 2003-05-29 | 2008-06-11 | 株式会社日立製作所 | Communication method and information processing apparatus |
US7454555B2 (en) * | 2003-06-12 | 2008-11-18 | Rambus Inc. | Apparatus and method including a memory device having multiple sets of memory banks with duplicated data emulating a fast access time, fixed latency memory device |
JP2005018185A (en) * | 2003-06-24 | 2005-01-20 | Hitachi Ltd | Storage device system |
DE60316419T2 (en) * | 2003-06-24 | 2008-06-19 | Research In Motion Ltd., Waterloo | Serialization of a distributed application of a router |
US20050015645A1 (en) * | 2003-06-30 | 2005-01-20 | Anil Vasudevan | Techniques to allocate information for processing |
US7254754B2 (en) * | 2003-07-14 | 2007-08-07 | International Business Machines Corporation | Raid 3+3 |
US7281177B2 (en) * | 2003-07-14 | 2007-10-09 | International Business Machines Corporation | Autonomic parity exchange |
US7139942B2 (en) * | 2003-07-21 | 2006-11-21 | Sun Microsystems, Inc. | Method and apparatus for memory redundancy and recovery from uncorrectable errors |
AU2003903967A0 (en) * | 2003-07-30 | 2003-08-14 | Canon Kabushiki Kaisha | Distributed data caching in hybrid peer-to-peer systems |
JP4354233B2 (en) * | 2003-09-05 | 2009-10-28 | 株式会社日立製作所 | Backup system and method |
US20050086471A1 (en) * | 2003-10-20 | 2005-04-21 | Spencer Andrew M. | Removable information storage device that includes a master encryption key and encryption keys |
US20050097388A1 (en) * | 2003-11-05 | 2005-05-05 | Kris Land | Data distributor |
CN100440746C (en) * | 2003-12-01 | 2008-12-03 | 中兴通讯股份有限公司 | A method for multi-port multi-link communication network backup control and apparatus therefor |
US20050160249A1 (en) * | 2004-01-21 | 2005-07-21 | Hewlett-Packard Development Company, L.P. | Volume type determination for disk volumes managed by a LDM |
US7386663B2 (en) | 2004-05-13 | 2008-06-10 | Cousins Robert E | Transaction-based storage system and method that uses variable sized objects to store data |
US20050262275A1 (en) * | 2004-05-19 | 2005-11-24 | Gil Drori | Method and apparatus for accessing a multi ordered memory array |
US7590522B2 (en) * | 2004-06-14 | 2009-09-15 | Hewlett-Packard Development Company, L.P. | Virtual mass storage device for server management information |
US7484016B2 (en) * | 2004-06-30 | 2009-01-27 | Intel Corporation | Apparatus and method for high performance volatile disk drive memory access using an integrated DMA engine |
GB2418268A (en) * | 2004-09-15 | 2006-03-22 | Ibm | Method for monitoring software components using native device instructions |
DE102004047145A1 (en) * | 2004-09-29 | 2006-03-30 | Bayer Business Services Gmbh | storage concept |
US8131926B2 (en) | 2004-10-20 | 2012-03-06 | Seagate Technology, Llc | Generic storage container for allocating multiple data formats |
US20060095460A1 (en) * | 2004-10-29 | 2006-05-04 | International Business Machines Corporation | Systems and methods for efficiently clustering objects based on access patterns |
EP1836634A2 (en) * | 2004-12-06 | 2007-09-26 | United Technologies Corporation | Method and system for developing lubricants, lubricant additives, and lubricant base stocks utilizing atomistic modeling tools |
US7801925B2 (en) * | 2004-12-22 | 2010-09-21 | United States Postal Service | System and method for electronically processing address information |
US7565496B2 (en) * | 2005-01-22 | 2009-07-21 | Cisco Technology, Inc. | Sharing memory among multiple information channels |
US20060190552A1 (en) * | 2005-02-24 | 2006-08-24 | Henze Richard H | Data retention system with a plurality of access protocols |
US7617343B2 (en) * | 2005-03-02 | 2009-11-10 | Qualcomm Incorporated | Scalable bus structure |
WO2006124910A2 (en) | 2005-05-17 | 2006-11-23 | United States Postal Service | System and method for automated management of an address database |
JP4662550B2 (en) * | 2005-10-20 | 2011-03-30 | 株式会社日立製作所 | Storage system |
JP4972932B2 (en) * | 2005-12-26 | 2012-07-11 | 富士通株式会社 | Memory access device |
US8006011B2 (en) * | 2006-02-07 | 2011-08-23 | Cisco Technology, Inc. | InfiniBand boot bridge with fibre channel target |
US7447836B2 (en) * | 2006-02-14 | 2008-11-04 | Software Site Applications, Limited Liability Company | Disk drive storage defragmentation system |
US8171307B1 (en) * | 2006-05-26 | 2012-05-01 | Netapp, Inc. | Background encryption of disks in a large cluster |
US7451286B2 (en) * | 2006-07-18 | 2008-11-11 | Network Appliance, Inc. | Removable portable data backup for a network storage system |
US8806227B2 (en) * | 2006-08-04 | 2014-08-12 | Lsi Corporation | Data shredding RAID mode |
US7908259B2 (en) * | 2006-08-25 | 2011-03-15 | Teradata Us, Inc. | Hardware accelerated reconfigurable processor for accelerating database operations and queries |
US8935302B2 (en) * | 2006-12-06 | 2015-01-13 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for data block usage information synchronization for a non-volatile storage volume |
WO2008092031A2 (en) * | 2007-01-24 | 2008-07-31 | Vir2Us, Inc. | Computer system architecture having isolated file system management for secure and reliable data processing |
US20080183959A1 (en) * | 2007-01-29 | 2008-07-31 | Pelley Perry H | Memory system having global buffered control for memory modules |
US20080201569A1 (en) * | 2007-02-21 | 2008-08-21 | Hsiao-Yuan Wen | Dual cpu inverter system and method for the same |
US7610459B2 (en) * | 2007-04-11 | 2009-10-27 | International Business Machines Corporation | Maintain owning application information of data for a data storage system |
US8458129B2 (en) | 2008-06-23 | 2013-06-04 | Teradata Us, Inc. | Methods and systems for real-time continuous updates |
US8862625B2 (en) | 2008-04-07 | 2014-10-14 | Teradata Us, Inc. | Accessing data in a column store database based on hardware compatible indexing and replicated reordered columns |
US9424315B2 (en) | 2007-08-27 | 2016-08-23 | Teradata Us, Inc. | Methods and systems for run-time scheduling database operations that are executed in hardware |
US7966343B2 (en) | 2008-04-07 | 2011-06-21 | Teradata Us, Inc. | Accessing data in a column store database based on hardware compatible data structures |
KR20090024971A (en) * | 2007-09-05 | 2009-03-10 | 삼성전자주식회사 | Method and apparatus for cache using sector set |
US8032497B2 (en) * | 2007-09-26 | 2011-10-04 | International Business Machines Corporation | Method and system providing extended and end-to-end data integrity through database and other system layers |
US8140746B2 (en) * | 2007-12-14 | 2012-03-20 | Spansion Llc | Intelligent memory data management |
US20090228488A1 (en) * | 2008-03-04 | 2009-09-10 | Kim Brand | Data safety appliance and method |
US9002906B1 (en) * | 2008-03-31 | 2015-04-07 | Emc Corporation | System and method for handling large transactions in a storage virtualization system |
WO2009131542A1 (en) * | 2008-04-23 | 2009-10-29 | Drone Technology Pte Ltd | Module for data acquisition and control in a sensor/control network |
AU2008207572B2 (en) * | 2008-04-23 | 2010-10-28 | Drone Technology Pte Ltd | Module for data acquisition and control in a sensor/control network |
US8914744B2 (en) * | 2008-06-06 | 2014-12-16 | Liquidpixels, Inc. | Enhanced zoom and pan for viewing digital images |
US8407172B1 (en) * | 2008-06-09 | 2013-03-26 | Euler Optimization, Inc. | Method, apparatus, and article of manufacture for performing a pivot-in-place operation for a linear programming problem |
US8566267B1 (en) | 2008-06-09 | 2013-10-22 | Euler Optimization, Inc. | Method, apparatus, and article of manufacture for solving linear optimization problems |
US8812421B1 (en) | 2008-06-09 | 2014-08-19 | Euler Optimization, Inc. | Method and apparatus for autonomous synchronous computing |
US8190699B2 (en) * | 2008-07-28 | 2012-05-29 | Crossfield Technology LLC | System and method of multi-path data communications |
US20100049914A1 (en) * | 2008-08-20 | 2010-02-25 | Goodwin Paul M | RAID Enhanced solid state drive |
US8301671B1 (en) * | 2009-01-08 | 2012-10-30 | Avaya Inc. | Method and apparatus providing removal of replicated objects based on garbage collection |
US8392654B2 (en) * | 2009-04-17 | 2013-03-05 | Lsi Corporation | Raid level migration for spanned arrays |
KR101601792B1 (en) * | 2009-08-12 | 2016-03-09 | 삼성전자주식회사 | Semiconductor memory device controller and semiconductor memory system |
EP2476079A4 (en) | 2009-09-09 | 2013-07-03 | Fusion Io Inc | Apparatus, system, and method for allocating storage |
US9223514B2 (en) | 2009-09-09 | 2015-12-29 | SanDisk Technologies, Inc. | Erase suspend/resume for memory |
US8289801B2 (en) | 2009-09-09 | 2012-10-16 | Fusion-Io, Inc. | Apparatus, system, and method for power reduction management in a storage device |
US9122579B2 (en) | 2010-01-06 | 2015-09-01 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for a storage layer |
US8281218B1 (en) * | 2009-11-02 | 2012-10-02 | Western Digital Technologies, Inc. | Data manipulation engine |
US8464135B2 (en) | 2010-07-13 | 2013-06-11 | Sandisk Technologies Inc. | Adaptive flash interface |
US8725934B2 (en) | 2011-12-22 | 2014-05-13 | Fusion-Io, Inc. | Methods and apparatuses for atomic storage operations |
US8984216B2 (en) | 2010-09-09 | 2015-03-17 | Fusion-Io, Llc | Apparatus, system, and method for managing lifetime of a storage device |
US9244769B2 (en) * | 2010-09-28 | 2016-01-26 | Pure Storage, Inc. | Offset protection data in a RAID array |
WO2012061048A1 (en) * | 2010-11-04 | 2012-05-10 | Rambus Inc. | Techniques for storing data and tags in different memory arrays |
US10817421B2 (en) | 2010-12-13 | 2020-10-27 | Sandisk Technologies Llc | Persistent data structures |
US9218278B2 (en) | 2010-12-13 | 2015-12-22 | SanDisk Technologies, Inc. | Auto-commit memory |
WO2012082792A2 (en) | 2010-12-13 | 2012-06-21 | Fusion-Io, Inc. | Apparatus, system, and method for auto-commit memory |
US9047178B2 (en) | 2010-12-13 | 2015-06-02 | SanDisk Technologies, Inc. | Auto-commit memory synchronization |
US9208071B2 (en) | 2010-12-13 | 2015-12-08 | SanDisk Technologies, Inc. | Apparatus, system, and method for accessing memory |
US10817502B2 (en) | 2010-12-13 | 2020-10-27 | Sandisk Technologies Llc | Persistent memory management |
CN102571478B (en) * | 2010-12-31 | 2016-05-25 | 上海宽惠网络科技有限公司 | Server |
US9213594B2 (en) | 2011-01-19 | 2015-12-15 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for managing out-of-service conditions |
WO2012129191A2 (en) | 2011-03-18 | 2012-09-27 | Fusion-Io, Inc. | Logical interfaces for contextual storage |
US9563555B2 (en) | 2011-03-18 | 2017-02-07 | Sandisk Technologies Llc | Systems and methods for storage allocation |
CN102147773A (en) * | 2011-03-30 | 2011-08-10 | 浪潮(北京)电子信息产业有限公司 | Method, device and system for managing high-end disk array data |
US9753858B2 (en) | 2011-11-30 | 2017-09-05 | Advanced Micro Devices, Inc. | DRAM cache with tags and data jointly stored in physical rows |
US9274937B2 (en) | 2011-12-22 | 2016-03-01 | Longitude Enterprise Flash S.A.R.L. | Systems, methods, and interfaces for vector input/output operations |
US10133662B2 (en) | 2012-06-29 | 2018-11-20 | Sandisk Technologies Llc | Systems, methods, and interfaces for managing persistent data of atomic storage operations |
US10102117B2 (en) | 2012-01-12 | 2018-10-16 | Sandisk Technologies Llc | Systems and methods for cache and storage device coordination |
US9251052B2 (en) | 2012-01-12 | 2016-02-02 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for profiling a non-volatile cache having a logical-to-physical translation layer |
US9767032B2 (en) | 2012-01-12 | 2017-09-19 | Sandisk Technologies Llc | Systems and methods for cache endurance |
US8606755B2 (en) * | 2012-01-12 | 2013-12-10 | International Business Machines Corporation | Maintaining a mirrored file system for performing defragmentation |
US10019353B2 (en) | 2012-03-02 | 2018-07-10 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for referencing data on a storage medium |
US9443591B2 (en) * | 2013-01-23 | 2016-09-13 | Seagate Technology Llc | Storage device out-of-space handling |
US8868672B2 (en) | 2012-05-14 | 2014-10-21 | Advanced Micro Devices, Inc. | Server node interconnect devices and methods |
US9678863B2 (en) | 2012-06-12 | 2017-06-13 | Sandisk Technologies, Llc | Hybrid checkpointed memory |
US9137173B2 (en) | 2012-06-19 | 2015-09-15 | Advanced Micro Devices, Inc. | Devices and methods for interconnecting server nodes |
US8930595B2 (en) * | 2012-06-21 | 2015-01-06 | Advanced Micro Devices, Inc. | Memory switch for interconnecting server nodes |
US9253287B2 (en) | 2012-08-20 | 2016-02-02 | Advanced Micro Devices, Inc. | Speculation based approach for reliable message communications |
US10509776B2 (en) | 2012-09-24 | 2019-12-17 | Sandisk Technologies Llc | Time sequence data management |
US10318495B2 (en) | 2012-09-24 | 2019-06-11 | Sandisk Technologies Llc | Snapshots for a non-volatile device |
US8984368B2 (en) * | 2012-10-11 | 2015-03-17 | Advanced Micro Devices, Inc. | High reliability memory controller |
US8875256B2 (en) | 2012-11-13 | 2014-10-28 | Advanced Micro Devices, Inc. | Data flow processing in a network environment |
US10558561B2 (en) | 2013-04-16 | 2020-02-11 | Sandisk Technologies Llc | Systems and methods for storage metadata management |
US10102144B2 (en) | 2013-04-16 | 2018-10-16 | Sandisk Technologies Llc | Systems, methods and interfaces for data virtualization |
WO2015016832A1 (en) * | 2013-07-30 | 2015-02-05 | Hewlett-Packard Development Company, L.P. | Recovering stranded data |
US9842128B2 (en) | 2013-08-01 | 2017-12-12 | Sandisk Technologies Llc | Systems and methods for atomic storage operations |
US10019320B2 (en) | 2013-10-18 | 2018-07-10 | Sandisk Technologies Llc | Systems and methods for distributed atomic storage operations |
US10073630B2 (en) | 2013-11-08 | 2018-09-11 | Sandisk Technologies Llc | Systems and methods for log coordination |
US9141291B2 (en) | 2013-11-26 | 2015-09-22 | Sandisk Technologies Inc. | Adaptive context disbursement for improved performance in non-volatile memory systems |
TWI544342B (en) * | 2013-12-17 | 2016-08-01 | 緯創資通股份有限公司 | Method and system for verifying quality of server |
US10419454B2 (en) | 2014-02-28 | 2019-09-17 | British Telecommunications Public Limited Company | Malicious encrypted traffic inhibitor |
EP3111612B1 (en) * | 2014-02-28 | 2018-03-21 | British Telecommunications public limited company | Profiling for malicious encrypted network traffic identification |
US10469507B2 (en) | 2014-02-28 | 2019-11-05 | British Telecommunications Public Limited Company | Malicious encrypted network traffic identification |
US9384128B2 (en) | 2014-04-18 | 2016-07-05 | SanDisk Technologies, Inc. | Multi-level redundancy code for non-volatile memory controller |
US9696920B2 (en) | 2014-06-02 | 2017-07-04 | Micron Technology, Inc. | Systems and methods for improving efficiencies of a memory system |
US9619177B2 (en) | 2014-06-05 | 2017-04-11 | Kabushiki Kaisha Toshiba | Memory system including non-volatile memory, buffer memory, and controller controlling reading data from non-volatile memory |
US9442657B2 (en) * | 2014-06-05 | 2016-09-13 | Kabushiki Kaisha Toshiba | Memory system utilizing a connection condition of an interface to transmit data |
US9582201B2 (en) | 2014-09-26 | 2017-02-28 | Western Digital Technologies, Inc. | Multi-tier scheme for logical storage management |
WO2016089381A1 (en) * | 2014-12-02 | 2016-06-09 | Hewlett Packard Enterprise Development Lp | Backup power communication |
US9946607B2 (en) | 2015-03-04 | 2018-04-17 | Sandisk Technologies Llc | Systems and methods for storage error management |
WO2016146610A1 (en) | 2015-03-17 | 2016-09-22 | British Telecommunications Public Limited Company | Malicious encrypted network traffic identification using fourier transform |
US10594707B2 (en) | 2015-03-17 | 2020-03-17 | British Telecommunications Public Limited Company | Learned profiles for malicious encrypted network traffic identification |
US10009438B2 (en) | 2015-05-20 | 2018-06-26 | Sandisk Technologies Llc | Transaction log acceleration |
EP3394783B1 (en) | 2015-12-24 | 2020-09-30 | British Telecommunications public limited company | Malicious software identification |
WO2017108575A1 (en) | 2015-12-24 | 2017-06-29 | British Telecommunications Public Limited Company | Malicious software identification |
WO2017109135A1 (en) | 2015-12-24 | 2017-06-29 | British Telecommunications Public Limited Company | Malicious network traffic identification |
US10032115B2 (en) * | 2016-05-03 | 2018-07-24 | International Business Machines Corporation | Estimating file level input/output operations per second (IOPS) |
US10515017B2 (en) * | 2017-02-23 | 2019-12-24 | Honeywell International Inc. | Memory partitioning for a computing system with memory pools |
EP3602999B1 (en) | 2017-03-28 | 2021-05-19 | British Telecommunications Public Limited Company | Initialisation vector identification for encrypted malware traffic detection |
US10387673B2 (en) | 2017-06-30 | 2019-08-20 | Microsoft Technology Licensing, Llc | Fully managed account level blob data encryption in a distributed storage environment |
US10659225B2 (en) | 2017-06-30 | 2020-05-19 | Microsoft Technology Licensing, Llc | Encrypting existing live unencrypted data using age-based garbage collection |
US10764045B2 (en) * | 2017-06-30 | 2020-09-01 | Microsoft Technology Licensing, Llc | Encrypting object index in a distributed storage environment |
US10831935B2 (en) * | 2017-08-31 | 2020-11-10 | Pure Storage, Inc. | Encryption management with host-side data reduction |
US10366007B2 (en) | 2017-12-11 | 2019-07-30 | Honeywell International Inc. | Apparatuses and methods for determining efficient memory partitioning |
WO2019180494A1 (en) * | 2018-03-23 | 2019-09-26 | Pratik Sharma | Multi-ported disk for storage cluster |
CN109413497B (en) * | 2018-09-12 | 2021-04-13 | 海信视像科技股份有限公司 | Intelligent television and system starting method thereof |
EP3623980B1 (en) | 2018-09-12 | 2021-04-28 | British Telecommunications public limited company | Ransomware encryption algorithm determination |
EP3623982B1 (en) | 2018-09-12 | 2021-05-19 | British Telecommunications public limited company | Ransomware remediation |
US11494087B2 (en) | 2018-10-31 | 2022-11-08 | Advanced Micro Devices, Inc. | Tolerating memory stack failures in multi-stack systems |
Family Cites Families (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4661900A (en) * | 1983-04-25 | 1987-04-28 | Cray Research, Inc. | Flexible chaining in vector processor with selective use of vector registers as operand and result registers |
US4901230A (en) * | 1983-04-25 | 1990-02-13 | Cray Research, Inc. | Computer vector multiprocessing control with multiple access memory and priority conflict resolution method |
DE69131551T2 (en) * | 1990-11-09 | 2000-02-17 | Emc Corp | Logical division of a storage system with redundant matrix |
US5289377A (en) * | 1991-08-12 | 1994-02-22 | Trw Inc. | Fault-tolerant solid-state flight data recorder |
US5526507A (en) * | 1992-01-06 | 1996-06-11 | Hill; Andrew J. W. | Computer memory array control for accessing different memory banks simultaneously |
US5987627A (en) | 1992-05-13 | 1999-11-16 | Rawlings, Iii; Joseph H. | Methods and apparatus for high-speed mass storage access in a computer system |
US5321697A (en) * | 1992-05-28 | 1994-06-14 | Cray Research, Inc. | Solid state storage device |
US5790773A (en) | 1995-12-29 | 1998-08-04 | Symbios, Inc. | Method and apparatus for generating snapshot copies for data backup in a raid subsystem |
US5781910A (en) * | 1996-09-13 | 1998-07-14 | Stratus Computer, Inc. | Performing concurrent transactions in a replicated database environment |
JP3563541B2 (en) * | 1996-09-13 | 2004-09-08 | 株式会社東芝 | Data storage device and data storage method |
US5893919A (en) * | 1996-09-27 | 1999-04-13 | Storage Computer Corporation | Apparatus and method for storing data with selectable data protection using mirroring and selectable parity inhibition |
US5890207A (en) * | 1996-11-27 | 1999-03-30 | Emc Corporation | High performance integrated cached storage device |
US6195754B1 (en) * | 1997-01-28 | 2001-02-27 | Tandem Computers Incorporated | Method and apparatus for tolerating power outages of variable duration in a multi-processor system |
US6381674B2 (en) | 1997-09-30 | 2002-04-30 | Lsi Logic Corporation | Method and apparatus for providing centralized intelligent cache between multiple data controlling elements |
CN1145913C (en) * | 1998-03-18 | 2004-04-14 | 西门子公司 | Device for reproducing information or executing functions |
US6321295B1 (en) * | 1998-03-19 | 2001-11-20 | Insilicon Corporation | System and method for selective transfer of application data between storage devices of a computer system through utilization of dynamic memory allocation |
US6163856A (en) * | 1998-05-29 | 2000-12-19 | Sun Microsystems, Inc. | Method and apparatus for file system disaster recovery |
US6070182A (en) | 1998-06-05 | 2000-05-30 | Intel Corporation | Data processor having integrated boolean and adder logic for accelerating storage and networking applications |
US6530035B1 (en) * | 1998-10-23 | 2003-03-04 | Oracle Corporation | Method and system for managing storage systems containing redundancy data |
US6351838B1 (en) * | 1999-03-12 | 2002-02-26 | Aurora Communications, Inc | Multidimensional parity protection system |
US6446141B1 (en) * | 1999-03-25 | 2002-09-03 | Dell Products, L.P. | Storage server system including ranking of data source |
US6553408B1 (en) * | 1999-03-25 | 2003-04-22 | Dell Products L.P. | Virtual device architecture having memory for storing lists of driver modules |
US6538669B1 (en) * | 1999-07-15 | 2003-03-25 | Dell Products L.P. | Graphical user interface for configuration of a storage system |
US6549922B1 (en) * | 1999-10-01 | 2003-04-15 | Alok Srivastava | System for collecting, transforming and managing media metadata |
US6370611B1 (en) * | 2000-04-04 | 2002-04-09 | Compaq Computer Corporation | Raid XOR operations to synchronous DRAM using a read buffer and pipelining of synchronous DRAM burst read data |
US6571351B1 (en) * | 2000-04-07 | 2003-05-27 | Omneon Video Networks | Tightly coupled secondary storage system and file system |
US6523102B1 (en) * | 2000-04-14 | 2003-02-18 | Interactive Silicon, Inc. | Parallel compression/decompression system and method for implementation of in-memory compressed cache improving storage density and access speed for industry standard memory subsystems and in-line memory modules |
2001
- 2001-11-30 US US10/007,413 patent/US6745310B2/en not_active Expired - Fee Related
- 2001-11-30 US US10/007,418 patent/US6957313B2/en not_active Expired - Fee Related
- 2001-11-30 US US10/007,436 patent/US20020069318A1/en not_active Abandoned
- 2001-11-30 US US10/007,415 patent/US6754785B2/en not_active Expired - Fee Related
- 2001-11-30 US US10/007,410 patent/US20020069317A1/en not_active Abandoned
- 2001-12-03 WO PCT/US2001/047594 patent/WO2002091382A2/en not_active Application Discontinuation
- 2001-12-03 AU AU2001297837A patent/AU2001297837A1/en not_active Abandoned
Cited By (306)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6643758B2 (en) * | 2001-04-26 | 2003-11-04 | Fujitsu Limited | Flash memory capable of changing bank configuration |
US6785788B1 (en) * | 2001-07-03 | 2004-08-31 | Unisys Corporation | System and method for implementing an enhanced raid disk storage system |
US20050102557A1 (en) * | 2001-09-28 | 2005-05-12 | Dot Hill Systems Corporation | Apparatus and method for adopting an orphan I/O port in a redundant storage controller |
US7536495B2 (en) | 2001-09-28 | 2009-05-19 | Dot Hill Systems Corporation | Certified memory-to-memory data transfer between active-active raid controllers |
GB2396463B (en) * | 2001-09-28 | 2006-04-12 | Chaparral Network Storage Inc | Controller data sharing using a modular DMA architecture |
US20060106982A1 (en) * | 2001-09-28 | 2006-05-18 | Dot Hill Systems Corporation | Certified memory-to-memory data transfer between active-active raid controllers |
US20030065836A1 (en) * | 2001-09-28 | 2003-04-03 | Pecone Victor Key | Controller data sharing using a modular DMA architecture |
WO2003030006A1 (en) * | 2001-09-28 | 2003-04-10 | Chaparral Network Storage Inc. | Controller data sharing using a modular dma architecture |
US20060282701A1 (en) * | 2001-09-28 | 2006-12-14 | Dot Hill Systems Corporation | Method for adopting an orphan i/o port in a redundant storage controller |
US7437493B2 (en) | 2001-09-28 | 2008-10-14 | Dot Hill Systems Corp. | Modular architecture for a network storage controller |
US7062591B2 (en) | 2001-09-28 | 2006-06-13 | Dot Hill Systems Corp. | Controller data sharing using a modular DMA architecture |
US20060277347A1 (en) * | 2001-09-28 | 2006-12-07 | Dot Hill Systems Corporation | RAID system for performing efficient mirrored posted-write operations |
US7146448B2 (en) | 2001-09-28 | 2006-12-05 | Dot Hill Systems Corporation | Apparatus and method for adopting an orphan I/O port in a redundant storage controller |
US7558897B2 (en) | 2001-09-28 | 2009-07-07 | Dot Hill Systems Corporation | Method for adopting an orphan I/O port in a redundant storage controller |
GB2396463A (en) * | 2001-09-28 | 2004-06-23 | Chaparral Network Storage Inc | Controller data sharing using a modular DMA architecture |
US7340555B2 (en) | 2001-09-28 | 2008-03-04 | Dot Hill Systems Corporation | RAID system for performing efficient mirrored posted-write operations |
US20040186931A1 (en) * | 2001-11-09 | 2004-09-23 | Gene Maine | Transferring data using direct memory access |
US7380115B2 (en) | 2001-11-09 | 2008-05-27 | Dot Hill Systems Corp. | Transferring data using direct memory access |
US20040201755A1 (en) * | 2001-12-06 | 2004-10-14 | Norskog Allen C. | Apparatus and method for generating multi-image scenes with a camera |
US6993635B1 (en) * | 2002-03-29 | 2006-01-31 | Intransa, Inc. | Synchronizing a distributed mirror |
US20030217300A1 (en) * | 2002-04-30 | 2003-11-20 | Hitachi, Ltd. | Method for backing up power supply of disk array device and system thereof |
US7051233B2 (en) | 2002-04-30 | 2006-05-23 | Hitachi, Ltd. | Method for backing up power supply of disk array device and system thereof |
US7293138B1 (en) * | 2002-06-27 | 2007-11-06 | Adaptec, Inc. | Method and apparatus for raid on memory |
US8782013B1 (en) * | 2002-10-08 | 2014-07-15 | Symantec Operating Corporation | System and method for archiving data |
US7076618B2 (en) | 2002-11-08 | 2006-07-11 | Intel Corporation | Memory controllers with interleaved mirrored memory modes |
US20050262388A1 (en) * | 2002-11-08 | 2005-11-24 | Dahlen Eric J | Memory controllers with interleaved mirrored memory modes |
US7130229B2 (en) * | 2002-11-08 | 2006-10-31 | Intel Corporation | Interleaved mirrored memory systems |
US20040158675A1 (en) * | 2002-12-02 | 2004-08-12 | Elpida Memory, Inc. | Memory system and control method therefor |
US20090164724A1 (en) * | 2002-12-02 | 2009-06-25 | Elpida Memory, Inc. | System and control method for hot swapping of memory modules configured in a ring bus |
US8370572B2 (en) | 2003-02-17 | 2013-02-05 | Hitachi, Ltd. | Storage system for holding a remaining available lifetime of a logical storage region |
US20050065984A1 (en) * | 2003-02-17 | 2005-03-24 | Ikuya Yagisawa | Storage system |
US7272686B2 (en) | 2003-02-17 | 2007-09-18 | Hitachi, Ltd. | Storage system |
US7275133B2 (en) | 2003-02-17 | 2007-09-25 | Hitachi, Ltd. | Storage system |
US7925830B2 (en) | 2003-02-17 | 2011-04-12 | Hitachi, Ltd. | Storage system for holding a remaining available lifetime of a logical storage region |
US20040162940A1 (en) * | 2003-02-17 | 2004-08-19 | Ikuya Yagisawa | Storage system |
US20050066078A1 (en) * | 2003-02-17 | 2005-03-24 | Ikuya Yagisawa | Storage system |
US20050071525A1 (en) * | 2003-02-17 | 2005-03-31 | Ikuya Yagisawa | Storage system |
US7146464B2 (en) | 2003-02-17 | 2006-12-05 | Hitachi, Ltd. | Storage system |
US7366839B2 (en) | 2003-02-17 | 2008-04-29 | Hitachi, Ltd. | Storage system |
US20050050275A1 (en) * | 2003-02-17 | 2005-03-03 | Ikuya Yagisawa | Storage system |
US7047354B2 (en) | 2003-02-17 | 2006-05-16 | Hitachi, Ltd. | Storage system |
US20080172528A1 (en) * | 2003-02-17 | 2008-07-17 | Hitachi, Ltd. | Storage system |
US20050066128A1 (en) * | 2003-02-17 | 2005-03-24 | Ikuya Yagisawa | Storage system |
US20110167220A1 (en) * | 2003-02-17 | 2011-07-07 | Hitachi, Ltd. | Storage system for holding a remaining available lifetime of a logical storage region |
US7143227B2 (en) | 2003-02-18 | 2006-11-28 | Dot Hill Systems Corporation | Broadcast bridge apparatus for transferring data to redundant memory subsystems in a storage controller |
US20040177126A1 (en) * | 2003-02-18 | 2004-09-09 | Chaparral Network Storage, Inc. | Broadcast bridge apparatus for transferring data to redundant memory subsystems in a storage controller |
US20040230869A1 (en) * | 2003-03-19 | 2004-11-18 | Stmicroelectronics S.R.I. | Integrated memory system |
US7730357B2 (en) * | 2003-03-19 | 2010-06-01 | Rino Micheloni | Integrated memory system |
US20050149668A1 (en) * | 2003-05-22 | 2005-07-07 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US8151046B2 (en) | 2003-05-22 | 2012-04-03 | Hitachi, Ltd. | Disk array apparatus and method for controlling the same |
US8200898B2 (en) | 2003-05-22 | 2012-06-12 | Hitachi, Ltd. | Storage apparatus and method for controlling the same |
US7080201B2 (en) | 2003-05-22 | 2006-07-18 | Hitachi, Ltd. | Disk array apparatus and method for controlling the same |
US20060206660A1 (en) * | 2003-05-22 | 2006-09-14 | Hiromi Matsushige | Storage unit and circuit for shaping communication signal |
US7523258B2 (en) | 2003-05-22 | 2009-04-21 | Hitachi, Ltd. | Disk array apparatus and method for controlling the same |
US7480765B2 (en) | 2003-05-22 | 2009-01-20 | Hitachi, Ltd. | Storage unit and circuit for shaping communication signal |
US20080301365A1 (en) * | 2003-05-22 | 2008-12-04 | Hiromi Matsushige | Storage unit and circuit for shaping communication signal |
US7461203B2 (en) | 2003-05-22 | 2008-12-02 | Hitachi, Ltd. | Disk array apparatus and method for controlling the same |
US20050149669A1 (en) * | 2003-05-22 | 2005-07-07 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US20050149670A1 (en) * | 2003-05-22 | 2005-07-07 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US8429342B2 (en) | 2003-05-22 | 2013-04-23 | Hitachi, Ltd. | Drive apparatus and method for controlling the same |
US7685362B2 (en) | 2003-05-22 | 2010-03-23 | Hitachi, Ltd. | Storage unit and circuit for shaping communication signal |
US20040236908A1 (en) * | 2003-05-22 | 2004-11-25 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US20050149671A1 (en) * | 2003-05-22 | 2005-07-07 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US20050149674A1 (en) * | 2003-05-22 | 2005-07-07 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US20050149672A1 (en) * | 2003-05-22 | 2005-07-07 | Katsuyoshi Suzuki | Disk array apparatus and method for controlling the same |
US7587548B2 (en) | 2003-05-22 | 2009-09-08 | Hitachi, Ltd. | Disk array apparatus and method for controlling the same |
US7395285B2 (en) * | 2003-06-30 | 2008-07-01 | Matsushita Electric Industrial Co., Ltd. | Garbage collection system |
US20060074988A1 (en) * | 2003-06-30 | 2006-04-06 | Yuko Imanishi | Garbage collection system |
US7234101B1 (en) * | 2003-08-27 | 2007-06-19 | Qlogic, Corporation | Method and system for providing data integrity in storage systems |
US7667731B2 (en) | 2003-09-30 | 2010-02-23 | At&T Intellectual Property I, L.P. | Video recorder |
US11482062B2 (en) | 2003-09-30 | 2022-10-25 | Intellectual Ventures Ii Llc | Video recorder |
US20050068417A1 (en) * | 2003-09-30 | 2005-03-31 | Kreiner Barrett Morris | Video recorder |
US20050068429A1 (en) * | 2003-09-30 | 2005-03-31 | Kreiner Barrett Morris | Video recorder |
US20050078186A1 (en) * | 2003-09-30 | 2005-04-14 | Kreiner Barrett Morris | Video recorder |
US9934628B2 (en) | 2003-09-30 | 2018-04-03 | Chanyu Holdings, Llc | Video recorder |
US10559141B2 (en) | 2003-09-30 | 2020-02-11 | Chanyu Holdings, Llc | Video recorder |
US10950073B2 (en) | 2003-09-30 | 2021-03-16 | Chanyu Holdings, Llc | Video recorder |
US20100085430A1 (en) * | 2003-09-30 | 2010-04-08 | Barrett Morris Kreiner | Video Recorder |
US7505673B2 (en) * | 2003-09-30 | 2009-03-17 | At&T Intellectual Property I, L.P. | Video recorder for detection of occurrences |
US20050154942A1 (en) * | 2003-11-28 | 2005-07-14 | Azuma Kano | Disk array system and method for controlling disk array system |
US20050117468A1 (en) * | 2003-11-28 | 2005-06-02 | Azuma Kano | Disk array system and method of controlling disk array system |
US7203135B2 (en) | 2003-11-28 | 2007-04-10 | Hitachi, Ltd. | Disk array system and method for controlling disk array system |
US7453774B2 (en) | 2003-11-28 | 2008-11-18 | Hitachi, Ltd. | Disk array system |
US7865665B2 (en) | 2003-11-28 | 2011-01-04 | Hitachi, Ltd. | Storage system for checking data coincidence between a cache memory and a disk drive |
US7447121B2 (en) | 2003-11-28 | 2008-11-04 | Hitachi, Ltd. | Disk array system |
US7057981B2 (en) | 2003-11-28 | 2006-06-06 | Hitachi, Ltd. | Disk array system and method for controlling disk array system |
US20050120263A1 (en) * | 2003-11-28 | 2005-06-02 | Azuma Kano | Disk array system and method for controlling disk array system |
US8468300B2 (en) | 2003-11-28 | 2013-06-18 | Hitachi, Ltd. | Storage system having plural controllers and an expansion housing with drive units |
US7200074B2 (en) | 2003-11-28 | 2007-04-03 | Hitachi, Ltd. | Disk array system and method for controlling disk array system |
US20050117462A1 (en) * | 2003-11-28 | 2005-06-02 | Azuma Kano | Disk array system and method for controlling disk array system |
US20050141184A1 (en) * | 2003-12-25 | 2005-06-30 | Hiroshi Suzuki | Storage system |
US7423354B2 (en) | 2003-12-25 | 2008-09-09 | Hitachi, Ltd. | Storage system |
US20070170782A1 (en) * | 2003-12-25 | 2007-07-26 | Hiroshi Suzuki | Storage system |
US7671485B2 (en) | 2003-12-25 | 2010-03-02 | Hitachi, Ltd. | Storage system |
US7200603B1 (en) * | 2004-01-08 | 2007-04-03 | Network Appliance, Inc. | In a data storage server, for each subsets which does not contain compressed data after the compression, a predetermined value is stored in the corresponding entry of the corresponding compression group to indicate that corresponding data is compressed |
US8015442B2 (en) | 2004-02-04 | 2011-09-06 | Hitachi, Ltd. | Anomaly notification control in disk array |
US7457981B2 (en) | 2004-02-04 | 2008-11-25 | Hitachi, Ltd. | Anomaly notification control in disk array |
US8365013B2 (en) | 2004-02-04 | 2013-01-29 | Hitachi, Ltd. | Anomaly notification control in disk array |
US7475283B2 (en) | 2004-02-04 | 2009-01-06 | Hitachi, Ltd. | Anomaly notification control in disk array |
US20060255409A1 (en) * | 2004-02-04 | 2006-11-16 | Seiki Morita | Anomaly notification control in disk array |
US7823010B2 (en) | 2004-02-04 | 2010-10-26 | Hitachi, Ltd. | Anomaly notification control in disk array |
EP1577774A2 (en) | 2004-02-19 | 2005-09-21 | Nec Corporation | Semiconductor storage data striping |
EP1577774A3 (en) * | 2004-02-19 | 2010-06-09 | Nec Corporation | Method of data writing to and data reading from storage device and data storage system |
US20050210323A1 (en) * | 2004-03-05 | 2005-09-22 | Batchelor Gary W | Scanning modified data during power loss |
US7260695B2 (en) * | 2004-03-05 | 2007-08-21 | International Business Machines Corporation | Scanning modified data during power loss |
US7669190B2 (en) | 2004-05-18 | 2010-02-23 | Qlogic, Corporation | Method and system for efficiently recording processor events in host bus adapters |
KR100802666B1 (en) | 2004-08-27 | 2008-02-12 | 인피니언 테크놀로지스 아게 | Circuit arrangement and method for operating such a circuit arrangement |
US20060053236A1 (en) * | 2004-09-08 | 2006-03-09 | Sonksen Bradley S | Method and system for optimizing DMA channel selection |
US7577772B2 (en) | 2004-09-08 | 2009-08-18 | Qlogic, Corporation | Method and system for optimizing DMA channel selection |
US20070266205A1 (en) * | 2004-09-22 | 2007-11-15 | Bevilacqua John F | System and Method for Customization of Network Controller Behavior, Based on Application-Specific Inputs |
WO2006036809A3 (en) * | 2004-09-22 | 2006-06-01 | Xyratex Technology Ltd | System and method for customization of network controller behavior, based on application-specific inputs |
US20100325091A1 (en) * | 2004-11-16 | 2010-12-23 | Petruzzo Stephen E | Data Mirroring Method |
US20060271605A1 (en) * | 2004-11-16 | 2006-11-30 | Petruzzo Stephen E | Data Mirroring System and Method |
US8473465B2 (en) | 2004-11-16 | 2013-06-25 | Greentec-Usa, Inc. | Data mirroring system |
US8401999B2 (en) | 2004-11-16 | 2013-03-19 | Greentec-Usa, Inc. | Data mirroring method |
US7822715B2 (en) | 2004-11-16 | 2010-10-26 | Petruzzo Stephen E | Data mirroring method |
US20100030754A1 (en) * | 2004-11-16 | 2010-02-04 | Petruzzo Stephen E | Data Backup Method |
US20110035563A1 (en) * | 2004-11-16 | 2011-02-10 | Petruzzo Stephen E | Data Mirroring System |
US20060253731A1 (en) * | 2004-11-16 | 2006-11-09 | Petruzzo Stephen E | Data Backup System and Method |
US20060259723A1 (en) * | 2004-11-16 | 2006-11-16 | Petruzzo Stephen E | System and method for backing up data |
US7627776B2 (en) | 2004-11-16 | 2009-12-01 | Petruzzo Stephen E | Data backup method |
US20060129781A1 (en) * | 2004-12-15 | 2006-06-15 | Gellai Andrew P | Offline configuration simulator |
US7543096B2 (en) | 2005-01-20 | 2009-06-02 | Dot Hill Systems Corporation | Safe message transfers on PCI-Express link from RAID controller to receiver-programmable window of partner RAID controller CPU memory |
US20060161709A1 (en) * | 2005-01-20 | 2006-07-20 | Dot Hill Systems Corporation | Safe message transfers on PCI-Express link from RAID controller to receiver-programmable window of partner RAID controller CPU memory |
US7315911B2 (en) | 2005-01-20 | 2008-01-01 | Dot Hill Systems Corporation | Method for efficient inter-processor communication in an active-active RAID system using PCI-express links |
US20060161707A1 (en) * | 2005-01-20 | 2006-07-20 | Dot Hill Systems Corporation | Method for efficient inter-processor communication in an active-active RAID system using PCI-express links |
US7392437B2 (en) | 2005-01-20 | 2008-06-24 | Qlogic, Corporation | Method and system for testing host bus adapters |
US20060161702A1 (en) * | 2005-01-20 | 2006-07-20 | Bowlby Gavin J | Method and system for testing host bus adapters |
US20060230215A1 (en) * | 2005-04-06 | 2006-10-12 | Woodral David E | Elastic buffer module for PCI express devices |
US7281077B2 (en) | 2005-04-06 | 2007-10-09 | Qlogic, Corporation | Elastic buffer module for PCI express devices |
US20070234115A1 (en) * | 2006-04-04 | 2007-10-04 | Nobuyuki Saika | Backup system and backup method |
US7487390B2 (en) * | 2006-04-04 | 2009-02-03 | Hitachi, Ltd. | Backup system and backup method |
US7536508B2 (en) | 2006-06-30 | 2009-05-19 | Dot Hill Systems Corporation | System and method for sharing SATA drives in active-active RAID controller system |
US20080005470A1 (en) * | 2006-06-30 | 2008-01-03 | Dot Hill Systems Corporation | System and method for sharing sata drives in active-active raid controller system |
US8645816B1 (en) * | 2006-08-08 | 2014-02-04 | Emc Corporation | Customizing user documentation |
US8504762B2 (en) * | 2006-08-09 | 2013-08-06 | Hitachi Ulsi Systems Co., Ltd. | Flash memory storage device with data interface |
US20120239865A1 (en) * | 2006-08-09 | 2012-09-20 | Hitachi Ulsi Systems Co., Ltd. | Storage device |
US7594134B1 (en) * | 2006-08-14 | 2009-09-22 | Network Appliance, Inc. | Dual access pathways to serially-connected mass data storage units |
US20110047356A2 (en) * | 2006-12-06 | 2011-02-24 | Fusion-Io, Inc. | Apparatus,system,and method for managing commands of solid-state storage using bank interleave |
US8261005B2 (en) | 2006-12-06 | 2012-09-04 | Fusion-Io, Inc. | Apparatus, system, and method for managing data in a storage device with an empty data token directive |
US7778020B2 (en) | 2006-12-06 | 2010-08-17 | Fusion Multisystems, Inc. | Apparatus, system, and method for a modular blade |
US11960412B2 (en) | 2006-12-06 | 2024-04-16 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US8402201B2 (en) | 2006-12-06 | 2013-03-19 | Fusion-Io, Inc. | Apparatus, system, and method for storage space recovery in solid-state storage |
US11847066B2 (en) | 2006-12-06 | 2023-12-19 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US11640359B2 (en) | 2006-12-06 | 2023-05-02 | Unification Technologies Llc | Systems and methods for identifying storage resources that are not in use |
US11573909B2 (en) | 2006-12-06 | 2023-02-07 | Unification Technologies Llc | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US20080140910A1 (en) * | 2006-12-06 | 2008-06-12 | David Flynn | Apparatus, system, and method for managing data in a storage device with an empty data token directive |
US8412979B2 (en) | 2006-12-06 | 2013-04-02 | Fusion-Io, Inc. | Apparatus, system, and method for data storage using progressive raid |
US20080168304A1 (en) * | 2006-12-06 | 2008-07-10 | David Flynn | Apparatus, system, and method for data storage using progressive raid |
US20080183953A1 (en) * | 2006-12-06 | 2008-07-31 | David Flynn | Apparatus, system, and method for storage space recovery in solid-state storage |
US20110047437A1 (en) * | 2006-12-06 | 2011-02-24 | Fusion-Io, Inc. | Apparatus, system, and method for graceful cache device degradation |
US8412904B2 (en) | 2006-12-06 | 2013-04-02 | Fusion-Io, Inc. | Apparatus, system, and method for managing concurrent storage requests |
US9824027B2 (en) | 2006-12-06 | 2017-11-21 | Sandisk Technologies Llc | Apparatus, system, and method for a storage area network |
US9734086B2 (en) | 2006-12-06 | 2017-08-15 | Sandisk Technologies Llc | Apparatus, system, and method for a device shared between multiple independent hosts |
US7934055B2 (en) | 2006-12-06 | 2011-04-26 | Fusion-io, Inc | Apparatus, system, and method for a shared, front-end, distributed RAID |
US20090132760A1 (en) * | 2006-12-06 | 2009-05-21 | David Flynn | Apparatus, system, and method for solid-state storage as cache for high-capacity, non-volatile storage |
US9575902B2 (en) | 2006-12-06 | 2017-02-21 | Longitude Enterprise Flash S.A.R.L. | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US20110157992A1 (en) * | 2006-12-06 | 2011-06-30 | Fusion-Io, Inc. | Apparatus, system, and method for biasing data in a solid-state storage device |
US8266496B2 (en) | 2006-12-06 | 2012-09-11 | Fusion-Io, Inc. | Apparatus, system, and method for managing data using a data pipeline |
US20110179225A1 (en) * | 2006-12-06 | 2011-07-21 | Fusion-Io, Inc. | Apparatus, system, and method for a shared, front-end, distributed raid |
US9495241B2 (en) | 2006-12-06 | 2016-11-15 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for adaptive data storage |
US8015440B2 (en) | 2006-12-06 | 2011-09-06 | Fusion-Io, Inc. | Apparatus, system, and method for data storage using progressive raid |
US9454492B2 (en) | 2006-12-06 | 2016-09-27 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for storage parallelism |
US8019938B2 (en) | 2006-12-06 | 2011-09-13 | Fusion-Io, Inc. | Apparatus, system, and method for solid-state storage as cache for high-capacity, non-volatile storage |
US8019940B2 (en) | 2006-12-06 | 2011-09-13 | Fusion-Io, Inc. | Apparatus, system, and method for a front-end, distributed raid |
US8074011B2 (en) | 2006-12-06 | 2011-12-06 | Fusion-Io, Inc. | Apparatus, system, and method for storage space recovery after reaching a read count limit |
US8443134B2 (en) | 2006-12-06 | 2013-05-14 | Fusion-Io, Inc. | Apparatus, system, and method for graceful cache device degradation |
US9116823B2 (en) | 2006-12-06 | 2015-08-25 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for adaptive error-correction coding |
US20090125671A1 (en) * | 2006-12-06 | 2009-05-14 | David Flynn | Apparatus, system, and method for storage space recovery after reaching a read count limit |
US20080229079A1 (en) * | 2006-12-06 | 2008-09-18 | David Flynn | Apparatus, system, and method for managing commands of solid-state storage using bank interleave |
US8482993B2 (en) | 2006-12-06 | 2013-07-09 | Fusion-Io, Inc. | Apparatus, system, and method for managing data in a solid-state storage device |
US8762658B2 (en) | 2006-12-06 | 2014-06-24 | Fusion-Io, Inc. | Systems and methods for persistent deallocation |
US8756375B2 (en) | 2006-12-06 | 2014-06-17 | Fusion-Io, Inc. | Non-volatile cache |
US8189407B2 (en) | 2006-12-06 | 2012-05-29 | Fusion-Io, Inc. | Apparatus, system, and method for biasing data in a solid-state storage device |
US20080256183A1 (en) * | 2006-12-06 | 2008-10-16 | David Flynn | Apparatus, system, and method for a front-end, distributed raid |
US8601211B2 (en) | 2006-12-06 | 2013-12-03 | Fusion-Io, Inc. | Storage system with front-end controller |
US8533569B2 (en) | 2006-12-06 | 2013-09-10 | Fusion-Io, Inc. | Apparatus, system, and method for managing data using a data pipeline |
US7681089B2 (en) | 2007-02-20 | 2010-03-16 | Dot Hill Systems Corporation | Redundant storage controller system with enhanced failure analysis capability |
US20080201616A1 (en) * | 2007-02-20 | 2008-08-21 | Dot Hill Systems Corporation | Redundant storage controller system with enhanced failure analysis capability |
US8296625B2 (en) | 2007-09-06 | 2012-10-23 | Siliconsystems, Inc. | Storage subsystem capable of adjusting ECC settings based on monitored conditions |
US20090070651A1 (en) * | 2007-09-06 | 2009-03-12 | Siliconsystems, Inc. | Storage subsystem capable of adjusting ecc settings based on monitored conditions |
US8095851B2 (en) * | 2007-09-06 | 2012-01-10 | Siliconsystems, Inc. | Storage subsystem capable of adjusting ECC settings based on monitored conditions |
US20090094406A1 (en) * | 2007-10-05 | 2009-04-09 | Joseph Ashwood | Scalable mass data storage device |
US8397011B2 (en) | 2007-10-05 | 2013-03-12 | Joseph Ashwood | Scalable mass data storage device |
US9600184B2 (en) | 2007-12-06 | 2017-03-21 | Sandisk Technologies Llc | Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment |
US8316277B2 (en) | 2007-12-06 | 2012-11-20 | Fusion-Io, Inc. | Apparatus, system, and method for ensuring data validity in a data storage process |
US9104599B2 (en) | 2007-12-06 | 2015-08-11 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for destaging cached data |
US9170754B2 (en) | 2007-12-06 | 2015-10-27 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment |
US9519540B2 (en) | 2007-12-06 | 2016-12-13 | Sandisk Technologies Llc | Apparatus, system, and method for destaging cached data |
US20090150744A1 (en) * | 2007-12-06 | 2009-06-11 | David Flynn | Apparatus, system, and method for ensuring data validity in a data storage process |
US8706968B2 (en) | 2007-12-06 | 2014-04-22 | Fusion-Io, Inc. | Apparatus, system, and method for redundant write caching |
US8489817B2 (en) | 2007-12-06 | 2013-07-16 | Fusion-Io, Inc. | Apparatus, system, and method for caching data |
US20090150641A1 (en) * | 2007-12-06 | 2009-06-11 | David Flynn | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
US8195912B2 (en) | 2007-12-06 | 2012-06-05 | Fusion-io, Inc | Apparatus, system, and method for efficient mapping of virtual and physical addresses |
US20110022801A1 (en) * | 2007-12-06 | 2011-01-27 | David Flynn | Apparatus, system, and method for redundant write caching |
US8270599B2 (en) * | 2008-02-28 | 2012-09-18 | Ciena Corporation | Transparent protocol independent data compression and encryption |
US20090220073A1 (en) * | 2008-02-28 | 2009-09-03 | Nortel Networks Limited | Transparent protocol independent data compression and encryption |
US20090240912A1 (en) * | 2008-03-18 | 2009-09-24 | Apple Inc. | System and method for selectively storing and updating primary storage |
US8412978B2 (en) | 2008-05-16 | 2013-04-02 | Fusion-Io, Inc. | Apparatus, system, and method for managing data storage |
US8195978B2 (en) | 2008-05-16 | 2012-06-05 | Fusion-io, Inc. | Apparatus, system, and method for detecting and replacing failed data storage |
US20090287956A1 (en) * | 2008-05-16 | 2009-11-19 | David Flynn | Apparatus, system, and method for detecting and replacing failed data storage |
US9465560B2 (en) | 2008-06-06 | 2016-10-11 | Pivot3, Inc. | Method and system for data migration in a distributed RAID implementation |
US8621147B2 (en) | 2008-06-06 | 2013-12-31 | Pivot3, Inc. | Method and system for distributed RAID implementation |
US20090307422A1 (en) * | 2008-06-06 | 2009-12-10 | Pivot3 | Method and system for data migration in a distributed raid implementation |
US8316180B2 (en) | 2008-06-06 | 2012-11-20 | Pivot3, Inc. | Method and system for rebuilding data in a distributed RAID system |
US9146695B2 (en) | 2008-06-06 | 2015-09-29 | Pivot3, Inc. | Method and system for distributed RAID implementation |
US8316181B2 (en) | 2008-06-06 | 2012-11-20 | Pivot3, Inc. | Method and system for initializing storage in a storage system |
US8082393B2 (en) | 2008-06-06 | 2011-12-20 | Pivot3 | Method and system for rebuilding data in a distributed RAID system |
US20090307421A1 (en) * | 2008-06-06 | 2009-12-10 | Pivot3 | Method and system for distributed raid implementation |
US8086797B2 (en) | 2008-06-06 | 2011-12-27 | Pivot3 | Method and system for distributing commands to targets |
US8271727B2 (en) | 2008-06-06 | 2012-09-18 | Pivot3, Inc. | Method and system for distributing commands to targets |
US9535632B2 (en) | 2008-06-06 | 2017-01-03 | Pivot3, Inc. | Method and system for distributed raid implementation |
US20090307424A1 (en) * | 2008-06-06 | 2009-12-10 | Pivot3 | Method and system for placement of data on a storage device |
US8261017B2 (en) | 2008-06-06 | 2012-09-04 | Pivot3, Inc. | Method and system for distributed RAID implementation |
US8255625B2 (en) | 2008-06-06 | 2012-08-28 | Pivot3, Inc. | Method and system for placement of data on a storage device |
US8239624B2 (en) | 2008-06-06 | 2012-08-07 | Pivot3, Inc. | Method and system for data migration in a distributed RAID implementation |
US20090307425A1 (en) * | 2008-06-06 | 2009-12-10 | Pivot3 | Method and system for distributing commands to targets |
US20090307426A1 (en) * | 2008-06-06 | 2009-12-10 | Pivot3 | Method and System for Rebuilding Data in a Distributed RAID System |
US8090909B2 (en) | 2008-06-06 | 2012-01-03 | Pivot3 | Method and system for distributed raid implementation |
US8127076B2 (en) | 2008-06-06 | 2012-02-28 | Pivot3 | Method and system for placement of data on a storage device |
US8140753B2 (en) | 2008-06-06 | 2012-03-20 | Pivot3 | Method and system for rebuilding data in a distributed RAID system |
US20090307423A1 (en) * | 2008-06-06 | 2009-12-10 | Pivot3 | Method and system for initializing storage in a storage system |
US8145841B2 (en) | 2008-06-06 | 2012-03-27 | Pivot3 | Method and system for initializing storage in a storage system |
US8219750B2 (en) | 2008-06-30 | 2012-07-10 | Pivot3 | Method and system for execution of applications in conjunction with distributed RAID |
US9086821B2 (en) | 2008-06-30 | 2015-07-21 | Pivot3, Inc. | Method and system for execution of applications in conjunction with raid |
US8417888B2 (en) | 2008-06-30 | 2013-04-09 | Pivot3, Inc. | Method and system for execution of applications in conjunction with raid |
US20110040936A1 (en) * | 2008-06-30 | 2011-02-17 | Pivot3 | Method and system for execution of applications in conjunction with raid |
US8812805B2 (en) * | 2008-08-05 | 2014-08-19 | Broadcom Corporation | Mixed technology storage device that supports a plurality of storage technologies |
US20100037002A1 (en) * | 2008-08-05 | 2010-02-11 | Broadcom Corporation | Mixed technology storage device |
US20100037019A1 (en) * | 2008-08-06 | 2010-02-11 | Sundrani Kapil | Methods and devices for high performance consistency check |
US7971092B2 (en) * | 2008-08-06 | 2011-06-28 | Lsi Corporation | Methods and devices for high performance consistency check |
WO2010051078A1 (en) * | 2008-10-28 | 2010-05-06 | Pivot3 | Method and system for protecting against multiple failures in a raid system |
US8386709B2 (en) | 2008-10-28 | 2013-02-26 | Pivot3, Inc. | Method and system for protecting against multiple failures in a raid system |
US20100106906A1 (en) * | 2008-10-28 | 2010-04-29 | Pivot3 | Method and system for protecting against multiple failures in a raid system |
US8176247B2 (en) | 2008-10-28 | 2012-05-08 | Pivot3 | Method and system for protecting against multiple failures in a RAID system |
US8527841B2 (en) | 2009-03-13 | 2013-09-03 | Fusion-Io, Inc. | Apparatus, system, and method for using multi-level cell solid-state storage as reduced-level cell solid-state storage |
US9306599B2 (en) | 2009-05-18 | 2016-04-05 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for reconfiguring an array of storage elements |
US20100293440A1 (en) * | 2009-05-18 | 2010-11-18 | Jonathan Thatcher | Apparatus, system, and method to increase data integrity in a redundant storage system |
US8832528B2 (en) | 2009-05-18 | 2014-09-09 | Fusion-Io, Inc. | Apparatus, system, and method to increase data integrity in a redundant storage system |
US8738991B2 (en) | 2009-05-18 | 2014-05-27 | Fusion-Io, Inc. | Apparatus, system, and method for reconfiguring an array of storage elements |
US8307258B2 (en) | 2009-05-18 | 2012-11-06 | Fusion-Io, Inc. | Apparatus, system, and method for reconfiguring an array to operate with less storage elements |
US8495460B2 (en) | 2009-05-18 | 2013-07-23 | Fusion-Io, Inc. | Apparatus, system, and method for reconfiguring an array of storage elements |
US8281227B2 (en) | 2009-05-18 | 2012-10-02 | Fusion-Io, Inc. | Apparatus, system, and method to increase data integrity in a redundant storage system |
US20100293439A1 (en) * | 2009-05-18 | 2010-11-18 | David Flynn | Apparatus, system, and method for reconfiguring an array to operate with less storage elements |
US8719501B2 (en) | 2009-09-08 | 2014-05-06 | Fusion-Io, Inc. | Apparatus, system, and method for caching data on a solid-state storage device |
US9983993B2 (en) | 2009-09-09 | 2018-05-29 | Sandisk Technologies Llc | Apparatus, system, and method for conditional and atomic storage operations |
US8788876B2 (en) | 2009-09-29 | 2014-07-22 | Micron Technology, Inc. | Stripe-based memory operation |
US8448018B2 (en) * | 2009-09-29 | 2013-05-21 | Micron Technology, Inc. | Stripe-based memory operation |
US8266501B2 (en) * | 2009-09-29 | 2012-09-11 | Micron Technology, Inc. | Stripe based memory operation |
US20110078496A1 (en) * | 2009-09-29 | 2011-03-31 | Micron Technology, Inc. | Stripe based memory operation |
US20110153798A1 (en) * | 2009-12-22 | 2011-06-23 | Groenendaal Johan Van De | Method and apparatus for providing a remotely managed expandable computer system |
US8667110B2 (en) * | 2009-12-22 | 2014-03-04 | Intel Corporation | Method and apparatus for providing a remotely managed expandable computer system |
US8873286B2 (en) | 2010-01-27 | 2014-10-28 | Intelligent Intellectual Property Holdings 2 Llc | Managing non-volatile media |
US20110182119A1 (en) * | 2010-01-27 | 2011-07-28 | Fusion-Io, Inc. | Apparatus, system, and method for determining a read voltage threshold for solid-state storage media |
US8315092B2 (en) | 2010-01-27 | 2012-11-20 | Fusion-Io, Inc. | Apparatus, system, and method for determining a read voltage threshold for solid-state storage media |
US8661184B2 (en) | 2010-01-27 | 2014-02-25 | Fusion-Io, Inc. | Managing non-volatile media |
US8854882B2 (en) | 2010-01-27 | 2014-10-07 | Intelligent Intellectual Property Holdings 2 Llc | Configuring storage cells |
US8380915B2 (en) | 2010-01-27 | 2013-02-19 | Fusion-Io, Inc. | Apparatus, system, and method for managing solid-state storage media |
US9245653B2 (en) | 2010-03-15 | 2016-01-26 | Intelligent Intellectual Property Holdings 2 Llc | Reduced level cell mode for non-volatile memory |
US20110307689A1 (en) * | 2010-06-11 | 2011-12-15 | Jaewoong Chung | Processor support for hardware transactional memory |
US9880848B2 (en) * | 2010-06-11 | 2018-01-30 | Advanced Micro Devices, Inc. | Processor support for hardware transactional memory |
US20120008506A1 (en) * | 2010-07-12 | 2012-01-12 | International Business Machines Corporation | Detecting intermittent network link failures |
US8917610B2 (en) * | 2010-07-12 | 2014-12-23 | International Business Machines Corporation | Detecting intermittent network link failures |
US10013354B2 (en) | 2010-07-28 | 2018-07-03 | Sandisk Technologies Llc | Apparatus, system, and method for atomic storage operations |
US9910777B2 (en) | 2010-07-28 | 2018-03-06 | Sandisk Technologies Llc | Enhanced integrity through atomic writes in cache |
US20120079313A1 (en) * | 2010-09-24 | 2012-03-29 | Honeywell International Inc. | Distributed memory array supporting random access and file storage operations |
US20130279500A1 (en) * | 2010-11-03 | 2013-10-24 | Broadcom Corporation | Switch module |
US9276801B2 (en) * | 2010-11-03 | 2016-03-01 | Broadcom Corporation | Switch module |
US10133663B2 (en) | 2010-12-17 | 2018-11-20 | Longitude Enterprise Flash S.A.R.L. | Systems and methods for persistent address space management |
US8966184B2 (en) | 2011-01-31 | 2015-02-24 | Intelligent Intellectual Property Holdings 2, LLC. | Apparatus, system, and method for managing eviction of data |
US9092337B2 (en) | 2011-01-31 | 2015-07-28 | Intelligent Intellectual Property Holdings 2 Llc | Apparatus, system, and method for managing eviction of data |
US8874823B2 (en) | 2011-02-15 | 2014-10-28 | Intellectual Property Holdings 2 Llc | Systems and methods for managing data input/output operations |
US9003104B2 (en) | 2011-02-15 | 2015-04-07 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for a file-level cache |
US8825937B2 (en) | 2011-02-25 | 2014-09-02 | Fusion-Io, Inc. | Writing cached data forward on read |
US9141527B2 (en) | 2011-02-25 | 2015-09-22 | Intelligent Intellectual Property Holdings 2 Llc | Managing cache pools |
US8527699B2 (en) | 2011-04-25 | 2013-09-03 | Pivot3, Inc. | Method and system for distributed RAID implementation |
US9201677B2 (en) | 2011-05-23 | 2015-12-01 | Intelligent Intellectual Property Holdings 2 Llc | Managing data input/output operations |
US9251086B2 (en) | 2012-01-24 | 2016-02-02 | SanDisk Technologies, Inc. | Apparatus, system, and method for managing a cache |
US9116812B2 (en) | 2012-01-27 | 2015-08-25 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for a de-duplication cache |
US20150082122A1 (en) * | 2012-05-31 | 2015-03-19 | Aniruddha Nagendran Udipi | Local error detection and global error correction |
US9600359B2 (en) * | 2012-05-31 | 2017-03-21 | Hewlett Packard Enterprise Development Lp | Local error detection and global error correction |
US8804415B2 (en) | 2012-06-19 | 2014-08-12 | Fusion-Io, Inc. | Adaptive voltage range management in non-volatile memory |
US10339056B2 (en) | 2012-07-03 | 2019-07-02 | Sandisk Technologies Llc | Systems, methods and apparatus for cache transfers |
US9612966B2 (en) | 2012-07-03 | 2017-04-04 | Sandisk Technologies Llc | Systems, methods and apparatus for a virtual machine cache |
US8949653B1 (en) * | 2012-08-03 | 2015-02-03 | Symantec Corporation | Evaluating high-availability configuration |
US10346095B2 (en) | 2012-08-31 | 2019-07-09 | Sandisk Technologies, Llc | Systems, methods, and interfaces for adaptive cache persistence |
US9058123B2 (en) | 2012-08-31 | 2015-06-16 | Intelligent Intellectual Property Holdings 2 Llc | Systems, methods, and interfaces for adaptive persistence |
US10359972B2 (en) | 2012-08-31 | 2019-07-23 | Sandisk Technologies Llc | Systems, methods, and interfaces for adaptive persistence |
US9842053B2 (en) | 2013-03-15 | 2017-12-12 | Sandisk Technologies Llc | Systems and methods for persistent cache logging |
US9430508B2 (en) | 2013-12-30 | 2016-08-30 | Microsoft Technology Licensing, Llc | Disk optimized paging for column oriented databases |
US10257255B2 (en) | 2013-12-30 | 2019-04-09 | Microsoft Technology Licensing, Llc | Hierarchical organization for scale-out cluster |
US10366000B2 (en) | 2013-12-30 | 2019-07-30 | Microsoft Technology Licensing, Llc | Re-use of invalidated data in buffers |
US10885005B2 (en) | 2013-12-30 | 2021-01-05 | Microsoft Technology Licensing, Llc | Disk optimized paging for column oriented databases |
US9723054B2 (en) | 2013-12-30 | 2017-08-01 | Microsoft Technology Licensing, Llc | Hierarchical organization for scale-out cluster |
US9898398B2 (en) | 2013-12-30 | 2018-02-20 | Microsoft Technology Licensing, Llc | Re-use of invalidated data in buffers |
US9922060B2 (en) | 2013-12-30 | 2018-03-20 | Microsoft Technology Licensing, Llc | Disk optimized paging for column oriented databases |
US11055233B2 (en) * | 2015-10-27 | 2021-07-06 | Medallia, Inc. | Predictive memory management |
US10715596B2 (en) * | 2016-07-12 | 2020-07-14 | Wiwynn Corporation | Server system and control method for storage unit |
US20180024764A1 (en) * | 2016-07-22 | 2018-01-25 | Intel Corporation | Technologies for accelerating data writes |
CN108008914A (en) * | 2016-10-27 | 2018-05-08 | 华为技术有限公司 | The method, apparatus and ARM equipment of disk management in a kind of ARM equipment |
US10990415B2 (en) * | 2016-10-27 | 2021-04-27 | Huawei Technologies Co., Ltd. | Disk management method and apparatus in ARM device and ARM device |
CN107634865A (en) * | 2017-10-26 | 2018-01-26 | 郑州云海信息技术有限公司 | A kind of Novel storage system and management system |
US11397638B2 (en) * | 2017-12-28 | 2022-07-26 | Micron Technology, Inc. | Memory controller implemented error correction code memory |
US10606693B2 (en) * | 2017-12-28 | 2020-03-31 | Micron Technology, Inc. | Memory controller implemented error correction code memory |
US20190205206A1 (en) * | 2017-12-28 | 2019-07-04 | Micron Technology, Inc. | Memory controller implemented error correction code memory |
US10592173B2 (en) * | 2018-01-10 | 2020-03-17 | International Business Machines Corporation | Increasing storage efficiency of a data protection technique |
Also Published As
Publication number | Publication date |
---|---|
AU2001297837A1 (en) | 2002-11-18 |
WO2002091382A3 (en) | 2003-05-01 |
US6957313B2 (en) | 2005-10-18 |
US20020069337A1 (en) | 2002-06-06 |
WO2002091382A2 (en) | 2002-11-14 |
US6754785B2 (en) | 2004-06-22 |
US6745310B2 (en) | 2004-06-01 |
US20020087823A1 (en) | 2002-07-04 |
US20020069318A1 (en) | 2002-06-06 |
US20020069334A1 (en) | 2002-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20020069317A1 (en) | E-RAID system and method of operating the same | |
US7054998B2 (en) | File mode RAID subsystem | |
US5124987A (en) | Logical track write scheduling system for a parallel disk drive array data storage subsystem | |
EP1810173B1 (en) | System and method for configuring memory devices for use in a network | |
US6009481A (en) | Mass storage system using internal system-level mirroring | |
US5430855A (en) | Disk drive array memory system using nonuniform disk drives | |
US8024525B2 (en) | Storage control unit with memory cache protection via recorded log | |
US6985995B2 (en) | Data file migration from a mirrored RAID to a non-mirrored XOR-based RAID without rewriting the data | |
JP3304115B2 (en) | Configurable redundant array storage | |
US7228381B2 (en) | Storage system using fast storage device for storing redundant data | |
Teigland et al. | Volume Managers in Linux. | |
WO2002013033A1 (en) | Data storage system | |
US20030159082A1 (en) | Apparatus for reducing the overhead of cache coherency processing on each primary controller and increasing the overall throughput of the system | |
US8140886B2 (en) | Apparatus, system, and method for virtual storage access method volume data set recovery | |
JP3096392B2 (en) | Method and apparatus for full motion video network support using RAID | |
US20070214313A1 (en) | Apparatus, system, and method for concurrent RAID array relocation | |
US20020144028A1 (en) | Method and apparatus for increased performance of sequential I/O operations over busses of differing speeds | |
Wiebalck | ClusterRAID: Architecture and Prototype of a Distributed Fault-Tolerant Mass Storage System for Clusters | |
Scriba et al. | Disk and Storage System Basics | |
KR20010028691A (en) | RAID Level Y Using Disk Management Method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |