WO2002025446A2 - Method and system of allocating storage resources in a storage area network - Google Patents

Method and system of allocating storage resources in a storage area network

Info

Publication number
WO2002025446A2
WO2002025446A2 (PCT/US2000/042349)
Authority
WO
WIPO (PCT)
Prior art keywords
lun
storage
parameter
target
read
Prior art date
Application number
PCT/US2000/042349
Other languages
French (fr)
Other versions
WO2002025446A3 (en
Inventor
John W. Bates
Nicos A. Vekiarides
Original Assignee
Storageapps Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Storageapps Inc. filed Critical Storageapps Inc.
Priority to AU2001234413A priority Critical patent/AU2001234413A1/en
Publication of WO2002025446A2 publication Critical patent/WO2002025446A2/en
Publication of WO2002025446A3 publication Critical patent/WO2002025446A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/062Securing storage systems
    • G06F3/0622Securing storage systems in relation to access
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Definitions

  • the invention relates generally to the field of storage area networks, and more particularly to the allocation of storage in storage area networks.
  • Host-based approaches include those where storage management functionality is loaded and operated from the host (server).
  • Storage-based solutions are those where storage management functionality is loaded and operated from a storage array controller (or similar device).
  • Host-based approaches typically focus on application servers that run critical applications. For example, an application server may execute trading calculations for a trading room floor.
  • Application servers are typically expensive, and are essential to a user's daily operations.
  • Host-based storage solutions that run on application servers require processor cycles, and thus have a negative effect on the performance of the application server.
  • host-based solutions suffer from difficulties in managing software and hardware interoperability in a multi-platform environment. Some of these difficulties include: managing separate licenses for each operating system; training system administrators on the various operating systems and host-based software; managing upgrades of operating systems; and managing inter-host dependencies when some functionality needs to be altered.
  • Storage-based solutions suffer from many of the same drawbacks.
  • compatibility between primary and target storage sites may become an issue.
  • This compatibility problem may require a user to obtain hardware and software from the same provider or vendor.
  • hardware and software compatibility may also be limited to a particular range of versions provided by the vendor.
  • hence, if another vendor develops superior disk technology or connectivity solutions, a user may have difficulty introducing them into their existing environment.
  • Storage area networks (SANs) have been developed as a more recent approach to providing access to data in computer networks, addressing some of the above concerns.
  • a SAN is a network linking servers or workstations to storage devices.
  • a SAN is intended to increase the pool of storage available to each server in the computer network, while reducing the data supply demand on servers.
  • Conventional SANs still may suffer from some of the above discussed problems, and some of their own.
  • SANs may also suffer from problems associated with storage allocation.
  • One problem relates to determining how to present the storage itself. For instance, it must be determined which storage devices shall be designated to provide storage for which servers.
  • a further problem relates to storage security. It may be difficult for a SAN administrator to restrict access by certain servers to particular storage modules, while allowing other servers to access them. SAN administrators also have to confront the difficulty of coordinating networks that include a wide variety of different storage device types and manufacturers, communication protocols, and other variations.
  • the present invention is directed to a system for allocating storage resources in a storage area network.
  • a logical unit number (LUN) mapper receives at least one storage request parameter and maps the storage request parameters to at least one physical LUN.
  • the LUN mapper includes at least one LUN map.
  • the storage request parameters include a host id parameter, a target LUN parameter, and a target host bus adaptor (HBA) parameter.
  • the LUN mapper uses the host id parameter to select the one of the LUN maps that corresponds to the host id parameter.
  • the LUN mapper applies the target LUN parameter and the target HBA parameter to the selected LUN map to locate the physical LUN(s) stored in the selected LUN map.
  • the LUN mapper issues the received read/write storage request to at least one storage device that houses the physical LUN(s).
  • the one or more storage devices are located in the storage area network.
  • a method for allocating storage in a storage area network is provided.
  • a read/write storage request is received from a host computer.
  • the read/write storage request is resolved.
  • a physical LUN is determined from the resolved read/write storage request.
  • a read/write storage request is issued to a storage device in a storage area network.
  • the storage device corresponds to the determined physical LUN.
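As a concrete illustration of this claimed flow, here is a minimal Python sketch, assuming a hypothetical dict-based LUN map and invented host and LUN labels; the patent itself prescribes no particular code or data structure.

```python
# Hedged sketch of the claimed method: receive a read/write request, resolve
# its parameters, map them to a physical LUN, and issue the request to the
# storage device that houses it. All names here are hypothetical.
from typing import NamedTuple

class StorageRequest(NamedTuple):
    host_id: str     # identifies the requesting host
    target_hba: int  # allocator HBA port on which the request arrived
    target_lun: int  # virtual LUN addressed by the host
    op: str          # "read" or "write"

# One LUN map per host: (target_hba, target_lun) -> physical LUN label.
LUN_MAPS = {
    "host-a": {(0, 0): "array1:lun5", (0, 1): "array2:lun0"},
}

def allocate(request: StorageRequest) -> str:
    lun_map = LUN_MAPS[request.host_id]          # select the map by host id
    physical = lun_map[(request.target_hba, request.target_lun)]
    # A real allocator would now reissue the read/write to the storage
    # device housing `physical`; the sketch just returns the label.
    return physical

print(allocate(StorageRequest("host-a", 0, 1, "read")))  # -> array2:lun0
```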
  • FIG. 1 illustrates a block diagram of an example storage allocator network configuration, according to embodiments of the present invention.
  • FIG. 2 illustrates a block diagram of a storage allocator, according to an exemplary embodiment of the present invention.
  • FIG. 3 illustrates an exemplary set of LUN maps, according to embodiments of the present invention.
  • FIG. 4 shows a flowchart providing detailed operational steps of an example embodiment of the present invention.
  • FIGS. 5-7 illustrate storage area network implementations of the present invention, according to embodiments of the present invention.
  • FIG. 8 illustrates an example data communication network, according to an embodiment of the present invention.
  • FIG. 9 shows a simplified five-layered communication model, based on an Open System Interconnection (OSI) reference model.
  • FIG. 10 shows an example of a computer system for implementing the present invention.
  • the present invention is directed to a method and system of allocating resources in a storage area network (SAN).
  • the invention allocates storage resources in a SAN by mapping logical unit numbers (LUNs) representative of the storage resources to individual hosts, thereby allowing dynamic management of storage devices and hosts in the SAN.
  • each mapped LUN can be unique to a particular host or shared among different hosts.
  • FIG. 1 illustrates a block diagram of an example network configuration 100, according to embodiments of the present invention.
  • Network configuration 100 comprises server(s) 102, a storage allocator 104, and storage 106.
  • storage allocator 104 receives data I/O requests from server(s) 102, maps the data I/O requests to physical storage I/O requests, and forwards them to storage 106.
  • Server(s) 102 includes one or more hosts or servers that may be present in a data communication network.
  • Server(s) 102 manage network resources.
  • one or more of the servers of server(s) 102 may be file servers, network servers, application servers, database servers, or other types of server.
  • Server(s) 102 may comprise single processor or multi-processor servers.
  • Server(s) 102 process requests for data and applications from networked computers, workstations, and other peripheral devices otherwise known or described elsewhere herein.
  • Server(s) 102 output requests to storage allocator 104 to write to, or read data from storage 106.
  • Storage allocator 104 receives storage read and write requests from server(s) 102 via first communications link 108.
  • the storage read and write requests include references to one or more locations in a logical data space recognized by the requesting host.
  • Storage allocator 104 parses the storage read and write requests by extracting various parameters that are included in the requests.
  • each storage read and write request from the host includes a host id, a target HBA (host bus adaptor), and a target LUN.
  • a LUN corresponds to a label for a subunit of storage on a target storage device (virtual or actual), such as a disk drive.
  • Storage allocator 104 uses the parsed read and write request to determine physical storage locations corresponding to the target locations in the logical data space.
  • one or more LUN maps in storage allocator 104 are used to map the virtual data locations to physical locations in storage 106.
  • Storage allocator 104 outputs read and write requests to physical storage/LUNs.
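The parsing step can be pictured as below. This is a sketch under the assumption of a toy dict "wire format"; an actual allocator would decode SCSI commands or Fibre Channel frames, not Python dicts.

```python
# Toy parsing step: extract the host id, target HBA, and target LUN that the
# mapping stage needs. The dict input stands in for a real SCSI/FC request.
def parse_request(raw: dict) -> tuple[str, int, int]:
    return raw["host_id"], raw["target_hba"], raw["target_lun"]

host_id, target_hba, target_lun = parse_request(
    {"host_id": "host-a", "target_hba": 0, "target_lun": 1, "op": "read"}
)
```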
  • First communications link 108 may be an Ethernet link, a fibre channel link, a SCSI link, or other applicable type of communications link otherwise known or described elsewhere herein.
  • Storage 106 receives storage read and write requests from storage allocator 104 via second communications link 110. Storage 106 routes the received physical storage read and write requests to the corresponding storage device(s), which respond by reading or writing data as requested.
  • Storage 106 comprises one or more storage devices that may be directly coupled to storage allocator 104, and/or may be interconnected in a storage area network configuration that is coupled to storage allocator 104.
  • storage 106 may comprise one or more of a variety of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, disk arrays, and other applicable types of storage devices otherwise known or described elsewhere herein.
  • Storage devices in storage 106 may be interconnected via SCSI and fibre channel links, and other types of links, in a variety of topologies. Example topologies for storage 106 are more fully described below.
  • Second communications link 110 may be an Ethernet link, fibre channel link, a SCSI link, or other applicable type of communications link otherwise known or described elsewhere herein.
  • available storage is partitioned without any regard necessarily to the physical divisions of storage devices, and the partitions are stored in the LUN maps. These partitions are referred to as virtual or target LUNs. Portions of, or all of, available physical storage may be partitioned and presented as virtual LUNs. Each host may be presented different portions of physical storage via the LUN maps, and/or some hosts may be presented with the same or overlapping portions. LUN maps allow the storage allocator of the present invention to make available to a host a set of storage that may overlap with, or be completely independent from, that made available to another host.
  • the virtual LUN configurations are stored in storage allocator 104 in LUN maps corresponding to each host.
  • a LUN map may be chosen, and then used to convert virtual storage read or write requests by the respective host to an actual physical storage location.
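To make the overlap idea concrete, the following sketch builds two hypothetical per-host LUN maps in which one physical partition is shared and the others are private; device and host names are invented.

```python
# Two per-host LUN maps sharing one physical partition. Keys are
# (target_hba, target_lun) pairs; values are physical LUN labels.
lun_maps = {
    "host-a": {(0, 0): "array1:lun0",   # shared with host-b
               (0, 1): "array1:lun1"},  # private to host-a
    "host-b": {(0, 0): "array1:lun0",   # same physical LUN, shared view
               (0, 1): "array2:lun3"},  # private to host-b
}
# Both hosts see target LUNs 0 and 1, but only LUN 0 overlaps physically.
```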
  • Embodiments for the storage allocator 104 of the present invention are described in further detail below.
  • Arbitrated Loop: A shared 100 MBps Fibre Channel transport supporting up to 126 devices and 1 fabric attachment.
  • Fabric: One or more Fibre Channel switches in a networked topology.
  • HBA: Host bus adapter; an interface between a server or workstation bus and a Fibre Channel network.
  • Hub: In Fibre Channel, a wiring concentrator that collapses a loop topology into a physical star topology.
  • Initiator: On a Fibre Channel network, typically a server or a workstation that initiates transactions to disk or tape targets.
  • JBOD: Just a bunch of disks; typically configured as an Arbitrated Loop segment in a single chassis.
  • LAN: Local area network; a network linking multiple devices in a single geographical location.
  • Logical Unit: The entity within a target that executes I/O commands. For example, SCSI I/O commands are sent to a target and executed by a logical unit within that target. A SCSI physical disk typically has a single logical unit. Tape drives and array controllers may incorporate multiple logical units to which I/O commands can be addressed. Typically, each logical unit exported by an array controller corresponds to a virtual disk.
  • LUN: Logical unit number; the identifier of a logical unit within a target, such as a SCSI identifier.
  • SCSI-3: A SCSI standard that defines transmission of SCSI protocol over serial links.
  • Storage: Any device used to store data; typically, magnetic disk media or tape.
  • Switch: A device providing full bandwidth per port and high-speed routing of data via link-level addressing.
  • Target: Typically a disk array or a tape subsystem on a Fibre Channel network.
  • Topology: The physical or logical arrangement of devices in a networked configuration.
  • WAN: Wide area network; a network linking geographically remote sites.
  • a storage area network is a high-speed sub-network of shared storage devices.
  • a SAN operates to provide access to the shared storage devices for all servers on a local area network (LAN), wide area network (WAN), or other network coupled to the SAN.
  • A SAN configuration potentially provides an entire pool of available storage to each network server, eliminating the conventional dedicated connection between server and disk. Furthermore, because a server's mass data storage requirements are fulfilled by the SAN, the server's processing power is largely conserved for the handling of applications rather than the handling of data requests.
  • FIG. 8 illustrates an example data communication network 800, according to an embodiment of the present invention.
  • Network 800 includes a variety of devices which support communication between many different entities, including businesses, universities, individuals, government, and financial institutions.
  • a communication network, or combination of networks interconnects the elements of network 800.
  • Network 800 supports many different types of communication links implemented in a variety of architectures.
  • Network 800 may be considered to be an example of a storage area network that is applicable to the present invention.
  • Network 800 comprises a pool of storage devices, including disk arrays 820, 822, 824, 828, 830, and 832.
  • Network 800 provides access to this pool of storage devices to hosts/servers comprised by or coupled to network 800.
  • Network 800 may be configured as point-to-point, arbitrated loop, or fabric topologies, or combinations thereof.
  • Network 800 comprises a switch 812.
  • Switches such as switch 812, typically filter and forward packets between LAN segments.
  • Switch 812 may be an Ethernet switch, fast-Ethernet switch, or another type of switching device known to persons skilled in the relevant art(s).
  • switch 812 may be replaced by a router or a hub.
  • a router generally moves data from one local segment to another, and to the telecommunications carrier, such as AT&T or WorldCom, for remote sites.
  • a hub is a common connection point for devices in a network. Suitable hubs include passive hubs, intelligent hubs, and switching hubs, and other hub types known to persons skilled in the relevant art(s).
  • a personal computer 802 may interface with network 800 via switch 812.
  • Further types of terminal equipment and devices that may interface with network 800 may include local area network (LAN) connections (e.g., other switches, routers, or hubs), personal computers with modems, content servers of multi-media, audio, video, and other information, pocket organizers, Personal Data Assistants (PDAs), cellular phones, Wireless Application Protocol (WAP) phones, and set-top boxes.
  • Network 800 includes one or more hosts or servers.
  • network 800 comprises server 814 and server 816.
  • Servers 814 and 816 provide devices 802, 804, 806, 808, and 810 with network resources via switch 812.
  • Servers 814 and 816 are typically computer systems that process end-user requests for data and/or applications.
  • servers 814 and 816 provide redundant services.
  • server 814 and server 816 provide different services and thus share the processing load needed to serve the requirements of devices 802, 804, 806, 808, and 810.
  • one or both of servers 814 and 816 are connected to the Internet, and thus server 814 and/or server 816 may provide Internet access to network 800.
  • servers 814 and 816 may be Windows NT servers or UNIX servers, or other servers known to persons skilled in the relevant art(s).
  • a SAN appliance or device as described elsewhere herein may be inserted into network 800, according to embodiments of the present invention.
  • a SAN appliance 818 may be implemented to provide the required connectivity between the storage device networking (disk arrays 820, 822, 824, 828, 830, and 832) and hosts and servers 814 and 816, and to provide the additional functionality of the storage allocator of the present invention described elsewhere herein.
  • Network 800 includes a hub 826.
  • Hub 826 is connected to disk arrays 828, 830, and 832.
  • hub 826 is a fibre channel hub or other device used to allow access to data stored on connected storage devices, such as disk arrays 828, 830, and 832. Further fibre channel hubs may be cascaded with hub 826 to allow for expansion of the SAN, with additional storage devices, servers, and other devices.
  • hub 826 is an arbitrated loop hub.
  • disk arrays 828, 830, and 832 are organized in a ring or loop topology, which is collapsed into a physical star configuration by hub 826.
  • Hub 826 allows the loop to circumvent a disabled or disconnected device while maintaining operation.
  • Network 800 may include one or more switches in addition to switch 812 that interface with storage devices.
  • a fibre channel switch or other high-speed device may be used to allow servers 814 and 816 access to data stored on connected storage devices, such as disk arrays 820, 822, and 824, via appliance 818.
  • Fibre channel switches may be cascaded to allow for the expansion of the SAN, with additional storage devices, servers, and other devices.
  • Disk arrays 820, 822, 824, 828, 830, and 832 are storage devices providing data and application resources to servers 814 and 816 through appliance 818 and hub 826. As shown in FIG. 8, the storage of network 800 is principally accessed by servers 814 and 816 through appliance 818.
  • the storage devices may be fibre channel-ready devices, or SCSI (Small Computer Systems Interface) compatible devices, for example. Fibre channel-to-SCSI bridges may be used to allow SCSI devices to interface with fibre channel hubs and switches, and other fibre channel-ready devices.
  • Alternatively, disk arrays 820, 822, 824, 828, 830, and 832 may instead be other types of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, and other related storage drive types.
  • the topology or architecture of network 800 will depend on the requirements of the particular application, and on the advantages offered by the chosen topology.
  • One or more hubs 826, one or more switches, and/or one or more appliances 818 may be interconnected in any number of combinations to increase network capacity.
  • Disk arrays 820, 822, 824, 828, 830, and 832, or fewer or more disk arrays as required, may be coupled to network 800 via these hubs 826, switches, and appliances 818.
  • FIG. 9 shows a simplified five-layered communication model, based on the Open System Interconnection (OSI) reference model. As shown in FIG. 9, this model includes an application layer 908, a transport layer 910, a network layer 920, a data link layer 930, and a physical layer 940. As would be apparent to persons skilled in the relevant art(s), any number of different layers and network protocols may be used as required by a particular application.
  • Application layer 908 provides functionality for the different tools and information services which are used to access information over the communications network.
  • Example tools used to access information over a network include, but are not limited to, Telnet log-in service 901, IRC chat 902, Web service 903, and SMTP (Simple Mail Transfer Protocol) electronic mail service 906.
  • Web service 903 allows access to HTTP documents 904, and FTP (File Transfer Protocol) and Gopher files 905.
  • Secure Socket Layer (SSL) is an optional protocol used to encrypt communications between a Web browser and Web server.
  • Transport layer 910 provides transmission control functionality using protocols, such as TCP, UDP, SPX, and others, that add information for acknowledgments that blocks of the file have been received.
  • Network layer 920 provides routing functionality by adding network addressing information using protocols such as IP, IPX, and others, that enable data transfer over the network.
  • Data link layer 930 provides information about the type of media on which the data was originated, such as Ethernet, token ring, or fiber distributed data interface (FDDI), and others.
  • Physical layer 940 provides encoding to place the data on the physical transport, such as twisted pair wire, copper wire, fiber optic cable, coaxial cable, and others. Description of this example environment in these terms is provided for convenience only. It is not intended that the invention be limited to application in this example environment. In fact, after reading the description herein, it will become apparent to persons skilled in the relevant art(s) how to implement the invention in alternative environments. Further details on designing, configuring, and operating storage area networks are provided in Tom Clark, "Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel SANs" (Addison-Wesley, 1999).
  • Structural implementations for the storage allocator of the present invention are described at a high-level and at a more detailed level. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. For instance, the present invention as described herein may be implemented in a computer system, application-specific box, or other device. In an embodiment, the present invention may be implemented in a SAN appliance, which provides for an interface between host servers and storage. Such SAN appliances include the SANLinkTM appliance, developed by StorageApps Inc., located in Bridgewater, New Jersey.
  • Storage allocator 104 provides the capability to disassociate the logical representation of a disk (or other storage device) as presented to a host from the physical components that make up the logical disk.
  • the storage allocator of the present invention has the ability to change a LUN as presented to the server from the storage (a process also referred to as "LUN mapping," which involves mapping physical space to give a logical view of the storage).
  • LUN mapping works by assigning each host a separate LUN map. For each incoming command, SCSI or otherwise, the system identifies the host, the target host bus adapter (HBA), and the target LUN. The system uses that information to convert the received virtual or target LUN to an actual or physical LUN, via a LUN map.
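A small mapper class makes the per-command conversion explicit. This is a sketch, not the patent's implementation; the class shape and every name in it are assumptions.

```python
# Per-host LUN maps with an explicit virtual-to-physical conversion step.
class LUNMapper:
    def __init__(self):
        self.maps = {}  # host_id -> {(target_hba, target_lun): physical LUN}

    def assign(self, host_id, target_hba, target_lun, physical_lun):
        self.maps.setdefault(host_id, {})[(target_hba, target_lun)] = physical_lun

    def convert(self, host_id, target_hba, target_lun):
        """Translate a virtual (target) LUN into an actual (physical) LUN."""
        return self.maps[host_id][(target_hba, target_lun)]

mapper = LUNMapper()
mapper.assign("nt-server", target_hba=0, target_lun=0, physical_lun="fc-array:lun1")
assert mapper.convert("nt-server", 0, 0) == "fc-array:lun1"
```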
  • a host can be presented with any size storage pool that is required to meet changing needs.
  • multiple physical LUNs can be merged into a single storage image for a host (i.e., storage "expansion"). All connected storage units can be presented to a host as a single storage pool irrespective of storage area network topology.
  • Any storage subsystem (for example, SCSI or Fibre Channel) can be presented to any host (for example, UNIX or NT).
  • physical LUNs can be partitioned into multiple images for a host (i.e., storage "partitioning").
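One way to realize expansion and partitioning is with block-range segment lists, sketched below; the device names, sizes, and tuple layout are hypothetical, not taken from the patent.

```python
# Hedged sketch of expansion and partitioning using hypothetical block
# ranges. An expanded virtual LUN concatenates segments of several physical
# LUNs; a partitioned physical LUN is sliced into block ranges presented as
# separate virtual LUNs.

# Expansion: virtual LUN 0 spans two physical LUNs back to back.
expanded = {
    0: [("scsi-array:lun2", 0, 1_000_000),  # (physical LUN, start, length)
        ("fc-array:lun0", 0, 2_000_000)],
}

# Partitioning: one 3,000,000-block physical LUN split into three slices.
partitioned = {
    0: [("fc-array:lun1", 0, 1_000_000)],
    1: [("fc-array:lun1", 1_000_000, 1_000_000)],
    2: [("fc-array:lun1", 2_000_000, 1_000_000)],
}

def resolve_block(segments, block):
    """Map a block offset in a virtual LUN to (physical LUN, physical block)."""
    for physical, start, length in segments:
        if block < length:
            return physical, start + block
        block -= length
    raise ValueError("block beyond end of virtual LUN")

print(resolve_block(expanded[0], 1_500_000))  # -> ('fc-array:lun0', 500000)
print(resolve_block(partitioned[1], 10))      # -> ('fc-array:lun1', 1000010)
```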
  • Storage allocation through LUN mapping provides a number of desirable storage functions, including the following:
  • Partitioning: Through LUN mapping, a user may present a virtual device that is physically a partition of a larger device. Each partition appears to one or more hosts to be an actual physical device. These partitions may be used to share storage from a single disk (or other storage device) across multiple host operating systems.
  • Expansion: Through LUN mapping, a user may present a virtual device consisting of multiple merged physical LUNs, creating an image of an expanded LUN to the outside world. Thus, for example, the user may consolidate both SCSI and Fibre Channel storage systems into the same global storage pool.
  • Security: The user may also manage access to storage via a LUN map mask, which operates to limit the LUN(s) that a host sees. This masking may be used to prohibit a host from accessing data that it does not have permission to access.
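Masking reduces naturally to set membership, as in this sketch (host names hypothetical): a LUN absent from a host's visible set simply does not exist from that host's point of view.

```python
# LUN masking as set membership: a host's visible set omits the LUNs it may
# not access, so masked storage is invisible to it. Names are hypothetical.
visible = {
    "app-server": {0, 1, 2},  # may see virtual LUNs 0-2
    "web-server": {0},        # masked down to a single LUN
}

def check_access(host_id: str, target_lun: int) -> bool:
    return target_lun in visible.get(host_id, set())

assert check_access("web-server", 0)
assert not check_access("web-server", 2)  # LUN 2 is masked for this host
```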
  • the storage allocator of the present invention operates independently of any host. Unlike host-based solutions, which simply mask storage resources and require additional software on every host in the network, the present invention requires no additional software, hardware, or firmware on host machines. The present invention is supported by all platforms and all operating systems. Furthermore, host machines may be added to the network seamlessly, without disruption to the network.
  • FIG. 2 illustrates a block diagram of a storage allocator 104, according to an exemplary embodiment of the present invention.
  • Storage allocator 104 comprises a network interface 202, a read/write storage request parser 204, a LUN mapper 206, and a SAN interface 208.
  • Network interface 202 receives a read/write storage request 210 via first communication link 108, shown in FIG. 1.
  • Network interface 202 may include a host bus adaptor (HBA) that interfaces the internal bus architecture of storage allocator 104 with a fibre channel first communication link 108.
  • Network interface 202 may additionally or alternatively include an Ethernet port when first communication link 108 comprises an Ethernet link. Further interfaces, as would be known to persons skilled in the relevant art(s), for network interface 202 are within the scope and spirit of the invention.
  • Read/write storage request parser 204 receives the read/write storage request 210 from network interface 202, extracts parameters from the read/write storage request 210, and supplies the parameters to LUN mapper 206. These parameters may include a host id, a target HBA, and a target LUN.
  • the host id parameter includes the host id of the storage request initiator server.
  • the target HBA parameter is the particular HBA port address in storage allocator 104 that receives the read/write storage request 210.
  • a server or host may be provided with more than one virtual storage view.
  • the target LUN parameter is the logical unit number of the virtual storage unit to which the read/write storage request 210 is directed. In alternative embodiments, additional or different parameters may be supplied to LUN mapper 206.
  • LUN mapper 206 receives the extracted parameters from read/write storage request parser 204.
  • LUN mapper 206 stores LUN maps corresponding to servers/hosts, as described above. As described above, available storage is partitioned in the LUN maps without any regard necessarily to the physical divisions of storage devices. These partitions are referred to as virtual or target LUNs. Portions of, or all of available physical storage may be partitioned and presented as virtual LUNs. Each host or server may be presented with different portions of physical storage via the LUN maps, or some hosts may have the same portions presented. Furthermore, each host or server has a set of labels or names it uses to refer to the virtual LUNs stored in the LUN maps, which may be the same as or different from another host's set of labels or names.
  • LUN maps are identified by their respective server's host id value. Once identified, a LUN map may then be used to convert virtual storage read or write requests from its corresponding server or host to an actual physical storage location or LUN.
  • FIG. 3 illustrates an exemplary set of LUN maps, according to embodiments of the present invention. FIG. 3 shows a first LUN map 302, a second LUN map 304, a third LUN map 306, and a fourth LUN map 308. While four LUN maps are shown in FIG. 3, the present invention is applicable to any number of LUN maps.
  • First, second, third, and fourth LUN maps 302, 304, 306, and 308 are stored in LUN mapper 206.
  • a LUN map may be a two-dimensional matrix.
  • a LUN map stores a two-dimensional array of physical LUN data. A first axis of the LUN map is indexed by target LUN information, and a second axis of the LUN map is indexed by target HBA information.
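Read literally, such a map is just a matrix; the sketch below assumes string labels for physical LUNs and None for unmapped cells, neither of which is specified by the patent.

```python
# A LUN map as a two-dimensional array: first axis indexed by target LUN,
# second axis by target HBA. Labels are hypothetical.
lun_map = [
    # HBA 0          HBA 1
    ["array1:lun0", "array2:lun0"],  # target LUN 0
    ["array1:lun1", None],           # target LUN 1 (unmapped via HBA 1)
]

physical = lun_map[1][0]  # target LUN 1 arriving on HBA 0 -> "array1:lun1"
```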
  • SAN interface 208 receives a physical LUN read/write request from LUN mapper 206, and transmits it as physical LUN read/write request 212 on second communication link 110 (FIG. 1). In this manner, storage allocator 104 issues the received read/write storage request 210 to the actual storage device or devices comprising the determined physical LUN in the storage area network.
  • SAN interface 208 may include one or more host bus adaptors (HBA) that interface the internal bus architecture of storage allocator 104 with second communication link 110.
  • SAN interface 208 may support fibre channel and/or SCSI transmissions on second communication link 110. Further interfaces, as would be known to persons skilled in the relevant art(s), for SAN interface 208 are within the scope and spirit of the invention.
  • FIG. 4 shows a flowchart 400 providing operational steps of an example embodiment of the present invention.
  • the steps of FIG. 4 may be implemented in hardware, firmware, software, or a combination thereof.
  • the steps of FIG. 4 may be implemented by storage allocator 104.
  • the steps of FIG. 4 do not necessarily have to occur in the order shown, as will be apparent to persons skilled in the relevant art(s) based on the teachings herein.
  • Other structural embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion contained herein. These steps are described in detail below.
  • in step 402, a read/write storage request is received from a host computer.
  • the read/write storage request may be received by network interface 202 described above.
  • in step 404, the read/write storage request is resolved.
  • the parameters of host id, target LUN, and target HBA are extracted from the read/write storage request, as described above.
  • read/write storage request parser 204 may be used to execute step 404.
  • in step 406, one or more physical LUNs are determined from the resolved read/write storage request.
  • LUN mapper 206 may be used to execute step 406.
  • in step 408, the read/write storage request is issued to one or more storage devices in a storage area network, wherein the storage devices correspond to the determined one or more physical LUNs.
  • the read/write storage request is a physical LUN read/write storage request 212.
  • the read/write storage request is issued by LUN mapper 206 to the storage area network via SAN interface 208.
  • step 406 may be implemented by the following procedure: To determine a physical LUN corresponding to a resolved read/write storage request, the proper LUN map must be selected.
  • LUN mapper 206 uses the received host id parameter to determine which of the stored LUN maps to use.
  • LUN map 308 may be the proper LUN map corresponding to the received host id.
  • the received parameters, target LUN 310 and target HBA 312, are used as X- and Y-coordinates when searching the contents of a LUN map. Applying these parameters as X- and Y-coordinates to the determined LUN map 308, the corresponding physical LUN 314 may be located, and supplied as the actual physical storage location to the requesting server.
  • LUN maps may be organized and searched in alternative fashions.
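Putting the pieces together, a walk-through of flowchart 400 with invented host, HBA, and LUN values might look like this sketch.

```python
# Flowchart 400 end to end: pick the LUN map by host id, use target LUN and
# target HBA as X/Y coordinates into it, and issue the request. Hypothetical.
lun_maps = {
    "host-308": [                # stands in for LUN map 308
        ["d1:lun0", "d2:lun0"],  # target LUN 0 via HBA 0 / HBA 1
        ["d1:lun1", "d3:lun4"],  # target LUN 1 via HBA 0 / HBA 1
    ],
}

def handle(host_id: str, target_lun: int, target_hba: int, op: str) -> str:
    lun_map = lun_maps[host_id]                 # step 406: select by host id
    physical = lun_map[target_lun][target_hba]  # coordinate lookup
    return f"{op} issued to {physical}"         # step 408: issue to the SAN

print(handle("host-308", target_lun=1, target_hba=1, op="read"))
# -> read issued to d3:lun4
```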
  • the invention may be implemented in any communication network, including LANs, WANs, and the Internet.
  • the invention is implemented in a storage area network, where many or all of the storage devices in the network are made available to the network's servers.
  • network configuration 100 of FIG. 1 shows an exemplary block diagram of a storage area network.
  • FIG. 5 illustrates a detailed block diagram of an exemplary storage area network implementation 500, according to an embodiment of the present invention.
  • SAN implementation 500 comprises a first, second, and third host computer 502, 504, and 506, a first and second storage allocator 508 and 510, and a first, second, third, and fourth storage device 512, 514, 516, and 518.
  • Host computers 502, 504, and 506 are servers on a communications network.
  • Storage allocators 508 and 510 are substantially equivalent to storage allocator 104.
  • Storage allocators 508 and 510 parse storage read and write requests received from host computers 502, 504, and 506, and use the parsed read and write requests to determine physical data locations in storage devices 512, 514, 516, and 518.
  • Storage devices 512, 514, 516, and 518 are storage devices that receive physical storage read and write requests from storage allocators 508 and 510, and respond by reading data from or writing data to storage as requested.
  • SAN implementation 500 shows multiple storage allocators, storage allocators 508 and 510, operating in a single SAN. In further embodiments, any number of storage allocators may operate in a single SAN configuration.
  • storage allocators 508 and 510 may receive read and write storage requests from one or more of the same host computers. Furthermore, storage allocators 508 and 510 may transmit physical read and write storage requests to one or more of the same storage devices. In particular, storage allocators 508 and 510 may transmit read and write storage requests to one or more of the same LUN partitions. Storage allocators 508 and 510 may assign the same or different names or labels to LUN partitions.
  • FIG. 6 illustrates a storage area network implementation 600, according to an exemplary embodiment of the present invention.
  • SAN implementation 600 comprises storage allocator 104 and storage 106.
  • Storage 106 comprises a loop hub 604, first, second, and third disks 606, 608, and 610, a fibre channel disk array 612, a tape subsystem 616, and a SCSI disk array 618.
  • Storage allocator 104 may receive read and write storage requests from one or more of the host computers of server(s) 102. Furthermore, storage allocator 104 may transmit physical read and write storage requests to one or more of the storage devices of storage 106. Tape subsystem 616 receives and sends data according to SCSI protocols from/to storage allocator 104.
  • SCSI disk array 618 is coupled to a SCSI port of storage allocator 104, for data transfer.
  • Loop hub 604 forms an arbitrated loop with first, second, and third disks 606, 608, and 610. Loop hub 604 and fibre channel disk array 612 are coupled to fibre channel ports of storage allocator 104 via fibre channel communication links.
  • multiple LUNs of the storage devices of storage 106 can be merged into a single image for a host (expansion) in server(s) 102.
  • All connected storage units including first, second, and third disks 606, 608, and 610, fibre channel disk array 612, SCSI disk array 618, and tape subsystem 616, can be presented to a host as a single storage pool irrespective of the storage device types or storage area network topology.
  • any storage subsystem types including different types such as fibre channel disk array 612 and SCSI disk array 618, may be connected to any host in a single storage pool.
  • the storage subsystems can be partitioned into multiple LUN portions for a host.
  • fewer than all storage devices of storage 106 may be made available to a server or host for purposes of security. For example, access to certain storage devices/physical LUNs may be limited to prevent corruption of data. Access to storage devices and/or physical LUNs may be managed by storage allocator 104 via one or more LUN maps, which may operate to limit the LUN(s) that a host sees. This limiting or masking may be used to prohibit a host from accessing data that it does not have permission to access.
  • first, second, and third disks 606, 608, and 610, fibre channel disk array 612, SCSI disk array 618, and tape subsystem 616 may be hidden from the view of selected servers of server(s) 102 by storage allocator 104.
  • FIG. 7 illustrates a storage area network implementation 700, according to an exemplary embodiment of the present invention.
  • SAN implementation 700 comprises server(s) 102, storage allocator 104, and storage 106.
  • Server(s) 102 comprises an NT server 702, a first UNIX server 704, a second UNIX server 706, and a switch 708.
  • Storage 106 comprises a fibre channel switch 710, a fibre channel disk array 712, and a SCSI disk array 714.
  • SAN implementation 700 will be used to demonstrate an example of LUN mapping, as follows.
  • fibre channel disk array 712 includes two LUN divisions, with the following attributes:
  • SCSI disk array 714 includes three LUN divisions, with the following attributes:
  • storage allocator 104 has access to a total storage capacity of five LUNs, with 279 GBytes total, in SAN implementation 700.
  • storage allocator 104 is configured to map or mask the total available storage capacity of 279 GBytes, and present it to the hosts in the form of eleven LUNs, with 181 GBytes total. 98 GBytes of the total available storage is not being made accessible to the hosts.
  • SAN implementation 700 provides for storage partitioning. The eleven LUNs, or virtual devices, are partitions of the five physical LUNs. Each of the eleven LUNs appears to a host to be an actual physical device. These virtual LUN partitions are used to share storage from the five physical LUNs across multiple host operating systems.
  • the five physical LUNs may be combined in a variety of ways to provide for storage expansion. In other words, the five LUNs may be merged to create an image of one or more expanded LUNs to the outside world.
  • SAN implementation 700 also provides for storage security.
  • the 98 GBytes of storage not provided to the servers may be designated as secure storage to be protected from access by the hosts.
  • each server has access to its own set of LUNs. For example, this arrangement may aid in preventing data corruption.
  • the available storage may be apportioned to NT server 702, first UNIX server 704, and second UNIX server 706 as follows.
  • NT server 702 is presented with three virtual LUNs in the following storage configuration:
  • UNIX servers 704 and 706 are each presented with four virtual LUNs in the following storage configuration:
  • each host has access to separate, non-overlapping physical LUNs or storage locations. As shown, the hosts are provided with different LUNs having the same virtual LUN names. In alternative embodiments, the available storage may be partitioned and presented to the hosts in any number of different ways. For example, two or more hosts may have access to the same storage locations, and/or same LUNs.
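The patent's per-LUN tables are not reproduced in this text, so the individual sizes below are invented; only the totals (five physical LUNs of 279 GBytes, eleven virtual LUNs of 181 GBytes, 98 GBytes withheld) come from the description. The sketch simply checks that the accounting holds.

```python
# Accounting check for the example above. Per-LUN sizes are hypothetical;
# only the 279/181/98 GByte totals are stated in the text.
physical_gb = [9, 90, 60, 60, 60]        # five physical LUNs, sums to 279
presented = {
    "nt-server": [10, 20, 31],           # 3 virtual LUNs (invented sizes)
    "unix-1":    [10, 20, 15, 15],       # 4 virtual LUNs
    "unix-2":    [10, 20, 15, 15],       # 4 virtual LUNs
}
total_physical = sum(physical_gb)                          # 279
total_presented = sum(sum(v) for v in presented.values())  # 181
print(total_physical, total_presented, total_physical - total_presented)
# -> 279 181 98, matching the figures in the text
```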
  • storage allocator 104 is implemented in a SAN appliance or other device.
  • the SAN appliance/storage allocator may be implemented with aspects of a computer system such as described in further detail below.
  • Users may have access to the computer system's graphical user interface (GUI) where they can manually configure the allocation of storage by the storage allocator.
  • GUI graphical user interface
  • users may also configure operation of the storage allocator over a network, such as the Internet, via a GUI such as a web browser. For instance, the user may be able to designate certain storage devices and/or locations as secure areas, and then allocate storage to the hosts from the remaining locations of storage.
  • the storage allocator may be configured to operate in a semi-automated or fully automated fashion, where storage is allocated by the operation of a computer algorithm. For instance, a user may be able to designate certain storage devices and/or locations as secure storage areas, or these may be designated automatically. Further, the computer algorithm may then allocate the remaining storage by proceeding from top to bottom through the list of remaining storage devices and/or locations, allocating all of the remaining storage in this order until no further storage is required. Alternatively, storage may be allocated to match storage device access speed with the required storage access speed of specific hosts and applications. Further ways of allocating storage will be known to persons skilled in the relevant art(s) from the teachings herein.
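The fully automated, top-to-bottom policy described above could be sketched as a greedy pass over an ordered device list; the function below is an illustration under that assumption, with invented device names.

```python
# Greedy top-to-bottom allocation: skip secure devices, then take capacity
# from each remaining device in list order until the request is satisfied.
def auto_allocate(devices: list, secure: set, needed_gb: int) -> list:
    """devices: ordered (name, capacity_gb) pairs; returns allocated slices."""
    allocation = []
    for name, capacity in devices:
        if name in secure or needed_gb <= 0:
            continue
        take = min(capacity, needed_gb)
        allocation.append((name, take))
        needed_gb -= take
    if needed_gb > 0:
        raise RuntimeError("insufficient unsecured storage")
    return allocation

devices = [("disk606", 50), ("disk608", 50), ("fc-array612", 100)]
print(auto_allocate(devices, secure={"disk608"}, needed_gb=120))
# -> [('disk606', 50), ('fc-array612', 70)]
```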
  • an example of a computer system 1040 is shown in FIG. 10.
  • the computer system 1040 represents any single or multi-processor computer. In conjunction, single-threaded and multi-threaded applications can be used. Unified or distributed memory systems can be used.
  • Computer system 1040, or portions thereof, may be used to implement the present invention.
  • the storage allocator of the present invention may comprise software running on a computer system such as computer system 1040.
  • the storage allocator of the present invention is implemented in a multi-platform (platform independent) programming language such as JAVA 1.1, programming language/structured query language (PL/SQL), hyper-text mark-up language (HTML), practical extraction report language (PERL), common gateway interface/structured query language (CGI/SQL) or the like.
  • JavaTM-enabled and JavaScriptTM-enabled browsers may be used.
  • Active content Web pages can be used. Such active content Web pages can include JavaTM applets or ActiveXTM controls, or any other active content technology developed now or in the future.
  • the present invention is not intended to be limited to JavaTM, JavaScriptTM, or their enabled browsers, and can be implemented in any programming language and browser, developed now or in the future, as would be apparent to a person skilled in the art given this description.
  • the storage allocator of the present invention including read/write storage request parser 204 and LUN mapper 206, may be implemented using a high-level programming language (e.g., C++) and applications written for the Microsoft WindowsTM environment. It will be apparent to persons skilled in the relevant art(s) how to implement the invention in alternative embodiments from the teachings herein.
  • a high-level programming language e.g., C++
  • Computer system 1040 includes one or more processors, such as processor 1044.
  • processors 1044 can execute software implementing routines described above, such as shown in flowchart 400.
  • Each processor 1044 is connected to a communication infrastructure 1042 (e.g., a communications bus, cross-bar, or network).
  • Computer system 1040 can include a display interface 1002 that forwards graphics, text, and other data from the communication infrastructure 1042 (or from a frame buffer not shown) for display on the display unit 1030.
  • Computer system 1040 also includes a main memory 1046, preferably random access memory (RAM), and can also include a secondary memory 1048.
  • the secondary memory 1048 can include, for example, a hard disk drive 1050 and/or a removable storage drive 1052, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc.
  • the removable storage drive 1052 reads from and/or writes to a removable storage unit 1054 in a well known manner.
  • Removable storage unit 1054 represents a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by removable storage drive 1052.
  • the removable storage unit 1054 includes a computer usable storage medium having stored therein computer software and/or data.
  • secondary memory 1048 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 1040.
  • Such means can include, for example, a removable storage unit 1062 and an interface 1060. Examples can include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 1062 and interfaces 1060 which allow software and data to be transferred from the removable storage unit 1062 to computer system 1040.
  • Computer system 1040 can also include a communications interface 1064.
  • Communications interface 1064 allows software and data to be transferred between computer system 1040 and external devices via communications path 1066.
  • Examples of communications interface 1064 can include a modem, a network interface (such as Ethernet card), a communications port, interfaces described above, etc.
  • Software and data transferred via communications interface 1064 are in the form of signals which can be electronic, electromagnetic, optical or other signals capable of being received by communications interface 1064, via communications path 1066.
  • communications interface 1064 provides a means by which computer system 1040 can interface to a network such as the Internet.
  • the present invention can be implemented using software running (that is, executing) in an environment similar to that described above with respect to FIG. 8.
  • the term "computer program product" is used to refer generally to removable storage unit 1054, a hard disk installed in hard disk drive 1050, or a carrier wave carrying software over a communication path 1066 (wireless link or cable) to communication interface 1064.
  • a computer useable medium can include magnetic media, optical media, or other recordable media, or media that transmits a carrier wave or other signal.
  • Computer programs are stored in main memory 1046 and/or secondary memory 1048. Computer programs can also be received via communications interface 1064. Such computer programs, when executed, enable the computer system 1040 to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable the processor 1044 to perform features of the present invention. Accordingly, such computer programs represent controllers of the computer system 1040.
  • the present invention can be implemented as control logic in software, firmware, hardware or any combination thereof.
  • the software may be stored in a computer program product and loaded into computer system 1040 using removable storage drive 1052, hard disk drive 1050, or interface 1060.
  • the computer program product may be downloaded to computer system 1040 over communications path 1066.
  • the control logic when executed by the one or more processors 1044, causes the processor(s) 1044 to perform functions of the invention as described herein.
  • the invention is implemented primarily in firmware and/or hardware using, for example, hardware components such as application specific integrated circuits (ASICs).

Abstract

A system for allocating storage resources in a storage area network is described. A logical unit number (LUN) mapper receives at least one storage request parameter and maps the storage request parameters to at least one physical LUN. The LUN mapper includes at least one LUN map. The storage request parameters include a host id parameter, a target LUN parameter, and a target host bus adaptor (HBA) parameter. The LUN mapper uses the host id parameter to select the one of the LUN maps that corresponds to the host id parameter. The LUN mapper applies the target LUN parameter and the target HBA parameter to the selected LUN map to locate the physical LUN(s) stored in the selected LUN map. The LUN mapper issues the received read/write storage request to at least one storage device that houses the physical LUN(s). The one or more storage devices are located in the storage area network.

Description

Method and System of Allocating Storage Resources in a Storage Area Network
Background of the Invention
Field of the Invention
The invention relates generally to the field of storage area networks, and more particularly to the allocation of storage in storage area networks.
Related Art
Traditional approaches exist for providing access to data in computer networks. These approaches generally fall into one of two categories: host-based and storage-based. Host-based approaches include those where storage management functionality is loaded and operated from the host (server). Storage-based solutions are those where storage management functionality is loaded and operated from a storage array controller (or similar device).
Host-based approaches typically focus on application servers that run critical applications. For example, an application server may execute trading calculations for a trading room floor. Application servers are typically expensive, and are essential to a user's daily operations. Host-based storage solutions that run on application servers require processor cycles, and thus have a negative effect on the performance of the application server. Additionally, host-based solutions suffer from difficulties in managing software and hardware interoperability in a multi-platform environment. Some of these difficulties include: managing separate licenses for each operating system; training system administrators on the various operating systems and host-based software; managing upgrades of operating systems; and managing inter-host dependencies when some functionality needs to be altered.
Storage-based solutions suffer from many of the same drawbacks. When storage-based solutions are provided with a disk array controller, compatibility between primary and target storage sites may become an issue. This compatibility problem may require a user to obtain hardware and software from the same provider or vendor. Moreover, hardware and software compatibility may also be limited to a particular range of versions provided by the vendor. Hence, if another vendor develops superior disk technology or connectivity solutions, a user may have difficulty introducing them into their existing environment.
Storage area networks (SANs) have been developed as a more recent approach to providing access to data in computer networks, to address some of the above concerns. A SAN is a network linking servers or workstations to storage devices. A SAN is intended to increase the pool of storage available to each server in the computer network, while reducing the data supply demand on servers. Conventional SANs, however, still may suffer from some of the above discussed problems, and some of their own.
For example, SANs may also suffer from problems associated with storage allocation. One problem relates to determining how to present the storage itself. For instance, it must be determined which storage devices shall be designated to provide storage for which servers. A further problem relates to storage security. It may be difficult for a SAN administrator to restrict access by certain servers to particular storage modules, while allowing other servers to access them. SAN administrators also have to confront the difficulty of coordinating networks that include a wide variety of different storage device types and manufacturers, communication protocols, and other variations.
Therefore, in view of the above, what is needed is a system, method and computer program product for allocating storage in a storage area network. Furthermore, what is needed is a system, method and computer program product for allocating storage in a storage area network while maintaining storage security. Still further, what is needed is a system, method and computer program product for allocating storage from a variety of storage device types, manufacturers, and interfaces in a storage area network.
Summary of the Invention
The present invention is directed to a system for allocating storage resources in a storage area network. A logical unit number (LUN) mapper receives at least one storage request parameter and maps the storage request parameters to at least one physical LUN. The LUN mapper includes at least one
LUN map. The storage request parameters include a host id parameter, a target LUN parameter, and a target host bus adaptor (HBA) parameter. The LUN mapper uses the host id parameter to select the one of the LUN maps that corresponds to the host id parameter. The LUN mapper applies the target LUN parameter and the target HBA parameter to the selected LUN map to locate the physical LUN(s) stored in the selected LUN map. The LUN mapper issues the received read/write storage request to at least one storage device that houses the physical LUN(s). The one or more storage devices are located in the storage area network.
In a further aspect of the present invention, a method for allocating storage in a storage area network is provided. A read/write storage request is received from a host computer. The read/write storage request is resolved. A physical LUN is determined from the resolved read/write storage request. A read/write storage request is issued to a storage device in a storage area network. The storage device corresponds to the determined physical LUN.
Further aspects of the present invention, and further features and benefits thereof, are described below. The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
Brief Description of the Figures
In the drawings, like reference numbers indicate identical or functionally similar elements. Additionally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears.
FIG. 1 illustrates a block diagram of an example storage allocator network configuration, according to embodiments of the present invention.
FIG. 2 illustrates a block diagram of a storage allocator, according to an exemplary embodiment of the present invention.
FIG. 3 illustrates an exemplary set of LUN maps, according to embodiments of the present invention.
FIG. 4 shows a flowchart providing detailed operational steps of an example embodiment of the present invention.
FIGS. 5-7 illustrate storage area network implementations of the present invention, according to embodiments of the present invention.
FIG. 8 illustrates an example data communication network, according to an embodiment of the present invention.
FIG. 9 shows a simplified five-layered communication model, based on an Open System Interconnection (OSI) reference model.
FIG. 10 shows an example of a computer system for implementing the present invention.
The present invention will now be described with reference to the accompanying drawings.
Detailed Description of the Preferred Embodiments
Overview
The present invention is directed to a method and system of allocating resources in a storage area network (SAN). The invention allocates storage resources in a SAN by mapping logical unit numbers (LUNs) representative of the storage resources to individual hosts, thereby allowing dynamic management of storage devices and hosts in the SAN. According to the present invention, each mapped LUN can be unique to a particular host or shared among different hosts.
FIG. 1 illustrates a block diagram of an example network configuration 100, according to embodiments of the present invention. Network configuration 100 comprises server(s) 102, a storage allocator 104, and storage 106. Generally, storage allocator 104 receives data I/O requests from server(s) 102, maps the data I/O requests to physical storage I/O requests, and forwards them to storage 106.
Server(s) 102 includes one or more hosts or servers that may be present in a data communication network. Server(s) 102 manage network resources. For instance, one or more of the servers of server(s) 102 may be file servers, network servers, application servers, database servers, or other types of servers. Server(s) 102 may comprise single processor or multi-processor servers. Server(s) 102 process requests for data and applications from networked computers, workstations, and other peripheral devices otherwise known or described elsewhere herein. Server(s) 102 output requests to storage allocator 104 to write data to, or read data from, storage 106.
Storage allocator 104 receives storage read and write requests from server(s) 102 via first communications link 108. The storage read and write requests include references to one or more locations in a logical data space recognized by the requesting host. Storage allocator 104 parses the storage read and write requests by extracting various parameters that are included in the requests. In an embodiment, each storage read and write request from the host includes a host id, a target HBA (host bus adaptor), and a target LUN. Generally, a LUN corresponds to a label for a subunit of storage on a target storage device (virtual or actual), such as a disk drive. Storage allocator 104 uses the parsed read and write request to determine physical storage locations corresponding to the target locations in the logical data space. In a preferred embodiment, one or more LUN maps in storage allocator 104 are used to map the virtual data locations to physical locations in storage 106. Storage allocator 104 outputs read and write requests to physical storage/LUNs.
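For convenience of illustration only, the request parameters described above may be sketched as a small data structure. The following Python sketch is not part of the described embodiments; its field and variable names are assumptions introduced here.

```python
from dataclasses import dataclass

# Hypothetical sketch of the parameters extracted from each incoming
# read/write storage request (host id, target HBA, target LUN).
# Field names are illustrative assumptions, not defined by the patent.
@dataclass(frozen=True)
class StorageRequest:
    host_id: str       # identifies the requesting server/host
    target_hba: int    # HBA port on the allocator that received the request
    target_lun: int    # virtual LUN addressed by the host
    is_write: bool     # True for a write request, False for a read

req = StorageRequest(host_id="server-A", target_hba=0, target_lun=2, is_write=False)
```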
First communications link 108 may be an Ethernet link, a fibre channel link, a SCSI link, or other applicable type of communications link otherwise known or described elsewhere herein.
Storage 106 receives storage read and write requests from storage allocator 104 via second communications link 110. Storage 106 routes the received physical storage read and write requests to the corresponding storage device(s), which respond by reading or writing data as requested. Storage 106 comprises one or more storage devices that may be directly coupled to storage allocator 104, and/or may be interconnected in a storage area network configuration that is coupled to storage allocator 104. For example, storage 106 may comprise one or more of a variety of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, disk arrays, and other applicable types of storage devices otherwise known or described elsewhere herein. Storage devices in storage 106 may be interconnected via SCSI and fibre channel links, and other types of links, in a variety of topologies. Example topologies for storage 106 are more fully described below.
Second communications link 110 may be an Ethernet link, fibre channel link, a SCSI link, or other applicable type of communications link otherwise known or described elsewhere herein.
According to the present invention, available storage is partitioned without any regard necessarily to the physical divisions of storage devices, and the partitions are stored in the LUN maps. These partitions are referred to as virtual or target LUNs. Portions of, or all of available physical storage may be partitioned and presented as virtual LUNs. Each host may be presented different portions of physical storage via the LUN maps, and/or some hosts may be presented with the same or overlapping portions. LUN maps allow the storage allocator of the present invention to make available a set of storage to a host, that may overlap or be completely independent from that made available to another host.
The virtual LUN configurations are stored in storage allocator 104 in LUN maps corresponding to each host. A LUN map may be chosen, and then used to convert virtual storage read or write requests by the respective host to an actual physical storage location. Embodiments for the storage allocator 104 of the present invention are described in further detail below.
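As a further illustration (not a definitive implementation), per-host LUN maps and their selection may be sketched as follows; all host and LUN identifiers are hypothetical.

```python
# Hypothetical per-host LUN maps: each host's virtual view is keyed by
# (target HBA, target LUN) coordinates. Physical LUN "P3" is presented
# to both hosts (overlapping storage); the other entries are private.
lun_maps = {
    "server-A": {(0, 0): "P1", (0, 1): "P3"},
    "server-B": {(0, 0): "P2", (0, 1): "P3", (1, 0): "P4"},
}

def resolve(host_id, target_hba, target_lun):
    """Select the requesting host's map, then translate the virtual
    coordinates to a physical LUN (None if nothing is presented there)."""
    return lun_maps[host_id].get((target_hba, target_lun))

assert resolve("server-A", 0, 1) == resolve("server-B", 0, 1) == "P3"
```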
Description in these terms is provided for convenience only. It is not intended that the invention be limited to application in this example environment.
In fact, after reading the following description, it will become apparent to a person skilled in the relevant art how to implement the invention in alternative environments known now or developed in the future. Further detailed embodiments of the elements of network configuration 100 are discussed below. Terminology related to the present invention is described in the following subsection. Next, an example storage area network environment is described, in which the present invention may be applied. Detailed embodiments of the storage allocator of the present invention are presented in the subsequent subsection, followed by exemplary storage area network implementations of the storage allocator. Finally, an exemplary computer system in which the present invention may be implemented is then described.
Terminology
To more clearly delineate the present invention, an effort is made throughout the specification to adhere to the following term definitions as consistently as possible.
Arbitrated Loop: A shared 100MBps Fibre Channel transport supporting up to 126 devices and 1 fabric attachment.
Fabric: One or more Fibre Channel switches in a networked topology.
HBA: Host bus adapter; an interface between a server or workstation bus and a Fibre Channel network.
Hub: In Fibre Channel, a wiring concentrator that collapses a loop topology into a physical star topology.
Initiator: On a Fibre Channel network, typically a server or a workstation that initiates transactions to disk or tape targets.
JBOD: Just a bunch of disks; typically configured as an Arbitrated Loop segment in a single chassis.
LAN: Local area network; a network linking multiple devices in a single geographical location.
Logical Unit: The entity within a target that executes I/O commands. For example, SCSI I/O commands are sent to a target and executed by a logical unit within that target. A SCSI physical disk typically has a single logical unit. Tape drives and array controllers may incorporate multiple logical units to which I/O commands can be addressed. Typically, each logical unit exported by an array controller corresponds to a virtual disk.
LUN: Logical Unit Number; the identifier of a logical unit within a target, such as a SCSI identifier.
Point-to-point: A dedicated Fibre Channel connection between two devices.
Private loop: A free-standing Arbitrated Loop with no fabric attachment.
Public loop: An Arbitrated Loop attached to a fabric switch.
RAID: Redundant Array of Independent Disks.
SCSI: Small Computer Systems Interface; both a protocol for transmitting large blocks of data and a parallel bus architecture.
SCSI-3: A SCSI standard that defines transmission of SCSI protocol over serial links.
Storage: Any device used to store data; typically, magnetic disk media or tape.
Switch: A device providing full bandwidth per port and high-speed routing of data via link-level addressing.
Target: Typically a disk array or a tape subsystem on a Fibre Channel network.
Topology: The physical or logical arrangement of devices in a networked configuration.
WAN: Wide area network; a network linking geographically remote sites.
Example Storage Area Network Environment
In a preferred embodiment, the present invention is applicable to storage area networks. As discussed above, a storage area network (SAN) is a high-speed sub-network of shared storage devices. A SAN operates to provide access to the shared storage devices for all servers on a local area network (LAN), wide area network (WAN), or other network coupled to the SAN. SAN attached storage
(SAS) elements connect directly to the SAN, and provide file, database, block, or other types of data access services. SAS elements that provide such file access services are commonly called Network Attached Storage, or NAS devices. A SAN configuration potentially provides an entire pool of available storage to each network server, eliminating the conventional dedicated connection between server and disk. Furthermore, because a server's mass data storage requirements are fulfilled by the SAN, the server's processing power is largely conserved for the handling of applications rather than the handling of data requests.
FIG. 8 illustrates an example data communication network 800, according to an embodiment of the present invention. Network 800 includes a variety of devices which support communication between many different entities, including businesses, universities, individuals, government, and financial institutions. As shown in FIG. 8, a communication network, or combination of networks, interconnects the elements of network 800. Network 800 supports many different types of communication links implemented in a variety of architectures. Network 800 may be considered to be an example of a storage area network that is applicable to the present invention. Network 800 comprises a pool of storage devices, including disk arrays 820, 822, 824, 828, 830, and 832. Network 800 provides access to this pool of storage devices to hosts/servers comprised by or coupled to network 800. Network 800 may be configured as point-to-point, arbitrated loop, or fabric topologies, or combinations thereof.
Network 800 comprises a switch 812. Switches, such as switch 812, typically filter and forward packets between LAN segments. Switch 812 may be an Ethernet switch, fast-Ethernet switch, or another type of switching device known to persons skilled in the relevant art(s). In other examples, switch 812 may be replaced by a router or a hub. A router generally moves data from one local segment to another, and to the telecommunications carrier, such as AT&T or WorldCom, for remote sites. A hub is a common connection point for devices in a network. Suitable hubs include passive hubs, intelligent hubs, and switching hubs, and other hub types known to persons skilled in the relevant art(s).
Various types of terminal equipment and devices may interface with network 800. For example, a personal computer 802, a workstation 804, a printer 806, a laptop mobile device 808, and a handheld mobile device 810 interface with network 800 via switch 812. Further types of terminal equipment and devices that may interface with network 800 may include local area network (LAN) connections (e.g., other switches, routers, or hubs), personal computers with modems, content servers of multi-media, audio, video, and other information, pocket organizers, Personal Data Assistants (PDAs), cellular phones, Wireless Application Protocol (WAP) phones, and set-top boxes. These and additional types of terminal equipment and devices, and ways to interface them with network 800, will be known by persons skilled in the relevant art(s) from the teachings herein.
Network 800 includes one or more hosts or servers. For example, network 800 comprises server 814 and server 816. Servers 814 and 816 provide devices 802, 804, 806, 808, and 810 with network resources via switch 812. Servers 814 and 816 are typically computer systems that process end-user requests for data and/or applications. In one example configuration, servers 814 and 816 provide redundant services. In another example configuration, server 814 and server 816 provide different services and thus share the processing load needed to serve the requirements of devices 802, 804, 806, 808, and 810. In further example configurations, one or both of servers 814 and 816 are connected to the Internet, and thus server 814 and/or server 816 may provide Internet access to network 800. One or both of servers 814 and 816 may be Windows NT servers or UNIX servers, or other servers known to persons skilled in the relevant art(s). A SAN appliance or device as described elsewhere herein may be inserted into network 800, according to embodiments of the present invention. For example, a SAN appliance 818 may be implemented to provide the required connectivity between the storage device networking (disk arrays 820, 822, 824, 828, 830, and 832) and hosts and servers 814 and 816, and to provide the additional functionality of the storage allocator of the present invention described elsewhere herein.
Network 800 includes a hub 826. Hub 826 is connected to disk arrays 828, 830, and 832. Preferably, hub 826 is a fibre channel hub or other device used to allow access to data stored on connected storage devices, such as disk arrays 828, 830, and 832. Further fibre channel hubs may be cascaded with hub
826 to allow for expansion of the SAN, with additional storage devices, servers, and other devices. In an example configuration for network 800, hub 826 is an arbitrated loop hub. In such an example, disk arrays 828, 830, and 832 are organized in a ring or loop topology, which is collapsed into a physical star configuration by hub 826. Hub 826 allows the loop to circumvent a disabled or disconnected device while maintaining operation.
Network 800 may include one or more switches in addition to switch 812 that interface with storage devices. For example, a fibre channel switch or other high-speed device may be used to allow servers 814 and 816 access to data stored on connected storage devices, such as disk arrays 820, 822, and 824, via appliance 818. Fibre channel switches may be cascaded to allow for the expansion of the SAN, with additional storage devices, servers, and other devices.
Disk arrays 820, 822, 824, 828, 830, and 832 are storage devices providing data and application resources to servers 814 and 816 through appliance 818 and hub 826. As shown in FIG. 8, the storage of network 800 is principally accessed by servers 814 and 816 through appliance 818. The storage devices may be fibre channel-ready devices, or SCSI (Small Computer Systems Interface) compatible devices, for example. Fibre channel-to-SCSI bridges may be used to allow SCSI devices to interface with fibre channel hubs and switches, and other fibre channel-ready devices. One or more of disk arrays 820, 822, 824,
828, 830, and 832 may instead be alternative types of storage devices, including tape systems, JBODs (Just a Bunch Of Disks), floppy disk drives, optical disk drives, and other related storage drive types.
The topology or architecture of network 800 will depend on the requirements of the particular application, and on the advantages offered by the chosen topology. One or more hubs 826, one or more switches, and/or one or more appliances 818 may be interconnected in any number of combinations to increase network capacity. Disk arrays 820, 822, 824, 828, 830, and 832, or fewer or more disk arrays as required, may be coupled to network 800 via these hubs 826, switches, and appliances 818.
Communication over a communication network, such as shown in network 800 of FIG. 8, is carried out through different layers. FIG. 9 shows a simplified five-layered communication model, based on the Open System Interconnection (OSI) reference model. As shown in FIG. 9, this model includes an application layer 908, a transport layer 910, a network layer 920, a data link layer 930, and a physical layer 940. As would be apparent to persons skilled in the relevant art(s), any number of different layers and network protocols may be used as required by a particular application.
Application layer 908 provides functionality for the different tools and information services which are used to access information over the communications network. Example tools used to access information over a network include, but are not limited to Telnet log-in service 901, IRC chat 902, Web service 903, and SMTP (Simple Mail Transfer Protocol) electronic mail service 906. Web service 903 allows access to HTTP documents 904, and FTP (File Transfer Protocol) and Gopher files 905. Secure Socket Layer (SSL) is an optional protocol used to encrypt communications between a Web browser and Web server.
Transport layer 910 provides transmission control functionality using protocols, such as TCP, UDP, SPX, and others, that add information for acknowledgments that blocks of the file had been received.
Network layer 920 provides routing functionality by adding network addressing information using protocols such as IP, IPX, and others, that enable data transfer over the network.
Data link layer 930 provides information about the type of media on which the data was originated, such as Ethernet, token ring, or fiber distributed data interface (FDDI), and others.
Physical layer 940 provides encoding to place the data on the physical transport, such as twisted pair wire, copper wire, fiber optic cable, coaxial cable, and others. Description of this example environment in these terms is provided for convenience only. It is not intended that the invention be limited to application in this example environment. In fact, after reading the description herein, it will become apparent to persons skilled in the relevant art(s) how to implement the invention in alternative environments. Further details on designing, configuring, and operating storage area networks are provided in Tom Clark, "Designing
Storage Area Networks: A Practical Reference for Implementing Fibre Channel SANs" (1999).
Storage Allocator Embodiments
Structural implementations for the storage allocator of the present invention are described at a high-level and at a more detailed level. These structural implementations are described herein for illustrative purposes, and are not limiting. In particular, the present invention as described herein can be achieved using any number of structural implementations, including hardware, firmware, software, or any combination thereof. For instance, the present invention as described herein may be implemented in a computer system, application-specific box, or other device. In an embodiment, the present invention may be implemented in a SAN appliance, which provides for an interface between host servers and storage. Such SAN appliances include the SANLink™ appliance, developed by StorageApps Inc., located in Bridgewater, New Jersey.
Storage allocator 104 provides the capability to disassociate the logical representation of a disk (or other storage device) as presented to a host from the physical components that make up the logical disk. Specifically, the storage allocator of the present invention has the ability to change a LUN as presented to the server from the storage (a process also referred to as "LUN mapping," which involves mapping physical space to give a logical view of the storage). In an embodiment, LUN mapping works by assigning each host with a separate LUN map. For each incoming command, SCSI or otherwise, the system identifies the host, target host bus adapter (HBA), and target LUN. The system uses that information to convert the received virtual or target LUN to an actual or physical LUN, via a LUN map. Through such LUN mapping, a host can be presented with any size storage pool that is required to meet changing needs. For example, multiple physical LUNs can be merged into a single storage image for a host (i.e., storage "expansion"). All connected storage units can be presented to a host as a single storage pool irrespective of storage area network topology. The result is that any storage subsystem (for example, SCSI or Fibre Channel) can be connected to any host (for example, UNIX or NT). Alternatively, physical LUNs can be partitioned into multiple images for a host (i.e., storage "partitioning").
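The mapping behaviors described above (plain mapping, partitioning, and expansion) may be sketched, for convenience of illustration only and under assumed data shapes, as follows; the physical LUN names and block ranges are hypothetical.

```python
# Assumed data shapes for one host's LUN map: each virtual (target) LUN
# resolves to one or more (physical LUN, start block, end block) extents.
# A single extent covering part of a disk models partitioning; a list of
# extents across disks models expansion (merging physical LUNs).
lun_map_for_host = {
    1: [("phys-7", 0, 36_000)],                       # whole physical LUN
    2: [("phys-9", 0, 18_000)],                       # partition: first half
    3: [("phys-9", 18_000, 36_000)],                  # partition: second half
    4: [("phys-2", 0, 9_000), ("phys-5", 0, 9_000)],  # expansion: two disks
}

def physical_extents(target_lun):
    """Return the physical extents backing a virtual LUN for this host."""
    return lun_map_for_host[target_lun]
```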
Storage allocation through LUN mapping provides a number of desirable storage functions, including the following:
Partitioning: Through LUN mapping, a user may present a virtual device that is physically a partition of a larger device. Each partition appears to one or more hosts to be an actual physical device. These partitions may be used to share storage from a single disk (or other storage device) across multiple host operating systems.
Expansion: Through LUN mapping, a user may present a virtual device consisting of multiple merged physical LUNs, creating an image of an expanded LUN to the outside world. Thus, for example, the user may consolidate both SCSI and Fibre Channel storage systems into the same global storage pool.
Security: The user may also manage access to storage via a LUN map
"mask", which operates to limit the LUN(s) that a host sees. This masking may be used to prohibit a host from accessing data that it does not have permission to access.
In embodiments, the storage allocator of the present invention is based independently of a host. Unlike host-based solutions, which simply mask storage resources and require additional software on every host in the network, the present invention requires no additional software, hardware, or firmware on host machines. The present invention is supported by all platforms and all operating systems. Furthermore, host machines may be added to the network seamlessly, without disruption to the network.
FIG. 2 illustrates a block diagram of a storage allocator 104, according to an exemplary embodiment of the present invention. Storage allocator 104 comprises a network interface 202, a read/write storage request parser 204, a LUN mapper 206, and a SAN interface 208. Network interface 202 receives a read/write storage request 210 via first communication link 108, shown in FIG. 1. Network interface 202 may include a host bus adaptor (HBA) that interfaces the internal bus architecture of storage allocator 104 with a fibre channel first communication link 108. Network interface 202 may additionally or alternatively include an Ethernet port when first communication link 108 comprises an Ethernet link. Further interfaces, as would be known to persons skilled in the relevant art(s), for network interface 202 are within the scope and spirit of the invention.
Read/write storage request parser 204 receives the read/write storage request 210 from network interface 202, extracts parameters from the read/write storage request 210, and supplies the parameters to LUN mapper 206. These parameters may include a host id, a target HBA, and a target LUN. The host id parameter includes the host id of the storage request initiator server. The target HBA parameter is the particular HBA port address in storage allocator 104 that receives the read/write storage request 210. A server or host may be provided with more than one virtual storage view. The target LUN parameter is the logical unit number of the virtual storage unit to which the read/write storage request 210 is directed. In alternative embodiments, additional or different parameters may be supplied to LUN mapper 206. LUN mapper 206 receives the extracted parameters from read/write storage request parser 204. LUN mapper 206 stores LUN maps corresponding to servers/hosts. As described above, available storage is partitioned in the LUN maps without any regard necessarily to the physical divisions of storage devices. These partitions are referred to as virtual or target LUNs. Portions of, or all of available physical storage may be partitioned and presented as virtual LUNs. Each host or server may be presented with different portions of physical storage via the LUN maps, or some hosts may have the same portions presented. Furthermore, each host or server has a set of labels or names it uses to refer to the virtual LUNs stored in the LUN maps, which may be the same as or different from another host's set of labels or names. In an embodiment, LUN maps are identified by their respective server's host id value. Once identified, a LUN map may then be used to convert virtual storage read or write requests from its corresponding server or host to an actual physical storage location or LUN. FIG. 3 illustrates an exemplary set of LUN maps, according to embodiments of the present invention. FIG. 3 shows a first
LUN map 302, a second LUN map 304, a third LUN map 306, and a fourth LUN map 308. While four LUN maps are shown in FIG. 3, the present invention is applicable to any number of LUN maps. First, second, third, and fourth LUN maps 302, 304, 306, and 308 are stored in LUN mapper 206. As shown in FIG. 3, a LUN map may be a two-dimensional matrix. In a preferred embodiment, a LUN map stores a two-dimensional array of physical LUN data. A first axis of the LUN map is indexed by target LUN information, and a second axis of the LUN map is indexed by target HBA information.
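For convenience, the two-dimensional layout of FIG. 3 may be sketched as follows; the dimensions and physical LUN names are assumptions introduced for this illustration.

```python
# Hypothetical LUN map laid out as FIG. 3 describes: a two-dimensional
# array of physical LUN entries, one axis indexed by target LUN and the
# other by target HBA. None marks coordinates with no storage presented.
NUM_TARGET_LUNS = 4
NUM_TARGET_HBAS = 2

lun_map = [[None] * NUM_TARGET_HBAS for _ in range(NUM_TARGET_LUNS)]
lun_map[0][0] = "phys-1"  # target LUN 0 via target HBA 0 -> "phys-1"
lun_map[2][1] = "phys-5"  # target LUN 2 via target HBA 1 -> "phys-5"
```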
In FIG. 2, SAN interface 208 receives a physical LUN read/write request from LUN mapper 206, and transmits it as physical LUN read/write request 212 on second communication link 110 (FIG. 1). In this manner, storage allocator 104 issues the received read/write storage request 210 to the actual storage device or devices comprising the determined physical LUN in the storage area network. SAN interface 208 may include one or more host bus adaptors (HBAs) that interface the internal bus architecture of storage allocator 104 with second communication link 110. SAN interface 208 may support fibre channel and/or SCSI transmissions on second communication link 110. Further interfaces, as would be known to persons skilled in the relevant art(s), for SAN interface 208 are within the scope and spirit of the invention.
FIG. 4 shows a flowchart 400 providing operational steps of an example embodiment of the present invention. The steps of FIG. 4 may be implemented in hardware, firmware, software, or a combination thereof. For instance, the steps of FIG. 4 may be implemented by storage allocator 104. Furthermore, the steps of FIG. 4 do not necessarily have to occur in the order shown, as will be apparent to persons skilled in the relevant art(s) based on the teachings herein. Other structural embodiments will be apparent to persons skilled in the relevant art(s) based on the discussion contained herein. These steps are described in detail below.
The process begins with step 402. In step 402, a read/write storage request is received from a host computer. For instance, in embodiments, the read/write storage request may be received by network interface 202 described above.
In step 404, the read/write storage request is resolved. In embodiments, the parameters of host id, target LUN, and target HBA are extracted from the read/write storage request, as described above. In an embodiment, read/write storage request parser 204 may be used to execute step 404.
In step 406, one or more physical LUNs are determined from the resolved read/write storage request. In an embodiment, LUN mapper 206 may be used to execute step 406.
In step 408, the read/write storage request is issued to one or more storage devices in a storage area network, wherein the storage devices correspond to the determined one or more physical LUNs. In an embodiment, the read/write storage request is a physical LUN read/write storage request 212. In an embodiment, the read/write storage request is issued by LUN mapper 206 to the storage area network via SAN interface 208.
In an exemplary embodiment of the present invention, step 406 may be implemented by the following procedure: To determine a physical LUN corresponding to a resolved read/write storage request, the proper LUN map must be selected. LUN mapper 206 uses the received host id parameter to determine which of the stored LUN maps to use. For example, LUN map 308 may be the proper LUN map corresponding to the received host id. As shown in FIG. 3, the received parameters, target LUN 310 and target HBA 312, are used as X- and Y- coordinates when searching the contents of a LUN map. Applying these parameters as X- and Y- coordinates to the determined LUN map 308, the corresponding physical LUN 314 may be located, and supplied as the actual physical storage location to the requesting server. As would be recognized by persons skilled in the relevant art(s) from the teachings herein, LUN maps may be organized and searched in alternative fashions.
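This procedure, together with steps 402 through 408 of flowchart 400, may be sketched end to end as follows; the request format and storage-device interface shown here are assumptions made for illustration, not the patent's defined interfaces.

```python
# Hedged sketch of flowchart 400 (steps 402-408), under assumed inputs.
def handle_request(lun_maps, storage_devices, request):
    # Step 404: resolve the request by extracting its parameters.
    host_id = request["host_id"]
    target_hba = request["target_hba"]
    target_lun = request["target_lun"]
    # Step 406: select the LUN map by host id, then use target LUN and
    # target HBA as X- and Y- coordinates to locate the physical LUN.
    physical_lun = lun_maps[host_id][target_lun][target_hba]
    # Step 408: issue the request to the device housing the physical LUN.
    return storage_devices[physical_lun].submit(request)
```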
The embodiments for the storage allocator of the present invention described above are provided for purposes of illustration. These embodiments are not intended to limit the invention. Alternate embodiments, differing slightly or substantially from those described herein, will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein.
Storage Area Network Embodiments of the Storage Allocator
The invention may be implemented in any communication network, including LANs, WANs, and the Internet. Preferably the invention is implemented in a storage area network, where many or all of the storage devices in the network are made available to the network's servers. As described above, network configuration 100 of FIG. 1 shows an exemplary block diagram of a storage area network.
FIG. 5 illustrates a detailed block diagram of an exemplary storage area network implementation 500, according to an embodiment of the present invention. SAN implementation 500 comprises a first, second, and third host computer 502, 504, and 506, a first and second storage allocator 508 and 510, and a first, second, third, and fourth storage device 512, 514, 516, and 518.
Host computers 502, 504, and 506 are servers on a communications network. Storage allocators 508 and 510 are substantially equivalent to storage allocator 104. Storage allocators 508 and 510 parse storage read and write requests received from host computers 502, 504, and 506, and use the parsed read and write request to determine physical data locations in storage devices 512, 514,
516, and 518. Storage devices 512, 514, 516, and 518 are storage devices that receive physical storage read and write requests from storage allocators 508 and 510, and respond by reading data from or writing data to storage as requested. SAN implementation 500 shows multiple storage allocators, storage allocators 508 and 510, operating in a single SAN. In further embodiments, any number of storage allocators may operate in a single SAN configuration.
As shown in SAN implementation 500, storage allocators 508 and 510 may receive read and write storage requests from one or more of the same host computers. Furthermore, storage allocators 508 and 510 may transmit physical read and write storage requests to one or more of the same storage devices. In particular, storage allocators 508 and 510 may transmit read and write storage requests to one or more of the same LUN partitions. Storage allocators 508 and 510 may assign the same or different names or labels to LUN partitions.
FIG. 6 illustrates a storage area network implementation 600, according to an exemplary embodiment of the present invention. SAN implementation 600 comprises storage allocator 104 and storage 106. Storage 106 comprises a loop hub 604, first, second, and third disks 606, 608, and 610, a fibre channel disk array 612, a tape subsystem 616, and a SCSI disk array 618.
Storage allocator 104 may receive read and write storage requests from one or more of the host computers of server(s) 102. Furthermore, storage allocator 104 may transmit physical read and write storage requests to one or more of the storage devices of storage 106. Tape subsystem 616 receives and sends data according to SCSI protocols from/to storage allocator 104.
SCSI disk array 618 is coupled to a SCSI port of storage allocator 104, for data transfer.
Loop hub 604 forms an arbitrated loop with first, second, and third disks 606, 608, and 610. Loop hub 604 and fibre channel disk array 612 are coupled to fibre channel ports of storage allocator 104 via fibre channel communication links.
As described above, multiple LUNs of the storage devices of storage 106 can be merged into a single image for a host (expansion) in server(s) 102. All connected storage units, including first, second, and third disks 606, 608, and 610, fibre channel disk array 612, SCSI disk array 618, and tape subsystem 616, can be presented to a host as a single storage pool irrespective of the storage device types or storage area network topology. As shown in FIG. 6, any storage subsystem types, including different types such as fibre channel disk array 612 and SCSI disk array 618, may be connected to any host in a single storage pool. Alternatively, the storage subsystems can be partitioned into multiple LUN portions for a host.
Furthermore, fewer than all storage devices of storage 106 may be made available to a server or host for purposes of security. For example, access to certain storage devices/physical LUNs may be limited to prevent corruption of data. Access to storage devices and/or physical LUNs may be managed by storage allocator 104 via one or more LUN maps, which may operate to limit the LUN(s) that a host sees. This limiting or masking may be used to prohibit a host from accessing data that it does not have permission to access. For example, all of, or any portion of first, second, and third disks 606, 608, and 610, fibre channel disk array 612, SCSI disk array 618, and tape subsystem 616 may be hidden from view to selected servers of server(s) 102 by storage allocator 104.
FIG. 7 illustrates a storage area network implementation 700, according to an exemplary embodiment of the present invention. SAN implementation 700 comprises server(s) 102, storage allocator 104, and storage 106. Server(s) 102 comprises an NT server 702, a first UNIX server 704, a second UNIX server 706, and a switch 708. Storage 106 comprises a fibre channel switch 710, a fibre channel disk array 712, and a SCSI disk array 714. SAN implementation 700 will be used to demonstrate an example of LUN mapping, as follows.
In this LUN mapping example, fibre channel disk array 712 includes two LUN divisions, with the following attributes:
[Table of the two LUN divisions of fibre channel disk array 712; reproduced only as an image in the source.]
SCSI disk array 714 includes three LUN divisions, with the following attributes:
[Table of the three LUN divisions of SCSI disk array 714; reproduced only as an image in the source.]
Therefore, according to the present example, storage allocator 104 has access to a total storage capacity of five LUNs, with 279 GBytes total, in SAN implementation 700.
In the present example, storage allocator 104 is configured to map or mask the total available storage capacity of 279 GBytes, and present it to the hosts in the form of eleven LUNs, with 181 GBytes total. 98 GBytes of the total available storage is not being made accessible to the hosts.
SAN implementation 700 provides for storage partitioning. Eleven virtual LUNs are presented as partitions of the five physical LUNs. Each of the eleven LUNs appears to a host as an actual physical device. These virtual LUN partitions are used to share storage from the five physical LUNs across multiple host operating systems. In alternative embodiments, the five physical LUNs may be combined in a variety of ways to provide for storage expansion. In other words, the five LUNs may be merged to create an image of one or more expanded LUNs to the outside world.
SAN implementation 700 also provides for storage security. For example, the 98 GBytes of storage not provided to the servers may be designated as secure storage to be protected from access by the hosts. Furthermore, each server has access to its own set of LUNs. For example, this arrangement may aid in preventing data corruption.
In the present example, the available storage may be apportioned to NT server 702, first UNIX server 704, and second UNIX server 706 as follows. NT server 702 is presented with three virtual LUNs in the following storage configuration:
[Table of the three virtual LUNs presented to NT server 702; reproduced only as an image in the source.]
UNIX servers 704 and 706 are each presented with four virtual LUNs in the following storage configuration:
[Table of the four virtual LUNs presented to each UNIX server; reproduced only as an image in the source.]
In the present example, each host has access to separate, non-overlapping physical LUNs or storage locations. As shown, the hosts are provided with different LUNs having the same virtual LUN names. In alternative embodiments, the available storage may be partitioned and presented to the hosts in any number of different ways. For example, two or more hosts may have access to the same storage locations, and/or same LUNs.
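Because the capacity tables above survive only as figures, the following sketch is a purely hypothetical reconstruction of the example's structure, not its actual values.

```python
# Hypothetical reconstruction of this example's shape: NT server 702
# sees three virtual LUNs and each UNIX server sees four; virtual LUN
# names repeat across hosts, yet every entry resolves to a distinct
# physical partition, so no host can reach another host's storage.
example_maps = {
    "nt-702":   {0: "fc712-part1", 1: "fc712-part2", 2: "scsi714-part1"},
    "unix-704": {0: "fc712-part3", 1: "scsi714-part2",
                 2: "scsi714-part3", 3: "scsi714-part4"},
    "unix-706": {0: "fc712-part4", 1: "fc712-part5",
                 2: "scsi714-part5", 3: "scsi714-part6"},
}
```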
In an embodiment, storage allocator 104 is implemented in a SAN appliance or other device. For instance, the SAN appliance/storage allocator may be implemented with aspects of a computer system such as described in further detail below. Users may have access to the computer system's graphical user interface (GUI) where they can manually configure the allocation of storage by the storage allocator. In embodiments, users may also configure operation of the storage allocator over a network, such as the Internet, via a GUI such as a web browser. For instance, the user may be able to designate certain storage devices and/or locations as secure areas, and then allocate storage to the hosts from the remaining locations of storage.
In alternative embodiments, the storage allocator may be configured to operate in a semi-automated or fully automated fashion, where storage is allocated by the operation of a computer algorithm. For instance, a user may be able to designate certain storage devices and/or locations as secure storage areas, or these may be designated automatically. Further, the computer algorithm may then allocate the remaining storage by proceeding from top to bottom through the list of remaining storage devices and/or locations, allocating all of the remaining storage in this order until no further storage is required. Alternatively, storage may be allocated to match storage device access speed with the required storage access speed of specific hosts and applications. Further ways of allocating storage will be known to persons skilled in the relevant art(s) from the teachings herein.
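A minimal sketch of such a top-to-bottom allocation algorithm follows, assuming an ordered list of free extents; it illustrates the policy only and is not a definitive implementation.

```python
# Sketch of the fully automated policy described above: walk the ordered
# list of non-secure free extents top to bottom, granting storage until
# the request is satisfied. Names and sizes are hypothetical.
def allocate(free_extents, needed_gb):
    """free_extents: ordered list of (physical LUN, free size in GBytes)."""
    granted, remaining = [], needed_gb
    for lun, size in free_extents:
        if remaining <= 0:
            break
        take = min(size, remaining)
        granted.append((lun, take))
        remaining -= take
    if remaining > 0:
        raise RuntimeError("insufficient free storage")
    return granted

# Example: allocate([("P1", 40), ("P2", 80)], 100) -> [("P1", 40), ("P2", 60)]
```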
It will be known to persons skilled in the relevant art(s) from the teachings herein that the invention is adaptable to additional or fewer servers, additional or fewer storage devices, and amounts of storage capacity different than presented in the example of SAN implementation 700, and the other SAN implementation examples. Description in these terms is provided for convenience only. It is not intended that the invention be limited to application in this example environment. In fact, after reading the following description, it will become apparent to persons skilled in the relevant arts how to implement the invention in alternative environments known now or developed in the future.
Example Computer System
An example of a computer system 1040 is shown in FIG. 10. The computer system 1040 represents any single or multi-processor computer. In conjunction, single-threaded and multi-threaded applications can be used. Unified or distributed memory systems can be used. Computer system 1040, or portions thereof, may be used to implement the present invention. For example, the storage allocator of the present invention may comprise software running on a computer system such as computer system 1040. In one example, the storage allocator of the present invention is implemented in a multi-platform (platform independent) programming language such as JAVA 1.1, programming language/structured query language (PL/SQL), hyper-text mark-up language (HTML), practical extraction report language (PERL), common gateway interface/structured query language (CGI/SQL) or the like. Java™-enabled and JavaScript™-enabled browsers are used, such as
Netscape™, HotJava™, and Microsoft™ Explorer™ browsers. Active content Web pages can be used. Such active content Web pages can include Java™ applets or ActiveX™ controls, or any other active content technology developed now or in the future. The present invention, however, is not intended to be limited to Java™, JavaScript™, or their enabled browsers, and can be implemented in any programming language and browser, developed now or in the future, as would be apparent to a person skilled in the art given this description.
In another example, the storage allocator of the present invention, including read/write storage request parser 204 and LUN mapper 206, may be implemented using a high-level programming language (e.g., C++) and applications written for the Microsoft Windows™ environment. It will be apparent to persons skilled in the relevant art(s) how to implement the invention in alternative embodiments from the teachings herein.
Computer system 1040 includes one or more processors, such as processor 1044. One or more processors 1044 can execute software implementing routines described above, such as shown in flowchart 400. Each processor 1044 is connected to a communication infrastructure 1042 (e.g., a communications bus, cross-bar, or network). Various software embodiments are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures.
Computer system 1040 can include a display interface 1002 that forwards graphics, text, and other data from the communication infrastructure 1042 (or from a frame buffer not shown) for display on the display unit 1030.
Computer system 1040 also includes a main memory 1046, preferably random access memory (RAM), and can also include a secondary memory 1048. The secondary memory 1048 can include, for example, a hard disk drive 1050 and/or a removable storage drive 1052, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc. The removable storage drive 1052 reads from and/or writes to a removable storage unit 1054 in a well known manner. Removable storage unit 1054 represents a floppy disk, magnetic tape, optical disk, etc., which is read by and written to by removable storage drive 1052. As will be appreciated, the removable storage unit 1054 includes a computer usable storage medium having stored therein computer software and/or data.
In alternative embodiments, secondary memory 1048 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 1040. Such means can include, for example, a removable storage unit 1062 and an interface 1060. Examples can include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 1062 and interfaces 1060 which allow software and data to be transferred from the removable storage unit 1062 to computer system 1040.
Computer system 1040 can also include a communications interface 1064. Communications interface 1064 allows software and data to be transferred between computer system 1040 and external devices via communications path 1066. Examples of communications interface 1064 can include a modem, a network interface (such as Ethernet card), a communications port, interfaces described above, etc. Software and data transferred via communications interface 1064 are in the form of signals which can be electronic, electromagnetic, optical or other signals capable of being received by communications interface 1064, via communications path 1066. Note that communications interface 1064 provides a means by which computer system 1040 can interface to a network such as the
Internet.
The present invention can be implemented using software running (that is, executing) in an environment similar to that described above with respect to FIG. 8. In this document, the term "computer program product" is used to generally refer to removable storage unit 1054, a hard disk installed in hard disk drive 1050, or a carrier wave carrying software over a communication path 1066 (wireless link or cable) to communication interface 1064. A computer useable medium can include magnetic media, optical media, or other recordable media, or media that transmits a carrier wave or other signal. These computer program products are means for providing software to computer system 1040.
Computer programs (also called computer control logic) are stored in main memory 1046 and/or secondary memory 1048. Computer programs can also be received via communications interface 1064. Such computer programs, when executed, enable the computer system 1040 to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable the processor 1044 to perform features of the present invention. Accordingly, such computer programs represent controllers of the computer system 1040.
The present invention can be implemented as control logic in software, firmware, hardware or any combination thereof. In an embodiment where the invention is implemented using software, the software may be stored in a computer program product and loaded into computer system 1040 using removable storage drive 1052, hard disk drive 1050, or interface 1060. Alternatively, the computer program product may be downloaded to computer system 1040 over communications path 1066. The control logic (software), when executed by the one or more processors 1044, causes the processor(s) 1044 to perform functions of the invention as described herein.
In another embodiment, the invention is implemented primarily in firmware and/or hardware using, for example, hardware components such as application specific integrated circuits (ASICs). Implementation of a hardware state machine so as to perform the functions described herein will be apparent to persons skilled in the relevant art(s) from the teachings herein.
Conclusion
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

What Is Claimed Is:
1. A storage area network, comprising: at least one server; a plurality of storage devices; and a storage allocator, connected between said at least one server and said plurality of storage devices, said storage allocator including a read/write storage request parser that receives a read/write storage request from said at least one server, wherein said read/write storage request parser extracts at least one storage request parameter from said received read/write storage request, and a logical unit number (LUN) mapper that receives said at least one storage request parameter from said read/write storage request parser and maps said at least one storage request parameter to at least one physical LUN, wherein said at least one physical LUN represents at least one storage location within said plurality of storage devices.
2. The network of claim 1, wherein said LUN mapper comprises at least one LUN map.
3. The network of claim 2, wherein said at least one storage request parameter comprises a host id parameter, a target LUN parameter, and a target host bus adaptor (HBA) parameter.
4. The network of claim 3, wherein said LUN mapper uses said host id parameter to select one of said at least one LUN map corresponding to said host id parameter.
5. The network of claim 4, wherein said LUN mapper applies said target LUN parameter and said target HBA parameter to said selected LUN map to locate said at least one physical LUN stored in said selected LUN map.
6. The network of claim 5, wherein said LUN mapper issues said received read/write storage request to at least one storage device corresponding to said at least one physical LUN, wherein said at least one storage device is located in said plurality of storage devices.
7. The network of claim 5, wherein said selected LUN map comprises a two-dimensional array of physical LUN data, wherein a first axis of said LUN map is indexed by target LUN information and a second axis of said LUN map is indexed by target HBA information.
8. A method for allocating storage in a storage area network, comprising the steps of: receiving a read/write storage request from a host computer; resolving the read/write storage request; determining a physical LUN from the resolved read/write storage request; and issuing a read/write storage request to a storage device in a storage area network, wherein the storage device corresponds to the determined physical LUN.
9. The method of claim 8, wherein said resolving step comprises the step of: extracting parameters of host id, target LUN, and target HBA from the read/write storage request.
10. The method of claim 9, further comprising the step of: storing at least one LUN map.
11. The method of claim 10, wherein said determining step comprises the steps of: selecting one of said stored at least one LUN map corresponding to said host id parameter; and applying said extracted parameters of target LUN and target HBA to said selected LUN map to determine the physical LUN.
12. The method of claim 11, wherein said selected LUN map comprises a two-dimensional array of physical LUN data, where said applying step comprises the steps of: applying said extracted target LUN parameter to a first axis of said selected LUN map; applying said extracted target HBA parameter to a second axis of said selected LUN map; and locating the physical LUN in said selected LUN map at the intersection of said applied extracted target LUN and said applied extracted target HBA parameters.
13. A system for allocating storage resources in a storage area network, comprising: means for receiving a read/write storage request from a host computer; means for resolving the read/write storage request; means for determining a physical LUN from the resolved read/write storage request; and means for issuing a read/write storage request to a storage device in a storage area network, wherein the storage device corresponds to the determined physical LUN.
14. The system of claim 13, wherein said resolving means comprises: means for extracting parameters of host id, target LUN, and target HBA from the read/write storage request.
15. The system of claim 14, further comprising: means for storing at least one LUN map.
16. The system of claim 15, wherein said determining means comprises: means for selecting one of said stored at least one LUN map corresponding to said host id parameter; and means for applying said extracted parameters of target LUN and target
HBA to said selected LUN map to determine the physical LUN.
17. The system of claim 16, wherein said selected LUN map comprises a two-dimensional array of physical LUN data, where said applying means comprises: means for applying said extracted target LUN parameter to a first axis of said selected LUN map; means for applying said extracted target HBA parameter to a second axis of said selected LUN map; and means for locating the physical LUN in said selected LUN map at the intersection of said applied extracted target LUN and said applied extracted target HBA parameters.
PCT/US2000/042349 2000-09-18 2000-11-29 Method and system of allocating storage resources in a storage area network WO2002025446A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001234413A AU2001234413A1 (en) 2000-09-18 2000-11-29 Method and system of allocating storage resources in a storage area network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/664,500 US6977927B1 (en) 2000-09-18 2000-09-18 Method and system of allocating storage resources in a storage area network
US09/664,500 2000-09-18

Publications (2)

Publication Number Publication Date
WO2002025446A2 true WO2002025446A2 (en) 2002-03-28
WO2002025446A3 WO2002025446A3 (en) 2003-10-16

Family

ID=24666223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/042349 WO2002025446A2 (en) 2000-09-18 2000-11-29 Method and system of allocating storage resources in a storage area network

Country Status (3)

Country Link
US (1) US6977927B1 (en)
AU (1) AU2001234413A1 (en)
WO (1) WO2002025446A2 (en)

Families Citing this family (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6868417B2 (en) * 2000-12-18 2005-03-15 Spinnaker Networks, Inc. Mechanism for handling file level and block level remote file accesses using the same server
US7203730B1 (en) 2001-02-13 2007-04-10 Network Appliance, Inc. Method and apparatus for identifying storage devices
WO2002065249A2 (en) * 2001-02-13 2002-08-22 Candera, Inc. Storage virtualization and storage management to provide higher level storage services
US6901446B2 (en) * 2001-02-28 2005-05-31 Microsoft Corp. System and method for describing and automatically managing resources
IL159587A0 (en) * 2001-07-06 2004-06-01 Computer Ass Think Inc Systems and methods of information backup
US7457846B2 (en) * 2001-10-05 2008-11-25 International Business Machines Corporation Storage area network methods and apparatus for communication and interfacing with multiple platforms
US20030154271A1 (en) * 2001-10-05 2003-08-14 Baldwin Duane Mark Storage area network methods and apparatus with centralized management
US20030101160A1 (en) * 2001-11-26 2003-05-29 International Business Machines Corporation Method for safely accessing shared storage
JP4146653B2 (en) * 2002-02-28 2008-09-10 株式会社日立製作所 Storage device
US7111066B2 (en) * 2002-03-27 2006-09-19 Motorola, Inc. Method of operating a storage device
US7165258B1 (en) 2002-04-22 2007-01-16 Cisco Technology, Inc. SCSI-based storage area network having a SCSI router that routes traffic between SCSI and IP networks
JP3957278B2 (en) * 2002-04-23 2007-08-15 株式会社日立製作所 File transfer method and system
JP4087149B2 (en) * 2002-05-20 2008-05-21 株式会社日立製作所 Disk device sharing system and computer
US7219189B1 (en) * 2002-05-31 2007-05-15 Veritas Operating Corporation Automatic operating system handle creation in response to access control changes
US7873700B2 (en) * 2002-08-09 2011-01-18 Netapp, Inc. Multi-protocol storage appliance that provides integrated support for file and block access protocols
US20040078521A1 (en) * 2002-10-17 2004-04-22 International Business Machines Corporation Method, apparatus and computer program product for emulating an iSCSI device on a logical volume manager
CN100380878C (en) * 2002-11-12 2008-04-09 泽特拉公司 Communication protocols, systems and methods
US8005918B2 (en) * 2002-11-12 2011-08-23 Rateze Remote Mgmt. L.L.C. Data storage devices having IP capable partitions
US20040143733A1 (en) * 2003-01-16 2004-07-22 Cloverleaf Communication Co. Secure network data storage mediator
JP4345309B2 (en) * 2003-01-20 2009-10-14 株式会社日立製作所 Network storage device
US20040160975A1 (en) * 2003-01-21 2004-08-19 Charles Frank Multicast communication protocols, systems and methods
US20040199618A1 (en) * 2003-02-06 2004-10-07 Knight Gregory John Data replication solution
US7526527B1 (en) * 2003-03-31 2009-04-28 Cisco Technology, Inc. Storage area network interconnect server
US7817583B2 (en) * 2003-04-28 2010-10-19 Hewlett-Packard Development Company, L.P. Method for verifying a storage area network configuration
JP4278445B2 (en) * 2003-06-18 2009-06-17 株式会社日立製作所 Network system and switch
US8261037B2 (en) * 2003-07-11 2012-09-04 Ca, Inc. Storage self-healing and capacity planning system and method
US7523201B2 (en) * 2003-07-14 2009-04-21 Network Appliance, Inc. System and method for optimized lun masking
US7688733B1 (en) * 2003-08-04 2010-03-30 Sprint Communications Company L.P. System and method for bandwidth selection in a communication network
US20050050226A1 (en) * 2003-08-26 2005-03-03 Nils Larson Device mapping based on authentication user name
US20050108375A1 (en) * 2003-11-13 2005-05-19 Michele Hallak-Stamler Method and graphical user interface for managing and configuring multiple clusters of virtualization switches
JP2005190036A (en) * 2003-12-25 2005-07-14 Hitachi Ltd Storage controller and control method for storage controller
JP4463042B2 (en) * 2003-12-26 2010-05-12 株式会社日立製作所 Storage system having volume dynamic allocation function
US9178784B2 (en) 2004-04-15 2015-11-03 Raytheon Company System and method for cluster management based on HPC architecture
US8336040B2 (en) 2004-04-15 2012-12-18 Raytheon Company System and method for topology-aware job scheduling and backfilling in an HPC environment
US8335909B2 (en) 2004-04-15 2012-12-18 Raytheon Company Coupling processors to each other for high performance computing (HPC)
US7620981B2 (en) 2005-05-26 2009-11-17 Charles William Frank Virtual devices and virtual bus tunnels, modules and methods
US7644228B2 (en) * 2005-06-03 2010-01-05 Seagate Technology Llc Distributed storage system with global replication
US7984258B2 (en) * 2005-06-03 2011-07-19 Seagate Technology Llc Distributed storage system with global sparing
JP4839706B2 (en) * 2005-07-12 2011-12-21 株式会社日立製作所 Index management method for database management system
US8819092B2 (en) 2005-08-16 2014-08-26 Rateze Remote Mgmt. L.L.C. Disaggregated resources and access methods
JP2007072847A (en) * 2005-09-08 2007-03-22 Nec Corp Information processing system, separation hiding device, separation control method and program
JP4240496B2 (en) * 2005-10-04 2009-03-18 インターナショナル・ビジネス・マシーンズ・コーポレーション Apparatus and method for access control
US9270532B2 (en) 2005-10-06 2016-02-23 Rateze Remote Mgmt. L.L.C. Resource command messages and methods
US7586946B2 (en) * 2005-10-31 2009-09-08 Hewlett-Packard Development Company, L.P. Method and apparatus for automatically evaluating and allocating resources in a cell based system
US7496551B1 (en) * 2006-03-28 2009-02-24 Emc Corporation Methods and apparatus associated with advisory generation
US7921185B2 (en) * 2006-03-29 2011-04-05 Dell Products L.P. System and method for managing switch and information handling system SAS protocol communication
US7698519B2 (en) * 2006-08-31 2010-04-13 International Business Machines Corporation Backup of hierarchically structured storage pools
JP5184552B2 (en) * 2007-01-03 2013-04-17 レイセオン カンパニー Computer storage system
US8555275B1 (en) * 2007-04-26 2013-10-08 Netapp, Inc. Method and system for enabling an application in a virtualized environment to communicate with multiple types of virtual servers
US20080294665A1 (en) * 2007-05-25 2008-11-27 Dell Products L.P. Methods and Systems for Handling Data in a Storage Area Network
GB2460841B (en) * 2008-06-10 2012-01-11 Virtensys Ltd Methods of providing access to I/O devices
US9311319B2 (en) * 2009-08-27 2016-04-12 Hewlett Packard Enterprise Development Lp Method and system for administration of storage objects
JP6073246B2 (en) 2011-01-10 2017-02-01 ストローン リミテッド Large-scale storage system
US20130024614A1 (en) * 2011-07-20 2013-01-24 Balaji Natrajan Storage manager
US9037772B2 (en) * 2012-01-05 2015-05-19 Hewlett-Packard Development Company, L.P. Host based zone configuration
WO2014002094A2 (en) 2012-06-25 2014-01-03 Storone Ltd. System and method for datacenters disaster recovery
EP2976711A4 (en) 2013-03-21 2016-09-14 Storone Ltd Deploying data-path-related plugins
IL235729A (en) * 2014-11-17 2017-06-29 Kaluzhny Uri Secure storage device and method

Family Cites Families (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4404647A (en) 1978-03-16 1983-09-13 International Business Machines Corp. Dynamic array error recovery
GB2023314B (en) 1978-06-15 1982-10-06 Ibm Digital data processing systems
JPS59160899A (en) 1982-12-09 1984-09-11 セコイア・システムス・インコ−ポレ−テツド Memory backup system
US4611298A (en) 1983-06-03 1986-09-09 Harding And Harris Behavioral Research, Inc. Information storage and retrieval system and method
JPS60142418A (en) 1983-12-28 1985-07-27 Hitachi Ltd Input/output error recovery system
BR8503913A (en) 1984-08-18 1986-05-27 Fujitsu Ltd ERROR RECOVERY SYSTEM AND PROCESS IN A CHANNEL DATA PROCESSOR HAVING A CONTROL MEMORY DEVICE AND ERROR RECOVERY PROCESS IN A CHANNEL TYPE DATA PROCESSOR
JP2900359B2 (en) 1986-10-30 1999-06-02 株式会社日立製作所 Multiprocessor system
US5257367A (en) 1987-06-02 1993-10-26 Cab-Tek, Inc. Data storage system with asynchronous host operating system communication link
US4942579A (en) 1987-06-02 1990-07-17 Cab-Tek, Inc. High-speed, high-capacity, fault-tolerant error-correcting storage system
US5051887A (en) 1987-08-25 1991-09-24 International Business Machines Corporation Maintaining duplex-paired storage devices during gap processing using of a dual copy function
US5129088A (en) 1987-11-30 1992-07-07 International Business Machines Corporation Data processing method to create virtual disks from non-contiguous groups of logically contiguous addressable blocks of direct access storage device
US5136523A (en) 1988-06-30 1992-08-04 Digital Equipment Corporation System for automatically and transparently mapping rules and objects from a stable storage database management system within a forward chaining or backward chaining inference cycle
US5175849A (en) 1988-07-28 1992-12-29 Amdahl Corporation Capturing data of a database system
US4996687A (en) 1988-10-11 1991-02-26 Honeywell Inc. Fault recovery mechanism, transparent to digital system function
US5167011A (en) 1989-02-15 1992-11-24 W. H. Morris Method for coodinating information storage and retrieval
CA2017458C (en) 1989-07-24 2000-10-10 Jonathan R. Engdahl Intelligent network interface circuit
US5212789A (en) 1989-10-12 1993-05-18 Bell Communications Research, Inc. Method and apparatus for updating application databases used in a distributed transaction processing environment
US5276867A (en) 1989-12-19 1994-01-04 Epoch Systems, Inc. Digital data storage system with improved data migration
US5185884A (en) 1990-01-24 1993-02-09 International Business Machines Corporation Computer controlled optimized pairing of disk units
US5138710A (en) 1990-04-25 1992-08-11 Unisys Corporation Apparatus and method for providing recoverability in mass storage data base systems without audit trail mechanisms
DE69028517D1 (en) 1990-05-11 1996-10-17 Ibm Method and device for deriving the state of a mirrored unit when reinitializing a system
JPH0444673A (en) 1990-06-11 1992-02-14 Toshiba Corp Defective information recording system for disk device
US5155845A (en) 1990-06-15 1992-10-13 Storage Technology Corporation Data storage system for providing redundant copies of data on different disk drives
US5157663A (en) 1990-09-24 1992-10-20 Novell, Inc. Fault tolerant computer system
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US5206939A (en) 1990-09-24 1993-04-27 Emc Corporation System and method for disk mapping and data retrieval
US5390313A (en) 1990-09-24 1995-02-14 Emc Corporation Data storage system with data mirroring and reduced access time data retrieval
US5561815A (en) 1990-10-02 1996-10-01 Hitachi, Ltd. System and method for control of coexisting code and image data in memory
US5212784A (en) 1990-10-22 1993-05-18 Delphi Data, A Division Of Sparks Industries, Inc. Automated concurrent data backup system
US5528759A (en) 1990-10-31 1996-06-18 International Business Machines Corporation Method and apparatus for correlating network management report messages
AU8683991A (en) 1990-11-09 1992-05-14 Array Technology Corporation Logical partitioning of a redundant array storage system
JP2603757B2 (en) 1990-11-30 1997-04-23 富士通株式会社 Method of controlling array disk device
US5235601A (en) 1990-12-21 1993-08-10 Array Technology Corporation On-line restoration of redundancy information in a redundant array system
US5317731A (en) 1991-02-25 1994-05-31 International Business Machines Corporation Intelligent page store for concurrent and consistent access to a database by a transaction processor and a query processor
US5367682A (en) 1991-04-29 1994-11-22 Steven Chang Data processing virus protection circuitry including a permanent memory for storing a redundant partition table
US5321813A (en) 1991-05-01 1994-06-14 Teradata Corporation Reconfigurable, fault tolerant, multistage interconnect network and protocol
US5278838A (en) 1991-06-18 1994-01-11 Ibm Corp. Recovery from errors in a redundant array of disk drives
US5559958A (en) 1991-06-24 1996-09-24 Compaq Computer Corporation Graphical user interface for computer management system and an associated management information base
US5347653A (en) 1991-06-28 1994-09-13 Digital Equipment Corporation System for reconstructing prior versions of indexes using records indicating changes between successive versions of the indexes
US5325505A (en) 1991-09-04 1994-06-28 Storage Technology Corporation Intelligent storage manager for data storage apparatus having simulation capability
US5481701A (en) 1991-09-13 1996-01-02 Salient Software, Inc. Method and apparatus for performing direct read of compressed data file
JP2793399B2 (en) 1991-12-09 1998-09-03 日本電気株式会社 Buffer device
JP3160106B2 (en) 1991-12-23 2001-04-23 ヒュンダイ エレクトロニクス アメリカ How to sort disk arrays
US5745789A (en) 1992-01-23 1998-04-28 Hitachi, Ltd. Disc system for holding data in a form of a plurality of data blocks dispersed in a plurality of disc units connected by a common data bus
JPH05224822A (en) 1992-02-12 1993-09-03 Hitachi Ltd Collective storage device
WO1993018456A1 (en) 1992-03-13 1993-09-16 Emc Corporation Multiple controller sharing in a redundant storage array
US5263154A (en) 1992-04-20 1993-11-16 International Business Machines Corporation Method and system for incremental time zero backup copying of data
WO1993023811A2 (en) 1992-05-13 1993-11-25 Southwestern Bell Technology Resources, Inc. Open architecture interface storage controller
US5596736A (en) 1992-07-22 1997-01-21 Fujitsu Limited Data transfers to a backing store of a dynamically mapped data storage system in which data has nonsequential logical addresses
US5404361A (en) 1992-07-27 1995-04-04 Storage Technology Corporation Method and apparatus for ensuring data integrity in a dynamically mapped data storage subsystem
US5497483A (en) 1992-09-23 1996-03-05 International Business Machines Corporation Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
US5375232A (en) 1992-09-23 1994-12-20 International Business Machines Corporation Method and system for asynchronous pre-staging of backup copies in a data processing storage subsystem
US5553235A (en) 1992-10-23 1996-09-03 International Business Machines Corporation System and method for maintaining performance data in a data processing system
US5495601A (en) 1992-12-11 1996-02-27 International Business Machines Corporation Method to off-load host-based DBMS predicate evaluation to a disk controller
JP3422370B2 (en) 1992-12-14 2003-06-30 株式会社日立製作所 Disk cache controller
GB2273584B (en) 1992-12-16 1997-04-16 Quantel Ltd A data storage apparatus
US5771367A (en) 1992-12-17 1998-06-23 International Business Machines Corporation Storage controller and method for improved failure recovery using cross-coupled cache memories and nonvolatile stores
US5689678A (en) 1993-03-11 1997-11-18 Emc Corporation Distributed storage array system having a plurality of modular control units
US5715393A (en) 1993-08-16 1998-02-03 Motorola, Inc. Method for remote system process monitoring
US5432922A (en) 1993-08-23 1995-07-11 International Business Machines Corporation Digital storage system and method having alternating deferred updating of mirrored storage disks
US5619694A (en) 1993-08-26 1997-04-08 Nec Corporation Case database storage/retrieval system
JP3078972B2 (en) 1993-11-05 2000-08-21 富士通株式会社 Disk array device
US5583994A (en) 1994-02-07 1996-12-10 Regents Of The University Of California System for efficient delivery of multimedia information using hierarchical network of servers selectively caching program for a selected time period
US5566316A (en) 1994-02-10 1996-10-15 Storage Technology Corporation Method and apparatus for hierarchical management of data storage elements in an array storage device
JP3745398B2 (en) 1994-06-17 2006-02-15 富士通株式会社 File disk block control method
US5504882A (en) 1994-06-20 1996-04-02 International Business Machines Corporation Fault tolerant data storage subsystem employing hierarchically arranged controllers
US5435004A (en) 1994-07-21 1995-07-18 International Business Machines Corporation Computerized system and method for data backup
ATE197213T1 (en) 1994-07-22 2000-11-15 Koninkl Kpn Nv METHOD FOR GENERATING CONNECTIONS IN A COMMUNICATIONS NETWORK
US5537533A (en) 1994-08-11 1996-07-16 Miralink Corporation System and method for remote mirroring of digital data from a primary network server to a remote network server
US5625818A (en) 1994-09-30 1997-04-29 Apple Computer, Inc. System for managing local database updates published to different online information services in different formats from a central platform
US5671439A (en) 1995-01-10 1997-09-23 Micron Electronics, Inc. Multi-drive virtual mass storage device and method of operating same
GB9501378D0 (en) 1995-01-24 1995-03-15 Ibm A system and method for establishing a communication channel over a heterogeneous network between a source node and a destination node
US5680580A (en) 1995-02-28 1997-10-21 International Business Machines Corporation Remote copy system for setting request interconnect bit in each adapter within storage controller and initiating request connect frame in response to the setting bit
US5692155A (en) 1995-04-19 1997-11-25 International Business Machines Corporation Method and apparatus for suspending multiple duplex pairs during back up processing to insure storage devices remain synchronized in a sequence consistent order
US5659787A (en) 1995-05-26 1997-08-19 Sensormatic Electronics Corporation Data communication network with highly efficient polling procedure
US5710918A (en) 1995-06-07 1998-01-20 International Business Machines Corporation Method for distributed task fulfillment of web browser requests
US5761410A (en) 1995-06-28 1998-06-02 International Business Machines Corporation Storage management mechanism that detects write failures that occur on sector boundaries
US5768623A (en) 1995-09-19 1998-06-16 International Business Machines Corporation System and method for sharing multiple storage arrays by dedicating adapters as primary controller and secondary controller for arrays reside in different host computers
US5805919A (en) 1995-10-05 1998-09-08 Micropolis Corporation Method and system for interleaving the distribution of data segments from different logical volumes on a single physical drive
US5740397A (en) 1995-10-11 1998-04-14 Arco Computer Products, Inc. IDE disk drive adapter for computer backup and fault tolerance
US5710885A (en) 1995-11-28 1998-01-20 Ncr Corporation Network management system with improved node discovery and monitoring
US5774680A (en) 1995-12-11 1998-06-30 Compaq Computer Corporation Interfacing direct memory access devices to a non-ISA bus
US5809328A (en) 1995-12-21 1998-09-15 Unisys Corp. Apparatus for fibre channel transmission having interface logic, buffer memory, multiplexor/control device, fibre channel controller, gigabit link module, microprocessor, and bus control device
US5787304A (en) 1996-02-05 1998-07-28 International Business Machines Corporation Multipath I/O storage systems with multipath I/O request mechanisms
US5761507A (en) 1996-03-05 1998-06-02 International Business Machines Corporation Client/server architecture supporting concurrent servers within a server with a transaction manager providing server/connection decoupling
US5673322A (en) 1996-03-22 1997-09-30 Bell Communications Research, Inc. System and method for providing protocol translation and filtering to access the world wide web from wireless or low-bandwidth networks
US5764913A (en) 1996-04-05 1998-06-09 Microsoft Corporation Computer network status monitoring system
US5790774A (en) 1996-05-21 1998-08-04 Storage Computer Corporation Data storage system with dedicated allocation of parity storage and parity reads and writes only on operations requiring parity information
US5720027A (en) 1996-05-21 1998-02-17 Storage Computer Corporation Redundant disc computer having targeted data broadcast
US5673382A (en) 1996-05-30 1997-09-30 International Business Machines Corporation Automated management of off-site storage volumes for disaster recovery
US5809332A (en) 1996-06-03 1998-09-15 Emc Corporation Supplemental communication between host processor and mass storage controller using modified diagnostic commands
US5765204A (en) 1996-06-05 1998-06-09 International Business Machines Corporation Method and apparatus for adaptive localization of frequently accessed, randomly addressed data
US5732238A (en) 1996-06-12 1998-03-24 Storage Computer Corporation Non-volatile cache for providing data integrity in operation with a volatile demand paging cache in a data storage system
US5748897A (en) 1996-07-02 1998-05-05 Sun Microsystems, Inc. Apparatus and method for operating an aggregation of server computers using a dual-role proxy server computer
US5787485A (en) 1996-09-17 1998-07-28 Marathon Technologies Corporation Producing a mirrored copy using reference labels
US5812754A (en) 1996-09-18 1998-09-22 Silicon Graphics, Inc. Raid system with fibre channel arbitrated loop
US5787470A (en) 1996-10-18 1998-07-28 At&T Corp Inter-cache protocol for improved WEB performance
US6654830B1 (en) * 1999-03-25 2003-11-25 Dell Products L.P. Method and system for managing data migration for a storage system
US6640278B1 (en) * 1999-03-25 2003-10-28 Dell Products L.P. Method for configuration and management of storage resources in a storage network
US6587959B1 (en) * 1999-07-28 2003-07-01 Emc Corporation System and method for addressing scheme for use on clusters
US6633962B1 (en) * 2000-03-21 2003-10-14 International Business Machines Corporation Method, system, program, and data structures for restricting host access to a storage space
US6393535B1 (en) * 2000-05-02 2002-05-21 International Business Machines Corporation Method, system, and program for modifying preferred path assignments to a storage device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000000889A2 (en) * 1998-06-30 2000-01-06 Emc Corporation Method and apparatus for providing data management for a storage system coupled to a network
WO2000029954A1 (en) * 1998-11-14 2000-05-25 Mti Technology Corporation Logical unit mapping in a storage area network (san) environment

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003088050A1 (en) * 2002-04-05 2003-10-23 Cisco Technology, Inc. Apparatus and method for defining a static fibre channel fabric
US7606167B1 (en) 2002-04-05 2009-10-20 Cisco Technology, Inc. Apparatus and method for defining a static fibre channel fabric
US8098595B2 (en) 2002-04-05 2012-01-17 Cisco Technology, Inc. Apparatus and method for defining a static fibre channel fabric
WO2003096190A1 (en) * 2002-05-10 2003-11-20 Silicon Graphics, Inc. Real-time storage area network
US7818424B2 (en) 2002-05-10 2010-10-19 Silicon Graphics International Real-time storage area network
US8589499B2 (en) 2002-05-10 2013-11-19 Silicon Graphics International Corp. Real-time storage area network
US9386100B2 (en) 2002-05-10 2016-07-05 Silicon Graphics International Corp. Real-time storage area network
EP1484668A3 (en) * 2003-06-02 2008-05-28 Hitachi, Ltd. Storage system control method, storage system and storage apparatus
EP1669873A2 (en) 2003-09-17 2006-06-14 Hitachi, Ltd. Remote storage disk control device with function to transfer commands to remote storage devices
EP1669873A3 (en) * 2003-09-17 2010-08-25 Hitachi, Ltd. Remote storage disk control device with function to transfer commands to remote storage devices
CN106598908A (en) * 2015-10-19 2017-04-26 阿里巴巴集团控股有限公司 Computing device and management method and system of storage component thereof

Also Published As

Publication number Publication date
AU2001234413A1 (en) 2002-04-02
WO2002025446A3 (en) 2003-10-16
US6977927B1 (en) 2005-12-20

Similar Documents

Publication Publication Date Title
US6977927B1 (en) Method and system of allocating storage resources in a storage area network
US6606690B2 (en) System and method for accessing a storage area network as network attached storage
US10326846B1 (en) Method and apparatus for web based storage on-demand
US7506073B2 (en) Session-based target/LUN mapping for a storage area network and associated method
JP4455137B2 (en) Storage subsystem management method
US7216148B2 (en) Storage system having a plurality of controllers
US7996560B2 (en) Managing virtual ports in an information processing system
JP4691251B2 (en) Storage router and method for providing virtual local storage
US7522616B2 (en) Method and apparatus for accessing remote storage using SCSI and an IP network
US20020161983A1 (en) System, method, and computer program product for shared device of storage compacting
US7870271B2 (en) Disk drive partitioning methods and apparatus
EP1291755A2 (en) Storage system, a method of file data back up and a method of copying of file data
US7281062B1 (en) Virtual SCSI bus for SCSI-based storage area network
JP2004506980A (en) Architecture for providing block-level storage access over a computer network
JP2010092475A (en) Architecture for generating and maintaining virtual filer on filer
US9602600B1 (en) Method and apparatus for web based storage on-demand
KR100834361B1 Efficiently supporting multiple native network protocol implementations in a single system
US7002956B2 (en) Network addressing method and system for localizing access to network resources in a computer network
CN109302494A (en) A kind of configuration method of network store system, device, equipment and medium
US7751398B1 (en) Techniques for prioritization of messaging traffic
US20040143648A1 (en) Short-cut response for distributed services
US7698424B1 (en) Techniques for presenting multiple data storage arrays to iSCSI clients as a single aggregated network array
CN115623081A (en) Data downloading method, data uploading method and distributed storage system
EP1379956A1 (en) System, method and computer program product for shared device of storage compacting

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP