METHOD AND APPARATUS FOR ACCESSING
BACKGROUND OF THE RELATED ART
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
In the field of computer systems, it may be desirable for information to be transferred from a system memory associated with one computer system to a system memory associated with another computer system. The information may be transmitted by upper layer protocols ("ULP"), which may be referred to as consumers, through a network that connects the computer systems together. Many protocols or strategies for transferring data between the memories of computer systems employ queue pairs ("QPs"). Each QP may include a send queue ("SQ") and a receive queue ("RQ"). Typically, each computer involved in a transfer will have both a send queue and a receive queue.
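The queue pair arrangement described above can be sketched as a simple data structure. This is an illustrative sketch only; the class and field names are assumptions, not part of any specification.

```python
from collections import deque

class QueuePair:
    """Hypothetical sketch of a queue pair: a send queue ("SQ")
    and a receive queue ("RQ"), one pair per endpoint."""
    def __init__(self):
        self.send_queue = deque()     # SQ: outbound work requests
        self.receive_queue = deque()  # RQ: buffers posted for inbound data

# Each computer involved in a transfer owns its own QP.
local_qp, remote_qp = QueuePair(), QueuePair()
local_qp.send_queue.append({"op": "send", "data": b"payload"})
remote_qp.receive_queue.append(bytearray(64))  # posted receive buffer
```

In a real transport, the message at the head of the local SQ would be delivered into a buffer posted on the remote RQ.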
Queue pairs may be defined to expose a memory segment, such as a memory window or memory region, within the local system to a remote system. The information about the memory windows and memory regions may be maintained within a memory translation and protection table ("TPT"). The entries in the TPT may be accessed by steering tags ("STags"), which indicate a specific entry within the TPT. In addition to the TPT, a physical address table ("PAT") may be implemented to convert the fields of the entries in the TPT to physical addresses of memory.
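The two-level lookup described above (STag into TPT, TPT fields into PAT) can be sketched as follows. The table contents and field names here are purely illustrative assumptions.

```python
# Hypothetical sketch: an STag selects a TPT entry, and that entry's
# fields index a physical address table (PAT) of memory pages.
tpt = {0x10: {"pat_base": 0}, 0x11: {"pat_base": 2}}   # STag -> TPT entry
pat = [0x9000, 0xA000, 0xB000, 0xC000]                  # physical page addresses

def resolve(stag):
    entry = tpt[stag]                 # STag indicates a specific TPT entry
    return pat[entry["pat_base"]]     # TPT fields are converted via the PAT

addr = resolve(0x11)                  # physical address for STag 0x11
```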
However, before the memory segments may be accessed, either locally or remotely, the upper layer protocols may perform various steps to exchange information relating to the memory segment. For instance, the memory segment may first be registered to allow access to that memory segment from the local system or a remote system. Upon completion of the registration, the upper layer protocol may create and send a message with the information relating to the memory segment. The registration process is time consuming and expensive in terms of computing resources. As such, for each command sent from the upper layer protocol, the extensive registration process may result in excessive delays and inefficiencies in the operation of the computer system.
BRIEF DESCRIPTION OF THE DRAWINGS
Advantages of the invention may become apparent upon reading the following detailed description and upon reference to the drawings in which:
FIG. 1 is a block diagram illustrating a computer network in accordance with embodiments of the present invention;
FIG. 2 is a block diagram that illustrates the use of a queue pair to transfer data between devices in accordance with embodiments of the present invention;
FIG. 3 is a block diagram showing the processing of a memory request from a consumer to memory employing an STag in accordance with embodiments of the present invention; and
FIG. 4 is a process flow diagram of a process in accordance with embodiments of the present invention.
DESCRIPTION OF SPECIFIC EMBODIMENTS
One or more specific embodiments of the present invention will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions may be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
The Remote Direct Memory Access ("RDMA") Consortium, which includes the assignee of the present invention, is developing specifications to improve the ability of computer systems to remotely access the memory of other computer systems. One such specification under development is the RDMA Consortium Protocols Verb specification, which is hereby incorporated by reference. The verbs defined by this specification may correspond to commands or actions that may form a command interface for data transfers between memories in computer systems, including the formation and management of queue pairs, memory windows, protection domains and the like.
RDMA may refer to the ability of one computer to directly place information in the memory space of another computer, while minimizing demands on the central processing unit ("CPU") and memory bus. In an RDMA system, an RDMA layer may interoperate over any physical layer in a Local Area Network ("LAN"), Server Area Network ("SAN"), Metropolitan Area Network ("MAN"), or Wide Area Network ("WAN").
Referring now to FIG. 1, a block diagram illustrating a computer network in accordance with embodiments of the present invention is illustrated. The computer network is indicated by the reference numeral 100 and may comprise a first processor node 102 and a second processor node 110, which may be connected to a plurality of I/O devices 126, 130, 134, and 138 via a switch network 118. Each of the I/O devices 126, 130, 134 and 138 may utilize a Remote Direct Memory Access-enabled Network Interface Card ("RNIC") to communicate with the other systems. In FIG. 1, the RNICs associated with the I/O devices 126, 130, 134 and 138 are identified by the reference numerals 124, 128, 132 and 136, respectively. The I/O devices 126, 130, 134, and 138 may access the memory space of other RDMA-enabled devices via their respective RNICs and the switch network 118.
The topology of the network 100 is for purposes of illustration only. Those of ordinary skill in the art will appreciate that the topology of the network 100 may take on a variety of forms based on a wide range of design considerations. Additionally, NICs that operate according to other protocols, such as InfiniBand, may be employed in networks that employ such protocols for data transfer.
The first processor node 102 may include a CPU 104, a memory 106, and an RNIC 108. Although only one CPU 104 is illustrated in the processor node 102, those of ordinary skill in the art will appreciate that multiple CPUs may be included therein. The CPU 104 may be connected to the memory 106 and the RNIC 108 over an internal bus or connection. The
memory 106 may be utilized to store information for use by the CPU 104, the RNIC 108 or other systems or devices. The memory 106 may include various types of memory such as Static Random Access Memory ("SRAM") or Dynamic Random Access Memory ("DRAM").
The second processor node 110 may include a CPU 112, a memory 114, and an RNIC 116. Although only one CPU 112 is illustrated in the processor node 110, those of ordinary skill in the art will appreciate that multiple CPUs may be included therein. The CPU 112, which may include a plurality of processors, may be connected to the memory 114 and the RNIC 116 over an internal bus or connection. The memory 114 may be utilized to store information for use by the CPU 112, the RNIC 116 or other systems or devices. The memory 114 may utilize various types of memory such as SRAM or DRAM.
The switch network 118 may include any combination of hubs, switches, routers and the like. In FIG. 1, the switch network 118 comprises switches 120A-120C. The switch 120A connects to the switch 120B, the RNIC 108 of the first processor node 102, the RNIC 124 of the I/O device 126 and the RNIC 128 of the I/O device 130. In addition to its connection to the switch 120A, the switch 120B connects to the switch 120C and the RNIC 132 of the I/O device 134. In addition to its connection to the switch 120B, the switch 120C connects to the RNIC 116 of the second processor node 110 and the RNIC 136 of the I/O device 138.
Each of the processor nodes 102 and 110 and the I/O devices 126, 130, 134, and 138 may be given equal priority and the same access to the memory 106 or 114. In addition, the memories may be accessible by remote devices such as the I/O devices 126, 130, 134 and 138 via the switch network 118. The first processor node 102, the second processor node 110 and the I/O devices 126, 130, 134 and 138 may exchange information using queue pairs ("QPs"). The exchange of information using QPs is explained with reference to FIG. 2.
FIG. 2 is a block diagram that illustrates the use of a queue pair to transfer data between devices in accordance with embodiments of the present invention. The figure is generally referred to by the reference numeral 200. In FIG. 2, a first node 202 and a second node 204 may exchange information using a QP. The first node 202 and the second node 204 may correspond to any two of the first processor node 102, the second processor node 110 or the I/O devices 126, 130, 134 and 138 (FIG. 1). As set forth above with respect to FIG. 1, any of these devices may exchange information in an RDMA environment.
The first node 202 may include a first consumer 206, which may interact with an RNIC 208. The first consumer 206 may comprise a software process that may interact with various components of the RNIC 208. The RNIC 208 may correspond to one of the RNICs 108, 116, 124, 128, 132 or 136 (FIG. 1), depending on which of the devices associated with those RNICs is participating in the data transfer. The RNIC 208 may comprise a send queue 210, a receive queue 212, a completion queue ("CQ") 214, a memory translation and protection table ("TPT") 216, a memory 217 and a QP context 218.
The second node 204 may include a second consumer 220, which may interact with an RNIC 222. The second consumer 220 may comprise a software process that may interact with various components of the RNIC 222. The RNIC 222 may correspond to one of the RNICs 108, 116, 124, 128, 132 or 136 (FIG. 1), depending on which of the devices associated with those RNICs is participating in the data transfer. The RNIC
222 may comprise a send queue 224, a receive queue 226, a completion queue 228, a TPT 230, a memory 234 and a QP context 232.
The memories 217 and 234 may be registered to different processes, each of which may correspond to the consumers 206 and 220. The memories 217 and 234 may comprise a portion of the main memory of the nodes 202 and 204, memory within the RNICs 208 and 222, or other memory associated with the nodes 202 and 204. The queues 210, 212, 214, 224, 226, or 228 may be used to transmit and receive various verbs or commands, such as control operations or transfer operations. The completion queue 214 or 228 may store information regarding the sending status of items on the send queue 210 or 224 and receiving status of items on the receive queue ("RQ") 212 or 226. The TPT 216 or 230 may comprise a simple table or an array of page specifiers that may include a variety of configuration information in relation to the memories 217 or 234.
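The role of the completion queue described above can be sketched briefly. The record layout is an illustrative assumption; a real RNIC defines its own completion entry format.

```python
# Hypothetical sketch: a completion queue records the status of work
# items taken from the send queue (SQ) and receive queue (RQ).
completion_queue = []

def complete(queue_name, work_id, status):
    """Append a completion record for a finished SQ or RQ item."""
    completion_queue.append({"queue": queue_name, "id": work_id, "status": status})

complete("SQ", 1, "success")   # a send finished
complete("RQ", 7, "success")   # a receive finished

pending_errors = [c for c in completion_queue if c["status"] != "success"]
```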
The QP associated with the RNIC 208 may comprise the send queue 210 and the receive queue 212. The QP associated with the RNIC 222 may comprise the send queue 224 and the receive queue 226. The arrows between the send queue 210 and the receive queue 226 and between the send queue 224 and the receive queue 212 indicate the flow of data or information therebetween. Before communication between the RNICs 208 and 222 (and their associated QPs) may occur, the QPs may be established and configured by an exchange of commands or verbs between the RNIC 208 and the RNIC 222. The creation of the QP may be initiated by the first consumer 206 or the second consumer 220, depending on which consumer desires to transfer data to or retrieve data from the other consumer.
Information relating to the configuration of the QPs may be stored in the QP context 218 of the RNIC 208 and the QP context 232 of the RNIC 222. For instance, the QP context 218 or 232 may include information relating to a protection domain ("PD"), access rights, send queue information, receive queue information, completion queue information, different modes of tags, or information about a local port connected to the QP and/or remote port connected to the QP. However, it should be appreciated that the RNIC 208 or 222 may include multiple QPs that support different consumers with the QPs being associated with one of a number of CQs.
To prevent interferences in the memories 217 or 234, the memories 217 or 234 may be divided into memory regions ("MRs"), which may contain memory windows ("MWs"). An entry in the TPT 216 or 230 may describe the memory regions and may include a virtual to physical mapping of a portion of the address space allocated to a process. A physical address table ("PAT") may also be used to perform memory mapping. Memory regions may be registered with the associated RNIC 208 or 222 and the operating system ("OS"). The nodes 202 and 204 may send a unique steering field or steering tag ("STag") to identify the memory 217 or 234 to be accessed, which may correspond to the memory region or memory window. Access to a memory region by a designated QP may be restricted to STags that have the same protection domain.
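The protection-domain restriction described above can be sketched as a check performed before a memory region is handed to a QP. The table layout and function names are assumptions for illustration.

```python
# Hypothetical sketch: access to a memory region by a QP is permitted
# only when the STag's protection domain matches the QP's.
memory_regions = {0x20: {"pd": 1, "data": bytearray(16)}}  # STag -> region

def check_access(qp_pd, stag):
    region = memory_regions[stag]
    if region["pd"] != qp_pd:          # protection domains must match
        raise PermissionError("protection domain mismatch")
    return region["data"]

buf = check_access(qp_pd=1, stag=0x20)  # allowed: same protection domain
```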
The STag may identify a buffer, within the memory 217 or 234, being referenced for a given data transfer. A tagged offset ("TO") may be associated with the STag and may correspond to an offset into the associated buffer. Alternatively, a transfer may be identified by a queue number, a message sequence number and message offset. The queue number may be a 32-bit field, which identifies the queue being referenced. The message sequence number may be a 32-bit field that may be
used as a sequence number for a communication, while the message offset may be a 32-bit field offset from the start of the message.
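The alternative identification scheme above, three 32-bit fields naming the queue, the message sequence number, and the message offset, can be sketched as a packed header. The byte order and field arrangement are assumptions for illustration.

```python
import struct

# Hypothetical sketch: identify a transfer by three 32-bit fields --
# queue number, message sequence number (MSN), and message offset.
def pack_untagged(queue_number, msn, offset):
    return struct.pack(">III", queue_number, msn, offset)

def unpack_untagged(header):
    return struct.unpack(">III", header)

hdr = pack_untagged(7, 1024, 0)   # three 32-bit fields -> 12 bytes
```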
To access one of the memories 217 and 234, the consumer 206 or 220 may issue a verb or command that may result in the generation of a request, such as an RDMA read or write request or a work request ("WR"). For example, the request may be a WR, which may include a list of memory locations that may have data that is to be accessed. This list, which may be referred to as a scatter/gather list ("SGL"), may reference the TPT 216 or 230. The SGL may be a list or collection of information in a table or array that may point to local data segments of the memory 217 or 234. For instance, each element in the SGL may include a local STag, a local tagged offset (i.e., a virtual address), and a length. The interaction between the consumer 206 or 220 and the memory 217 or 234 in the context of data transfers employing STags is explained with reference to FIG. 3.
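An SGL of the kind described above can be sketched as a list of elements, each carrying an STag, a tagged offset, and a length. The values are illustrative only.

```python
# Hypothetical sketch of a scatter/gather list (SGL): each element holds
# a local STag, a local tagged offset (virtual address), and a length.
sgl = [
    {"stag": 0x10, "tagged_offset": 0x0000, "length": 4096},
    {"stag": 0x10, "tagged_offset": 0x2000, "length": 512},
]

# A work request referencing this SGL would cover this many bytes:
total = sum(element["length"] for element in sgl)
```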
FIG. 3 is a block diagram showing the interaction of a consumer and memory by employing STags in accordance with embodiments of the present invention. The diagram shown in FIG. 3 is generally referred to by the reference numeral 300. In this diagram 300, a consumer 301, which may be an upper layer protocol, external device, the first consumer 206, or the second consumer 220 of FIG. 2 or the like, may issue a request 302 to access a location in a memory 346, which may be the memory 217 or 234 of FIG. 2. The request 302 may include an STag 306 that references an entry in a TPT 312, which may be the TPT 216 or 230 of FIG. 2. The TPT entries 314-318 may reference a physical address table ("PAT") 338 that may in turn reference a segment 348 of memory 346. By using this memory access mechanism, the consumer 301 may use the STag 306 to access the memory segment 348.
For the consumer 301 to access the memory 346, the request 302 may include various fields and information that indicate and control the access to the specific location within the memory 346. For instance, the request 302 may correspond to a memory access operation and may include an SGL element 304. The SGL element 304 may include information, such as the STag 306, a tagged offset 308, and a length 310. The STag 306 may be a 32-bit identifier that is used to access the memory 217 or 234. To function as an identifier, the STag 306 may be divided into fields of steering information, such as an STag Key and an STag Index. The STag Key may be provided by the consumer 301 for error detection and correction or to provide security. Also, the STag Index may be managed by the RNIC, such as the RNIC 208 or 222 of FIG. 2, to refer to entries in the TPT 312. Furthermore, the tagged offset 308 ("TO") may identify the offset in an appropriate buffer or, alternatively, a physical address. The length 310 may be the base and bounds of the memory segment 348 that is being referenced.
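Splitting the 32-bit STag into its Key and Index fields can be sketched with simple bit operations. The field widths (an 8-bit key in the low-order byte, a 24-bit index above it) are an assumption for illustration; the text does not fix the exact layout.

```python
# Hypothetical sketch: divide a 32-bit STag into steering fields.
# Assumed layout: low 8 bits = STag Key, upper 24 bits = STag Index.
def split_stag(stag):
    key = stag & 0xFF    # consumer-supplied key (error detection / security)
    index = stag >> 8    # RNIC-managed index referring to a TPT entry
    return key, index

key, index = split_stag(0x00ABCD42)
```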
In accessing the TPT 312, the STag 306 within the SGL element 304 may correspond to a specific entry 314-318 within the TPT 312, which may correspond to the TPTs 216 and 230 of FIG. 2. The TPT entries ("TPTE") 314-318 each may describe an associated memory region or memory window that also includes various fields regarding the location and access rights for the memory segment, such as the memory segment 348. Each of the TPTEs 314-318 may include a group of reference bits 320-324, physical address table ("PAT") base address bits 326-330 and additional information bits 332-336. The reference bits 320-324 may relate the STag 306 to a specific entry within the TPT 312. For instance, if the STag 306 includes the reference bits 324, then the request 302 may be directed to the TPTE 318. The additional information bits 332-336 may include access controls,
key instance data, protection domain data, protection validation bits, window reference count, physical address table size, page size, first page offset, length, or a physical address table pointer, for example.
To access the memory segment 348, the PAT base addresses 326-330 may be utilized for either virtual addressing or physical addressing. For instance, in virtual addressing, the requesting consumer, such as the consumer 301, may not have data about the physical address configuration of the memory segment 348 being accessed. In that situation, the PAT base addresses 326-330, which may correspond to a base address of the PAT 338, may be combined with a portion of the TO 308 to index the PAT 338. The combination may access a corresponding physical address 340-344 in the PAT 338. The combination of the PAT base address 326-330 with at least a portion of the TO 308 may be an arithmetic combination that is subject to adjustment depending on attributes of the associated memory location, memory region, or memory window that is the memory segment 348. The physical addresses 340-344 of the PAT 338 may correspond to the memory segment 348. For instance, the PAT base address 330 may reference the physical address 344 in the PAT 338, which references the memory segment 348.
Alternatively, if physical addressing is utilized, then the PAT base addresses 326-330 may correspond to the memory segment 348. In this situation, the PAT base addresses 326-330 may relate to the specific locations in the memory 346. For instance, the PAT base address 330 may contain the memory address of the memory segment 348. As such, access to the TPTE 318 may provide the physical address of the memory segment 348 without the use of the PAT 338.
In accessing the specific entries 314-318 of the TPT 312, the STag 306 within the request 302 may be created and managed through different processes. For instance, the STag 306 may be created through a normal process that involves registering the memory 346, which may form a regular STag. The regular STag may be created upon the issuance of a command from the consumer 301. Alternatively, the STag 306 may be created through a fast process that preregisters STags so that the memory registration is complete before a request is received, which may form a Physical STag. The Physical STag may be created prior to the command being issued from the consumer 301, which may reduce delays by removing the memory registration process from the path of issuing a command. For instance, if a large data exchange is sent from the consumer, the regular STag may be used because the cost of the memory registration is absorbed over the volume of exchanges that take place. However, when a single command is being sent from the consumer 301, the slower process with the memory registration delays the transfer of the command. For this single command or smaller transfer, the Physical STag may benefit the system because the resource allocation associated with the memory registration is completed before the command is issued. The operation of these two STag processes is shown in greater detail in FIG. 4.
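The contrast between the two STag paths can be sketched as follows: the regular path performs registration on the command path, while the preregistered (Physical STag) path completes it in advance. All names here are illustrative assumptions.

```python
# Hypothetical sketch of the two STag creation paths.
registered = set()

def register(stag):
    """Stand-in for the time-consuming memory registration step."""
    registered.add(stag)

def issue_command(stag, preregistered=False):
    steps = []
    if not preregistered:
        register(stag)            # regular STag: register at command time
        steps.append("register")
    assert stag in registered      # either way, registration must be done
    steps.append("send")
    return steps

register(0x11)                                  # done ahead of time
fast = issue_command(0x11, preregistered=True)  # Physical STag path
slow = issue_command(0x22)                      # regular STag path
```

For a single small command, the preregistered path avoids the registration delay; for a large exchange, the one-time registration cost is amortized over many transfers either way.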
FIG. 4 is a process flow diagram in accordance with embodiments of the present invention. In the diagram, generally referred to by reference numeral 400, an STag may be created and managed by a system, such as a computer system, according to two different memory access schemes or mechanisms. The process may be divided into various phases that relate to the different phases of the operation of the consumer. For instance, the first phase, which may include blocks 402-406 and 410-412, may be an initialization phase that initializes the consumer and allocates resources to the consumer. The second phase, which may include blocks 408 and 414-428, may relate to the normal operation or run time operation