US20090157919A1 - Read control in a computer I/O interconnect - Google Patents

Read control in a computer I/O interconnect

Info

Publication number
US20090157919A1
US20090157919A1
Authority
US
United States
Prior art keywords
read request
read
upstream
completion queue
exceed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/105,733
Inventor
Jeffrey Michael Dodson
Nagamanivel Balasubramaniyan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLX Technology Inc
Original Assignee
PLX Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLX Technology Inc
Priority to US12/105,733
Assigned to PLX TECHNOLOGY, INC. (Assignors: DODSON, JEFFREY MICHAEL; BALASUBRAMANIYAN, NAGAMANIVEL)
Publication of US20090157919A1
Priority claimed by US13/020,702 (US8015330B2)
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/382 Information transfer, e.g. on bus using universal interface adapter
    • G06F 13/385 Information transfer, e.g. on bus using universal interface adapter for adaptation of a particular data processing system to different peripheral devices
    • G06F 13/40 Bus structure
    • G06F 13/4004 Coupling between buses
    • G06F 13/4022 Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network

Definitions

  • the components, process steps, and/or data structures may be implemented using various types of operating systems, programming languages, computing platforms, computer programs, and/or general purpose machines.
  • devices of a less general purpose nature such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein.
  • the present invention may also be tangibly embodied as a set of computer instructions stored on a computer readable medium, such as a memory device.
  • FibreChannel RAM Disk can be set such that the read rate or read size is reduced. This solution, however, requires anticipating the problem beforehand. It also requires knowledge of the drivers of the relevant endpoints/components. Many of these drivers may not be known without investigation, and such a solution would require constantly updating the system when new devices are attached.
  • a set of mechanisms is added that balances the rate of requests with the resulting data fulfilling the requests, reducing the maximum size of the destination queue and ensuring that the destination bandwidth is not reduced to that of any single source.
  • These mechanisms may be generically referred to as read pacing and read spacing.
  • the present invention may be applied to any protocol that permits the splitting of read requests and read completions. This includes, but is not limited to, PCIe, PCI-x, Infiniband, RapidIO, and Hypertransport. Additionally, the present invention may be applied to any system or protocol that has been modified to permit the splitting of read requests and read completions. Therefore, while legacy PCI does not typically support the splitting of read requests and read completions, if a system running legacy PCI were modified to permit such splitting, the invention could be applied to it.
  • Read pacing is based on the idea that only so many requests need to be outstanding at a time in order to ensure uninterrupted completion, and any extra read requests beyond that only cause queues to develop.
  • a device with read pacing counts up, per source, how much data is requested in total. The counter may be labeled as “read count”. Each additional request adds its read size to the read count. As the data is returned, the total read count is reduced according to the amount of data returned. Whenever the read count is larger than a threshold, subsequent requests from that source are held in the device and not forwarded to the final destination queue until the read count drops below the threshold again.
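The per-source accounting just described can be sketched as follows. This is an illustrative sketch only; the class name, method names, the hold queue, and the specific threshold value are assumptions, not part of the patent text.

```python
from collections import deque

class ReadPacer:
    """Per-source read pacing: holds requests once outstanding data exceeds a threshold."""

    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.read_count = 0   # total bytes requested but not yet returned
        self.held = deque()   # requests held back from the destination queue

    def request(self, size_bytes):
        """Returns True if the request may be forwarded upstream now."""
        if self.read_count > self.threshold:
            self.held.append(size_bytes)  # hold until the count drains
            return False
        self.read_count += size_bytes
        return True

    def complete(self, size_bytes):
        """Data returned: reduce the count and release held requests if possible."""
        self.read_count -= size_bytes
        while self.held and self.read_count <= self.threshold:
            # a released request would be forwarded upstream here
            self.read_count += self.held.popleft()

pacer = ReadPacer(threshold_bytes=7 * 1024)
assert pacer.request(4096)      # forwarded: count was 0
assert pacer.request(4096)      # forwarded: count 4096 is still under 7168
assert not pacer.request(1024)  # held: count 8192 exceeds the threshold
pacer.complete(4096)            # count drops to 4096; the held request is released
assert pacer.read_count == 5120
```

Note that, as in the text, requests already in flight are never recalled; only subsequent requests from the same source are gated.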
  • the threshold is related to the size of a completion buffer and the typical round trip time from read to completion. If, for example, all ports are reading 1 port (a typical host fanout application has all downstream ports read the main memory on the upstream port), then all completions arrive on one port (i.e. there is 1 destination buffer). If 4 ports are sharing the upstream completion buffer, then the threshold can only be ¼ as much as if there were only 1 aggressive reading device.
  • an aggressive reading device shall be interpreted to mean a device that sends out read requests in a manner that causes the latency between it and the data source to exceed the typical latency.
  • the threshold for read pacing in this example may be set to approximately 7 KB.
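As a worked illustration of dividing the completion buffer among the ports sharing it, assume a 28 KB completion buffer shared by 4 aggressive readers; the 28 KB figure is an assumption chosen only so the numbers match the approximately 7 KB threshold above:

```python
completion_buffer_bytes = 28 * 1024  # assumed shared completion buffer size
aggressive_readers = 4               # downstream ports reading the upstream port
per_source_threshold = completion_buffer_bytes // aggressive_readers
assert per_source_threshold == 7 * 1024  # ~7 KB per source, as in the example
```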
  • There may be many different ways to enforce the threshold.
  • One way, as described above, is to use a “read count” counter.
  • Another way, however, would be to simply limit the size of the buffer so that it cannot possibly hold more data than the set threshold. In the above example, for instance, the buffer can simply be set with a size of 7 KB.
  • Read spacing addresses the case where multiple reads are sent close together. If used together with read pacing, read spacing is only concerned with multiple reads when the threshold has not yet been exceeded. There is theoretically no need to send reads closer together than the data can be sent back. Therefore, by spreading out the read requests based on the rate at which the source can utilize the resulting data, no performance is lost and the queue in the destination buffer is kept minimal. It should be noted that in one embodiment, the read rate may be higher than the data rate to account for times when the read request cannot be handled immediately by the destination. The read rate will develop a data buffer up to the limit specified by read pacing in order to smooth out completion data traffic.
  • the read spacing is set to allow the read rate to exceed the drain rate by no more than 2 times. However, this can be a programmable value. The reason to program it larger would be to fill an on-chip buffer more quickly, whereas a smaller value would fill it more slowly. If main memory is heavily congested, there are likely multiple downstream branches feeding into it, since the CPU typically wins all accesses to main memory over other devices' accesses. For example, if a root complex has 2 or more downstream ports, each having a PCI switch feeding yet more downstream ports, and all downstream ports are trying to read the main memory simultaneously, then the memory controller may get overloaded.
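One way read spacing could be realized is a byte-credit scheme in which each completed byte grants a programmable multiple of request credit; the patent does not prescribe an implementation, and the names, credit scheme, and initial-credit value below are all illustrative assumptions:

```python
class ReadSpacer:
    """Forwards read requests no faster than `factor` times the completion drain rate."""

    def __init__(self, factor=2, initial_credit=4096):
        self.factor = factor          # programmable read-rate / drain-rate ratio
        self.credit = initial_credit  # bytes of requests we may forward right now

    def on_completion(self, size_bytes):
        # Each drained completion byte grants `factor` bytes of request credit.
        self.credit += self.factor * size_bytes

    def may_forward(self, size_bytes):
        if size_bytes <= self.credit:
            self.credit -= size_bytes
            return True
        return False  # spaced out: wait for completions to drain first

spacer = ReadSpacer(factor=2, initial_credit=4096)
assert spacer.may_forward(4096)      # initial credit covers the first read
assert not spacer.may_forward(1024)  # held: forwarding now would outrun 2x drain
spacer.on_completion(512)            # 512 drained bytes grant 1024 bytes of credit
assert spacer.may_forward(1024)
```

Programming a larger `factor` fills the on-chip buffer more quickly, a smaller one more slowly, matching the trade-off described above.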
  • read request queue shall be interpreted to mean any queue that contains, or is designed to contain, read requests. Embodiments are possible where the queue also contains other requests or data. Such queues shall also be considered to be read request queues as long as they hold read requests.
  • a computer I/O interconnect shall be defined as a data transmission medium linking devices in a computer system. This may include, for example, a parallel multidrop bus, as is utilized in the PCI-x protocol. This may also include, for example, a point-to-point architecture, as is used in the PCI Express protocol.
  • FIG. 1 is a block diagram illustrating a system for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention.
  • Each component in the system may be embodied in hardware, software, or any combination thereof.
  • Devices 100 a - 100 c may be connected to a switch 102 .
  • In the PCI Express and other similar protocols, devices that initiate a read request may be known as endpoints.
  • the switch 102 may include an upstream read request queue 104 a, 104 b, 104 c corresponding to each of the ports connected to devices 100 a - 100 c.
  • the switch 102 may be connected to a root complex 106 . Also connected to the root complex 106 are devices 100 d - 100 e.
  • the root complex 106 may also contain an upstream read request queue 108 a, 108 b, 108 c, here with queue 108 a corresponding to the input from switch 102 and queues 108 b and 108 c corresponding to the inputs from devices 100 d - 100 e.
  • the root complex 106 controls a memory controller 110 , which in turn may also house an upstream read request queue 112 .
  • read requests and read request queues are described in various portions of this specification, the present invention may also be applied to other types of requests and/or queues, and thus the claims are not to be limited to read requests or read request queues unless specifically stated.
  • Each upstream read request queue acts to hold incoming read requests until they can be acted upon by the device housing the queue. Once they are handled, they are placed in a downstream read request queue until they can be sent to another device.
  • the memory controller 110 may control main memory (not pictured).
  • the request may first pass to switch 102 , where it is placed in upstream read request queue 104 a. Once it has been acted upon by switch 102 , it is placed in downstream read request queue 114 until it can be sent to root complex 106 . Once it arrives at root complex 106 , it is placed in upstream read request queue 108 a. Once it has been acted upon by root complex 106 , it is passed to memory controller 110 , where it is placed in upstream read request queue 112 . Once it has emerged from upstream read request queue 112 , it is serviced and the appropriate completion response is formed from the information in memory.
  • This completion response may then be placed in completion queue 116 .
  • the completion response may be passed to root complex 106 , where it is placed in an appropriate downstream completion queue (here, downstream completion queue 118 a, which corresponds to the interconnect between the root complex 106 and switch 102 , in contrast to downstream completion queues 118 b and 118 c, which correspond to the interconnects between the root complex 106 and devices 100 d and 100 e, respectively).
  • Once the completion has emerged from downstream completion queue 118 a, it may be passed to switch 102, where it is placed in upstream completion queue 120. Once the switch 102 has finished with the completion, it may be placed in downstream completion queue 122 a, which corresponds to the interconnect between the switch 102 and the device 100 a (in contrast to the downstream completion queues 122 b and 122 c, which correspond to the interconnects between the switch 102 and devices 100 b and 100 c, respectively).
  • the term “final destination read request queue” may be defined as the read request queue closest to the destination where the underlying data to respond to the read request resides.
  • the upstream read request queue 112 is the final destination read request queue.
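The hop-by-hop request path through the queues of FIG. 1 can be modeled schematically as a chain of FIFOs; the code below is only an illustration of the traversal described above, and the chain-of-deques model itself is an assumption, not the patent's implementation:

```python
from collections import deque

# Schematic of the read-request path in FIG. 1 (queue numbers from the figure):
# device 100a -> 104a (switch upstream) -> 114 (switch downstream)
#             -> 108a (root complex upstream) -> 112 (memory controller upstream)
path = [deque() for _ in range(4)]

def submit(request):
    path[0].append(request)

def step():
    """Advance every queued request one hop; service the final destination queue."""
    serviced = path[-1].popleft() if path[-1] else None
    for i in range(len(path) - 1, 0, -1):
        if path[i - 1]:
            path[i].append(path[i - 1].popleft())
    return serviced

submit("read 1024B")
for _ in range(3):
    assert step() is None        # still traversing the intermediate queues
assert step() == "read 1024B"    # serviced from the final destination queue 112
```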
  • FIG. 2 is a flow diagram illustrating a method for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention.
  • Each step of this method may be performed in software, hardware, or any combination thereof. If performed in software, the method may be implemented as computer-readable instructions stored in a program storage device. This method may be generally termed “read pacing.” This method may be performed by one or more components in a computer system. One of those components may be a root complex of a PCIe system. Another component may be a switch. Another component may be a memory controller.
  • a read request is received over the computer I/O interconnect from a first device, the request requesting data of a first size.
  • the first predefined threshold may be set based on, for example, a size of memory available for the completion queue and a typical round trip time from read to completion from the upstream read request queue. This may include dividing the size of the memory available for the completion queue by the number of ports of the component controlling the upstream read request queue that are connected to an aggressive reading device.
  • If fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold, then the read request is temporarily restricted from being forwarded upstream. If, on the other hand, fulfilling the read request would not cause the total size of the destination queue to exceed the first predefined threshold, then at 206 the read request may be forwarded upstream. Then at 208 , the first size may be added to the read counter. At 210 , once the read request is fulfilled, the first size may be subtracted from the read counter.
  • FIG. 3 is a flow diagram illustrating a method for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention.
  • Each step of this method may be performed in software, hardware, or any combination thereof. If performed in software, the method may be implemented as computer-readable instructions stored in a program storage device.
  • This method may be performed by one or more components in a computer system. One of those components may be a root complex of a PCIe system. Another component may be a switch. Another component may be a memory controller.
  • a read request is received over the computer I/O interconnect from a first device.
  • Next, it is determined whether forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion queue by more than a second predefined threshold.
  • This threshold may be expressed, for example, as a multiplication factor between the rate at which read requests are forwarded upstream and the drain rate of the completion queue. For example, the threshold may be set at two times the drain rate of the completion queue. If the rate at which read requests are forwarded upstream exceeds this, then the threshold has been breached.
  • the read request is temporarily restricted from being forwarded upstream if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion buffer by more than a second predefined threshold.


Abstract

In one embodiment, a method for controlling reads in a computer input/output (I/O) interconnect is provided. A read request is received over the computer I/O interconnect from a first device, the request requesting data of a first size. Then it is determined whether fulfilling the read request would cause the total size of a completion queue to exceed a first predefined threshold. If fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold, then the read request is temporarily restricted from being forwarded upstream.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This patent application takes priority under 35 U.S.C. 119(e) to (i) U.S. Provisional Patent Application No. 61/014,685, filed on Dec. 18, 2007 (Attorney Docket No. PLXTP001P) entitled “PLX ARCHITECTURE”, by George Apostol, and (ii) U.S. Provisional Patent Application No. 61/015,613, filed on Dec. 20, 2007 (Attorney Docket No. PLXTP002P) entitled “PLX SOFTWARE DEVELOPMENT KIT”, by George Apostol, each of which is incorporated by reference in its entirety for all purposes.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates generally to computer I/O interconnects. More particularly, the present invention relates to read control in a computer I/O interconnect.
  • 2. Description of the Related Art
  • In a computer architecture, a bus is a subsystem that transfers data between computer components inside a computer or between computers. Unlike a point-to-point connection, a different type of computer input/output (I/O) interconnect, a bus can logically connect several peripherals over the same set of wires. Each bus defines its set of connectors to physically plug devices, cards or cables together.
  • There are many different computer I/O interconnect standards available. One of the most popular over the years has been the peripheral component interconnect (PCI) standard. PCI allows the bus to act like a bridge, which isolates a local processor bus from the peripherals, allowing a Central Processing Unit (CPU) of the computer to run much faster.
  • Recently, a successor to PCI has been popularized, termed PCI Express (or, simply, PCIe). PCIe provides higher performance, increased flexibility and scalability for next-generation systems, while maintaining software compatibility with existing PCI applications. Compared to legacy PCI, the PCI Express protocol is considerably more complex, with three layers—the transaction, data link and physical layers.
  • In a PCI Express system, a root complex device connects the processor and memory subsystem to the PCI Express switch fabric comprised of one or more switch devices (embodiments are also possible without switches, however). In PCI Express, a point-to-point architecture is used. Similar to a host bridge in a PCI system, the root complex generates transaction requests on behalf of the processor, which is interconnected through a local I/O interconnect. Root complex functionality may be implemented as a discrete device, or may be integrated with the processor. A root complex may contain more than one PCI Express port and multiple switch devices can be connected to ports on the root complex or cascaded.
  • PCI Express also supports split read completions. This means that the completion of a read request initiated at a particular time may not be performed until a later time. Essentially, the read request must wait in a queue until it is serviced. Since a request is typically only 12-20 bytes, whereas the size of a completion response can range up to 4096 bytes, there is a natural imbalance where requests can accumulate faster than data can be returned.
  • This relative size imbalance between requests and completion data responses can negatively affect performance if too many requests are active at one time. This is especially true in a typical PCIe system where multiple downstream devices all try to read from a single root complex, and wherein the root complex typically services the read requests in a first-come-first-served fashion. If the requests are for large amounts of data, a long read request queue can develop in the root complex as it services the requests. This long queue can be exacerbated if the final data destination (the source of the read request) has less bandwidth than the data supplier (the request destination), which is common in host-centric PCIe systems, where the link closest to the root complex is typically the widest. Once intermediary buffers are filled, the bandwidth of the root complex effectively reduces to the bandwidth of the data sink.
  • If a new downstream device sends its first read request into this long queue of requests in the destination, the new read request will wait for the entire read request queue ahead of it to drain before it will get serviced. The long wait time for a response can dramatically impact performance.
  • For example, suppose a PCIe switch connects a single ×8 upstream port to two ×4 downstream ports. One downstream port has a FibreChannel RAM disk that is capable of sending 16 1024 byte memory read requests at a time. The other downstream port is a dual Gigabit Ethernet controller that can send 2 read requests at a time (1 per channel), with the read size being either 16 bytes (for a descriptor) or 1500 bytes (for an Ethernet packet). The root complex sends 64 byte completions, so a 1024 byte read request would result in 16 partial completions.
  • By itself, the Ethernet controller may process 1885 Mb/s with a memory read latency of an Ethernet channel being around 340 ns. When the FibreChannel RAM disk is plugged in, however, the FibreChannel RAM disk processes 752 MB/s of completions (the same as it normally does) while the Ethernet controller performs 180 Mb/s. Here the memory read latency of the Ethernet channel is around 6200 ns. Thus, when both devices are on, the FibreChannel RAM Disk interferes with the Ethernet controller even though the FibreChannel RAM Disk performance itself was not affected. This is because the FibreChannel RAM Disk initially fills the switch's buffer with completions at a ×8 rate, but then the upstream bandwidth drops to a ×4 rate, due to the switch's downstream link to the FibreChannel device being only ×4. Due to the congestion, the Ethernet controller takes much longer to get data back, as seen from the increased latency. Since the Ethernet device can have only 2 reads outstanding, a longer response for those reads results in a major drop in performance.
  • The above example illustrates how the aggressive reading behavior of one device can dramatically and negatively affect another PCIe device. There is nothing forbidden about this configuration, and by themselves the devices each seem to perform quite well, making this a problem that a cursory analysis of the system would not reveal.
  • SUMMARY OF THE INVENTION
  • In one embodiment, a method for controlling reads in a computer input/output (I/O) interconnect is provided. A read request is received over the computer I/O interconnect from a first device, the request requesting data of a first size. Then it is determined whether fulfilling the read request would cause the total size of a completion queue to exceed a first predefined threshold. If fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold, then the read request is temporarily restricted from being forwarded upstream.
  • In another embodiment, a read request is received over the computer I/O interconnect from a first device. Then it is determined if forwarding the read request upstream would cause the rate at which read requests are forwarded to exceed a drain rate of a completion queue by more than a predefined threshold. If forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion queue by more than the predefined threshold, then the read request is temporarily restricted from being forwarded upstream.
  • In another embodiment a system is provided comprising: an interface; and one or more components configured to: receive a read request over the computer I/O interconnect from a first device, the request requesting data of a first size; determine whether fulfilling the read request would cause the total size of a completion queue to exceed a first predefined threshold; and temporarily restrict the read request from being forwarded upstream if fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold.
  • In another embodiment, a system is provided comprising: an interface; and one or more processors configured to: receive a read request over the computer I/O interconnect from a first device; determine if forwarding the read request upstream would cause the rate at which read requests are forwarded to exceed a drain rate of a completion queue by more than a predefined threshold; and temporarily restrict the read request from being forwarded upstream if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion queue by more than the predefined threshold.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating a system for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention.
  • FIG. 2 is a flow diagram illustrating a method for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention.
  • FIG. 3 is a flow diagram illustrating a method for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS
  • Reference will now be made in detail to specific embodiments of the invention including the best modes contemplated by the inventors for carrying out the invention. Examples of these specific embodiments are illustrated in the accompanying drawings. While the invention is described in conjunction with these specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. In the following description, specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In addition, well known features may not have been described in detail to avoid unnecessarily obscuring the invention.
  • In accordance with the present invention, the components, process steps, and/or data structures may be implemented using various types of operating systems, programming languages, computing platforms, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein. The present invention may also be tangibly embodied as a set of computer instructions stored on a computer readable medium, such as a memory device.
  • One solution to the congestion problem described in the background of the invention would be to tune the system to have one of the devices behave differently. For instance, in the example provided above, the FibreChannel RAM Disk can be set such that the read rate or read size is reduced. This solution, however, requires anticipating the problem beforehand. It also requires knowledge of the drivers of the relevant endpoints/components. Many of these drivers may not be known without investigation, and such a solution would require constantly updating the system when new devices are attached.
  • In an embodiment of the present invention, a set of mechanisms is added that balances the rate of requests with the resulting data fulfilling the requests, reducing the maximum size of the destination queue and ensuring that the destination bandwidth is not reduced by any one source. These mechanisms may be generically referred to as read pacing and read spacing.
  • The present invention may be applied to any protocol that permits the splitting of read requests and read completions. This includes, but is not limited to, PCIe, PCI-X, InfiniBand, RapidIO, and HyperTransport. Additionally, the present invention may be applied to any system or protocol that has been modified to permit the splitting of read requests and read completions. Therefore, while legacy PCI does not typically support the splitting of read requests and read completions, if a system running legacy PCI were modified to permit such splitting, the invention could be applied to it.
  • Read pacing is based on the idea that only so many requests need to be outstanding at a time in order to ensure uninterrupted completion, and any extra read requests beyond that only cause queues to develop. A device with read pacing counts up, per source, how much data is requested in total. The counter may be labeled as “read count”. Each additional request adds its read size to the read count. As the data is returned, the total read count is reduced according to the amount of data returned. Whenever the read count is larger than a threshold, subsequent requests from that source are held in the device and not forwarded to the final destination queue until the read count drops below the threshold again.
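  • The read-count mechanism described above can be sketched as follows (a minimal illustration, not the patent's implementation; the class name, byte units, and release-on-completion policy are assumptions):

```python
from collections import deque

class ReadPacer:
    """Per-source read pacing: hold requests once the total outstanding
    requested data would push the completion queue past a threshold."""

    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.read_count = 0      # bytes requested but not yet completed
        self.held = deque()      # requests temporarily restricted from forwarding

    def on_read_request(self, size_bytes):
        """Return True if the request may be forwarded upstream now."""
        if self.read_count + size_bytes > self.threshold:
            self.held.append(size_bytes)
            return False
        self.read_count += size_bytes
        return True

    def on_completion(self, size_bytes):
        """Data returned: shrink the read count, then release any held
        requests that now fit under the threshold again."""
        self.read_count -= size_bytes
        released = []
        while self.held and self.read_count + self.held[0] <= self.threshold:
            size = self.held.popleft()
            self.read_count += size
            released.append(size)   # these would now be forwarded upstream
        return released
```

A request that would overrun the threshold is simply queued in the device; completions free up headroom and release held requests in arrival order.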
  • By placing a limit on the amount of data requested, the length of the final destination queue is similarly constrained. The limit is related to hardware resources on the device such that all requested data can be stored on the device without overflowing device buffer spaces. In other words, the threshold is related to the size of a completion buffer and the typical round trip time from read to completion. If, for example, all ports are reading from one port (a typical host fanout application has all downstream ports read the main memory on the upstream port), then all completions arrive on one port (i.e., there is one destination buffer). If four ports are sharing the upstream completion buffer, then the threshold can only be ¼ as much as if there were only one aggressive reading device.
  • For purposes of this document, an aggressive reading device shall be interpreted to mean a device that sends out read requests in a manner that causes the latency between it and the data source to exceed the typical latency.
  • For example, if there is about 28 KB of space in the buffer available for the upstream completion queue for the upstream port and there are 4 equally aggressive reading downstream ports, each port should get about ¼ of the buffer. Thus, the threshold for read pacing in this example may be set to approximately 7 KB.
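  • The arithmetic in this example can be written out directly (the function name and integer division are illustrative assumptions):

```python
def read_pacing_threshold(completion_buffer_bytes, aggressive_ports):
    """Divide the upstream completion buffer evenly among the ports
    connected to aggressive reading devices."""
    return completion_buffer_bytes // aggressive_ports

# 28 KB of upstream completion buffer shared by 4 aggressive ports -> 7 KB each
print(read_pacing_threshold(28 * 1024, 4))  # 7168 bytes, i.e. 7 KB
```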
  • It should be noted that there may be many different ways to enforce the threshold. One way, as described above, is to use a “read count” counter. Another way, however, would be to simply limit the size of the buffer so that it cannot possibly hold more data than the set threshold. In the above example, for instance, the buffer can simply be set with a size of 7 KB.
  • Turning now to read spacing, this addresses the case where multiple reads are sent closely together. If used together with read pacing, read spacing is only concerned with multiple reads when the threshold has not yet been exceeded. There is theoretically no need to send reads closer together than the data can be sent back. Therefore, by spreading out the read requests based on the rate at which the source can utilize the resulting data, no performance is lost and the queue in the destination buffer is kept minimal. It should be noted that in one embodiment, the read rate may be higher than the data rate to account for times when the read request cannot be handled immediately by the destination. The higher read rate builds up a data buffer, up to the limit specified by read pacing, in order to smooth out completion data traffic.
  • In one embodiment of the present invention, the read spacing is set to allow the read rate to exceed the drain rate by no more than a factor of two. However, this can be a programmable value. The reason to program it larger would be to fill an on-chip buffer more quickly, whereas a smaller value would fill it more slowly. If main memory is heavily congested, there are likely multiple downstream branches feeding into it, since the CPU typically wins all accesses to main memory over other devices' accesses. For example, if a root complex has two or more downstream ports, each having a PCI switch feeding yet more downstream ports, and all downstream ports are trying to read the main memory simultaneously, then the memory controller may get overloaded.
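  • One way to realize read spacing is a rate limiter whose sustained rate is the programmable multiple of the completion drain rate (the token-bucket framing, names, and burst size here are assumptions for illustration; the patent specifies only the rate relationship):

```python
class ReadSpacer:
    """Limit the rate at which read requests are forwarded upstream to a
    programmable multiple (2x by default) of the completion drain rate."""

    def __init__(self, drain_rate_bps, factor=2.0, burst_bytes=4096):
        self.fill_rate = factor * drain_rate_bps  # max sustained forwarding rate
        self.capacity = burst_bytes               # short bursts above the rate
        self.tokens = float(burst_bytes)
        self.last_time = 0.0

    def may_forward(self, size_bytes, now):
        """Return True if forwarding size_bytes now stays within the rate bound."""
        elapsed = now - self.last_time
        self.tokens = min(self.capacity, self.tokens + elapsed * self.fill_rate)
        self.last_time = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False  # temporarily restrict; retry after tokens accumulate
```

A held request simply waits until enough time has elapsed at the allowed rate, which spreads requests out rather than rejecting them.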
  • The net effect of these mechanisms is to maintain destination bandwidth and reduce read request queue size in the memory controller, both of which will improve overall performance.
  • It should be noted that the term “read request queue” shall be interpreted to mean any queue that contains, or is designed to contain, read requests. Embodiments are possible where the queue also contains other requests or data. Such queues shall also be considered to be read request queues as long as they hold read requests.
  • The present invention may be implemented in various places in a computer I/O interconnect. For purposes of this document, a computer I/O interconnect shall be defined as a data transmission medium linking devices in a computer system. This may include, for example, a parallel multidrop bus, as is utilized in the PCI-X protocol. This may also include, for example, a point-to-point architecture, as is used in the PCI Express protocol.
  • FIG. 1 is a block diagram illustrating a system for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention. Each component in the system may be embodied in hardware, software, or any combination thereof. In this diagram, there are five devices 100a-100e connected to an I/O interconnect system. Devices 100a-100c may be connected to a switch 102. In the PCI Express and other similar protocols, devices that initiate a read request may be known as endpoints.
  • It should be noted that while a single switch is depicted in FIG. 1, one of ordinary skill in the art will recognize that multiple switches may be utilized in a parallel, serial, or hierarchical configuration in order to accomplish the same goals. The switch 102 may include an upstream read request queue 104a, 104b, 104c corresponding to each of the ports connected to devices 100a-100c. The switch 102 may be connected to a root complex 106. Also connected to the root complex 106 are devices 100d-100e. Like the switch 102, the root complex 106 may also contain upstream read request queues 108a, 108b, 108c, here with queue 108a corresponding to the input from switch 102 and queues 108b and 108c corresponding to the inputs from devices 100d-100e. The root complex 106 controls a memory controller 110, which in turn may also house an upstream read request queue 112.
  • It should be noted that while read requests and read request queues are described in various portions of this specification, the present invention may also be applied to other types of requests and/or queues, and thus the claims are not to be limited to read requests or read request queues unless specifically stated.
  • Each upstream read request queue acts to hold incoming read requests until they can be acted upon by the device housing the queue. Once they are handled, they are placed in a downstream read request queue until they can be sent to another device.
  • The memory controller 110 may control main memory (not pictured). When a device 100a initiates a read request, the request may first pass to switch 102, where it is placed in upstream read request queue 104a. Once it has been acted upon by switch 102, it is placed in downstream read request queue 114 until it can be sent to root complex 106. Once it arrives at root complex 106, it is placed in upstream read request queue 108a. Once it has been acted upon by root complex 106, it is passed to memory controller 110, where it is placed in upstream read request queue 112. Once it has emerged from upstream read request queue 112, it is serviced and the appropriate completion response is formed from the information in memory.
  • This completion response may then be placed in completion queue 116. Once it has emerged from upstream completion queue 116, the completion response may be passed to root complex 106, where it is placed in an appropriate downstream completion queue (here, downstream completion queue 118a, which corresponds to the interconnect between the root complex 106 and switch 102, in contrast to downstream completion queues 118b and 118c, which correspond to the interconnects between the root complex 106 and devices 100d and 100e, respectively).
  • Once the completion has emerged from downstream completion queue 118a, it may be passed to switch 102, where it is placed in upstream completion queue 120. Once the switch 102 has finished with the completion, it may be placed in downstream completion queue 122a, which corresponds to the interconnect between the switch 102 and the device 100a (in contrast to the downstream completion queues 122b and 122c, which correspond to the interconnects between the switch 102 and devices 100b and 100c, respectively).
  • Various aspects of the present invention may be implemented at any of the upstream read request queues. For purposes of this document, the term “final destination read request queue” may be defined as the read request queue closest to the destination where the underlying data to respond to the read request resides. In FIG. 1, for example, the upstream read request queue 112 is the final destination read request queue.
  • FIG. 2 is a flow diagram illustrating a method for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention. Each step of this method may be performed in software, hardware, or any combination thereof. If performed in software, the method may be implemented as computer-readable instructions stored in a program storage device. This method may be generally termed "read pacing." This method may be performed by one or more components in a computer system. One of those components may be a root complex. Another component may be a switch. Another component may be a memory controller. At 200, a read request is received over the computer I/O interconnect from a first device, the request requesting data of a first size. At 202, it is determined whether fulfilling the read request would cause the total size of a completion queue to exceed a first predefined threshold. This determination may include adding the first size to a read counter and comparing the read counter to the first predefined threshold. The first predefined threshold may be set based on, for example, a size of memory available for the completion queue and a typical round trip time from read to completion from the upstream read request queue. This may include dividing the size of the memory available for the completion queue by the number of ports of the component controlling the upstream read request queue that are connected to an aggressive reading device.
  • If fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold, then at 204 the read request is temporarily restricted from being forwarded upstream. If, on the other hand, fulfilling the read request would not cause the total size of the completion queue to exceed the first predefined threshold, then at 206 the read request may be forwarded upstream. Then at 208, the first size may be added to the read counter. At 210, once the read request is fulfilled, the first size may be subtracted from the read counter.
  • FIG. 3 is a flow diagram illustrating a method for controlling reads in a computer I/O interconnect in accordance with an embodiment of the present invention. Each step of this method may be performed in software, hardware, or any combination thereof. If performed in software, the method may be implemented as computer-readable instructions stored in a program storage device. This method may be performed by one or more components in a computer system. One of those components may be a root complex. Another component may be a switch. Another component may be a memory controller. At 300, a read request is received over the computer I/O interconnect from a first device. At 302, it is determined if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of a completion queue by more than a second predefined threshold. This threshold may be expressed, for example, as a multiplication factor between the rate at which read requests are forwarded upstream and the drain rate of the completion queue. For example, the threshold may be set at two times the drain rate of the completion queue. If the rate at which read requests are forwarded upstream exceeds this, then the threshold has been breached. At 304, the read request is temporarily restricted from being forwarded upstream if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed the drain rate of the completion queue by more than the second predefined threshold.
  • It should be noted that while embodiments are foreseen wherein read pacing is performed without read spacing and vice-versa, in one embodiment of the present invention, both are performed. For example, the steps of FIG. 2 and FIG. 3 above may be combined into a single method, with the read spacing method being performed on read requests that would not cause the total size of a completion queue to exceed a first predefined threshold.
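  • Read as a single decision, the combined embodiment amounts to two gates applied in order (flat parameters stand in for device state here; this is purely an illustrative sketch, not the claimed implementation):

```python
def should_forward(read_count, threshold, tokens, size_bytes):
    """Combined check: read pacing first, then read spacing.

    read_count / threshold -- outstanding requested bytes vs. pacing threshold
    tokens -- bytes the spacing rate limiter currently allows
    """
    if read_count + size_bytes > threshold:
        return False  # pacing: completion queue would exceed the threshold
    if size_bytes > tokens:
        return False  # spacing: forwarding now would exceed the rate bound
    return True
```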
  • While the invention has been particularly shown and described with reference to specific embodiments thereof, it will be understood by those skilled in the art that changes in the form and details of the disclosed embodiments may be made without departing from the spirit or scope of the invention. In addition, although various advantages, aspects, and objects of the present invention have been discussed herein with reference to various embodiments, it will be understood that the scope of the invention should not be limited by reference to such advantages, aspects, and objects. Rather, the scope of the invention should be determined with reference to the appended claims.

Claims (19)

1. A method for controlling reads in a computer input/output (I/O) interconnect, the method comprising:
receiving a read request over the computer I/O interconnect from a first device, the request requesting data of a first size;
determining whether fulfilling the read request would cause the total size of a completion queue to exceed a first predefined threshold; and
temporarily restricting the read request from being forwarded upstream if fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold.
2. The method of claim 1, wherein the determining includes adding the first size to a read counter and comparing the read counter to the first predefined threshold.
3. The method of claim 2, further comprising, if fulfilling the read request would not cause the total size of the completion queue to exceed the first predefined threshold:
forwarding the read request upstream; and
adding the first size to the read counter.
4. The method of claim 3, further comprising:
when the read request is fulfilled, subtracting the first size from the read counter.
5. The method of claim 1, wherein the method is performed in a root complex.
6. The method of claim 1, wherein the first predefined threshold is set based on a size of memory available for the completion queue and a typical round trip time from read to completion from the upstream read request queue.
7. The method of claim 1, wherein the method is performed in a switch.
8. The method of claim 7, wherein the first predefined threshold is determined by dividing the memory size available for the completion queue by the number of ports of the switch that are connected to aggressive reading devices.
9. The method of claim 1, wherein the method is performed in a memory controller.
10. The method of claim 1, further comprising:
receiving a read request over the computer I/O interconnect from a second device;
determining whether fulfilling the read request from the second device would cause the total size of the completion queue to exceed the first predefined threshold; and
if the read request from the second device would not cause the total size of the completion queue to exceed the first predefined threshold, determining if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion queue by more than a second predefined threshold; and
temporarily restricting the read request from the second device from being forwarded upstream if forwarding the read request from the second device upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion queue by more than the second predefined threshold.
11. The method of claim 10, wherein the second predefined threshold is expressed as a multiplication factor between the rate at which read requests are forwarded upstream and the drain rate of the completion queue.
12. A method for controlling reads in a computer I/O interconnect, the method comprising:
receiving a read request over the computer I/O interconnect from a first device;
determining if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of a completion queue by more than a predefined threshold; and
temporarily restricting the read request from being forwarded upstream if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion queue by more than the predefined threshold.
13. The method of claim 12, wherein the predefined threshold is expressed as a multiplication factor between the rate at which read requests are forwarded upstream and the drain rate of the completion queue.
14. A system comprising:
an interface; and
one or more components configured to:
receive a read request over the computer I/O interconnect from a first device, the request requesting data of a first size;
determine whether fulfilling the read request would cause the total size of a completion queue to exceed a first predefined threshold; and
temporarily restrict the read request from being forwarded upstream if fulfilling the read request would cause the total size of the completion queue to exceed the first predefined threshold.
15. The system of claim 14, further comprising:
a switch;
a root complex coupled to the switch;
a memory controller coupled to the root complex; and
a memory coupled to the memory controller.
16. The system of claim 15, wherein the one or more components are located in the memory controller.
17. The system of claim 15, wherein the one or more components are located in the root complex.
18. The system of claim 15, wherein the one or more components are located in the switch.
19. A system comprising:
an interface; and
one or more processors configured to:
receive a read request over the computer I/O interconnect from a first device;
determine if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of a completion queue by more than a predefined threshold; and
temporarily restrict the read request from being forwarded upstream if forwarding the read request upstream would cause the rate at which read requests are forwarded upstream to exceed a drain rate of the completion queue by more than the predefined threshold.
US12/105,733 2007-12-18 2008-04-18 Read control in a computer i/o interconnect Abandoned US20090157919A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/105,733 US20090157919A1 (en) 2007-12-18 2008-04-18 Read control in a computer i/o interconnect
US13/020,702 US8015330B2 (en) 2007-12-18 2011-02-03 Read control in a computer I/O interconnect

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US1468507P 2007-12-18 2007-12-18
US1561307P 2007-12-20 2007-12-20
US12/105,733 US20090157919A1 (en) 2007-12-18 2008-04-18 Read control in a computer i/o interconnect

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/020,702 Division US8015330B2 (en) 2007-12-18 2011-02-03 Read control in a computer I/O interconnect

Publications (1)

Publication Number Publication Date
US20090157919A1 true US20090157919A1 (en) 2009-06-18

Family

ID=40754762

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/105,733 Abandoned US20090157919A1 (en) 2007-12-18 2008-04-18 Read control in a computer i/o interconnect
US13/020,702 Expired - Fee Related US8015330B2 (en) 2007-12-18 2011-02-03 Read control in a computer I/O interconnect


Country Status (1)

Country Link
US (2) US20090157919A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9002533B2 (en) * 2011-06-28 2015-04-07 Gm Global Technology Operations Message transmission control systems and methods

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987507A (en) * 1998-05-28 1999-11-16 3Com Technologies Multi-port communication network device including common buffer memory with threshold control of port packet counters
US6425024B1 (en) * 1999-05-18 2002-07-23 International Business Machines Corporation Buffer management for improved PCI-X or PCI bridge performance
US20050198459A1 (en) * 2004-03-04 2005-09-08 General Electric Company Apparatus and method for open loop buffer allocation


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8004988B2 (en) * 2007-11-21 2011-08-23 Microchip Technology Incorporated Ethernet controller
US20090129269A1 (en) * 2007-11-21 2009-05-21 Microchip Technology Incorporated Ethernet Controller
US20100153659A1 (en) * 2008-12-17 2010-06-17 Hewlett-Packard Development Company, L.P. Servicing memory read requests
US8103837B2 (en) * 2008-12-17 2012-01-24 Hewlett-Packard Development Company, L.P. Servicing memory read requests
WO2011112682A1 (en) * 2010-03-09 2011-09-15 Qualcomm Incorporated Interconnect coupled to master device via at least two different connections
US20110225333A1 (en) * 2010-03-09 2011-09-15 Qualcomm Incorporated Interconnect Coupled to Master Device Via at Least Two Different Connections
US8380904B2 (en) 2010-03-09 2013-02-19 Qualcomm Incorporated Interconnect coupled to master device via at least two different bidirectional connections
US8769175B2 (en) 2011-03-09 2014-07-01 International Business Machines Corporation Adjustment of post and non-post packet transmissions in a communication interconnect
US9256564B2 (en) * 2012-01-17 2016-02-09 Qualcomm Incorporated Techniques for improving throughput and performance of a distributed interconnect peripheral bus
US20130185472A1 (en) * 2012-01-17 2013-07-18 Wilocity Ltd. Techniques for improving throughput and performance of a distributed interconnect peripheral bus
US9311265B2 (en) 2012-01-17 2016-04-12 Qualcomm Incorporated Techniques for improving throughput and performance of a distributed interconnect peripheral bus connected to a host controller
US20150095523A1 (en) * 2013-09-27 2015-04-02 Fujitsu Limited Information processing apparatus, data transfer apparatus, and data transfer method
US20160266928A1 (en) * 2015-03-11 2016-09-15 Sandisk Technologies Inc. Task queues
US9965323B2 (en) 2015-03-11 2018-05-08 Western Digital Technologies, Inc. Task queues
US10073714B2 (en) * 2015-03-11 2018-09-11 Western Digital Technologies, Inc. Task queues
US10379903B2 (en) 2015-03-11 2019-08-13 Western Digital Technologies, Inc. Task queues
US11061721B2 (en) 2015-03-11 2021-07-13 Western Digital Technologies, Inc. Task queues
US9684461B1 (en) 2016-10-31 2017-06-20 International Business Machines Corporation Dynamically adjusting read data return sizes based on memory interface bus utilization
US9892066B1 (en) * 2016-10-31 2018-02-13 International Business Machines Corporation Dynamically adjusting read data return sizes based on interconnect bus utilization
US10176125B2 (en) * 2016-10-31 2019-01-08 International Business Machines Corporation Dynamically adjusting read data return sizes based on interconnect bus utilization
CN108270685A (en) * 2018-01-31 2018-07-10 深圳市国微电子有限公司 The method, apparatus and terminal of a kind of data acquisition

Also Published As

Publication number Publication date
US20110125947A1 (en) 2011-05-26
US8015330B2 (en) 2011-09-06


Legal Events

Date Code Title Description
AS Assignment

Owner name: PLX TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DODSON, JEFFREY MICHAEL;BALASUBRAMANIYAN, NAGAMANIVEL;REEL/FRAME:020832/0035;SIGNING DATES FROM 20080414 TO 20080416

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION