BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to communications network switching and, in particular, to reduction of memory buffering requirements when interfacing between two networks.
2. Description of the Prior Art
Computing systems are useful tools for the exchange of information among individuals. The information may include, but is not limited to, data, voice, graphics, and video. The exchange is established through interconnections linking the computing systems together in a way that permits the transfer of electronic signals that represent the information. The interconnections may be either wired or wireless. Wired connections include metal and optical fiber elements. Wireless connections include, but are not limited to, infrared and radio wave transmissions.
A plurality of interconnected computing systems having some sort of commonality represents a network. For example, individuals associated with a college campus may each have a computing device. In addition, there may be shared printers and remotely located application servers sprinkled throughout the campus. There is commonality among the individuals in that they all are associated with the college in some way. The same can be said for individuals and their computing arrangements in other environments including, for example, healthcare facilities, manufacturing sites and Internet access users. In most cases, it is desirable to permit communication or signal exchange among the various computing systems of the common group in some selectable way. The interconnection of those computing systems, as well as the devices that regulate and facilitate the exchange among the systems, represents a network. Further, networks may be interconnected together to establish internetworks.
The process by which the various computing systems of a network or internetwork communicate is generally regulated by agreed-upon signal exchange standards and protocols embodied in network interface cards or circuitry. Such standards and protocols were born of the need and desire to provide interoperability among the array of computing systems available from a plurality of suppliers. Two organizations that have been substantially responsible for signal exchange standardization are the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF). In particular, the IEEE standards for internetwork operability have been established, or are in the process of being established, under the purview of the 802 committee on Local Area Networks (LANs) and Metropolitan Area Networks (MANs).
The primary connectivity standard employed in the majority of wired LANs is IEEE802.3 Ethernet. In addition to establishing the rules for signal frame sizes and transfer rates, the Ethernet standard may be divided into two general connectivity types: full duplex and half duplex. In a full duplex arrangement, two connected devices may transmit and receive signals simultaneously because independent transfer lines define the connection. On the other hand, a half duplex arrangement defines alternating one-way exchanges in which a transmission in one direction must be completed before a transmission in the opposing direction is permitted. The Ethernet standard also establishes the process by which a plurality of devices connected via a single physical connection share that connection to effect signal exchange with minimal signal collisions. In particular, the devices must be configured to sense whether the shared connection is in use. If it is in use, a device must wait until it senses that the connection is idle and may then transmit its signals within a specified period of time that depends on the particular Ethernet rate of the LAN. Full duplex exchange is preferred because collisions are not an issue. However, half duplex connectivity remains a significant portion of existing networks.
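The shared-medium access rule described above can be sketched as follows. This is an illustrative model only, not part of the invention: the function names are my own, and the constants reflect the well-known 10/100 Mb/s Ethernet parameters (a 512-bit slot time and truncated binary exponential backoff after a collision).

```python
import random

SLOT_TIME_BITS = 512  # slot time for 10/100 Mb/s Ethernet, in bit times
MAX_ATTEMPTS = 16     # a frame is abandoned after 16 failed attempts

def wait_then_send(medium_idle_at: float, now: float) -> float:
    """Carrier sense: the station defers until the shared connection is idle,
    then transmits (the mandatory interframe gap is omitted for brevity)."""
    return max(now, medium_idle_at)

def backoff_slots(attempt: int, rng: random.Random) -> int:
    """Truncated binary exponential backoff: after the n-th collision, wait a
    random number of slot times drawn from [0, 2**min(n, 10) - 1]."""
    if not 1 <= attempt <= MAX_ATTEMPTS:
        raise ValueError("frame is dropped after 16 attempts")
    k = min(attempt, 10)  # the exponent is capped at 10 by the standard
    return rng.randrange(0, 2 ** k)
```

The random backoff is what keeps two deferring stations from colliding again in lockstep, which is why half duplex operation degrades under load and full duplex operation is preferred.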
While the IETF and the IEEE have been substantially effective in standardizing the operation and configuration of networks, they have not addressed all matters of real or potential importance in networks and internetworks. In particular regard to the present invention, there currently exists no standard, nor apparently any plans for a standard, that enables the interfacing of network devices that operate at different transmission rates, different connectivity formats, and the like. Nevertheless, it is common for disparate networks to be connected. When they are, problems include signal loss and signal slowing. Both are unacceptable conditions as the desire for faster and more comprehensive signal exchange increases. For that reason, it is often necessary for equipment vendors to supply, and end users to have, interface devices that enable transition between devices that otherwise cannot communicate with one another. An example of such an interface device is an access point that links an IEEE802.3 wired Ethernet system with an IEEE802.11 wireless system.
The traditional way of interfacing dissimilar networks (i.e., networks operating at different speeds) is to match or exceed the buffering of the Ethernet network, as shown in FIG. 1, by an amount determined to be sufficient to prevent data loss due to the inefficiencies of the slower network. In this model, as any Ethernet port transmits data, the receiving network accepts the data at the transmitted Ethernet rate and stores it in buffers until the data can be retransmitted at the slower rate. As a result, buffers 10 are required for each port that may transmit. The non-Ethernet network interface 20 requires buffering equivalent to that of the Ethernet device 30 (such as an Ethernet switch engine) to ensure adequate data throughput. In the case where the non-Ethernet network interface 20 cannot process data as fast as the Ethernet device 30, buffering in the non-Ethernet network interface 20 must be larger than that used on the Ethernet device 30 side. It has been the practice to add as much memory as needed to ensure desired performance. That approach can be costly and complex and can use up valuable device space.
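The buffer-sizing arithmetic behind this traditional model can be illustrated with a short worked example. The rates and burst size below are hypothetical (a 100 Mb/s Ethernet ingress draining into an 11 Mb/s wireless-class egress), not figures taken from this disclosure; the point is that nearly the whole burst must be stored when the egress is much slower.

```python
# Hypothetical rates for illustration only.
FAST_RATE = 100e6   # Ethernet ingress, bits per second
SLOW_RATE = 11e6    # dissimilar (e.g., wireless) egress, bits per second

def buffer_needed(burst_bytes: int) -> int:
    """Bytes of buffering needed to absorb one burst arriving at FAST_RATE
    while it is concurrently drained at SLOW_RATE."""
    burst_bits = burst_bytes * 8
    receive_time = burst_bits / FAST_RATE    # seconds spent receiving the burst
    drained_bits = SLOW_RATE * receive_time  # bits forwarded during that time
    return int((burst_bits - drained_bits) / 8)  # residue that must be stored

# For a 64 KiB burst, roughly 89% of it (about 58 KB) must sit in buffers,
# and this cost repeats for every port that may transmit.
```

Multiplying such a figure across every transmitting port is what drives the memory cost the invention aims to avoid.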
Matching buffering capacity is generally done in one of two ways: adding discrete memory components, or implementing memory arrays in logic cores, e.g., Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs). Both methods are costly. Of the two, adding discrete memory chips is more common. As indicated, adding discrete memory chips increases the component count on the board, translating directly into higher cost and lower reliability (a higher chance of component failure). Implementing memory in logic core devices, by contrast, is gate intensive: memory arrays require high gate counts to implement. Consuming logic gates in this way limits functionality within the device that could otherwise be used for enhanced features or improved performance. In addition, FPGA and ASIC vendors charge a premium for high gate count devices. This impact is why adding discrete memory components is usually pursued over implementing memory in logic core devices.
Therefore, what is needed is a system and method to ensure compatible performance between network devices, including at least one having multiple data exchange ports, operating at different rates while minimizing the need for extra memory and/or complex memory schemes. An additional desired feature of such a system and method is to provide priority servicing for the exchange ports.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a system and method to ensure compatible performance between network devices, including at least one having multiple data exchange ports, operating at different rates while minimizing the need for extra memory and/or complex memory schemes. It is also an object of the present invention to provide such a system and method with priority servicing for the exchange ports.
These and other objects are achieved in the present invention, which includes an interface block with flow control circuitry that manages the transfer of data from a multiport network device. The interface block includes memory sufficient to enable transfer of the data forward at a rate that is compatible with the downstream device, whether that device is slower or faster than the multiport device. Further, the transfer is achieved without dropping data packets as a result of rate differentials.
This invention uses the hardware flow control feature, common in widely available Ethernet switch engines, to reduce memory buffer requirements. The memory buffers are located in a hardware interface between a common Ethernet switch engine and a dissimilar network interface, such as an IEEE802.11 wireless LAN. Memory buffering can be reduced to one buffer or fewer per port in a hardware interface by using hardware flow control to prevent buffer overflow. In addition to reducing the memory buffer requirements, this invention can provide priority service classifications of Ethernet switch ports connected to a common flow control mechanism. The hardware interface can be a custom designed circuit, such as an FPGA or an ASIC, or can be formed of discrete components.
An embodiment of the present invention uses half-duplex hardware flow control between an FPGA and a common Ethernet switch engine to reduce the amount of internal buffering required inside the FPGA. This maintains a high level of performance by taking advantage of the inherent buffering available inside a switch engine while reducing the external memory buffer requirements to the absolute minimum needed for packet processing. Port service priority can be implemented in simple logic that controls the back-pressure mechanism at the packet source, rather than by adding more external buffering to store packets and controlling their transmission priority with logic at the buffer output.
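The behavior of this scheme can be sketched as a simple model. All class and method names below are my own illustration, not the patent's terminology: the interface holds at most one frame per port and asserts half-duplex back-pressure toward the switch engine whenever that buffer is occupied, so excess frames wait in the switch engine's own internal memory rather than being dropped.

```python
from collections import deque

class PortInterface:
    """Sketch of one port of the hardware interface: a single frame buffer
    plus a back-pressure signal toward the Ethernet switch engine."""

    def __init__(self):
        self.buffer = None           # at most one frame buffered here
        self.switch_queue = deque()  # frames held back inside the switch engine

    def back_pressure(self) -> bool:
        # The "connection busy" signal seen by the switch engine.
        return self.buffer is not None

    def switch_offers(self, frame):
        """The switch engine tries to forward a frame to the interface."""
        if self.back_pressure():
            self.switch_queue.append(frame)  # deferred in the switch, not dropped
        else:
            self.buffer = frame

    def slow_side_drains(self):
        """The slower network finishes transmitting the buffered frame,
        which releases back-pressure and admits the next deferred frame."""
        sent = self.buffer
        self.buffer = None
        if self.switch_queue:
            self.buffer = self.switch_queue.popleft()
        return sent
```

Note that no frame is ever discarded: the rate differential is absorbed by the switch engine's inherent buffering, which is exactly the external-memory saving the embodiment describes.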
Other particular advantages of the invention over what has been done before include, but are not limited to:
The use of hardware flow control back-pressure to control a group of related ports, rather than the single point-to-point link for which it was originally intended, allows the multiplexing of several Ethernet ports onto a single port of a dissimilar network type.
Using the half-duplex back-pressure mechanism allows implementation of a priority-based service scheme across a group of related Ethernet ports.
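The priority-based service scheme mentioned above can be illustrated with minimal logic. This is a hypothetical sketch (names and the numeric priority convention are my own): the grouped ports contend for one downstream slot, and when the slot frees, back-pressure is released only for the highest-priority port that has a frame waiting, so priority is enforced at the packet source rather than with extra buffering at an output stage.

```python
class PriorityFlowControl:
    """Sketch of priority servicing across a group of Ethernet ports that
    share one half-duplex back-pressure mechanism."""

    def __init__(self, priorities):
        # priorities: port name -> numeric priority (higher is served first)
        self.priorities = priorities
        self.waiting = set()   # ports currently held off by back-pressure
        self.slot_free = True  # whether the single downstream slot is idle

    def port_requests(self, port):
        """A port has a frame ready; it is held by back-pressure until granted."""
        self.waiting.add(port)

    def grant_next(self):
        """Release back-pressure for the best waiting port, if the slot is free;
        return that port, or None if nothing can be granted yet."""
        if not self.slot_free or not self.waiting:
            return None
        port = max(self.waiting, key=lambda p: self.priorities[p])
        self.waiting.discard(port)
        self.slot_free = False
        return port

    def slot_done(self):
        """The downstream transmission finished; the slot may be re-granted."""
        self.slot_free = True
```

The selection logic is a single comparison tree over the waiting set, which is why it fits in "simple logic" rather than requiring per-priority output queues and their attendant memory.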
These and other advantages of the present invention will become apparent upon review of the following detailed description, the accompanying drawings, and the appended claims.