
Publication number: US 20020099851 A1
Publication type: Application
Application number: US 09/768,374
Publication date: Jul 25, 2002
Filing date: Jan 22, 2001
Priority date: Jan 22, 2001
Also published as: US 8090859, US 20060123130
Inventors: Hemal Shah, Greg Regnier
Original Assignee: Shah Hemal V., Regnier Greg J.
Decoupling TCP/IP processing in system area networks
US 20020099851 A1
Abstract
Proxy nodes perform TCP/IP processing on behalf of application nodes, utilize lightweight protocols to communicate with application nodes, and communicate with network nodes and network clients using Transmission Control Protocol/Internet Protocol (TCP/IP).
Images (13)
Claims (30)
What is claimed is:
1. A method comprising:
receiving a packet at a proxy node in a system area network from a first node that generated the packet using a first protocol;
translating the packet using a second protocol used by a second node; and
sending the translated packet from the proxy node to the second node.
2. The method of claim 1 wherein translating the packet comprises translating a single packet into multiple packets and wherein sending the translated packet comprises sending several translated packets.
3. The method of claim 1 wherein receiving the packet comprises receiving multiple packets, translating the packet comprises translating the multiple packets into a single packet and sending the translated packet comprises sending the single translated packet.
4. The method of claim 1 wherein the first protocol is based on Transmission Control Protocol/Internet Protocol (TCP/IP) and the second protocol is based on a lightweight protocol.
5. The method of claim 1 wherein the first protocol is based on a lightweight protocol and the second protocol is based on Transmission Control Protocol/Internet Protocol (TCP/IP).
6. The method of claim 1 wherein the first node comprises a network client coupled to the proxy node through a network node, and the second node comprises an application node.
7. The method of claim 1 wherein the first node comprises an application node and the second node comprises a network client coupled to the proxy node through a network node.
8. A method of protocol processing comprising:
receiving a packet at a proxy node in a system area network from a first node that generated the packet using a first protocol wherein the packet is addressed to a second node in the system area network that uses a second protocol;
processing the packet in the proxy node; and
sending a response from the proxy node to the first node using the first protocol.
9. The method of claim 8 wherein the first protocol is based on Transmission Control Protocol/Internet Protocol (TCP/IP) and the second protocol is based on a lightweight protocol.
10. The method of claim 8 wherein the first protocol is based on a lightweight protocol and the second protocol is based on Transmission Control Protocol/Internet Protocol (TCP/IP).
11. A system area network comprising:
a network node;
a proxy node;
an application node; and
a network client;
wherein the proxy node comprises a processor configured for:
receiving a first packet from the network client through the network node addressed to the application node using a first protocol; and
if the first packet meets a specified criterion, translating the first packet using a second protocol used by the application node, and sending the translated first packet to the application node.
12. The system area network of claim 11 wherein the proxy node processor is further configured for processing the first packet if the first packet does not meet the specified criterion.
13. The system area network of claim 12 wherein the proxy node processor is further configured for sending a response to the network client through the network node using the first protocol.
14. The system area network of claim 11 wherein the proxy node processor is further configured for receiving a second packet from the application node addressed to the network client using the second protocol;
if the second packet meets a specified criterion, translating the second packet using the first protocol and sending the translated second packet to the network client through the network node.
15. The system area network of claim 14 wherein the proxy node processor is further configured for processing the second packet if the second packet does not meet the specified criterion.
16. The system area network of claim 15 wherein the proxy node processor is further configured for sending a response to the application node using the second protocol.
17. The system area network of claim 11 wherein the first protocol is based on Transmission Control Protocol/Internet Protocol (TCP/IP) and the second protocol is based on a lightweight protocol.
18. The system area network of claim 11 further comprising a plurality of network nodes, a plurality of proxy nodes and a plurality of application nodes, and a plurality of network clients wherein each proxy node comprises a respective processor configured for:
receiving an input packet from one of the network clients through one of the network nodes addressed to a particular one of the application nodes using a first protocol; and
if the input packet meets a specified criterion, translating the input packet using a second protocol used by the particular application node, and sending the translated input packet to the particular application node.
19. The system area network of claim 18 wherein each network node comprises a processor configured for performing load balancing among the proxy nodes based on protocol processing requirements.
20. The system area network of claim 18 wherein the proxy node processors are further configured for performing load balancing among the application nodes based on application processing requirements.
21. An apparatus comprising:
a plurality of network ports; and
a processor configured for:
receiving through one of the network ports a first packet from a network client through a network node in a system area network that generated the first packet using a first protocol; and
if the first packet meets a specified criterion, translating the first packet using a second protocol used by an application node and sending the translated first packet through one of the network ports to the application node.
22. The apparatus of claim 21 wherein the processor is further configured for processing the first packet and sending a response to the network client through the network node using the first protocol if the first packet does not meet the specified criterion.
23. The apparatus of claim 21 wherein the processor is further configured for:
receiving a second packet through one of the network ports from the application node using the second protocol;
if the second packet meets a specified criterion, translating the second packet using the first protocol and sending the translated second packet to the network client through the network node.
24. The apparatus of claim 23 wherein the processor is further configured for processing the second packet and sending a response to the application node using the second protocol if the second packet does not meet the specified criterion.
25. The apparatus of claim 21 wherein the processor is further configured for performing load balancing among application nodes connected to the network ports based on application processing requirements.
26. The apparatus of claim 21 wherein the first protocol is based on a lightweight protocol and the second protocol is based on Transmission Control Protocol/Internet Protocol (TCP/IP).
27. The apparatus of claim 21 wherein the first protocol is based on Transmission Control Protocol/Internet Protocol (TCP/IP).
28. An article comprising a computer-readable medium that stores computer executable instructions for causing a computer system to:
receive a first packet at a proxy node in a system area network from a network client through a network node using a first protocol;
if the first packet meets a specified criterion, translate the first packet using a second protocol used by an application node and send the translated first packet to the application node.
29. The article of claim 28 further comprising instructions for causing the computer system to process the first packet and send a response to the network client through the network node using the first protocol if the first packet does not meet the specified criterion.
30. The article of claim 28 further comprising instructions for causing the computer system to:
receive a second packet at the proxy node from the application node using the second protocol;
translate the second packet using the first protocol; and
send the translated second packet to the network client through the network node.
Description
BACKGROUND

[0001] The invention relates to decoupling Transmission Control Protocol/Internet Protocol (TCP/IP) processing in system area networks (SANs).

[0002] SANs provide computer network clients with access to computer applications and services stored on application nodes. Network clients typically utilize Transmission Control Protocol/Internet Protocol (TCP/IP) to communicate with application nodes. Application node operating systems have been responsible for processing TCP/IP packets. TCP/IP processing demand at application node resources can slow down the application processing speed.

BRIEF DESCRIPTION OF DRAWINGS

[0003] FIG. 1 illustrates a system area network.

[0004] FIG. 2 illustrates a proxy node according to the invention.

[0005] FIGS. 3a-3g are flowcharts according to the invention.

[0006] FIG. 4 is a flowchart according to the invention.

[0007] FIG. 5 is a timeline of communications.

[0008] FIG. 6 is a state diagram.

DETAILED DESCRIPTION

[0009] FIG. 1 illustrates a computer system 10 including network clients 12, a system area network (SAN) 14 and a SAN management node 22. The network clients 12 can be configured, for example, to access services provided by application nodes 20 a, 20 b, 20 c . . . 20 k through either a local area network (LAN) or a wide area network (WAN). The SAN 14 has one or more network nodes 16 a . . . 16 k, one or more proxy nodes 18 a . . . 18 k, and one or more application nodes 20 a, 20 b, 20 c . . . 20 k. Each node includes at least one processor, a memory unit, and at least one network connection port.

[0010] The network nodes 16 a . . . 16 k are platforms that provide an interface between the network clients 12 and the SAN 14. The network nodes 16 a . . . 16 k may be configured to perform load balancing across multiple proxy nodes 18 a . . . 18 k.

[0011] The proxy nodes 18 a . . . 18 k are platforms that can provide various network services including network firewall functions, caching functions, network security functions, and load balancing. The proxy nodes 18 a . . . 18 k also perform TCP/IP processing on behalf of the application nodes 20 a, 20 b, 20 c . . . 20 k. The proxy node 18 a may, for example, include a computer configured to accomplish the tasks described below. The application nodes 20 a, 20 b, 20 c . . . 20 k are platforms that function as hosts to various applications, such as a web service, mail service, or directory service.

[0012] SAN channels 24 interconnect the various nodes. SAN channels 24 may be configured to connect a single network node 16 a . . . 16 k to multiple proxy nodes 18 a . . . 18 k, to connect a single proxy node 18 a . . . 18 k to multiple network nodes 16 a . . . 16 k and to multiple application nodes 20 a, 20 b, 20 c . . . 20 k, and to connect a single application node 20 a, 20 b, 20 c . . . 20 k to multiple proxy nodes 18 a . . . 18 k.

[0013] In FIG. 1, the network clients 12 utilize TCP/IP to communicate with the network nodes 16 a . . . 16 k and proxy nodes 18 a . . . 18 k. A TCP/IP packet enters the SAN 14 at a network node 16 a and travels through a SAN channel 24 to a proxy node 18 a. The proxy node 18 a processes and translates the TCP/IP packet using a lightweight protocol.

[0014] The term “lightweight protocol” refers to a protocol that has low operating system resource overhead requirements. Examples of lightweight protocols include Winsock-DP Protocol and Credit Request/Response Protocol. If required, one or more lightweight protocol messages travel through another SAN channel 24 to an application node 20 a.

[0015] Packets also can flow in the opposite direction, starting, for example, at the application node 20 a as a lightweight protocol message. The lightweight protocol message travels through a SAN channel 24 to the proxy node 18 a. The proxy node 18 a processes the lightweight protocol message and if necessary, translates the message into one or more TCP/IP packets. The TCP/IP packets then travel from the proxy node 18 a to a network node 16 a through a SAN channel 24. The TCP/IP packets exit the SAN 14 through the network node 16 a and are received by the network clients 12.

[0016] A single TCP/IP packet may be translated into one or more lightweight protocol messages, a single lightweight protocol message may be translated into one or more TCP/IP packets, multiple TCP/IP packets may be translated into a single lightweight protocol message, or multiple lightweight protocol messages may be translated into a single TCP/IP packet. A single TCP/IP packet may not generate any lightweight protocol messages, and a single lightweight protocol message may not generate any TCP/IP packets.
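The one-to-many, many-to-one, and zero-output translations described above can be sketched as a pair of helper functions. This is an illustrative simplification, not the patent's implementation; the function names and the fixed `MAX_MSG` message size are assumptions.

```python
# Illustrative fan-out / fan-in translation between a TCP byte
# payload and lightweight protocol messages. MAX_MSG is a
# hypothetical lightweight-message size limit.
MAX_MSG = 4

def tcp_payload_to_messages(payload: bytes) -> list[bytes]:
    """Fan-out: split one TCP payload into zero or more messages.
    An empty payload yields no messages, matching the case where
    a single TCP/IP packet generates no lightweight messages."""
    return [payload[i:i + MAX_MSG] for i in range(0, len(payload), MAX_MSG)]

def messages_to_tcp_payload(messages: list[bytes]) -> bytes:
    """Fan-in: coalesce several lightweight messages into the
    payload of a single TCP/IP packet."""
    return b"".join(messages)
```

A 8-byte payload with a 4-byte message limit, for example, fans out into two messages, and joining them recovers the original stream.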

[0017] As shown in FIG. 2, each proxy node, such as the proxy node 18 a, incorporates the functions of TCP/IP processing, protocol translating and lightweight protocol processing. The proxy node 18 a includes two interfaces 34, 30 for connection to other SAN 14 components. A TCP/IP packet enters or exits the proxy node 18 a at the interface 34. A TCP/IP processing module 32 accomplishes TCP/IP processing. A protocol translation engine 26 translates the TCP/IP packets and communicates the data using a lightweight protocol. A lightweight protocol processing module 28 accomplishes lightweight protocol processing. Lightweight protocol messages exit or enter the proxy node 18 a at the interface 30.

[0018] Proxy nodes 18 a . . . 18 k also create or destroy SAN channels 24 between the network nodes 16 a . . . 16 k and the application nodes 20 a, 20 b, 20 c . . . 20 k. Proxy nodes 18 a . . . 18 k also create or destroy TCP endpoints. Proxy nodes also relay TCP/IP byte streams arriving from network clients 12 through network nodes 16 a . . . 16 k, and addressed to application nodes 20 a, 20 b, 20 c . . . 20 k, using a lightweight protocol. Proxy nodes 18 a . . . 18 k also communicate lightweight protocol data arriving from application nodes 20 a, 20 b, 20 c . . . 20 k to the network clients 12 through network nodes 16 a . . . 16 k using TCP/IP.

[0019] The proxy node 18 a also may incorporate other standard proxy node services 36 including caching, firewall, and load balancing.

[0020] Proxy nodes 18 a . . . 18 k communicate with network nodes 16 a . . . 16 k and application nodes 20 a . . . 20 k using SAN channels 24. SAN channels 24 may be established at the time of service startup (i.e. the initial offering of an application's services on a proxy node 18 a . . . 18 k). SAN channels 24 may be destroyed, for example during a service shutdown, for SAN 14 resource management reasons, or due to a catastrophic error occurring in a SAN 14.

[0021] Referring to FIGS. 3a-3g and FIG. 4, proxy nodes 18 a . . . 18 k perform protocol processing on behalf of application nodes 20 a, 20 b, 20 c . . . 20 k.

[0022] A proxy node 18 a may receive 50, for example, a JOIN_SERVICE message 52 from an application node 20 a, 20 b, 20 c . . . 20 k. Proxy nodes 18 a . . . 18 k maintain lists of the application nodes that may be accessed through them. The JOIN_SERVICE message 52 indicates that a particular application node 20 a should be included on the list of application nodes offered by the proxy node 18 a and be available for access by a network client 12. Upon receiving the JOIN_SERVICE message, the proxy node 18 a determines 54 whether a corresponding TCP endpoint already exists on the proxy node 18 a for that application service. If a corresponding TCP endpoint exists, the proxy node 18 a adds 56 the application node 20 a to the list of application nodes associated with that service and ends 278 the process. If a corresponding TCP endpoint does not exist, the proxy node 18 a creates 58 a corresponding TCP endpoint (with associated IP address and TCP port number) and sets 60 the TCP endpoint in a TCP LISTEN state. The proxy node 18 a then adds 56 the application node 20 a to the list of application nodes associated with that service and ends 278 the process.
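The JOIN_SERVICE handling above can be sketched as a small handler. The class and method names, and the use of a dict keyed by service, are illustrative assumptions; the patent specifies behavior, not an API.

```python
# Hedged sketch of JOIN_SERVICE handling: reuse an existing TCP
# endpoint for the service if one exists, otherwise create one in
# the LISTEN state, then register the application node.
class ProxyNode:
    def __init__(self):
        # service name -> (TCP endpoint state, list of application nodes)
        self.endpoints = {}

    def join_service(self, service: str, app_node: str) -> None:
        if service in self.endpoints:                 # endpoint exists (block 54)
            self.endpoints[service][1].append(app_node)   # add node (block 56)
        else:
            # create endpoint and set it listening (blocks 58, 60),
            # then add the node to the service list (block 56)
            self.endpoints[service] = ("LISTEN", [app_node])
```

A second JOIN_SERVICE for the same service then takes the first branch and only extends the node list.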

[0023] Various TCP states are available. A CLOSED TCP state indicates that a TCP endpoint is closed. A LISTEN TCP state indicates that the TCP endpoint is listening. A SYN_SENT TCP state indicates that a SYN packet has been sent on the TCP endpoint. A SYN_RCVD state indicates that a SYN signal has been sent, a response has been received on a TCP endpoint, and receipt of an acknowledgement (ACK) signal is pending. An ESTABLISHED state indicates that a connection has been established and that data is being transferred. A CLOSE_WAIT state indicates that a finish (FIN) signal has been received and an application is closing. A FIN_WAIT1 state indicates that the endpoint has closed, a FIN signal has been sent to an application, and receipt of an ACK and FIN is pending. A CLOSING state indicates that an application is closing and the TCP endpoint is awaiting receipt of an ACK. A LAST_ACK state indicates that a FIN TCP packet has been received, an application has closed, and the TCP endpoint is awaiting receipt of an ACK. A FIN_WAIT2 state indicates that an application has closed and the TCP endpoint is awaiting receipt of a FIN signal. A TIME_WAIT (two maximum segment lifetimes, 2MSL) state is a wait state for a TCP endpoint after actively closing.
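The eleven endpoint states enumerated above (the standard TCP connection states of RFC 793) can be captured in a self-contained enumeration; the mapping to a Python enum is our own sketch.

```python
# The TCP endpoint states described above, as an enumeration.
from enum import Enum, auto

class TcpState(Enum):
    CLOSED = auto()       # endpoint is closed
    LISTEN = auto()       # endpoint is listening
    SYN_SENT = auto()     # SYN sent, awaiting response
    SYN_RCVD = auto()     # SYN exchanged, awaiting ACK
    ESTABLISHED = auto()  # connection open, data transferring
    CLOSE_WAIT = auto()   # FIN received, application closing
    FIN_WAIT1 = auto()    # closed locally, awaiting ACK and FIN
    CLOSING = auto()      # closing, awaiting ACK
    LAST_ACK = auto()     # FIN received and sent, awaiting final ACK
    FIN_WAIT2 = auto()    # closed locally, awaiting FIN
    TIME_WAIT = auto()    # 2MSL wait after an active close
```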

[0024] A proxy node 18 a . . . 18 k, for example proxy node 18 a, may receive a LEAVE_SERVICE message 62 from an application node 20 a, 20 b, 20 c . . . 20 k, for example application node 20 a. A LEAVE_SERVICE message 62 indicates that a particular application node 20 a should be removed from the list of application nodes 20 a, 20 b . . . 20 k that may be accessed through a particular proxy node 18 a. Upon receiving a LEAVE_SERVICE message 62, the proxy node 18 a removes 70 the application node 20 a from the list of accessible application nodes. The proxy node 18 a then determines 64 whether the list of application nodes available through the proxy node 18 a is empty. If the list is empty, the proxy node 18 a closes 66 the corresponding TCP endpoint and cleans 68 any resources associated with the corresponding TCP endpoint. The proxy node 18 a then ends 278 the process.

[0025] A proxy node 18 a . . . 18 k, for example proxy node 18 a, may receive a CONNECTION_REQUEST message 72 from an application node 20 a, 20 b . . . 20 k, for example application node 20 a. A CONNECTION_REQUEST message indicates that an application node 20 a wants to create a connection with a network client 12. The proxy node 18 a creates 74 a corresponding TCP endpoint, actively opens 76 the TCP endpoint, and sets 78 the TCP endpoint to a TCP SYN_SENT state. The proxy node 18 a then sends 80 a TCP SYN packet to the corresponding network client 12, through a network node 16 a . . . 16 k.

[0026] A proxy node 18 a . . . 18 k, for example proxy node 18 a, may receive an ACCEPT_CONNECTION message 82 from an application node 20 a, 20 b . . . 20 k, for example application node 20 a. An ACCEPT_CONNECTION message 82 is sent by an application node 20 a to accept a connection request. The proxy node 18 a updates 84 the TCP endpoint's connection information and directs 86 subsequent TCP flow to that application node 20 a. The proxy node 18 a then ends the process.

[0027] A proxy node 18 a . . . 18 k, for example proxy node 18 a, may receive a REJECT_CONNECTION message 88 from an application node 20 a, 20 b . . . 20 k, for example application node 20 a. The proxy node 18 a closes 90 the corresponding TCP connection and sends 92 a TCP RST packet to a network client 12 through a network node 16 a . . . 16 k. The proxy node 18 a then ends 278 the process.

[0028] A proxy node 18 a . . . 18 k, for example proxy node 18 a, may receive DATA 94 from an application node 20 a . . . 20 k , for example application node 20 a. The proxy node 18 a stores 96 the DATA on a send queue of the corresponding TCP endpoint. The proxy node then determines 302 whether a TCP/IP packet can be sent on the TCP endpoint. If not, the proxy node 18 a ends 278 the process. If a TCP/IP packet can be sent, the proxy node 18 a computes 304 the size of the packet that can be sent. The proxy node 18 a constructs 306 the TCP/IP packet header and sets 308 the TCP flags in the TCP/IP packet according to the TCP state. The proxy node 18 a then determines 310 if the data can be sent in the packet. If it can be sent, the proxy node 18 a extracts 312 the data from the transmission control block (TCB) send queue and sends 313 the TCP/IP packet to a network client 12 through a network node 16 a. A TCB is used by TCP/IP to store various information related to a particular TCP endpoint. TCBs typically contain information such as foreign and local IP addresses, foreign and local port numbers, options for endpoints, state information for each TCP endpoint, sequence numbers, window sizes, retransmission timers, etc.
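The TCB fields listed above, together with the "compute the size of the packet that can be sent" step, can be sketched as a minimal data structure. The field names and the send-size rule (bounded by queued data, the peer's window, and the segment size) are illustrative simplifications, not the patent's layout.

```python
# Minimal transmission control block (TCB) sketch based on the
# fields described above: addresses, ports, state, sequence and
# window information, and a send queue.
from dataclasses import dataclass

@dataclass
class Tcb:
    local_ip: str
    local_port: int
    remote_ip: str
    remote_port: int
    state: str = "ESTABLISHED"
    snd_nxt: int = 0          # next sequence number to send
    snd_wnd: int = 0          # peer's advertised receive window
    send_queue: bytes = b""   # data queued by the application node

    def sendable(self, mss: int) -> int:
        """Bytes that may go in the next TCP/IP packet (block 304):
        limited by queued data, the advertised window, and the
        maximum segment size."""
        return min(len(self.send_queue), self.snd_wnd, mss)
```

With 11 bytes queued and an 8-byte window, for example, a 4-byte MSS allows 4 bytes and a large MSS allows only the 8 bytes the window permits.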

[0029] A proxy node 18 a . . . 18 k, for example proxy node 18 a, may receive a CLOSE_CONNECTION message 102 from an application node 20 a, 20 b . . . 20 k, for example application node 20 a. The proxy node determines 107 whether the TCP endpoint is in the LISTEN or SYN_SENT state. If the TCP endpoint is in either of these states, the proxy node 18 a closes 114 the TCP endpoint and ends 278 the process. If the TCP endpoint is not in one of these states, the proxy node 18 a determines 109 whether the TCP endpoint is in the SYN_RCVD, ESTABLISHED, or CLOSE_WAIT state. If it is, the proxy node then determines 103 whether there is unacknowledged data on the TCB send queue. If there is no unacknowledged data, the proxy node 18 a determines 104 whether the TCP endpoint is in the SYN_RCVD or ESTABLISHED state. If it is, the proxy node 18 a changes 106 the TCP state to FIN_WAIT1. If the TCP endpoint is in the CLOSE_WAIT state, the proxy node 18 a changes 110 the state of the TCP endpoint to LAST_ACK. The proxy node 18 a then sends 112 a TCP FIN packet to the corresponding network client 12, through a network node 16 a . . . 16 k, and eventually closes the TCP/IP connection. If it is determined (in block 103) that there is unacknowledged data on the TCB send queue, the proxy node 18 a marks 105 the CLOSE_CONNECTION_RECEIVED flag for the TCP endpoint and proceeds to determine whether a TCP/IP packet can be sent on the TCP endpoint (block 302).
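The CLOSE_CONNECTION branching just described can be sketched as one function. The signature, the string state names, and returning an (new state, action) pair are our assumptions for illustration.

```python
# Hedged sketch of CLOSE_CONNECTION handling: the branch taken
# depends on the endpoint state and on whether unacknowledged
# data remains on the TCB send queue.
def handle_close_connection(state: str, unacked: bool):
    """Return (new endpoint state, action taken)."""
    if state in ("LISTEN", "SYN_SENT"):               # blocks 107, 114
        return "CLOSED", "close endpoint"
    if state in ("SYN_RCVD", "ESTABLISHED", "CLOSE_WAIT"):
        if unacked:                                   # blocks 103, 105
            return state, "mark CLOSE_CONNECTION_RECEIVED"
        if state in ("SYN_RCVD", "ESTABLISHED"):      # blocks 104, 106
            return "FIN_WAIT1", "send FIN"
        return "LAST_ACK", "send FIN"                 # blocks 110, 112
    return state, "no-op"
```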

[0030] If a TCP endpoint needs to be shut down on a particular proxy node 18 a, the proxy node sends a SHUTDOWN_SERVICE message to the application node associated with that TCP endpoint. When the application node 20 a receives a SHUTDOWN_SERVICE message from the proxy node 18 a, the application node performs appropriate service cleanup, including shutting down the application if no proxy service is available.

[0031] Proxy nodes 18 a . . . 18 k process TCP/IP packets received from network clients 12 through network nodes 16 a . . . 16 k.

[0032] A proxy node 18 a may receive 150 a TCP/IP packet from a network client 12 through a network node 16 a. The proxy node 18 a typically performs 152 a TCP checksum process. If the TCP checksum process result is unacceptable, the proxy node 18 a drops 280 the packet and ends 278 the process.
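The "TCP checksum process" above is the standard Internet checksum: a 16-bit one's-complement sum over the segment. The sketch below omits the TCP pseudo-header for brevity, so it is a simplification of the full TCP verification.

```python
# Internet checksum sketch (one's-complement sum of 16-bit words).
# A segment carrying a correct checksum sums to zero on receipt,
# which is the acceptability test the proxy node performs.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```

The check works because appending the computed checksum to the data makes the recomputed checksum zero; a nonzero result means the packet is dropped (block 280).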

[0033] If the result of the checksum process is acceptable, the proxy node attempts to identify 154 the TCP connection associated with the packet. If a TCP endpoint corresponding to the TCP connection associated with the packet is identified, then the proxy node 18 a continues with block 156. If no TCP connection is identified, the proxy node 18 a determines 140 whether the SYN flag is set on the packet. If the SYN flag is not set, the proxy node 18 a resets 144 the connection and ends 278 the process. If the SYN flag is set, the proxy node 18 a determines 142 whether the corresponding TCP endpoint is in the LISTEN state. If it is not, the proxy node 18 a resets 144 the connection and ends 278 the process. If the corresponding TCP endpoint is in the LISTEN state, the proxy node 18 a creates 146 a new TCP endpoint. The proxy node then duplicates the TCB information and continues the process in block 156.
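The lookup-or-create path above amounts to demultiplexing by connection and special-casing a SYN aimed at a listening endpoint. The data structures below (a dict of active connections keyed by 4-tuple, a set of listening local endpoints) are illustrative assumptions.

```python
# Sketch of the connection-lookup path: known connections proceed
# to normal processing; unknown packets are reset unless they are
# a SYN for a listening endpoint, which spawns a new connection.
def classify_packet(connections: dict, listeners: set, key, syn_set: bool) -> str:
    """key = (src_ip, src_port, dst_ip, dst_port)."""
    if key in connections:            # connection identified (block 154)
        return "process"              # continue at block 156
    if not syn_set:                   # no SYN on unknown packet (block 140)
        return "reset"                # reset the connection (block 144)
    if key[2:] not in listeners:      # no LISTEN endpoint (block 142)
        return "reset"
    connections[key] = "NEW"          # create a new endpoint (block 146)
    return "process"
```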

[0034] If a TCP connection is identified, the proxy node 18 a determines 156 whether a reset (RST) flag on the TCP packet is set.

[0035] In one implementation the TCP header contains six flag bits, although in other cases there may be fewer or more flag bits in the TCP header. An URG flag bit indicates whether a TCP packet is urgent. An ACK flag bit indicates whether the TCP packet acknowledgement number is valid. A PSH flag bit indicates whether a proxy node 18 a . . . 18 k should pass the packet data to an application node 20 a, 20 b, 20 c . . . 20 k as soon as possible. A RST flag bit indicates whether the TCP endpoint should be reset. A SYN flag bit indicates whether sequence numbers should be synchronized to initiate a connection. A FIN flag bit indicates whether the sender is finished sending data.
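The six flag bits described above occupy the low bits of the TCP header's flags field, with the bit positions defined by RFC 793; the decoder function is our own sketch.

```python
# The six TCP header flag bits described above, with their
# standard bit positions (RFC 793), plus a small decoder.
URG, ACK, PSH, RST, SYN, FIN = 0x20, 0x10, 0x08, 0x04, 0x02, 0x01

def decode_flags(bits: int) -> set[str]:
    """Return the set of flag names set in a TCP flags byte."""
    names = {"URG": URG, "ACK": ACK, "PSH": PSH,
             "RST": RST, "SYN": SYN, "FIN": FIN}
    return {name for name, mask in names.items() if bits & mask}
```

A SYN+ACK packet, for instance, carries the flags byte 0x12.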

[0036] If the RST flag is set, the proxy node 18 a resets 158 the TCP/IP endpoint. The proxy node 18 a then determines 160 whether a CLOSE_CONNECTION message should be sent to the application node 20 a. If such a message is needed, the proxy node 18 a sends 162 a CLOSE_CONNECTION message to the application node 20 a. The proxy node 18 a then ends 278 the process. If a CLOSE_CONNECTION message is not required, the proxy node 18 a simply ends 278 the process. If a RST flag is not set in the TCP/IP packet, the proxy node 18 a determines 157 whether the TCP endpoint is in a LISTEN state. If not, the proxy node 18 a processes 159 the TCP options.

[0037] The proxy node 18 a also determines 164 whether a SYN flag is set in the TCP packet. If the proxy node 18 a determines 164 that the SYN flag is set, the proxy node 18 a determines 166 whether the TCP endpoint is in a LISTEN state. If the TCP endpoint is in the LISTEN state, the proxy node 18 a initializes 168 the associated TCB.

[0038] After initializing 168 the associated TCB, the proxy node 18 a sends 170 a TCP SYN+ACK packet to the network client 12 and ends 278 the process. If, on the other hand, the TCP endpoint is not in the LISTEN state, the proxy node 18 a determines 167 whether the TCP endpoint is in the SYN_SENT state. If the TCP endpoint is not in the SYN_SENT state, the proxy node 18 a sends a RST to a network client 12 through the network node 16 a. If the TCP endpoint is in the SYN_SENT state, the proxy node 18 a determines 172 whether an ACK flag is set in the TCP packet. If the ACK flag is set, then the proxy node 18 a changes 174 the TCP endpoint state to ESTABLISHED. The proxy node 18 a then sends 176 a TCP ACK packet to the network client 12. The proxy node 18 a identifies 178 the application node 20 a corresponding to the TCP endpoint, and determines 180 whether delayed binding is required. If delayed binding is not required, the proxy node 18 a determines 181 if an ACCEPT_CONNECTION message is required. If an ACCEPT_CONNECTION message is required, the proxy node 18 a sends 182 an ACCEPT_CONNECTION message to the application node 20 a. The proxy node 18 a determines 185 if FIN or DATA is in the packet. If neither is, the proxy node 18 a ends 278 the process. If either FIN or DATA is in the packet, the process continues with block 184. If the TCP endpoint is in the SYN_SENT state and the ACK flag is not set in the TCP packet, the proxy node sets 284 the TCP endpoint state to SYN_RCVD and determines 185 whether FIN or DATA is included in the packet.

[0039] The proxy node 18 a determines 184 whether a received TCP packet is the next TCP packet expected on a particular TCP endpoint. If it is not, then the proxy node 18 a places 186 the TCP packet on a re-sequencing queue, and strips off 188 SYN/DATA/FIN. If the proxy node 18 a determines that the packet received is the next packet expected, the proxy node 18 a trims 185 any packet data not within the window, and determines 190 whether the TCP ACK flag is set in the packet. If the ACK flag is set, the proxy node 18 a determines 189 if it is a duplicate ACK. If it is a duplicate ACK, the proxy node 18 a performs 191 a fast recovery algorithm. The proxy node then updates 192 the TCB information and removes 193 ACKED data from the TCB send queue. The proxy node 18 a then determines 194 whether the TCP endpoint is in the SYN_RCVD state. If the TCP endpoint is in the SYN_RCVD state, the proxy node determines 195 whether there is an ACK number error. If there is not an ACK number error, the proxy node 18 a changes 196 the TCP endpoint state to ESTABLISHED. If there is an ACK number error, the proxy node 18 a resets 199 the connection and sends a RST to the network client 12 through the network nodes 16 a . . . 16 k. The proxy node 18 a then ends 278 the process.
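The re-sequencing step above, where an out-of-order segment is held until the expected sequence number arrives, can be sketched as follows. Keying the queue by sequence number and assuming non-overlapping segments are our simplifications.

```python
# Sketch of re-sequencing: a segment that is not the next one
# expected is queued (block 186); when the expected segment
# arrives, any queued segments that are now in order are drained.
def deliver_in_order(rcv_nxt: int, seq: int, data: bytes, resequencing: dict):
    """Return (new rcv_nxt, bytes now deliverable in order)."""
    if seq != rcv_nxt:
        resequencing[seq] = data          # hold out-of-order segment
        return rcv_nxt, b""
    out = data
    rcv_nxt += len(data)
    while rcv_nxt in resequencing:        # drain segments now in order
        nxt = resequencing.pop(rcv_nxt)
        out += nxt
        rcv_nxt += len(nxt)
    return rcv_nxt, out
```

If bytes 4-7 arrive before bytes 0-3, the first call queues them and delivers nothing; the second call delivers all eight bytes in order.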

[0040] After changing the TCP/IP endpoint state to ESTABLISHED, the proxy node 18 a identifies 198 the application node 20 a that corresponds to the TCP endpoint and determines 200 whether delayed binding is required. If delayed binding is not required, the proxy node 18 a determines 197 whether the corresponding TCP endpoint was opened passively. If the TCP endpoint was opened passively, the proxy node 18 a determines 201 whether a CONNECTION_REQUEST message is required. If a CONNECTION_REQUEST message is required, the proxy node 18 a sends a CONNECTION_REQUEST message to the application node 20 a. The process continues with block 227. If the TCP endpoint was not opened passively, then the proxy node 18 a determines 203 whether an ACCEPT_CONNECTION message is required. If an ACCEPT_CONNECTION message is required, then the proxy node 18 a sends 205 an ACCEPT_CONNECTION message to the application node 20 a. The process then continues with block 227.

[0041] The proxy node 18 a determines 204 whether the TCP endpoint is in the CLOSE_WAIT or ESTABLISHED state. If the TCP endpoint is in the CLOSE_WAIT or ESTABLISHED state, the proxy node 18 a determines 208 whether there is any unacknowledged data on the TCB send queue. The proxy node 18 a then determines 210 whether a CLOSE_CONNECTION message already has been received from the application node 20 a. The proxy node 18 a sends 212 a FIN to the network client 12 through the network node 16 a and determines 213 whether the TCP endpoint is in the ESTABLISHED state. If the TCP endpoint is in the ESTABLISHED state, the proxy node 18 a changes 214 the TCP endpoint state to FIN_WAIT1. The process continues with block 227. If the TCP endpoint is in the CLOSE_WAIT state, the proxy node 18 a changes 215 the TCP endpoint state to LAST_ACK. The process continues with block 268, in which the proxy node 18 a scans the resequencing queue.

[0042] The proxy node 18 a may determine 216 that the TCP endpoint is in the FIN_WAIT1 state. If the TCP endpoint is in the FIN_WAIT1 state, the proxy node 18 a determines 217 whether the FIN is acknowledged (ACKED). If it is not, the process continues with block 227. If the FIN is ACKED, the proxy node 18 a changes 218 the TCP endpoint state to FIN_WAIT2. The process then continues with block 227.

[0043] The proxy node 18 a may determine 220 that the TCP endpoint is in the CLOSING state. If the TCP endpoint is in the CLOSING state, the proxy node 18 a determines 219 whether the FIN is ACKED. If it is not, the process continues with block 227. If the FIN is ACKED, the proxy node 18 a changes 222 the TCP endpoint state to TIME_WAIT. The process then continues with block 227.

[0044] The proxy node 18 a may determine 224 that the TCP endpoint is in the LAST_ACK state. If the TCP endpoint is in the LAST_ACK state, the proxy node 18 a determines 225 whether the FIN is ACKED. If not, the process continues with block 227. If the FIN is ACKED, the proxy node closes 226 the connection and cleans up the TCP endpoint. The proxy node 18 a then ends 287 the process.
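Paragraphs [0042] through [0044] together describe what happens when the peer acknowledges the proxy's FIN. A sketch of that transition table (the `CLOSED` terminal state stands in for the close-and-clean-up step of block 226; the encoding is an assumption):

```python
# Sketch of the "our FIN is ACKED" transitions of paragraphs [0042]-[0044].
# "CLOSED" here represents the connection-closed/cleaned-up endpoint of
# block 226; the dictionary form is an illustrative assumption.

FIN_ACKED_TRANSITIONS = {
    "FIN_WAIT1": "FIN_WAIT2",    # block 218
    "CLOSING": "TIME_WAIT",      # block 222
    "LAST_ACK": "CLOSED",        # block 226
}

def on_fin_acked(state):
    """Advance the endpoint when the peer acknowledges the proxy's FIN."""
    return FIN_ACKED_TRANSITIONS.get(state, state)  # otherwise unchanged
```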

[0045] As indicated by block 227, the proxy node 18 a may process any TCP urgent data. The proxy node 18 a then determines 229 whether there is any data in the packet. If there is not, the proxy node proceeds to block 246. If there is data in the packet, the proxy node 18 a processes 228 the TCP/IP packet data. The proxy node 18 a determines 230 whether the corresponding TCP endpoint is in any of the TCP states SYN_RCVD, ESTABLISHED, FIN_WAIT1 or FIN_WAIT2. The data is accepted 234 only in these states. Otherwise, the data is rejected 232. If the data is accepted 234, the proxy node 18 a places 236 the data on the receive queue of the TCP endpoint. The proxy node 18 a then strips 238 off the TCP/IP header.
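The data-acceptance rule above reduces to a membership test over the four states named in the paragraph. A one-function sketch (the function name is illustrative):

```python
# Sketch of the acceptance check of blocks 230/232/234: payload data is
# accepted only in the four states listed in paragraph [0045].

ACCEPTING_STATES = {"SYN_RCVD", "ESTABLISHED", "FIN_WAIT1", "FIN_WAIT2"}

def accept_data(state):
    """True if payload data should be placed on the endpoint's receive queue."""
    return state in ACCEPTING_STATES
```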

[0046] The proxy node 18 a determines 282 if a data packet is the first data packet received on a particular TCP/IP endpoint. The proxy node 18 a then determines 245 whether delayed binding is required. If it is not, the proxy node 18 a communicates 240 the data to an application node 20 a. The proxy node 18 a then determines 242 if the data had been successfully communicated to the application node 20 a. If the data had been successfully communicated, the proxy node 18 a removes 244 the data from the receive queue of the TCP endpoint.

[0047] If the packet received is not the first packet received on the TCP/IP endpoint, the proxy node 18 a communicates 241 the available data on the TCB receive queue to the application node. The proxy node 18 a then removes 243 successfully communicated data from the TCB receive queue. The process continues with block 246.

[0048] If delayed binding is required, the proxy node determines 247 whether the TCP/IP endpoint was opened passively. If it was opened passively, the proxy node 18 a determines 233 whether a CONNECTION_REQUEST is required. If a CONNECTION_REQUEST is required, the proxy node 18 a sends 249 a CONNECTION_REQUEST message to the application node 20 a. If the endpoint was not opened passively, the proxy node 18 a determines 235 whether an ACCEPT_CONNECTION is required. If an ACCEPT_CONNECTION is required, the proxy node 18 a sends 251 an ACCEPT_CONNECTION message to the application node 20 a. The process continues with block 240 as described above.
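The message choice in paragraphs [0040] and [0048] depends only on how the endpoint was opened. A sketch of that selection (the helper name and boolean parameter are hypothetical):

```python
# Sketch of the binding-message selection of paragraphs [0040]/[0048]:
# passively opened endpoints get CONNECTION_REQUEST, actively opened
# endpoints get ACCEPT_CONNECTION. Names of the helper and its
# parameter are illustrative assumptions.

def binding_message(opened_passively):
    """Lightweight-protocol message sent to the application node on binding."""
    return "CONNECTION_REQUEST" if opened_passively else "ACCEPT_CONNECTION"
```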

[0049] The proxy node 18 a may determine 246 that a FIN flag is set in a particular TCP packet. If a FIN flag is set, the proxy node 18 a determines 248 if the TCP endpoint is in the SYN_RCVD or ESTABLISHED state. If the TCP endpoint is in either of these states, the proxy node 18 a changes 250 the TCP endpoint state to CLOSE_WAIT. The proxy node determines 252 if a CLOSE_CONNECTION message should be sent to the application node 20 a. If so, the proxy node 18 a sends 254 the CLOSE_CONNECTION message to the application node 20 a.

[0050] The proxy node may determine 256 that the TCP endpoint is in FIN_WAIT1 state. If the TCP endpoint is in the FIN_WAIT1 state, the proxy node 18 a determines 258 whether any unacknowledged data exists on the corresponding TCB send queue. If the TCP endpoint is in FIN_WAIT1 state and there is no unacknowledged data, the proxy node 18 a changes 260 the TCP endpoint state to TIME_WAIT. Otherwise, the proxy node changes 262 the TCP endpoint state to CLOSING.

[0051] The proxy node 18 a may determine 264 that a TCP endpoint is in the FIN_WAIT2 state. If the TCP endpoint is in the FIN_WAIT2 state, then the proxy node 18 a changes 266 the TCP endpoint state to TIME_WAIT.
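Paragraphs [0049] through [0051] describe the transitions taken when a FIN arrives from the network client. A sketch combining them into one function (the function signature is an assumption; `unacked_data` reflects the send-queue check of block 258):

```python
# Sketch of the FIN-received transitions of paragraphs [0049]-[0051].
# `unacked_data` is True when the TCB send queue still holds
# unacknowledged data (block 258). Signature is an illustrative assumption.

def on_fin_received(state, unacked_data):
    """Endpoint transition when the network client's FIN arrives."""
    if state in ("SYN_RCVD", "ESTABLISHED"):
        return "CLOSE_WAIT"                           # block 250
    if state == "FIN_WAIT1":
        return "CLOSING" if unacked_data else "TIME_WAIT"  # blocks 262/260
    if state == "FIN_WAIT2":
        return "TIME_WAIT"                            # block 266
    return state                                      # otherwise unchanged
```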

[0052] The proxy node 18 a scans 268 the re-sequencing queue of the TCP endpoint and determines 269 whether the re-sequencing queue is empty. If it is not empty, the proxy node 18 a de-queues 271 the next packet from the re-sequencing queue and determines 273 whether the packet is obsolete. If it is obsolete, the proxy node 18 a frees 275 the packet and returns to block 269. If the proxy node 18 a determines that the re-sequencing queue is empty, the process continues with block 302 as described above.
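The scan loop above repeatedly dequeues buffered out-of-order packets, freeing obsolete ones and delivering any that have become in-order. A sketch under stated assumptions: the queue is modeled as a heap of `(seq, data)` pairs, and a packet whose data falls entirely below the next expected sequence number counts as obsolete; none of these representation choices come from the patent.

```python
# Sketch of the re-sequencing queue scan of paragraph [0052]. The heap of
# (seq, data) pairs and the obsolescence rule are illustrative assumptions.
import heapq

def drain_resequencing_queue(queue, rcv_nxt):
    """Deliver packets that are now in order; free obsolete ones.
    `queue` is a heap of (seq, data); `rcv_nxt` is the next expected
    sequence number. Returns (delivered data, updated rcv_nxt)."""
    delivered = []
    while queue:
        seq, data = queue[0]
        if seq + len(data) <= rcv_nxt:      # obsolete: already received
            heapq.heappop(queue)            # free the packet (block 275)
            continue
        if seq > rcv_nxt:                   # still a gap; stop scanning
            break
        heapq.heappop(queue)
        delivered.append(data[rcv_nxt - seq:])  # trim any overlap
        rcv_nxt = seq + len(data)
    return delivered, rcv_nxt
```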

[0053] The proxy node 18 a communicates data to the application node 20 a using a lightweight protocol. The proxy node 18 a determines 316 whether the TCB receive queue is empty. If the TCB receive queue is empty, the proxy node 18 a ends 278 the process. If the TCB receive queue is not empty, then the proxy node 18 a determines 318 whether data on the receive queue can be communicated to the application node 20 a. If the data cannot be communicated, the proxy node 18 a ends 278 the process. If the data can be communicated, the proxy node 18 a communicates 241 the available data on the TCB receive queue to the application node 20 a. The proxy node 18 a then removes communicated data from the TCB receive queue. The proxy node 18 a may perform processes described above periodically as a part of timer functions.

[0054] FIG. 5 shows a set of exemplary communications between a network client 12, a network node 16 a, a proxy node 18 a and an application node 20 a. Time is shown along the vertical axis. The end components included within the region identified by 320 communicate utilizing TCP/IP protocol. The end components included within the region identified by 330 communicate utilizing a lightweight protocol.

[0055] The timeline in FIG. 5 begins with a network client 12 issuing a SYN TCP/IP packet 380. The SYN packet 380 is a request from the network client 12 addressed to an application service on an application node 20 a to establish a connection with a particular application service. This packet is passed through the network node 16 a and is intercepted by the proxy node 18 a. Since no information is required from the application node 20 a, the proxy node 18 a processes the TCP/IP SYN packet 380 and responds to the network client 12 with a SYN+ACK TCP/IP packet 382 that is sent through the network node 16 a and that indicates that the SYN request 380 has been acknowledged. Each connection request is acknowledged before a connection can take place. The network client 12 receives the SYN+ACK signal 382 and responds with an ACK packet 384. The connection then is established between the network client 12 and the proxy node 18 a.

[0056] The network client 12 begins transmitting data 386 to the proxy node 18 a through the network node 16 a over the established connection. At that point, the proxy node 18 a realizes that involvement from the application node 20 a is needed. Therefore, the proxy node 18 a sends a CONNECTION_REQUEST message 390 with or without the translated data to the application node 20 a using a lightweight protocol. The application node 20 a responds by issuing an ACCEPT_CONNECTION lightweight protocol message 392 with any necessary data. The proxy node 18 a processes and translates the messages 392 . . . 398 and communicates the translated data 394 . . . 402 to the network client 12 through the network node 16 a using TCP/IP.

[0057] When the network client 12 receives data, it sends acknowledgement ACK packets 404 . . . 410 back to the proxy node 18 a. When the application node 20 a is finished sending all requested data, it sends a CLOSE_CONNECTION message 406 to the proxy node 18 a. Multiple lightweight protocol messages may be combined with each other. The proxy node 18 a then sends a FIN TCP/IP packet 408 through the network node 16 a to the network client 12 indicating that the connection should be terminated. The network client 12 acknowledges receipt of the signal 408 by sending a FIN+ACK TCP/IP packet 412 through the network node 16 a to the proxy node 18 a. The proxy node then sends an ACK packet 414 to the network client 12.

[0058] FIG. 6 illustrates state transition diagrams for an IngressPool 504 buffer and an EgressPool 506 buffer. Proxy nodes, for example proxy node 18 a, may be optimized for TCP/IP-to-lightweight protocol translation by using IngressPool 504 buffers and EgressPool 506 buffers to perform zero-copy translation of data. The proxy node 18 a may receive 500 data (from network clients 12 through network nodes 16 a . . . 16 k) and may receive 502 data (from application nodes 20 a . . . 20 k) simultaneously. The protocol translator 26 can be configured to perform zero-copy translation of data in a proxy node 18 a. Other techniques also can be utilized to enable zero-copy translation of data. Proxy nodes 18 a . . . 18 k may maintain two pools of registered buffers. IngressPool 504 buffers may store incoming data and EgressPool 506 buffers may store outgoing data. Both of these pools may be registered with each interface 30, 34 in the proxy nodes 18 a . . . 18 k. The recommended size of the IngressPool 504 buffer may typically be determined by considering the maximum number of concurrent TCP connections to network clients 12 through network nodes 16 a . . . 16 k supported, the maximum size of receive windows advertised to the network client 12, and the maximum number of outstanding receive descriptors on the SAN channels 24. The recommended size of the EgressPool 506 buffer may typically be determined by considering the maximum number of SAN channels 24 used for communication with application nodes 20 a, 20 b, 20 c . . . 20 k, the maximum amount of credits available per SAN channel 24, the maximum number of concurrent TCP connections supported, and the maximum size of receive windows advertised to the network clients. Each buffer may be described by a memory buffer structure that tracks the memory handle, the offset within the buffer, and the length of data. Each buffer can exist in one of several main states: posted (508 & 512), received (500 & 502), and freed (510 & 514). By maintaining the memory buffer structure and states of the buffer, the proxy node 18 a may achieve zero-copy translation.
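The buffer lifecycle described above can be sketched as a small class. The class layout, field names, and the strict posted → received → freed → posted cycle are illustrative assumptions modeled on the states named in paragraph [0058]; the patent's FIG. 6 may permit other transitions.

```python
# Sketch of a registered buffer from the IngressPool/EgressPool of FIG. 6.
# The memory buffer structure tracks the handle, offset, and data length;
# the state cycle posted -> received -> freed -> posted is an assumption.

class RegisteredBuffer:
    NEXT_STATE = {"freed": "posted", "posted": "received", "received": "freed"}

    def __init__(self, handle, offset=0, length=0):
        self.handle = handle      # memory registration handle
        self.offset = offset      # offset within the buffer
        self.length = length      # length of valid data
        self.state = "freed"      # not yet posted to an interface

    def advance(self):
        """Move the buffer to the next state in its lifecycle."""
        self.state = self.NEXT_STATE[self.state]
        return self.state
```

Because only the descriptor (handle, offset, length) changes state, the data itself never moves between the TCP/IP side and the lightweight-protocol side, which is the essence of the zero-copy translation described above.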

[0059] Any higher layer service that executes policies above the transport layer can be built on top of the decoupling technique on a proxy node 18 a. For example, a layer 5 (L5) web switch that maintains Hyper Text Transfer Protocol (HTTP) 1.0, HTTP-S (Secure) 1.0, HTTP 1.1, HTTP-S 1.1 connections with the network clients and HTTP 1.1 connections with the application nodes 20 a, 20 b, 20 c . . . 20 k can be built on top of a proxy node 18 a. Additionally, HTTP 1.0, HTTP-S 1.0, HTTP 1.1, HTTP-S 1.1 data exchanged with network clients 12 may use TCP/IP; and HTTP 1.1 data exchanged between proxy nodes 18 a . . . 18 k and application nodes 20 a, 20 b . . . 20 k can use lightweight protocol. Each HTTP transaction from a network client 12 can be mapped onto a HTTP 1.1 transaction to an application node 20 a, 20 b . . . 20 k.

[0060] In addition to the foregoing techniques, the proxy node 18 a can be configured to perform other processing related to TCP/IP including, for example, timers and algorithms such as congestion control, slow start, fast retransmit, and the Nagle algorithm.

[0061] Computer systems implementing these techniques may realize one or more of the following advantages. First, the techniques can result in faster SAN 14 operating speeds as a result of the elimination of system bottlenecking effects caused by extensive protocol processing overhead at the application nodes 20 a, 20 b, 20 c . . . 20 k. Second, decoupling TCP/IP processing from the application nodes 20 a, 20 b, 20 c . . . 20 k to the proxy nodes 18 a, 18 b can allow independent scaling of system TCP/IP processing capabilities and application processing capabilities. Third, since these techniques are typically not constrained by legacy API (sockets), operating system environment, or hardware platform, systems may be optimized to meet both TCP/IP and lightweight protocol processing demands. Fourth, other value-added services, such as load balancing, caching, firewall, content transformation, and security protocol processing can be built on top of these decoupling techniques. SANs 14 incorporating the techniques described above can incorporate two levels of load balancing. At one level, the network nodes 16 a . . . 16 k can perform session level load balancing on a group of proxy nodes 18 a . . . 18 k using network address translation techniques or Internet protocol tunneling techniques. At a second level, each proxy node 18 a . . . 18 k can perform application level load balancing on a group of application nodes 20 a, 20 b, 20 c . . . 20 k.

[0062] Furthermore, systems using the techniques described above can provide increased flexibility. In addition, failures incurred in TCP/IP processing may be treated independently from application failures, thus providing improved SAN 14 reliability. Additionally, resource contention between application processing demands and TCP/IP processing demands can be significantly reduced.

[0063] Various features of the system may be implemented with hardware, software or with a combination of hardware and software. For example, some aspects of the system can be implemented in computer programs executing on programmable computers. Each program can be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. Furthermore, each such computer program can be stored on a storage medium, such as read-only memory (ROM) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage medium is read by the computer to perform the functions described above.

[0064] Other implementations are within the scope of the following claims.

Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US6895590 | Sep 26, 2001 | May 17, 2005 | Intel Corporation | Method and system enabling both legacy and new applications to access an InfiniBand fabric via a socket API
US7024479 | Jan 22, 2001 | Apr 4, 2006 | Intel Corporation | Filtering calls in system area networks
US7877507 * | Feb 29, 2008 | Jan 25, 2011 | Red Hat, Inc. | Tunneling SSL over SSH
US8190771 | Dec 17, 2010 | May 29, 2012 | Red Hat, Inc. | Tunneling SSL over SSH
US8238241 * | Jul 28, 2004 | Aug 7, 2012 | Citrix Systems, Inc. | Automatic detection and window virtualization for flow control
US8380873 | Feb 24, 2012 | Feb 19, 2013 | Red Hat, Inc. | Tunneling SSL over SSH
US20070133511 * | Dec 8, 2005 | Jun 14, 2007 | International Business Machines Corporation | Composite services delivery utilizing lightweight messaging
WO2011137175A1 * | Apr 27, 2011 | Nov 3, 2011 | Interdigital Patent Holdings, Inc. | Light weight protocol and agent in a network communication
Classifications
U.S. Classification709/246, 709/250, 709/227, 709/230
International ClassificationH04L29/06, H04L29/08
Cooperative ClassificationH04L67/1029, H04L67/1097, H04L67/1031, H04L67/1008, H04L69/326, H04L69/169, H04L69/16, H04L69/163, H04L69/08, H04L69/10, H04L67/1002, H04L69/32, H04L69/161
European ClassificationH04L29/06J7, H04L29/06J19, H04L29/06J3, H04L29/08A4, H04L29/06E, H04L29/06J, H04L29/08A, H04L29/06F
Legal Events
Date | Code | Event | Description
Apr 23, 2001 | AS | Assignment | Owner name: INTEL CORPORATION, CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHAH, HEMAL V.;REGNIER, GREG J.;REEL/FRAME:011747/0529; Effective date: 20010405