TIME_WAIT on Loopback interface

Why do TCP connections to the loopback interface end up in TIME_WAIT (socket closed with SO_DONTLINGER), while identical connections to another host do not end up in TIME_WAIT (they are reset / destroyed immediately)?

Here are the two scenarios to illustrate:

(A) Both the client and server applications run on the same Windows machine. The client connects to the server through the loopback interface (127.0.0.1, port xxxx), sends data, receives data, and closes the socket (with SO_DONTLINGER set).

Let's say the connections are very short-lived, so the client application establishes and tears down a large number of connections every second. The end result is that the sockets end up in TIME_WAIT, and the client eventually exhausts the maximum number of sockets (3900 by default, and we assume this value will not be changed in the registry).
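For reference, a minimal sketch of such a client loop might look like the following (Winsock; the port 5000 and the payload are hypothetical, this is not the asker's actual code):

/* Sketch of scenario (A): connect, send, receive, close with
 * SO_DONTLINGER, repeated in a tight loop against 127.0.0.1. */
#include <winsock2.h>
#include <ws2tcpip.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                   /* hypothetical server port */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* loopback, as in (A) */

    for (;;) {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET)
            break;  /* socket creation failed, e.g. resources exhausted */

        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == SOCKET_ERROR) {
            closesocket(s);
            break;
        }

        char buf[64] = "ping";
        send(s, buf, 4, 0);
        recv(s, buf, sizeof(buf), 0);

        /* SO_DONTLINGER: closesocket() returns immediately, but any queued
         * data is still sent in the background (a graceful close). */
        BOOL dontLinger = TRUE;
        setsockopt(s, SOL_SOCKET, SO_DONTLINGER,
                   (const char *)&dontLinger, sizeof(dontLinger));

        closesocket(s);  /* client closes first, so this end enters TIME_WAIT */
    }

    WSACleanup();
    return 0;
}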

(B) Same two applications as in scenario (A), but the server runs on a different host (the client still runs on Windows). The connections are identical in every respect, except that they go to a different IP address instead of 127.0.0.1. Here, the connections on the client machine do NOT end up in TIME_WAIT, and the client application can continue to establish connections indefinitely.

Why the discrepancy?



1 answer


The TIME_WAIT state only occurs at one end of the connection: the end that closes first (the active closer). On the loopback interface both ends are on the same machine, so you will always see the TIME_WAIT socket locally.



In case (B), look on the other machine instead; I think you will see the TIME_WAIT sockets there.
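For example, while scenario (B) is running you could check the server host (assuming you have access to it) by running netstat -an and filtering for TIME_WAIT, e.g. netstat -an | findstr TIME_WAIT on Windows or netstat -an | grep TIME_WAIT on a Unix-like system; the lingering sockets should show up there rather than on the client.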
