Reproducing write-write-read latency with Java sockets

I've read that the combination of three things, Nagle's algorithm, delayed acknowledgments, and a "write-write-read" call pattern, can cause roughly 200 ms of latency on TCP. However, I cannot reproduce this delay with Java sockets, so I am not sure I have understood it correctly.

I am running a test on Windows 7 with Java 7, using two threads connected over loopback sockets. I haven't touched the tcpNoDelay option on either socket (the default is false) and haven't changed any OS TCP settings. The client's main loop is shown below. The server replies with one byte after every two bytes it receives from the client.

for (int i = 0; i < 100; i++) {
    client.getOutputStream().write(1);
    client.getOutputStream().write(2);
    System.out.println(client.getInputStream().read());
}
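The server side isn't shown in the question; here is a minimal sketch of what it might look like, assuming it simply pairs up incoming bytes. The class name, port 9999, and the reply value (the sum of the pair) are placeholders of mine, not details from the question.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical server matching the description: it replies with one
// byte after every two bytes it receives from the client.
public class TwoByteEchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9999);
             Socket socket = server.accept()) {
            InputStream in = socket.getInputStream();
            OutputStream out = socket.getOutputStream();
            int a, b;
            // Read bytes in pairs until the client closes the connection.
            while ((a = in.read()) >= 0 && (b = in.read()) >= 0) {
                out.write(a + b);   // one-byte reply per two-byte pair
            }
        }
    }
}
```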


I don't see any delay. Why not?



1 answer


I believe you are seeing delayed acknowledgment being short-circuited by the server's reply. Note first that `OutputStream.write(int)` writes only the low-order byte of its argument, so each loop iteration sends two one-byte writes, not 4 + 4 bytes. The server's TCP stack receives the data (quite possibly both bytes in a single segment, since the two writes happen back to back) and wakes the server's application thread. That thread immediately writes one byte back, and because the response is ready before the delayed-ACK timer fires, the TCP stack can carry it in the ACK segment itself. The client therefore never waits out the ~200 ms timer. Capture a traffic dump, and try running the experiment between two machines, to see what is really going on.
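One detail worth checking: `OutputStream.write(int)` writes only the low-order byte of its argument, so each call in the question's loop puts a single byte on the wire, not four. A quick in-memory check (the class name is mine):

```java
import java.io.ByteArrayOutputStream;

// Demonstrates that OutputStream.write(int) emits one byte per call:
// only the low-order 8 bits of the int argument are written.
public class WriteIntDemo {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(1);
        out.write(2);
        out.write(0x1FF);                    // low-order byte is 0xFF
        byte[] bytes = out.toByteArray();
        System.out.println(bytes.length);    // 3 bytes total, not 12
        System.out.println(bytes[2] & 0xFF); // 255
    }
}
```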


