Why doesn't the server receive all UDP packets in a local transfer using sockets in C#?

I have a server application and a client application where the client sends a bunch of packets to the server. The protocol used is UDP. The client application creates a new thread to send packets in a loop, and the server application likewise uses a new thread to receive packets in a loop.

Both of these applications must update their user interface with the transfer progress. How to keep the UI properly updated was solved in this question. Basically, both the server and the client application raise an event (code below) on each iteration of the loop, and both keep the UI updated with the progress. Something like this:

private void EVENTHANDLER_UpdateTransferProgress(long transferedBytes) {
    // Accumulate totals; a UI timer reads these counters periodically.
    receivedBytesCount += transferedBytes;
    packetCount++;
}

A timer in each app updates the user interface with the latest values of receivedBytesCount and packetCount.
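For illustration, the timer side might look something like the sketch below (a WinForms sketch; the timer field, the label name lblProgress, and the 100 ms interval are assumptions, not from the question):

private System.Windows.Forms.Timer uiTimer; // assumed field name

private void StartProgressTimer() {
    uiTimer = new System.Windows.Forms.Timer();
    uiTimer.Interval = 100; // refresh roughly ten times per second (illustrative value)
    uiTimer.Tick += (s, e) =>
        lblProgress.Text = packetCount + " packets, " + receivedBytesCount + " bytes";
    uiTimer.Start();
}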

The client application has no problem at all: everything works as expected, and the UI is updated every time a packet is sent. The server is where things go wrong...

When the transfer is complete, receivedBytesCount and packetCount do not match the total size in bytes or the number of packets sent by the client. (Each packet is 512 bytes in size.) The server application counts a packet as received immediately after the call to Socket.ReceiveFrom(), and it seems that for some reason it doesn't receive all the packets it should.
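For context, the receive loop presumably looks something like this sketch (serverSocket, running, and the endpoint are assumed names, not from the question; requires System.Net and System.Net.Sockets):

byte[] buffer = new byte[512];                      // packets are 512 bytes
EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
while (running) {
    int received = serverSocket.ReceiveFrom(buffer, ref remote);
    EVENTHANDLER_UpdateTransferProgress(received);  // counted right after ReceiveFrom()
}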

I know that I am using UDP, which does not guarantee that packets actually arrive at their destination and performs no retransmission, so packet loss can occur. But my question is: since I am testing this locally, with the server and client on the same machine, why exactly is this happening?

If I put Thread.Sleep(1) (which in practice pauses for about 15 ms) in the client's send loop, the server receives all the packets. Since I am doing this locally, is the client sending packets so quickly (without calling Sleep()) that the server cannot keep up? Is that the problem, or is it something else?
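For reference, the throttled send loop amounts to something like this (clientSocket, packets, and serverEndPoint are assumed names for illustration):

foreach (byte[] packet in packets) {
    clientSocket.SendTo(packet, serverEndPoint);
    Thread.Sleep(1); // on Windows this typically rounds up to the ~15.6 ms timer resolution
}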



1 answer


"If I put Thread.Sleep(1) (which seems to pause 15 ms) in the client's send loop, the server will receive all packets."

The socket buffers fill up and the stack discards messages. UDP has no flow control, so if you try to send a huge number of datagrams in a tight loop, some will be discarded.
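One partial mitigation, not part of the original answer: enlarging the socket's OS-level receive buffer gives bursts more headroom, though it only postpones the drops rather than preventing them.

// Assumption: serverSocket is the bound UDP socket. .NET's default
// ReceiveBufferSize is 8 KB; a larger buffer absorbs longer bursts.
serverSocket.ReceiveBufferSize = 1024 * 1024; // 1 MB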



Your options: use a sleep() loop (ugh!), implement some form of flow control on top of UDP (a minimal sketch follows below), throttle the sender by some non-network means (for example, with asynchronous calls, a buffer pool, and inter-thread communication), or use another protocol with flow control built in.
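The simplest flow control on top of UDP is stop-and-wait: send one datagram, then block until the receiver echoes a short ACK. A minimal sketch, with clientSocket, packets, and serverEndPoint as assumed names (a real implementation would also resend on timeout):

byte[] ack = new byte[1];
clientSocket.ReceiveTimeout = 500; // ms; real code resends when Receive throws on timeout
foreach (byte[] packet in packets) {
    clientSocket.SendTo(packet, serverEndPoint);
    clientSocket.Receive(ack); // wait for the server's acknowledgement
}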

If you shovel data into the networking stack faster than it can digest it, you shouldn't be surprised if it throws some away now and then.
