Socket: measuring the data transfer rate (in bytes/second) between two applications

I have an application that continuously transfers data to a second application (the consumer) over a TCP socket. How can I calculate the total time from the moment the data is sent by the first application until it is received by the second one? Both apps are written in C/C++. My current approach is as follows (in pseudocode):

struct packet{
   long sent_time;  
   char* data;  
};

      

FIRST APP (EMITTER):

packet p = new packet();
p.data = initialize data (either from file or hard coded)
p.sent_time = get current time (using gettimeofday function)

//send the packet struct (containing sent time and packet data)
send (sockfd, p, ...); 

      

SECOND APP (CONSUMER):

packet p = new packet();
nbytes = recv (sockfd, p, .....); // get the packet struct (which contains the sent time and data)
receive_time = get current time
data_transfer_time = receive_time - p.sent_time (assume I have converted this to seconds)
data_transfer_rate = nbytes / data_transfer_time; // in bytes per second

      

However, the problem is that the local clocks of the two applications (emitter and consumer) are not the same, since they run on different computers, which makes the result completely useless. Is there a better way to do this properly (programmatically) and measure the transfer rate as accurately as possible?





2 answers


If your protocol allows it, you can send an acknowledgment back from the server for each received packet. You need this anyway if you want to be sure that the server actually received/processed the data.

Once you have that, you can do the rate calculation entirely on the client: take the time between sending the data and receiving the ACK, subtract the round-trip latency (RTT), and you get a pretty accurate measurement.
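A minimal sketch of that idea, assuming sockfd is an already connected TCP socket, the server replies with a one-byte ACK once it has received the whole buffer, and the RTT has been measured separately (all of these are assumptions for illustration, not something the answer specifies):

// Client side: measure the send -> ACK interval on a single monotonic clock,
// so no clock synchronization between the two machines is needed.
#include <sys/socket.h>
#include <chrono>
#include <cstddef>

double measure_rate(int sockfd, const char* buf, size_t len, double rtt_seconds)
{
    auto t0 = std::chrono::steady_clock::now();
    send(sockfd, buf, len, 0);            // error handling omitted for brevity

    char ack;
    recv(sockfd, &ack, 1, MSG_WAITALL);   // block until the server's ACK arrives

    auto t1 = std::chrono::steady_clock::now();
    double elapsed = std::chrono::duration<double>(t1 - t0).count();

    // Subtract the separately measured round-trip latency to approximate
    // the pure transfer time.
    double transfer_time = elapsed - rtt_seconds;
    return transfer_time > 0 ? len / transfer_time : 0.0;   // bytes per second
}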



Alternatively, you can use a time synchronization tool like NTP to synchronize the clocks on the two machines.





First of all: even if your clocks were synchronized, you would be measuring latency, not bandwidth. On any network connection, chances are there is more than one packet in transit at any given time, which makes your one-packet approach useless for measuring throughput.

E.g. compare the ping time from your mobile device to an HTTP server with the maximum download speed: the ping time will be tens of milliseconds and the packet size roughly 1.5 KB, which would imply a bandwidth far lower than what you actually observe when downloading.

If you want to measure real throughput, use a blocking socket on the sender side, send e.g. one million packets as fast as the system allows, and measure the time between the arrival of the first packet and the arrival of the last packet on the receiving end.
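A rough sketch of the receiving end of that measurement, assuming the sender simply streams the data over an already connected TCP socket sockfd and closes the connection when it is done (both assumptions for illustration):

// Receiver side: throughput = total bytes / (arrival of last data - arrival of first data).
#include <sys/types.h>
#include <sys/socket.h>
#include <chrono>

double measure_throughput(int sockfd)
{
    char buf[64 * 1024];
    long long total = 0;
    std::chrono::steady_clock::time_point first, last;
    bool got_first = false;

    ssize_t n;
    while ((n = recv(sockfd, buf, sizeof(buf), 0)) > 0) {
        if (!got_first) { first = std::chrono::steady_clock::now(); got_first = true; }
        last = std::chrono::steady_clock::now();
        total += n;
    }

    double elapsed = std::chrono::duration<double>(last - first).count();
    return elapsed > 0 ? total / elapsed : 0.0;   // bytes per second
}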

If OTOH you want to measure the delay accurately, use



struct packet{
   long sent_time;     // filled in by the client when it sends the packet (client clock)
   long reflect_time;  // filled in by the server when it reflects the packet (server clock)
   char* data;
};

      

and have the server reflect the packet back. On the client side, examine all three timestamps, then reverse the roles to get a handle on asymmetric delays.

Edit: what I meant is that the reflection timestamp comes from a "different" clock, so by running the test in both directions you can filter out the clock offset.
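A sketch of the client side of that reflection scheme, under a few illustrative assumptions not spelled out in the answer: a fixed-size payload instead of char*, fixed-width timestamp fields, the same endianness/padding on both machines, and a server that fills in reflect_time from its own clock and echoes the packet back:

#include <sys/socket.h>
#include <sys/time.h>
#include <cstdint>
#include <cstring>

// Wire format with fixed-width fields so both sides agree on the layout.
struct packet {
    int64_t sent_time;     // filled in by the client (client clock, microseconds)
    int64_t reflect_time;  // filled in by the server (server clock, microseconds)
    char    data[1024];    // payload carried by value, not as a pointer
};

static int64_t now_us()    // current wall-clock time in microseconds
{
    struct timeval tv;
    gettimeofday(&tv, nullptr);
    return (int64_t)tv.tv_sec * 1000000 + tv.tv_usec;
}

int64_t measure_round_trip(int sockfd)
{
    packet p;
    memset(&p, 0, sizeof(p));

    p.sent_time = now_us();
    send(sockfd, &p, sizeof(p), 0);              // client -> server

    recv(sockfd, &p, sizeof(p), MSG_WAITALL);    // server echoes it back with reflect_time set
    int64_t receive_time = now_us();

    // sent_time and receive_time are on the client clock, reflect_time on the
    // server clock; running the same test with the roles reversed lets you
    // filter out the constant offset between the two clocks.
    return receive_time - p.sent_time;           // round trip seen by the client alone
}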









