Huge difference in netcat and iperf results for 10G link
I am confused by the huge difference between the netcat and iperf results. I have a 10 Gbit link connecting my server and client. I get about 10 Gbit/s with iperf, but only ~289 MB/s (about 2.3 Gbit/s) with netcat. What am I doing wrong?
For iperf
Server
iperf -s
Client
iperf -c 172.79.56.27 -i1 -t 10
Result:
Client connecting to 172.79.56.27, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 172.79.56.28 port 46058 connected with 172.79.56.27 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 1.07 GBytes 9.23 Gbits/sec
[ 3] 1.0- 2.0 sec 1.09 GBytes 9.35 Gbits/sec
[ 3] 2.0- 3.0 sec 1.09 GBytes 9.35 Gbits/sec
[ 3] 3.0- 4.0 sec 1.09 GBytes 9.35 Gbits/sec
[ 3] 4.0- 5.0 sec 1.09 GBytes 9.36 Gbits/sec
[ 3] 5.0- 6.0 sec 1.09 GBytes 9.35 Gbits/sec
[ 3] 6.0- 7.0 sec 1.09 GBytes 9.36 Gbits/sec
[ 3] 7.0- 8.0 sec 1.09 GBytes 9.35 Gbits/sec
[ 3] 8.0- 9.0 sec 1.09 GBytes 9.36 Gbits/sec
[ 3] 9.0-10.0 sec 1.09 GBytes 9.35 Gbits/sec
[ 3] 0.0-10.0 sec 10.9 GBytes 9.34 Gbits/sec
For netcat
Server
nc -v -v -l -n 2222 >/dev/null
Client
time dd if=/dev/zero | nc -v -v -n 172.79.56.27 2222
Connection to 172.79.56.27 2222 port [tcp/*] succeeded!
^C6454690+0 records in
6454690+0 records out
3304801280 bytes (3.3 GB) copied, 11.4463 s, 289 MB/s
real 0m11.449s
user 0m6.868s
sys 0m15.372s
user1352179,
Run the netcat test again and watch htop in another window. I bet you will see that the bottleneck is the single dd if=/dev/zero read stream. Try rerunning the test with n parallel instances of dd | netcat, where n is the number of cores on your system, then sum the bandwidth across all the parallel runs to see the real result. (Make sure each sender transmits on a different port, and that the receiving end is multi-stream as well, listening on all of those ports.)
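A minimal sketch of that parallel test, run here over loopback so it is self-contained; on a real setup you would run the listener loop on the server and point the senders at its address instead of 127.0.0.1. The port range, stream count, and timeouts are illustrative choices, not from the question, and nc flag syntax varies between netcat variants (this assumes "nc -l PORT" works, as in the question's commands).

```shell
N=4                  # number of parallel streams, roughly one per CPU core

# Receiving end: one listener per port, each discarding its input.
# timeout reaps any nc that lingers after its sender finishes.
for i in $(seq 0 $((N - 1))); do
    timeout 8 nc -l $((2222 + i)) > /dev/null &
done
sleep 1              # give the listeners time to bind

# Sending end: one dd | nc pipeline per port, all in parallel.
# bs=1M avoids dd's default tiny 512-byte writes; dd prints its own
# per-stream throughput on stderr when it finishes.
for i in $(seq 0 $((N - 1))); do
    dd if=/dev/zero bs=1M count=64 | timeout 5 nc -n 127.0.0.1 $((2222 + i)) &
done
wait                 # total bandwidth = the sum of the N per-stream rates
```

Adding up the "MB/s" figures dd reports for the N streams gives the aggregate rate; if that sum is much higher than the single-stream number, the bottleneck was the single dd/nc pipeline, not the link.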