Server Application Design Techniques
During my undergrad, I took a network programming course using POSIX sockets on Linux. We used the book Internetworking with TCP/IP (Comer and Stevens) as a reference. It dates from 2008 and is rather outdated, but it is still an applicable text, and it covers several server design projects.
One project that is not shown in the book is the case where a client connects to an application server and sends multiple requests to it over a single TCP connection. Multiple requests arrive at the server through the slave (connected) socket, and the responses are sent back through that same socket. If the socket becomes overloaded because requests and responses share it, would it be better to open a second TCP connection between the endpoints to get full-duplex communication speed? What other architectures can be used to improve server performance?
Since a socket is just a number attached to packets (think of it as an identifier, or a routing address), I cannot imagine the socket itself becoming overloaded.
Your socket-handling code can become overloaded, but that should be fairly easy to fix by dispatching packets as they come in.
You can also split inbound and outbound packet processing into separate threads, or even queue packets for distribution across multiple worker threads.
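As a rough sketch of that idea (my own illustration, not code from the book or the question): one reader thread drains the connected socket and queues complete requests, while a pool of worker threads processes them and sends responses back over the same socket, serialized by a lock. The newline framing and the uppercasing "handler" are assumptions made purely for the demo.

```python
import queue
import socket
import threading

def serve_connection(conn, num_workers=4):
    """Handle one connected socket with a reader thread plus a worker pool."""
    requests = queue.Queue()
    send_lock = threading.Lock()  # responses share the same socket

    def reader():
        buf = b""
        while True:
            data = conn.recv(4096)
            if not data:            # peer finished sending
                break
            buf += data
            while b"\n" in buf:     # one request per line (assumed framing)
                line, buf = buf.split(b"\n", 1)
                requests.put(line)
        for _ in range(num_workers):
            requests.put(None)      # poison pills stop the workers

    def worker():
        while True:
            req = requests.get()
            if req is None:
                break
            resp = req.upper()      # stand-in for real request handling
            with send_lock:         # serialize writes to the shared socket
                conn.sendall(resp + b"\n")

    threads = [threading.Thread(target=reader)]
    threads += [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    return threads

# Demo: a socketpair stands in for an accepted TCP connection.
client, server_side = socket.socketpair()
threads = serve_connection(server_side)
client.sendall(b"ping\nhello\n")
client.shutdown(socket.SHUT_WR)     # client is done sending
replies = b""
while replies.count(b"\n") < 2:
    replies += client.recv(4096)
for t in threads:
    t.join()
result = sorted(replies.split(b"\n")[:-1])
print(result)
```

Note that responses may come back in any order once multiple workers are involved; a real protocol would need request IDs or a single writer thread to preserve ordering.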
But I really don't see the original premise being accurate. I could be wrong ...
TCP connections are already full duplex. You can (to simplify) think of a TCP connection as two independent unidirectional streams, one for sending and one for receiving.
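To see the full-duplex behavior concretely, here is a small sketch (my illustration, not part of the original answer) in which both endpoints of a single loopback TCP connection send a large payload while simultaneously receiving the peer's payload; neither direction blocks the other:

```python
import socket
import threading

# One real TCP connection over loopback.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
client = socket.create_connection(srv.getsockname())
server, _ = srv.accept()

def pump(sock, payload, out):
    # Send from a helper thread while receiving on this thread, so both
    # directions of the single connection are active at the same time.
    sender = threading.Thread(target=sock.sendall, args=(payload,))
    sender.start()
    got = b""
    while len(got) < len(payload):   # peer sends an equal-sized payload
        got += sock.recv(65536)
    sender.join()
    out.append(got)

payload = b"x" * 1_000_000   # big enough to overflow the socket buffers
out_a, out_b = [], []
ta = threading.Thread(target=pump, args=(client, payload, out_a))
tb = threading.Thread(target=pump, args=(server, payload, out_b))
ta.start(); tb.start(); ta.join(); tb.join()
ok = out_a[0] == payload and out_b[0] == payload
print(ok)
```

The payload is deliberately larger than typical socket buffers: if the two directions were not independent, each side blocking in sendall while the other also sent would deadlock, but here both transfers complete.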
Sending multiple requests over a single connection is actually used to improve performance in several protocols, since reusing the connection avoids the handshake and slow-start overhead. One example is HTTP keep-alive (persistent) connections. Another way to improve performance is pipelining (sending multiple requests without waiting for responses), which obviously can only be done if you reuse a TCP connection for multiple requests.
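A minimal sketch of pipelining over one reused connection (my own illustration; the newline framing and the uppercasing echo server are assumptions, not any specific protocol): the client writes all its requests back-to-back, paying the connection setup cost only once, and only then reads the responses, which come back in request order:

```python
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def server():
    conn, _ = srv.accept()
    buf = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            req, buf = buf.split(b"\n", 1)
            conn.sendall(req.upper() + b"\n")  # reply in request order
    conn.close()

t = threading.Thread(target=server)
t.start()

cli = socket.create_connection(srv.getsockname())
# Pipelining: send every request without waiting for any response.
for word in (b"one", b"two", b"three"):
    cli.sendall(word + b"\n")
cli.shutdown(socket.SHUT_WR)   # tell the server we are done sending
replies = b""
while True:
    chunk = cli.recv(4096)
    if not chunk:
        break
    replies += chunk
t.join()
responses = replies.decode().split()
print(responses)
```

Without pipelining, three request/response round trips would each pay one network latency; here all three requests are in flight before the first response is read.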