TcpListener vs SocketAsyncEventArgs
Is there a good reason not to use TcpListener to implement a high performance / high throughput TCP server, instead of SocketAsyncEventArgs?
I have already implemented such a high performance / high throughput TCP server using SocketAsyncEventArgs and went through all sorts of headaches: handling the pinned buffers with one large dedicated byte array, pooling SocketAsyncEventArgs instances for accepts and receives, building it with low-level plumbing plus some clever TPL Dataflow and Rx code on top. It works great; it was almost a textbook exercise - in fact, I learned over 80% of these techniques from other people's code.
However, there are some problems and concerns:
- Complexity. I cannot delegate any modification of this server to another team member. That ties me to these tasks, and I cannot pay enough attention to other parts of other projects.
- Memory usage (pinned byte arrays). When using SocketAsyncEventArgs, the pools need to be preallocated. So to handle 100,000 concurrent connections (the worst case, even spread over different ports), a large pile of RAM just sits there uselessly; even though those conditions occur only a few times, the server should be able to handle 1 or 2 such peaks every day.
- TcpListener actually works well. I put TcpListener to the test (with some tricks, like calling AcceptTcpClient on a dedicated thread rather than the async version, then handing accepted connections off via a ConcurrentQueue instead of creating a Task in place, and the like), and with the latest version of .NET it performed very well, almost as well as SocketAsyncEventArgs, with no data loss and low memory usage, which avoids wasting too much RAM on the server and requires no preallocation.
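For context on the preallocation cost mentioned above, the usual SocketAsyncEventArgs pooling pattern looks roughly like the following sketch (the class name SaeaPool and the sizes are my own illustration, not the actual server code): every pooled SocketAsyncEventArgs gets a fixed slice of one big byte array via SetBuffer, so the whole pool's memory is committed up front.

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

// Illustrative sketch: one large preallocated array divided into
// fixed-size slices, one per pooled SocketAsyncEventArgs. This is the
// pattern that forces the big up-front RAM commitment described above.
class SaeaPool
{
    private readonly byte[] _buffer; // single large array, pinned once by the socket layer
    private readonly ConcurrentStack<SocketAsyncEventArgs> _pool =
        new ConcurrentStack<SocketAsyncEventArgs>();

    public SaeaPool(int connections, int bytesPerConnection)
    {
        _buffer = new byte[connections * bytesPerConnection];
        for (int i = 0; i < connections; i++)
        {
            var args = new SocketAsyncEventArgs();
            // Assign each SAEA its own fixed slice of the shared array.
            args.SetBuffer(_buffer, i * bytesPerConnection, bytesPerConnection);
            _pool.Push(args);
        }
    }

    public SocketAsyncEventArgs Rent() =>
        _pool.TryPop(out var args)
            ? args
            : throw new InvalidOperationException("pool exhausted");

    public void Return(SocketAsyncEventArgs args) => _pool.Push(args);
}
```

At 100,000 connections and, say, 4 KB per receive buffer, that is on the order of 400 MB committed whether or not the peak ever occurs, which is exactly the memory cost being weighed here.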
So why don't I see TcpListener used anywhere, and why is everyone (including me) using SocketAsyncEventArgs? Did I miss something?
I don't see any evidence that this question is about TcpListener at all. You seem to be interested only in the code that deals with an already-accepted connection. Such a connection is independent of the listener.
SocketAsyncEventArgs is a CPU-load optimization. I am convinced you can achieve a higher operations rate with it. How significant is the difference compared to regular asynchronous APM/TAP IO? Certainly less than an order of magnitude; probably between 1.2x and 3x. The last time I benchmarked TCP transaction rates, I found that the kernel accounted for about half of the CPU usage. That means your application can get at most 2x faster no matter how much you optimize it.
Remember, SocketAsyncEventArgs was added to the BCL around 2000 or so, when processors were far less capable.
Use SocketAsyncEventArgs only when you have proof that you need it. Otherwise it makes you less productive and leaves more room for error.
Here's the template your socket loop should follow:
while (ConnectionEstablished()) {
var someData = await ReadFromSocketAsync(socket);
await ProcessDataAsync(someData);
}
Very simple code. No callbacks, thanks to await.
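ReadFromSocketAsync and ProcessDataAsync in the template above are placeholders; a minimal sketch of what the read helper might look like (my own illustration, with message framing and error handling omitted) is:

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

// Hypothetical helper for the loop above: awaits whatever bytes are
// currently available on the socket (up to the buffer size) and
// returns them. Real code would add framing and error handling.
static async Task<byte[]> ReadFromSocketAsync(Socket socket)
{
    var buffer = new byte[4096];
    int read = await socket.ReceiveAsync(
        new ArraySegment<byte>(buffer), SocketFlags.None);
    if (read == 0)
        return Array.Empty<byte>(); // peer closed the connection

    var result = new byte[read];
    Array.Copy(buffer, result, read);
    return result;
}
```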
If you're worried about managed heap fragmentation: allocate a new byte[1024 * 1024] at startup. When you want to read from a socket, read a single byte into some free portion of this buffer. When that single-byte read completes, check how many bytes are actually available (Socket.Available) and pull the rest synchronously. That way you pin only one rather small buffer, and you can still use async IO to wait for data to arrive.
This technique does not require polling. Since Socket.Available can only grow until we read from the socket, there is no risk of accidentally reading too little.
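Put together, the single-byte-read trick just described might be sketched like this (a simplified illustration of my own; in the full scheme the one byte would land in a free slice of the shared 1 MB buffer):

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

// Sketch of the technique: await a 1-byte async read so only a tiny
// region of the shared buffer stays pinned during the wait, then drain
// the rest of the kernel buffer synchronously once data is known to
// have arrived.
static async Task<byte[]> ReadAvailableAsync(
    Socket socket, byte[] sharedBuffer, int offset)
{
    // Async wait for the first byte; pins just one byte of the buffer.
    int first = await socket.ReceiveAsync(
        new ArraySegment<byte>(sharedBuffer, offset, 1), SocketFlags.None);
    if (first == 0)
        return Array.Empty<byte>(); // connection closed

    // Ask how much more the kernel has already buffered; this count
    // can only have grown while we were waiting.
    int available = socket.Available;
    var data = new byte[1 + available];
    data[0] = sharedBuffer[offset];
    if (available > 0)
        socket.Receive(data, 1, available, SocketFlags.None); // sync drain
    return data;
}
```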
Alternatively, you can combat heap fragmentation by allocating a few very large buffers and serving out chunks of them.
Or, if you don't consider this a problem, you don't need to do anything at all.
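The "few very large buffers, serve chunks" alternative is essentially a simple slab allocator; a sketch (the class name ChunkPool and the sizes are my own illustration) could look like this. Arrays over 85,000 bytes go on the .NET Large Object Heap, so handing out segments of a handful of big arrays keeps the ordinary managed heap free of many small pinned allocations.

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative slab allocator: a few very large arrays, each carved
// into fixed-size chunks that callers rent and return.
class ChunkPool
{
    private readonly ConcurrentBag<ArraySegment<byte>> _free =
        new ConcurrentBag<ArraySegment<byte>>();

    public ChunkPool(int largeBuffers, int chunksPerBuffer, int chunkSize)
    {
        for (int b = 0; b < largeBuffers; b++)
        {
            // One big allocation per slab.
            var buffer = new byte[chunksPerBuffer * chunkSize];
            for (int c = 0; c < chunksPerBuffer; c++)
                _free.Add(new ArraySegment<byte>(buffer, c * chunkSize, chunkSize));
        }
    }

    public ArraySegment<byte> Rent() =>
        _free.TryTake(out var seg)
            ? seg
            : throw new InvalidOperationException("no free chunks");

    public void Return(ArraySegment<byte> segment) => _free.Add(segment);
}
```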