Is blocking Netty IO Threads a good idea for backpressure?

OK, so I understand that doing long/blocking operations on a Netty IO thread is a bad idea, because it blocks the event loop and stalls Netty's event dispatching.

However, I started wondering whether it could actually be a good way to implement some sort of backpressure mechanism.

Let me explain.

Imagine you are building a client application that uses Netty to connect to a large number of servers (say HTTP servers, although the protocol doesn't really matter). You query many servers at once, process each response, and may even have to make another network call to pass the result along. Now imagine that the rate at which data arrives from all these HTTP servers is much higher than the rate at which you can process it.

Now, in this scenario, you could follow the best practice of not blocking Netty threads and offload these time-consuming operations to a separate thread pool. That's great, but this pool keeps heating up as responses pile in from the Netty IO threads, and you will see an OOM fairly soon if your Executor has an unbounded task queue.
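For concreteness, here is a minimal sketch of the setup I mean (the handler, the pool size, and the processResponse call are just illustrative assumptions): the IO thread hands each response off to a fixed-size pool, and the pool's default unbounded queue is where the backlog grows.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpResponse;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadingHandler extends SimpleChannelInboundHandler<FullHttpResponse> {

    // Executors.newFixedThreadPool backs the pool with an unbounded
    // LinkedBlockingQueue -- nothing limits how many pending responses
    // pile up here if producers outpace the workers.
    private static final ExecutorService workers = Executors.newFixedThreadPool(8);

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse response) {
        response.retain(); // keep the buffer alive past this method (auto-released otherwise)
        workers.submit(() -> {
            try {
                processResponse(response); // slow work: parsing, another network call, ...
            } finally {
                response.release();
            }
        });
    }

    private void processResponse(FullHttpResponse response) {
        // placeholder for the expensive processing described above
    }
}
```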

So, if you actually perform all these expensive operations on the IO threads, the blocking itself can serve as a backpressure mechanism: Netty is only allowed to go as fast as you can process. [EDIT] Alternatively, you can back the ExecutorService with a bounded blocking queue, which has the same effect: it blocks the calling thread when the queue is full. [/EDIT]
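This is roughly what I mean by the blocking-queue variant (a sketch, not a tested implementation): a ThreadPoolExecutor over a bounded ArrayBlockingQueue whose rejection handler re-inserts the task with put(), so the submitting Netty IO thread blocks until a slot frees up.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class BlockingExecutors {

    // Bounded queue + blocking rejection handler: when the queue is full,
    // execute()/submit() blocks the caller (here: the Netty IO thread)
    // instead of growing the backlog -- the backpressure effect described above.
    public static ThreadPoolExecutor newBoundedBlockingPool(int threads, int queueCapacity) {
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(queueCapacity);
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                queue,
                (task, executor) -> {
                    try {
                        // Block the submitting thread until there is room in the queue.
                        executor.getQueue().put(task);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RejectedExecutionException("Interrupted while waiting for queue space", e);
                    }
                });
    }
}
```

Handing a pool like this to the handler above would make the IO thread stall in submit() whenever processing falls behind, which is exactly the behaviour I'm asking about.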

Does it make sense to do it this way? If not, what is the recommended way to deal with slow consumers in a scenario like this?
