How does single-threaded NGINX handle so many connections?

NGINX uses epoll event notification to find out whether there is any data to read on a socket.

Suppose two requests arrive at the server. nginx is notified of both and starts to:

  • get the first request

  • parse its headers

  • check the boundary (body size)

  • send the first request to an upstream server

  • and so on.

nginx is single threaded and can only perform one operation at a time.

But what happens with the second request?

  • Does nginx receive a second request while parsing the first?

  • Or does it start processing the second request only after the first one completes?

  • Or something else that I don't understand.

If 1. is correct, I do not understand how this is possible within a single thread.

If 2. is correct, how can nginx be so fast? It would process all incoming requests sequentially, with only one request handled at any given time.

Please help me understand. Thanks.



1 answer


Nginx is not a single-threaded application. It doesn't start a thread for every connection, but it does start multiple worker processes at startup. The nginx architecture is well documented at http://www.aosabook.org/en/nginx.html.

In fact, a single-threaded, non-blocking application is the most efficient design for single-processor hardware. When there is only one processor and the application never blocks, it can use the full processor power. A non-blocking application never calls a function that may sit waiting for an event: all I/O operations are asynchronous. This means the application does not simply call read() on a socket, because that call could wait until data is available. Instead, a non-blocking application uses some kind of notification mechanism that tells it data is available, so it can call read() without the risk of the call waiting for something. Therefore, an ideal non-blocking application needs only one thread per processor in the system. Since nginx uses non-blocking calls, processing in multiple threads makes no sense, because there would be no CPU left to execute the additional threads.



The actual transfer of data from the network card into a buffer is performed by the kernel when the network card raises an interrupt. Nginx then reads the request from the buffer and processes it. It makes no sense to start processing another request until the current one is finished, or until its processing requires an action that might block (such as reading from disk).
