Node.js stream writes in a loop

I have been benchmarking some Node.js functionality lately and ran into some odd results that I cannot explain. Here is a simple piece of code that I tested, along with the results:

http://pastebin.com/0eeBGSV9

You can see that it does a healthy 8553 requests per second over 100k requests with a concurrency of 200. A friend then told me that I should not use async in this case, as the loop is not large enough to block the Node event loop, so I refactored the code to use a plain for loop, which pushed the benchmark result even higher:

http://pastebin.com/0jgRPNEC

Here we get 9174 requests per second. Nice. (Oddly enough, the for-loop version stayed faster than the async version even when I raised the iteration count to 10k.)
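In case the pastebins go away, the for-loop version boils down to roughly this (a minimal sketch; the port, iteration count, and payload are placeholders, not the exact values from my code):

    var http = require('http');

    http.createServer(function (req, res) {
        var body = '';
        // build the whole response body in memory first
        for (var i = 0; i < 1000; i++) {
            body += 'some data';
        }
        // then hand it to the socket with a single call
        res.end(body);
    }).listen(8080);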

But then my friend wondered whether this result could be pushed even further by streaming the output instead of dumping all the data once the loop completes. So I refactored the code once more to use res.write for the data output:

http://pastebin.com/wM0x5nh9

aaaaand we are down to 2860 requests per second. What happened here? Why is streaming so sluggish? Is there a mistake in my code, or is this simply how Node works with streams?
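The streaming variant simply replaces the concatenation with one res.write() per iteration, roughly like this (again a sketch with placeholder values):

    var http = require('http');

    http.createServer(function (req, res) {
        // write each chunk out as soon as it is produced
        for (var i = 0; i < 1000; i++) {
            res.write('some data');
        }
        res.end();
    }).listen(8080);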

Node version 0.10.25 on Ubuntu, installed via apt with default settings.

I also initially tested the same code on JXCore and HHVM (using the async.js version of the Node code), with the results here: http://pastebin.com/6tuYGhYG. Curiously, the Node cluster result was faster than the latest JXCore 2.3.2.

Any criticism would be greatly appreciated.

EDIT: @mscdex, I was curious whether the overhead of calling res.write() might be the problem, so I changed the approach to pass the data through a newly created stream piped into res. I naively figured that this way Node might somehow optimize the output buffering and stream the data efficiently. While this version also worked, it was even slower than before:

http://pastebin.com/erF6YKS5
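For clarity, this last variant looks roughly like this (a sketch, assuming a manually-fed Readable piped into the response; names and counts are placeholders):

    var http = require('http');
    var Readable = require('stream').Readable;

    http.createServer(function (req, res) {
        var stream = new Readable();
        stream._read = function () {}; // data is pushed manually below
        stream.pipe(res);
        for (var i = 0; i < 1000; i++) {
            stream.push('some data');
        }
        stream.push(null); // signal end of data
    }).listen(8080);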



1 answer


My guess would be the overhead of having many separate syscalls.

Node v0.12+ added "corking" functionality: you can call res.write() as much as you want, but cork and uncork the stream so that all of those writes result in a single write() syscall. This is effectively what you are doing now with the output concatenation, except that corking does it for you. In some places in node core this corking feature may even be used automatically behind the scenes, so you don't have to explicitly cork/uncork to get good performance.
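You can see the batching effect with any Writable that implements writev (a sketch; the counting sink below just stands in for a real socket, where each underlying write would be a syscall):

    var Writable = require('stream').Writable;

    // Toy sink: each underlying write call stands in for one syscall.
    var sink = new Writable({
        writev: function (chunks, cb) {
            console.log('flushed ' + chunks.length + ' chunks in one underlying write');
            cb();
        },
        write: function (chunk, enc, cb) {
            console.log('one underlying write of ' + chunk.length + ' bytes');
            cb();
        }
    });

    sink.cork(); // buffer subsequent writes in memory
    for (var i = 0; i < 1000; i++) {
        sink.write('x');
    }
    // uncork on the next tick so all 1000 writes flush as one batch
    process.nextTick(function () {
        sink.uncork();
    });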







