At what point are websockets less efficient than polling?

While I understand that the answer to the above question is somewhat determined by your application architecture, I am mainly interested in simple scenarios.

Basically, if my app only needs to check for changes every 5 seconds, or every minute, is the data sent just to keep a websocket connection open more than what you would spend on a simple poll?

Basically, I'm wondering whether there is a way to quantify the inefficiency you incur by using frameworks like Meteor if the application doesn't really need real-time updates, but only periodic checks.

Note that the focus here is on bandwidth usage, not necessarily on database access times, as frameworks like Meteor have highly optimized methods for requesting only database updates.


2 answers


The whole point of a websocket connection is that you never have to poll the server for changes. Instead, the client connects once, and then the server can send changes directly to the client whenever they become available. The client never has to ask; the server simply sends data when it has something new.

For any kind of server-initiated data, this is more bandwidth-efficient than HTTP polling. It is also much more timely: results are delivered immediately, rather than being discovered by the client only at the next polling interval.
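To make the contrast concrete, here is a minimal client-side sketch (TypeScript, browser APIs) comparing a 5-second poll loop with a single websocket that just waits for the server to push. The `/updates` endpoint, the `wss://example.com/live` URL, and the `render` helper are placeholders, not anything from the question.

```typescript
// Polling: the client asks over and over, even when nothing has changed.
// Every call pays the full HTTP request/response overhead.
function startPolling(): void {
  setInterval(async () => {
    const res = await fetch("/updates"); // hypothetical endpoint
    if (res.status === 200) {
      render(await res.json());
    }
  }, 5000);
}

// WebSocket: the client connects once; the server pushes only when
// there is actually something new to deliver.
function startWebSocket(): void {
  const socket = new WebSocket("wss://example.com/live"); // placeholder URL
  socket.onmessage = (event: MessageEvent) => {
    render(JSON.parse(event.data));
  };
}

// Placeholder for whatever the app does with fresh data.
function render(data: unknown): void {
  console.log("new data:", data);
}
```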

For pure bandwidth usage, the details depend on the specific circumstances. An HTTP polling request has to establish a TCP connection and acknowledge that connection (even more data if it's an SSL/TLS connection), then it has to send the HTTP request, including any relevant cookies for that host and all the usual headers along with the GET URL. The server then has to send a response. And most of the time, all of that polling overhead is pure wasted bandwidth, because there is nothing new to report.
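As a rough illustration, here is a sketch that counts the bytes one "nothing changed" poll costs at the HTTP layer alone. The request and response texts below are invented but typical in size; TCP/TLS setup and acknowledgements would add more on top.

```typescript
// A representative (made-up) poll request; real requests often carry
// more headers and larger cookies than this.
const pollRequest =
  "GET /api/updates?since=1700000000 HTTP/1.1\r\n" +
  "Host: app.example.com\r\n" +
  "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36\r\n" +
  "Accept: application/json\r\n" +
  "Cookie: session=abc123; csrf=def456\r\n" +
  "\r\n";

// Even a minimal "nothing changed" response needs a status line and headers.
const emptyResponse =
  "HTTP/1.1 304 Not Modified\r\n" +
  "Date: Tue, 01 Jan 2030 00:00:00 GMT\r\n" +
  "Cache-Control: no-cache\r\n" +
  "\r\n";

const requestBytes = new TextEncoder().encode(pollRequest).length;
const responseBytes = new TextEncoder().encode(emptyResponse).length;
console.log(`~${requestBytes + responseBytes} bytes per poll that reports nothing`);
```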



A webSocket starts with a simple HTTP request and then upgrades the connection to the webSocket protocol. The webSocket connection itself doesn't need to send any data at all until the server has something to send to the client, at which point the server just sends a packet. Sending the data itself also has much lower overhead: no cookies, no headers, etc... just the data. Even if you use keep-alive pings on the webSocket, that amount of data is incredibly tiny compared to the overhead of an HTTP request.
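Here is a sketch of the server side of that idea, assuming Node.js with the `ws` npm package; `onDataChanged` is a hypothetical hook for whatever produces new data in your app. The HTTP upgrade happens once per client, and after that the server only writes when it actually has something to say.

```typescript
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

// After the one-time HTTP upgrade, each connected client just sits here,
// costing (almost) no bandwidth until we have data to push.
wss.on("connection", () => {
  console.log("client connected");
});

// Hypothetical hook, called by the application whenever something changes.
function onDataChanged(update: object): void {
  const payload = JSON.stringify(update);
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(payload); // just the framed payload: no cookies, no headers
    }
  }
}
```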

So, exactly how much bandwidth you save depends on the details of the circumstances. If it takes 50 poll requests before any of them finds a payload, then every one of those HTTP requests is completely wasted compared to the webSocket scenario. The difference in bandwidth can be huge.
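Putting rough (assumed) numbers on that 50-poll example makes the gap visible. The per-poll size below is an estimate, not a measurement; real figures vary with headers, cookies, and TLS.

```typescript
// Assumed sizes: adjust for your own headers, cookies, and TLS overhead.
const bytesPerEmptyPoll = 900; // request + response headers, no payload
const emptyPolls = 50;

// A masked, empty websocket ping frame is about 6 bytes of framing
// (2-byte header plus a 4-byte client mask); the pong reply is 2 bytes.
const bytesPerKeepAlive = 6;
const keepAlives = 50; // one keep-alive per would-be poll interval

console.log(`polling:   ~${bytesPerEmptyPoll * emptyPolls} bytes wasted`); // ~45,000
console.log(`websocket: ~${bytesPerKeepAlive * keepAlives} bytes of pings`); // ~300
```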

You asked about an application that only requires periodic checks. Every periodic check that comes back with no data is wasted bandwidth. That's the whole idea behind webSocket: you consume no bandwidth (or close to no bandwidth) when there is no data to send.



I believe @jfriend00 answered the question very clearly, but I want to add one thought.

If you consider a worst-case (and improbable) scenario for Websockets vs. HTTP, you will clearly see that a Websocket connection will always have the advantage with regard to bandwidth (and probably all-round performance).

Here is the worst-case scenario for Websockets vs. HTTP:

  • Your code uses the Websocket connection exactly the same way it would use HTTP requests: to poll.

    (This is not what you would actually do, I know, but it is the worst-case scenario.)

  • Every poll event returns a positive response, meaning that none of the HTTP requests would have been made in vain.

This is the worst situation for Websockets, which are designed for pushing data, not for being polled... and even then, Websockets will save you both bandwidth and CPU cycles.

Seriously, even ignoring the DNS query (performed by the client, so you might not care about it) and the TCP/IP handshake (which is expensive for both the client and the server), a Websocket connection is still more efficient and less costly.

I'll explain:



Each HTTP request includes a lot of data, such as cookies and other headers. In many cases, each HTTP request is also subject to client authentication... data is rarely handed out to just anyone.

This means that HTTP connections transfer all of this data (and possibly perform client authentication) once per request. [Stateless]

Websocket connections, however, are stateful. The data is sent only once (instead of with every request), and client authentication occurs only during the Websocket connection negotiation.

This means that a Websocket connection transfers the same data (and possibly performs client authentication) once per connection (that is, once for all of the polls).
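A sketch of that difference, again assuming Node.js with the `ws` package; `isValidToken` is a hypothetical validation function standing in for whatever your auth really is. The credential is checked once, on the upgrade request, and every later message rides on the already-authenticated connection, whereas an HTTP polling setup would repeat this work on every request.

```typescript
import { WebSocketServer } from "ws";
import { IncomingMessage } from "http";

const wss = new WebSocketServer({ port: 8080 });

// Hypothetical token check; with HTTP polling this would run on EVERY request.
function isValidToken(token: string | undefined): boolean {
  return token === "expected-secret"; // placeholder logic
}

wss.on("connection", (socket, request: IncomingMessage) => {
  // Authenticate once, at connection time, using the upgrade request headers.
  const token = request.headers["authorization"];
  if (!isValidToken(typeof token === "string" ? token : undefined)) {
    socket.close(4401, "unauthorized"); // application-defined close code
    return;
  }

  // Every subsequent message arrives on an already-authenticated connection;
  // no cookies or auth headers are re-sent, and no re-validation is needed.
  socket.on("message", (data) => {
    socket.send(`echo: ${data}`);
  });
});
```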

So even in this worst-case scenario, where polling always returns data and Websockets are used for polling rather than for pushing data, Websockets will still save your server both bandwidth and other resources (e.g. CPU time)...

I think the answer to your question, simply put, is "never": Websockets are never less efficient than polling.







