How HTTP/2 Solves the Head-of-Line (HOL) Blocking Problem

How does HTTP/2 solve the head-of-line (HOL) blocking problem?

This issue is very common in HTTP/1.1, but I have heard that HTTP/2 fixes it. Can someone explain exactly how HTTP/2 fixed the problem?



2 answers


Head-of-line blocking at the HTTP level

Head-of-line blocking in HTTP terms usually refers to the fact that each browser/client has a limited number of connections to a server, and a new request over one of those connections has to wait for the requests already queued on it to complete before it can be sent.

The request at the head of the line blocks the ones queued behind it.

HTTP/2 solves this by introducing multiplexing, so that you can issue new requests over the same connection without waiting for previous ones to complete.
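As a rough illustration (a toy sketch in Python, not a real HTTP/2 client; the stream names and delays are made up), issuing two requests concurrently over one logical connection means a slow response no longer holds up a fast one:

```python
import asyncio

# Hypothetical per-request server delays: request "a" is slow, "b" is fast.
RESPONSE_DELAYS = {"a": 0.2, "b": 0.01}

async def fetch(stream_id: str, log: list) -> None:
    # In HTTP/2 each request rides its own stream on the same connection,
    # so awaiting one response does not block the others.
    await asyncio.sleep(RESPONSE_DELAYS[stream_id])
    log.append(stream_id)

async def main() -> list:
    log: list = []
    # Both requests are in flight at once over "one connection".
    await asyncio.gather(fetch("a", log), fetch("b", log))
    return log

completed = asyncio.run(main())
print(completed)  # the fast response "b" finishes first: ['b', 'a']
```

On a single HTTP/1.1 connection the completion order would be forced to `['a', 'b']`, since "b" would have to wait behind "a".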



In theory, HTTP/1.1 pipelining also offered a way around HOL blocking, but it proved difficult and very error-prone to implement in practice. As a result, it has remained largely unused on the web to this day.

Head-of-line blocking at the TCP level

HTTP/2 still suffers from another kind of HOL blocking, at the TCP layer. One lost packet in the TCP stream makes all HTTP/2 streams wait until that packet is retransmitted and received. This kind of HOL blocking is addressed by the QUIC protocol.

QUIC is a "TCP-like" protocol implemented over UDP, in which each stream is independent, so that a lost packet only stalls the stream it belongs to, while the other streams can continue.
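The difference can be sketched with a toy delivery model in Python (an illustration only; real TCP and QUIC are far more involved). Packets are tagged with a hypothetical (stream, sequence) pair; "tcp" mode enforces one global delivery order, while "quic" mode orders packets only within each stream:

```python
# Toy model: packets carry (stream, seq). "tcp" enforces one global
# delivery order; "quic" orders packets only within each stream.
def deliver(packets, lost, mode):
    delivered, held = [], []
    for pkt in packets:
        if pkt == lost:
            held.append(pkt)      # dropped: arrives only after retransmit
        elif mode == "tcp" and held:
            held.append(pkt)      # global order: every stream stalls
        elif mode == "quic" and any(h[0] == pkt[0] for h in held):
            held.append(pkt)      # only the affected stream stalls
        else:
            delivered.append(pkt)
    delivered.extend(sorted(held))  # retransmit arrives; flush what was held
    return delivered

packets = [("A", 1), ("A", 2), ("B", 1), ("B", 2)]
tcp_order = deliver(packets, lost=("A", 2), mode="tcp")
quic_order = deliver(packets, lost=("A", 2), mode="quic")
print(tcp_order)   # stream B is stuck behind A's lost packet
print(quic_order)  # stream B keeps flowing while A waits for the retransmit
```

Under "tcp" the loss of `("A", 2)` holds back both of stream B's packets; under "quic" stream B is delivered in full before A's retransmission arrives.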



HTTP/1 is basically a request-and-response protocol: the browser requests a resource (an HTML page, a CSS file, an image... whatever) and then waits for the response. During this time the connection can do nothing else: it blocks, waiting for that response.

HTTP/1.1 did introduce the concept of pipelining, so that you could send further requests while waiting. That should improve the situation, since the requests go out without added latency and the server can start processing them earlier. The responses must still come back in order, however, so it is not a true multi-request protocol, but it was a good improvement (when it worked; see below). This introduced a head-of-line blocking (HOLB) problem on the connection: if the first request takes a long time (for example, it needs to do a database lookup and then some other heavy processing to build the page), then all the other responses have to queue up behind it, even if they are ready to go. In fact, to be precise, HOLB was a problem even without pipelining, since the browser had to queue requests anyway until the connection was free to send them; pipelining just made the problem more apparent at the connection level.
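A toy model of that queueing (a sketch with made-up processing times, not a real HTTP implementation): all pipelined requests go out at once, but each response can only be returned after the previous one, so a slow first response delays responses that are already ready:

```python
# processing_times[i] = hypothetical server time to produce response i.
# Responses must be returned in request order, as HTTP/1.1 pipelining requires.
def pipeline(processing_times):
    returned, clock = [], 0.0
    for i, ready_at in enumerate(processing_times):
        # Response i cannot go out before response i-1, even if ready sooner.
        clock = max(clock, ready_at)
        returned.append((i, clock))
    return returned

# A slow first request (5.0s) holds back two responses that were ready at 0.1s.
print(pipeline([5.0, 0.1, 0.1]))  # [(0, 5.0), (1, 5.0), (2, 5.0)]
```

Reorder the requests and the fast responses come back immediately, which is exactly the reordering freedom pipelining lacks.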

On top of this, pipelining in HTTP/1 was never well supported: it is difficult to implement correctly and can cause security issues. So even without the HOLB issue, it was still not that useful.



To get around all of this, HTTP/1 uses multiple connections to the server (usually 6-8), so that requests can be sent in parallel. This takes effort and resources on both the client and the server side to set up and manage. Also, TCP connections are fairly inefficient for various reasons and take time to reach maximum efficiency; by that point you have probably done the hard work and no longer need the extra connections.
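The connection pool behaves roughly like this sketch (pool size and request durations are made up; this models only the queueing, not real sockets):

```python
import heapq

# Toy model of a browser's fixed connection pool: a request can only
# start once one of the pool's connections is free.
def start_times(durations, pool_size=6):
    free_at = [0.0] * pool_size      # when each connection becomes free
    heapq.heapify(free_at)
    starts = []
    for d in durations:
        t = heapq.heappop(free_at)   # earliest available connection
        starts.append(t)
        heapq.heappush(free_at, t + d)
    return starts

# Seven equal 1.0s requests: six start immediately, the seventh must wait.
print(start_times([1.0] * 7))  # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
```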

HTTP/2, on the other hand, has the concept of bidirectional, multiplexed streams baked in from the start. I explain in detail what they are here: What does muxing do in HTTP/2. This removes the blocking nature of HTTP/1 requests, introduces a much better, fully defined, fully supported version of pipelining, and even allows parts of one response to be sent intermingled with other responses. All of this together solves HOLB, or, more accurately, prevents it from ever being a problem.
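That intermingling of partial responses can be sketched like this (a toy framing model in Python; the frame size and stream ids are made up, and real HTTP/2 frames carry far more structure):

```python
from itertools import zip_longest

# Toy framing: split each response into fixed-size "frames" tagged with a
# stream id, then interleave frames from different streams on one connection.
def frames(stream_id, body, size=4):
    return [(stream_id, body[i:i + size]) for i in range(0, len(body), size)]

def interleave(*streams):
    wire = []
    for group in zip_longest(*streams):
        wire.extend(f for f in group if f is not None)
    return wire

big = frames(1, "x" * 12)   # three frames for a large response on stream 1
small = frames(3, "ok")     # one frame for a small response on stream 3
wire = interleave(big, small)
print(wire)  # [(1, 'xxxx'), (3, 'ok'), (1, 'xxxx'), (1, 'xxxx')]
```

The small response's only frame goes out right after the first frame of the large one, so it completes long before the large response finishes, instead of waiting behind it.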

It should be noted that while HTTP/2 solves HTTP-level HOLB, it is still built on TCP, which has its own TCP HOLB problem, and that can actually be worse under HTTP/2, since there is only one connection! If one TCP packet is lost, the TCP connection must request a retransmission and wait for that packet to arrive before it can process any subsequent TCP packets, even if those packets belong to other HTTP/2 streams that could, in theory, be processed in the meantime (as happens with truly separate HTTP/1 connections). Google is experimenting with running HTTP/2 over non-guaranteed UDP rather than guaranteed TCP in the QUIC protocol to fix this issue, and this is in the process of being adopted as a web standard (much as SPDY, originally Google's implementation, was standardized into HTTP/2).







