HTTP/2 vs HTTP/1.1

HTTP/1.1 assumes that a TCP connection should be kept open unless explicitly told to close.
HTTP/2 reduces latency by using multiplexing, header compression, stream prioritization and server push.
An application-level API still creates messages in the conventional HTTP formats, but the underlying layer converts each message into binary (binary framing).
HTTP/2 establishes a single connection between the two machines. Within this connection there are multiple streams of data. Each stream consists of one or more messages in the familiar request/response format. Finally, each of these messages is split into smaller units called frames.
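
To make the framing concrete, here is a minimal Go sketch of the 9-byte frame header defined by RFC 7540 (24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream identifier). The struct and field names are illustrative, not part of any library.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frameHeader mirrors the 9-byte HTTP/2 frame header from RFC 7540:
// 24-bit payload length, 8-bit type, 8-bit flags, one reserved bit,
// and a 31-bit stream identifier.
type frameHeader struct {
	Length   uint32 // payload length (24 bits on the wire)
	Type     uint8  // e.g. 0x0 DATA, 0x1 HEADERS, 0x2 PRIORITY
	Flags    uint8  // e.g. END_STREAM, END_HEADERS
	StreamID uint32 // 31 bits; 0 refers to the connection itself
}

// encode serializes the header into its 9-byte wire format.
func (h frameHeader) encode() []byte {
	buf := make([]byte, 9)
	buf[0] = byte(h.Length >> 16)
	buf[1] = byte(h.Length >> 8)
	buf[2] = byte(h.Length)
	buf[3] = h.Type
	buf[4] = h.Flags
	binary.BigEndian.PutUint32(buf[5:], h.StreamID&0x7fffffff)
	return buf
}

func main() {
	// A HEADERS frame on stream 1 carrying 42 bytes of payload,
	// with the END_HEADERS flag (0x4) set.
	h := frameHeader{Length: 42, Type: 0x1, Flags: 0x4, StreamID: 1}
	fmt.Printf("% x\n", h.encode())
}
```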
Multiplexing: several requests and responses can run in parallel over a single TCP connection without blocking each other. This reduces processor and memory usage and the number of TCP and TLS handshakes.
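
As a rough sketch of multiplexing from the client side (assuming the target server speaks HTTP/2; example.com and the paths are placeholders), Go's default transport negotiates HTTP/2 over TLS, so concurrent requests to the same host share one TCP connection, each on its own stream:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// Go's default transport negotiates HTTP/2 via ALPN on TLS connections,
	// so these concurrent requests can share a single TCP connection.
	client := &http.Client{}
	paths := []string{"/style.css", "/app.js", "/logo.png"} // placeholder resources

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			resp, err := client.Get("https://example.com" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				return
			}
			defer resp.Body.Close()
			// resp.Proto reports "HTTP/2.0" when the upgrade succeeded.
			fmt.Println(path, resp.Proto, resp.Status)
		}(p)
	}
	wg.Wait()
}
```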
The stream prioritization feature allows developers to prioritize requests by assigning each stream a weight between 1 and 256; a higher number indicates a higher priority.
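
Priorities are usually set by the browser rather than by application code, but as a low-level illustration, the golang.org/x/net/http2 package exposes a Framer that can write a PRIORITY frame directly. This is only a sketch of what such a frame contains; note that the wire-level weight field is zero-indexed, so 0-255 corresponds to weights 1-256.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	// Write a PRIORITY frame for stream 5 into a buffer, giving it the
	// maximum weight.
	var buf bytes.Buffer
	framer := http2.NewFramer(&buf, nil)

	err := framer.WritePriority(5, http2.PriorityParam{
		StreamDep: 0,     // depend on the root of the priority tree
		Exclusive: false, // share the parent with sibling streams
		Weight:    255,   // 0-255 on the wire maps to weights 1-256
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("PRIORITY frame bytes: % x\n", buf.Bytes())
}
```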
Additionally, the server can send a resource to the client along with the requested HTML page, providing the resource before the client asks for it. This process is called server push.
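
For example, a minimal server push sketch with Go's net/http (cert.pem, key.pem and /style.css are placeholders; Go enables HTTP/2 automatically for TLS servers): on an HTTP/2 connection the ResponseWriter also implements http.Pusher, so the handler can push the stylesheet before the browser parses the HTML and requests it.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// On HTTP/2 connections the ResponseWriter implements http.Pusher.
		if pusher, ok := w.(http.Pusher); ok {
			if err := pusher.Push("/style.css", nil); err != nil {
				log.Println("push failed:", err)
			}
		}
		fmt.Fprint(w, `<html><head><link rel="stylesheet" href="/style.css"></head><body>hello</body></html>`)
	})

	http.HandleFunc("/style.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		fmt.Fprint(w, "body { font-family: sans-serif; }")
	})

	// HTTP/2 (and therefore push) requires TLS here; the cert and key
	// file names are placeholders.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```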

Read more: https://www.digitalocean.com/community/tutorials/http-1-1-vs-http-2-what-s-the-difference
