HTTP/2 Allows Multiple Requests to Share the Same Connection
Hello and welcome back! Today, we’ll discuss a little optimization that HTTP/2 does.
HTTP/1
In HTTP/1.0, every request a client sends to a server needs its own connection (i.e. 2 sockets, one living in the client machine and another in the server machine).
HTTP/1.1 improved on this with keep-alive, which lets a connection be reused, but only one request can be in flight on it at a time. So when a client wants to send several requests at once, it still has to open new connections, and every extra connection means 2 new sockets being created (one in the client and one in the server).
HTTP/2
HTTP/2 does away with this. It allows multiple requests (from the same client to the same server) to travel over the same connection. It does this with a technique called multiplexing.
Multiplexing is basically figuring out a way to merge several signals/message-streams such that they travel on one line/pipe, and then at the receiving end, separating them back into their original signals/message-streams.
Let’s take a look at how HTTP/2 achieves this, but remember that the same (or a similar) technique is used in other multiplexing scenarios too.
Let’s say you (the client) have 3 requests (request1, request2, request3) to send to the server. First, a connection is created, and with it 2 sockets (one at the client, one at the server). You then break each request down into smaller chunks called “frames”. Each frame has a streamID field, which identifies which request the frame belongs to. For example, the frames of request1 will have their streamID set to 1, the frames of request2 will have their streamID set to 2, and so on. (In real HTTP/2, client-initiated streams actually use odd-numbered IDs: 1, 3, 5, and so on; sequential numbers just keep the example simple.)
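To make this concrete, here is a tiny Python sketch of the sending side. It is purely illustrative: the chunk size, the `make_frames` helper, and the round-robin interleaving are made up for the example and are not the real HTTP/2 frame format.

```python
from itertools import zip_longest

# Toy sketch of the sending side (illustrative only, not the real HTTP/2
# frame format). Each request is split into small "frames", and each frame
# is tagged with the streamID of the request it belongs to.

def make_frames(stream_id, payload, chunk_size=4):
    """Split one request's bytes into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + chunk_size])
            for i in range(0, len(payload), chunk_size)]

requests = {1: b"GET /index", 2: b"GET /style.css", 3: b"GET /logo.png"}

# Frame every request, then interleave the frames round-robin so that
# frames from different requests share the one connection ("the wire").
per_stream = [make_frames(sid, body) for sid, body in requests.items()]
wire = [frame
        for batch in zip_longest(*per_stream)
        for frame in batch if frame is not None]

for stream_id, chunk in wire:
    print(stream_id, chunk)
# 1 b'GET '
# 2 b'GET '
# 3 b'GET '
# 1 b'/ind'
# ... and so on: frames from all three requests interleaved on one pipe
```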
You then send all these frames down the connection. Here is what the receiving end does:
- It receives all the frames for all the requests.
- Each frame is labeled with the request it belongs to.
- It puts all the frames of request1 together, in order.
- It puts all the frames of request2 together, in order.
- And so on, for any remaining requests.
- It then processes each request just as if it had arrived on its own connection! (A small sketch of this reassembly follows the list.)
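Continuing the toy sketch from above, the receiving end might undo the multiplexing like this (again purely illustrative; `wire` is the interleaved frame list built in the previous snippet):

```python
from collections import defaultdict

# Toy sketch of the receiving side (illustrative only). `wire` is the
# interleaved list of (stream_id, chunk) frames built in the previous
# snippet. TCP delivers them in the order they were sent, so appending
# chunks per stream preserves each request's original order.

def demultiplex(frames):
    streams = defaultdict(list)
    for stream_id, chunk in frames:        # frames arrive interleaved
        streams[stream_id].append(chunk)   # sort them into per-request buckets
    # glue each bucket back together to recover the original requests
    return {sid: b"".join(chunks) for sid, chunks in streams.items()}

reassembled = demultiplex(wire)
print(reassembled)
# {1: b'GET /index', 2: b'GET /style.css', 3: b'GET /logo.png'}
# Each reassembled request can now be processed just as if it had
# arrived on its own connection.
```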
As the receiver finishes processing each request, it sends back a response. Each response is also broken down into frames, tagged with the streamID of the request it answers, and sent back to the client in the same manner, so the responses can share (and interleave on) the connection too!
Optimization Gain
Why do this? Well, it’s faster. Creating a connection is relatively expensive: two new sockets, a TCP handshake, and usually a TLS handshake on top of that. So once you have created a connection (i.e. the two sockets), you want to reuse it as much as possible. This is what HTTP/2 does.
Let’s think of this in a different, more general way. Think of batch processing. Generally, the reason you do batch processing is that the per-item overhead is high. Instead of paying that overhead every single time, for every single item, you pay the overhead once for a bunch of items.
HTTP/2 is doing the same thing. It pays the overhead of creating a connection once, and then uses that connection to send multiple requests.
Batch processing is a really simple way to optimize when you have a per-item overhead.
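Here is a quick back-of-envelope sketch of that saving. The numbers are invented purely for illustration: assume connection setup costs 50 ms and each request itself takes 10 ms.

```python
# Made-up numbers, purely to illustrate amortizing a per-item overhead.
setup_cost = 50   # creating a connection (sockets + handshakes), in ms
per_request = 10  # actually sending/handling one request, in ms
n = 3             # number of requests

one_connection_per_request = n * (setup_cost + per_request)  # 3 * 60 = 180 ms
one_shared_connection = setup_cost + n * per_request         # 50 + 30 =  80 ms
print(one_connection_per_request, one_shared_connection)     # 180 80
```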
That’s it for today! Have an awesome day!