In chatting with other developers, many can explain why HTTP/2 is something to be excited about, but are unsure how to optimize their applications to take advantage of it. This article should make that clearer.
HTTP/2 is here to make connections faster and to reduce latency. Like all improvements to hit the underlying protocols of the world wide web, it’s slowly oozing its way to ubiquity. Most web-scale sites and delivery networks already have HTTP/2 baked in. Fly is no exception; we fully support HTTP/2. Let’s dive in…
We introduced the basics of HTTP/2 when we announced support for HTTP/2 Server Push. HTTP/2 is a leap over HTTP/1.1. For maximum, context-y goodness, you should have a handle on the following improvements. We won’t get too far down the rabbit hole…
Binary, Not Textual
Deep down, HTTP/2 is 0s and 1s. It changes how HTTP frames, or wraps, request and response data. HTTP/1.1 was textual, meaning parsers needed logic to decipher strings, handle whitespace, and guard against parsing ambiguities and the security issues they invite. It was nice to debug, a human might tell you. A machine, though, might tell you that it was heavy and inefficient. It might also just beep and whirrrr at you.
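To make “binary framing” concrete, here’s a short Python sketch (the function name is ours, not from any HTTP/2 library) that packs the 9-byte header every HTTP/2 frame begins with, per RFC 7540 §4.1:

```python
import struct

def frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    """Pack an HTTP/2 frame header: 24-bit payload length, 8-bit type,
    8-bit flags, then a reserved bit plus a 31-bit stream identifier."""
    return struct.pack(">I", length)[1:] + struct.pack(
        ">BBI", frame_type, flags, stream_id & 0x7FFFFFFF
    )

# A HEADERS frame (type 0x1) with the END_HEADERS flag (0x4) on stream 1:
hdr = frame_header(16, 0x1, 0x4, 1)
assert len(hdr) == 9  # every HTTP/2 frame starts with exactly these 9 bytes
```

A parser on the other side reads nine fixed bytes and knows exactly what follows; no whitespace rules, no line splitting. That’s what makes binary framing cheap for machines.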
One TCP Connection
HTTP/2 provides a single TCP connection. Under the previous protocol, browsers would allow, on average, 6 connections from a single client to a host; the exact number differed from browser to browser and version to version. Each connection acted like a pipeline, sending data in a linear 1-2-3 format. It was prone to congestion.
Within one TCP connection, HTTP/2 can establish multiple, parallelized data-streams between client and server. This is a vast speed improvement over HTTP/1.1 pipelining. More assets can be sent at a time, in parallel, in both directions.
Compressed Headers

Old HTTP headers could grow to around 1kb of overhead per transfer once you include cookies. Web servers limit HTTP header size to varying degrees: Apache caps a header at 8190 bytes, while Nginx allows between 4000 and 8000 bytes depending on configuration. A single cookie can be up to 4096 bytes. This means that a request under the old method could be as large as 12kb! HTTP/2 improves compression, shrinking both the average and the potential size of an HTTP header.
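The arithmetic behind that worst case, using the limits quoted above (real limits depend on server configuration):

```python
# Rough worst case for an uncompressed HTTP/1.1 request header.
APACHE_HEADER_LIMIT = 8190  # bytes; Apache's default per-header limit
MAX_COOKIE_SIZE = 4096      # bytes; a common per-cookie ceiling

worst_case = APACHE_HEADER_LIMIT + MAX_COOKIE_SIZE
print(worst_case)  # 12286 bytes, i.e. roughly the 12kb figure above
```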
The client and server now share a common list of headers. The list is encoded with numeric labels to eliminate repetition and character bloat; the labels act as an index for quick associations. Within an active connection, a field like user-agent is unlikely to change, while :path will change with every request. By assigning a numerical value to each field, a follow-up request can say, in effect: “field 29 has a new value; everything else is as before,” instead of retransmitting the full set of headers.
This is a superficial introduction; HTTP/2’s header compression (HPACK) pairs this indexing with Huffman coding, making it an efficient, lossless method of compression compared to a text-based header. If you’d like to plunge into the greater depths, here’s a brilliant video by Tom Scott about the algorithm and what it does to header sizes.
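HPACK’s actual Huffman table is fixed by RFC 7541, but the principle is easy to demonstrate. This toy Python sketch (ours, not HPACK) builds a Huffman code for a sample header string and compares the bit count against plain 8-bit text:

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Build a Huffman code table: frequent symbols get shorter bit-strings."""
    heap = [(weight, i, {sym: ""})
            for i, (sym, weight) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)  # unique tiebreaker so tuple comparison never reaches the dicts
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

header = "accept-encoding: gzip, deflate"
codes = huffman_codes(header)
compressed_bits = sum(len(codes[ch]) for ch in header)
print(compressed_bits, "bits, versus", len(header) * 8, "bits as plain 8-bit text")
```

The compressed form is always at least as small, and usually much smaller, because common characters get the short bit-strings.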
Brain Softening: Engage
To mitigate the bottlenecks in HTTP/1.1, developers concocted clever work-arounds. Yesterday’s work-arounds are today’s anti-patterns. To squeeze the most out of HTTP/2, we’ll need to step away from the old paradigm and adjust a fundamental axiom.
The old axiom: create fewer, heavier, generalized assets.
The new axiom: create many, lighter, specialized assets.
HTTP/1.1 conditioned us to treat connections as precious; in the best case, you’d have 6 TCP connections to your server. If you had 48 files, that’s 8 round-trips to receive all of your assets: sub-optimal performance! And each new connection began with a latency-bloated SSL handshake. To optimize around this, there were two methods.
The first was concatenation: correct-at-the-time logic had us bundling many files into fewer, larger ones. It was a wise way to deal with the limitations, and the behaviour persists in most modern build systems.
The second method was sharding your hostname. Now we believe in the One Hostname to Rule Them All philosophy but, previously, domain sharding was used to split assets up across differing CNAMEs. That way, you’d get 6 connections to onehostname.com, 6 to x.onehostname.com, 6 to y.onehostname.com, and so on. Splitting your assets up into various destination hostnames would, in theory, allow more connections and, therefore, speed things up.
When we move away from these patterns, we can reap the benefits of HTTP/2. We should look at the model we’re building on the client as a dynamic, cumulative representation of smaller, neater assets instead of a mass of tangled, giant blobs.
We looked at what we no longer want to do, now let’s look at what we’d like to do. To bask in the glow of HTTP/2, we should be mindful of the following…
Caching, Cache Invalidation
If we’ve blob-ularized everything into one in-line page or ballooned a handful of monolithic assets, what happens when we make an adjustment? Even if we’ve changed one single bit, the entire asset needs to be regenerated and re-downloaded. That’s expensive! Caching should be efficient.
Microservices are popular because the idea of clean and separate concerns is attractive. This same lovely logic can be applied to your assets. On top of that, you can simplify your build process.
Consider a per-page script: you don’t need to send pricingWidget.js until someone navigates to your pricing page. If you change it, only that one small file needs to be re-fetched.
In this way, cache remains valuable. What hasn’t changed can stick around, what needs to change can be refreshed by the server or receive faster time-outs. You no longer need to have a build step concatenate your assets together. HTTP/1.1 and the scarcity of connections made this type of “assets for everything” thinking gluttonous – not so, with HTTP/2.
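One common way to keep caches valuable is content fingerprinting in the build step: embed a hash of the file’s bytes in its name, so an asset’s URL changes only when its content changes. A minimal sketch (the function name is ours) using Python’s hashlib:

```python
import hashlib

def fingerprint(filename: str, content: bytes) -> str:
    """Return a filename with a content hash embedded, e.g.
    pricingWidget.a1b2c3d4.js, so caches bust only when bytes change."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}"

old = fingerprint("pricingWidget.js", b"function price() { return 42; }")
new = fingerprint("pricingWidget.js", b"function price() { return 43; }")
assert old != new  # the edited file gets a new URL; everything else stays cached
```

Unchanged assets keep their URLs forever, so they can be cached with very long expiry times; only the edited file invalidates.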
Prioritization and Server Push
Upon first reading about HTTP/2’s Server Push, there was some concern: the server can push assets without a specific request? What about bandwidth limits?
What first seemed like a caveat soon revealed value. If desired, Push can be disabled, or the client can set a bandwidth limit. If your caching is modular enough, you can reduce bandwidth by ensuring only the code that is needed gets sent to the client. Sending and caching a giant blob for a user interaction that may never happen is much more taxing, both up-front with the initial load and over time as caches refresh.
Push applications are rife with potential. For example, if you determine the most likely “path” your users may take through your application, you can modularize and push your assets in advance.
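How you trigger a push depends on your server or delivery network; a common convention (supported by nginx’s http2_push_preload, among others) is to emit a Link: rel=preload response header that the HTTP/2 layer turns into pushes. A hypothetical sketch of a route-to-assets map:

```python
# Hypothetical map of routes to the assets a visitor will need there.
PUSH_MAP = {
    "/pricing": ["/js/pricingWidget.js", "/css/pricing.css"],
}

def as_type(asset: str) -> str:
    # Minimal mapping; a real app would also cover fonts, images, etc.
    return "style" if asset.endswith(".css") else "script"

def link_header(path: str) -> str:
    """Build a Link header; many HTTP/2 servers and proxies turn
    rel=preload hints into server pushes."""
    return ", ".join(
        f"<{a}>; rel=preload; as={as_type(a)}" for a in PUSH_MAP.get(path, [])
    )

print(link_header("/pricing"))
# </js/pricingWidget.js>; rel=preload; as=script, </css/pricing.css>; rel=preload; as=style
```

The map itself is where the “most likely path” analysis lives: measure where users go next, and push accordingly.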
Your users can travel an optimized path through your application, as if they were following a well-worn trail. The application can have consistent, high speeds and get better piece-by-piece, step-by-step. As a developer, this is something you can control via your HTTP/2 implementation or unlock within a service like Fly.
HTTP/2 is fascinating. The browser analyzes your site’s index along with your assets and dependencies, then assigns each asset a priority, expressed as a weight value, based on its type and context. Assets are requested from the server according to those weights. This should prevent your product image from loading after your company logo, and it enables the same “load what’s important” approach to development that disparate microservices offer.
Content Delivery Networks
One of the evergreen rules for optimizing application performance, as demonstrated by Ilya Grigorik, author of High Performance Browser Networking, is to serve content from locations as close to your users as possible; distance creates latency.
If you’re using a delivery network like Fly, HTTP/2 support is present at each point of presence. Instead of calibrating a single server at a single location, a delivery network lets you apply the caching and prioritization logic above to many points in a global network. And while the spec doesn’t technically require it, browsers only speak HTTP/2 over encrypted (SSL/TLS) connections; most delivery networks support that, too.
To reiterate, the overall goal of HTTP/2 is to reduce latency and speed up connections. A delivery network will help reduce latency by shortening each trip – a major benefit during the first trip, in particular, when the SSL handshake takes place and caching is established. Being able to control caching to a granular degree from your delivery network will extend the benefit into each and every user interaction.
Much of the deep-thinking required for HTTP/2 is done when the browser communicates with the host. Luckily, you don’t need to do too much to receive the juicy benefits of HTTP/2. If you can steer away from some comfortable old patterns and into the sunny new world of modularized, prioritized, and internationally cached assets, you’re in for some speedy times.
If you haven’t found a delivery network you like that supports HTTP/2 and granular caching, consider Fly: it acts as a reverse proxy and a global load balancer; it’s a CDN for developers.
Fly started when we wondered: “What would a programmable edge look like?” Developer workflows are a great fit for infrastructure like CDNs and optimization services. You should really see for yourself, though.