
Fly + HTTP/2: Server Push

HTTP/2 provides improvements to the current HTTP protocol, HTTP/1.1. It takes a major step forward in parallelism and compression. We'll dive into the details, then show you how you can use Fly to supercharge your applications with HTTP/2 Push.

Parallelism and Compression

HTTP/2 is an umbrella atop two fancy specifications. The first is Hypertext Transfer Protocol Version 2 (RFC 7540). The second is HPACK: Header Compression for HTTP/2 (RFC 7541). How does this protocol tag-team make things faster? Among many optimizations...

  • Binary instead of Textual: Deep down, it's 0s and 1s. Because HTTP/1.x is textual, parsers needed extra logic to decipher strings, handle whitespace, and guard against parsing and security issues. Text was nice to debug, but a clean binary stream provides better and more sane performance.
  • One TCP Connection: HTTP/2 needs only a single TCP connection per origin, rather than the handful of parallel connections browsers open under HTTP/1.x.
  • Multiplexing: Within that single TCP connection, HTTP/2 can carry multiple concurrent streams. This is a vast improvement over the "get-in-line" approach of HTTP/1.0 and a much tidier vision than the pipelining of HTTP/1.1.
  • Header Compression: This is where HPACK comes in. HPACK provides simple, deliberately inflexible structuring of HTTP headers. It eliminates redundant fields and mitigates security issues that crop up from misuse of compression.
  • Server Push: Pre-loading of static assets. Let's dig in!

Fly: Push It to the Limit

A typical request/response cycle that doesn't have any cache to work with looks like this:

→ A client makes a request to the server.

← Server sends client HTML.

→ Client parses the HTML into a DOM, then requests CSS, JavaScript, and images. This can be time consuming and can be blocked by misbehaving assets.

← Server sends client CSS, JavaScript, and images.

What Push introduces is the ability to pre-load these assets in the client's cache, anticipating their usage:

→ A client makes a request to the server.

← Server sends HTML to user; CSS, JavaScript, and images arrive in cache.

You can pre-load assets well in advance and avoid blocking that may interfere during the parsing of the DOM. If you know a user is going to wind up at a series of pages, during a sign-up or set-up process, for example, you can front-load the initial connection for more graceful progress. In short, your site will load faster; your users will have a smoother and faster overall experience. There are some caveats, though.

You need to be cautious when pushing assets to users; while it's quicker to send your assets in one batch, it's also more data. If your client is on a poor connection or keeping tabs on a bandwidth limit, they may not appreciate the abundant front-loads. To manage this, apply some discretion and only push assets you are confident will be accessed within a given session.

Now, on to the how! Primarily, you apply this feature by adding a Link header to your responses. A Link header with your assets included may look something like this:

Link: </css/application.css>; rel=preload; as=style, </js/widgets.js>; rel=preload; as=script, </img/banner.png>; rel=preload; as=image

There are a few things in this example to make note of...

First, the as= attribute denotes which asset type the file is; this needs to be accurate. For the full list of asset types and the as= values they coincide with, see the W3C preload specification. Notables include: script, image, style, audio, track, video.
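As a sketch of how you might pick the right as= value automatically, here is a small helper that infers it from a file's extension. The mapping and function name are my own illustration, not part of any specification:

```python
import os

# Hypothetical mapping from file extension to preload "as" value.
# Only a few common types are covered; extend it for your own assets.
AS_TYPES = {
    ".css": "style",
    ".js": "script",
    ".png": "image",
    ".jpg": "image",
    ".mp3": "audio",
    ".vtt": "track",
    ".mp4": "video",
}

def as_type(path):
    """Return the preload 'as' value for an asset path, or None if unknown."""
    _, ext = os.path.splitext(path)
    return AS_TYPES.get(ext.lower())

print(as_type("/css/application.css"))  # style
print(as_type("/js/widgets.js"))        # script
```

Guessing the type from the extension is convenient, but if your build pipeline already knows each asset's type, prefer that over inference.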

The second thing is that assets are separated by commas, and the fields within each asset by semi-colons.

Finally, each rel= attribute is given the directive preload. You may have used something similar before, like preconnect for assets stored on a CDN. preconnect establishes an early connection to an external host, performing the DNS lookup and TCP/TLS handshake ahead of time to reduce latency when an asset is actually required. preload is the push directive for our local, as opposed to CDN, assets.
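Putting those three rules together, here is a sketch of a helper (the function name is my own, not a standard API) that assembles the exact Link header shown above from a list of assets:

```python
def build_link_header(assets):
    """Build a Link header for HTTP/2 push.

    assets: list of (path, as_type) tuples, e.g. ("/css/application.css", "style").
    Each asset becomes "<path>; rel=preload; as=<type>", with fields separated
    by semi-colons and assets separated by commas.
    """
    return ", ".join(
        f"<{path}>; rel=preload; as={kind}" for path, kind in assets
    )

header = build_link_header([
    ("/css/application.css", "style"),
    ("/js/widgets.js", "script"),
    ("/img/banner.png", "image"),
])
print(header)
# </css/application.css>; rel=preload; as=style, </js/widgets.js>; rel=preload; as=script, </img/banner.png>; rel=preload; as=image
```

You would then attach this string as the Link header on your HTML response; the proxy in front of your app (Fly, in this article's case) can read it and push the listed assets.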


HTTP/2 push promises tantalizing improvements to the performance of today's demanding web applications. Between compressed headers and pushed assets multiplexed over a single TCP connection, your HTTP requests are set to get speedy... if you have a server that can accommodate it, of course!

With Fly.io, our global network of smart edge-proxies is ready to provide you with the goodness of HTTP/2. Fly started when we wondered, "what would a programmable edge look like?" Developer workflows work great for infrastructure like CDNs and optimization services. You should really see for yourself, though.

Kellen Evan Person

A polite, forest-dwelling Canadian who enjoys writing and nature. He's spent nearly two decades building web applications and strives to keep development fun and light-hearted.

North Vancouver, Canada