
Load Balancing HTTPS with Let's Encrypt and HAProxy

Let's Encrypt wants to encrypt the World Wide Web. We can get on board with that vision; an encrypted internet is safer for everyone. Through their Automatic Certificate Management Environment (ACME) protocol, Let's Encrypt makes it much easier to obtain a browser-trusted TLS/HTTPS certificate.

In this article, we'll show you how to set up an HAProxy load balancer with an automatically renewing Let's Encrypt TLS/HTTPS certificate.


  • Take a look into what Let's Encrypt is.
  • Configure an HAProxy load balancer with HTTPS using an automatically renewing Let's Encrypt certificate.
  • Demonstrate the same configuration within Fly.io.

Right. Onward!

Free Encryption for All!

Let's Encrypt is a non-profit maintained by the Internet Security Research Group (ISRG), which is a consortium of industry professionals based out of California. The functionality of Let's Encrypt can be broken into two parts:

  1. Domain Validation.
  2. Certificate Issuance and Revocation.

As a result of performing these functions, they are considered a global Certificate Authority (CA); the best part is: they're a free global Certificate Authority. What separates Let's Encrypt from other Certificate Authorities is that they provide their service through a client that runs alongside your servers.

Before Let's Encrypt arrived, you would generate your own RSA keys, purchase a certificate from a Certificate Authority, and install it on your servers; you would then need to maintain and renew it yourself.

The end result of the Let's Encrypt service is encrypted HTTPS traffic between your visitors and your web server. To best explain what it does and how it works, we will cruise through a practical example.

If you're foggy on why HTTPS is important, check out our Fly Fundamentals: HTTPS article.

Let's Encrypt HAProxy

To set this up, here's what we'll need:

  • A private web server.
  • Your own domain name.

Your server can be hosted anywhere; this guide will use an Ubuntu 16.04 DigitalOcean VPS.

Domain Validation needs to take place in order for HTTPS to work. When your visitor connects to your application - for example wizardry.io - HTTPS verifies that they are, in fact, reaching wizardry.io and not a malicious actor in-between.

To validate our domain, we need to set up an A Record that ties our domain to the public IP address of our server running HAProxy.
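In zone-file form, such a record might look like the line below - wizardry.io is our example domain, and the IP address is a documentation placeholder; use the real public IP of your HAProxy server:

```
wizardry.io.    300    IN    A    203.0.113.10
```

Most DNS hosts let you enter the same thing through a web form: the hostname, the record type (A), and the IP address.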

Let's Encrypt provides a utility called certbot to help configure our free certificate. Because certbot runs on the server whose IP address your domain points to, it can prove ownership of the domain on your behalf. certbot will also simplify certificate renewals. Let's grab it.

sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot 

Neat! We haven't installed HAProxy yet, but we can still check out what certbot can do...

goodroot@wizardryio:~# certbot help

  certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...

Certbot can obtain and install HTTPS/TLS/SSL certificates.  By default, it will attempt to use a webserver both for obtaining and installing the cert.


That sounds powerful! Let's do some preparations before booting up HAProxy.

sudo mkdir -p /etc/haproxy/ssl

We've created a place where our certificates will live; we should secure it.

sudo chmod -R go-rwx /etc/haproxy/ssl

Great! Now, let's see what certbot can do.

certbot certonly --standalone

We're running certbot in standalone mode. This briefly spins up a temporary web server on port :80; Let's Encrypt connects to it to verify that you control the domain, then generates the certificates. You'll receive a prompt to enter your domain. If you don't have your A Record set up, it will fail ( :( ):

goodroot@wizardryio:/etc/haproxy/ssl# certbot certonly --standalone

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Starting new HTTPS connection (1): acme-v01.api.letsencrypt.org

Please enter in your domain name(s) (comma and/or space separated)  (Enter 'c' to cancel): wizardry.io

If it succeeded, you'll see:

 - Congratulations! Your certificate and chain have been saved at: /etc/letsencrypt/live/wizardry.io/fullchain.pem. 

Your cert will expire on 2017-07-12. To obtain a new or tweaked version of this certificate in the future, simply run certbot again. To non-interactively renew *all* of your certificates, run "certbot renew"

 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
   Donating to EFF: https://eff.org/donate-le

Great! Our certificates have been created, and the output points us to where they live. If we check out the parent directory of the fullchain.pem file, we see there is more than one file there:

goodroot@wizardryio:~ # ls /etc/letsencrypt/live/wizardry.io/
cert.pem  chain.pem  fullchain.pem  privkey.pem

Four files. HAProxy will need us to concatenate two of the four files into one .pem certificate in order to serve HTTPS:

DOMAIN='wizardry.io' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/ssl/$DOMAIN.pem'
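If you'd like to see what this concatenation produces without touching your live certificates, here's a self-contained sketch; openssl generates a throwaway self-signed certificate standing in for the real Let's Encrypt files, and the temp directory stands in for /etc/letsencrypt and /etc/haproxy/ssl:

```shell
# Throwaway working directory standing in for the real certificate paths.
workdir=$(mktemp -d)

# openssl stands in for certbot here: produce a certificate + private key.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$workdir/privkey.pem" -out "$workdir/fullchain.pem" \
  -days 1 -subj "/CN=wizardry.io" 2>/dev/null

# HAProxy expects the certificate chain followed by the private key,
# all in a single .pem file.
cat "$workdir/fullchain.pem" "$workdir/privkey.pem" > "$workdir/wizardry.io.pem"

# The combined file should now contain both parts.
grep -q "BEGIN CERTIFICATE" "$workdir/wizardry.io.pem" && echo "certificate present"
grep -q "PRIVATE KEY" "$workdir/wizardry.io.pem" && echo "key present"
```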

We now have one certificate to rule them all - until it expires in 90 days, of course. Later, we'll talk about how to automatically regenerate certificates every 60 days. For now, let's install HAProxy...

sudo apt-get install haproxy

HAProxy is an excellent tool to use as a load balancer. With it, you can distribute load across multiple backends, serve traffic from subfolders of your hostname, and - if you set up multiple HAProxy instances - provide high availability.

vi /etc/haproxy/haproxy.cfg # Or nano

We're going to leave the defaults alone for now and get into an example proxy configuration. Within haproxy.cfg, we want to create two frontends and a series of backends:

frontend www-http
  bind [PUBLIC_IP]:80
  reqadd X-Forwarded-Proto:\ http
  default_backend web-backend

For our frontend, replace [PUBLIC_IP] with the same IP address you used within your A Record - the public IP address of your HAProxy server. We've created this frontend to catch HTTP traffic over the typical port :80.

Next, let's create a frontend for HTTPS traffic.

frontend www-https
  bind [PUBLIC_IP]:443 ssl crt /etc/haproxy/ssl/wizardry.io.pem
  reqadd X-Forwarded-Proto:\ https
  default_backend web-backend

Our HTTPS frontend is, naturally, more sophisticated. Again, add the HAProxy [PUBLIC_IP], and replace the certificate path with your own: /etc/haproxy/ssl/[YOUR_DOMAIN].pem.

Both of our frontends direct traffic to web-backend. Let's configure it.

backend web-backend
  redirect scheme https if !{ ssl_fc }
  server web-1  [PRIVATE_IP]:80 check
  server web-2  [PRIVATE_IP]:80 check

Our backends are our web servers: our applications, blogs, documentation, services. Backends can be many things; typically, a backend receives a request and provides a response. If, for example, we had two other DigitalOcean VPSes running our application on Node.js, we'd put their private IPs here; these servers are considered "upstream".

HTTPS traffic is terminated by HAProxy, and we specify which upstream servers to send the data to. Note that upstream data travels over :80 unencrypted! This is a common way in which HTTPS is implemented.
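Assembled, the proxy portion of our haproxy.cfg looks like this:

```
frontend www-http
  bind [PUBLIC_IP]:80
  reqadd X-Forwarded-Proto:\ http
  default_backend web-backend

frontend www-https
  bind [PUBLIC_IP]:443 ssl crt /etc/haproxy/ssl/wizardry.io.pem
  reqadd X-Forwarded-Proto:\ https
  default_backend web-backend

backend web-backend
  redirect scheme https if !{ ssl_fc }
  server web-1  [PRIVATE_IP]:80 check
  server web-2  [PRIVATE_IP]:80 check
```

Before reloading the service, you can check the file for mistakes with haproxy -c -f /etc/haproxy/haproxy.cfg.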

Before going into automatic certificate renewals, let's adjust a default setting. In our frontends, we attached an X-Forwarded-Proto header to each request so that our upstream servers can tell whether the original request arrived over HTTP or HTTPS. It's also useful for them to know the client's original IP address - something HAProxy doesn't pass along by default.

Within /etc/haproxy/haproxy.cfg, under defaults, add:

option forwardfor

This attaches an X-Forwarded-For header carrying the client's IP address. Between the two headers, our backends know both where a request came from and whether it was encrypted, and can handle each case appropriately.
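As a hypothetical sketch of what a backend does with this information: a CGI-style script upstream of HAProxy would see the X-Forwarded-Proto header as the HTTP_X_FORWARDED_PROTO environment variable. The function name and messages below are ours, for illustration only:

```shell
# Illustrative helper: branch on the X-Forwarded-Proto header that
# HAProxy attached, as exposed to a CGI-style backend script.
original_scheme() {
  if [ "$HTTP_X_FORWARDED_PROTO" = "https" ]; then
    echo "original request was encrypted"
  else
    echo "original request was plain HTTP"
  fi
}

# Simulate a request arriving through each of our two frontends:
HTTP_X_FORWARDED_PROTO="https" original_scheme
HTTP_X_FORWARDED_PROTO="http" original_scheme
```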

Automagical Renewals

In a simple configuration like this where you have a single domain, one combined certificate, and a single instance of HAProxy, renewal is relatively painless. Recall earlier we needed to combine the fullchain.pem and the privkey.pem from Let's Encrypt into a single .pem file:

DOMAIN='wizardry.io' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/ssl/$DOMAIN.pem'

We'll need to run this command each time we generate new certificates. Solutions can become much more complex, but for now we simply want to turn this into a small shell script. Let's put it within /etc/haproxy/ssl/ and call it renewal.sh.

vi /etc/haproxy/ssl/renewal.sh

Now, enter:

#!/bin/bash
certbot renew --pre-hook "service haproxy stop" --post-hook "service haproxy start"
DOMAIN='wizardry.io' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/ssl/$DOMAIN.pem'

Save the file, then make it executable so cron can run it:

sudo chmod +x /etc/haproxy/ssl/renewal.sh

Nice and simple.

We use certbot renew with a --pre-hook and --post-hook to stop HAProxy, renew the certificates, concatenate fullchain.pem and privkey.pem into domain.pem, then start HAProxy again... You can imagine how complex this scripting might get with more robust, distributed infrastructure.

Our certificates last for 90 days, but we'd rather err on the side of caution and renew them every 60 days. cron can't express "every 60 days" directly, so we'll run our script at midnight on the first day of every second month, which is close enough. (Running it more often is also safe; certbot renew only replaces certificates that are close to expiry.) Let's create the entry in our crontab:

crontab -e

Then enter:

0 0 1 */2 * /etc/haproxy/ssl/renewal.sh

Beautiful. Let's reflect.


  • Created an A Record for our domain.
  • Created verified certificates for our domain using Let's Encrypt via certbot.
  • Configured those certificates into an HAProxy to receive HTTPS traffic.
  • Forwarded HTTP traffic upstream to our backends. (Eeeeek!)
  • Configured certbot to automatically renew our certificates every 60 days.

Anyone visiting https://wizardry.io will receive an encrypted page - great! But what if our infrastructure were more complex and required multiple HAProxy instances for high availability? What can we do about our traffic being exposed en route to our backends? These are tough engineering questions.

Before we dive too deep, let's look into Fly.

Let's Fly

Fly is an easy way to get an automatically renewing Let's Encrypt certificate. Here are the steps that you would need to take:

1: Add your site to Fly using your hostname.

2: Create a CNAME or ALIAS Record within your DNS host that coincides with your Let's Encrypt-ed Fly.io hostname. Once confirmed, you'll get the green checkmark.

3: Add a backend by connecting the Fly agent to it, if applicable. For things like GitHub Pages or Ghost blogs, no agent is required.

Once completed, you now have:

  • A global network of highly available load balancers to receive your visitors.

  • End-to-end HTTPS using automatically renewing Let's Encrypt certificates; encryption from visitor right to your application.

It's common practice to forward unencrypted HTTP traffic after TLS/SSL terminates at the load balancer; we did so, above. Imagine if we had "upstream" servers or load balancers in various locations receiving unencrypted HTTP traffic - that would be bedlam; we want that traffic protected! The Fly agent secures your routes all the way to your application.
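For contrast, re-encrypting upstream traffic on a self-managed HAProxy is possible, but it means provisioning and rotating certificates for every backend as well. Each server line would need TLS enabled and a CA to verify against - something like the sketch below, where the ca-file path is illustrative:

```
backend web-backend
  server web-1 [PRIVATE_IP]:443 ssl verify required ca-file /etc/haproxy/ssl/backend-ca.pem check
```

Multiply that by every backend and every load balancer, and the certificate bookkeeping grows quickly.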


Let's Encrypt is a powerful tool for security. We set up a single HAProxy to serve automatically renewing Let's Encrypt certificates and it was quite a bit of work. Most infrastructure requirements are significantly more sophisticated than what we demonstrated.

Using Fly, you can easily apply Let's Encrypt'd end-to-end HTTPS to even the most robust, distributed and micro-serviced of infrastructure.

Kellen Evan Person

A polite, forest-dwelling Canadian who enjoys writing and nature. He's spent nearly two decades building web applications and strives to keep development fun and light-hearted.

North Vancouver, Canada