
Delivering Multiple Apps on One Hostname

TL;DR: Proper application delivery can be a sticky wicket. We’ll show you how to balance multiple applications on one hostname using HAProxy and Nginx. After that, we’ll show you how to do the same thing using Fly.io.

This is going to be a journey!

Here’s how we are going to tackle this:


  • Outline what we want to configure and why.
  • Do a quick run-through of application delivery basics.
  • Shimmy through configuration step-by-step.
  • Show you how to do the same thing with Fly.

Right, then. Let’s roll.


In this fantastical scenario, we want to bootstrap a potent new web property. Our web property will contain multiple backends that we want to serve on different subfolders. We will use the example domain borganics.org for our made-up property.

At Borganics, resistance is futile, healthy, and pleasant!

It’ll be made up of …

  • A GitHub Pages backend served from borganics.org/docs.
  • A NodeJS Ghost Blog backend served from borganics.org/blog.
  • Two identical Rails applications served from borganics.org/.

Three different applications, one domain. To deliver all of these applications to our users, we want to use a Load Balancer and have sane routing. We could go deep down into a scaling rabbit hole, but we’ll only demonstrate mild redundancy.

Application Delivery 101

A world without a Load Balancer is straightforward. But not in a good way.

Visitors -> Internet -> Server -> Database

Your visitors hit your website directly. This isn’t good for a number of reasons:

  • You have a single point of failure. If your monolithic server goes down, you’re out of luck.
  • Your server is in one location. The further your visitors are, the slower your application will be for them.
  • You are in a difficult position to scale any way but vertically. If demand accelerates beyond the capability of your monolith, it’s a mad scramble to boost your architecture to sate the demand.

A step up from no Load Balancer would be a Layer 4 Load Balancer. Introducing a Load Balancer puts a proxy or proxy-like service in front of your application to distribute traffic within your infrastructure.

Visitors -> Load Balancer --> ServerA ( / ) -> DatabaseA
                          --> ServerB ( / ) -> DatabaseA
                          --> ServerC ( / ) -> DatabaseA

Requests arrive and are routed to identical instances of your front-end servers. We improve upon the no Load Balancer scenario by providing redundancy. If ServerA bites the dust, then identical ServerB and ServerC will pick up the slack. This opens the door to high availability and horizontal scale; you distribute load across identical instances.

A further step from the Layer 4 Load Balancer is a Layer 7 Load Balancer. Within Layer 7 is where we can intelligently balance traffic to different applications on different routes.

Visitors -> Load Balancer --> BlogA ( /blog/ ) -> DatabaseA
                          --> BlogB ( /blog/ ) -> DatabaseA
                          --> ServerA ( / )    -> DatabaseA
                          --> ServerB ( / )    -> DatabaseA

If we are only using HAProxy to put different applications on the same domain name using subfolders, we have created a Reverse Proxy. A Reverse Proxy becomes a Load Balancer when multiple identical, redundant instances of the same application alternate processing the traffic. In the configuration above, load is shared between the identical BlogA/BlogB and ServerA/ServerB pairs.

Now, borganics.org will apply a Layer 7 Load Balancer using Nginx and HAProxy. It’ll look like this:

Visitors -> Load Balancer --> BlogA ( /blog/ ) -> DatabaseA
                          --> DocsA ( /docs/ )
                          --> ServerA ( / )    -> DatabaseA*
                          --> ServerB ( / )    -> DatabaseA*
* Optional


borganics.org needs 4 servers and one GitHub Page. We’ll start with the GitHub page. Our steps will proceed as such:

  • 1: Setup GitHub Page.
  • 2: Setup a PostgreSQL database.
  • 3: Setup 2x Rails frontends.
  • 4: Setup a Ghost Blog.
  • 5: Configure HAProxy and Nginx.
  • 6: Nap.

1: GitHub Page

GitHub Pages provides simplified, free hosting for static sites, and you can pair it with the static-site generator of your choice.

Create a public or private repository, add or generate your code, then name the repository githubuser/githubuser.github.io. Your site will be accessible via githubuser.github.io. This is a fantastic way to host quick and easy documentation.

Once you have a static page accessible via https://githubuser.github.io, you’re good to go.
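If you'd like to script that flow, it can be sketched with a few git commands. This is a hypothetical sketch: the directory, the commit details, and githubuser are all placeholders.

```shell
# Hypothetical sketch: seed a user-pages repository locally.
# The /tmp path, commit identity, and "githubuser" are placeholders.
mkdir -p /tmp/docs-site && cd /tmp/docs-site
git init -q
git config user.name "Borganics Docs"
git config user.email "docs@borganics.org"
echo '<h1>Borganics Docs</h1>' > index.html
git add index.html
git commit -qm "Initial docs page"
# To publish, point a remote at githubuser/githubuser.github.io and push:
#   git remote add origin git@github.com:githubuser/githubuser.github.io.git
#   git push -u origin master
```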

2: PostgreSQL Database

Whether you’re hosting your database yourself or using a cloud provider like Compose or Heroku, the end-result of your configuration will be similar. You will receive the following set of data to access your database.

DATABASE_HOST: database9729.provider.com
DATABASE_PASSWORD: supersecretpasscode
DATABASE_URI: postgres://jeffery:supersecretpasscode@database9729.provider.com/postgres

Keep these credentials handy.
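One low-ceremony way to keep them handy is an environment file that the Rails and Ghost configurations can read later. A sketch, using the placeholder values above (the /tmp path is an assumption; the variable names match what we'll reference later, where Rails reads DATABASE_USERNAME and Ghost reads DATABASE_USER):

```shell
# Write the placeholder credentials to an env file, then source it so
# the applications can pick them up from the environment.
cat > /tmp/borganics.env <<'EOF'
export DATABASE_HOST="database9729.provider.com"
export DATABASE_USERNAME="jeffery"
export DATABASE_USER="jeffery"
export DATABASE_PASSWORD="supersecretpasscode"
export DATABASE_NAME="postgres"
EOF
. /tmp/borganics.env
```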

3: 2x Rails Frontend

One of the many HTTP servers that pairs with Rails is Unicorn. We’ll roll through a simple setup of a Rails application running Unicorn through Nginx.

Let’s assume that we’re setting this up using a Digital Ocean Ubuntu VPS. Using Digital Ocean, we can clone a snapshot of one server to have two identical servers to Load Balance between.

First, set up Nginx on your VPS. You can follow their handy documentation to set it up securely and sanely.

Unicorn ships with an example nginx.conf which we can use. Within the default configuration, note the lines that read /path/to/. For logging, you likely want to set the path to begin with /var/log. For your static path and other asset-serving paths, we’ll use /var/www.

We can use curl to bring this configuration file right where we’d like it to be.

curl -o /etc/nginx/nginx.conf.new https://raw.githubusercontent.com/defunkt/unicorn/master/examples/nginx.conf

Change the /path/to/ entries, then overwrite the previous config file.

mv /etc/nginx/nginx.conf.new /etc/nginx/nginx.conf

Next, we’ll make some adjustments to group, user, and security settings. We’ll create a web group and a no-login nginx user, then add the user to the group.

sudo groupadd web
sudo useradd -s /sbin/nologin -r nginx
sudo usermod -a -G web nginx

Furthermore, we’ll create a place where Unicorn will live - its magical stable, if you will.

sudo mkdir /var/www
sudo chgrp -R web /var/www # Web now owns /var/www.
sudo chmod -R 775 /var/www # Group write permissions.

… Add your Nginx user to the web group…

sudo usermod -a -G web [YOUR_USERNAME]

Next, unleash the Unicorn.

gem install unicorn

Create your Unicorn-enabled application within /var/www:

cd /var/www
rails new unicornstatic

You can now lock this into Git and begin building out your static application. If you already have a site running Unicorn, you can simply clone it into the /var/www directory.

We’ll now want to copy over a sane configuration for Unicorn…

curl -o config/unicorn.rb https://raw.githubusercontent.com/defunkt/unicorn/master/examples/unicorn.conf.rb

… Then make some path adjustments, in particular setting our APP_PATH:

APP_PATH = "/var/www/unicornstatic"
working_directory APP_PATH

stderr_path APP_PATH + "/log/unicornstatic.stderr.log"
stdout_path APP_PATH + "/log/unicornstatic.stdout.log"

pid APP_PATH + "/tmp/pid/unicornstatic.pid"
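Unicorn won't create those log and pid directories on its own, so make them before booting. A small sketch, assuming the APP_PATH above:

```shell
# Create the log and pid directories that the unicorn.rb above points at.
# APP_PATH matches the value set in config/unicorn.rb.
APP_PATH=/var/www/unicornstatic
mkdir -p "$APP_PATH/log" "$APP_PATH/tmp/pid"
```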

Finally, engage the Nginx daemon to start serving your static page. We’ll start it in production.

unicorn_rails -c /var/www/unicornstatic/config/unicorn.rb -D -E production

Your site will now be running at its public IP, and accessible within your ‘datacenter’ via its private IP. As mentioned earlier, we want — and I know this is going to sound crazy — … two Unicorns.

Create a snapshot, clone a secondary instance. You now have two private IP addresses to reach two identical Rails applications.

If we have something meatier than a static site and require a database, we’d configure config/database.yml to look something like this:

postgresql: &postgresql
  adapter: postgresql
  database: <%= ENV['DATABASE_NAME'] %>
  username: <%= ENV['DATABASE_USERNAME'] %>
  password: <%= ENV['DATABASE_PASSWORD'] %>
  host: <%= ENV['DATABASE_HOST'] %>

4: Ghost Blog

We have our documentation at githubuser.github.io and two Rails apps riding on the back of Unicorn. We’re creating an empire, so we’ll want a blog to share our thoughts with our community. Again, we’ll assume that we’re creating this in something like a Digital Ocean VPS with Nginx and Node installed.

Similar to our Rails application, we want to launch everything from /var/www.

mkdir -p /var/www/
cd /var/www/
wget https://ghost.org/zip/ghost-latest.zip
unzip -d ghost ghost-latest.zip
cd ghost

Next, install all of our packages.

npm install --production

We’ll want to legitimize, then alter our configuration file…

mv config.example.js config.js
vi config.js # Or nano. Whatever ya' like.

… Now, we’ll make some edits…

production: {
    url: 'https://borganics.org/blog',
    fileStorage: false,
    mail: {},
    database: {
        client: 'postgres',
        connection: {
            host: process.env.DATABASE_HOST,
            user: process.env.DATABASE_USER,
            password: process.env.DATABASE_PASSWORD,
            database: process.env.DATABASE_NAME,
            port: '5432'
        },
        debug: false
    }
},

We changed:

  • url: The domain that we’ll serve this from.
  • Added fileStorage:false underneath the url: line.
  • The database client: field to postgres instead of sqlite3.
  • Added our environment variables to securely pass our database credentials through.

Now, to configure Nginx. Here we will create a sites-available entry for Ghost.

vi /etc/nginx/sites-available/ghost

Edit the site instance with the correct SERVER_IP information:

server {
    listen 80;
    server_name [SERVER_IP];
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_pass         http://127.0.0.1:2368;
    }
}

… Link our /etc/nginx/sites-available/ghost into /etc/nginx/sites-enabled/ghost to, err, enable…

ln -s /etc/nginx/sites-available/ghost /etc/nginx/sites-enabled/ghost

... And restart Nginx to activate.

service nginx restart

We’ll need to organize our user and ownership structure. To conclude, we’ll get Ghost running “forever”!

sudo adduser --shell /bin/bash --gecos 'Ghost application' ghost
sudo chown -R ghost:ghost /var/www/ghost/

Become the Ghost user (BoOooOoOo!):

su - ghost

Now, in typical Ghost fashion we’ll haunt, er, run forever.

npm install -g forever
NODE_ENV=production forever start index.js

It’s running!

In summary, we now have:

  • A Ghost Blog.
  • Two Rails + Unicorn static pages.
  • A GitHub Page.

Let’s bring the balance.

5: HAProxy and Nginx

Application delivery gets tricky when you want to thread all of your applications together under one hostname, manage redundancy, and evenly distribute load. We’ll give it a go. Create a new server instance; again, we’ll assume Ubuntu on Digital Ocean. This is our fourth and final server.

Install HAProxy ~

sudo apt-get update
sudo apt-get install haproxy
vi /etc/default/haproxy # Or nano

Change ENABLED=0 to ENABLED=1.
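If you'd rather script that edit than open an editor, sed can flip the flag. Here the substitution is demonstrated on a scratch copy; on the real server you'd point it at /etc/default/haproxy:

```shell
# Demonstrate the edit on a scratch copy of the defaults file;
# on the real server, run the sed line against /etc/default/haproxy.
printf 'ENABLED=0\n' > /tmp/haproxy-default
sed -i 's/^ENABLED=.*/ENABLED=1/' /tmp/haproxy-default
cat /tmp/haproxy-default
# ENABLED=1
```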

We can spy on our HAProxy defaults by looking at /etc/haproxy/haproxy.cfg. We see that our default mode and option are http and httplog respectively. This is an out-of-the-box Layer 7 Load Balancer. Nice.

HAProxy conducts the routing to specific subfolders by using Access Control Lists, or ACLs. We can edit our haproxy.cfg to expect various ACLs. We’ll want two! One for docs and one for blog.

frontend www
   bind [SERVER_PUBLIC_IP]:80
   option http-server-close
   acl url_blog path_beg /blog
   acl url_docs path_beg /docs
   use_backend blog-backend if url_blog
   use_backend docs-backend if url_docs
   default_backend web-backend

Here is what we have configured:

  • bind [SERVER_PUBLIC_IP]:80: Our Load Balancer is available at our public IP. This is where we direct our traffic, now. The rest of the IPs we’ll be dealing with will be private IPs.
  • option http-server-close: HAProxy closes each server-side connection after the response while keeping the client-side connection alive, so a visitor can send multiple requests over one connection.
  • acl: We specify url paths for our ACL.
  • use_backend: We specify backends for our ACLs.
  • default_backend: Where all other requests go!

We’re close, now! Stay with me! We need a backend group for each of our applications. Each of our droplets has a private IP address. Our GitHub Pages site has a public hostname.

We will use them, now.

First, Ghost:

backend blog-backend
   reqrep ^([^\ :]*)\ /blog/(.*) \1\ /\2
   server blog-1 [GHOST_PRIVATE_IP]:2368 check
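That reqrep line rewrites the request line so Ghost never sees the /blog prefix (the backslash-escaped spaces are HAProxy config syntax). A rough illustration of the same rewrite using sed, with a made-up request line:

```shell
# Approximate the reqrep rewrite with sed: strip the /blog/ prefix from
# an example HTTP request line, as HAProxy does before forwarding to Ghost.
echo 'GET /blog/welcome-to-ghost/ HTTP/1.1' \
  | sed -E 's#^([^ :]*) /blog/(.*)#\1 /\2#'
# GET /welcome-to-ghost/ HTTP/1.1
```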

… Then, GitHub Pages…

backend docs-backend
  http-request set-header Host githubuser.github.io
  server docs1 githubuser.github.io:80

… Now — stayyy withh meeee — our Unicorn Rails apps…

backend web-backend
   server web-1 [RAILS_1_PRIVATE_IP]:80 check
   server web-2 [RAILS_2_PRIVATE_IP]:80 check
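HAProxy balances between the servers in a backend using round-robin by default. If you'd like to make the algorithm explicit (or swap it later for something like leastconn), add a balance directive. A sketch, with each Rails droplet's private IP as a placeholder:

```
backend web-backend
   balance roundrobin
   server web-1 [RAILS_1_PRIVATE_IP]:80 check
   server web-2 [RAILS_2_PRIVATE_IP]:80 check
```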

Steady now — restart HAProxy.

service haproxy restart

Hot dog! Our robust, Load Balanced, semi-redundant and somewhat highly available web property is online! We have taken the first steps towards having a well-delivered application. We’d now associate the public IP with our domain name, so that our users can visit all of our sites from borganics.org.

Let’s proceed to step 6, then talk about Fly.

6: Nap

… ZzzzZZzz…


We’ve created our infrastructure. That was a lot of work.

Visitors -> Load Balancer --> BlogA ( /blog/ ) -> DatabaseA
                          --> DocsA ( /docs/ )
                          --> ServerA ( / )    -> DatabaseA*
                          --> ServerB ( / )    -> DatabaseA*
* Optional

Here’s what we’re still missing:

  • We’re not using HTTPS!
  • We have no redundancy with our HAProxy.
  • We have yet to configure logging.
  • Our visitors still connect to a fixed topographical location; we have no regional diversity.

We also have quite a gizmo to manage and update; there’s lots to do! We just want to build features into our app, right? Right — here’s how Fly can help.

Fly is powerful, topographically distributed, and application aware. There are many locations and tools that you can use to host your applications. Fly fundamentally changes how you serve your applications and route your visitor requests.

Let’s say that, instead of hosting all of the above on our own VPS infrastructure, we hosted them wherever is most simple. Our Rails applications and Ghost Blogs, for example, could be hosted on Heroku, AWS, or GCP. We could have all applications within one host, or several — it doesn’t matter.

Within Fly, you would add your site by verifying your hostname. To continue on with our example, we would verify borganics.org.

We would then add our backends. We’d add…

  • Heroku backends by adding the application name, including the Fly build pack, and then placing our FLY_TOKEN in an environment variable.
  • GitHub Pages backends by including the repository name.
  • Any containerized application running with Docker or Kubernetes.

After that, you have full control over your routing. Want to slot your Ghost blog’s traffic in at borganics.org/blog/? Simply put in a rule and set the priority.

Along with simplified discovery and routing, Fly empowers you with…

  • Full end-to-end HTTPS from your visitor, through the Load Balancer, to your application.
  • Mighty Middleware to give you Google Analytics, Google Authentication, Geo IP, Render Speed Tracking, and more with only a few clicks.
  • Your own global Application Delivery Network —  topographically distributed edge-nodes for a fast, world-wide network.

We think you’ll enjoy what we’ve built! For more information on how you can use Fly to build your next application, check out our quick start. Or, email us at support@fly.io — we’ll be happy to help you get rolling.

Fly started when we wondered, "what would a programmable edge look like?" Developer workflows work great for infrastructure like CDNs and optimization services. You should really see for yourself, though.

Kellen Evan Person



A polite, forest-dwelling Canadian who enjoys writing and nature. He's spent nearly two decades building web applications and strives to keep development fun and light-hearted.

North Vancouver, Canada