Getting Started with LiteFS on Fly.io

Overview

This guide will walk you through the steps of getting a LiteFS cluster up and running on Fly.io. For a full, working example of a LiteFS application, please see the litefs-example repository.

Creating a LiteFS Cloud Cluster (optional)

You can choose to create a LiteFS Cloud cluster, which will provide automatic backups managed by Fly.io. With LiteFS Cloud, you’ll have the ability to restore your LiteFS database to any point in time, near-instantaneously. You can use LiteFS Cloud to back up and restore your database running anywhere (whether you run your application on the Fly Platform or somewhere else).

You can create your LiteFS Cloud cluster in the Fly.io dashboard. First, sign up for an account if you haven’t already. After you’re signed in, navigate to the LiteFS Cloud section on the left navbar, and then use the Create button to create a new cluster. Here’s a handy link to the LiteFS Cloud dashboard if you prefer!

When you create your LiteFS Cloud cluster, you’ll be asked to save an auth token. Keep it handy; you’ll use it later.

Just a note: if you decide not to create a LiteFS Cloud cluster right now, you can always change your mind and add it later. Check out the LiteFS Cloud backup docs for more details.

Installing LiteFS

Dependencies

The litefs binary is self-contained, but you’ll need to install the fuse3 library so LiteFS can mount a local file system. You’ll also need ca-certificates if you’re connecting to Consul, and you’ll almost certainly want SQLite. The installation depends on your package manager; here are lines you can add to your Dockerfile for Alpine- or Debian-based images:

# for alpine-based images
RUN apk add ca-certificates fuse3 sqlite
# for debian/ubuntu-based images
RUN apt-get update -y && apt-get install -y ca-certificates fuse3 sqlite3

The LiteFS binary

LiteFS is meant to run inside your container alongside your application. You can pull in the litefs binary by copying it from the official Docker image:

COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs

It’s recommended that you run LiteFS as root in Docker instead of switching users with the USER instruction. If you need to run your application as a non-root user, wrap its command with su instead.
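If you do wrap your application this way, the command might look like the following in the exec section described later in this guide (appuser is a hypothetical account you would create in your Dockerfile):

exec:
  - cmd: "su appuser -c 'rails server'"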

Take a look at the example Dockerfile in the litefs-example repo for an example.

Configuring LiteFS

Most configuration options for LiteFS are set via a YAML configuration file called litefs.yml. This file is typically placed in /etc/litefs.yml, but you can change the path by using the -config flag.
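For example, to point LiteFS at a config file stored somewhere else (the path below is purely illustrative):

litefs mount -config /path/to/litefs.yml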

Take a look at the litefs.yml file in the litefs-example repo for a full example, or follow along with the rest of this section for a more detailed explanation!

File system

Let’s first set two fields to tell LiteFS where to mount its file system and where to store its internal data.

# This directory is where your application will access the database.
fuse:
  dir: "/litefs"

# This directory is where LiteFS will store internal data.
# You must place this directory on a persistent volume.
data:
  dir: "/var/lib/litefs"

If you’re running on Fly.io, you should create a volume:

fly volumes create litefs --size 10

And then specify it as a mount in your fly.toml:

[mounts]
  source = "litefs"
  destination = "/var/lib/litefs"

Lease configuration

LiteFS only allows a single node to be the primary at any given time. The primary node is the only one that can write data to the database. The other nodes are called replicas and they provide a read-only copy.

The primary is determined by using a distributed lease. In this guide, we’ll be using a Consul lease, as it allows the primary to fail over automatically and keeps writes highly available. You can add a Consul URL to your app with:

fly consul attach

That will set a FLY_CONSUL_URL secret on the app containing the cluster URL.
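If you want to double-check that the secret exists, fly secrets list prints each secret’s name and digest (the value itself is never shown):

fly secrets list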

Then in your litefs.yml, set the lease section:

lease:
  type: "consul"

  # Specifies if this node can become primary. The expression below evaluates
  # to true on nodes that are run in the primary region. Nodes in other regions
  # act as non-candidate, read-only replicas.
  candidate: ${FLY_REGION == PRIMARY_REGION}

  # If true, then the node will automatically become primary after it has
  # connected with the cluster and synced up. This makes it easier to run
  # migrations on start up.
  promote: true

  # The API URL that other nodes will use to connect to this node.
  advertise-url: "http://${FLY_ALLOC_ID}.vm.${FLY_APP_NAME}.internal:20202"

  consul:
    # The URL of the Consul cluster.
    url: "${FLY_CONSUL_URL}"

    # A unique key shared by all nodes in the LiteFS cluster.
    # Change this if you are running multiple clusters in a single app!
    key: "${FLY_APP_NAME}/primary"

You can find more details in the lease management section of the configuration guide.

LiteFS Cloud Configuration

If you’re using LiteFS Cloud for backups, you’ll need to provide the LiteFS Cloud auth token that you saved when you created the cluster (or you can create a new one from the LiteFS Cloud section of the Fly.io dashboard).

You should add this as a secret for your application named LITEFS_CLOUD_TOKEN with:

fly secrets set LITEFS_CLOUD_TOKEN=yourtokenhere

Configuring the proxy

LiteFS requires that all writes occur on the primary node, so applications need to redirect write requests to the current primary. It’s also possible for a client to write to the primary and then read from a replica before the change has propagated there, which would return stale data.

Most web applications can take advantage of a thin, built-in proxy inside LiteFS that automatically handles these write-redirection and replica-consistency issues. To make use of the proxy, your application needs to follow these rules (see the example after the list):

  • GET requests never perform write operations (INSERT, UPDATE, DELETE, and so on).

  • Clients have cookies enabled.
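As a rough illustration of what these rules mean in practice (the app hostname and endpoints here are hypothetical):

# Reads use GET and can be served by whichever node receives them.
curl http://myapp.fly.dev/posts

# Writes use a non-GET method so the proxy can route them to the
# current primary and track consistency via the response cookie.
curl -X POST http://myapp.fly.dev/posts -d "title=hello"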

To configure the proxy, you’ll need to set the proxy section in the config file:

proxy:
  # Bind address for the proxy to listen on.
  addr: ":8080"

  # Hostport of your application - replace 8081 with whatever port
  # your application is listening on!
  target: "localhost:8081"

  # Filename of the SQLite database you want to use for TXID tracking.
  db: "my.db"

You can find more details in the proxy configuration guide.

Running LiteFS

LiteFS is started with the litefs mount command, which mounts a FUSE file system and then starts an API server that LiteFS nodes use to communicate with each other. You can use it as the ENTRYPOINT in your Dockerfile:

ENTRYPOINT litefs mount
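If you’d rather avoid the shell that Docker’s shell form introduces (so that litefs receives signals such as SIGTERM directly), the exec form is equivalent:

ENTRYPOINT ["litefs", "mount"]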

Running as a supervisor

LiteFS can either be run on its own or it can act as a simple supervisor process for your application. Running as a supervisor lets LiteFS wait to start the application until after it has connected to the cluster.

You can specify one or more commands in the exec section of your config. If you set lease.promote to true, you can mark migration scripts to run only on candidate nodes; a candidate will automatically promote itself to primary and then run the migrations.

exec:
  # Only run migrations on candidate nodes.
  - cmd: "rails db:migrate"
    if-candidate: true

  # Then run the application server on all nodes.
  - cmd: "rails server"

Testing your LiteFS instance

Once LiteFS is mounted, you can use SQLite clients or the sqlite3 CLI to interact with databases on the mount directory:

sqlite3 /litefs/my.db

LiteFS only allows files in the root of the mount and it does not currently support subdirectories.
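As a quick smoke test (the table name below is throwaway), you can write on the primary and then read the row back; running the read on a replica also verifies replication:

# Writes only succeed on the primary node.
sqlite3 /litefs/my.db "CREATE TABLE IF NOT EXISTS smoke (id INTEGER PRIMARY KEY)"
sqlite3 /litefs/my.db "INSERT INTO smoke DEFAULT VALUES"

# Reads work on any node once the change has replicated.
sqlite3 /litefs/my.db "SELECT count(*) FROM smoke"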

Importing your database

If you have an existing database, you can import it using the litefs import command.

litefs import -name my.db /path/to/database

Refer to the litefs import documentation for more details.

You should only interact with SQLite databases on LiteFS through a SQLite client or through the litefs tooling. Do not use cp to copy a database into place.