Getting Started with LiteFS
Overview
This guide will walk you through the steps of getting a LiteFS cluster up and running. Some of the steps include Fly.io commands for convenience; however, you can run LiteFS anywhere that runs Linux.
For a full, working example of a LiteFS application, please see the litefs-example repository.
Installing LiteFS
Dependencies
The litefs binary is self-contained, but you'll need to install the fuse3 library so LiteFS is able to mount a local file system. You'll also need ca-certificates if you're connecting to Consul. The installation command depends on your package manager, but here are some examples:
# Install on Alpine
$ apk add ca-certificates fuse3
# Install on Debian/Ubuntu
$ apt install ca-certificates fuse3
With Docker
LiteFS is meant to run inside your container alongside your application. You can pull in the litefs binary by copying it from the official Docker image:
COPY --from=flyio/litefs:0.4 /usr/local/bin/litefs /usr/local/bin/litefs
It's recommended that you run LiteFS as root in Docker instead of using the USER instruction to change users. If you need to run your application as another user, use the su command to run it as a non-root user.
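Putting the Docker pieces together, a minimal Dockerfile sketch might look like the following (the base image, config path, and application details are illustrative, not prescriptive):
FROM alpine:3.18
# fuse3 is required to mount the file system; ca-certificates for Consul.
RUN apk add ca-certificates fuse3
# Pull the litefs binary from the official image.
COPY --from=flyio/litefs:0.4 /usr/local/bin/litefs /usr/local/bin/litefs
# Your LiteFS configuration (covered below).
COPY litefs.yml /etc/litefs.yml
# Run as root; litefs mount mounts the FUSE file system and can
# supervise your application via the exec section of litefs.yml.
ENTRYPOINT ["litefs", "mount"]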
Without Docker
If you're running on a Linux VPS without Docker, you can download the latest version from our LiteFS releases page. LiteFS can run on either amd64 or arm64 architectures.
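As a sketch of a manual install (the version and asset name below are placeholders; check the releases page for the current ones):
# Hypothetical release version and asset name, shown for illustration only.
$ curl -fsSL -o litefs.tar.gz \
    https://github.com/superfly/litefs/releases/download/v0.5.11/litefs-v0.5.11-linux-amd64.tar.gz
$ tar -xzf litefs.tar.gz -C /usr/local/bin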
Configuring LiteFS
Most configuration options for LiteFS are set via a YAML configuration file called litefs.yml. This file is typically placed at /etc/litefs.yml, but you can change the path by using the -config flag.
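For example, to start LiteFS with a config stored somewhere other than the default location (the path below is just an example):
$ litefs mount -config /path/to/litefs.yml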
File System
Let's first set two fields to tell LiteFS where to mount its file system and where to store its internal data.
# This directory is where your application will access the database.
fuse:
  dir: "/litefs"

# This directory is where LiteFS will store internal data.
# You must place this directory on a persistent volume.
data:
  dir: "/var/lib/litefs"
If you're running on Fly.io, you should create a volume:
fly volumes create litefs --size 10
And then specify it as a mount in your fly.toml:
[mounts]
source = "litefs"
destination = "/var/lib/litefs"
Lease Configuration
LiteFS only allows a single node to be the primary at any given time. The primary node is the only one that can write data to the database. The other nodes are called replicas and they provide a read-only copy.
The primary is determined by using a distributed lease. In this guide, we'll be using a Consul lease as it allows the primary to automatically failover in order to have high write availability.
If you're running on Fly.io, you can add a Consul URL to your app with:
fly consul attach
That will set a FLY_CONSUL_URL secret for the app, which will contain the cluster URL.
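If you want to confirm the secret was set, you can list the app's secrets with flyctl:
$ fly secrets list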
Then, in your litefs.yml, set the lease section:
lease:
  type: "consul"

  # Specifies if this node can become primary. The expression below evaluates
  # to true on nodes that are run in the primary region. Nodes in other regions
  # act as non-candidate, read-only replicas.
  candidate: ${FLY_REGION == PRIMARY_REGION}

  # If true, then the node will automatically become primary after it has
  # connected with the cluster and sync'd up. This makes it easier to run
  # migrations on start up.
  promote: true

  # The API URL that other nodes will use to connect to this node.
  advertise-url: "http://${FLY_ALLOC_ID}.vm.${FLY_APP_NAME}.internal:20202"

  consul:
    # The URL of the Consul cluster.
    url: "${FLY_CONSUL_URL}"

    # A unique key shared by all nodes in the LiteFS cluster.
    # Change this if you are running multiple clusters in a single app!
    key: "${FLY_APP_NAME}/primary"
You can find more details in the lease management section of the configuration guide.
Running LiteFS
The main command used to start LiteFS is the litefs mount command. This mounts a FUSE file system and then starts an API server for LiteFS nodes to communicate with each other.
litefs mount
Running as a Supervisor
LiteFS can either be run on its own or it can act as a simple supervisor process for your application. Running as a supervisor lets LiteFS wait to start the application until after it has connected to the cluster.
You can specify one or more commands in the exec section of your config. If you set lease.promote to true, you can choose to run your migration scripts only on candidate nodes. This means that candidates will automatically be promoted to primary and then run the migrations.
exec:
  # Only run migrations on candidate nodes.
  - cmd: "rails db:migrate"
    if-candidate: true

  # Then run the application server on all nodes.
  - cmd: "rails server"
Testing Your LiteFS Instance
Once LiteFS is mounted, you can use SQLite clients or the sqlite3 CLI to interact with databases in the mount directory:
sqlite3 /litefs/my.db
LiteFS only allows files in the root of the mount and it does not currently support subdirectories.
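For a quick end-to-end check on the primary node (the table name is just an example; on a replica the INSERT would fail because the database is read-only):
$ sqlite3 /litefs/my.db "CREATE TABLE IF NOT EXISTS greetings (msg TEXT)"
$ sqlite3 /litefs/my.db "INSERT INTO greetings (msg) VALUES ('hello from the primary')"
$ sqlite3 /litefs/my.db "SELECT msg FROM greetings"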
Importing Your Database
If you have an existing database, you can import it using the ltx import command.
ltx import -name my.db /path/to/database
See the ltx import documentation for more details.
Databases should only be imported using litefs tooling. Do not use cp to copy a database into place.
Configuring the Proxy (Fly.io Only)
LiteFS has a few differences from regular SQLite since it is a distributed system. LiteFS requires that all writes occur on the primary node, which means that applications need to redirect write requests to the current primary. It's also possible to issue a write to the primary and then read from a replica before the change has propagated to that replica.
Most web applications can take advantage of a thin, built-in proxy inside LiteFS that automatically handles these write redirection and replica consistency issues, as long as your application follows some simple rules:
- GET requests should not perform write operations (e.g. INSERT, UPDATE, etc).
- Clients must have cookies enabled.
To configure the proxy, you'll need to set the proxy section in the config file:
proxy:
  # Bind address for the proxy to listen on.
  addr: ":8080"

  # Hostport of your application.
  target: "localhost:8081"

  # Filename of the SQLite database you want to use for TXID tracking.
  db: "my.db"
Then you'll need to update your application server's port to match what is specified in the target field (8081).
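If you're deploying on Fly.io, the platform's edge router should send traffic to the LiteFS proxy port rather than directly to your app. A fly.toml sketch, assuming you use an http_service section and the ports shown above:
[http_service]
  # Point Fly's router at the LiteFS proxy (the addr above), not the app's port.
  internal_port = 8080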
You can find more details in the proxy configuration guide.