---
title: Getting Started with LiteFS on Fly.io
layout: docs
nav: litefs
toc: true
redirect_from:
- /docs/litefs/example/
- /docs/litefs/getting-started/
---
## Overview
This guide will walk you through the steps of getting a LiteFS cluster up and
running on Fly.io. For a full, working example of a LiteFS application, please see the
[litefs-example][] repository.
[litefs-example]: https://github.com/superfly/litefs-example
## Installing LiteFS
### Dependencies
The `litefs` binary is self-contained, but you'll need to install the `fuse3`
library so LiteFS is able to mount a local file system. You'll also need
`ca-certificates` if you're connecting to Consul, and you'll almost certainly
want to install `sqlite`. The exact installation depends on your package manager; here are lines you can add to your Dockerfile for Alpine-based or Debian-based images:
```dockerfile
# for alpine-based images
RUN apk add ca-certificates fuse3 sqlite
```
```dockerfile
# for debian/ubuntu-based images
RUN apt-get update -y && apt-get install -y ca-certificates fuse3 sqlite3
```
### Installing LiteFS
LiteFS is meant to run inside your container alongside your application. You
can pull in the `litefs` binary by copying it from the official Docker image:
```dockerfile
COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs
```
It's recommended that you run LiteFS as `root` in Docker rather than switching
users with the `USER` instruction. If your application needs to run as a
non-root user, start it with `su` instead.
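For example, the command you hand off to LiteFS (or your Dockerfile's `CMD`) could wrap the server in `su`. This is only a sketch; the `appuser` account and server path are hypothetical stand-ins for your own image:
```sh
# Hypothetical: LiteFS itself stays root, but the application starts as "appuser".
su appuser -c "/usr/local/bin/my-app-server"
```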
Take a look at [the example Dockerfile][Dockerfile] in the `litefs-example` repo
for an example.
[Dockerfile]: https://github.com/superfly/litefs-example/blob/main/Dockerfile
## Configuring LiteFS
Most configuration options for LiteFS are set via a YAML configuration file
called `litefs.yml`. This file is typically placed in `/etc/litefs.yml`, but
you can change the path by using the `-config` flag.
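For example, if you keep the config somewhere other than the default location, you can point LiteFS at it explicitly when mounting (the path below is just a placeholder):
```sh
litefs mount -config /path/to/litefs.yml
```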
Take a look at
[the litefs.yml file](https://github.com/superfly/litefs-example/blob/main/fly-io-config/etc/litefs.yml)
in the `litefs-example` repo for a full example, or follow along with the
rest of this section for a more detailed explanation!
### File system
Let's first set two fields to tell LiteFS where to mount its file system and
where to store its internal data.
```yml
# This directory is where your application will access the database.
fuse:
  dir: "/litefs"

# This directory is where LiteFS will store internal data.
# You must place this directory on a persistent volume.
data:
  dir: "/var/lib/litefs"
```
If you're running on Fly.io, you should create a volume:
```sh
fly volumes create litefs --size 10
```
And then specify it as a mount in your `fly.toml`:
```toml
[mounts]
source = "litefs"
destination = "/var/lib/litefs"
```
### Lease configuration
LiteFS only allows a single node to be the _primary_ at any given time. The
primary node is the only one that can write data to the database. The other
nodes are called _replicas_ and they provide a read-only copy.
The primary is determined by using a [_distributed lease_](https://martinfowler.com/articles/patterns-of-distributed-systems/time-bound-lease.html).
In this guide, we'll be using a [Consul][] lease as it allows the primary to
automatically failover in order to have high write availability. You can add a
Consul URL to your app with:
```sh
fly consul attach
```
That will set a `FLY_CONSUL_URL` secret on the app, which contains the Consul cluster URL.
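If you want to confirm the secret was created, you can list the app's secrets; the value itself is never displayed:
```sh
fly secrets list
```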
Then in your `litefs.yml`, set the `lease` section:
```yml
lease:
  type: "consul"

  # Specifies if this node can become primary. The expression below evaluates
  # to true on nodes that are run in the primary region. Nodes in other regions
  # act as non-candidate, read-only replicas.
  candidate: ${FLY_REGION == PRIMARY_REGION}

  # If true, then the node will automatically become primary after it has
  # connected with the cluster and sync'd up. This makes it easier to run
  # migrations on start up.
  promote: true

  # The API URL that other nodes will use to connect to this node.
  advertise-url: "http://${FLY_ALLOC_ID}.vm.${FLY_APP_NAME}.internal:20202"

  consul:
    # The URL of the Consul cluster.
    url: "${FLY_CONSUL_URL}"

    # A unique key shared by all nodes in the LiteFS cluster.
    # Change this if you are running multiple clusters in a single app!
    key: "${FLY_APP_NAME}/primary"
```
You can find more details in the [lease management][] section of the configuration
guide.
[Consul]: https://consul.io
[lease management]: /docs/litefs/config#lease-management
### Configuring the proxy
LiteFS requires that all writes occur on the primary node, which means
that applications need to redirect write requests to the current primary. It's
also possible to issue a write to the primary and then read from a replica
before the change is propagated to that replica.
Most web applications can take advantage of a thin, built-in proxy
inside LiteFS that automatically handles these write redirection and replica
consistency issues. In order to make use of this proxy, your application needs
to follow these rules:
- `GET` requests never perform write operations (e.g. `INSERT`, `UPDATE`, etc).
- Clients have cookies enabled.
To configure the proxy, you'll need to set the `proxy` section in the config file:
```yml
proxy:
  # Bind address for the proxy to listen on.
  addr: ":8080"

  # Hostport of your application - replace 8081 with whatever port
  # your application is listening on!
  target: "localhost:8081"

  # Filename of the SQLite database you want to use for TXID tracking.
  db: "my.db"
```
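With this configuration, clients should talk to the LiteFS proxy port (8080 here) rather than the application port directly. As a rough sketch with a hypothetical `/posts` endpoint, writes go through the proxy so they can be forwarded to the primary, and subsequent reads wait for the replica to catch up:
```sh
# Write request: the proxy forwards it to the current primary if this node is a replica.
curl -i -X POST http://localhost:8080/posts -d "title=hello"

# Read request: served locally once this node has replicated past the transaction
# recorded in the client's cookie.
curl -i http://localhost:8080/posts
```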
You can find more details in the [proxy configuration guide][].
[proxy configuration guide]: /docs/litefs/proxy
## Running LiteFS
The main command used to start LiteFS is the `litefs mount` command. This mounts
a FUSE file system and then starts an API server for LiteFS nodes to
communicate with each other. You can use this as the `ENTRYPOINT` in your
Dockerfile:
```dockerfile
ENTRYPOINT litefs mount
```
### Running as a supervisor
LiteFS can either be run on its own or it can act as a simple supervisor process
for your application. Running as a supervisor lets LiteFS wait to start the
application until after it has connected to the cluster.
You can specify one or more commands in the `exec` section of your config. If
you set `lease.promote` to `true`, you can also mark your migration scripts to
run only on candidate nodes: a candidate automatically promotes itself to
primary once it has synced, so the migrations run against the writable database.
```yml
exec:
  # Only run migrations on candidate nodes.
  - cmd: "rails db:migrate"
    if-candidate: true

  # Then run the application server on all nodes.
  - cmd: "rails server"
```
### Testing your LiteFS instance
Once LiteFS is mounted, you can use SQLite clients or the `sqlite3` CLI to
interact with databases in the mount directory:
```sh
sqlite3 /litefs/my.db
```
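For example, on the primary node you can create a table and insert a row, then run the same `SELECT` on a replica to confirm the change shows up (the table and column names here are just placeholders):
```sh
# On the primary: create a table and write a row.
sqlite3 /litefs/my.db "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT); INSERT INTO notes (body) VALUES ('hello from the primary');"

# On a replica: the row should appear once the change has replicated.
sqlite3 /litefs/my.db "SELECT * FROM notes;"
```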
LiteFS only allows files in the root of the mount and it does not currently
support subdirectories.
## Importing your database
If you have an existing database, you can import it using the `litefs import`
command.
```sh
litefs import -name my.db /path/to/database
```
Refer to the [`litefs import`](/docs/litefs/import) documentation for more details.
<div class="warning icon">
You should only interact with SQLite databases on LiteFS through
a SQLite client or through the `litefs` tooling.
<br><br>
Do not use `cp` to copy a database into place.
</div>
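Once the import finishes, the database shows up under the mount directory like any other LiteFS database, so you can run a quick sanity check against it:
```sh
# List tables and run an integrity check on the imported database.
sqlite3 /litefs/my.db ".tables"
sqlite3 /litefs/my.db "PRAGMA integrity_check;"
```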
## Next steps
[Back up your LiteFS cluster](/docs/litefs/backup/).