When deploying a Rails application to production, it's rare to question the idea of running a Postgres or MySQL database to persist data and Redis for caching, background job processing, and WebSockets. But what if it could all be done without running these services?
“No part is the best part”, right? Here's the thing: Redis and Postgres run as separate processes that need to be monitored. Usually these processes run on a separate server that's accessed over a network connection, which can also break. You'll either have to deal with that complexity yourself or pay to have it managed by services like Upstash Redis.
What if we could run everything on one server and not have to worry about “more parts” that can break? You can with SQLite and Litestack.
Litestack is a gem that has all the adapters needed to get a Rails application using SQLite for the database, ActiveJob, ActionCable, caching, and more.
SQLite is a database that stores data in a single file without the need to run a server. In some cases, writing data to SQLite is faster than writing individual files to disk! The story of how SQLite was designed to work on a battleship is as impressive as its implementation.
Combining Litestack and SQLite means you can run an entire Rails application on one box with a multi-process, multi-threaded server like Puma, and persist application data, process background jobs, publish WebSocket data, and store cached data.
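As a point of reference, the stock Rails `config/puma.rb` already gives you that concurrency; roughly (a sketch abbreviated from the default template, which ships the `workers` line commented out):

```ruby
# config/puma.rb -- abbreviated sketch of the Rails default template
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count

port ENV.fetch("PORT") { 3000 }

# Uncomment to run multiple worker processes on the same box
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }
```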
Here’s how you do it on Fly.io.
First, install Litestack in your SQLite Rails application by running:
bundle add litestack
Then install the adapters and update the configuration files with this Rails generator:
rails g litestack:install
This command configures your Rails environments with the settings they need to use Litestack for ActiveJob, ActionCable, caching, and your database.
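Concretely, the generator's job is to point each framework at a Litestack adapter. The adapter names below (`litedb`, `litejob`, `litecache`, `litecable`) are taken from Litestack's documentation; your generated files may differ slightly:

```ruby
# config/database.yml uses Litestack's ActiveRecord adapter:
#   default: &default
#     adapter: litedb

# config/cable.yml uses:
#   adapter: litecable

# config/environments/*.rb point jobs and caching at SQLite:
config.active_job.queue_adapter = :litejob
config.cache_store = :litecache
```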
If you don’t have an existing application, but want to try one out, clone the demo blog project, run bundle, and continue below.
Finally, let's deploy it to Fly.io! If you haven't already, install flyctl and create a Fly.io account. Then, from the root of your Rails project, launch your app:
fly launch
# ... name your app and select the deployment region
This will create a `fly.toml` file in your project's working directory and provision resources on Fly.io. A `Dockerfile` is also generated, which describes the server that will run in production.
You should see a `LITESTACK_DATA_PATH` environment variable in your `Dockerfile` that points to `/data`. Litestack uses this path to know which folder to save its SQLite databases in for caching, job processing, and ActionCable.
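In the generated `Dockerfile`, those pieces look roughly like this (a sketch; the exact lines depend on your dockerfile-rails version):

```dockerfile
# Point Litestack's SQLite databases at the mounted volume
ENV LITESTACK_DATA_PATH="/data"

# Declare the mount point so fly launch can attach a persistent volume
VOLUME /data
```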
Let’s deploy the application and see it run.
fly deploy
# ... app deploys
Congrats! You now have a Rails application running in production that uses SQLite for the database, background job processing, caching, and WebSockets.
Let's scale up the Fly Machine from 256 MB of RAM to 512 MB so the Rails app doesn't run out of memory.
fly scale memory 512
If 512 MB of memory is a bit tight for your Rails application, you can always dial it up by running the `fly scale memory` command with the amount of memory needed to run your app.
When you're ready to back up your database, run the `fly sftp` command and it will download the SQLite database file to your workstation. Here's what that looks like.
$ fly sftp get /data/production.sqlite3
28672 bytes written to production.sqlite3
$ sqlite3 production.sqlite3
SQLite version 3.39.5 2022-10-14 20:58:05
Enter ".help" for usage hints.
sqlite> .tables
ar_internal_metadata schema_migrations ...
There's our backup! Database backups don't get much easier than that. Fly.io is the fastest way to get a SQLite Rails app running in production.
How does it work?
Most of the pieces that make up Rails, like ActiveJob, ActiveRecord, etc., ship with an adapter layer that allows developers to implement different backends to fulfill the service's job.
The Litestack gem is a library of adapters between these Rails building blocks and SQLite. Take a quick peek inside the Litestack source code and you'll see the names of familiar Rails services.
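To make the adapter idea concrete, here's a toy queue adapter sketch. The `enqueue`/`enqueue_at` methods are the hooks ActiveJob actually calls on a queue adapter; the class itself and its in-memory array are invented for illustration (Litestack's Litejob persists jobs to a SQLite database instead):

```ruby
# A toy ActiveJob queue adapter. Real adapters (Sidekiq, Litejob, etc.)
# implement this same small interface against their own backend.
class ToyQueueAdapter
  def initialize
    @jobs = [] # stand-in for a SQLite-backed queue table
  end

  # Called by ActiveJob to run a job as soon as possible.
  def enqueue(job)
    @jobs << { job: job.class.name, args: job.arguments, run_at: nil }
  end

  # Called by ActiveJob to run a job at a future time (epoch seconds).
  def enqueue_at(job, timestamp)
    @jobs << { job: job.class.name, args: job.arguments, run_at: timestamp }
  end

  attr_reader :jobs
end
```

Point `config.active_job.queue_adapter` at an adapter like this and ActiveJob never needs to know whether the queue lives in Redis or a SQLite file.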
When the `fly launch` command is run, Fly.io runs the dockerfile-rails gem, which detects the Litestack gem and configures the `Dockerfile` with the `ENV` directives needed to store and access SQLite databases on a persistent volume.
The `fly launch` command then detects the `VOLUME` directives from the `Dockerfile` and adds them to the `fly.toml` file, which configures a persistent volume on the Fly Machine deployment target.
# Fly adds this volume configuration to the `fly.toml` file.
[mounts]
  source = "data"
  destination = "/data"
Once that's configured and provisioned, the `fly deploy` command is run, which builds an image from the `Dockerfile`, configures the volumes, and gets everything running.
That seems like a lot of steps, and it is, but Fly.io does all of that for you, so you only have to run `fly launch` and `fly deploy` to have a SQLite Rails app running in production.
Is it fast?
Heck yeah it is! Think about it for a moment: your Rails app reads and writes data from an NVMe disk on a physical server. There's no network connection to traverse; it's all right there on the same physical machine.
Litestack maintains a set of benchmarks that compare the performance of Litestack against other libraries. Take them with a grain of salt, though: these benchmarks are a baseline that helps you understand the relative performance of Litestack versus other libraries under conditions that don't represent a production workload.
For small or hobby Rails apps that fit on one server, you really can't beat SQLite. It will be faster, easier to maintain, easier to back up (just download your SQLite file), and easier to upgrade, with less stuff that can break. Like anything, though, it comes with trade-offs that are really important to understand before going to production.
What happens when the server crashes?
Your service goes down! The good news is that you can quickly bring the service back up on a new machine and bind it to the same volume.
As for your data, Fly.io snapshots it once daily and keeps up to five days of snapshots, and of course you can download the SQLite file whenever you want to back it up. You'll want to understand how Fly.io stores and manages data on volumes, which is well documented, including how to manage snapshots, access a volume, and make it bigger as your database grows.
But SQLite isn’t recommended for production Rails deployments!
It's true; there's even a configuration flag for it. Why would this warning be here if running SQLite in production were safe?
SQLITE3_PRODUCTION_WARN = "You are running SQLite in production, this is generally not recommended."\
  " You can disable this warning by setting \"config.active_record.sqlite3_production_warning=false\"."

initializer "active_record.sqlite3_production_warning" do
  if config.active_record.sqlite3_production_warning && Rails.env.production?
    # ... logs the warning above when the SQLite adapter is in use
  end
end
There are a few assumptions the Rails community makes about contemporary production environments:
- Host file systems are ephemeral - When deploying Rails applications, we assume either that volumes are "read-only" and a Rails app can't write to disk, or that anything written to disk will get replaced on every deploy.
- Database services are persistent and available from multiple nodes - When data is written to a data store, we expect it to be there when we ask for it again from any node running in the same cluster.
When `rails new` is run, it defaults to the SQLite3 database adapter. This makes setting up a development environment incredibly easy because no additional services, like a database server or Redis, need to be installed on the workstation. It creates a problem, though: when people new to Rails deploy the SQLite default to production, they might get errors about the volume being read-only, or, if they're unlucky, the application writes data that gets deleted on the next deploy. Yikes!
In our case we know we only want to run our Rails application on one node, and we've taken extra care to store the SQLite data files on a volume that persists between deploys. Since we've taken care of those two foot guns, it's completely viable to run SQLite in production, so add this to your `config/environments/production.rb` file to affirm:

# Put this in your config/environments/production.rb file
config.active_record.sqlite3_production_warning = false
Wrap-up: the SQLite in production checklist
Running a Rails application in production entirely on SQLite is a real possibility. It can lower your application's operational complexity, which will likely save you time and money, and it can even run faster since application data is stored on the same NVMe disk as the Rails application.
Here’s the checklist to know if your application is suitable for SQLite:
- Runs on one node - Make sure your Rails application can run on a single node. If running multiple nodes for your application is a requirement, you’ll want to stick with the more traditional client/server database stack.
- Data volumes are writable and persist between deploys - Make sure the path your SQLite database writes to doesn't get wiped out between deploys. Fly will set up a persistent `/data` volume for you if it detects you're running a Rails SQLite application, which at least keeps your data around between deploys.
- A few seconds of connection queuing between deploys is OK - When the Rails application is deployed and the server restarts, your application will technically be down. The Fly.io proxy will queue connections until the health checks on the new instance are passing. Once the server is back up and running, the queued connections that haven’t timed out will be sent to the server to fulfill the requests. Your users experience this as their browser taking a few seconds longer to load your website.
- You want to reduce complexity and costs - Running on Fly.io in production requires fewer servers and less monitoring, which can be a great way to keep your application stack simple and costs down.
Fortunately, Fly.io and Litestack provide a reasonable set of defaults that make your small or hobby Rails app deployable to production without you having to worry much about the issues above. When your application outgrows running SQLite on one instance, Fly.io is there for the next step with multiple solutions, including LiteFS, Postgres, or bigger machines with more storage, memory, and CPU cores.