Deploy MinIO Object Storage to Fly.io

Object storage is useful when your apps need access to unstructured data like images, videos, or documents. Amazon’s S3 is an obvious solution, but you can also host your own S3-compatible object storage using the AGPLv3-licensed MinIO.

We’re going to set up a single instance of MinIO on Fly.io, backed by Fly Volume storage.

A note: it’s been observed that some features of the MinIO browser console are memory-intensive enough to crash a VM with 256MB of RAM. For even small-scale production use you’ll probably need a larger (non-free) VM. 512MB seems to work for poking around and uploading small objects. (For perspective, MinIO docs recommend a minimum of 128GB RAM for deployment at scale.)

This is a guide to a demonstration deployment of the official standalone MinIO Docker image, and isn’t tuned in any way. Check out the MinIO docs for more sophisticated use.


The Dockerfile

We’ll use the official minio/minio Docker image, but with a custom start command. Here’s the Dockerfile to build that:

FROM minio/minio

CMD [ "server", "/data", "--console-address", ":9001"]

This just takes the MinIO image and runs it with the command server /data --console-address :9001, so that the server runs using the /data directory for storage. We’ll get to that directory in a moment. The --console-address flag specifies which port to run the web-based admin console on. Otherwise, MinIO chooses a random port. See below for how to access this panel after MinIO has been deployed.
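If you want to smoke-test the image locally before deploying (this assumes Docker is installed; it isn’t required for the Fly deployment), you can build and run it directly:

```shell
# Build and run the image locally. With no MINIO_ROOT_USER/MINIO_ROOT_PASSWORD
# set, MinIO falls back to the default minioadmin/minioadmin credentials
# (and warns you about it). Data written to /data is lost when the container exits.
docker build -t my-minio .
docker run --rm -p 9000:9000 -p 9001:9001 my-minio
```

Then visit localhost:9001 for the console.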

Initializing the app

Initialize the app with fly launch. Here we use flags to specify an app name and the organization it belongs to. The most important flag is --no-deploy, because we still have Things to Do before we deploy:

fly launch --name my-minio --org personal --no-deploy
Creating app in /Users/jfent/fly-tests/my-minio
Scanning source code
Detected a Dockerfile app
Some regions require a paid plan (bom, fra, maa).
See to set up a plan.

? Choose a region for deployment: London, United Kingdom (lhr)
App will use 'lhr' region as primary

Created app 'my-minio' in organization 'personal'
Admin URL:
Wrote config file fly.toml
Validating /Users/jfent/fly-tests/my-minio/fly.toml
Platform: machines
✓ Configuration is valid
Your app is ready! Deploy with `flyctl deploy`

(Hold your horses. Your app is not, in fact, ready.)

(Un)configure networking

There’s no need for this app to be accessible from the public internet if you run your object storage within the same Fly IPv6 private network as the app(s) that want to connect to it.

Delete the [http_service] block in fly.toml. Now the Fly proxy won’t pass any requests to the app from outside your private network.

Disk storage

The application’s VM and image are ephemeral: when the app is stopped or moved, it loses any data written to its file system. For persistent storage, provision a volume, with a name and a size, in the same region as the app.

fly vol create miniodata --region lhr
Warning! Every volume is pinned to a specific physical host. You should create two or more volumes per application to avoid downtime. Learn more at
? Do you still want to use the volumes feature? Yes
                  ID: vol_6vjd0dkm5ny038mv
                Name: miniodata
                 App: my-minio
              Region: lhr
                Zone: cbfa
             Size GB: 3
           Encrypted: true
          Created at: 13 Nov 23 12:07 UTC
  Snapshot retention: 5

We didn’t specify a size, so we got the default: 3GB.

Tell the app to mount this volume onto its /data directory by appending a [mounts] section to fly.toml:

[mounts]
  source = "miniodata"
  destination = "/data"


Set secrets

MinIO uses the environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD for administrator login credentials. Instead of using normal environment variables, use fly secrets set to pass these sensitive values to the server in encrypted form. They’ll only be decrypted at runtime.
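For example, both secrets can be set in one command (the placeholder values are yours to choose):

```shell
# Store the MinIO admin credentials as encrypted Fly Secrets;
# they become environment variables inside the VM at runtime.
fly secrets set MINIO_ROOT_USER=<ROOT-USER> MINIO_ROOT_PASSWORD=<ROOT-PASS>
```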



Now the app is, in fact, ready to deploy:

fly deploy

Accessing the web-based MinIO admin panel

MinIO has a web interface. It’s served on the port specified by --console-address in the Dockerfile, which we set to 9001. We can access this panel over our private WireGuard network.

One way to do this is to set up a regular WireGuard tunnel, and visit http://my-minio.internal:9001 in the browser.

A simpler way is to use flyctl’s user-mode WireGuard to proxy a local port to the app:

fly proxy 9001 
Proxying local port 9001 to remote [my-minio.internal]:9001

Leave this running and visit localhost:9001 with the browser.

Log into the admin panel with the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values set using Fly Secrets above, and you can create buckets, do administration, and upload and download files right from the browser.

Using the mc MinIO Client

You can connect to your MinIO with the mc command-line MinIO Client.

MinIO listens for non-browser connections on port 9000 by default. If you’re connecting using fly proxy, you’ll have to proxy port 9000 to use mc.
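That proxy command looks just like the one for the console, only on port 9000:

```shell
# Proxy local port 9000 to the app's S3 API port over user-mode WireGuard
fly proxy 9000
```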

You can set up an alias to connect more conveniently to your MinIO.

If you’re using fly proxy:

mc alias set proxy-to-minio http://localhost:9000 <ROOT-USER> <ROOT-PASS>

If you’re hooked up with WireGuard:

mc alias set miniotest http://my-minio.internal:9000 <ROOT-USER> <ROOT-PASS>

Test the alias by checking your MinIO’s status:

mc admin info miniotest

Create a new non-admin user. This user won’t have full admin privileges; we’ll grant it a readwrite policy so it can create and save files to buckets.

mc admin user add miniotest <NEW-USER> <NEW-USER-PASS>

At this point <NEW-USER> exists, but has no policy attached, so it can’t access anything yet. Grant it read-write access:

mc admin policy set miniotest readwrite user=<NEW-USER>
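With that in place, the new user can work with buckets through its own alias. A quick sketch (the alias name miniouser and bucket my-bucket are just example names):

```shell
# Register an alias for the new user, then exercise its read-write access
mc alias set miniouser http://my-minio.internal:9000 <NEW-USER> <NEW-USER-PASS>
mc mb miniouser/my-bucket            # create a bucket
mc cp ./photo.jpg miniouser/my-bucket/   # upload a file
mc ls miniouser/my-bucket            # list its contents
```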

Multi-node deployment

What if you want high availability? You can deploy MinIO in a multi-node configuration!

Each node needs to know where to find the other nodes in the configuration. This can be done using either hostnames or IP addresses, but MinIO requires that they be sequential. We’ll use hostnames, since we can modify /etc/hosts to guarantee they’re sequential. Guaranteeing sequential IP addresses is much trickier!

There’s going to be a bit of back-and-forth here since we don’t have any private IPs to put into /etc/hosts until we create the Machines, but we need a Dockerfile to create them in the first place. Bear with me!


Update the Dockerfile

We need to change the CMD in the Dockerfile to declare that we intend to run a multi-node deployment:

FROM minio/minio

ENV MINIO_SERVER_URL="http://minio1.local:9000"

CMD [ "server", "http://minio{1...3}.local:9000/data", "--console-address", ":9001"]

We’re using expansion notation to tell MinIO that there will be 3 nodes in the cluster.
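As a sanity check on the notation, here’s a small Python sketch (not MinIO’s own code) of how a pattern like "minio{1...3}.local" expands into sequential hostnames:

```python
import re

def expand_ellipsis(pattern):
    """Expand a MinIO-style ellipsis pattern, e.g. 'minio{1...3}.local'."""
    m = re.search(r"\{(\d+)\.\.\.(\d+)\}", pattern)
    if not m:
        return [pattern]  # no ellipsis: the pattern is a single host
    lo, hi = int(m.group(1)), int(m.group(2))
    # Substitute each number in the inclusive range into the pattern
    return [pattern[:m.start()] + str(i) + pattern[m.end():] for i in range(lo, hi + 1)]

print(expand_ellipsis("http://minio{1...3}.local:9000/data"))
# → ['http://minio1.local:9000/data', 'http://minio2.local:9000/data', 'http://minio3.local:9000/data']
```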

Disk storage

Create 3 volumes, one for each node:

fly vol create miniodata --count 3 --region lhr
                ID: vol_8r6kj28xq7e7qw14
              Name: miniodata
               App: my-minio
            Region: lhr
              Zone: 81b8
           Size GB: 3
         Encrypted: true
        Created at: 13 Nov 23 12:38 UTC
Snapshot retention: 5
                ID: vol_9vle38y30eog2p8r
              Name: miniodata
               App: my-minio
            Region: lhr
              Zone: d88f
           Size GB: 3
         Encrypted: true
        Created at: 13 Nov 23 12:38 UTC
Snapshot retention: 5
                ID: vol_7r11e3jk1p8g17pr
              Name: miniodata
               App: my-minio
            Region: lhr
              Zone: cbfa
           Size GB: 3
         Encrypted: true
        Created at: 13 Nov 23 12:38 UTC
Snapshot retention: 5

Initial deployment

You need to deploy and scale the app in order for private IPs to be assigned. First, deploy:

fly deploy

…then scale up to 3 Machines:

fly scale count 3

Private networking

One of the great things about Fly.io is the private networking we get for free. We’ll use our Machines’ private IPs to define hostnames in /etc/hosts which our nodes will use to communicate with each other.

Get the private IPs of your Machines:

fly ips private
ID              REGION  IP
4d89703f427258  lhr     fdaa:3:7c6d:a7b:8f:143c:79a5:2
080e475b1d2758  lhr     fdaa:3:7c6d:a7b:172:ac21:8674:2
683d479cde4368  lhr     fdaa:3:7c6d:a7b:172:21da:b0bc:2

Ideally we’d update /etc/hosts directly in the Dockerfile, but Docker manages that file at container start, so changes made at build time won’t persist. Instead, we’ll create a start script (we’ll call it start-minio.sh) that adds our host definitions at runtime:


#!/bin/sh

cat <<EOF >> /etc/hosts
fdaa:3:7c6d:a7b:8f:143c:79a5:2  minio1.local
fdaa:3:7c6d:a7b:172:ac21:8674:2 minio2.local
fdaa:3:7c6d:a7b:172:21da:b0bc:2 minio3.local
EOF

# This entrypoint script comes with the minio/minio image
/usr/bin/docker-entrypoint.sh server http://minio{1...3}.local:9000/data --console-address :9001

Final deployment

Now update your Dockerfile to copy in the start script and execute it:

FROM minio/minio

ENV MINIO_SERVER_URL="http://minio1.local:9000"

COPY start-minio.sh /start-minio.sh

ENTRYPOINT [ "/bin/sh" ]

CMD [ "/start-minio.sh" ]

When we fly deploy, we should have a working multi-node MinIO deployment!

If you want to see it in action, try proxying to two different Machines. In one terminal:

fly proxy 9001:9001 -s
? Select instance: (fdaa:3:7c6d:a7b:172:21da:b0bc:2)
Proxying local port 9001 to remote [fdaa:3:7c6d:a7b:172:21da:b0bc:2]:9001

…and in a different terminal:

fly proxy 9002:9001 -s
? Select instance: (fdaa:3:7c6d:a7b:172:ac21:8674:2)
Proxying local port 9002 to remote [fdaa:3:7c6d:a7b:172:ac21:8674:2]:9001

Visit http://localhost:9001 and http://localhost:9002 in separate browser tabs, and log in to both with your MINIO_ROOT_USER and MINIO_ROOT_PASSWORD.

Create a bucket in one tab and then refresh the other tab. If everything is working, you should see the bucket in both!

What’s next?

This was a basic guide for getting an instance of MinIO deployed on Fly.io, scratching the surface of what you might want to do with an S3-compatible object store. You can use your MinIO bucket storage right from the web interface. Your apps can talk to it from within the same private network. The MinIO docs cover more advanced usage.
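For example, an app in the same private network can talk to MinIO with any S3-compatible client. Here’s a sketch using Python’s boto3 library (an assumption on our part; any S3 SDK works, and my-bucket and the credentials are placeholders). It needs the MinIO server reachable, so run it from inside the private network or behind fly proxy:

```python
import boto3  # third-party S3 SDK; MinIO speaks the S3 API

# Point the client at MinIO instead of AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="http://my-minio.internal:9000",  # or http://localhost:9000 via fly proxy
    aws_access_key_id="<NEW-USER>",
    aws_secret_access_key="<NEW-USER-PASS>",
)

# Create a bucket, upload a file, and count the objects in it.
s3.create_bucket(Bucket="my-bucket")
s3.upload_file("photo.jpg", "my-bucket", "photo.jpg")
print(s3.list_objects_v2(Bucket="my-bucket")["KeyCount"])
```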