Your applications’ output to stdout becomes logs in Fly.io. The logs for all VM instances within a given app can be viewed on the app’s Monitoring page. However, these logs are intermingled and don’t persist very long.
If you need or want your logs aggregated in one spot, better organized, or persisted longer than we keep them, you can do that! Here’s the easiest way to ship your logs to a location of your choosing.
We provide an application that hooks into Fly.io’s internal log stream. We’ve named it Fly Log Shipper. You run this in your Fly.io organization like any other application!
You can select one or more items from the list of supported Providers (sinks) and configure the application to run those sinks.
Each provider just needs some environment variables (or secrets) set for them to work.
Here’s an example. To ship logs to Logtail, you would do the following:
```shell
# Make a directory for your log shipper app
mkdir logshipper
cd logshipper

# Create the app, but don't deploy just yet
fly launch --no-deploy --image ghcr.io/superfly/fly-log-shipper:latest

# Set some secrets. The secret / env var you set
# determines which "sinks" are configured
fly secrets set ORG=personal
fly secrets set ACCESS_TOKEN=$(fly auth token)
fly secrets set LOGTAIL_TOKEN=<token provided by logtail source>
```
You can configure as many providers as you’d like by adding more secrets. The secrets needed are determined by which provider(s) you want to use.
Before launching your application, you should edit the generated fly.toml file and delete the entire [[services]] section. Replace it with this:

```toml
[[services]]
  http_checks = []
  internal_port = 8686
```
Then you can deploy it:
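```shell
fly deploy
```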
By default, the log shipper gets logs from every application running within your organization (which organization is set by the ORG secret/environment variable). To narrow that down, you can set a SUBJECT environment variable in your instance of the Fly Log Shipper. That can be set as a secret, or as an environment variable in your fly.toml file’s [env] section.
Subjects are in the format `logs.<app_name>.<region>.<instance_id>`. You can set the Log Shipper to narrow down to a specific instance, a specific region, and/or a specific application.
There are two wildcards you can use:

- `*` wildcards go between strings and can be used multiple times
- `>` wildcards go at the end of a string and can be used once
For example, to only ship logs for an application named sandwich, you would set the SUBJECT environment variable like so (in your Log Shipper’s fly.toml):

```toml
[env]
  SUBJECT = "logs.sandwich.>"
```
This uses the `>` wildcard to say to grab all logs from the sandwich application, no matter what region or instance they came from.
Here’s another example:

```toml
[env]
  SUBJECT = "logs.*.dfw.>"
```

This SUBJECT says to grab logs from all instances of any application hosted in the dfw region.
Depending on your provider (or your preferences), it may be necessary to customize the Vector configuration. This is done with a vector.toml configuration file and, thanks to Machine files, it’s as simple as copying the source vector.toml to a local directory, modifying it according to your requirements, then saving it and redeploying:

```shell
fly deploy --file-local="/etc/vector/vector.toml=/path/to/local/vector.toml"
```
That’s it! The baked-in config file is overwritten, and Vector will use your modified config.
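For instance, you might add an extra sink to your local copy of vector.toml. The snippet below is a sketch using Vector’s standard file sink; the `inputs` name is a placeholder and must be replaced with the name of a source or transform actually defined in the shipper’s config:

```toml
# Hypothetical sink added to a local copy of vector.toml:
# writes each log event as JSON lines to a file on the Machine.
[sinks.local_file]
type = "file"
# "inputs" must reference a source/transform defined in the shipped
# config -- "log_json" here is a placeholder, not the real name.
inputs = ["log_json"]
path = "/var/log/fly/%Y-%m-%d.log"
encoding.codec = "json"
```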
Fly.io ships logs through a NATS stream. This is available to all of your applications via
nats://[fdaa::3]:4223, which is where the Log Shipper grabs the logs.
The Vector configuration the Log Shipper uses to grab those logs can be seen in its repository.
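You can also peek at the stream directly. The sketch below assumes you have the NATS CLI available on a Machine inside your organization’s private network, and that the endpoint authenticates with your org slug and a Fly.io access token (the same values the Log Shipper takes via its ORG and ACCESS_TOKEN secrets):

```shell
# Sketch: subscribe to the internal log stream from inside your org's
# network. The auth scheme here is an assumption based on the ORG and
# ACCESS_TOKEN secrets the Log Shipper uses.
nats sub --server "nats://[fdaa::3]:4223" \
  --user "personal" --password "$(fly auth token)" \
  "logs.sandwich.>"
```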
You can contribute to (or, you know, fork) this repository to add providers (sinks) of your own!
If you would like to run more than one Machine for high availability, the NATS endpoint supports subscription queues to ensure messages are only sent to one subscriber of the named queue. Set the QUEUE secret to an arbitrary queue name if you want to run multiple log processes for HA while avoiding duplicate messages being shipped:
```toml
[env]
  QUEUE = "org-logs"
```
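With QUEUE set, you can then run additional Machines of the shipper, for example:

```shell
# Scale the log shipper app to two Machines for HA
fly scale count 2
```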