Incoming! 6PN Private Networks

More often than not, modern applications are really ensembles of cooperating services, running independently and transacting with each other over the network. At Fly, we’d like it to be not just possible to express these kinds of applications, but pleasant, perhaps even boring.

Up till now, that’s been a hard promise for us to fulfill, because services deployed on Fly ran as strangers to each other. You could arrange to have a front-end cache service talk to a backend app service, but they’d need to rendezvous through public IP addresses. More frustratingly, you’d need to secure their connection somehow, and the best answer to that is usually mTLS and certificates. Ack! Thbhtt!

It shouldn’t be this hard. Fly is fully connected through a WireGuard mesh joining every point in our network where services can run. We already promise a secure transport for your packets. You might derive satisfaction from running your own CA, and if that’s your thing, we’re not here to judge. But you shouldn’t have to. And now you don’t.

Introducing 6PN

6PN (for “IPv6 Private Networking”) is our answer to the basic “VPC” feature most cloud providers offer. It’ll soon be on by default and requires no additional configuration. A 6PN network connects all the applications in a Fly organization.

Every instance of every application in your organization now has an additional IPv6 address — its “6PN address”, listed in /etc/hosts as fly-local-6pn. That address is reachable only from within your organization. Bind the services you want to run privately to it.

It’s pretty inefficient to connect two IPv6 endpoints by randomly guessing IPv6 addresses, so we use the DNS to make some introductions. Each of your Fly apps now has an internal DNS zone. If your application is fearsome-bagel-43, its DNS zone is fearsome-bagel-43.internal — that name resolves to all the IPv6 6PN addresses deployed for the application. You can find hosts by region: nrt.fearsome-bagel-43.internal resolves to your instances in Japan. You can find all the regions your application runs in: the TXT record at regions.fearsome-bagel-43.internal. And you can find the “sibling” apps in your organization with the TXT record at _apps.internal.
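On a Fly instance, all of those lookups are ordinary DNS queries. Here’s a sketch using dig; the quoted, comma-separated TXT format and the txt_to_lines helper name are assumptions made for this example:

```shell
# The lookups above, as dig queries (run from inside a Fly instance):
#   dig aaaa fearsome-bagel-43.internal +short       # all instances
#   dig aaaa nrt.fearsome-bagel-43.internal +short   # just the Tokyo instances
#   dig txt regions.fearsome-bagel-43.internal +short
#   dig txt _apps.internal +short

# txt_to_lines: turn dig's quoted, comma-separated TXT output into one
# item per line (helper name and record format are assumptions).
txt_to_lines() {
  tr -d '"' | tr ',' '\n'
}

# e.g.: dig txt regions.fearsome-bagel-43.internal +short | txt_to_lines
```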

To fully enable this feature, you need to add a snippet of config to your fly.toml:

  private_network = true

Some Examples

Caching services

Let’s say we want to run a high-capacity sharded nginx cache. We can create an almost-vanilla nginx.conf with an upstream for our cache nodes:

upstream shards {
  hash "${scheme}fearsome-bagel-43${request_uri}" consistent;
  # SHARDS
}

server {
  listen 8080 default_server;
  location / {
    proxy_pass http://shards/;
  }
}

In the Dockerfile for this application, we can include a script that looks up the 6PN addresses for our application — in bash, you can just use dig aaaa fearsome-bagel-43.internal +short to get them. Substitute them into nginx.conf as server lines:

server [fdaa:0:1:a01:a0a:dead:beef:2]:8080;

… and then reload nginx. The consistent hashing feature in nginx balances traffic across your shards, and minimizes disruption as instances join and leave.
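That substitution script can stay small. Here’s a sketch; to_servers is a helper name invented for this example, and it assumes dig’s +short output of one IPv6 address per line:

```shell
# to_servers: turn 6PN addresses (one per line, as printed by
# `dig aaaa fearsome-bagel-43.internal +short`) into nginx server lines.
to_servers() {
  awk '{ printf "  server [%s]:8080;\n", $1 }'
}

# A refresh pass might then look like:
#   dig aaaa fearsome-bagel-43.internal +short | to_servers > /tmp/shards
#   sed '/# SHARDS/r /tmp/shards' nginx.conf.tmpl > /etc/nginx/nginx.conf
#   nginx -s reload
```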


Here’s a simpler example: let’s run a Redis server for the apps in our organization. We can start from the standard Redis Dockerfile — FROM redis, and write a trivial

redis-server --bind $(grep fly-local-6pn /etc/hosts | awk '{print $1}')

Create a Fly app, like redis-bagel-43 (I don’t know what it is with me today). The rest of your apps will see it once it’s deployed, as redis-bagel-43.internal.
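That address extraction is worth pulling into a tiny helper, since other entrypoint scripts will want it too. A sketch — local_6pn is a name made up here; it reads a hosts-format file and prints the address on the fly-local-6pn line:

```shell
# local_6pn: print the 6PN address from a hosts-format file, i.e. the
# first field of the line that names fly-local-6pn.
local_6pn() {
  grep fly-local-6pn "$1" | awk '{ print $1 }'
}

# Entrypoint usage, as in the command above:
#   redis-server --bind "$(local_6pn /etc/hosts)"
```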

The nice thing about this is that Redis is default-locked to your organization, just as a consequence of how 6PN works. You could set up TLS certificates to authenticate clients, but if you’re not doing something elaborate with your organization, there’s probably no need.

Messaging with NATS

Or, how about linking up all your applications with a global messaging fabric? You could use that to build a chat app, or to ship logs, or to build event-driven applications. Once again: we can use an almost-verbatim vendor Dockerfile:

FROM nats:2.1.9-scratch
ADD nats.conf /etc/nats.conf
CMD ["-c", "/etc/nats.conf"]

And the only interesting part of that configuration:

cluster: {
    host: "fly-local-6pn",
    routes: [
        "nats://nats-bagel-43.internal:6222"
    ]
}

NATS will configure itself with available peers, and your other applications can get to it at nats-bagel-43.internal.

Behind The Scenes

I’ll take a second to explain a bit about how this works. Skip ahead if you don’t care!

IPv6 addresses are big. You just won’t believe how vastly, hugely, mind-bogglingly big they are. Actually, no, you will; they’re 16 bytes wide. So, just “pretty big”.

We use that space to embed routing and access information: an identifier for your organization, an identifier for the Fly host that your app is running on, and an identifier for the individual instance of your app. These addresses are assigned directly by our orchestration system. You’ll see them in your instance, on eth0; they’re the addresses starting with fdaa.

We route with a sequence of small BPF programs; they enforce access control (you can’t talk to one 6PN network from another), and do some silly address rewriting footwork so that we can use WireGuard’s cryptokey routing to get packets from one host to another, without running a dynamic routing protocol.

It’s a boring detail, but in case you’re wondering: our service discovery system populates a database on each host that we run a Rust DNS server off of, to serve the “internal” domain. We inject the IP of that DNS server into your resolv.conf — the IP address of that server is always fdaa::3.
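You can see that injection for yourself. A sketch — first_nameserver is a helper invented for this example; it prints the first nameserver entry from a resolv.conf-format file:

```shell
# first_nameserver: print the first nameserver from a resolv.conf-style file.
first_nameserver() {
  awk '/^nameserver/ { print $2; exit }' "$1"
}

# On a Fly instance this prints fdaa::3:
#   first_nameserver /etc/resolv.conf
# And you can query that server directly:
#   dig aaaa fearsome-bagel-43.internal @fdaa::3 +short
```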

Where Our Heads Are At

There’s a theme to the way we build things at Fly (or at least, Kurt has a theme that he keeps hitting us over the head with). We like interesting internals — WireGuard, Firecracker, Rust, eBPF — but boring, simple UX. Things should just work, in the manner you’d hope they would. Managing app ensembles shouldn’t even be close to anyone’s full-time job.

So we’ve kept 6PN as boring as we can. You can make things interesting and weird if you want! Run a Serf cluster between all your apps! Boot up Consul or etcd. Set up a CA. You can make service discovery and security as interesting as you want. We’re going to try to stay out of the way.

You might be able to guess what our next steps are: we’re going to make it boring to connect other networks and services to your private network. Follow us for early announcements of new networking features.
