Fly Kubernetes does more now

Frankie, or Linda (I can't tell who's who), the hot-air balloon mascot, at the helm of a sailing ship. The ocean is rolling with waves, and two shark fins are visible in pursuit of the ship. There are some unreasonably happy-looking smaller fish leaping out of the water around the ship. Frankie or Linda looks pretty chill and content.
Image by Annie Ruygt

Eons ago, we announced we were working on Fly Kubernetes. It drummed up enough excitement to prove we were heading in the right direction. So, we got hard at work to get from barebones “early access” to a beta release. We’ll be onboarding customers to the closed beta over the next few weeks. Email us at and we’ll hook you up.

Fly Kubernetes is the “blessed path”™️ to using Kubernetes backed by Fly.io infrastructure. Or, in simpler terms, it is our managed Kubernetes service. We take care of the complexity of operating the Kubernetes control plane, leaving you with the unfettered joy of deploying your Kubernetes workloads. If you love Fly.io and K8s, this product is for you.

What even is a Kubernete?

So how did this all come to be—and what even is a Kubernete?

You can see more fun details in Introducing Fly Kubernetes.

If you wade through all the YAML and CNCF projects, what’s left is an API for declaring workloads and how they should be accessed.
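Stripped down to that core, the API amounts to two kinds of declarations: a workload, and how it should be accessed. A minimal, generic sketch (the `nginx` image and names here are just placeholders):

```yaml
# A workload…
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
---
# …and how it should be accessed.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
```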

But that’s not what people usually talk / groan about. It’s everything else that comes along with adopting Kubernetes: a container runtime (CRI), networking between workloads (CNI), which leads to DNS (CoreDNS). Then you layer on Prometheus for metrics and whatever the logging daemon du jour is. Now you get to debate which Ingress—strike that—Gateway API to deploy, and if the next thing has anything to do with a Service Mess, then as they like to say where I live, “bless your heart”.

Finally, there’s capacity planning. You’ve got to decide where your Nodes run, how many you need, and what they look like in order to configure and run your workloads.

When we began thinking about what a Fly Kubernetes Service could look like, we started from first principles, as we do with most everything here. The best way we can describe it is the scene from Iron Man 2 when Tony Stark discovers a new element. As he’s looking at the knowledge left behind by those that came before, he starts to imagine something entirely different and more capable than could have been accomplished previously. That’s what happened to JP, but with K3s and Virtual Kubelet.

OK then, WTF (what’s the FKS)?

We looked at what people need to get started—the API—and then started peeling away the noise, filling in the gaps to connect everything together. Here’s how this looks currently:

  • Containerd/CRI → flyd + Firecracker + our init: our system transmogrifies Docker containers into Firecracker microVMs
  • Networking/CNI → Our internal WireGuard mesh connects your pods together
  • Pods → Fly Machines VMs
  • Secrets → Secrets, only not the base64’d kind
  • Services → The Fly Proxy
  • CoreDNS → CoreDNS (to be replaced with our custom internal DNS)
  • Persistent Volumes → Fly Volumes (coming soon)

Now…not everything is a one-to-one comparison, and we explicitly did not set out to support every possible configuration. We aren’t dealing with resources like NetworkPolicy and init containers, though we’re not completely ignoring them either. By mapping many of Kubernetes’ core primitives to Fly.io resources, we’re able to focus on continuing to build the primitives that make our cloud better for workloads of all shapes and sizes.

A key thing to notice above is that there’s no “Node”.

Virtual Kubelet plays a central role in FKS. It’s magic, really. A Virtual Kubelet acts as if it’s a standard Kubelet running on a Node, eager to run your workloads. However, there’s no Node backing it. It instead behaves like an API, receiving requests from Kubernetes and transforming them into requests to deploy on a cloud compute service. In our case, that’s Fly Machines.

So what we have is Kubernetes calling out to our Virtual Kubelet provider, a small Golang program we run alongside K3s, to create and run your pod. It creates your pod as a Fly Machine, via the Fly Machines API, deploying it to any underlying host within that region. This shifts the burden of managing hardware capacity from you to us. We think that’s a cool trick—thanks, Virtual Kubelet magic!


You can deploy your workloads (including GPUs) across any of our available regions using the Kubernetes API.

You create a cluster with flyctl:

fly ext k8s create --name hello --org personal --region iad

When a cluster is created, it has the standard default namespace. You can inspect it:

kubectl get ns default --show-labels
NAME      STATUS   AGE   LABELS
default   Active   20d   …

The label shows the name of the Fly App that corresponds to your cluster.

It would seem appropriate to deploy the Kubernetes Up And Running demo here, but since your pods are connected over an IPv6 WireGuard mesh, we’re going to use a fork with support for IPv6 DNS.

kubectl run kuard \
  --image=jipperinbham/kuard-amd64:blue \
  --labels="app=kuard-fks"
And you can see its Machine representation via:

fly machine list --app fks-default-7zyjm3ovpdxmd0ep
ID              NAME        STATE   REGION  IMAGE                           IP ADDRESS                      VOLUME  CREATED                 LAST UPDATED            APP PLATFORM    PROCESS GROUP   SIZE
1852291c46ded8  kuard       started iad     jipperinbham/kuard-amd64:blue   fdaa:0:48c8:a7b:228:4b6d:6e20:2         2024-03-05T18:54:41Z    2024-03-05T18:54:44Z                                    shared-cpu-1x:256MB

This is important! Your pod is a Fly Machine! While we don’t yet support every kubectl feature, flyctl tooling will “just work” in the cases where we don’t. So, for example, we don’t have kubectl port-forward or kubectl exec yet, but you can use flyctl to forward ports and get a shell into a pod.
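For example (assuming a recent flyctl; check fly help for the exact flags in your version), using the app and Machine ID from the listing above:

```shell
# Roughly equivalent to `kubectl port-forward`: proxy local port 8080
# to the pod's Machine over WireGuard.
fly proxy 8080:8080 --app fks-default-7zyjm3ovpdxmd0ep

# Roughly equivalent to `kubectl exec -it ... -- sh`: open a shell
# in the pod's Machine.
fly ssh console --app fks-default-7zyjm3ovpdxmd0ep --machine 1852291c46ded8
```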

Expose it to your internal network using the standard ClusterIP Service:

kubectl expose pod kuard \
  --name=kuard \
  --port=8080 \
  --target-port=8080
ClusterIP Services work natively, and internal DNS supports them. Within the cluster, CoreDNS works too.

To access this Service locally via flycast, first get connected to your org’s 6PN private WireGuard network. Then have kubectl describe the kuard Service:

kubectl describe svc kuard
Name:              kuard
Namespace:         default
Labels:            app=kuard-fks
Annotations: configured
Selector:          app=kuard-fks
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv6
IP:                fdaa:0:48c8:0:1::1a
IPs:               fdaa:0:48c8:0:1::1a
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         [fdaa:0:48c8:a7b:228:4b6d:6e20:2]:8080
Session Affinity:  None
Events:            <none>

You can pull out the Service’s IP address from the above output, and get at the KUARD UI using that: in this case, http://[fdaa:0:48c8:0:1::1a]:8080.

Using internal DNS: http://<service_name>.svc.<app_name>.flycast:8080. Or, in our example: http://kuard.svc.fks-default-7zyjm3ovpdxmd0ep.flycast:8080.

And finally CoreDNS: <service_name>.<namespace>.svc.cluster.local resolves to the fdaa IP and is routable within the cluster.

Get in on the FKS beta

Email us at


The Fly Kubernetes Service is free during the beta. Fly Machines and Fly Volumes you create with it will cost the same as for your other projects. It’ll be $75/mo per cluster after that, plus the cost of the other resources you create.

Today and the future

Today, Fly Kubernetes supports only a portion of the Kubernetes API. You can deploy pods using Deployments/ReplicaSets, and pods can communicate via Services using the standard K8s DNS format. Ephemeral and persistent volumes are not yet supported.
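As a quick illustration (a sketch reusing the kuard fork image from earlier, not an official example), a Deployment you could kubectl apply might look like this. Note the single container per pod, since multi-container pods aren’t supported yet:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kuard-fks
  template:
    metadata:
      labels:
        app: kuard-fks
    spec:
      containers:
        - name: kuard
          image: jipperinbham/kuard-amd64:blue
          ports:
            - containerPort: 8080
```

Each replica shows up as its own Fly Machine in fly machine list.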

The most notable absences are multi-container pods, StatefulSets, network policies, horizontal pod autoscaling, and emptyDir volumes. We’re working on supporting autoscaling and emptyDir volumes in the coming weeks, and multi-container pods in the coming months.

If you’ve made it this far and are eagerly awaiting your chance to tell us and the rest of the internet “this isn’t Kubernetes!”, well, we agree! It’s not something we take lightly. We’re still building, and conformance tests may be in FKS’s future. We’ve made a deliberate decision to care about fast-launching VMs as the one and only way to run workloads on our cloud. And we also know enough of our customers would like to use the Kubernetes API to create a fast-launching VM in the form of a Pod, and that’s where this story begins.