MicroVM vs Container: How the Kernel Boundary Changes Everything

Running untrusted code in a shared environment forces a real architectural question: does your workload need its own kernel, or is process-level isolation enough?

You’re building a platform that runs user-submitted code. Maybe it’s a CI runner, an AI code execution sandbox, or a multi-tenant API worker. You need to decide how to isolate each workload from the others and from the host. The instinct is to reach for containers — they’re fast, familiar, and cheap to run. But then you start thinking about what actually separates one tenant’s process from another, and the answer is: namespaces, cgroups, and a shared kernel. That’s the microVM vs container question in its sharpest form. Not “which is newer” or “which has better tooling,” but “what is the actual trust boundary, and is it strong enough for what I’m running?”

Getting this right has real consequences. Choose containers when you need microVMs, and a kernel exploit or a container escape puts every workload on that host at risk. Choose microVMs when containers would have been fine, and you’re paying more in memory and startup time than the workload justifies. Neither is universally correct. The right answer depends on whether your threat model requires a VM-style hardware boundary or whether process isolation is a reasonable fit.

This page covers how containers and microVMs actually work under the hood, where each one breaks down, and a practical decision framework for choosing between them.

Key takeaways

  • A container isolates a process using Linux namespaces and cgroups but shares the host kernel, meaning all containers on a host run on the same kernel and a kernel-level vulnerability affects all of them.
  • A microVM runs a minimal virtual machine with its own kernel per instance, creating a hardware-enforced boundary that is much harder to escape, at the cost of higher memory overhead and slightly longer boot times.
  • If you’re running untrusted code, multi-tenant workloads, or anything where one tenant’s blast radius must not reach another’s, microVMs give you the isolation model you actually need rather than the one that’s easiest to set up.
  • A well-chosen isolation model means a compromised workload stays contained: a container escape doesn’t reach the host, and a microVM guest can’t touch the hypervisor or neighboring VMs.

What is a microVM?

A microVM is a minimal virtual machine that runs its own kernel, has its own memory space, and communicates with the host through a hypervisor boundary rather than directly through the host kernel. The “micro” part means it strips out everything a traditional VM carries: no BIOS emulation, no virtual GPU, no legacy device support. Just enough to boot a kernel and run a workload.

The most well-known implementation is Firecracker, which Amazon built for Lambda and Fargate. Firecracker boots a minimal Linux kernel in under a second and exposes a small, audited API surface to the host. The guest kernel handles syscalls from the workload. The host kernel never sees them directly. If the workload exploits a kernel vulnerability, it’s exploiting the guest kernel inside the VM. Getting from there to the host requires breaking through the hypervisor boundary, which is a much harder problem.
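
To make that API surface concrete, here is a minimal Go sketch of driving a Firecracker process over its Unix-socket API: size the machine, point it at a kernel and root filesystem, and start the instance. The socket path, image paths, and sizing are placeholder assumptions; the endpoints mirror Firecracker's published API, but treat the payloads as illustrative rather than a drop-in implementation.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net"
	"net/http"
)

// put sends a JSON body to the Firecracker API server listening on a Unix socket.
func put(client *http.Client, path, body string) error {
	req, err := http.NewRequest(http.MethodPut, "http://localhost"+path, bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("%s: unexpected status %s", path, resp.Status)
	}
	return nil
}

func main() {
	// Assumes a Firecracker process was started with:
	//   firecracker --api-sock /tmp/firecracker.sock
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/tmp/firecracker.sock")
			},
		},
	}

	steps := []struct{ path, body string }{
		// Size the guest: 1 vCPU, 128 MiB of memory.
		{"/machine-config", `{"vcpu_count": 1, "mem_size_mib": 128}`},
		// Guest kernel and boot arguments (paths are placeholders).
		{"/boot-source", `{"kernel_image_path": "/images/vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}`},
		// Root filesystem attached as a block device.
		{"/drives/rootfs", `{"drive_id": "rootfs", "path_on_host": "/images/rootfs.ext4", "is_root_device": true, "is_read_only": false}`},
		// Boot the microVM.
		{"/actions", `{"action_type": "InstanceStart"}`},
	}
	for _, s := range steps {
		if err := put(client, s.path, s.body); err != nil {
			panic(err)
		}
	}
	fmt.Println("microVM started")
}
```

Everything the guest can ask of the host passes through this small, auditable surface, which is the point of the design.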

This is what “hardware-enforced isolation” means in practice. The CPU’s virtualization extensions (Intel VT-x, AMD-V) enforce the boundary between guest and host. A process inside the microVM cannot directly address host memory and cannot make host syscalls. The attack surface between the workload and the host is dramatically smaller than in a container.

A container, by contrast, is not a lightweight VM. It’s a process (or a group of processes) running on the host kernel, wrapped in Linux primitives that make it look isolated. Namespaces give a process its own view of system resources: its own PID namespace, network namespace, mount namespace, and a few others. Cgroups limit how much CPU, memory, and I/O a container can consume. Together, these two mechanisms create the illusion of isolation. The container thinks it’s alone on the machine. It isn’t.
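
To see those two primitives in action, here is a teaching-sized Go sketch: clone a shell into its own namespaces, then cap it with a cgroup v2 memory limit. It assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup and must run as root; the cgroup name and memory limit are arbitrary placeholders, and a real runtime does much more (pivoting the root filesystem, dropping capabilities, applying seccomp filters).

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"syscall"
)

func main() {
	// Start a shell in its own UTS, PID, mount, and network namespaces.
	// This is the namespace half of container isolation; the process still
	// makes its syscalls against the shared host kernel.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNS | syscall.CLONE_NEWNET,
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// The cgroup half: cap the process tree at 64 MiB of memory (cgroup v2).
	// A real runtime sets the cgroup up before launching the process.
	cg := filepath.Join("/sys/fs/cgroup", "demo-container")
	must(os.MkdirAll(cg, 0o755))
	must(os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0o644))
	must(os.WriteFile(filepath.Join(cg, "cgroup.procs"),
		[]byte(fmt.Sprint(cmd.Process.Pid)), 0o644))

	cmd.Wait()
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```

Notice what is absent: there is no second kernel anywhere. The shell's syscalls still land on the host kernel, which is exactly the boundary examined in the next section.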

How does each isolation model work?

How containers isolate workloads

Every container on a host shares the same kernel. When your container makes a syscall, it goes to the host kernel. If there’s a vulnerability in that kernel or in the syscall interface, a malicious workload can potentially escape the namespace boundary and reach the host or other containers. Container escapes are a real class of vulnerability, not a theoretical one. For workloads you trust (your own application code, your own services), this is usually an acceptable trade-off. For workloads you don’t fully trust, it’s a meaningful risk.

The upside of this architecture is real: containers start in milliseconds, consume very little memory overhead per instance, and pack densely onto a host. For most web apps, background workers, and internal services, containers are the right default.

How microVMs isolate workloads

Each microVM needs its own kernel loaded into memory, its own boot sequence, and its own virtual devices. That adds memory per instance and a small amount of startup latency compared to containers. For workloads where you’re running thousands of instances on a single host, that overhead adds up.

The isolation payoff is the hardware-enforced boundary described earlier: the CPU’s virtualization extensions keep guest and host memory separate, the workload’s syscalls terminate in the guest kernel, and escaping requires breaking the hypervisor rather than exploiting a kernel namespace boundary.
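
In practice, that hardware boundary reaches userspace through KVM, so the first prerequisite for running Firecracker-style microVMs is a usable /dev/kvm on the host. A minimal check, as an illustrative Go sketch:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// /dev/kvm exists and is openable only when the kernel's KVM module is
	// loaded and the CPU's virtualization extensions (VT-x / AMD-V) are
	// available and enabled in firmware.
	f, err := os.OpenFile("/dev/kvm", os.O_RDWR, 0)
	if err != nil {
		fmt.Println("KVM not usable on this host:", err)
		os.Exit(1)
	}
	f.Close()
	fmt.Println("KVM is available; this host can run hardware-isolated microVMs")
}
```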

MicroVM vs container: a practical comparison

Here’s how the two models compare across the dimensions that matter most in practice:

| Dimension | Container | MicroVM |
| --- | --- | --- |
| Boot time | Milliseconds (process fork) | Under a second (kernel boot) |
| Memory overhead per instance | Low (shared kernel, minimal per-process overhead) | Higher (each instance carries its own kernel) |
| Kernel sharing | Shared with host and all other containers | Per-instance kernel, isolated from host |
| Isolation boundary | Namespace and cgroup boundary (software) | Hypervisor boundary (hardware-enforced) |
| Syscall surface exposed to host | Full host kernel syscall interface | Minimal hypervisor API only |
| Escape difficulty | Container escape vulnerabilities exist and are documented | Requires breaking the hypervisor, significantly harder |
| Density on a single host | High | Lower |

The startup latency gap is smaller than it used to be. Modern microVM implementations boot fast enough to handle on-demand workloads, including serving HTTP requests. The memory overhead gap is real and persistent. If you’re running many short-lived instances, the per-instance kernel cost matters.

The isolation gap is also real and persistent. Namespaces are a software boundary enforced by the kernel. The hypervisor boundary is enforced by hardware. These are not equivalent, and no amount of seccomp profiles or AppArmor policies fully closes the gap.

When to use a microVM vs a container

The decision comes down to your threat model and your workload characteristics.

Use containers when:

  • You’re running your own code that you wrote and trust
  • You need maximum density on a host and startup speed is critical
  • You’re running internal services, web apps, or background workers where a shared kernel is an acceptable risk
  • Operational simplicity matters and you want to use standard container tooling (Docker, Kubernetes, etc.)

Use microVMs when:

  • You’re running code you didn’t write, or code submitted by users of your platform
  • You’re building a multi-tenant system where one tenant’s workload must not be able to affect another’s
  • You’re executing AI-generated code, arbitrary scripts, or anything where the input is not fully trusted
  • A security breach in one workload must not be able to reach the host or neighboring workloads
  • You’re operating in a regulated environment where the isolation model needs to be defensible

The footgun here is subtle. Developers often evaluate containers and microVMs on startup time and memory, pick containers because they’re faster and lighter, and then discover later that their threat model actually required stronger isolation. The blast radius of a container escape in a multi-tenant system is the entire host. In a microVM setup, it’s one VM. That difference is worth paying for when the workload justifies it.

One more practical note: microVMs and containers are not mutually exclusive. Some platforms run containers inside microVMs, getting the operational familiarity of container tooling with the isolation guarantees of a VM boundary. This is how AWS Lambda works under the hood.

Common challenges and trade-offs

Memory overhead at scale. Each microVM carries its own kernel in memory. When you’re running hundreds or thousands of instances on a single host, that per-instance cost adds up fast. Containers share the host kernel, so the per-instance memory footprint is much lower. If density is a hard constraint, this trade-off needs to be part of the calculation.
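
As a purely illustrative back-of-envelope calculation (the per-instance figure is an assumption, not a measurement): if each microVM carries on the order of 30 MiB of guest kernel and VMM overhead, 100 instances cost roughly 3 GiB of host memory and 1,000 instances roughly 30 GiB, before any workload memory is allocated. Containers pay the kernel cost once for the entire host.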

Tooling maturity. The container tooling ecosystem (Docker, Kubernetes, container registries, build pipelines) is mature and widely understood. MicroVM tooling is improving but is less standardized. Firecracker has a solid API, but orchestrating microVMs at scale requires more custom work than deploying a container image to a Kubernetes cluster.

Boot time still matters for some workloads. Sub-second boot times are good enough for most on-demand workloads, but if you’re building something that needs to respond to a burst of requests in under 100ms, the kernel boot overhead in a microVM is a real constraint. Containers win on cold-start latency, and that gap doesn’t fully close.

Operational complexity. Running microVMs means you’re closer to the hypervisor layer. Debugging a workload that’s misbehaving inside a microVM is harder than attaching to a container. Logging, tracing, and observability tooling built for containers doesn’t always translate directly.

The false sense of security from containers. The flip side of the trade-off: teams sometimes add layers of seccomp profiles, AppArmor policies, and read-only filesystems to containers and convince themselves they’ve achieved microVM-level isolation. They haven’t. A sufficiently motivated attacker with a kernel exploit doesn’t care about your seccomp profile. If the threat model genuinely requires hardware-enforced isolation, the only answer is a hardware-enforced boundary.

MicroVMs and containers on Fly.io

Fly.io builds on hardware-virtualized containers called Fly Machines. Each Machine runs in its own isolated VM, which means workloads get a VM-style trust boundary without requiring you to manage hypervisors or write infrastructure configuration from scratch. Machines boot fast enough to handle HTTP requests and scale to zero when idle, so you’re not paying for idle capacity.
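
For a sense of what launching one of these VMs looks like programmatically, here is a rough Go sketch that creates a Machine through the Machines REST API. The app name, image, and guest sizing are placeholders, and the payload shape follows Fly’s public API documentation; verify the current field names against the docs before building on it.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Placeholder app name; FLY_API_TOKEN is assumed to hold a valid API token.
	app := "my-sandbox-app"
	body := []byte(`{
		"name": "worker-1",
		"config": {
			"image": "registry-1.docker.io/library/python:3.12-slim",
			"guest": {"cpu_kind": "shared", "cpus": 1, "memory_mb": 256}
		}
	}`)

	req, err := http.NewRequest(http.MethodPost,
		"https://api.machines.dev/v1/apps/"+app+"/machines", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("FLY_API_TOKEN"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// Each created Machine is its own hardware-isolated VM, with the kernel
	// boundary discussed above.
	fmt.Println("machines API responded:", resp.Status)
}
```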

For workloads that need stricter isolation, Fly offers Sprites: hardware-isolated sandboxes designed specifically for running untrusted or AI-generated code. Each Sprite runs in its own isolated environment with dedicated CPU, memory, and a private filesystem. You can checkpoint the environment, run arbitrary code, and restore if something breaks. The isolation model is the point: each Sprite is its own contained environment, so a broken or malicious workload stays inside its boundary.

The networking model follows the same principle. Each sandbox gets its own private network by default, with encryption built in. Isolation is the default, not an add-on you configure after the fact.

Frequently asked questions

What is the main difference between a microVM and a container?

A microVM runs each workload inside a minimal virtual machine with its own separate kernel, while a container shares the host kernel and isolates processes using namespaces and cgroups.

Which offers stronger security isolation, a microVM or a container?

MicroVMs provide stronger isolation because each instance runs its own kernel behind a hardware-enforced hypervisor boundary; containers share the host kernel, so their software-enforced namespace boundary exposes a much larger attack surface to each workload.

Are containers faster to start than microVMs?

Containers start faster and consume fewer resources than microVMs because they avoid the overhead of booting a separate kernel for each instance.

When does using a microVM make more sense than a container?

MicroVMs suit workloads where stronger isolation and a stricter trust boundary outweigh the trade-offs of higher memory usage and longer boot times.

How do microVMs and containers compare in resource efficiency?

Containers use less memory and have lower per-instance overhead than microVMs, making them more resource-efficient when running many workloads on the same host.

What factors matter most when choosing between a microVM and a container?

The microVM vs container decision typically comes down to startup latency, resource efficiency, isolation strength, and the security requirements of the deployment environment.