Am I in a container or a microVM?
This is dedicated to the person who asked me about this after a conference talk earlier this summer. They'd found a great build job script injection that led to a privilege escalation … and then there was nowhere else to go and nothing else to do. It seemed like nothing persisted across runs either, even though it should have, given the configuration we thought we had.
We have limited time to work with, so let’s not waste any going nowhere. While there’s a lot of “trick questions” here, we can use that to our advantage too. These things that don’t always add up can give us insight into our runtime and let us pivot before getting too frustrated. (╯°□°)╯︵ ┻━┻
What’s a microVM?
Container runtimes built on “microVMs” are becoming more common - especially for high-risk workloads. These runtimes actually run little virtual machines and “shim” the expected container runtime interface (CRI) to make it look like a container. Examples of these include Firecracker and Kata Containers.
There are also other isolated runtimes that are not quite microVMs, like gVisor, which intercepts syscalls and handles them in a userspace process that emulates the kernel. It’s a different approach to isolation, but it still provides a layer of security between the container and the host (and difficulty in our exploits).
These solutions are more common because they’re much easier to deploy and use in production with an orchestrator like Kubernetes than they used to be. There’s tremendous value in the additional layer of security gained by isolating workloads from the host kernel. It used to be quite complex to set up, so much so that one of the most popular blog posts I wrote was please stop saying “Just use Firecracker”, fueled by my own pain and suffering. This is a welcome improvement. 💖
Where are these runtimes found?
You’ll find these runtimes in use under the hood of many cloud container platforms. They’re also common for running serverless workloads, like AWS Lambda, where the isolation is important for security and resource management. As an example, Firecracker was built by AWS specifically for Lambda.
🎯 Outside of the large cloud providers, these runtimes are typically found for workloads that are
- well-staffed, as these are still not as simple as “just run a container”
- technically astute, as these aren’t the “default choice”, making this a deliberate architecture choice
- high risk, as these runtimes provide an additional layer of isolation
- multi-tenant, as these runtimes provide a way to isolate workloads from each other
Good examples include anything described as “enterprise-wide” or “private-cloud compute” for your containers. “Remote code execution as a service” workloads, like CI/CD systems such as GitLab runners, GitHub Actions, or Jenkins, are frequent users. They can also be found in systems that dedicate hardware to individual users or analyze untrusted code. These runtimes are also growing in popularity for machine learning workloads, which often require isolation from the host kernel to prevent data leakage or model poisoning.
These runtimes are designed to be lightweight and secure, but that also makes them a challenge to identify from the inside looking out.
How do you know you haven’t just escaped a “container” only to land into a very tiny VM?
Some clues
There aren’t a lot of good ways to tell if you’re in a microVM from the inside of a container. There are a couple of reasonably reliable clues, though.
System time
MicroVM systems work by spinning up ephemeral VMs that only live for the lifetime of the workload. This means that the system uptime is usually quite short.
user@escapes:~$ docker run -it ghcr.io/some-natalie/some-natalie/whoami:latest
6b3250ce4074:/$ cat /proc/uptime
781.29 1532.25
6b3250ce4074:/$ uptime
03:30:36 up 13 min, 0 users, load average: 0.00, 0.00, 0.00
6b3250ce4074:/$ top
Mem: 918480K used, 3075840K free, 1092K shrd, 21264K buff, 641580K cached
CPU: 0.1% usr 0.3% sys 0.0% nic 99.5% idle 0.1% io 0.0% irq 0.0% sirq
Load average: 0.00 0.00 0.00 1/153 10
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
10 1 user R 4544 0.1 0 0.0 top
1 0 user S 4372 0.1 1 0.0 /bin/sh -l
There’s some nuance here, though. MicroVMs spin up and down for the duration of a workload, like my CI job, not long term. It takes less than 10 seconds to spin up a microVM, usually much less than that. While it also doesn’t take long for a managed Kubernetes cluster autoscaler to add nodes to handle more work, that is substantially longer - think 1-2 minutes to start, plus another few minutes to join the cluster and start scheduling new work.
If the uptime is only a few seconds, no matter how many times I try to hit it, I suspect that I’m in a microVM. If it’s more than a couple of minutes, I suspect that I’m not isolated within a microVM. This is at best an educated guess.
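To make that guess concrete, here’s a minimal shell sketch. The thresholds are my own rough assumptions, not anything authoritative, and it only works if you can read /proc/uptime from inside the workload.

# rough heuristic only: short uptime suggests an ephemeral microVM,
# long uptime suggests a longer-lived shared host
uptime_secs=$(cut -d' ' -f1 /proc/uptime | cut -d'.' -f1)
if [ "$uptime_secs" -lt 60 ]; then
  echo "uptime ${uptime_secs}s - possibly an ephemeral microVM"
elif [ "$uptime_secs" -lt 600 ]; then
  echo "uptime ${uptime_secs}s - could be a freshly-scaled node or a microVM"
else
  echo "uptime ${uptime_secs}s - likely a longer-lived host, probably not a microVM"
fi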
Kernel version
It’s possible that folks running the system append additional data about the runtime in easily-surfaced telemetry, like the kernel version or /etc/os-release. This is not guaranteed, but it’s worth a check. Here are a few places to look:
[root@ebfc38a396cf /]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-6.8.0-64-generic root=/dev/mapper/ubuntu--vg-ubuntu--lv ro
[root@ebfc38a396cf /]# cat /proc/version
Linux version 6.8.0-64-generic (buildd@bos03-arm64-062) (aarch64-linux-gnu-gcc-13 (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0, GNU ld (GNU Binutils for Ubuntu) 2.42) #67-Ubuntu SMP PREEMPT_DYNAMIC Sun Jun 15 20:23:40 UTC 2025
[root@ebfc38a396cf /]# uname -a
Linux ebfc38a396cf 6.8.0-64-generic #67-Ubuntu SMP PREEMPT_DYNAMIC Sun Jun 15 20:23:40 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
There can be some good hints in the kernel version, like distribution name, architecture, and more. It isn’t uncommon to tag any custom versions with some info in the name, like debug or the name of any special kernel modification. gVisor also includes itself at the end of the kernel version as -gvisor, so you can look for that too.
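A minimal sketch of checking those spots is below. The strings in the grep are examples of what I’d look for, not an exhaustive or guaranteed list.

# look for runtime hints in the kernel build string and boot parameters
cat /proc/version     # full kernel build string - distro, compiler, build host
cat /proc/cmdline     # kernel boot parameters, if readable
uname -r | grep -iE 'gvisor|kata|firecracker|debug' && echo "kernel release names an isolation runtime or custom build"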
CPU info
This one is a little harder, but if you can read /proc/cpuinfo or run lscpu, there are a few things that might indicate virtualization - though they only tell you that either the container is in a microVM or the host itself is a VM.
Some things to look out for that don’t quite add up:
- the presence of the hypervisor flag
- a small number of CPUs, both cores and total count
- a lot of CPUs with very low clock speeds
- very little information about the CPU available
Lastly, look for a mismatch between the reported model and the actual number of cores or threads … did this combination of CPUs ever actually get manufactured?
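A quick sketch of those checks, assuming /proc/cpuinfo is readable - none of these alone proves anything, they’re just anomalies to note:

# hypervisor is a CPUID flag, so it only shows up on x86 guests
grep -qw hypervisor /proc/cpuinfo && echo "hypervisor flag present"
nproc                                  # suspiciously small core count?
grep -c '^processor' /proc/cpuinfo     # total logical CPUs
grep -m1 'model name' /proc/cpuinfo    # does this model ever ship with this core count?
lscpu 2>/dev/null | grep -i 'vendor\|model\|hypervisor'   # if lscpu is available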
Putting it all together
Let’s put this all together
[root@ae1c1cece2e3 /]# cat /proc/uptime
32498.55 389344.54
[root@ae1c1cece2e3 /]# lscpu
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: Apple
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 2
Socket(s): -
Cluster(s): 1
Stepping: 0x0
BogoMIPS: 48.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asim
dhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha51
2 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp f
lagm2 frint bf16 afp
# # # and much more output # # #
[root@ae1c1cece2e3 /]# uname -a
Linux ae1c1cece2e3 6.8.0-64-generic #67-Ubuntu SMP PREEMPT_DYNAMIC Sun Jun 15 20:23:40 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
[root@ae1c1cece2e3 /]# cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="9.6 (Plow)"
# # # and much more output # # #
❓ Where does the info in /etc/os-release come from?
Hint: Is that file shared in a namespace or part of the container's filesystem?
Answer: It's part of the container's filesystem. We know our container is some variety of Red Hat 9.6 or a derivative distribution (UBI, CentOS, etc.).
❓ What can we learn about the container host?
Hint: Look at the output of uname -a for the kernel version and architecture. What does it tell you about the host?
Answer: The kernel version output gives you the version, the distribution (Ubuntu), and the architecture (aarch64), as well as a few more tidbits of info. This information comes from the host kernel. Containers rely on the host to schedule resources.
❓ What about the CPU info?
Hint: Looking at lscpu output, what do you notice about the CPU architecture and vendor?
Answer: The CPU architecture is aarch64 and the vendor is Apple. I'm not sure that Apple ever made a 2-core ARM64 CPU ... it may not be worth looking into this now, though.
❓ Are we likely in a microVM?
Probably an answer: 🤨 Ubuntu running on a 2-core Apple ARM CPU, yet /etc/os-release says it’s Red Hat? The system uptime is over 9 hours. We're in some combination of virtualization and container runtime here, but very likely not a microVM.
❓ What should we make of this information?
Probably an answer: If I had to guess, given this combination, it's someone's laptop or desktop. Desktop container runtimes, like Docker Desktop or Podman Desktop, run a Linux virtual machine and do provide some isolation from the host. Apple is known for many things, but a native Linux container runtime is not one of them. I would guess it is Apple hardware, an Ubuntu virtual machine, and a Red Hat container image. Container escapes are a dime a dozen, but VM escapes are 6-figure bug bounties. An actual escape here is out of scope for a “container escapes 101” workshop.
Pivot strategies
Once you suspect you could be in a microVM, my advice is to pivot into another system. Maybe look at network connections and file permissions to see where else you can go without escaping. Or let’s try to find a way to get what we want even from inside our confinement. 🐶
Here’s a few ideas to consider:
- CI/CD systems can usually write to some other file storage, like an S3 bucket or a Git repository, for other systems to pick up (deploy and run). Many build pipelines have other dependencies or take inputs with minimal validation. That gets you movement in and movement out.
- AI workloads seem to be getting quite a lot of attention, as they often have both valuable data flowing in and out and direct access to valuable hardware. Is there a flaw here to look into instead of escaping the container?
- Any credentials or secrets to be found and used?
- Anything with “hardware in the loop” may have extra devices mounted to look at too.
Additionally, many of these aren’t treated or monitored as production systems. 😢
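Before going any further, a quick look around for credentials and reachable services usually pays off. The paths and commands below are common defaults, not guaranteed to exist in any given environment:

# common places credentials and pivot points show up inside a workload
env | grep -iE 'token|secret|key|passw'                              # injected CI/CD or cloud credentials
ls /var/run/secrets/kubernetes.io/serviceaccount/ 2>/dev/null        # Kubernetes service account token
cat ~/.docker/config.json ~/.netrc ~/.git-credentials 2>/dev/null    # registry and git credentials
mount | grep -vE 'proc|sysfs|tmpfs|cgroup|overlay'                   # extra volumes or devices mounted in
(ss -tunp || netstat -tunp) 2>/dev/null                              # what are we already talking to?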
Conclusion
Mapping out whether you’re in a container or a microVM is a bit of an art. There are some clues that can provide some insight, though. If you suspect you’re in a microVM, pivoting to another system is often the best course of action.