Kubernoodles
Kubernoodles is a reference architecture demonstrating a lot of “how to securely do DevOps” practices, mostly for actions-runner-controller within a larger business. This is how I’ve built, and how I maintain, my demo environment.
The design choices I’ve made are based on experiences shared in these write-ups:
- Thoughts on self-hosted architecture
- Considerations on containerizing CI at an enterprise scale
- Securely using actions-runner-controller
Here’s what has been done and where we’re going.
- Initial cluster setup - Set up a Kubernetes cluster in a managed provider, install Cilium and Hubble to power observability, and install actions-runner-controller with default runners in a runner scale set (a values sketch follows this list).
- Testing runner scalability - Create a few Actions workflows to test, scale, and debug our self-hosted runners (a smoke-test workflow is sketched after this list).
- What are your users really doing? - Dive into understanding what’s being run in our self-hosted GitHub Actions runners with eBPF and Tetragon.
- Continuous delivery for custom runners - Set up the Kubernetes cluster, GitHub, and actions-runner-controller to work together, then make a GitHub Actions workflow to create and remove test deployments from a Helm chart.
- Building custom runner images - How to build your own custom images for actions-runner-controller!
- Building containers in ARC with Kaniko - Using Kaniko in actions-runner-controller to build containers without privileged pods (a sketch follows this list).
- Continuous integration for custom runner images - CI for your CI, or how to test your custom runner images on each change.
- Writing tests for Actions runners - Test your enterprise CI images with the same rigor as your other software.
- Reducing your software vulnerabilities - Reduce the number of CVEs in your runner images using Wolfi to improve your security posture and eliminate many compliance headaches in regulated environments.
- Building multi-architecture runners - Why not use ARM too? Adding extra CPU architectures to our runner image builds was easy (one possible approach is sketched after this list).
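To ground the first item, here’s a minimal, hedged sketch of values for the gha-runner-scale-set Helm chart. The org URL, secret name, and scale set name are placeholders, not the exact ones used in this environment.

```yaml
# Sketch of values.yaml for the gha-runner-scale-set chart -- names are placeholders.
githubConfigUrl: "https://github.com/my-org"   # repo, org, or enterprise URL the runners register to
githubConfigSecret: arc-github-app-secret      # pre-created Secret holding GitHub App credentials
runnerScaleSetName: arc-runner-set             # the label workflows put in `runs-on:`
minRunners: 1
maxRunners: 5

# Installed roughly like this, after the controller chart itself:
#   helm install arc-runner-set \
#     oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
#     --namespace arc-runners --create-namespace \
#     -f values.yaml
```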
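For testing runner scalability, a tiny workflow that targets the scale set by name is enough to confirm jobs are being picked up. This is a hypothetical example; `arc-runner-set` matches the placeholder name above.

```yaml
# Hypothetical smoke-test workflow for the self-hosted scale set.
name: runner-smoke-test
on: workflow_dispatch

jobs:
  hello:
    runs-on: arc-runner-set        # the runner scale set name, not a GitHub-hosted label
    steps:
      - uses: actions/checkout@v4
      - name: Show where this job landed
        run: |
          uname -a
          cat /etc/os-release
```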
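The Kaniko approach runs the job inside the Kaniko executor image, so no privileged pod or Docker daemon is needed. This is a sketch rather than the exact workflow from that post; the build context expression and Dockerfile path are illustrative.

```yaml
# Hedged sketch: build an image inside an ARC runner job with Kaniko,
# no privileged pod or Docker daemon. Context and paths are placeholders.
name: kaniko-build
on: workflow_dispatch

jobs:
  build:
    runs-on: arc-runner-set
    container:
      image: gcr.io/kaniko-project/executor:debug   # the debug tag ships a shell
    steps:
      - name: Build the image without pushing
        run: |
          /kaniko/executor \
            --context="${{ github.repositoryUrl }}#${{ github.ref }}#${{ github.sha }}" \
            --dockerfile=Dockerfile \
            --no-push
```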
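For multi-architecture images, one common pattern (not necessarily the one used in that post) is Buildx with QEMU emulation, assuming the scale set has Docker available, e.g. ARC’s dind container mode. The image tag below is a placeholder.

```yaml
# Hedged sketch: multi-arch build with Buildx + QEMU emulation.
# Assumes Docker is available to the runner (e.g. containerMode: dind).
name: multi-arch-runner-image
on: workflow_dispatch

jobs:
  build:
    runs-on: arc-runner-set
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: false
          tags: ghcr.io/example/custom-runner:test
```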
Last updated in November 2024 with the versions of Kubernetes, actions-runner-controller, etc. that I’m currently using.
Maybe soon:
- Log streaming
- More fun with eBPF
- File caching
- Fun with metrics!
My environment mostly uses on-prem resources and is now almost always used to build other software or demonstrate other parts of Kubernetes management. I used to have more access to Azure services, so there are still plenty of references to Azure as well. I’ll try to call out vanilla Kubernetes as much as possible for portability.