
Setting up automated deployments via Helm + Actions

As we test, build, and deploy custom Actions runners, it'll save a ton of time if each runner scale set can deploy itself automatically. This walkthrough sets up GitHub's hosted Actions runners to deploy our self-hosted runners via Helm - a fantastic approach for internet-connected environments and fast iteration. For fully disconnected environments, consider an A/B deployment pattern where the UBI8 runners deploy the Ubuntu runners and vice versa, or use a proper deployment tool.

There’s a decent chance that, at a larger company, there’s already an incumbent solution for continuous deployment in Kubernetes - e.g., Argo CD or Flux CD. You should likely use that instead to leverage existing expertise and simplify the overall tool stack - unless it’s Jenkins. 😉

In this walkthrough, we're going to do the following:

- Create a service account in each Kubernetes namespace
- Store the service account configs in GitHub Secrets
- Set up GitHub Environments
- Create GitHub credentials for actions-runner-controller (ARC)
- Create workflows to deploy (and remove) runner scale sets on demand

Creating the service account

First, for each namespace, create a service account that GitHub Actions will use to deploy the runners. You'll take the kubeconfig file for each account, then encode it for storage in GitHub Secrets. Here's an example in the `test-runners` namespace that we'll walk through in more detail.

```yaml
---
# Create the service account scoped to our `test-runners` namespace with a secret to use
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-deploy-user
  namespace: test-runners
secrets:
- name: ghe-actions-deploy

---
# Create a role for it in that namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-deploy-user-full-access
  namespace: test-runners
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["actions.github.com"]
  resources: ["*"]
  verbs: ["*"]

---
# Bind that service account to the role we created above
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-deploy-user-view
  namespace: test-runners
subjects:
- kind: ServiceAccount
  name: test-deploy-user
  namespace: test-runners
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-deploy-user-full-access

---
# Create the credential for us to use for deployment
apiVersion: v1
kind: Secret
metadata:
  name: ghe-actions-deploy
  namespace: test-runners
  annotations:
    kubernetes.io/service-account.name: test-deploy-user
type: kubernetes.io/service-account-token
```

Now, create the account and all the goodness from above by running the command below.

```shell
kubectl apply -f test-deploy-user.yml
```

Repeat this process for each namespace you want to use. While a touch tedious, segregating our service accounts by namespace is quite helpful in isolating each deployment.
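
Before wiring this into GitHub, it's worth a quick sanity check that the role binding landed. Assuming your current context is allowed to impersonate service accounts, something like this works:

```shell
# Should print "yes" - the account can manage resources in its own namespace
kubectl auth can-i create pods \
  --as=system:serviceaccount:test-runners:test-deploy-user \
  --namespace test-runners

# Should print "no" - it has no access to other namespaces
kubectl auth can-i create pods \
  --as=system:serviceaccount:test-runners:test-deploy-user \
  --namespace default
```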

Storing the service account configs in GitHub Secrets

GitHub has a built-in secret store (creatively named “Secrets”) that can manage credentials for an organization, repository, or user. Because it’s unsafe to use structured data as a secret in GitHub (source), we need to do a little magic to make the kubeconfig file usable by GitHub. Once the service accounts have been created, here’s what needs to happen.

First, we need to get the kubeconfig file for the service account. You can get this directly out of a program like Lens, but let’s do this the hard way. (If you’d rather script it, there’s a sketch right after this list.)

Hope it goes without saying, but these are fake secrets for a demo, shamelessly pilfered from the guide I used for part of this. Don’t go sharing the real values from your cluster around thoughtlessly.

  1. Find the name of the token generated by creating the account. In this case, it’s `test-deploy-user-token-2vpgp`. (On newer clusters that no longer auto-generate service account tokens, use the `ghe-actions-deploy` secret we created above instead.)

     ```shell
     $ kubectl describe sa test-deploy-user -n test-runners

     Name:                test-deploy-user
     Namespace:           test-runners
     Labels:              <none>
     Annotations:         <none>
     Image pull secrets:  <none>
     Mountable secrets:   ghe-actions-deploy
                          test-deploy-user-token-2vpgp
     Tokens:              test-deploy-user-token-2vpgp
     Events:              <none>
     ```
    
  2. Fetch that token.

     ```shell
     $ kubectl describe secrets test-deploy-user-token-2vpgp -n test-runners

     Name:         test-deploy-user-token-2vpgp
     Namespace:    test-runners
     Labels:       <none>
     Annotations:  kubernetes.io/service-account.name: test-deploy-user
                   kubernetes.io/service-account.uid: a7413efa-b40e-4a24-ba7d-21d8c38bd07a

     Type:  kubernetes.io/service-account-token

     Data
     ====
     namespace:  12 bytes
     token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InNoaXBwYWJsZS1kZXBsb3ktdG9rZW4tN3Nwc2oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoic2hpcHBhYmxlLWRlcGxveSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImMyMTE3ZDhlLTNjMmQtMTFlOC05Y2NkLTQyMDEwYThhMDEyZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnNoaXBwYWJsZS1kZXBsb3kifQ.ZWKrKdpK7aukTRKnB5SJwwov6PjaADT-FqSO9ZgJEg6uUVXuPa03jmqyRB20HmsTvuDabVoK7Ky7Uug7V8J9yK4oOOK5d0aRRdgHXzxZd2yO8C4ggqsr1KQsfdlU4xRWglaZGI4S31ohCApJ0MUHaVnP5WkbC4FiTZAQ5fO_LcCokapzCLQyIuD5Ksdnj5Ad2ymiLQQ71TUNccN7BMX5aM4RHmztpEHOVbElCWXwyhWr3NR1Z1ar9s5ec6iHBqfkp_s8TvxPBLyUdy9OjCWy3iLQ4Lt4qpxsjwE4NE7KioDPX2Snb6NWFK7lvldjYX4tdkpWdQHBNmqaD8CuVCRdEQ
     ca.crt:     1099 bytes
     ```
    
  3. Get the server’s certificate info. We want the values for the `certificate-authority-data` and `server` fields.

     ```shell
     $ kubectl config view --flatten --minify

     apiVersion: v1
     clusters:
     - cluster:
         certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDekNDQWZPZ0F3SUJBZ0lRZmo4VVMxNXpuaGRVbG15a3AvSVFqekFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSaVl6RTBOelV5WXkwMk9UTTFMVFExWldFdE9HTmlPUzFrWmpSak5tUXlZemd4TVRndwpIaGNOTVRnd05EQTVNVGd6TVRReVdoY05Nak13TkRBNE1Ua3pNVFF5V2pBdk1TMHdLd1lEVlFRREV5UmlZekUwCk56VXlZeTAyT1RNMUxUUTFaV0V0T0dOaU9TMWtaalJqTm1ReVl6Z3hNVGd3Z2dFaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURIVHFPV0ZXL09odDFTbDBjeUZXOGl5WUZPZHFON1lrRVFHa3E3enkzMApPUEQydUZyNjRpRXRPOTdVR0Z0SVFyMkpxcGQ2UWdtQVNPMHlNUklkb3c4eUowTE5YcmljT2tvOUtMVy96UTdUClI0ZWp1VDl1cUNwUGR4b0Z1TnRtWGVuQ3g5dFdHNXdBV0JvU05reForTC9RN2ZpSUtWU01SSnhsQVJsWll4TFQKZ1hMamlHMnp3WGVFem5lL0tsdEl4NU5neGs3U1NUQkRvRzhYR1NVRzhpUWZDNGYzTk4zUEt3Wk92SEtRc0MyZAo0ajVyc3IwazNuT1lwWDFwWnBYUmp0cTBRZTF0RzNMVE9nVVlmZjJHQ1BNZ1htVndtejJzd2xPb24wcldlRERKCmpQNGVqdjNrbDRRMXA2WXJBYnQ1RXYzeFVMK1BTT2ROSlhadTFGWWREZHZyQWdNQkFBR2pJekFoTUE0R0ExVWQKRHdFQi93UUVBd0lDQkRBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCQwpHWWd0R043SHJpV2JLOUZtZFFGWFIxdjNLb0ZMd2o0NmxlTmtMVEphQ0ZUT3dzaVdJcXlIejUrZ2xIa0gwZ1B2ClBDMlF2RmtDMXhieThBUWtlQy9PM2xXOC9IRmpMQVZQS3BtNnFoQytwK0J5R0pFSlBVTzVPbDB0UkRDNjR2K0cKUXdMcTNNYnVPMDdmYVVLbzNMUWxFcXlWUFBiMWYzRUM3QytUamFlM0FZd2VDUDNOdHJMdVBZV2NtU2VSK3F4TQpoaVRTalNpVXdleEY4cVV2SmM3dS9UWTFVVDNUd0hRR1dIQ0J2YktDWHZvaU9VTjBKa0dHZXJ3VmJGd2tKOHdxCkdsZW40Q2RjOXJVU1J1dmlhVGVCaklIYUZZdmIxejMyVWJDVjRTWUowa3dpbHE5RGJxNmNDUEI3NjlwY0o1KzkKb2cxbHVYYXZzQnYySWdNa1EwL24KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
         server: https://35.203.181.169
       name: gke_jfrog-200320_us-west1-a_cluster
     contexts:
     - context:
         cluster: gke_jfrog-200320_us-west1-a_cluster
         user: gke_jfrog-200320_us-west1-a_cluster
       name: gke_jfrog-200320_us-west1-a_cluster
     current-context: gke_jfrog-200320_us-west1-a_cluster
     kind: Config
     preferences: {}
     users:
     - name: gke_jfrog-200320_us-west1-a_cluster
       user:
         auth-provider:
           config:
             access-token: ya29.Gl2YBba5duRR8Zb6DekAdjPtPGepx9Em3gX1LAhJuYzq1G4XpYwXTS_wF4cieZ8qztMhB35lFJC-DJR6xcB02oXXkiZvWk5hH4YAw1FPrfsZWG57x43xCrl6cvHAp40
             cmd-args: config config-helper --format=json
             cmd-path: /Users/ambarish/google-cloud-sdk/bin/gcloud
             expiry: 2018-04-09T20:35:02Z
             expiry-key: '{.credential.token_expiry}'
             token-key: '{.credential.access_token}'
           name: gcp
     ```
    
  4. Now let’s put it all together into a file that we’re going to call `kubeconfig.txt`. It will look a lot like this:

     ```yaml
     apiVersion: v1
     kind: Config
     users:
     - name: test-deploy-user
       user:
         token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InNoaXBwYWJsZS1kZXBsb3ktdG9rZW4tN3Nwc2oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoic2hpcHBhYmxlLWRlcGxveSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImMyMTE3ZDhlLTNjMmQtMTFlOC05Y2NkLTQyMDEwYThhMDEyZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnNoaXBwYWJsZS1kZXBsb3kifQ.ZWKrKdpK7aukTRKnB5SJwwov6PjaADT-FqSO9ZgJEg6uUVXuPa03jmqyRB20HmsTvuDabVoK7Ky7Uug7V8J9yK4oOOK5d0aRRdgHXzxZd2yO8C4ggqsr1KQsfdlU4xRWglaZGI4S31ohCApJ0MUHaVnP5WkbC4FiTZAQ5fO_LcCokapzCLQyIuD5Ksdnj5Ad2ymiLQQ71TUNccN7BMX5aM4RHmztpEHOVbElCWXwyhWr3NR1Z1ar9s5ec6iHBqfkp_s8TvxPBLyUdy9OjCWy3iLQ4Lt4qpxsjwE4NE7KioDPX2Snb6NWFK7lvldjYX4tdkpWdQHBNmqaD8CuVCRdEQ
     clusters:
     - cluster:
         certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDekNDQWZPZ0F3SUJBZ0lRZmo4VVMxNXpuaGRVbG15a3AvSVFqekFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSaVl6RTBOelV5WXkwMk9UTTFMVFExWldFdE9HTmlPUzFrWmpSak5tUXlZemd4TVRndwpIaGNOTVRnd05EQTVNVGd6TVRReVdoY05Nak13TkRBNE1Ua3pNVFF5V2pBdk1TMHdLd1lEVlFRREV5UmlZekUwCk56VXlZeTAyT1RNMUxUUTFaV0V0T0dOaU9TMWtaalJqTm1ReVl6Z3hNVGd3Z2dFaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURIVHFPV0ZXL09odDFTbDBjeUZXOGl5WUZPZHFON1lrRVFHa3E3enkzMApPUEQydUZyNjRpRXRPOTdVR0Z0SVFyMkpxcGQ2UWdtQVNPMHlNUklkb3c4eUowTE5YcmljT2tvOUtMVy96UTdUClI0ZWp1VDl1cUNwUGR4b0Z1TnRtWGVuQ3g5dFdHNXdBV0JvU05reForTC9RN2ZpSUtWU01SSnhsQVJsWll4TFQKZ1hMamlHMnp3WGVFem5lL0tsdEl4NU5neGs3U1NUQkRvRzhYR1NVRzhpUWZDNGYzTk4zUEt3Wk92SEtRc0MyZAo0ajVyc3IwazNuT1lwWDFwWnBYUmp0cTBRZTF0RzNMVE9nVVlmZjJHQ1BNZ1htVndtejJzd2xPb24wcldlRERKCmpQNGVqdjNrbDRRMXA2WXJBYnQ1RXYzeFVMK1BTT2ROSlhadTFGWWREZHZyQWdNQkFBR2pJekFoTUE0R0ExVWQKRHdFQi93UUVBd0lDQkRBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCQwpHWWd0R043SHJpV2JLOUZtZFFGWFIxdjNLb0ZMd2o0NmxlTmtMVEphQ0ZUT3dzaVdJcXlIejUrZ2xIa0gwZ1B2ClBDMlF2RmtDMXhieThBUWtlQy9PM2xXOC9IRmpMQVZQS3BtNnFoQytwK0J5R0pFSlBVTzVPbDB0UkRDNjR2K0cKUXdMcTNNYnVPMDdmYVVLbzNMUWxFcXlWUFBiMWYzRUM3QytUamFlM0FZd2VDUDNOdHJMdVBZV2NtU2VSK3F4TQpoaVRTalNpVXdleEY4cVV2SmM3dS9UWTFVVDNUd0hRR1dIQ0J2YktDWHZvaU9VTjBKa0dHZXJ3VmJGd2tKOHdxCkdsZW40Q2RjOXJVU1J1dmlhVGVCaklIYUZZdmIxejMyVWJDVjRTWUowa3dpbHE5RGJxNmNDUEI3NjlwY0o1KzkKb2cxbHVYYXZzQnYySWdNa1EwL24KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
         server: https://35.203.181.169
       name: self-hosted-cluster
     contexts:
     - name: test-deploy-user@self-hosted-cluster
       context:
         user: test-deploy-user
         cluster: self-hosted-cluster
         namespace: test-runners
     current-context: test-deploy-user@self-hosted-cluster
     ```
    
  5. Encode it using base64 to create a big one-line string of gibberish that we can store in GitHub Secrets.

     ```shell
     # GNU base64 wraps long output by default; add `-w0` there to keep it on one line
     cat kubeconfig.txt | base64
     ```
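
If you'd rather script steps 1 through 5, here's a rough sketch that pulls the token and cluster info straight out of the cluster and writes `kubeconfig.txt`. It assumes the `test-deploy-user` account and `ghe-actions-deploy` secret from earlier, and that your current context can read that secret; adjust the names to taste.

```shell
#!/usr/bin/env bash
# Sketch: assemble kubeconfig.txt for the test-deploy-user service account.
set -euo pipefail

NAMESPACE="test-runners"
SECRET="ghe-actions-deploy"
SA_USER="test-deploy-user"

# The service account's bearer token (stored base64-encoded in the secret)
TOKEN="$(kubectl get secret "$SECRET" -n "$NAMESPACE" -o jsonpath='{.data.token}' | base64 -d)"

# The CA bundle and API server URL for the current cluster
CA_DATA="$(kubectl config view --flatten --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')"
SERVER="$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')"

cat > kubeconfig.txt <<EOF
apiVersion: v1
kind: Config
users:
- name: $SA_USER
  user:
    token: $TOKEN
clusters:
- cluster:
    certificate-authority-data: $CA_DATA
    server: $SERVER
  name: self-hosted-cluster
contexts:
- name: $SA_USER@self-hosted-cluster
  context:
    user: $SA_USER
    cluster: self-hosted-cluster
    namespace: $NAMESPACE
current-context: $SA_USER@self-hosted-cluster
EOF
```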
    

Set up GitHub Environments

This project is set up to manage runners for an entire enterprise, multiple organizations, and/or individual repositories from a single monorepo. The code for the runner images, the deployment targets, and the credentials for those targets are all stored in that one repository. Each deployment (for the whole enterprise, an organization, or another repository) is its own namespace in Kubernetes and its own environment in this GitHub repository.

Create an environment for the `test-runners` namespace. I called mine `test`. Store the big string of gibberish as an environment secret called `DEPLOY_ACCOUNT`. I’ve found it easiest if the secrets follow a naming convention across environments, so every environment has this secret with this name targeting the correct namespace and cluster.

[Screenshot: the repository’s environments, including test]
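
The web UI works fine here, but if you prefer the GitHub CLI, the same thing is a one-liner (the `OWNER/REPO` slug is a placeholder):

```shell
# Encode the kubeconfig and store it as a secret in the `test` environment
# (OWNER/REPO is a placeholder - use your own repository)
base64 < kubeconfig.txt | gh secret set DEPLOY_ACCOUNT --env test --repo OWNER/REPO
```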

Create GitHub credentials for ARC

In the initial setup of actions-runner-controller (from part 1), we created a personal access token or a GitHub App for authentication between it and GitHub.

An app was used in this example. It has three components: an installation ID, an application ID, and a private certificate. Of these, only the certificate is a secret; the other two are stored as variables, which are “not secret secrets”. Here’s what the storage of those looks like in the repository settings.

Secrets holds our `DEPLOY_ACCOUNT` from the prior step (one per environment) and our GitHub App’s private certificate (called `ARC_APP_PRIVATE_KEY`). The certificate spans multiple lines, but it doesn’t need to be base64 encoded to work. Secrets passes it in as-is, and that works (and redacts) just fine.

[Screenshot: repository secrets, including ARC_APP_PRIVATE_KEY and DEPLOY_ACCOUNT]

If we’d used a personal access token for enterprise-wide deployments, we’d only need one secret instead.

The non-secret configurations we’ll use in our Helm chart to authenticate our runners to GitHub are stored in the variables tab as `ARC_APP_ID` and `ARC_INSTALL_ID`.

[Screenshot: repository variables ARC_APP_ID and ARC_INSTALL_ID]
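
These can also be set from the CLI; the ID values and key file name below are made up:

```shell
# Non-secret app IDs as repository variables (values are made up)
gh variable set ARC_APP_ID --body "123456"
gh variable set ARC_INSTALL_ID --body "7890123"

# The private key is a real secret - load it straight from the PEM file
gh secret set ARC_APP_PRIVATE_KEY < arc-app.private-key.pem
```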

Create workflows for deployment

Now that we have Kubernetes, GitHub, and actions-runner-controller all set up to talk to each other, let’s create a workflow to deploy and remove runner scale sets on demand. This will make our inner-loop testing much easier as we develop custom images and solutions around self-hosted runners. We’ll need two workflows - one to create a set of runners and another to remove it. The two look very similar, so we’ll only go over one.

Here’s the workflow file:

```yaml
name: 🐝 Manually deploy runners

on:
  workflow_dispatch: # deploy on demand
    inputs:
      target_scale_set:  # name of our runner scale set
        # In this repository, this corresponds to the helm chart name in `/deployments/helm-***.yml`.
        # e.g., `ubi8` would target `/deployments/helm-ubi8.yml`
        description: "Which scale set to deploy?"
        type: string
        required: true
      environment_name:  # this corresponds to the environments we set up for our `kubeconfig` files
        description: "Which environment to deploy to?"
        type: choice  # drop-down menus are fantastic!
        required: true
        options:  # change these to your own names, or change the `type` above to `string` for freeform entry
        - "bare-metal"
        - "test"
        - "production"
        default: "test"
      runner_namespace:
        description: "Which namespace to deploy to?"
        type: choice
        required: true
        options:  # again, change this to your own namespaces
        - "runners"
        - "test-runners"
        default: "test-runners"

jobs:
  deploy:
    runs-on: ubuntu-latest # use the GitHub-hosted runners to deploy the self-hosted runners in GHEC
    # If using GHES or GHAE, use another deployment, such as having CentOS redeploy Ubuntu and vice versa
    environment: ${{ github.event.inputs.environment_name }}

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Write out the kubeconfig info
        run: |
          echo "${{ secrets.DEPLOY_ACCOUNT }}" | base64 -d > /tmp/config

      - name: Update deployment (using latest chart of actions-runner-controller-charts/auto-scaling-runner-set)
        run: |
          helm install "${{ github.event.inputs.target_scale_set }}" \
          --namespace "${{ github.event.inputs.runner_namespace }}" \
          --set githubConfigSecret.github_app_id="${{ vars.ARC_APP_ID }}" \
          --set githubConfigSecret.github_app_installation_id="${{ vars.ARC_INSTALL_ID }}" \
          --set githubConfigSecret.github_app_private_key="${{ secrets.ARC_APP_PRIVATE_KEY }}" \
          -f deployments/helm-${{ github.event.inputs.target_scale_set }}.yml \
          oci://ghcr.io/actions/actions-runner-controller-charts/auto-scaling-runner-set
        env:
          KUBECONFIG: /tmp/config

      - name: Remove kubeconfig info
        run: rm -f /tmp/config
```
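
The removal workflow is nearly identical - same inputs, same kubeconfig handling - with the Helm step swapped for an uninstall. A minimal sketch of that step:

```yaml
      # Sketch of the removal workflow's Helm step; inputs and kubeconfig
      # handling are the same as the deploy workflow above
      - name: Remove deployment
        run: |
          helm uninstall "${{ github.event.inputs.target_scale_set }}" \
          --namespace "${{ github.event.inputs.runner_namespace }}"
        env:
          KUBECONFIG: /tmp/config
```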

And here it is deploying and removing our development runners! 🎉

[Screenshot: the deploy workflow run creating a runner scale set]

[Screenshot: the removal workflow run deleting it]
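
You can double-check from the cluster side too; assuming the `test-runners` namespace from earlier:

```shell
# Confirm the release landed and the listener/runner pods are up
helm list --namespace test-runners
kubectl get pods --namespace test-runners
```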

Next

In part 5, we create our own ✨ awesome ✨ custom runner image.
