This guide covers configuring the Bitbucket Pipelines self-hosted Docker runner autoscaler on on-premises Kubernetes infrastructure, using MicroK8s as the cluster.
Learn more about MicroK8s here: MicroK8s - Zero-ops Kubernetes for Developers, Edge, and IoT.
Prerequisites
- Access to Bitbucket Cloud Premium (the runner autoscaler specifically requires a Premium plan) and the necessary credentials for the Runner Autoscaler. Refer to the setup guide: Autoscaler for Runners on Kubernetes : Bitbucket Cloud Docs.
  Note: I suggest using OAuth tokens, as they are easy to create, scale, and maintain. OAuth credentials must be Base64-encoded for Kubernetes secrets; see the encoding example after this list.
- At least two VMs with network access to each other:
  - 1 Master Node
  - 1 Worker Node (you can have multiple worker nodes, but ensure they have network access to the master).
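For example, OAuth credentials can be Base64-encoded with the standard base64 tool (the values below are placeholders; note the -n flag, since an encoded trailing newline would corrupt the secret):

    echo -n "myOauthClientId"     | base64
    echo -n "myOauthClientSecret" | base64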
Step 1: Prepare the Master Node
Recommended: Go through the official MicroK8s documentation and understand how MicroK8s works before proceeding.
1. Update the System
    sudo apt-get update -y
    sudo apt-get upgrade -y
    sudo apt install -y apt-transport-https curl gnupg lsb-release
2. Install MicroK8s
    sudo snap install microk8s --classic
    sudo microk8s status --wait-ready
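Optionally, enable MicroK8s add-ons at this point; DNS in particular is commonly needed by cluster workloads such as the runner pods (a suggestion based on typical MicroK8s setups, not a requirement stated by the autoscaler docs):

    sudo microk8s enable dns                # in-cluster DNS (CoreDNS)
    sudo microk8s enable hostpath-storage   # simple local persistent storage, optional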
3. Configure Networking
    sudo vi /etc/netplan/00-installer-config.yaml
    sudo netplan apply
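If the master needs a static IP (recommended, so the join address that workers use never changes), a minimal netplan config looks roughly like this. This is a sketch only: the interface name, addresses, gateway, and DNS server below are assumptions to adapt to your network:

    # Replace eth0, the address range, gateway, and DNS with your own values.
    sudo tee /etc/netplan/00-installer-config.yaml >/dev/null <<'EOF'
    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: false
          addresses: [192.168.1.10/24]
          routes:
            - to: default
              via: 192.168.1.1
          nameservers:
            addresses: [192.168.1.1]
    EOF
    sudo netplan apply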
4. Set the Hostname
    sudo hostnamectl set-hostname kubemaster   # or any other name you prefer
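If your environment has no internal DNS, it helps to make the node hostnames resolvable before joining; for example (the hostnames and IPs below are placeholders):

    # Run the equivalent on each node so master and workers can resolve one another.
    echo "192.168.1.10 kubemaster"  | sudo tee -a /etc/hosts
    echo "192.168.1.11 kubeworker1" | sudo tee -a /etc/hosts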
5. Add Worker Nodes
Generate a join command on the master:
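    sudo microk8s add-node   # prints a join command containing a one-time token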
Follow the on-screen instructions to join worker nodes using the generated token.
6. Install Helm
The Runner Autoscaler is also packaged as a Helm chart, so install Helm if you prefer that deployment route (this guide uses Kustomize in Step 3).
    sudo snap install helm --classic
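Confirm Helm is available:

    helm version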
Step 2: Prepare the Worker Nodes
1. Update and Install MicroK8s
    sudo apt-get update -y
    sudo apt install -y vim
    sudo snap install microk8s --classic
    sudo microk8s status --wait-ready
2. Join the Worker Node to the Master Node
    sudo microk8s join <MASTER_NODE_IP>:<PORT>/<TOKEN> --worker
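Back on the master, you can confirm the worker registered (it can take a minute to report Ready):

    sudo microk8s kubectl get nodes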
Step 3: Configure and Deploy the Runner Autoscaler
1. Clone the Runner Autoscaler Repository
    git clone git@bitbucket.org:bitbucketpipelines/runners-autoscaler.git
    cd runners-autoscaler/kustomize
    git checkout 3.7.0   # make sure to check out the latest available version
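The 3.7.0 tag is simply the version current at the time of writing; you can list the repository's tags to pick the newest:

    git tag --sort=-v:refname | head -n 5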
2. Edit the Configuration Files
Edit the runners_config.yaml and kustomization.yaml files to include your Bitbucket OAuth credentials and other necessary configuration, as described in Autoscaler for Runners on Kubernetes : Bitbucket Cloud Docs.
    sudo vi values/runners_config.yaml
    sudo vi values/kustomization.yaml
Example runners_config.yaml
    constants:
      default_sleep_time_runner_setup: 10   # value in seconds
      default_sleep_time_runner_delete: 5   # value in seconds
      runner_api_polling_interval: 600      # value in seconds
      runner_cool_down_period: 300          # value in seconds
    groups:
      - name: "Linux Docker Runners"
        workspace: "YOURWORKSPACENAME"  # workspace name
        # repository: "{repository_uuid}"  # optional, only needed for repository runners - include the curly braces
        labels:  # labels assigned to each runner
          - "self.hosted"
          - "linux"
          - "runner.docker"
        namespace: "default"  # target namespace where runner pods will be created
        strategy: "percentageRunnersIdle"
        # Parameters for creating/deleting runners via the Bitbucket API.
        parameters:
          min: 4   # recommended minimum is 1, since a newly starting build fails when no runner is online
          max: 8   # maximum runners allowed; take available resources into consideration before raising this
          scale_up_threshold: 0.5    # percentage of busy runners at which the desired runner count is re-evaluated to scale up
          scale_down_threshold: 0.2  # percentage of busy runners at which the desired runner count is re-evaluated to scale down
          scale_up_multiplier: 1.5   # scale_up_multiplier > 1
          scale_down_multiplier: 0.5 # 0 < scale_down_multiplier < 1
        # Resources for the Kubernetes job template.
        # Optional: if omitted, the defaults of "4Gi" memory and "1000m" CPU are used for requests and limits.
        resources:
          requests:
            memory: "4Gi"
            cpu: "2000m"
          limits:
            memory: "4Gi"
            cpu: "2000m"
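If you scope a group to a single repository, the repository UUID (curly braces included) can be fetched from the Bitbucket REST API. The workspace and repo slug below are placeholders, this assumes jq is installed, and private repositories will need authentication on the call:

    curl -s https://api.bitbucket.org/2.0/repositories/<workspace>/<repo-slug> | jq -r '.uuid'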
Example kustomization.yaml
This can vary based on the authentication method you use to connect your K8s cluster to Bitbucket Cloud.
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    resources:
      - ../base

    # Review the ./runners_config.yaml file, especially the workspace UUID and labels.
    configMapGenerator:
      - name: runners-autoscaler-config
        files:
          - runners_config.yaml
        options:
          disableNameSuffixHash: true

    # The namespace for the runner autoscaler resources.
    # It is not the namespace for the runner pods, which is specified in runners_config.yaml.
    namespace: bitbucket-runner-control-plane

    commonLabels:
      app.kubernetes.io/part-of: runners-autoscaler

    images:
      - name: bitbucketpipelines/runners-autoscaler
        newTag: 3.7.0  # ensure this matches the version you checked out earlier

    patches:
      - target:
          version: v1
          kind: Secret
          name: runner-bitbucket-credentials
        # There are two options; choose one, uncomment it, and fill in the values.
        # Note: values must be Base64-encoded.
        # 1) OAuth - specify the OAuth client ID and secret.
        # 2) App password - specify the Bitbucket username and app password.
        patch: |-
          ### Option 1 ###
          - op: add
            path: /data/bitbucketOauthClientId
            value: "ENTER_BITBUCKET_OAUTH_CLIENT_ID_HERE_WITHIN_QUOTES"
          - op: add
            path: /data/bitbucketOauthClientSecret
            value: "ENTER_BITBUCKET_OAUTH_CLIENT_SECRET_HERE_WITHIN_QUOTES"
          ### Option 2 ###
          # - op: add
          #   path: /data/bitbucketUsername
          #   value: ""
          # - op: add
          #   path: /data/bitbucketAppPassword
          #   value: ""
      - target:
          version: v1
          kind: Deployment
          labelSelector: "inject=runners-autoscaler-envs"
        # Uncomment the same option you chose for the Secret above.
        patch: |-
          ### Option 1 ###
          - op: replace
            path: /spec/template/spec/containers/0/env
            value:
              - name: BITBUCKET_OAUTH_CLIENT_ID
                valueFrom:
                  secretKeyRef:
                    key: bitbucketOauthClientId
                    name: runner-bitbucket-credentials
              - name: BITBUCKET_OAUTH_CLIENT_SECRET
                valueFrom:
                  secretKeyRef:
                    key: bitbucketOauthClientSecret
                    name: runner-bitbucket-credentials
          ### Option 2 ###
          # - op: replace
          #   path: /spec/template/spec/containers/0/env
          #   value:
          #     - name: BITBUCKET_USERNAME
          #       valueFrom:
          #         secretKeyRef:
          #           key: bitbucketUsername
          #           name: runner-bitbucket-credentials
          #     - name: BITBUCKET_APP_PASSWORD
          #       valueFrom:
          #         secretKeyRef:
          #           key: bitbucketAppPassword
          #           name: runner-bitbucket-credentials
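Before applying, you can render the manifests locally to double-check that your credential patches landed where you expect:

    sudo microk8s kubectl kustomize values | less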
3. Apply the Kustomization Files
Apply the Kustomize configuration to deploy the Runner Autoscaler:
    sudo microk8s kubectl apply -k values
4. Monitor Runner Logs
Monitor the logs to confirm the Runner Autoscaler is functioning correctly:
    sudo microk8s kubectl logs -f -l app=runner-controller -n bitbucket-runner-control-plane
Step 4: Validate the Setup
1. Check Node Status
On the master, confirm the worker nodes have joined and are Ready:
    sudo microk8s kubectl get nodes
2. Check Pod Status
    sudo microk8s kubectl get pods -n bitbucket-runner-control-plane --field-selector=status.phase=Running
3. Inspect Runner Logs
    sudo microk8s kubectl logs -f runner-controller-<pod-name> -n bitbucket-runner-control-plane
Additional Tips
- Use MicroCeph (or plain Ceph) to manage the storage backing the MicroK8s nodes for better flexibility and scalability. Refer to the MicroCeph Multi-Node Install Guide.