GitLab Runner is an open source application written in Go that serves as an agent for running the continuous integration and continuous delivery (CI/CD) jobs distributed by GitLab. The computing power required by CI jobs changes periodically. Container Compute Service (ACS) provides resources on demand and supports fast scaling to meet the dynamic resource requirements of CI jobs, which simplifies capacity planning for your business and reduces overall resource costs. Because ACS provides elasticity based on scalable cloud resources, it also greatly improves the concurrency of CI jobs. This topic describes how to deploy GitLab Runner for production use in ACS and introduces the recommended configurations.
Background information
GitLab Runner is an open source project for running jobs in GitLab CI/CD pipelines. GitLab Runner is an external job execution framework that provides various executors, such as Shell and Kubernetes, to allow you to run CI jobs in data centers or in the cloud. After a job is complete, GitLab Runner returns the result to GitLab.
The key configurations of GitLab Runner include the configurations and initialization settings of the runner manager pod and the configurations of the Kubernetes executor. For more information, see Configure GitLab Runner.
Procedure
The following figure shows the procedure described in this topic.
Prerequisites
A kubectl client is connected to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
Installation procedure
In this example, GitLab Runner 17.3.1 is installed by using chart version 0.68.1. For more information about how to install GitLab Runner, see GitLab Runner Helm chart.
Obtain the GitLab Runner chart.
Add the GitLab Helm repository.
helm repo add gitlab https://charts.gitlab.io
Update the Helm repository.
helm repo update gitlab
Pull the GitLab Runner chart.
helm pull gitlab/gitlab-runner --version 0.68.1 && tar zvxf gitlab-runner-0.68.1.tgz
Register a runner.
Create a runner in the GitLab console and record the runner token. For more information, see Registering runners.
Create a file named values.yaml to configure initialization settings.
Parameter
Description
gitlabUrl
The URL of the GitLab server on which you want to register the runner. Example: https://gitlab.example.com.
runnerToken
The runner token that you obtained in the previous step. The token is the identifier of the runner. The runner manager uses the token to associate the pods and Secrets that it creates with the runner.
rbac
Specifies whether to enable role-based access control (RBAC). After you enable RBAC, a service account is automatically created. Example: rbac: create: true.
concurrent
The maximum number of jobs that can run concurrently. Default value: 10. The manager pod consumes a specific amount of memory when it manages jobs. If your jobs run with high concurrency, we recommend that you increase the resource configurations of the manager pod.
unregisterRunners
Specifies whether to run the unregister command when the manager pod is terminated. When you use runner registration tokens to register runners, jobs may be disassociated each time a new token is generated. You can add this configuration to resolve the issue. For more information, see FAQ.
runners.config
The configurations of the runner, specified as a multi-line string. You can modify the string to change the runner configurations.
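The following is a minimal sample values.yaml that combines the parameters described in the table. The GitLab URL and token are placeholders, and the runners.config block sets only the namespace here; later sections of this topic extend this block.
gitlabUrl: https://gitlab.example.com
runnerToken: "<your-runner-token>"
rbac:
  create: true
concurrent: 10
unregisterRunners: true
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"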
Install GitLab Runner in your ACS cluster.
helm install --namespace default gitlab-runner -f values.yaml --version 0.68.1 gitlab/gitlab-runner
Run the following command to check the status of GitLab Runner:
kubectl get pod | grep gitlab
Expected output:
gitlab-runner-7c5b4xxxxx-xxxxx 1/1 Running 0 5m17s
The output shows that GitLab Runner is installed.
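If you also want to confirm that the runner is registered with your GitLab instance, check the Runners page in the GitLab console or inspect the manager pod logs. The following command is a general example; the exact log wording varies by GitLab Runner version.
kubectl logs deployment/gitlab-runner -n default --tail=50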
Build an image
This section describes how to build an image in Docker-in-Docker mode and by using kaniko, and explains how ACS capabilities are used in each build procedure. For more information about the sample project used in this section, see Java demo.
Build an image in Docker-in-Docker mode
Modify the values.yaml file to update the runner configuration.
You can also modify the configuration of a runner by editing the gitlab-runner ConfigMap in the same namespace as the runner. However, we recommend that you run the helm upgrade command to update the runner configuration, as shown after the following code block.
...
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "registry.cn-hangzhou.aliyuncs.com/acs-demo-ns/docker:27-dind"
        privileged = true
        cpu_limit = "2"
        cpu_request = "2"
        memory_limit = "4Gi"
        memory_request = "4Gi"
        ephemeral_storage_request = "30Gi"
        ephemeral_storage_limit = "30Gi"
        [[runners.kubernetes.volumes.empty_dir]]
          name = "docker-certs"
          mount_path = "/certs/client"
          medium = "Memory"
      [runners.feature_flags]
        FF_USE_POD_ACTIVE_DEADLINE_SECONDS = true
...
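After you update the values.yaml file, apply the new configuration to the existing release. The following command assumes the release name, namespace, and chart version used in the installation procedure of this topic.
helm upgrade --namespace default gitlab-runner -f values.yaml --version 0.68.1 gitlab/gitlab-runner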
The following table describes the parameters in the preceding code block.
Parameter
Description
image
The base image used to build an image in Docker-in-Docker mode. In this example, the dind image provided by the Docker community is used.
privileged
Specifies whether to enable the privileged mode for the runner pod. In this example, the value is set to true.
Note: To perform this operation, you must submit a ticket to enable the privileged mode for ACS pods.
cpu_request and cpu_limit
The CPU specification of the container. By default, the minimum CPU specification supported by ACS is 0.25 vCores. You can adjust the CPU specification based on your business requirements.
memory_request and memory_limit
The memory specification of the container. By default, the minimum memory specification supported by ACS is 0.5 GiB. You can adjust the memory specification based on your business requirements.
ephemeral_storage_request and ephemeral_storage_limit
The size of ephemeral storage. By default, ACS provides 30 GiB of ephemeral storage free of charge. When ACS creates a pod, the image is automatically cached in ephemeral storage, which accelerates subsequent job startups. If you require more ephemeral storage space, adjust the configuration based on your business requirements.
FF_USE_POD_ACTIVE_DEADLINE_SECONDS
Specifies whether to enable the activeDeadlineSeconds feature gate. If you enable the feature gate, the activeDeadlineSeconds parameter of the job pod is set to the timeout period of the job, which terminates pods that are disassociated due to unknown causes.
For more information about other parameters, see Kubernetes executor.
Create a file named .gitlab-ci.yml to configure the build procedure.
In this example, the dind image is not declared as a service container. Instead, the dockerd process is launched directly in the build container.
image: registry.cn-hangzhou.aliyuncs.com/acs-demo-ns/docker:27-dind
stages:
  - build
variables:
  # When dockerd runs inside the job, instruct Docker to talk to the daemon
  # over a network connection instead of the default /var/run/docker.sock socket.
  DOCKER_HOST: tcp://localhost:2376
  # Specify where Docker creates the certificates. Docker creates them
  # automatically on boot and creates /certs/client to share between the
  # daemon and the job, thanks to the volume mount in config.toml.
  DOCKER_TLS_CERTDIR: "/certs"
  # These are usually specified by the entrypoint. However, the Kubernetes
  # executor does not run entrypoints.
  # https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4125
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
before_script:
  - echo "before task"
  # Launch dockerd in the background and wait for the daemon to initialize.
  - sh /usr/local/bin/dockerd-entrypoint.sh &
  - sleep 10s
build_image:
  stage: build
  tags:
    - demo
  script:
    - docker info
    - docker build --network host -t demo:v1.0.0 -f Dockerfile .
    - docker push demo:v1.0.0
The following table describes the steps in the build procedure.
Step
Description
before_script
This step launches the dockerd process and then waits for the daemon to finish initializing.
build_image
This step builds and pushes the image.
Important: The image build uses the host network mode so that the dockerd process can communicate with the external network through the container network.
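The build commands in the preceding pipeline reference a Dockerfile in the repository root. Use the Dockerfile provided by the Java demo project; the following is only a hypothetical minimal example, and the base image and JAR path are assumptions rather than values taken from the demo project.
# Hypothetical minimal Dockerfile for a Java application.
# Replace the base image and JAR path with the values used by your project.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]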
Use kaniko to build an image in non-privileged mode
kaniko is an open source tool that builds container images in environments that do not have Docker installed and does not require the privileged mode. kaniko is suitable for scenarios where access to Docker is restricted or where builds run in a Kubernetes environment.
Configure the values.yaml file.
...
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        ephemeral_storage_request = "30Gi"
        ephemeral_storage_limit = "30Gi"
      [runners.feature_flags]
        FF_USE_POD_ACTIVE_DEADLINE_SECONDS = true
...
Create a file named .gitlab-ci.yml to configure the build procedure.
Important: GitLab CI executes job scripts through a shell, which means that the base image must provide the sh command. In this example, the debug version of the kaniko executor image is used because it includes a shell.
stages:
  - build
variables:
  KUBERNETES_POD_LABELS_1: "alibabacloud.com/compute-class=general-purpose"
  KUBERNETES_POD_LABELS_2: "alibabacloud.com/compute-qos=best-effort"
build_image:
  stage: build
  image:
    name: registry.cn-hangzhou.aliyuncs.com/acs-demo-ns/kaniko-executor:v1.21.0-amd64-debug
    entrypoint: [""]
  tags:
    - demo
  script:
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
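The --destination flag pushes the image to ${CI_REGISTRY_IMAGE}. If your registry requires authentication, kaniko reads credentials from /kaniko/.docker/config.json. The original example does not include this step, so the following before_script for the build_image job is an assumption based on the common GitLab pattern; adapt it to your registry.
before_script:
  # Assumption: write registry credentials for kaniko from GitLab's predefined CI/CD variables.
  - mkdir -p /kaniko/.docker
  - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json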
Reduce CI resource costs based on ACS scaling policies
You can use the BestEffort pods provided by ACS to reduce the resource costs of job execution in your CI/CD pipeline. Select one of the following methods.
Configure the BestEffort QoS class for all job pods created by the runner or for the pods of a specific project.
We recommend that you configure the BestEffort QoS class for all job pods. You can also select other QoS classes based on your business requirements.
Configure the runners parameter to specify the BestEffort QoS class for all job pods created by the runner.
...
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        ...
        pod_labels_overwrite_allowed = ".*" # Allow project pods to overwrite these labels by using CI/CD variables.
        [runners.kubernetes.pod_labels]
          "app" = "acs-gitlab-runner"
          "alibabacloud.com/compute-class" = "general-purpose"
          "alibabacloud.com/compute-qos" = "best-effort" # Configure the BestEffort QoS class for job pods.
...
Alternatively, you can configure pod QoS classes for each repository by adding the variables keyword to the .gitlab-ci.yml file, as shown in the example below. For more information, see the Use kaniko to build an image in non-privileged mode section in this topic.
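For example, the following .gitlab-ci.yml snippet, taken from the kaniko example earlier in this topic, overwrites the pod labels of the jobs in a repository so that they run as BestEffort pods:
variables:
  KUBERNETES_POD_LABELS_1: "alibabacloud.com/compute-class=general-purpose"
  KUBERNETES_POD_LABELS_2: "alibabacloud.com/compute-qos=best-effort"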
Configure a ResourcePolicy to ensure elasticity.
You can use a ResourcePolicy to define a custom scheduling policy that reduces the resource costs of your CI jobs. With the following policy, CI jobs are preferentially scheduled to cost-effective BestEffort pods. When BestEffort resources are out of stock in a region, the system automatically falls back to Default pods to ensure the continuity and availability of your CI jobs.
apiVersion: scheduling.alibabacloud.com/v1alpha1
kind: ResourcePolicy
metadata:
  name: rp-demo
  namespace: default
spec:
  selector: # The label selector used to select pods. In this example, the ResourcePolicy applies to pods that have the app=acs-gitlab-runner label.
    app: acs-gitlab-runner
  units: # The priorities of different resource types for pod scheduling.
  - resource: acs # Prioritize BestEffort resources.
    podLabels:
      alibabacloud.com/compute-class: general-purpose
      alibabacloud.com/compute-qos: best-effort
  - resource: acs # Fall back to Default resources when the preceding resources are out of stock.
    podLabels:
      alibabacloud.com/compute-class: general-purpose
      alibabacloud.com/compute-qos: default
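Save the manifest to a file, for example rp-demo.yaml (an assumed file name), and create the ResourcePolicy in the cluster:
kubectl apply -f rp-demo.yaml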
FAQ
How do I delete residual pods after the manager pod is recreated or restarted?
Cause
The manager pod was recreated or restarted with a different runner token. The token is the identifier of a runner, so the new manager pod cannot manage the job pods that were created with the original token.
Solution
Go to the GitLab console, recreate the runner, and specify the new runner token in the runner configuration file.
If a registration token is used to register the runner during startup, we recommend that you mount a Secret to the manager pod and store the token generated during the initial runner registration in the Secret, as shown in the example below. This ensures that the same token is used each time the manager pod is restarted.
We also recommend that you enable the FF_USE_POD_ACTIVE_DEADLINE_SECONDS feature gate. This allows you to specify the time to live (TTL) of each runner worker, which works like a resource reclaim policy.
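The following is a minimal sketch of the Secret-based approach. It assumes the default namespace and a Secret named gitlab-runner-token, and it uses the runner-registration-token and runner-token keys expected by the GitLab Runner Helm chart's runners.secret value; verify the key names against your chart version.
kubectl create secret generic gitlab-runner-token \
  --namespace default \
  --from-literal=runner-registration-token="" \
  --from-literal=runner-token="<token-from-initial-registration>"
Then reference the Secret in values.yaml instead of setting runnerToken directly:
runners:
  secret: gitlab-runner-token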