Container Service for Kubernetes: Use ACK One GitOps and Argo Rollouts to perform canary releases

Last Updated: Jun 06, 2024

ACK One integrates the GitOps capabilities of Argo CD and works with Argo Rollouts, a progressive delivery component, to support automated canary releases based on Git commits. This topic describes how to use ACK One GitOps and Argo Rollouts to perform canary releases.

Terms

GitOps

GitOps is a framework that uses Git repositories to manage application templates and implement continuous deployment for applications. In the GitOps framework, Git is used as the single source of truth to continuously deploy new application configurations. For more information about GitOps, see GitOps overview.

Argo Rollouts

Argo Rollouts is a Kubernetes controller that provides advanced deployment capabilities, such as blue-green releases, canary releases, and progressive delivery. For more information about Argo Rollouts, see What is Argo Rollouts?

Canary release

A canary release is a deployment strategy that slowly rolls out a new application version to a small subset of users before rolling it out to the entire production environment. To perform a canary release, the application must support traffic control, for example through an Ingress controller.
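
For example, with the NGINX Ingress controller (used later in this topic), traffic splitting is implemented through canary annotations on a second Ingress. The following manifest is only an illustrative sketch of such a canary Ingress; the host and service name come from this topic's example, the port and weight are assumptions, and Argo Rollouts creates and updates this Ingress for you when trafficRouting.nginx is configured, so you do not write it by hand:

```yaml
# Illustrative sketch: a canary Ingress that sends 20% of traffic
# to the new application version. Managed by Argo Rollouts, not hand-written.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rollouts-demo-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"     # mark this Ingress as the canary
    nginx.ingress.kubernetes.io/canary-weight: "20" # percentage of traffic for the canary
spec:
  ingressClassName: nginx
  rules:
  - host: rollouts-demo.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rollouts-demo-canary
            port:
              number: 80   # assumed Service port for illustration
```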

Benefits of canary releases

With the canary release strategy, you can verify an application version in the production environment instead of a staging environment. If an error occurs in the new version, only the small subset of users routed to the new version is affected, and you can quickly roll back by redirecting traffic to the earlier version.

Canary release workflow

(figure: canary release workflow)

Usage notes

  • If you want to use GitHub repositories, we recommend that you do not create your ACK cluster in a region in the Chinese mainland. If you have already created an ACK cluster in a region in the Chinese mainland, use a different Git service provider.

  • In this example, the Fleet instance and the associated ACK cluster are deployed in the China (Hong Kong) region.

Step 1: Deploy the Argo Rollouts component in the ACK cluster

Run the following commands to deploy the Argo Rollouts component in the ACK cluster. For more information about how to install the Argo Rollouts component, see Controller Installation.

kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

Step 2: Deploy the ack-arms-prometheus component in the ACK cluster

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Operations > Add-ons.

  3. On the Add-ons page of the ACK console, click the Logs and Monitoring tab and then find ack-arms-prometheus.

    • If Installed is displayed in the card, the ack-arms-prometheus component was already installed when the ACK cluster was created.

    • If Install is displayed in the card, click Install to install the component in the ACK cluster.

Step 3: Use ACK One GitOps to deploy an application

You can use GitOps to deploy applications in one of the following ways.

  • Use the Argo CD CLI to deploy applications. The following example shows how to deploy applications by using the Argo CD CLI.

  • Use the GitOps console to deploy applications. For more information, see Work with GitOps.

  1. Run the following command to add a Git repository:

    argocd repo add https://github.com/AliyunContainerService/gitops-demo.git --name gitops-demo

    Expected output:

    Repository 'https://github.com/AliyunContainerService/gitops-demo.git' added
  2. Run the following command to query Git repositories:

    argocd repo list

    Expected output:

    TYPE  NAME         REPO                                                       INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE  PROJECT
    git   gitops-demo  https://github.com/AliyunContainerService/gitops-demo.git  false     false  false  false  Successful
  3. Run the following command to query clusters:

    argocd cluster list

    Expected output:

    SERVER                          NAME                                                                 VERSION  STATUS      MESSAGE                                                  PROJECT
    https://192.168.XX.XX:6443      c76073b011afb4de2a8****-ack-gitops-demo-192-10-110-0-0-16  1.26+    Successful
    https://kubernetes.default.svc  in-cluster                                                                    Unknown     Cluster has no applications and is not being monitored.
  4. Run the following command to create an application (an Argo CD Application resource):

    argocd app create rollouts-demo --repo https://github.com/AliyunContainerService/gitops-demo.git --project default --sync-policy automated --revision rollouts --path . --dest-namespace default --dest-server https://192.168.XX.XX:6443
  5. Run the following command to query applications:

    argocd app list

    Expected output:

    NAME           CLUSTER                     NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                            PATH  TARGET
    rollouts-demo  https://192.168.XX.XX:6443  default    default  Synced  Healthy  Auto        <none>      https://github.com/AliyunContainerService/gitops-demo.git  .     rollouts
  6. Run the following command to query the rollout status:

    kubectl argo rollouts get rollout rollouts-demo --watch

    Expected output:

    (figure: rollout status)
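
The argocd app create command in step 4 above corresponds to an Argo CD Application resource. The following manifest is a sketch of the equivalent declarative form; the argocd namespace and the masked destination server address are assumptions based on this topic's example:

```yaml
# Sketch of the Application equivalent to the argocd app create command above.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rollouts-demo
  namespace: argocd          # assumed Argo CD installation namespace
spec:
  project: default
  source:
    repoURL: https://github.com/AliyunContainerService/gitops-demo.git
    targetRevision: rollouts # branch used in the example
    path: .
  destination:
    server: https://192.168.XX.XX:6443   # masked address from the example
    namespace: default
  syncPolicy:
    automated: {}            # corresponds to --sync-policy automated
```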

Step 4: Perform a canary release

When you use Argo Rollouts to perform a canary release, you can check whether the new application version works as expected at each step, and then increase the traffic weight routed to the new version, or route all traffic to it, based on the results. You can perform this check manually, or use Managed Service for Prometheus to collect performance metrics from the new version and automate the decision.

Manually check the new application version

  1. Modify the rollout.yaml file based on the following sample code, and save and submit the file:

    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    metadata:
      name: rollouts-demo
    spec:
      replicas: 4
      strategy:
        canary:            # You do not need to configure metric analysis. 
          canaryService: rollouts-demo-canary
          stableService: rollouts-demo-stable
          trafficRouting:
            nginx:
              stableIngress: rollouts-demo-stable
          steps:
            - setWeight: 20
            - pause: {}          # Pause the canary release. The canary release continues only if it is resumed or promoted. 
            - setWeight: 40
            - pause: {duration: 5m}
            - setWeight: 60
            - pause: {duration: 5m}
            - setWeight: 80
            - pause: {duration: 5m}
      revisionHistoryLimit: 2
      selector:
        matchLabels:
          app: rollouts-demo
      template:
        metadata:
          labels:
            app: rollouts-demo
        spec:
          containers:
          - name: rollouts-demo
            image: argoproj/rollouts-demo:yellow   # Modify the image tag. 
            ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            resources:
              requests:
                memory: 32Mi
                cpu: 5m        
  2. Run the following command to query the rollout status:

    kubectl argo rollouts get rollout rollouts-demo --watch

    Expected output:

    (figure: rollout status)

    No duration is specified for the first pause in the rollout.yaml file. Therefore, the canary release can continue only if it is resumed or promoted.

  3. Resume the canary release.

    1. Modify the rollout.yaml file based on the following sample code, and save and submit the file:

            steps:
              - setWeight: 20
              - pause: {duration: 10s}   # Specify the duration of the pause.

    2. Run the following command to observe the resumed canary release:

      kubectl argo rollouts get rollout rollouts-demo --watch

      Expected output:

      (figure: rollout status)

    After the canary release is complete, the results in the following figure are returned.

    (figure: completed rollout)
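
The traffic progression defined by the steps in rollout.yaml above can be sketched as a simple schedule. This is a minimal illustration of how the weights advance, not Argo Rollouts code:

```python
# Minimal sketch: how the canary traffic weight evolves through the canary
# steps. A pause without a duration blocks until the rollout is promoted
# manually; timed pauses advance automatically.
steps = [
    {"setWeight": 20},
    {"pause": {}},                    # indefinite: wait for manual promotion
    {"setWeight": 40},
    {"pause": {"duration": "5m"}},
    {"setWeight": 60},
    {"pause": {"duration": "5m"}},
    {"setWeight": 80},
    {"pause": {"duration": "5m"}},
]

def weight_timeline(steps):
    """Return each canary weight with how its pause is resolved."""
    timeline = []
    weight = 0
    for step in steps:
        if "setWeight" in step:
            weight = step["setWeight"]
        elif "pause" in step:
            manual = "duration" not in step["pause"]
            timeline.append((weight, "manual" if manual else step["pause"]["duration"]))
    # Full promotion routes all traffic to the new version.
    timeline.append((100, "done"))
    return timeline

print(weight_timeline(steps))
# [(20, 'manual'), (40, '5m'), (60, '5m'), (80, '5m'), (100, 'done')]
```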

Use Managed Service for Prometheus to collect metrics

  1. Modify the rollout.yaml file based on the following sample code, and save and submit the file:

      strategy:
        canary:
          analysis:        # Configure metric analysis. 
            templates:     # Configure the metric analysis template. 
            - templateName: success-rate
        startingStep: 2   # Delay the analysis run until the setWeight: 40 step.
            args:
              - name: service-name
                value: rollouts-demo-stable
          canaryService: rollouts-demo-canary
          stableService: rollouts-demo-stable
          trafficRouting:
            nginx:
              stableIngress: rollouts-demo-stable
          steps:
            - setWeight: 20
            - pause: {duration: 5m}
            - setWeight: 40
            - pause: {duration: 5m}
            - setWeight: 60
            - pause: {duration: 5m}
            - setWeight: 80
            - pause: {duration: 5m}
      revisionHistoryLimit: 2
      selector:
        matchLabels:
          app: rollouts-demo
      template:
        metadata:
          labels:
            app: rollouts-demo
        spec:
          containers:
          - name: rollouts-demo
            image: argoproj/rollouts-demo:blue  # Specify a new image tag.         
  2. Obtain the endpoint of Managed Service for Prometheus.

    Managed Service for Prometheus is exposed through a Kubernetes Service address in the following format: {ServiceName}.{Namespace}.svc.{ClusterDomain}:{ServicePort}. In this example, the Service that exposes Managed Service for Prometheus is deployed in the arms-prom namespace. The name of the Service is arms-prom-server, the default ClusterDomain is cluster.local, and the Service port is 9090. Therefore, the endpoint of Managed Service for Prometheus is http://arms-prom-server.arms-prom.svc.cluster.local:9090.

  3. Add configurations related to Managed Service for Prometheus metric collection to the analysis.yaml file.

    Set the value of the address parameter to the endpoint of Managed Service for Prometheus that is obtained in the preceding step. In the following sample code, a condition is added to verify successful canary releases. If the ratio of successful canary requests reaches 95% within the metric collection cycle, the canary release is considered successful and automatically proceeds to the next phase. For more information about how to configure metrics, see Prometheus Metrics.

    apiVersion: argoproj.io/v1alpha1
    kind: AnalysisTemplate
    metadata:
      name: success-rate
    spec:
      args:
      - name: service-name
      metrics:
      - name: success-rate
        interval: 5m
        successCondition: result[0] >= 0.95   # Add a condition to verify successful canary releases. 
        failureLimit: 10
        provider:
          prometheus:   # Specify the endpoint of the metric collection Service. 
            address: http://arms-prom-server.arms-prom.svc.cluster.local:9090  # Specify the endpoint of Managed Service for Prometheus that is obtained in the preceding step. 
            query: |
              sum(irate(nginx_ingress_controller_requests{status=~"(1|2).*", canary!="", service="{{args.service-name}}"}[5m]))
              /
              sum(irate(nginx_ingress_controller_requests{canary!="", service="{{args.service-name}}"}[5m]))
  4. Make sure that Managed Service for Prometheus can continuously collect metric data.

    1. Run the following command to query the Ingress of the application:

      kubectl get ingress

      Expected output:

      NAME                                        CLASS   HOSTS                 ADDRESS         PORTS   AGE
      rollouts-demo-rollouts-demo-stable-canary   nginx   rollouts-demo.local   8.217.XX.XX   80      9h
      rollouts-demo-stable                        nginx   rollouts-demo.local   8.217.XX.XX   80      9h
    2. Add the following configuration to the local Hosts file:

      8.217.XX.XX  rollouts-demo.local
    3. Open another command prompt and run the following command to continuously send requests to the application:

       while true; do curl -s "http://rollouts-demo.local/" | grep -o "<title>.*</title>"; sleep 0.2; done
  5. Run the following command to query the rollout status:

    kubectl argo rollouts get rollout rollouts-demo --watch

    Expected output:

    (figure: rollout status with automated analysis)

  6. View the metrics collected by Managed Service for Prometheus.

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Operations > Prometheus Monitoring.

    3. On the Prometheus Monitoring page, click the Network Monitoring tab and then click the Ingresses tab.

      The following figure shows the metrics.

      (figure: Ingress metrics)

    After the canary release is complete, the results in the following figure are returned.

    (figure: completed automated release)
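
The pass/fail logic of the success-rate metric above can be illustrated with a short computation. This is a minimal sketch using made-up request counts; in the real rollout, the value comes from the PromQL query against Managed Service for Prometheus:

```python
# Sketch of the AnalysisTemplate logic: the PromQL query divides the rate of
# requests with 1xx/2xx status codes by the rate of all requests, and the
# canary step passes when the ratio satisfies successCondition >= 0.95.
# The request counts below are made-up examples, not real metric data.

def success_rate(status_counts):
    """Ratio of 1xx/2xx responses to all responses, mirroring the query."""
    ok = sum(n for status, n in status_counts.items() if status[0] in ("1", "2"))
    total = sum(status_counts.values())
    return ok / total

counts = {"200": 960, "201": 10, "404": 20, "500": 10}
rate = success_rate(counts)
print(round(rate, 3))   # 0.97
print(rate >= 0.95)     # successCondition result[0] >= 0.95 -> True
```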

(Optional) Step 5: Roll back the canary release

If your application encounters errors during the canary release, you can roll back to a stable version by modifying the rollout.yaml file as follows.

Modify the rollout.yaml file based on the following sample code, and save and submit the file:

    spec:
      containers:
      - name: rollouts-demo
        image: argoproj/rollouts-demo:yellow  # Specify the image tag of a stable application version, save the file, and then submit the file to the Git repository.

Expected output:

(figure: rollback result)
