
Container Service for Kubernetes: Update Expiring Certificates for ACK Dedicated Clusters

Last Updated: Mar 11, 2026

This topic describes how to update expiring certificates for ACK dedicated clusters. You can update all node certificates using the console or kubectl, or you can manually update the certificates for master nodes and worker nodes.

Note

ACK automatically updates master node certificates in ACK managed clusters. Manual intervention is not required.

Update All Node Certificates Using the Console

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. To the right of the cluster whose certificates are expiring, click Update Certificate. The Update Certificate page appears.

    Note

    The Update Certificate option appears if the cluster certificates are due to expire within approximately two months.


  3. On the Update Certificate page, click Update Certificate. Follow the on-screen instructions to update the certificates.

    After the cluster certificates are updated, the following changes occur:

    • The Update Certificate page shows the message The certificate has been updated.

    • On the Clusters page, the Update Certificate prompt is no longer displayed for the destination cluster.

Automatically Update All Node Certificates Using kubectl

Update Certificates

On any master node in the cluster, run the following command to update the certificates for all nodes in the cluster.

curl http://aliacs-k8s-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/public/cert-update/renew.sh | bash
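If you want to confirm how close the current certificates are to expiry before or after running the script, openssl can report the expiry date. A minimal sketch using a throwaway certificate; on a real master node you would point -in at the actual certificate file, for example /etc/kubernetes/pki/apiserver.crt (path assumed):

```shell
# Generate a throwaway self-signed certificate as a stand-in; on a master
# node, inspect the real certificate file instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 60 -subj "/CN=demo" 2>/dev/null

# Print the expiry date (notAfter=...) of the certificate.
openssl x509 -enddate -noout -in /tmp/demo.crt

# Exit status 0 means the certificate remains valid for at least 30 more days.
openssl x509 -checkend $((30*24*3600)) -noout -in /tmp/demo.crt && echo "still valid"
```

The -checkend argument is in seconds, so 30*24*3600 checks for 30 days of remaining validity.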

Verify the Result

You must be connected to the cluster using kubectl. For more information, see Connect to a Kubernetes cluster using kubectl.
  1. Run the following command to view the status of the master nodes and worker nodes in the cluster.

    kubectl get nodes


  2. Run the following command. The certificates are updated when the value of COMPLETIONS for the master nodes is 1 and the value of COMPLETIONS for the worker nodes matches the number of worker nodes in the cluster.

    kubectl -n kube-system get job

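The COMPLETIONS comparison can also be scripted. A minimal sketch that parses output in the shape kubectl get job prints; the job names below are illustrative stand-ins, not names your cluster necessarily uses:

```shell
# Sample output in the shape `kubectl -n kube-system get job` prints
# (job names and timings are illustrative).
sample='NAME          COMPLETIONS   DURATION   AGE
cert-job-2    1/1           35s        2m
cert-node-2   3/3           60s        2m'

# A job is finished when the two numbers in COMPLETIONS match (e.g. 3/3).
printf '%s\n' "$sample" |
  awk 'NR > 1 { split($2, c, "/"); print $1, (c[1] == c[2] ? "complete" : "pending") }'
```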

Manually Update Master Node Certificates

  1. In any path, copy the following content to create a file named job-master.yml.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ${jobname}
      namespace: kube-system
    spec:
      backoffLimit: 0
      completions: 1
      parallelism: 1
      template:
        spec:
          activeDeadlineSeconds: 3600
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                    - ${hostname}
          containers:
          - command:
            - /renew/upgrade-k8s.sh
            - --role
            - master
            image: registry.cn-hangzhou.aliyuncs.com/acs/cert-rotate:v1.0.0
            imagePullPolicy: Always
            name: ${jobname}
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /alicoud-k8s-host
              name: ${jobname}
          hostNetwork: true
          hostPID: true
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
          volumes:
          - hostPath:
              path: /
              type: Directory
            name: ${jobname}
  2. Obtain information about the cluster, such as the number of master nodes and the node names.

    • Method 1: Use the command line

      Run the following command:

      kubectl get nodes


    • Method 2: Use the console

      1. Log on to the ACK console. In the left navigation pane, click Clusters.

      2. On the Clusters page, click the name of the destination cluster or click View Details in the Actions column.

      3. In the navigation pane on the left of the cluster details page, choose Nodes > Nodes to view the number of master nodes, their names, IP addresses, and instance IDs.

  3. Run the following command to replace the ${jobname} and ${hostname} variables in the job-master.yml file.

    sed 's/${jobname}/cert-job-2/g; s/${hostname}/hostname/g' job-master.yml > job-master2.yml

    Where:

    • ${jobname}: the name of the job. Set this to cert-job-2.

    • ${hostname}: the name of the master node in the cluster. Replace hostname with the master node name that you obtained in Step 2.

  4. Run the following command to create the job.

    kubectl create -f job-master2.yml
  5. Run the following command to view the job status. The certificate update is complete when the value of COMPLETIONS is 1.

    kubectl -n kube-system get job
  6. Repeat Step 3 to Step 5 to update the certificates for all master nodes.
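Steps 3 to 5 can be wrapped in a small loop over the master node names from Step 2. A sketch with placeholder node names and a stand-in template file; in practice, substitute the real job-master.yml and your own node names:

```shell
# Stand-in for job-master.yml that contains the two variables to substitute;
# in practice, use the file you created in Step 1.
printf 'name: ${jobname}\nhost: ${hostname}\n' > /tmp/job-master.yml

i=2
for host in master-01 master-02 master-03; do   # placeholder master node names
  # The backslash keeps the shell from expanding ${jobname}/${hostname},
  # so sed receives them as literal patterns.
  sed "s/\${jobname}/cert-job-$i/g; s/\${hostname}/$host/g" \
    /tmp/job-master.yml > "/tmp/job-master$i.yml"
  # In practice you would now run: kubectl create -f /tmp/job-master$i.yml
  # and wait for COMPLETIONS to reach 1 before moving on.
  i=$((i+1))
done
cat /tmp/job-master2.yml
```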

Manually Update Worker Node Certificates

  1. In any path, copy the following content to create a file named job-node.yml.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ${jobname}
      namespace: kube-system
    spec:
      backoffLimit: 0
      completions: ${nodesize}
      parallelism: ${nodesize}
      template:
        spec:
          activeDeadlineSeconds: 3600
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: job-name
                    operator: In
                    values:
                    - ${jobname}
                topologyKey: kubernetes.io/hostname
          containers:
          - command:
            - /renew/upgrade-k8s.sh
            - --role
            - node
            - --rootkey
            - ${key}
            image: registry.cn-hangzhou.aliyuncs.com/acs/cert-rotate:v1.0.0
            imagePullPolicy: Always
            name: ${jobname}
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /alicoud-k8s-host
              name: ${jobname}
          hostNetwork: true
          hostPID: true
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          volumes:
          - hostPath:
              path: /
              type: Directory
            name: ${jobname}
    Note

    If your worker nodes have taints, you must add tolerations for them to the job-node.yml file. To do this, add the following content between securityContext: {} and volumes:. Add one toleration entry for each distinct taint on the worker nodes.

          tolerations:
          - effect: NoSchedule
            key: ${key}
            operator: Equal
            value: ${value}

    To obtain the values for ${key} and ${value}, perform the following steps:

    1. In any path, copy the following content to create a file named taint.tml.

      {{printf "%-50s %-12s\n" "Node" "Taint"}}
      {{- range .items}}
      {{- if $taint := (index .spec "taints") }}
      {{- .metadata.name }}{{ "\t" }}
      {{- range $taint }}
      {{- .key }}={{ .value }}:{{ .effect }}{{ "\t" }}
      {{- end }}
      {{- "\n" }}
      {{- end}}
      {{- end}}
    2. Run the following command to query the ${key} and ${value} values for the worker nodes that have taints.

      kubectl get nodes -o go-template-file="taint.tml"

  2. Run the following command to obtain the CAKey of the cluster.

    sed '1d' /etc/kubernetes/pki/ca.key | base64 -w 0
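This command drops the first line of the key file and base64-encodes the remainder onto a single line (base64 -w 0 disables line wrapping). A sketch with a throwaway stand-in file; on a master node the input is /etc/kubernetes/pki/ca.key:

```shell
# Stand-in key file; on a master node, read /etc/kubernetes/pki/ca.key instead.
printf -- '-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA\n-----END RSA PRIVATE KEY-----\n' \
  > /tmp/demo-ca.key

# Drop line 1 and emit the rest as one unwrapped base64 line.
key=$(sed '1d' /tmp/demo-ca.key | base64 -w 0)
printf '%s\n' "$key"
```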
  3. Run the following command to replace the ${jobname}, ${nodesize}, and ${key} variables in the job-node.yml file.

    sed 's/${jobname}/cert-node-2/g; s/${nodesize}/nodesize/g; s/${key}/key/g' job-node.yml > job-node2.yml

    Where:

    • ${jobname}: the name of the job. Set this to cert-node-2.

    • ${nodesize}: the number of worker nodes in the cluster. Replace nodesize with this number. You can obtain it by running kubectl get nodes and counting the worker nodes.

    • ${key}: the CAKey of the cluster. Replace key with the CAKey that you obtained in Step 2 of Manually Update Worker Node Certificates.

  4. Run the following command to create the job.

    kubectl create -f job-node2.yml
  5. Run the following command to view the job status. The certificate update is complete when the value of COMPLETIONS matches the number of worker nodes in the cluster.

    kubectl -n kube-system get job
