
Container Service for Kubernetes: Update Expiring Certificates for ACK Dedicated Clusters

Last Updated: Mar 26, 2026

This topic describes how to update expiring certificates for ACK dedicated clusters. You can update all node certificates using the console or kubectl, or you can manually update the certificates for master nodes and worker nodes.

Note

ACK automatically updates master node certificates in ACK managed clusters. Manual intervention is not required.

Prerequisites

Before you begin, ensure that you can connect to the cluster by using kubectl and that you can log on to the master nodes. The following procedures require both.

Update all node certificates using the console

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. To the right of the cluster whose certificates are expiring, click Update Certificate. The Update Certificate page appears.

    Note

    The Update Certificate option appears if the cluster certificates are due to expire within approximately two months.

    Update Certificate page

  3. On the Update Certificate page, click Update Certificate. Follow the on-screen instructions to update the certificates. After the certificates are updated:

    • The Update Certificate page shows the message "The certificate has been updated."

    • The Update Certificate prompt is no longer displayed for the cluster on the Clusters page.

Update all node certificates using kubectl

Run the following command on any master node in the cluster to update certificates for all nodes.

curl http://aliacs-k8s-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/public/cert-update/renew.sh | bash

Verify the update

  1. Run the following command to view the status of the master nodes and worker nodes.

    kubectl get nodes

    kubectl get nodes output

  2. Run the following command to check the Job completion status. The certificates are updated when the COMPLETIONS value for master nodes is 1 and the COMPLETIONS value for worker nodes matches the number of worker nodes in the cluster.

    kubectl -n kube-system get job

    kubectl get job output
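
As an optional spot check that is not part of the official procedure, you can print a certificate's new expiry date with openssl. The path below is the conventional location of the API server certificate on a master node and is an assumption; adjust it for your cluster.

```shell
# Print the "notAfter" (expiry) date of a certificate file.
# /etc/kubernetes/pki/apiserver.crt is an assumed default path on a master node.
CERT="${CERT:-/etc/kubernetes/pki/apiserver.crt}"
if [ -f "$CERT" ]; then
  openssl x509 -noout -enddate -in "$CERT"
else
  echo "certificate not found at $CERT" >&2
fi
```

If the printed date is well in the future, the renewal took effect on that node.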

Manually update master node certificates

Step 1: Create the Job manifest

In any directory, create a file named job-master.yml with the following content:

apiVersion: batch/v1
kind: Job
metadata:
  name: ${jobname}
  namespace: kube-system
spec:
  backoffLimit: 0
  completions: 1
  parallelism: 1
  template:
    spec:
      activeDeadlineSeconds: 3600
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ${hostname}
      containers:
      - command:
        - /renew/upgrade-k8s.sh
        - --role
        - master
        image: registry.cn-hangzhou.aliyuncs.com/acs/cert-rotate:v1.0.0
        imagePullPolicy: Always
        name: ${jobname}
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /alicoud-k8s-host
          name: ${jobname}
      hostNetwork: true
      hostPID: true
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /
          type: Directory
        name: ${jobname}

Step 2: Get master node information

Get the number and names of master nodes in the cluster using one of the following methods:

  • kubectl: Run kubectl get nodes.

    kubectl get nodes output

  • Console: Log on to the ACK console. On the Clusters page, click the cluster name or click View Details in the Actions column. In the left navigation pane of the cluster details page, choose Nodes > Nodes to view the number of master nodes, their names, IP addresses, and instance IDs.

Step 3: Set variables and create the Job

  1. Replace the ${jobname} and ${hostname} variables in job-master.yml, then save the output to job-master2.yml. Before you run the following command, replace <master-node-name> with the name of the master node that you obtained in step 2.

    sed 's/${jobname}/cert-job-2/g; s/${hostname}/<master-node-name>/g' job-master.yml > job-master2.yml

    The placeholders take the following values:

    • ${jobname}: the Job name. Set this to cert-job-2.
    • ${hostname}: the name of the master node obtained in step 2.
  2. Create the Job.

    kubectl create -f job-master2.yml
  3. Check the Job status. The certificate update is complete when the COMPLETIONS value is 1.

    kubectl get job -nkube-system
  4. Repeat steps 1–3 for each master node in the cluster.

    Master node certificate update complete
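
The per-node repetition above can be scripted. The following sketch only generates the manifests with sed; the node names in MASTERS are hypothetical examples, and the kubectl commands are left as comments because you should confirm each Job's COMPLETIONS value before moving on to the next node.

```shell
# Generate one Job manifest per master node from the job-master.yml template.
# MASTERS holds example node names; substitute the names from Step 2.
MASTERS="cn-hangzhou.192.168.0.1 cn-hangzhou.192.168.0.2 cn-hangzhou.192.168.0.3"

if [ ! -f job-master.yml ]; then
  # Stand-in template so this sketch runs anywhere; on the cluster,
  # job-master.yml is the manifest you created in Step 1.
  printf 'name: ${jobname}\nhost: ${hostname}\n' > job-master.yml
fi

i=0
for node in $MASTERS; do
  i=$((i + 1))
  sed "s/\${jobname}/cert-job-$i/g; s/\${hostname}/$node/g" job-master.yml > "job-master-$i.yml"
  # kubectl create -f "job-master-$i.yml"
  # kubectl get job -nkube-system   # wait until COMPLETIONS is 1 before the next node
done
```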

Manually update worker node certificates

Step 1: Create the Job manifest

In any directory, create a file named job-node.yml with the following content:

apiVersion: batch/v1
kind: Job
metadata:
  name: ${jobname}
  namespace: kube-system
spec:
  backoffLimit: 0
  completions: ${nodesize}
  parallelism: ${nodesize}
  template:
    spec:
      activeDeadlineSeconds: 3600
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: job-name
                operator: In
                values:
                - ${jobname}
            topologyKey: kubernetes.io/hostname
      containers:
      - command:
        - /renew/upgrade-k8s.sh
        - --role
        - node
        - --rootkey
        - ${key}
        image: registry.cn-hangzhou.aliyuncs.com/acs/cert-rotate:v1.0.0
        imagePullPolicy: Always
        name: ${jobname}
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /alicoud-k8s-host
          name: ${jobname}
      hostNetwork: true
      hostPID: true
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      volumes:
      - hostPath:
          path: /
          type: Directory
        name: ${jobname}
Note

If worker nodes have taints, the Job pods cannot be scheduled onto those nodes unless matching tolerations are added. Insert the following block between securityContext: {} and volumes: in job-node.yml. Keep the tolerations: key once and add one list entry (the lines starting with - effect:) for each distinct taint:

      tolerations:
      - effect: NoSchedule
        key: ${key}
        operator: Equal
        value: ${value}

To get the taint key and value for each worker node:

  1. Create a file named taint.tml with the following content:

     {{printf "%-50s %-12s\n" "Node" "Taint"}}
     {{- range .items}}
     {{- if $taint := (index .spec "taints") }}
     {{- .metadata.name }}{{ "\t" }}
     {{- range $taint }}
     {{- .key }}={{ .value }}:{{ .effect }}{{ "\t" }}
     {{- end }}
     {{- "\n" }}
     {{- end}}
     {{- end}}
  2. Run the following command to query the taint keys and values for each worker node:

       kubectl get nodes -o go-template-file="taint.tml"

    Worker node taint query output

Step 2: Get the cluster CA key

Run the following command on a master node to get the cluster CA key. The command removes the first line of the key file and base64-encodes the remainder as a single line:

sed '1d' /etc/kubernetes/pki/ca.key | base64 -w 0
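
As a hedged local illustration of what this pipeline does, the snippet below feeds it a made-up two-line stand-in for the key file (not real key material): sed '1d' drops the first line, and base64 -w 0 encodes the rest without line wrapping.

```shell
# Illustration only: the input is a fake file standing in for
# /etc/kubernetes/pki/ca.key. The first line is dropped, the rest is
# base64-encoded as a single unwrapped line.
printf -- '-----BEGIN RSA PRIVATE KEY-----\nMIIFakeKeyMaterial\n' | sed '1d' | base64 -w 0
```

Note that the -w option is specific to GNU coreutils base64; on systems without it, the output is unwrapped by default.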

Step 3: Set variables and create the Job

  1. Replace the ${jobname}, ${nodesize}, and ${key} variables in job-node.yml, then save the output to job-node2.yml. Before you run the following command, replace <nodesize> with the number of worker nodes and <ca-key> with the CA key that you obtained in step 2. The base64-encoded key can contain / characters, so the ${key} substitution uses | as the sed delimiter.

    sed 's/${jobname}/cert-node-2/g; s/${nodesize}/<nodesize>/g; s|${key}|<ca-key>|g' job-node.yml > job-node2.yml

    The placeholders take the following values:

    • ${jobname}: the Job name. Set this to cert-node-2.
    • ${nodesize}: the number of worker nodes in the cluster.
    • ${key}: the CA key obtained in step 2.
  2. Create the Job.

    kubectl create -f job-node2.yml
  3. Check the Job status. The certificate update is complete when the COMPLETIONS value matches the number of worker nodes in the cluster.

    kubectl get job -nkube-system

    Worker node certificate update job status
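
Before creating the Job, a quick local check (not part of the official procedure) can confirm that no unreplaced ${...} placeholders remain in the generated manifest, which would otherwise make the Job fail:

```shell
# Sanity check: ensure the generated manifest contains no leftover
# ${...} placeholders from the template.
FILE=job-node2.yml
if [ ! -f "$FILE" ]; then
  echo "$FILE not found; generate it first" >&2
elif grep -q '\${' "$FILE"; then
  echo "unreplaced placeholders remain in $FILE:" >&2
  grep '\${' "$FILE"
else
  echo "$FILE looks fully substituted"
fi
```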