This topic describes how to precheck and optimize CoreDNS before you update it. This topic also describes how to configure Container Service for Kubernetes (ACK) to automatically update CoreDNS.

Prerequisites

A kubectl client is connected to your cluster. For more information, see Connect to ACK clusters by using kubectl.

Precautions for updating CoreDNS

  • When ACK updates CoreDNS, ACK overwrites the YAML file of CoreDNS. ACK updates the coredns ConfigMap but does not change the number of CoreDNS pods.
  • If you modified the toleration, CPU and memory requests, or CPU and memory limits in the YAML file of CoreDNS, ACK overwrites the changes when it updates CoreDNS. To retain these changes in the YAML file of CoreDNS, you must manually update CoreDNS, or apply these changes to the YAML file of CoreDNS again after CoreDNS is automatically updated. For more information about how to manually upgrade CoreDNS, see Manually update CoreDNS.
  • If the load balancing mode of kube-proxy is set to IP Virtual Server (IPVS), all DNS queries within the cluster may fail or time out for up to 5 minutes after CoreDNS is updated. To avoid this issue, change the timeout period of UDP sessions before you update CoreDNS. For more information, see Change the UDP timeout period in IPVS mode.
  • ACK requires about 2 minutes to update CoreDNS. The actual duration varies based on the number of CoreDNS pods. If the newly provisioned CoreDNS pods cannot be scheduled or launched, Submit a ticket. ACK does not stop the existing CoreDNS pods during the update. Therefore, the update process does not interrupt DNS resolution services in the cluster. The update can be rolled back within 10 minutes after it starts.
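Before you update, it can help to record the current CoreDNS version so that you know what you are rolling back to. The following is a minimal sketch; the registry path is hypothetical, and in a live cluster you would feed in the image of the coredns Deployment, for example `kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}'`.

```shell
# Hypothetical helper: extract the version tag from a CoreDNS image
# reference such as "<registry>/coredns:<tag>".
coredns_version() {
  # Strip everything up to and including the last ":".
  echo "${1##*:}"
}

# Mocked image reference; in a cluster, substitute the jsonpath output.
coredns_version "registry.example.com/acs/coredns:v1.8.4"
```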

Enable the ready plug-in

If you manually updated CoreDNS to a version later than 1.5.0, you must check whether the ready plug-in is enabled in the coredns ConfigMap before ACK updates CoreDNS. If the ready field does not exist in the Corefile, add it to the ConfigMap. Otherwise, CoreDNS cannot start as expected after it is updated.

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
  4. In the left-side navigation pane of the details page, choose Configurations > ConfigMaps.
  5. In the upper part of the ConfigMap page, set Namespace to kube-system. Then, find the coredns ConfigMap and click Edit YAML in the Actions column.
  6. In the View in YAML panel, check whether the ready field exists. If the field does not exist, add the ready field and click OK.
    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health {
                lameduck 15s
            }
            ready # Add this line and make sure that the word "ready" is aligned with the word "kubernetes". 
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods verified
                fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf {
                max_concurrent 1000
            }
            cache 30
            loop
            log
            reload
            loadbalance
        }
  7. Run the following command to print the log of a CoreDNS pod and check whether the new configuration is loaded. In most cases, a CoreDNS pod requires about 30 seconds to hot reload the configuration. Replace coredns-78d4b8bd88-n6wjm with the name of a CoreDNS pod in your cluster.
    kubectl logs coredns-78d4b8bd88-n6wjm -n kube-system

    If the output contains plugin/reload, the new CoreDNS configuration is loaded.
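The Corefile check in step 6 can also be scripted. The following sketch uses a mocked Corefile snippet; in a live cluster you would obtain the real one with `kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'` and pipe it into the same grep.

```shell
# Mocked Corefile snippet; replace with the jsonpath output in a cluster.
corefile='.:53 {
    errors
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods verified
    }
}'

# The ready plug-in appears as a bare word on its own line in the Corefile.
if printf '%s\n' "$corefile" | grep -qE '^[[:space:]]*ready([[:space:]]|$)'; then
  echo "ready plug-in enabled"
else
  echo "ready plug-in missing"
fi
```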

Update CoreDNS

You can navigate to the Add-ons page of the ACK console and then update CoreDNS.

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and choose More > Manage System Components in the Actions column.
  4. On the Add-ons page, find CoreDNS and click Upgrade.

Change the UDP timeout period in IPVS mode

If kube-proxy runs in IPVS mode, DNS resolution may fail in the first 5 minutes after CoreDNS is updated due to the session persistence policy of IPVS. You can use one of the following methods to reduce the timeout period of UDP sessions in IPVS mode to 10 seconds. This way, fewer DNS resolution errors occur after CoreDNS is updated. If applications that use UDP are deployed in your cluster, evaluate the impact on these applications before you update CoreDNS. You can also Submit a ticket to request technical support.
Note If kube-proxy does not run in IPVS mode, you do not need to change the timeout period of UDP sessions. For more information about how to check the load balancing mode of kube-proxy, see View basic information.
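As a quick check of the load balancing mode, kube-proxy reports its mode on its metrics port (10249 by default) on each node; running `curl -s http://127.0.0.1:10249/proxyMode` on a node prints ipvs or iptables. The following offline sketch illustrates the resulting decision; the mode value is mocked and should be replaced with the value returned on your node.

```shell
# Mocked value; on a node, obtain it with:
#   curl -s http://127.0.0.1:10249/proxyMode
mode="ipvs"

# Only IPVS mode requires the UDP timeout change before the update.
if [ "$mode" = "ipvs" ]; then
  echo "IPVS mode: change the UDP timeout before you update CoreDNS"
else
  echo "not IPVS: no UDP timeout change is needed"
fi
```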

ACK clusters that run Kubernetes 1.18 or later

Use the ACK console

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
  4. In the left-side navigation pane of the details page, choose Configurations > ConfigMaps.
  5. On the ConfigMap page, select the kube-system namespace, find the kube-proxy-worker ConfigMap, and then click Edit YAML in the Actions column.
  6. In the View in YAML panel, add udpTimeout: 10s to the ipvs field and click OK.
    apiVersion: v1
    data:
      config.conf: |
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        kind: KubeProxyConfiguration
        # Irrelevant fields are not shown. 
        mode: ipvs
        # If the ipvs field does not exist, you must add the field. 
        ipvs:
          udpTimeout: 10s
  7. Recreate all of the pods named kube-proxy-worker.
    1. In the left-side navigation pane of the details page, choose Workloads > DaemonSets.
    2. On the DaemonSets page, find and click kube-proxy-worker.
    3. On the Pods tab of the kube-proxy-worker page, select a pod and choose More > Delete in the Actions column. In the message that appears, click Confirm.
      Repeat the preceding steps to delete all of the pods. After you delete the pods, the system automatically recreates the pods.
  8. Check whether the timeout period of UDP sessions is changed.
    1. Log on to an Elastic Compute Service (ECS) instance in your cluster and run the following command to install ipvsadm.

      ipvsadm is a tool that you can use to manage IPVS. For more information, see ipvsadm.

      yum install -y ipvsadm
    2. Run the following command and check the third value in the output:
      ipvsadm -L --timeout
      If the third value in the output is 10, the timeout period of UDP sessions is changed.
      Note After the timeout period of UDP sessions is changed, wait at least 5 minutes before you proceed.
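The check in step 8 can be scripted by extracting the last field of the timeout line. The sample line below mocks the output of `ipvsadm -L --timeout`; the exact wording may vary slightly between ipvsadm versions.

```shell
# Sample output line from `ipvsadm -L --timeout`; on a node, capture the
# real line instead of this mocked one.
sample='Timeout (tcp tcpfin udp): 900 120 10'

# The last field is the UDP session timeout in seconds.
udp=$(printf '%s\n' "$sample" | awk '{print $NF}')
echo "udp timeout: ${udp}s"
```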

Use the CLI

  1. Run the following command to modify the kube-proxy-worker ConfigMap:
    kubectl -n kube-system edit configmap kube-proxy-worker
  2. Add udpTimeout: 10s to the ipvs field of the kube-proxy-worker ConfigMap. Then, save the modification and exit.
    apiVersion: v1
    data:
      config.conf: |
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        kind: KubeProxyConfiguration
        # Irrelevant fields are not shown. 
        mode: ipvs
        # If the ipvs field does not exist, you must add the field. 
        ipvs:
          udpTimeout: 10s
  3. Recreate all of the pods named kube-proxy-worker.
    1. Run the following command to query the pods:
      kubectl -n kube-system get pod -o wide | grep kube-proxy-worker
    2. Run the following command to delete all of the pods that are returned in the preceding step. Then, the system recreates the pods named kube-proxy-worker.
      kubectl -n kube-system delete pod <kube-proxy-worker-****>
      Note Replace <kube-proxy-worker-****> with the name of a pod that is returned in the preceding step.
  4. Check whether the timeout period of UDP sessions is changed.
    1. Log on to an Elastic Compute Service (ECS) instance in your cluster and run the following command to install ipvsadm.

      ipvsadm is a tool that you can use to manage IPVS. For more information, see ipvsadm.

      yum install -y ipvsadm
    2. Run the following command and check the third value in the output:
      ipvsadm -L --timeout
      If the third value in the output is 10, the timeout period of UDP sessions is changed.
      Note After the timeout period of UDP sessions is changed, wait at least 5 minutes before you proceed.
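The per-pod deletion in step 3 can be done in one pass. The following sketch assumes the pods keep the default kube-proxy-worker name prefix; the pod names below are mocked, and in a live cluster the list would come from `kubectl -n kube-system get pod -o name`.

```shell
# Mocked pod list; in a cluster, replace the printf with:
#   kubectl -n kube-system get pod -o name
printf '%s\n' \
  pod/kube-proxy-worker-5xkpl \
  pod/coredns-78d4b8bd88-n6wjm \
  pod/kube-proxy-worker-9fh2c |
  grep kube-proxy-worker

# Pipe the filtered names into `xargs kubectl -n kube-system delete`
# to delete the pods; the DaemonSet recreates them automatically.
```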

ACK clusters that run Kubernetes 1.16 or earlier

kube-proxy in an ACK cluster that runs Kubernetes 1.16 or an earlier version does not support the udpTimeout parameter. To change the timeout period of UDP sessions, we recommend that you use Operation Orchestration Service (OOS) to run the following ipvsadm commands on all ECS instances in the cluster at the same time:
yum install -y ipvsadm
ipvsadm -L --timeout > /tmp/ipvsadm_timeout_old
ipvsadm --set 900 120 10
ipvsadm -L --timeout > /tmp/ipvsadm_timeout_new
diff /tmp/ipvsadm_timeout_old /tmp/ipvsadm_timeout_new
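For reference, the three arguments of `ipvsadm --set` are the TCP, TCP-FIN, and UDP session timeouts in seconds, so only the third (UDP) value changes here. The sketch below mocks the expected shape of the verification diff, assuming the common IPVS defaults of 900 120 300 before the change.

```shell
# Mocked before/after timeout lines; on a node these files are written by
# `ipvsadm -L --timeout` as shown in the commands above.
printf '%s\n' 'Timeout (tcp tcpfin udp): 900 120 300' > /tmp/ipvsadm_timeout_old
printf '%s\n' 'Timeout (tcp tcpfin udp): 900 120 10'  > /tmp/ipvsadm_timeout_new

# diff exits with status 1 when the files differ; `|| true` keeps the
# script's exit status clean.
diff /tmp/ipvsadm_timeout_old /tmp/ipvsadm_timeout_new || true
```

If the diff shows the UDP value moving from 300 to 10, the change took effect on that instance.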

For more information about how to use OOS to manage multiple ECS instances at the same time, see Manage multiple instances.

What to do next

After CoreDNS is updated, you can optimize the configurations of CoreDNS as needed. For more information, see Properly configure CoreDNS.