Alibaba Cloud Service Mesh:Configure ACK resources that can be dynamically overcommitted in a sidecar proxy

Last Updated: Jul 20, 2023

When the dynamic resource overcommitment mode is enabled for a Container Service for Kubernetes (ACK) Pro cluster that is added to a Service Mesh (ASM) instance, the sidecar proxies injected into Deployments can use resources that can be dynamically overcommitted, which improves the resource utilization of the cluster. This topic describes how to configure ACK resources that can be dynamically overcommitted in a sidecar proxy.

Prerequisites

  • An ACK Pro cluster is created. For more information, see Create an ACK Pro cluster.

    Note

    Only ACK Pro clusters support the configuration of resources that can be dynamically overcommitted.

  • The ACK Pro cluster is enabled with the dynamic resource overcommitment feature. For more information, see Dynamic resource overcommitment.

  • The ACK Pro cluster is added to an ASM instance whose version is 1.16 or later. For more information, see Add a cluster to an ASM instance.

Introduction to the dynamic resource overcommitment feature

In Kubernetes, the kubelet manages the resources that are used by the pods on a node based on the quality of service (QoS) classes of the pods. For example, the kubelet uses the QoS class to determine out-of-memory (OOM) priorities. The QoS class of a pod can be Guaranteed, Burstable, or BestEffort, and is determined by the requests and limits of CPU and memory resources that are configured for the pod.
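
As a reference, the following minimal Pod manifest (the pod name and resource values are hypothetical) illustrates how requests and limits determine the QoS class:

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo          # Hypothetical name, for illustration only.
  spec:
    containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "500m"
          memory: "512Mi"
        limits:
          cpu: "500m"       # Requests equal limits for every container: the pod is Guaranteed.
          memory: "512Mi"   # If requests were lower than limits, the pod would be Burstable.
                            # If no requests or limits were set at all, the pod would be BestEffort.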

ack-koordinator supports dynamic resource overcommitment: it monitors the load of a node in real time and makes the resources that are allocated to pods but not in use available for scheduling again. To differentiate resources that can be dynamically overcommitted from regular resources, ack-koordinator assigns the Batch priority to the resources that can be dynamically overcommitted, including batch-cpu and batch-memory. For more information, see Dynamic resource overcommitment.
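
The amount of Batch resources that a node exposes depends on the node's allocatable resources, the configured reclaim thresholds, and the real-time usage of regular pods. The following sketch is a simplified approximation, not the exact formula; see Dynamic resource overcommitment for the authoritative calculation. The threshold names refer to the cpuReclaimThresholdPercent and memoryReclaimThresholdPercent configuration items that are used later in this topic.

  # Assumed, simplified approximation of how Batch resources are derived:
  #   batch-cpu    ≈ node allocatable CPU    x cpuReclaimThresholdPercent%    - CPU used by non-Batch pods
  #   batch-memory ≈ node allocatable memory x memoryReclaimThresholdPercent% - memory used by non-Batch pods
  #
  # Example: on a node with 100 allocatable cores where non-Batch pods use 10 cores and
  # cpuReclaimThresholdPercent is 60, batch-cpu is roughly 100 x 60% - 10 = 50 cores (50000 millicores).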

Procedure

  1. Deploy the ack-slo-config ConfigMap.

    1. Create a file named configmap.yaml that contains the following content.

      You can flexibly manage resources by modifying the configuration items in the ConfigMap. For more information about the configuration items, see Dynamic resource overcommitment.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: ack-slo-config
        namespace: kube-system
      data:
        colocation-config: |
          {
            "enable": true,
            "metricAggregateDurationSeconds": 60,
            "cpuReclaimThresholdPercent": 60,
            "memoryReclaimThresholdPercent": 70,
            "memoryCalculatePolicy": "usage"
          }
    2. Check whether the ack-slo-config ConfigMap exists in the kube-system namespace.

      • If the ack-slo-config ConfigMap exists, you can run the kubectl patch command to update the ConfigMap. This avoids changing other settings in the ConfigMap.

        kubectl patch cm -n kube-system ack-slo-config --patch "$(cat configmap.yaml)"
      • If the ack-slo-config ConfigMap does not exist, run the following command to create a ConfigMap named ack-slo-config:

        kubectl apply -f configmap.yaml
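
      Regardless of which command you used, you can confirm that the colocation configuration is in place by viewing the ConfigMap. This is a generic kubectl query, shown here only as a convenience:

        kubectl get configmap ack-slo-config -n kube-system -o yaml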
  2. Run the following command to query the total amount of current Batch resources:

    # Replace $nodeName with the name of the node that you want to query. 
    kubectl get node $nodeName -o yaml

    Expected output:

    # Node
    status:
      allocatable:
        # Unit: millicores. In the following example, 50 cores can be allocated. 
        kubernetes.io/batch-cpu: 50000
        # Unit: bytes. In the following example, 50 GiB of memory can be allocated. 
        kubernetes.io/batch-memory: 53687091200
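
    If you only need the Batch resource values rather than the full node object, a jsonpath query such as the following should also work. The backslashes escape the periods in the extended resource names:

    # Print only the Batch resources of the node. Replace $nodeName with the node name.
    kubectl get node $nodeName -o jsonpath='{.status.allocatable.kubernetes\.io/batch-cpu}'
    kubectl get node $nodeName -o jsonpath='{.status.allocatable.kubernetes\.io/batch-memory}'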
  3. Configure the resources that can be dynamically overcommitted in a sidecar proxy.

    You can configure ACK resources that can be dynamically overcommitted for the injected sidecar proxy container and the istio-init container.

    1. Log on to the ASM console. In the left-side navigation pane, choose Service Mesh > Mesh Management.
    2. On the Mesh Management page, click the name of the ASM instance. In the left-side navigation pane, choose Dataplane Component Management > Sidecar Proxy Setting.
    3. On the global tab of the Sidecar Proxy Setting page, click Resource Settings, select Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy, configure the related parameters, and then click Update Settings in the lower part of the page.

      Resource Limits and Required Resources can be set to the same value or different values. We recommend that you configure Resource Limits and Required Resources based on the workload type.

      • If the QoS class of a workload is Guaranteed, we recommend that you set both to the same value.

      • For workloads of other QoS classes, we recommend that you keep the Required Resources value smaller than the Resource Limits value. This recommendation also applies to regular resources.

      Configuration examples:

      • Configure Resources for Injected Sidecar Proxy (ACK Dynamically Overcommitted Resources)

        • Resource Limits: In this example, set CPU to 2000 millicores and Memory to 2048 MiB.

        • Required Resources: In this example, set CPU to 200 millicores and Memory to 256 MiB.

      • Configure Resources for istio-init Container (ACK Dynamically Overcommitted Resources)

        • Resource Limits: In this example, set CPU to 1000 millicores and Memory to 1024 MiB.

        • Required Resources: In this example, set CPU to 100 millicores and Memory to 128 MiB.

  4. Deploy an application and apply for Batch resources.

    1. Create a file named demo.yaml that contains the following content.

      The following YAML file creates a Deployment and applies for Batch resources. A label on the pod specifies its QoS class, and the Batch resources are declared in the requests and limits fields. This way, the pod can use resources that can be dynamically overcommitted.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: sleep
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: sleep
        template:
          metadata:
            labels:
              app: sleep
              # Required. Set the QoS class of the pod to BestEffort. 
              koordinator.sh/qosClass: "BE"
          spec:
            terminationGracePeriodSeconds: 0
            containers:
            - name: sleep
              image: curlimages/curl
              command: ["/bin/sleep", "infinity"]
              imagePullPolicy: IfNotPresent
              resources:
                requests:
                  # Unit: millicores. In the following example, the CPU request is set to one core. 
                  kubernetes.io/batch-cpu: "1k"
                  # Unit: bytes. In the following example, the memory request is set to 1 GiB. 
                  kubernetes.io/batch-memory: "1Gi"
                limits:
                  kubernetes.io/batch-cpu: "1k"
                  kubernetes.io/batch-memory: "1Gi"
    2. Run the following command to deploy demo.yaml as the test application:

      kubectl apply -f demo.yaml
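
      After the pod is running, you can check that the sidecar proxy was injected and that it requests Batch resources. The following queries assume that the sidecar container uses Istio's conventional name istio-proxy:

      # List the containers in the pod. The output should include the injected sidecar proxy.
      kubectl get pod -l app=sleep -o jsonpath='{.items[0].spec.containers[*].name}'
      # Show the resources of the sidecar container, including kubernetes.io/batch-cpu and kubernetes.io/batch-memory.
      kubectl get pod -l app=sleep -o jsonpath='{.items[0].spec.containers[?(@.name=="istio-proxy")].resources}'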
  5. Optional: Check whether the resource limits of the BestEffort pod take effect.

    After you deploy an application and apply for Batch resources, you can check whether the resource limits of the pod take effect. For more information, see Dynamic resource overcommitment.

    1. Log on to the node where the pod resides and run the following command to check the CPU limit:

      cat /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod****.slice/cri-containerd-****.scope/cpu.cfs_quota_us

      Expected output:

      # The CPU limit in the cgroup is set to 1 core. 
      100000
    2. Log on to the node where the pod resides and run the following command to check the memory limit:

      cat /sys/fs/cgroup/memory/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod****.slice/cri-containerd-****.scope/memory.limit_in_bytes

      Expected output:

      # The memory limit in the cgroup is set to 1 GiB. 
      1073741824

    If the CPU and memory limits in the output are the same as those set in Step 4, the resource limits of the BestEffort pod take effect.
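
    The asterisks in the cgroup paths above stand for the pod UID and the container ID and are left as placeholders. If you need to locate them, the following queries are one possible way, assuming the test pod from Step 4 and a containerd runtime:

      # Pod UID. It appears, with dashes replaced by underscores, in the kubepods-besteffort-pod****.slice directory name.
      kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.uid}'
      # Container ID. It appears in the cri-containerd-****.scope directory name.
      kubectl get pod -l app=sleep -o jsonpath='{.items[0].status.containerStatuses[?(@.name=="sleep")].containerID}'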