Sidecar proxies consume CPU in bursts: high during connection setup, low during steady-state traffic. This bursty pattern means that most of the CPU and memory allocated to sidecars sits idle most of the time. By assigning dynamically overcommitted ACK resources (Batch CPU and Batch memory) to sidecar containers, you reclaim that idle capacity and make it available to other pods, improving overall cluster utilization.
This topic walks through deploying the overcommitment ConfigMap, configuring Batch resource limits for sidecar containers in the ASM console, deploying a sample workload, and verifying the results.
How dynamic resource overcommitment works
In Kubernetes, the kubelet manages pod resources based on quality of service (QoS) classes: Guaranteed, Burstable, and BestEffort. These classes are determined by the CPU and memory requests and limits set on each pod.
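As a quick illustration (the pod name and image below are placeholders, not part of this walkthrough): a pod whose containers all set requests equal to limits is Guaranteed, requests lower than limits makes it Burstable, and no requests or limits at all makes it BestEffort.

```yaml
# Illustrative only: requests == limits on every container -> Guaranteed QoS class.
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo        # placeholder name
spec:
  containers:
    - name: app
      image: nginx      # placeholder image
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 256Mi
```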
ack-koordinator extends this model by monitoring node loads in real time and reclaiming resources that are allocated but not actively used. To distinguish these reclaimed resources from regular CPU and memory, ack-koordinator assigns them the Batch priority:
| Resource type | Resource key | Unit |
|---|---|---|
| Batch CPU | `kubernetes.io/batch-cpu` | Millicores |
| Batch memory | `kubernetes.io/batch-memory` | Bytes |
Pods that consume Batch resources must carry the label `koordinator.sh/qosClass: "BE"`, which maps to the Koordinator BestEffort QoS class. This label tells the scheduler to treat the pod as a low-priority consumer of reclaimed capacity.
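For example, once such pods exist, a standard label selector lists every BestEffort consumer in the cluster (plain kubectl, shown for illustration):

```bash
# List all pods, across namespaces, that declare the Koordinator BestEffort QoS class.
kubectl get pods -A -l koordinator.sh/qosClass=BE
```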
For details on the overcommitment model, see Enable dynamic resource overcommitment.
Prerequisites
Before you begin, make sure you have:
- An ACK Pro cluster. For more information, see Create an ACK Pro cluster.
  **Note** Only ACK Pro clusters support dynamically overcommitted resources.
- The dynamic resource overcommitment feature enabled on the ACK Pro cluster. For more information, see Enable dynamic resource overcommitment.
- The ACK Pro cluster added to an ASM instance that runs version 1.16 or later. For more information, see Add a cluster to an ASM instance.
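Optionally, before proceeding, you can sanity-check that the ack-koordinator components are running. This sketch assumes they are deployed in the `kube-system` namespace; component names vary by version:

```bash
# Look for ack-koordinator pods (names differ across versions).
kubectl get pods -n kube-system | grep -i koord
```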
Step 1: Deploy the ack-slo-config ConfigMap
The `ack-slo-config` ConfigMap controls how ack-koordinator reclaims idle resources on each node.
1. Create a file named `configmap.yaml` with the following content:

   ```yaml
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: ack-slo-config
     namespace: kube-system
   data:
     colocation-config: |
       {
         "enable": true,
         "metricAggregateDurationSeconds": 60,
         "cpuReclaimThresholdPercent": 60,
         "memoryReclaimThresholdPercent": 70,
         "memoryCalculatePolicy": "usage"
       }
   ```

   The following table describes the key parameters. For details about all configuration items, see Enable dynamic resource overcommitment.

   | Parameter | Description | Example value |
   |---|---|---|
   | `enable` | Enables resource overcommitment on the node. | `true` |
   | `metricAggregateDurationSeconds` | Interval in seconds for aggregating node resource metrics. | `60` |
   | `cpuReclaimThresholdPercent` | Percentage of allocated CPU that can be reclaimed. | `60` |
   | `memoryReclaimThresholdPercent` | Percentage of allocated memory that can be reclaimed. | `70` |
   | `memoryCalculatePolicy` | Method for calculating memory availability. `usage` bases calculations on actual memory consumption. | `"usage"` |

2. Apply the ConfigMap.

   If the `ack-slo-config` ConfigMap already exists in the `kube-system` namespace, patch it to preserve other settings:

   ```bash
   kubectl patch cm -n kube-system ack-slo-config --patch "$(cat configmap.yaml)"
   ```

   If the ConfigMap does not exist, create it:

   ```bash
   kubectl apply -f configmap.yaml
   ```
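   Either way, you can confirm the stored configuration afterwards (a plain kubectl check):

   ```bash
   # Print the colocation configuration currently stored in the cluster.
   kubectl get configmap ack-slo-config -n kube-system -o yaml
   ```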
3. Verify the available Batch resources on a node. Replace `<node-name>` with the actual node name:

   ```bash
   kubectl get node <node-name> -o yaml
   ```

   In the output, look for the `allocatable` section:

   ```yaml
   status:
     allocatable:
       # Unit: millicores. In this example, 50 cores are available.
       kubernetes.io/batch-cpu: 50000
       # Unit: bytes. In this example, 50 GiB of memory is available.
       kubernetes.io/batch-memory: 53687091200
   ```

   If `kubernetes.io/batch-cpu` and `kubernetes.io/batch-memory` appear under `allocatable`, the overcommitment ConfigMap is working correctly.
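   If you only want the two Batch values rather than the full node object, a jsonpath query works as well; note that the dots inside the resource keys must be escaped. This is standard kubectl, shown as a convenience:

   ```bash
   # Allocatable Batch CPU (millicores) on the node.
   kubectl get node <node-name> -o jsonpath='{.status.allocatable.kubernetes\.io/batch-cpu}'
   # Allocatable Batch memory (bytes) on the node.
   kubectl get node <node-name> -o jsonpath='{.status.allocatable.kubernetes\.io/batch-memory}'
   ```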
Step 2: Configure Batch resources for sidecar containers
Set Batch resource requests and limits for the injected sidecar proxy container and the `istio-init` container in the ASM console.
1. Log on to the ASM console.
2. In the left-side navigation pane, choose Service Mesh > Mesh Management.
3. On the Mesh Management page, click the name of the target ASM instance.
4. In the left-side navigation pane, choose Data Plane Component Management > Sidecar Proxy Setting.
5. On the Sidecar Proxy Setting page, click the global tab, then click Resource Settings.
6. Select Set ACK Resources That Can Be Dynamically Overcommitted for Sidecar Proxy and configure the resource values. Set Resource Limits and Required Resources based on your workload QoS class:

   | QoS class | Guidance |
   |---|---|
   | Guaranteed | Set Resource Limits and Required Resources to the same value. |
   | Burstable or BestEffort | Set Required Resources lower than Resource Limits. |

   This guidance applies to regular resources as well. The following table shows example values:

   | Container | Setting | CPU | Memory |
   |---|---|---|---|
   | Configure Resources for Injected Sidecar Proxy (ACK Dynamically Overcommitted Resources) | Resource Limits | 2000 millicores | 2048 MiB |
   | | Required Resources | 200 millicores | 256 MiB |
   | Configure Resources for istio-init Container (ACK Dynamically Overcommitted Resources) | Resource Limits | 1000 millicores | 1024 MiB |
   | | Required Resources | 100 millicores | 128 MiB |

7. Click Update Settings.
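After updating the settings, newly injected sidecars pick up these values, and you can spot-check one. This sketch assumes the injected proxy container keeps the default name `istio-proxy`:

```bash
# Show the resources assigned to the injected sidecar of a given pod.
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[?(@.name=="istio-proxy")].resources}'
```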
Step 3: Deploy a workload with Batch resources
Deploy a sample application that requests Batch resources. Each pod that uses Batch resources must carry the `koordinator.sh/qosClass: "BE"` label so that ack-koordinator schedules it as a BestEffort consumer of reclaimed capacity.
1. Create a file named `demo.yaml` with the following content:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: sleep
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: sleep
     template:
       metadata:
         labels:
           app: sleep
           # Required. Assigns BestEffort QoS to the pod.
           koordinator.sh/qosClass: "BE"
       spec:
         terminationGracePeriodSeconds: 0
         containers:
           - name: sleep
             image: curlimages/curl
             command: ["/bin/sleep", "infinity"]
             imagePullPolicy: IfNotPresent
             resources:
               requests:
                 # 1 core (1000 millicores)
                 kubernetes.io/batch-cpu: "1k"
                 # 1 GiB
                 kubernetes.io/batch-memory: "1Gi"
               limits:
                 kubernetes.io/batch-cpu: "1k"
                 kubernetes.io/batch-memory: "1Gi"
   ```

2. Deploy the application:

   ```bash
   kubectl apply -f demo.yaml
   ```
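3. Optionally, verify that the pod is scheduled and Running, and note its node for Step 4 (plain kubectl, using the Deployment's `app=sleep` label):

   ```bash
   # Confirm the pod is Running and see which node hosts it.
   kubectl get pods -l app=sleep -o wide
   ```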
Step 4 (optional): Verify resource limits
After deployment, confirm that the cgroup-level resource limits match your configured values.
1. Log on to the node where the pod is running and check the CPU limit:

   ```bash
   cat /sys/fs/cgroup/cpu,cpuacct/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod****.slice/cri-containerd-****.scope/cpu.cfs_quota_us
   ```

   Expected output:

   ```
   # 1 core = 100000 microseconds per CFS period
   100000
   ```

2. Check the memory limit:

   ```bash
   cat /sys/fs/cgroup/memory/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod****.slice/cri-containerd-****.scope/memory.limit_in_bytes
   ```

   Expected output:

   ```
   # 1 GiB = 1073741824 bytes
   1073741824
   ```
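The masked (`****`) segments of these paths come from the pod UID and the container ID. One way to look them up, shown as a sketch: the systemd cgroup driver replaces dashes in the pod UID with underscores, and `crictl` availability depends on the node's container runtime setup.

```bash
# Pod UID: appears in kubepods-besteffort-pod<UID>.slice (dashes become underscores).
kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.uid}'

# On the node: container ID, which appears in cri-containerd-<ID>.scope.
crictl ps --name sleep -q
```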
If both values match the Deployment spec, the Batch resource limits are in effect.