High availability is essential to distributed applications. In an Alibaba Cloud Container Compute Service (ACS) cluster, you can spread distributed applications across zones based on Kubernetes-native scheduling semantics to ensure high availability. To distribute workloads across zones, reference a topology label in the topologyKey field of a topology spread constraint. This topic describes the limits and usage of topology spread constraints in ACS.
Prerequisites
kube-scheduler is installed and its version meets the following requirements.
| ACS cluster version | Scheduler version |
| --- | --- |
| 1.31 | v1.31.0-aliyun-1.2.0 and later |
| 1.30 | v1.30.3-aliyun-1.1.1 and later |
| 1.28 | v1.28.9-aliyun-1.1.0 and later |
acs-virtual-node is installed and its version is v2.12.0-acs.4 or later.
Usage notes
All nodes in an ACS cluster are virtual nodes, and virtual nodes carry topology labels. To distribute workloads across zones, reference one of these labels in the topologyKey field of a topology spread constraint.
The following table describes the topology labels supported by ACS for different types of nodes.
| Virtual node type | Label | Description | Example |
| --- | --- | --- | --- |
| Regular virtual node | topology.kubernetes.io/zone | Network zone | topology.kubernetes.io/zone: cn-shanghai-b |
| GPU-HPN virtual node | topology.kubernetes.io/zone | Network zone | topology.kubernetes.io/zone: cn-shanghai-b |
| GPU-HPN virtual node | alibabacloud.com/hpn-zone | High-performance network zone | alibabacloud.com/hpn-zone: B1 |
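For example, to spread a GPU-HPN workload across high-performance network zones, you could reference the alibabacloud.com/hpn-zone label as the topologyKey. The following is a minimal sketch: the pod name and labels are placeholders, and any compute class settings that your GPU-HPN workload requires are omitted.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hpn-spread-sketch        # hypothetical name for illustration
  labels:
    app: hpn-demo                # placeholder label
spec:
  containers:
  - name: main
    image: registry.cn-hangzhou.aliyuncs.com/acs/stress:v1.0.4
  topologySpreadConstraints:
  - maxSkew: 1
    # Spread across high-performance network zones rather than regular zones.
    topologyKey: alibabacloud.com/hpn-zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: hpn-demo
```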
ACS supports multiple compute classes. The following constraints apply when you configure the other fields of a topology spread constraint for pods of different compute classes.
| Compute class | Field | Description | Constraint |
| --- | --- | --- | --- |
| General-purpose and performance-enhanced | labelSelector | Used to find matching pods. Pods that match this label selector are counted to determine the number of pods in the topology domain. | GPU-accelerated pods and GPU-HPN pods are not counted. |
| General-purpose and performance-enhanced | matchLabelKeys | A list of pod label keys used to select the pods for which the spread is calculated. | GPU-accelerated pods and GPU-HPN pods are not counted. |
| GPU-HPN | labelSelector | Used to find matching pods. Pods that match this label selector are counted to determine the number of pods in the topology domain. | Pods of other compute classes, such as the general-purpose and performance-enhanced compute classes, are not counted. |
| GPU-HPN | matchLabelKeys | A list of pod label keys used to select the pods for which the spread is calculated. | Pods of other compute classes, such as the general-purpose and performance-enhanced compute classes, are not counted. |
| GPU-HPN | nodeAffinityPolicy | Specifies how a pod's nodeAffinity or nodeSelector is treated when the pod topology spread skew is calculated. | Not supported. |
| GPU-HPN | nodeTaintsPolicy | Specifies how node taints are treated when the pod topology spread skew is calculated. | Not supported. |
For more information about the fields, see Pod Topology Spread Constraints.
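As an illustration of how labelSelector and matchLabelKeys combine, the sketch below counts only pods from the same Deployment revision by listing the pod-template-hash label key, which Kubernetes sets automatically on pods managed by a Deployment. The app label value is a placeholder.

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: demo                  # placeholder label
  # Pods are counted only if their value for each listed key matches the
  # incoming pod's value, so each ReplicaSet revision is spread independently.
  matchLabelKeys:
  - pod-template-hash
```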
Procedure
1. Run the following command to view the nodes in the cluster:

   ```shell
   kubectl get node
   ```

   Expected output:

   ```
   NAME                            STATUS   ROLES   AGE     VERSION
   virtual-kubelet-cn-hangzhou-i   Ready    agent   5h42m   v1.28.3-xx
   virtual-kubelet-cn-hangzhou-j   Ready    agent   5h42m   v1.28.3-xx
   ```

2. Create a file named dep-spread-demo.yaml and add the following content to the file:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: dep-spread-demo
     labels:
       app: spread-demo
   spec:
     replicas: 4
     selector:
       matchLabels:
         app: spread-demo
     template:
       metadata:
         labels:
           app: spread-demo
       spec:
         containers:
         - name: spread-demo
           image: registry.cn-hangzhou.aliyuncs.com/acs/stress:v1.0.4
           command:
           - "sleep"
           - "infinity"
         # Specify the spread constraint. The value of maxSkew indicates that the
         # difference in the number of pods between zones cannot exceed 1.
         topologySpreadConstraints:
         - maxSkew: 1
           topologyKey: topology.kubernetes.io/zone
           whenUnsatisfiable: DoNotSchedule
           labelSelector:
             matchLabels:
               app: spread-demo
   ```

3. Run the following command to deploy dep-spread-demo to the cluster:

   ```shell
   kubectl apply -f dep-spread-demo.yaml
   ```

4. Run the following command to view the distribution results of pods:

   ```shell
   kubectl get pod -o wide
   ```

   Expected output:

   ```
   NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE                            NOMINATED NODE   READINESS GATES
   dep-spread-demo-7c656dbf5f-6twkc   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-i   <none>           <none>
   dep-spread-demo-7c656dbf5f-cgxr8   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
   dep-spread-demo-7c656dbf5f-f4fz9   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
   dep-spread-demo-7c656dbf5f-kc6xf   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-i   <none>           <none>
   ```

   The output indicates that the four pods are distributed across two zones, with two pods in each zone.
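As an optional check, kubectl can print each node's zone label as an extra column, which makes it easy to map the NODE column above to a zone (-L is a standard kubectl flag):

```shell
# -L appends the value of the given node label as a column in the output.
kubectl get node -L topology.kubernetes.io/zone
```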