High availability is essential for distributed applications. In an Alibaba Cloud Container Compute Service (ACS) cluster, you can spread distributed applications across zones based on Kubernetes-native scheduling semantics to ensure high availability. ACS adds topology labels to nodes, and you can reference these labels in the topologyKey field of a topology spread constraint to distribute workloads across zones. This topic describes the limits and usage of topology spread constraints in ACS.
Prerequisites
An ACS cluster is created. For more information, see Create an ACS cluster.
kube-scheduler is installed. For more information, see kube-scheduler.
acs-virtual-node v2.12.0-acs.4 or later is installed.
Usage notes
All nodes in an ACS cluster are virtual nodes. ACS adds topology labels to these nodes, and you can reference a label in the topologyKey field of a topology spread constraint to distribute workloads across zones.
The following table describes the topology labels supported by ACS for different types of nodes.
| Node type | Label | Description | Example |
| --- | --- | --- | --- |
| Regular node | topology.kubernetes.io/zone | Network zone | topology.kubernetes.io/zone: cn-shanghai-b |
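As an illustration of how the zone label can be referenced, the following sketch pins a pod to a single zone with nodeSelector, the opposite of spreading. The pod name is hypothetical and the zone name is an example; use a zone of your own region:

```yaml
# Minimal sketch: restrict a pod to one zone via the topology label.
# zone-pinned-demo is a hypothetical name; cn-shanghai-b is an example zone.
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-demo
spec:
  nodeSelector:
    topology.kubernetes.io/zone: cn-shanghai-b
  containers:
    - name: app
      image: registry.cn-hangzhou.aliyuncs.com/acs/stress:v1.0.4
      command: ["sleep", "infinity"]
```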
ACS supports multiple compute classes. When you use the other fields of a topology spread constraint, the following constraints apply, depending on the compute class.
| Compute class | Field | Description | Constraint |
| --- | --- | --- | --- |
| General-purpose and performance-enhanced | labelSelector | This field is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in the topology domain. | Pods in other compute classes, such as the GPU-accelerated compute class, are not counted. |
| | matchLabelKeys | A list of pod label keys that are used to select the pods for which distribution is calculated. | |
| GPU-accelerated | labelSelector | This field is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in the topology domain. | Pods in other compute classes, such as the general-purpose and performance-enhanced compute classes, are not counted. |
| | matchLabelKeys | A list of pod label keys that are used to select the pods for which distribution is calculated. | |
| All compute classes | nodeAffinityPolicy | This field specifies how to treat the nodeAffinity or nodeSelector of pods when the pod topology spread skew is calculated. | Not supported. |
| | nodeTaintsPolicy | This field specifies how to treat node taints when the pod topology spread skew is calculated. | Not supported. |
For more information about the fields, see Pod Topology Spread Constraints.
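The labelSelector and matchLabelKeys fields described above can be combined so that each rollout of a Deployment is spread independently. A minimal sketch, assuming your cluster's Kubernetes version supports matchLabelKeys:

```yaml
# Sketch: spread each rollout independently by adding pod-template-hash
# (a label set by the Deployment controller) to the effective selector.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: spread-demo
    matchLabelKeys:
      - pod-template-hash
```

Without matchLabelKeys, pods from an old and a new ReplicaSet are counted together, which can skew the spread during a rolling update.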
Procedure
Run the following command to view the nodes in the cluster:
kubectl get node
Expected output:
NAME                            STATUS   ROLES   AGE     VERSION
virtual-kubelet-cn-hangzhou-i   Ready    agent   5h42m   v1.28.3-xx
virtual-kubelet-cn-hangzhou-j   Ready    agent   5h42m   v1.28.3-xx
Create a file named dep-spread-demo.yaml and add the following content to the file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-spread-demo
  labels:
    app: spread-demo
spec:
  replicas: 4
  selector:
    matchLabels:
      app: spread-demo
  template:
    metadata:
      labels:
        app: spread-demo
    spec:
      containers:
        - name: spread-demo
          image: registry.cn-hangzhou.aliyuncs.com/acs/stress:v1.0.4
          command:
            - "sleep"
            - "infinity"
      # Specify the spread constraint. maxSkew: 1 means the difference in the
      # number of pods between zones cannot exceed 1.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: spread-demo
Run the following command to deploy dep-spread-demo to the cluster:
kubectl apply -f dep-spread-demo.yaml
Run the following command to view the distribution results of pods:
kubectl get pod -o wide
Expected output:
NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE                            NOMINATED NODE   READINESS GATES
dep-spread-demo-7c656dbf5f-6twkc   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-i   <none>           <none>
dep-spread-demo-7c656dbf5f-cgxr8   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
dep-spread-demo-7c656dbf5f-f4fz9   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
dep-spread-demo-7c656dbf5f-kc6xf   1/1     Running   0          2m29s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-i   <none>           <none>
The output indicates that the four pods are evenly distributed across the two zones, with two pods in each zone.
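To double-check the spread, you can tally pods per node; in ACS, each virtual node corresponds to one zone. The sketch below runs the tally over sample output embedded in a here-document. In a live cluster, pipe `kubectl get pod -o wide --no-headers` into the same awk program instead:

```shell
# Tally pods per node (column 7 of `kubectl get pod -o wide --no-headers`).
# The here-document is sample data; in a live cluster run:
#   kubectl get pod -o wide --no-headers | awk '{count[$7]++} END {for (n in count) print n, count[n]}'
awk '{count[$7]++} END {for (n in count) print n, count[n]}' <<'EOF'
dep-spread-demo-7c656dbf5f-6twkc 1/1 Running 0 2m29s 192.168.xx.xxx virtual-kubelet-cn-hangzhou-i <none> <none>
dep-spread-demo-7c656dbf5f-cgxr8 1/1 Running 0 2m29s 192.168.xx.xxx virtual-kubelet-cn-hangzhou-j <none> <none>
dep-spread-demo-7c656dbf5f-f4fz9 1/1 Running 0 2m29s 192.168.xx.xxx virtual-kubelet-cn-hangzhou-j <none> <none>
dep-spread-demo-7c656dbf5f-kc6xf 1/1 Running 0 2m29s 192.168.xx.xxx virtual-kubelet-cn-hangzhou-i <none> <none>
EOF
```

If the constraint is satisfied, no node's count differs from another's by more than maxSkew.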