ACS clusters represent nodes as virtual nodes and expose node attributes — such as availability zone, region, and GPU model — as Kubernetes labels. Use nodeSelector or nodeAffinity in your pod spec to schedule pods on virtual nodes with specific attributes.
Prerequisites
Before you begin, ensure that you have:
kube-scheduler installed at the minimum version for your cluster:

| ACS cluster version | Minimum kube-scheduler version |
|---|---|
| 1.31 | v1.31.0-aliyun-1.2.0 |
| 1.30 | v1.30.3-aliyun-1.1.1 |
| 1.28 | v1.28.9-aliyun-1.1.0 |

acs-virtual-node v2.12.0-acs.4 or later installed
In newer versions of kube-scheduler, the Enable custom labels for GPU-HPN nodes and scheduler option is enabled by default, so no manual setup is required. For details, see kube-scheduler.
Choose a scheduling method
| Method | When to use |
|---|---|
| nodeSelector | Pin pods to nodes with a specific label. Simplest option; use this first. |
| nodeAffinity | Express richer scheduling rules, such as multiple label conditions or operator-based matching (In, NotIn, Exists). |
Available node labels
ACS virtual nodes expose the following labels for scheduling:
| Label key | Description | Example value |
|---|---|---|
| topology.kubernetes.io/zone | Availability zone | cn-hangzhou-j |
| topology.kubernetes.io/region | Region | cn-hangzhou |
| kubernetes.io/hostname | Virtual node name | virtual-kubelet-cn-hangzhou-j |
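To check which of these labels your virtual nodes actually carry, you can query them with kubectl; the `-L` flag adds the value of a label as an output column (assumes kubectl access to the cluster):

```shell
# Show the zone and region labels of each node as extra columns
kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region
```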
nodeSelector
nodeSelector matches pods to virtual nodes by label. Add the target label under nodeSelector in your pod spec. For a worked example, see Schedule pods to a specific zone.
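As a minimal sketch (the pod name is a placeholder; the image is the one used in the worked example below), a nodeSelector entry in a pod spec looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod   # placeholder name
spec:
  containers:
  - name: app
    image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
  # Schedule only onto virtual nodes labeled with this zone
  nodeSelector:
    topology.kubernetes.io/zone: cn-hangzhou-j
```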
nodeAffinity
nodeAffinity supports the same label-based matching as nodeSelector but with a more expressive syntax. It has two modes:
| Mode | Behavior |
|---|---|
| requiredDuringSchedulingIgnoredDuringExecution (hard affinity) | The scheduler places the pod only on a node that satisfies the rule. If no matching node exists, the pod is not scheduled. |
| preferredDuringSchedulingIgnoredDuringExecution (soft affinity) | The scheduler tries to find a matching node. If none is available, the pod is still scheduled on any eligible node. |
If you specify both nodeSelector and nodeAffinity on the same pod, both must be satisfied for the pod to be scheduled. Within nodeSelectorTerms, multiple terms are evaluated with OR logic — the pod is scheduled if any one term matches. Within a single term, multiple matchExpressions entries are evaluated with AND logic — all expressions must match.
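The OR/AND semantics can be sketched as follows (the label values are illustrative):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      # Terms are ORed: a node matching either term is eligible.
      - matchExpressions:
        # Expressions within one term are ANDed: both must hold.
        - key: topology.kubernetes.io/region
          operator: In
          values: ["cn-hangzhou"]
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["cn-hangzhou-j"]
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["cn-hangzhou-i"]
```

Here a node qualifies if it is in region cn-hangzhou and zone cn-hangzhou-j, or if it is in zone cn-hangzhou-i.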
Constraints for GPU-HPN pods
The following constraints apply when all three conditions are true:
- The pod uses a GPU-HPN (High-Performance Network GPU) compute type.
- The pod's schedulerName is default-scheduler.
- Enable custom labels for GPU-HPN nodes and scheduler is not selected in the scheduler component configuration.
| Field | Constraint |
|---|---|
| requiredDuringSchedulingIgnoredDuringExecution | In nodeSelectorTerms, only the supported affinity labels are allowed in matchExpressions. matchFields cannot be specified. |
| preferredDuringSchedulingIgnoredDuringExecution | Not supported. |
For general-purpose, compute-optimized, and GPU instance types, nodeAffinity has no such constraints.
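Under those constraints, a compliant rule for a GPU-HPN pod would use only hard affinity with supported labels in matchExpressions and no matchFields. A sketch, assuming topology.kubernetes.io/zone is among the supported affinity labels:

```yaml
affinity:
  nodeAffinity:
    # Only hard affinity is allowed for GPU-HPN pods under these conditions;
    # preferredDuringSchedulingIgnoredDuringExecution is not supported.
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        # Use only supported affinity labels; matchFields cannot be specified.
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["cn-hangzhou-j"]
```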
Schedule pods to a specific zone
This example uses nodeSelector to schedule a Deployment to the cn-hangzhou-j zone.
1. List the virtual nodes in your cluster.

   ```shell
   kubectl get node
   ```

   Expected output:

   ```
   NAME                            STATUS   ROLES   AGE     VERSION
   virtual-kubelet-cn-hangzhou-i   Ready    agent   5h42m   v1.28.3-xx
   virtual-kubelet-cn-hangzhou-j   Ready    agent   5h42m   v1.28.3-xx
   ```

2. Create a file named dep-node-selector-demo.yaml with the following content.

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: dep-node-selector-demo
     labels:
       app: node-selector-demo
   spec:
     replicas: 4
     selector:
       matchLabels:
         app: node-selector-demo
     template:
       metadata:
         labels:
           app: node-selector-demo
       spec:
         containers:
         - name: node-selector-demo
           image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
           command:
           - "sleep"
           - "infinity"
         # Pin pods to the cn-hangzhou-j zone
         nodeSelector:
           topology.kubernetes.io/zone: cn-hangzhou-j
   ```

3. Apply the manifest.

   ```shell
   kubectl apply -f dep-node-selector-demo.yaml
   ```

4. Verify that all pods are scheduled to the cn-hangzhou-j zone.

   ```shell
   kubectl get pod -o wide
   ```

   Expected output:

   ```
   NAME                                     READY   STATUS    RESTARTS   AGE    IP               NODE                            NOMINATED NODE   READINESS GATES
   dep-node-selector-demo-b4578576b-cgpfq   1/1     Running   0          112s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
   dep-node-selector-demo-b4578576b-fs8kl   1/1     Running   0          110s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
   dep-node-selector-demo-b4578576b-nh8zm   1/1     Running   0          2m8s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
   dep-node-selector-demo-b4578576b-rpp8l   1/1     Running   0          2m8s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
   ```

   All four pods run on virtual-kubelet-cn-hangzhou-j, confirming that zone-level scheduling works as expected.