
Container Compute Service: Node Affinity Scheduling

Last Updated: Mar 26, 2026

ACS clusters represent nodes as virtual nodes and expose node attributes — such as availability zone, region, and GPU model — as Kubernetes labels. Use nodeSelector or nodeAffinity in your pod spec to schedule pods on virtual nodes with specific attributes.

Prerequisites

Before you begin, ensure that you have:

  • kube-scheduler installed at the minimum version for your cluster:

    | ACS cluster version | Minimum kube-scheduler version |
    | --- | --- |
    | 1.31 | v1.31.0-aliyun-1.2.0 |
    | 1.30 | v1.30.3-aliyun-1.1.1 |
    | 1.28 | v1.28.9-aliyun-1.1.0 |
  • acs-virtual-node v2.12.0-acs.4 or later installed

Note

The Enable custom labels for GPU-HPN nodes and scheduler option is enabled by default in newer versions of kube-scheduler. No manual setup is required. For details, see kube-scheduler.

Choose a scheduling method

| Method | When to use |
| --- | --- |
| nodeSelector | Pin pods to nodes with a specific label. Simplest option; use this first. |
| nodeAffinity | Express richer scheduling rules, such as multiple label conditions or operator-based matching (In, NotIn, Exists). |

Available node labels

ACS virtual nodes expose the following labels for scheduling:

| Label key | Description | Example value |
| --- | --- | --- |
| topology.kubernetes.io/zone | Availability zone | cn-hangzhou-j |
| topology.kubernetes.io/region | Region | cn-hangzhou |
| kubernetes.io/hostname | Virtual node name | virtual-kubelet-cn-hangzhou-j |
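
To confirm which labels a virtual node actually exposes before writing scheduling rules, you can inspect it with kubectl. The node name below is the example used later on this page:

    # Show all labels on a specific virtual node
    kubectl get node virtual-kubelet-cn-hangzhou-j --show-labels

    # Or list all nodes with the zone label as an extra column
    kubectl get nodes -L topology.kubernetes.io/zone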

nodeSelector

nodeSelector matches pods to virtual nodes by label. Add the target label under nodeSelector in your pod spec. For a worked example, see Schedule pods to a specific zone.
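In its simplest form, nodeSelector is a map of one or more label keys to required values. A minimal pod sketch pinned to a zone, using the zone label from the table above and the demo image from the example below, looks like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: zone-pinned-pod   # example name
    spec:
      containers:
      - name: app
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
      # Only virtual nodes carrying this exact label are eligible.
      nodeSelector:
        topology.kubernetes.io/zone: cn-hangzhou-j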

nodeAffinity

nodeAffinity supports the same label-based matching as nodeSelector but with a more expressive syntax. It has two modes:

| Mode | Behavior |
| --- | --- |
| requiredDuringSchedulingIgnoredDuringExecution (hard affinity) | The scheduler places the pod only on a node that satisfies the rule. If no matching node exists, the pod remains unscheduled. |
| preferredDuringSchedulingIgnoredDuringExecution (soft affinity) | The scheduler prefers a matching node. If none is available, the pod is still scheduled on any eligible node. |

Note

If you specify both nodeSelector and nodeAffinity on the same pod, both must be satisfied for the pod to be scheduled. Within nodeSelectorTerms, multiple terms are evaluated with OR logic — the pod is scheduled if any one term matches. Within a single term, multiple matchExpressions entries are evaluated with AND logic — all expressions must match.
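The rules above can be illustrated with a single pod spec. This sketch (label keys and values are the examples from this page) requires the cn-hangzhou region and prefers the cn-hangzhou-j zone:

    apiVersion: v1
    kind: Pod
    metadata:
      name: affinity-demo   # example name
    spec:
      containers:
      - name: app
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
      affinity:
        nodeAffinity:
          # Hard rule: entries within one matchExpressions list are ANDed;
          # multiple entries under nodeSelectorTerms would be ORed.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/region
                operator: In
                values:
                - cn-hangzhou
          # Soft rule: prefer zone cn-hangzhou-j, but fall back to any
          # node in the region if that zone has no capacity.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - cn-hangzhou-j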

Constraints for GPU-HPN pods

The following constraints apply when all three conditions are true:

  • The pod uses a GPU-HPN (High-Performance Network GPU) compute type.

  • The pod's schedulerName is default-scheduler.

  • Enable custom labels for GPU-HPN nodes and scheduler is not selected in the scheduler component configuration.

| Field | Constraint |
| --- | --- |
| requiredDuringSchedulingIgnoredDuringExecution | In nodeSelectorTerms, only the supported affinity labels are allowed in matchExpressions. matchFields cannot be specified. |
| preferredDuringSchedulingIgnoredDuringExecution | Not supported. |

For general-purpose, compute-optimized, and GPU instance types, nodeAffinity has no such constraints.
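Under those constraints, a GPU-HPN pod's affinity is limited to a hard rule over the supported labels. A sketch of a compliant affinity block, using the zone label as a stand-in (the exact set of supported labels may vary by version):

    affinity:
      nodeAffinity:
        # Only the required (hard) rule is allowed for GPU-HPN pods;
        # preferredDuringSchedulingIgnoredDuringExecution is rejected.
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:     # matchFields cannot be specified
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - cn-hangzhou-j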

Schedule pods to a specific zone

This example uses nodeSelector to schedule a Deployment to the cn-hangzhou-j zone.

  1. List the virtual nodes in your cluster.

    kubectl get node

    Expected output:

    NAME                            STATUS   ROLES   AGE     VERSION
    virtual-kubelet-cn-hangzhou-i   Ready    agent   5h42m   v1.28.3-xx
    virtual-kubelet-cn-hangzhou-j   Ready    agent   5h42m   v1.28.3-xx
  2. Create a file named dep-node-selector-demo.yaml with the following content.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dep-node-selector-demo
      labels:
        app: node-selector-demo
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: node-selector-demo
      template:
        metadata:
          labels:
            app: node-selector-demo
        spec:
          containers:
          - name: node-selector-demo
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
            command:
            - "sleep"
            - "infinity"
          # Pin pods to the cn-hangzhou-j zone
          nodeSelector:
            topology.kubernetes.io/zone: cn-hangzhou-j
  3. Apply the manifest.

    kubectl apply -f dep-node-selector-demo.yaml
  4. Verify that all pods are scheduled to the cn-hangzhou-j zone.

    kubectl get pod -o wide

    Expected output:

    NAME                                     READY   STATUS    RESTARTS   AGE    IP               NODE                            NOMINATED NODE   READINESS GATES
    dep-node-selector-demo-b4578576b-cgpfq   1/1     Running   0          112s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
    dep-node-selector-demo-b4578576b-fs8kl   1/1     Running   0          110s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
    dep-node-selector-demo-b4578576b-nh8zm   1/1     Running   0          2m8s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
    dep-node-selector-demo-b4578576b-rpp8l   1/1     Running   0          2m8s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>

    All four pods run on virtual-kubelet-cn-hangzhou-j, confirming zone-level scheduling works as expected.

What's next