
Container Compute Service: Node affinity scheduling

Last Updated: Mar 25, 2025

All nodes in an Alibaba Cloud Container Compute Service (ACS) cluster are virtual nodes. Labels mark the attributes of these nodes, such as the zone, region, and GPU model of a virtual node. In an ACS cluster, you can use Kubernetes-native scheduling semantics to implement node affinity scheduling: by specifying node attributes in the nodeSelector or nodeAffinity field, you can schedule pods to virtual nodes that have specific attributes. This topic describes node affinity scheduling in ACS.
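
Labels on a virtual node can be viewed with standard kubectl commands. For example, the following command lists all nodes in the cluster together with their labels (the node names shown later in this topic are examples; use the node names in your own cluster):

    kubectl get node --show-labels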

Prerequisites

  • kube-scheduler is installed and its version meets the following requirements.

    | ACS cluster version | Scheduler version              |
    | ------------------- | ------------------------------ |
    | 1.31                | v1.31.0-aliyun-1.2.0 and later |
    | 1.30                | v1.30.3-aliyun-1.1.1 and later |
    | 1.28                | v1.28.9-aliyun-1.1.0 and later |

  • acs-virtual-node is installed and its version is v2.12.0-acs.4 or later.

Usage notes

nodeSelector

You can implement node affinity scheduling by specifying node labels in the nodeSelector field of a pod. The following table describes the labels that ACS supports for different types of virtual nodes; an example follows the table.

| Virtual node type    | Label                             | Description                   | Example                                             |
| -------------------- | --------------------------------- | ----------------------------- | --------------------------------------------------- |
| Regular virtual node | topology.kubernetes.io/zone       | Network zone                  | topology.kubernetes.io/zone: cn-shanghai-b          |
| GPU-HPN virtual node | topology.kubernetes.io/zone       | Network zone                  | topology.kubernetes.io/zone: cn-shanghai-b          |
| GPU-HPN virtual node | alibabacloud.com/hpn-zone         | High-performance network zone | alibabacloud.com/hpn-zone: B1                       |
| GPU-HPN virtual node | alibabacloud.com/gpu-model-series | GPU model                     | alibabacloud.com/gpu-model-series: <example-model>  |
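
For example, the following pod sketch combines these labels in the nodeSelector field to target a GPU-HPN virtual node in a specific zone, high-performance network zone, and GPU model series. The pod name is a hypothetical placeholder, the image is reused from the example later in this topic, the label values are illustrative, and other settings that a GPU-HPN workload may require (such as its compute class configuration) are omitted:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-hpn-selector-demo  # hypothetical name for illustration
    spec:
      containers:
      - name: app
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
        command: ["sleep", "infinity"]
      nodeSelector:
        topology.kubernetes.io/zone: cn-shanghai-b           # network zone
        alibabacloud.com/hpn-zone: B1                        # high-performance network zone
        alibabacloud.com/gpu-model-series: <example-model>   # replace with a GPU model series available in your cluster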

nodeAffinity

You can also use the nodeAffinity field to constrain which nodes a pod can be scheduled to. Compared with nodeSelector, nodeAffinity is more expressive. For specific compute classes, ACS imposes constraints on the nodeAffinity fields, as described in the following table; an example appears after the table.

| Compute class            | Field                                            | Description                                                                                                                                        | Constraint                                                                                                                        |
| ------------------------ | ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| GPU-accelerated, GPU-HPN | requiredDuringSchedulingIgnoredDuringExecution   | Pods are scheduled only if the rule is met. This field works in a way similar to the nodeSelector field.                                            | In nodeSelectorTerms, matchExpressions can reference only the preceding affinity labels. The matchFields field is not supported.     |
| GPU-accelerated, GPU-HPN | preferredDuringSchedulingIgnoredDuringExecution  | Specifies node affinity by weight. The scheduler prefers a node that meets the rule but still schedules the pod if no matching node is available.   | Not supported.                                                                                                                        |

The preceding constraints on the nodeAffinity field do not apply to pods in the general-purpose or performance-enhanced compute classes.
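
The following sketch shows a nodeAffinity configuration that stays within these constraints for a GPU-accelerated or GPU-HPN pod: only requiredDuringSchedulingIgnoredDuringExecution is used, and matchExpressions reference only the affinity labels described earlier. The pod name, image, and label values are illustrative placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: node-affinity-demo  # hypothetical name for illustration
    spec:
      affinity:
        nodeAffinity:
          # Only requiredDuringSchedulingIgnoredDuringExecution is supported;
          # matchExpressions may reference only the supported affinity labels.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - cn-shanghai-b
              - key: alibabacloud.com/hpn-zone
                operator: In
                values:
                - B1
      containers:
      - name: app
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
        command: ["sleep", "infinity"]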

Examples

The following example shows how to configure the nodeSelector field to schedule an application to a specific zone.

  1. Run the following command to view the virtual nodes in the cluster:

    kubectl get node

    Expected results:

    NAME                            STATUS   ROLES   AGE     VERSION
    virtual-kubelet-cn-hangzhou-i   Ready    agent   5h42m   v1.28.3-xx
    virtual-kubelet-cn-hangzhou-j   Ready    agent   5h42m   v1.28.3-xx
  2. Create a file named dep-node-selector-demo.yaml and add the following content to the file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: dep-node-selector-demo
      labels:
        app: node-selector-demo
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: node-selector-demo
      template:
        metadata:
          labels:
            app: node-selector-demo
        spec:
          containers:
          - name: node-selector-demo
            image: registry-cn-hangzhou.ack.aliyuncs.com/acs/stress:v1.0.4
            command:
            - "sleep"
            - "infinity"
          # Set the zone to cn-hangzhou-j.
          nodeSelector:
            topology.kubernetes.io/zone: cn-hangzhou-j
  3. Run the following command to deploy the application to the cluster:

    kubectl apply -f dep-node-selector-demo.yaml
  4. Run the following command to view the distribution results of pods:

    kubectl get pod -o wide

    Expected results:

    NAME                                     READY   STATUS    RESTARTS   AGE    IP               NODE                            NOMINATED NODE   READINESS GATES
    dep-node-selector-demo-b4578576b-cgpfq   1/1     Running   0          112s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
    dep-node-selector-demo-b4578576b-fs8kl   1/1     Running   0          110s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
    dep-node-selector-demo-b4578576b-nh8zm   1/1     Running   0          2m8s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>
    dep-node-selector-demo-b4578576b-rpp8l   1/1     Running   0          2m8s   192.168.xx.xxx   virtual-kubelet-cn-hangzhou-j   <none>           <none>

    The output shows that all four pods are scheduled to the virtual node in the cn-hangzhou-j zone.