Container Service for Kubernetes: Schedule applications to a specific node pool

Last Updated: Oct 30, 2025

To schedule a specific application to nodes with a particular configuration, add labels to a node pool, then configure the application's pod spec with a corresponding nodeSelector or nodeAffinity.

How it works

Scheduling a pod to a specific node pool uses the native Kubernetes scheduling mechanism. By matching a pod's scheduling rules to the labels on the nodes within a pool, you can control placement. The workflow is as follows:

  1. Label the node pool: A node pool manages a group of nodes with identical configurations. After you configure Node Labels for a node pool, Container Service for Kubernetes (ACK) automatically propagates these labels to all current and future nodes within that pool.

    To ensure that labels are applied to existing nodes, select the Update Labels and Taints of Existing Nodes option in the node pool settings.
  2. Define pod scheduling rules: In the pod's YAML manifest, use nodeSelector or nodeAffinity to specify the labels of the target nodes.

  3. (Optional) Configure exclusive access: To ensure that a node pool is reserved for specific workloads, add a taint to the node pool. This taint prevents any pod without a corresponding toleration from being scheduled onto the nodes in that pool (see the toleration sketch after this list).

  4. Auto scheduling: The scheduler then automatically places the pod onto a node that satisfies all the defined rules.
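
For reference, here is a minimal sketch of the exclusive-access pattern from step 3, assuming the node pool carries a taint with key dedicated and value nginx (hypothetical names chosen for this example). A pod then needs a matching toleration in its spec to be admitted onto the tainted nodes:

    # Hypothetical toleration matching a node pool taint dedicated=nginx:NoSchedule
    tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "nginx"
      effect: "NoSchedule"

A toleration only permits scheduling onto the tainted nodes; to also steer the pod there, combine it with the nodeSelector or nodeAffinity rules from step 2.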

Step 1: Set labels for a node pool

Add custom labels to a node pool to identify business attributes, environments, or any other metadata for scheduling purposes.

ACK automatically creates a globally unique label for each node pool: alibabacloud.com/nodepool-id. You can use this label to match a node pool exactly, as shown in the following example.
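
For example, assuming a node pool whose ID is np-example (a placeholder; use the actual ID shown in the console), you can list the nodes in that pool from the command line:

    # List all nodes that belong to a specific node pool (np-example is a placeholder)
    kubectl get nodes -l alibabacloud.com/nodepool-id=np-example

The same label can be referenced in a nodeSelector or nodeAffinity rule to target exactly one node pool.
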
  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, find the cluster to manage and click its name. In the left navigation pane, choose Nodes > Node Pools.

  3. Find the target node pool and click Edit in the Actions column. Expand the Advanced Options (Optional) section and configure the Node Labels.

    • Key: Consists of a required name and an optional prefix, joined by a slash (prefix/name).

      • Name (Required): Must be 1 to 63 characters long and must begin and end with an alphanumeric character ([a-z0-9A-Z]). It can contain hyphens (-), underscores (_), dots (.), and alphanumerics.

      • Prefix (Optional): Must be a DNS subdomain, which is a series of DNS labels separated by dots (.). The total length of the prefix must not exceed 253 characters, and it must end with a forward slash (/).

        Note

        The kubernetes.io/ prefix is reserved for Kubernetes core components. If you use this namespace, your label key must begin with one of the following: kubelet.kubernetes.io/ or node.kubernetes.io/.

    • Value (Optional): Can be 1 to 63 characters long and must begin and end with an alphanumeric character ([a-z0-9A-Z]). It can contain hyphens (-), underscores (_), dots (.), and alphanumerics.

  4. (Optional) Select Update Labels and Taints of Existing Nodes if needed.

  5. Save your changes. To verify that the labels have been applied, navigate to the Nodes page, click Manage Labels And Taints, and check the labels on each node, or use the command line as shown below.
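
    Alternatively, confirm the labels from the command line:

    # Show all labels attached to each node
    kubectl get nodes --show-labels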

Step 2: Configure the application's scheduling policy

Once the node pool is labeled, you can configure your application's deployment YAML to target it using either nodeSelector or nodeAffinity.

  • nodeSelector: Provides a simple and straightforward way to select nodes by requiring an exact match for one or more labels.

  • nodeAffinity: Offers more expressive and flexible scheduling rules, including:

    • Operators such as In, NotIn, and Exists.

    • Hard affinity (requiredDuringSchedulingIgnoredDuringExecution): The pod must be scheduled onto a node that meets the criteria; if no matching node is available, the pod remains in the Pending state.

    • Soft affinity (preferredDuringSchedulingIgnoredDuringExecution): The scheduler will try to place the pod on a matching node, but will schedule it elsewhere if no matching nodes are available.
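
The example Deployments in the next section use the hard (required) form. For reference, here is a minimal soft-affinity sketch that reuses the example label pod: nginx introduced below; the weight (1 to 100) expresses the preference's relative importance:

    # Soft affinity: prefer, but do not require, nodes labeled pod=nginx
    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
            - key: pod
              operator: In
              values:
              - nginx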

Example Deployment

  1. Create a file named deployment.yaml. The following examples demonstrate how to schedule an Nginx deployment to nodes with the label pod: nginx.

    nodeAffinity

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-with-affinity
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          affinity:
            nodeAffinity:
              # Hard requirement: The pod must be scheduled to nodes that meet the conditions.
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: pod         # The key of the node label
                    operator: In
                    values:
                    - nginx        # The value of the node label
          containers:
          - name: nginx-with-affinity
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            # In a production environment, declare resource requirements for the container to ensure Quality of Service (QoS).
            resources:
              requests:
                cpu: "100m"
                memory: "128Mi"
              limits:
                cpu: "200m"
                memory: "256Mi"

    nodeSelector

    apiVersion: apps/v1 
    kind: Deployment
    metadata:
      name: nginx-deployment-basic
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          nodeSelector:
            pod: nginx      # Ensures the pod is scheduled only on nodes that have this specific label. 
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            # In a production environment, declare resource requirements for the container to ensure QoS.
            resources:
              requests:
                cpu: "100m"
                memory: "128Mi"
              limits:
                cpu: "200m"
                memory: "256Mi"
  2. Deploy the application.

    kubectl apply -f deployment.yaml
  3. Verify that the pods have been scheduled to the correct nodes.

    Use the -o wide flag to view the node on which each pod is scheduled.
    kubectl get pods -l app=nginx -o wide

    Check the output to confirm that the pods are running on nodes from your target node pool.
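
    You can also cross-check which nodes carry the target label; the pods should be running on nodes from this list:

    # List the nodes that have the example label pod=nginx
    kubectl get nodes -l pod=nginx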

References

  • In addition to managing labels through node pools, you can also set labels on individual nodes for fine-grained scheduling control. For instructions, see Manage node labels.

  • If your pods experience issues, such as being stuck in the pending state for an extended period, see Pod troubleshooting.

  • ACK offers a range of advanced scheduling policies, including priority-based instance scaling (defining the order for scaling up and down different instance types) and load-aware scheduling based on real-time node resource utilization. For more information, see Scheduling policies provided by ACK.

  • If you use hard affinity for scheduling and no nodes in the cluster meet the requirements, ACK automatically provisions a new node from any auto scaling-enabled node pool that has the required labels.

  • Clusters created before the node pool feature was introduced may contain unmanaged worker nodes. We recommend migrating these nodes into a node pool to ensure consistent management.