
Container Service for Kubernetes:Best practices for associating deployment sets with node pools

Last Updated: Mar 25, 2026

A deployment set keeps Elastic Compute Service (ECS) instances on separate physical servers, preventing a single hardware failure from taking down multiple application replicas simultaneously. When you associate a deployment set with a node pool, every node in that pool is placed on a distinct physical server. Combined with pod anti-affinity or topology spread constraints, this gives your workloads hardware-level isolation.

Prerequisites

Before you begin, ensure that you have an ACK managed cluster or an ACK dedicated cluster. These are the cluster types that support deployment sets.

How it works

Without deployment sets, multiple pods from the same workload can land on nodes that share a physical server. If that server fails, all those pods go down simultaneously. Deployment sets eliminate this single point of failure at the infrastructure level: ECS instances in a deployment set are distributed across separate physical servers and isolated from each other.

Associating a deployment set with a node pool ensures that every node added to that pool — including nodes added during scale-out — is placed on a different physical server. Pod scheduling policies (anti-affinity or topology spread constraints) then spread pods across those distinct nodes.

Two layers of high availability

Deployment sets and availability zones address different failure domains. You can use them independently or together:

| Layer | Failure domain | Mechanism |
| --- | --- | --- |
| Physical server | A rack or host within a zone fails | Deployment set |
| Availability zone | An entire zone goes down | vSwitches configured across multiple zones |

Example 2 in the Examples section below combines both layers.

Limitations

  • Deployment sets are supported by ACK dedicated clusters and ACK managed clusters.

  • A deployment set can be associated with a node pool only when the node pool is created. You cannot enable a deployment set on an existing node pool.

  • Each node pool can be associated with only one deployment set.

  • To change the number of ECS instances in a deployment set, scale the associated node pool. See Create and manage a node pool. You cannot add or remove instances directly.

  • Node pools with an associated deployment set do not support preemptible instances.

  • By default, deployment sets use the high-availability strategy, which allows up to 20 ECS instances per zone; the maximum for a region is therefore 20 × the number of zones in the region. This per-set limit cannot be raised. To increase the maximum number of deployment sets per account, submit a quota increase request in the Quota Center console. See Deployment sets.

  • Insufficient instance stock in a region may cause ECS instance creation to fail, or prevent pay-as-you-go instances that were stopped in economical mode from restarting. If this happens, try again later.

  • Supported deployment strategies vary by instance family. To check which families support a given strategy, call DescribeDeploymentSetSupportedInstanceTypeFamily.

    | Deployment strategy | Supported instance families |
    | --- | --- |
    | High-availability strategy or high-availability group strategy | g8a, g8i, g8y, g7se, g7a, g7, g7h, g7t, g7ne, g7nex, g6, g6e, g6a, g5, g5ne, sn2ne, sn2, sn1; c8a, c8i, c8y, c7se, c7, c7t, c7nex, c7a, c6, c6a, c6e, c5, ic5, sn1ne; r8a, r8i, r8y, r7, r7se, r7t, r7a, r6, r6e, r6a, re6, re6p, r5, re4, se1ne, se1; hfc8i, hfg8i, hfr8i, hfc7, hfg7, hfr7, hfc6, hfg6, hfr6, hfc5, hfg5; d3c, d2s, d2c, d1, d1ne, d1-c14d3, d1-c8d3; i3g, i3, i2, i2g, i2ne, i2gne, i1; ebmg5, ebmc7, ebmg7, ebmr7, sccgn6, scch5, scch5s, sccg5, sccg5s; e, t6, xn4, mn4, n4, e4, n2, n1, gn6i |
    | Low latency strategy | g8a, g8i, g8ae, g8y; c8a, c8i, c8ae, c8y; ebmc8i, ebmg8i, ebmr8i; r8a, r8i, r8ae, r8y; ebmc7, ebmg7, ebmr7 |
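The capacity rule from the limitations above (at most 20 ECS instances per zone under the default high-availability strategy, and 20 × the number of zones per region) can be sketched as a quick pre-scale-out check. This is an illustration of the stated arithmetic only; the zone counts below are example values, and your account's actual quotas should be confirmed in the Quota Center.

```python
# Sketch of the deployment set capacity rule described above: the default
# high-availability strategy allows up to 20 ECS instances per zone, and a
# region's ceiling is 20 multiplied by its number of zones.
PER_ZONE_LIMIT = 20

def region_capacity(zone_count: int) -> int:
    """Maximum ECS instances a deployment set can hold across a region."""
    return PER_ZONE_LIMIT * zone_count

def scale_out_fits(current_nodes: int, added_nodes: int, zone_count: int) -> bool:
    """Whether a planned node pool scale-out stays within the deployment set limit."""
    return current_nodes + added_nodes <= region_capacity(zone_count)

print(region_capacity(3))        # 60
print(scale_out_fits(15, 4, 1))  # True: 19 <= 20
print(scale_out_fits(15, 10, 1)) # False: 25 > 20
```

A scale-out that fails this check will not produce the requested nodes, because the deployment set itself rejects instances beyond the limit.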

Associate a deployment set with a node pool

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the target cluster. In the left-side navigation pane, choose Nodes > Node Pools.

  3. On the Node Pools page, click Create Node Pool. In the dialog box, configure the parameters, select a deployment set, and then click Confirm Order. For parameter details, see Create a node pool.

Examples

The following examples show two common scheduling patterns. Both use a node pool with an associated deployment set to achieve hardware-level isolation between nodes. Pod scheduling policies then spread pods across those nodes.

Example 1: One pod per node using pod anti-affinity

This example schedules three replicas of an nginx Deployment to three different nodes within the same node pool.

  1. Create a node pool with three nodes and associate it with a deployment set. See Associate a deployment set with a node pool.

  2. On the Node Pools page, click the node pool name. On the Nodes tab, confirm that three nodes appear.

  3. Log on to the ECS console. In the left-side navigation pane, choose Deployment & Elasticity > Deployment Sets. Confirm that all three nodes belong to the specified deployment set.

  4. Apply the following YAML. The podAntiAffinity rule with topologyKey: kubernetes.io/hostname ensures no two pods from the same Deployment land on the same node. The nodeSelector pins pods to the target node pool.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:  # Hard anti-affinity: pods must land on distinct nodes.
              - labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - nginx
                topologyKey: kubernetes.io/hostname
          nodeSelector:
            alibabacloud.com/nodepool-id: <nodepool-id>  # Replace with your node pool ID.
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            resources:
              limits:
                cpu: 1
              requests:
                cpu: 1

Result: On the Deployments page, click the Deployment. On the Pods tab, confirm that each pod is scheduled to a different node.
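The same check can be done from the command line instead of the console. The sketch below parses two-column (pod name, node) sample data standing in for real `kubectl get pods -l app=nginx -o wide` output; against a live cluster you would pipe in the actual kubectl output rather than the embedded sample.

```shell
# Flag nodes that host more than one pod of the Deployment.
# The sample data below (pod name, node) is illustrative; replace it with
# real output from: kubectl get pods -l app=nginx -o wide
pods='nginx-7d9c5b5b5-abcde cn-hangzhou.10.0.0.1
nginx-7d9c5b5b5-fghij cn-hangzhou.10.0.0.2
nginx-7d9c5b5b5-klmno cn-hangzhou.10.0.0.3'

# Any node name printed by `uniq -d` hosts two or more pods.
dupes=$(printf '%s\n' "$pods" | awk '{print $2}' | sort | uniq -d)
if [ -z "$dupes" ]; then
  echo "OK: each pod runs on a distinct node"
else
  echo "WARNING: nodes with multiple pods: $dupes"
fi
```

With hard anti-affinity in place, a non-empty `dupes` result would indicate that the pods were not created by the Deployment shown above.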

Example 2: Pods spread across multiple zones using topology spread constraints

This example schedules four replicas of an nginx Deployment across four nodes in four different availability zones. It combines deployment set isolation (physical server level) with multi-zone vSwitches (availability zone level) for two layers of high availability.

  1. Create a node pool with four nodes, associate it with a deployment set, and select vSwitches in multiple zones. See Associate a deployment set with a node pool.

  2. On the Node Pools page, click the node pool name. On the Nodes tab, confirm that four nodes across four zones appear. Auto Scaling uses a balanced distribution policy to place ECS instances evenly across zones.

  3. Log on to the ECS console. In the left-side navigation pane, choose Deployment & Elasticity > Deployment Sets. Confirm that all four nodes belong to the specified deployment set.

  4. Apply the following YAML. The two topologySpreadConstraints entries enforce even spread by both hostname and zone, with DoNotSchedule preventing pods from concentrating on fewer nodes or zones. For more information, see Pod topology spread constraints.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          name: nginx
          labels:
            app: nginx
        spec:
          topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: kubernetes.io/hostname       # Spread evenly across nodes.
              whenUnsatisfiable: DoNotSchedule
              labelSelector:
                matchLabels:
                  app: nginx
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone  # Spread evenly across zones.
              whenUnsatisfiable: DoNotSchedule
              labelSelector:
                matchLabels:
                  app: nginx
          nodeSelector:
            alibabacloud.com/nodepool-id: <nodepool-id>  # Replace with your node pool ID.
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            resources:
              limits:
                cpu: 1
              requests:
                cpu: 1

Result: On the Deployments page, click the Deployment. On the Pods tab, confirm that pods are distributed across different nodes in different zones.
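The skew bookkeeping behind the two constraints in this example can be sketched in a few lines. This is an illustration of the maxSkew rule, not the real scheduler, and the node and zone names are hypothetical: skew in a topology is the gap between its most- and least-loaded domains, and DoNotSchedule rejects any placement that would push that gap past maxSkew.

```python
# Sketch of how maxSkew=1 evaluates the placement from this example: skew in a
# topology is the difference between the most- and least-loaded domains.
from collections import Counter

def skew(counts: Counter) -> int:
    return max(counts.values()) - min(counts.values())

# Four replicas, one per node and one per zone, as in the example above.
placement = [
    ("node-a", "zone-a"),
    ("node-b", "zone-b"),
    ("node-c", "zone-c"),
    ("node-d", "zone-d"),
]
node_skew = skew(Counter(node for node, _ in placement))
zone_skew = skew(Counter(zone for _, zone in placement))
print(node_skew, zone_skew)  # 0 0 -- both within maxSkew=1, so all pods schedule
```

If a fifth replica were added, one node and one zone would hold two pods while the others hold one, which still satisfies maxSkew=1; concentrating two extra pods in one zone would not.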

What's next