
Container Service for Kubernetes: Create and manage node pools

Last Updated: Mar 26, 2026

Node pools let you group nodes that share the same configuration — instance types, operating systems, labels, and taints — so you can apply operations to all nodes in the pool at once. A cluster can have multiple node pools with different configurations, and changes to one node pool never affect nodes or applications in other pools.

Before creating a node pool, read Connect to cloud-based ECS computing resources to understand node pool basics, use cases, and billing.

Limitations

The following parameters cannot be modified after a node pool is created:

  • Region: fixed to the cluster's region.
  • VPC: fixed to the cluster's virtual private cloud (VPC).
  • Security hardening: cannot be changed after the cluster is created.
  • Billing method: cannot be switched between spot instances and pay-as-you-go or subscription billing.
  • Custom security group: the type (basic or advanced) cannot be changed after creation.

Prerequisites

Before you begin, make sure the prerequisites for your cluster version are met. If your cluster version does not meet the requirements, create a custom script for the node pool before proceeding.

Navigate to node pools

  1. Log on to the ACK console. In the left navigation pane, click Clusters.

  2. On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Nodes > Node Pools.

Create a node pool

  1. On the Node Pools page, click Create Node Pool.

  2. In the Create Node Pool dialog box, configure the parameters described in the following sections.

  3. Click Confirm.

After you click Confirm, the node pool list shows the new pool with Initializing status while it is being created. When creation succeeds, the status changes to Active.

On the confirmation page, click Console-to-Code in the lower-left corner to get Terraform or SDK sample code matching your node pool configuration.

Basic configurations

Node pool name (Changeable: Yes)
A name for the node pool.

Region (Changeable: No)
Pre-selected based on the cluster's region. Cannot be changed.

Scaling mode (Changeable: Yes)
Manual or Auto.
  • Manual: maintains the exact number of nodes you specify in Expected number of nodes. See Manually scale a node pool.
  • Auto: scales out when pod scheduling demand exceeds cluster capacity, based on the configured minimum and maximum instance counts. Clusters running Kubernetes 1.24 or later use node instant scaling by default; earlier versions use node auto scaling. See Node scaling.

Network configurations

VPC (Changeable: No)
Pre-selected as the cluster's VPC. Cannot be changed.

vSwitch (Changeable: Yes)
Determines the zones where new nodes are created during scale-out, based on the Scaling policy you choose. Select vSwitches in your target zones. If no vSwitch is available, click Create vSwitch. See Create and manage vSwitches.

Instance and image configurations

Billing method (Changeable: Yes)
The billing method for ECS instances added to the node pool: Pay-As-You-Go, Subscription, or Spot Instance.
  • Subscription: configure Duration and optionally enable Auto renewal.
  • Spot Instance: ACK supports spot instances with a 1-hour protection period only. After the protection period, the system checks the spot price and resource availability every 5 minutes. If the real-time price exceeds your bid or inventory is insufficient, the instance is released. See Best practices for spot instance-based node pools.
Important: Billing method changes apply only to newly added nodes. Existing nodes retain their original billing method. To change the billing method of an existing node, see Change the billing method from pay-as-you-go to subscription. You cannot switch between spot instances and pay-as-you-go or subscription billing.

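
The switching constraint above can be expressed as a small check. This is an illustrative sketch, not an ACK API; the billing-method strings are shorthand for the console options.

```python
# Sketch of the billing-method rule above: pay-as-you-go and
# subscription can be switched either way, but spot instances can
# never be switched to or from the other two methods.
ALLOWED_SWITCHES = {
    ("pay-as-you-go", "subscription"),
    ("subscription", "pay-as-you-go"),
}

def can_switch_billing(current: str, new: str) -> bool:
    """Return True if a node pool's billing method may change from
    `current` to `new` under the rule quoted above."""
    return current == new or (current, new) in ALLOWED_SWITCHES
```

For example, `can_switch_billing("spot", "pay-as-you-go")` is `False`, while `can_switch_billing("pay-as-you-go", "subscription")` is `True`.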
Instance type (Changeable: Yes)
Select one or more ECS instance types for worker nodes. Filter by vCPU, memory, instance family, and architecture. Selecting multiple instance types improves scale-out success rates. If scale-out fails because instance types are unavailable, add more types to the pool. See ECS specification recommendations for ACK clusters and Check the scalability of the node pool.

Operating system (Changeable: Yes)
Public image or Custom image.
  • Public image: Alibaba Cloud Linux 3 container-optimized image, ContainerOS, Alibaba Cloud Linux 3, or Ubuntu. See Operating systems.
  • Custom image: use a custom OS image. See How do I create a custom image based on an existing ECS instance and use it to create nodes?
Note: Alibaba Cloud Marketplace Image is in phased release. To upgrade or change the OS, see Change the OS.

Security hardening (Changeable: No)
Choose one of the following options. Cannot be changed after the cluster is created.
  • Disable: no security hardening applied.
  • MLPS security hardening: applies Multi-Level Protection Scheme (MLPS) 2.0 level-3 baselines to Alibaba Cloud Linux 2 and Alibaba Cloud Linux 3 images.
  • OS security hardening: available only for Alibaba Cloud Linux 2 or Alibaba Cloud Linux 3 images.
Important: When MLPS security hardening is enabled, SSH root login is prohibited. Use VNC to log on from the ECS console and create regular users for SSH access. See Connect to an instance by using VNC.

Logon method (Changeable: Yes)
Set key pair or Set password.
  • Set key pair: SSH key pair authentication, supported only for Linux instances. Specify the Logon name (root or ecs-user) and the key pair.
  • Set password: the password must be 8 to 30 characters and include uppercase letters, lowercase letters, digits, and special characters. Specify the Logon name (root or ecs-user) and the password.
Note: When MLPS security hardening is selected, only Set password is supported.
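
The password rules above can be sketched as a small validator. This is an illustration, not the console's actual check; in particular, "special character" is assumed here to mean any non-alphanumeric ASCII character, and the console may accept a narrower set.

```python
import re

# Illustrative check of the Set password rules above: 8-30 characters,
# with at least one uppercase letter, one lowercase letter, one digit,
# and one special character (assumed: any non-alphanumeric character).
def is_valid_logon_password(password: str) -> bool:
    if not 8 <= len(password) <= 30:
        return False
    return all(re.search(pattern, password) for pattern in (
        r"[A-Z]",         # uppercase letter
        r"[a-z]",         # lowercase letter
        r"[0-9]",         # digit
        r"[^A-Za-z0-9]",  # special character
    ))
```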

Storage configurations

System disk (Changeable: Yes)
Supported types: ESSD AutoPL, Enterprise SSD (ESSD), ESSD Entry, Standard SSD, and Ultra Disk. Available types depend on the instance families you select.
For Enterprise SSD (ESSD), set the performance level (PL). Higher PLs require larger capacity: PL2 requires more than 460 GiB; PL3 requires more than 1,260 GiB. See Capacity and PLs.
Encryption is available for Enterprise SSD (ESSD) only. The default service CMK is used by default; you can also use an existing BYOK CMK from KMS.
Select More system disk types to specify fallback disk types and improve creation success rates.

Data disk (Changeable: Yes)
Supported types: ESSD AutoPL, Enterprise SSD (ESSD), ESSD Entry, SSD, and Ultra Disk.
ESSD AutoPL supports performance provision (for sustained above-baseline performance) and performance burst (for read/write spikes).
For Enterprise SSD (ESSD), configure a custom PL. PL2 requires more than 460 GiB; PL3 requires more than 1,260 GiB.
Encryption is available for all data disk types. You can also create data disks from snapshots to accelerate container image loading and large language model (LLM) initialization.
During node creation, the last data disk is automatically formatted. The system mounts /var/lib/container to this disk and mounts /var/lib/kubelet and /var/lib/containerd to /var/lib/container. To customize mount points, modify the disk initialization configuration. See Can I mount a data disk to a custom directory in an ACK node pool?
Note: Up to 64 data disks can be attached to an ECS instance, depending on the instance type. To check the limit for a specific type, call the DescribeInstanceTypes operation and check the DiskQuantity parameter.
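
The ESSD capacity thresholds quoted above ("more than 460 GiB" for PL2, "more than 1,260 GiB" for PL3) can be captured in a small lookup. This is an illustrative sketch; the PL0/PL1 minimums below are assumptions, so check the linked Capacity and PLs topic for authoritative values.

```python
# Illustrative check of the ESSD performance-level capacity rule above.
# PL2 and PL3 minimums follow the text (strictly greater than 460 GiB
# and 1,260 GiB); PL0/PL1 minimums are assumptions for illustration.
ESSD_PL_MIN_CAPACITY_GIB = {"PL0": 1, "PL1": 20, "PL2": 461, "PL3": 1261}

def essd_pl_allowed(pl: str, capacity_gib: int) -> bool:
    """Return True if the disk capacity supports the requested PL."""
    return capacity_gib >= ESSD_PL_MIN_CAPACITY_GIB[pl]
```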

Number of instances

Expected number of nodes (Changeable: Yes)
The total number of nodes the node pool maintains. Set it to at least 2 so that cluster components run correctly, or to 0 if you do not need nodes at creation time. See Manually scale a node pool.

Advanced configurations

Expand Advanced options (optional) to configure the following parameters.

Resource group (Changeable: Yes)
The resource group the node pool belongs to. Each resource can belong to only one resource group.

Scaling mode (Changeable: Yes)
Applies only when the node pool uses the Auto scaling mode.
  • Standard mode: scales by creating and releasing ECS instances.
  • Economical mode: scales by creating, stopping, and restarting ECS instances. Stopped instances incur storage costs only, not compute costs. Not applicable to instance families with local storage (such as big data and local SSD families). See Economical mode.

Scaling policy (Changeable: Yes)
Determines how new nodes are distributed across zones during scale-out.
  • Priority: scales in the priority order of the selected vSwitches. If a zone is unavailable, the next zone is tried.
  • Cost optimization: creates instances in ascending order of vCPU unit price. Spot instances are preferred if the billing method is Spot Instance; pay-as-you-go instances supplement them when spot instances are unavailable.
  • Distribution balancing: distributes instances evenly across the selected zones (vSwitches). Requires multiple vSwitches.

Supplement spot capacity with on-demand instances (Changeable: Yes)
Requires Spot Instance billing. When enabled, ACK automatically creates pay-as-you-go instances if spot instances cannot be provisioned due to price or inventory constraints.

Enable supplemental spot instances (Changeable: Yes)
Requires Spot Instance billing. When a reclamation notice is received (5 minutes before reclamation), ACK attempts to provision new spot instances as replacements. If compensation succeeds, ACK drains and removes the old nodes. If it fails, ACK does not drain the old nodes and retries when inventory and pricing conditions are met. Enable Supplement spot capacity with on-demand instances alongside this option to improve compensation success rates. See Best practices for spot instance-based node pools.

ECS tags (Changeable: Yes)
Tags added to ECS instances during auto scaling. ACK and Auto Scaling reserve 3 tags (ack.aliyun.com:<Cluster ID>, ack.alibabacloud.com/nodepool-id:<Node pool ID>, and acs:autoscaling:scalingGroupId:<Scaling group ID>), leaving a maximum of 17 user-defined tags per node (the ECS limit is 20 in total).
Note: When auto scaling is enabled, the tags k8s.io/cluster-autoscaler:true and k8s.aliyun.com:true are added by default. The cluster autoscaler reformats node labels and taints for simulation: labels become k8s.io/cluster-autoscaler/node-template/label/<key>:<value> and taints become k8s.io/cluster-autoscaler/node-template/taint/<key>/<value>:<effect>. To increase the tag quota, submit a request in the Quota Center console.

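
The tag rewriting described in the note above can be rendered as a short helper. This is an illustrative sketch of the stated format only, not autoscaler source code; the taint representation as (key, value, effect) tuples is an assumption for the example.

```python
# Illustrative rendering of the note above: the cluster autoscaler
# encodes node-pool labels and taints into node-template tags so it can
# simulate nodes before they are created.
def autoscaler_template_tags(labels, taints):
    """labels: dict of {key: value}; taints: list of (key, value, effect)."""
    tags = {}
    for key, value in labels.items():
        tags[f"k8s.io/cluster-autoscaler/node-template/label/{key}"] = value
    for key, value, effect in taints:
        tags[f"k8s.io/cluster-autoscaler/node-template/taint/{key}/{value}"] = effect
    return tags
```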
Taints (Changeable: Yes)
Taints consist of a key, a value, and an effect. Constraints:
  • Key: 1–63 characters; letters, digits, hyphens (-), underscores (_), and periods (.); must start and end with a letter or digit. For a prefixed key, the prefix must be a subdomain name (≤253 characters, ending with /).
  • Value: up to 63 characters; same character rules as the key; can be blank.
  • Effect: NoSchedule (pods are not scheduled to the node), NoExecute (non-tolerating pods are evicted when the taint is added), or PreferNoSchedule (the scheduler avoids placing non-tolerating pods on the node).
See Taints and tolerations.

Node labels (Changeable: Yes)
Labels are key-value pairs. Constraints for keys:
  • 1–63 characters; letters, digits, hyphens (-), underscores (_), and periods (.); must start and end with a letter or digit.
  • For a prefixed key, the prefix must be a subdomain name (≤253 characters, ending with /).
  • Reserved prefixes (cannot be used): kubernetes.io/, k8s.io/, and any prefix ending with these. Exceptions: kubelet.kubernetes.io/, node.kubernetes.io/, and prefixes ending with those.
Values: up to 63 characters; same character rules as the key; can be blank.

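
The syntactic key constraints above (shared by labels and taints) can be sketched with two regular expressions. This is an illustration only: it checks the character and length rules, not the reserved-prefix list, which would need a separate check.

```python
import re

# Illustrative validator for the label/taint key rules above. The name
# part is 1-63 chars of letters, digits, '-', '_', '.', starting and
# ending with a letter or digit; an optional prefix before '/' must be
# a DNS subdomain of at most 253 characters.
NAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$")
SUBDOMAIN_RE = re.compile(
    r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?(\.[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?)*$"
)

def is_valid_label_key(key: str) -> bool:
    prefix, _, name = key.rpartition("/")
    if prefix:
        if len(prefix) > 253 or not SUBDOMAIN_RE.match(prefix):
            return False
    return bool(NAME_RE.match(name))
```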
Custom node name (Changeable: Yes)
When enabled, the node name, ECS instance name, and ECS hostname are all set to a custom value in the format <prefix><IP substring><suffix>.
  • Total length: 2–64 characters; must start and end with a lowercase letter or digit.
  • Prefix (required): letters, digits, hyphens (-), and periods (.); must start with a letter; cannot end with a hyphen or period; no consecutive hyphens or periods.
  • Suffix (optional): same rules as the prefix.
Example: IP 192.XX.YY.55, prefix aliyun.com, suffix test → Linux node name aliyun.com192.XX.YY.55test. Windows nodes use hyphens instead of periods in the hostname (for example, 192-XX-YY-55).

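
The naming format above can be sketched as a tiny builder. This is an illustration of the documented format, not ACK's implementation; it assumes, following the 192-XX-YY-55 example, that only the IP portion swaps periods for hyphens on Windows, and it omits validation of the prefix and suffix rules.

```python
# Illustrative builder for the custom node-name format above:
# <prefix><IP substring><suffix>. On Windows, the IP portion uses
# hyphens instead of periods (assumption based on the example given).
def build_node_name(prefix: str, ip: str, suffix: str = "", windows: bool = False) -> str:
    ip_part = ip.replace(".", "-") if windows else ip
    return f"{prefix}{ip_part}{suffix}"
```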
Pre-custom instance data (Changeable: Yes)
A script that runs on nodes before they join the cluster. Requires a quota application in the Quota Center console. See User-data scripts.

Instance user data (Changeable: Yes)
A script that runs on nodes after they join the cluster. See User-data scripts. After creating a cluster or adding nodes, check the execution results by logging on to a node and running grep cloud-init /var/log/messages.

CloudMonitor agent (Changeable: Yes)
Installs the CloudMonitor agent on new nodes so that you can view node metrics in the CloudMonitor console. Applies to newly added nodes only. To install the agent on existing nodes, go to the CloudMonitor console directly.

Public IP address (Changeable: Yes)
Assigns a public IPv4 address to each new node. If enabled, configure the Bandwidth billing method and Peak bandwidth. Applies to newly added nodes only. To enable internet access for an existing node, create an EIP and associate it with the node. See Associate an EIP with an ECS instance.

Custom security group (Changeable: No)
Select Basic security group or Advanced security group. Only one type can be selected, and the type cannot be changed after creation. Each ECS instance supports up to 5 security groups; make sure the quota is sufficient. If you select an existing security group, security group rules are not configured automatically; configure them manually. See Configure security group rules to enforce access control on ACK clusters and Security groups.

RDS whitelist (Changeable: Yes)
Adds node IP addresses to the whitelist of an ApsaraDB RDS instance.

Deployment set (Changeable: Yes)
Distributes ECS instances across physical servers for high availability. Create a deployment set in the ECS console first, then specify it here.
Important: After you select a deployment set, the maximum number of nodes in the pool is 20 × the number of zones. The number of zones depends on the number of vSwitches. Make sure the ECS quota for the selected deployment set is sufficient. See Best practices for associating deployment sets with node pools.

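
The node cap above is simple arithmetic, shown here as a one-line helper for clarity. Illustrative only; the zone count is assumed to equal the number of distinct zones covered by the selected vSwitches.

```python
# The cap from the note above: with a deployment set attached, a node
# pool can hold at most 20 nodes per zone.
def deployment_set_max_nodes(zone_count: int) -> int:
    return 20 * zone_count
```

For example, a pool whose vSwitches span 3 zones can hold at most 60 nodes.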
Private pool type (Changeable: Yes)
Controls whether a capacity reservation private pool is used when creating instances.
  • Open: automatically matches an open private pool and falls back to the public pool if none is found.
  • Do not use: uses only the public pool.
  • Specified: uses the private pool with the specified ID; instance creation fails if the pool is unavailable.
See Private pools.

Edit a node pool

You can update most node pool parameters after creation — including vSwitches, billing method, instance type, system disk, and auto scaling settings. For the full list of editable parameters, see the Changeable column in the parameter tables above.

Important
  • Modifying a node pool does not affect nodes or applications in other node pools of the cluster.

  • In most cases, updated configurations apply only to newly added nodes. Exceptions: ECS tags, node labels, and taints — changes to these also apply to existing nodes.

  • If you have modified nodes outside of the node pool configuration (for example, directly on the ECS instance), those changes are overwritten when you update the node pool.

When switching Scaling mode:

  • Manual to Auto: enables auto scaling and requires configuring minimum and maximum instance counts.

  • Auto to Manual: disables auto scaling, sets minimum instances to 0 and maximum to 2,000, and sets Expected nodes to the current node count.

Steps:

  1. On the Node Pools page, find the node pool and click Edit in the Actions column.

  2. In the dialog box, modify the parameters and follow the on-screen instructions.

The node pool status shows Updating during the update and returns to Active when the update is complete.

View a node pool

Click the name of a node pool to view its details across four tabs:

  • Basic information: cluster, node pool, and node configuration details. If auto scaling is enabled, elastic scaling settings are also shown here.

  • Monitoring: CPU usage, memory usage, disk usage, and average CPU/memory per node, powered by Alibaba Cloud Prometheus.

  • Node management: lists all nodes in the pool. Remove, drain, schedule, or perform operations and maintenance (O&M) on individual nodes. Click Export to download the node list as a CSV file.

  • Scaling activities: records of recent scaling events, including instance counts after each activity and failure descriptions. For common error codes, see Error codes and solutions for scaling failures.

Delete a node pool

Important

All nodes in the pool are removed from the cluster's API server. Review the release rules below before proceeding.

Node release behavior depends on whether Expected number of nodes is configured for the pool and the billing method of each node.

Node pool with Expected number of nodes configured:

  • Pay-as-you-go nodes: released after the node pool is deleted.
  • Subscription nodes: retained after the node pool is deleted.

Node pool without Expected number of nodes configured:

  • Manually or automatically added nodes: retained (not released).
  • Subscription nodes: retained (not released).
  • Other nodes: released when the node pool is deleted.

All released nodes are removed from the cluster's API server. Retained nodes remain registered in the API server.

To release a retained subscription node, change its billing method to pay-as-you-go first, then release the ECS instance from the ECS console.
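
The release rules above can be summarized in one decision function. This is an illustrative encoding of the two lists, not ACK behavior code; it assumes billing is "pay-as-you-go" or "subscription", and that manually_added marks nodes that were manually or automatically added to the pool.

```python
# Illustrative encoding of the release rules above: whether a node is
# released (True) or retained (False) when its node pool is deleted.
def node_released_on_pool_delete(expected_nodes_configured: bool,
                                 billing: str,
                                 manually_added: bool = False) -> bool:
    if expected_nodes_configured:
        # Pay-as-you-go nodes are released; subscription nodes are retained.
        return billing == "pay-as-you-go"
    if manually_added or billing == "subscription":
        return False  # retained and still registered in the API server
    return True  # other nodes are released with the pool
```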

Steps:

  1. (Optional) Click the node pool name. On the Overview tab, check whether Expected number of nodes is configured. A hyphen (-) means it is not configured.

  2. On the Node Pools page, find the node pool, click the more icon in the Actions column, select Delete, and then confirm in the dialog box.

What's next