Node pools let you group nodes that share the same configuration — instance types, operating systems, labels, and taints — so you can apply operations to all nodes in the pool at once. A cluster can have multiple node pools with different configurations, and changes to one node pool never affect nodes or applications in other pools.
Before creating a node pool, read Connect to cloud-based ECS computing resources to understand node pool basics, use cases, and billing.
Limitations
The following parameters cannot be modified after a node pool is created:
| Parameter | Constraint |
|---|---|
| Region | Fixed to the cluster's region |
| VPC | Fixed to the cluster's virtual private cloud (VPC) |
| Security hardening | Cannot be changed after the node pool is created |
| Billing method | Cannot switch between spot instances and pay-as-you-go or subscription |
| Custom security group | Type (basic or advanced) cannot be changed after creation |
Prerequisites
Before you begin, make sure you have:
An ACK One registered cluster with an external Kubernetes cluster (deployed in an on-premises data center) connected to it
Network connectivity between the external Kubernetes cluster and the VPC of the ACK One registered cluster — see Scenario-based networking for VPC connections
Proxy configuration of the external Kubernetes cluster imported to the ACK One registered cluster in private network mode — see Associate an external Kubernetes cluster with an ACK One registered cluster
The cloud-node-controller component installed
A self-managed Kubernetes cluster created with kubeadm, running version 1.26, 1.28, 1.30, 1.31, or later
If your cluster version does not meet the requirement above, create a custom script for the node pool before proceeding.
Navigate to node pools
Log on to the ACK console. In the left navigation pane, click Clusters.
On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Nodes > Node Pools.
Create a node pool
On the Node Pools page, click Create Node Pool.
In the Create Node Pool dialog box, configure the parameters described in the following sections.
Click Confirm.
After you click Confirm, the node pool list shows the new pool with Initializing status while it is being created. When creation succeeds, the status changes to Active.
On the confirmation page, click Console-to-Code in the lower-left corner to get Terraform or SDK sample code matching your node pool configuration.
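If you prefer infrastructure as code over the console, the Console-to-Code output corresponds to a node pool resource in the alicloud Terraform provider. The following is a minimal sketch, assuming the `alicloud_cs_kubernetes_node_pool` resource and argument names from recent provider versions; all IDs and values are placeholders:

```hcl
# Sketch of a node pool definition; replace the placeholder IDs with your own.
resource "alicloud_cs_kubernetes_node_pool" "example" {
  cluster_id     = "c-xxxxxxxx"                       # target ACK cluster
  node_pool_name = "example-pool"
  vswitch_ids    = ["vsw-xxxxxxxx"]                   # zones for new nodes
  instance_types = ["ecs.g7.xlarge", "ecs.g6.xlarge"] # multiple types improve scale-out success
  system_disk_category = "cloud_essd"
  system_disk_size     = 120
  desired_size         = 2                            # Expected number of nodes
  key_name             = "my-key-pair"                # or set `password` instead
}
```

Run `terraform plan` first to confirm your provider version accepts these arguments before applying.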
Basic configurations
| Parameter | Description | Changeable |
|---|---|---|
| Node pool name | A name for the node pool. | |
| Region | Pre-selected based on the cluster's region. Cannot be changed. | |
| Scaling mode | Manual or Auto. <br>- Manual: maintains the exact number of nodes you specify in Expected number of nodes. See Manually scale a node pool.<br>- Auto: scales out when pod scheduling demand exceeds cluster capacity, based on configured minimum and maximum instance counts. Clusters running Kubernetes 1.24 or later use node instant scaling by default; earlier versions use node auto scaling. See Node scaling. | |
Network configurations
| Parameter | Description | Changeable |
|---|---|---|
| VPC | Pre-selected as the cluster's VPC. Cannot be changed. | |
| vSwitch | Determines the zones where new nodes are created during scale-out, based on the Scaling policy you choose. Select vSwitches in your target zones. If no vSwitch is available, click Create vSwitch. See Create and manage vSwitches. | |
Instance and image configurations
| Parameter | Description | Changeable |
|---|---|---|
| Billing method | The billing method for ECS instances added to the node pool: Pay-As-You-Go, Subscription, or Spot Instance.<br>- Subscription: configure Duration and optionally enable Auto renewal.<br>- Spot Instance: ACK supports spot instances with a 1-hour protection period only. After the protection period, the system checks the spot price and resource availability every 5 minutes. If the real-time price exceeds your bid or inventory is insufficient, the instance is released. See Best practices for spot instance-based node pools.<br><br> Important Billing method changes apply only to newly added nodes. Existing nodes retain their original billing method. To change the billing method of an existing node, see Change the billing method from pay-as-you-go to subscription. You cannot switch between spot instances and pay-as-you-go or subscription billing. | |
| Instance type | Select one or more ECS instance types for worker nodes. Filter by vCPU, memory, instance family, and architecture. Selecting multiple instance types improves scale-out success rates. If scale-out fails due to unavailable instance types, add more types to the pool. See ECS specification recommendations for ACK clusters and check the scalability of the node pool. | |
| Operating system | Public image or Custom image.<br>- Public image: Alibaba Cloud Linux 3 container-optimized image, ContainerOS, Alibaba Cloud Linux 3, or Ubuntu. See Operating systems.<br>- Custom image: use a custom OS image. See How do I create a custom image based on an existing ECS instance and use it to create nodes?<br> Note Alibaba Cloud Marketplace Image is in phased release. To upgrade or change the OS, see Change the OS. | |
| Security hardening | Choose one of the following options. Cannot be changed after the node pool is created.<br>- Disable: no security hardening applied.<br>- MLPS security hardening: applies Multi-Level Protection Scheme (MLPS) 2.0 level-3 baselines to Alibaba Cloud Linux 2 and Alibaba Cloud Linux 3 images. Important When MLPS security hardening is enabled, SSH root login is prohibited. Use VNC to log on from the ECS console and create regular users for SSH access. See Connect to an instance by using VNC.<br>- OS security hardening: available only for Alibaba Cloud Linux 2 or Alibaba Cloud Linux 3 images. | |
| Logon method | Set key pair or Set password.<br>- Set key pair: SSH key pair authentication, supported only for Linux instances. Specify the Logon name (root or ecs-user) and the key pair.<br>- Set password: password must be 8–30 characters and include uppercase letters, lowercase letters, digits, and special characters. Specify the Logon name (root or ecs-user) and the password.<br> Note When MLPS security hardening is selected, only Set password is supported. | |
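The password rule above (8–30 characters with all four character classes) can be pre-checked before you submit the form. A minimal sketch; the helper name and the exact special-character set are assumptions, since the console defines the authoritative rule:

```python
import re

def valid_node_password(pw: str) -> bool:
    """Check a candidate node logon password against the stated rules:
    8-30 characters, with uppercase, lowercase, digit, and special
    characters all present. (Illustrative helper; the console may
    restrict which special characters are accepted.)"""
    if not 8 <= len(pw) <= 30:
        return False
    classes = [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(c, pw) for c in classes)

print(valid_node_password("Example#2024"))   # True: all four classes present
print(valid_node_password("short1A#"))       # True: exactly 8 characters
print(valid_node_password("nouppercase1#"))  # False: missing an uppercase letter
```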
Storage configurations
| Parameter | Description | Changeable |
|---|---|---|
| System disk | Supported types: ESSD AutoPL, Enterprise SSD (ESSD), ESSD Entry, Standard SSD, and Ultra Disk. Available types depend on the instance families you select.<br>For Enterprise SSD (ESSD), set the performance level (PL). Higher PLs require larger capacity: PL2 requires more than 460 GiB; PL3 requires more than 1,260 GiB. See Capacity and PLs.<br>Encryption is available for Enterprise SSD (ESSD) only. The default service CMK is used by default; you can also use an existing BYOK CMK from KMS.<br>Select More system disk types to specify fallback disk types and improve creation success rates. | |
| Data disk | Supported types: ESSD AutoPL, Enterprise SSD (ESSD), ESSD Entry, SSD, and Ultra Disk.<br>ESSD AutoPL supports performance provision (for sustained above-baseline performance) and performance burst (for read/write spikes).<br>Enterprise SSD (ESSD): configure a custom PL. PL2 requires more than 460 GiB; PL3 requires more than 1,260 GiB.<br>Encryption is available for all data disk types. You can also create data disks from snapshots to accelerate container image loading and large language model (LLM) initialization.<br>During node creation, the last data disk is automatically formatted. The system mounts this disk at /var/lib/container, then mounts /var/lib/kubelet and /var/lib/containerd to directories on that disk. To customize mount points, modify the disk initialization configuration. See Can I mount a data disk to a custom directory in an ACK node pool?<br>Note Up to 64 data disks can be attached to an ECS instance, depending on the instance type. To check the limit for a specific type, call the DescribeInstanceTypes operation and check the maximum number of data disks in the response. | |
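The ESSD performance-level thresholds above (PL2 over 460 GiB, PL3 over 1,260 GiB) are easy to get wrong when sizing disks. A small sketch of the check, assuming for simplicity that other levels impose no extra minimum here:

```python
def essd_pl_allowed(pl: str, capacity_gib: int) -> bool:
    """Return True if the requested ESSD performance level is compatible
    with the disk capacity, per the thresholds stated above.
    (Illustrative helper; strict "more than" comparison.)"""
    minimums = {"PL2": 460, "PL3": 1260}  # GiB floors for higher PLs
    return capacity_gib > minimums.get(pl, 0)

print(essd_pl_allowed("PL3", 2000))  # True: 2000 GiB exceeds 1260 GiB
print(essd_pl_allowed("PL2", 400))   # False: PL2 needs more than 460 GiB
```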
Number of instances
| Parameter | Description | Changeable |
|---|---|---|
| Expected number of nodes | The total number of nodes the node pool maintains. Set to at least 2 to make sure cluster components run correctly. Set to 0 if you do not need nodes at creation time. See Manually scale a node pool. | |
Edit a node pool
You can update most node pool parameters after creation — including vSwitches, billing method, instance type, system disk, and auto scaling settings. For the full list of editable parameters, see the Changeable column in the parameter tables above.
Modifying a node pool does not affect nodes or applications in other node pools of the cluster.
In most cases, updated configurations apply only to newly added nodes. Exceptions: ECS tags, node labels, and taints — changes to these also apply to existing nodes.
If you have modified nodes outside of the node pool configuration (for example, directly on the ECS instance), those changes are overwritten when you update the node pool.
When switching Scaling mode:
Manual to Auto: enables auto scaling and requires configuring minimum and maximum instance counts.
Auto to Manual: disables auto scaling, sets minimum instances to 0 and maximum to 2,000, and sets Expected nodes to the current node count.
Steps:
On the Node Pools page, find the node pool and click Edit in the Actions column.
In the dialog box, modify the parameters and follow the on-screen instructions.
The node pool status shows Updating during the update and returns to Active when the update is complete.
View a node pool
Click the name of a node pool to view its details across four tabs:
Basic information: cluster, node pool, and node configuration details. If auto scaling is enabled, elastic scaling settings are also shown here.
Monitoring: CPU usage, memory usage, disk usage, and average CPU/memory per node, powered by Alibaba Cloud Prometheus.
Node management: lists all nodes in the pool. Remove, drain, schedule, or perform operations and maintenance (O&M) on individual nodes. Click Export to download the node list as a CSV file.
Scaling activities: records of recent scaling events, including instance counts after each activity and failure descriptions. For common error codes, see Error codes and solutions for scaling failures.
Delete a node pool
All nodes in the pool are removed from the cluster's API server. Review the release rules below before proceeding.
Node release behavior depends on whether Expected number of nodes is configured for the pool and the billing method of each node.
Node pool with Expected number of nodes configured:
| Billing method | What happens to the node |
|---|---|
| Pay-as-you-go | Released after the node pool is deleted |
| Subscription | Retained after the node pool is deleted |
Node pool without Expected number of nodes configured:
| Node source | What happens to the node |
|---|---|
| Manually or automatically added nodes | Retained (not released) |
| Subscription nodes | Retained (not released) |
| Other nodes | Released when the node pool is deleted |
All nodes, whether released or retained, are removed from the cluster's API server. Retained nodes continue to run as ECS instances but no longer belong to the cluster.
To release a retained subscription node, change its billing method to pay-as-you-go first, then release the ECS instance from the ECS console.
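The two tables above reduce to a small decision rule. A sketch with illustrative function and value names; either way the node is removed from the API server, so this models only the fate of the backing ECS instance:

```python
def ecs_release_action(expected_nodes_configured: bool,
                       billing: str,
                       source: str = "other") -> str:
    """Model of the release rules above.

    billing: "pay-as-you-go" or "subscription"
    source:  "manual", "auto", or "other" (how the node joined the pool;
             only consulted when Expected number of nodes is not configured)
    Returns "released" or "retained" for the backing ECS instance.
    """
    if expected_nodes_configured:
        return "released" if billing == "pay-as-you-go" else "retained"
    if billing == "subscription" or source in ("manual", "auto"):
        return "retained"
    return "released"

print(ecs_release_action(True, "pay-as-you-go"))            # released
print(ecs_release_action(True, "subscription"))             # retained
print(ecs_release_action(False, "pay-as-you-go", "manual")) # retained
```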
Steps:
(Optional) Click the node pool name. On the Overview tab, check whether Expected number of nodes is configured. A hyphen (–) means it is not configured.
On the Node Pools page, find the node pool, choose Delete from the menu in the Actions column, then confirm in the dialog box.