Managing nodes in a Kubernetes cluster becomes complex as workloads grow — different applications need different instance types, OS images, billing methods, and O&M strategies. Node pools let you organize cluster nodes into logical groups that share the same configuration, so you can apply different settings and maintenance policies to different sets of nodes within a single ACK cluster.
A single cluster can have multiple node pools with different configurations. For example:
- Mix OS images — ContainerOS, Alibaba Cloud Linux, and Windows in the same cluster
- Mix container runtimes — containerd, Docker, and Sandboxed-Container in the same cluster
- Mix billing methods — pay-as-you-go, subscription, and pay-by-preemptible-instance in the same cluster
- Enable auto scaling on multiple node pools independently
## Node pool types
ACK provides two node pool types: regular node pools and managed node pools.
### Regular node pools
A regular node pool contains one or more nodes that share the same configuration. Each node pool maps to one scaling group. When you scale the node pool, ACK uses Auto Scaling to add or remove nodes.
Best for: Teams that want full control over node O&M. Patching, repairs, and updates are all triggered manually.
Some system components are installed in the default node pool. Scaling the default node pool automatically may destabilize those components. If you need auto scaling, create a dedicated node pool with auto scaling enabled rather than using the default node pool.
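Because each node pool maps to one scaling group, it is often useful to see which pool a given node belongs to. A minimal sketch with kubectl, assuming ACK's `alibabacloud.com/nodepool-id` node label is present (the node pool ID below is a placeholder):

```shell
# Show every node alongside the ID of the node pool it belongs to.
# Assumes kubectl is already configured for the ACK cluster.
kubectl get nodes -L alibabacloud.com/nodepool-id

# Filter down to the nodes of a single node pool
# (replace np-example-id with a real node pool ID).
kubectl get nodes -l alibabacloud.com/nodepool-id=np-example-id
```

This is read-only and safe to run; it is a quick way to verify that a scale-out landed nodes in the pool you expected.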
### Managed node pools
A managed node pool automates routine O&M tasks — including CVE vulnerability patching, node repair, component updates, and minor kubelet version updates. You set a maintenance window, and ACK performs these operations automatically within that window.
Best for: Teams that want to reduce operational overhead and let ACK handle day-to-day node maintenance. Some complex node anomalies may still require manual intervention.
For more information, see Overview of managed node pools.
## Choose a node pool type
| | Regular node pool | Managed node pool |
|---|---|---|
| O&M responsibility | You | ACK (partial) |
| Maintenance window | Not required | Required |
| Node repair | Manual | Automatic |
| CVE vulnerability patching | Manual | Automatic |
| Component update | Manual | Automatic |
| Minor kubelet version update | Manual | Automatic |
| Fast ContainerOS scale-out | Not supported | Supported — at 90% node readiness, 1,000 ContainerOS nodes are ready in 53 s, vs. 330 s for CentOS |
| Supported OS images | ContainerOS, Alibaba Cloud Linux, Red Hat, Ubuntu, Windows | ContainerOS, Alibaba Cloud Linux, Red Hat, Ubuntu |
CVE vulnerability patching in managed node pools is an advanced feature provided by Security Center. It requires Security Center Enterprise Edition or higher. ACK does not charge additional fees for this feature. For more information, see Functions and features.
For more information about supported OS images and limits, see OS images.
Constraints:

- Remove all nodes from a node pool before deleting it.
- Auto scaling can only be enabled when creating a node pool. Once enabled, the node pool becomes an elastic node pool with the following characteristics:
  - Manual scale-out is not supported.
  - The pay-by-preemptible-instance billing method is supported.
  - Standard CPU instances, GPU-accelerated instances, and shared GPU-accelerated instances are supported in scaling activities.
- Disabling auto scaling converts an elastic node pool back to a regular node pool. Converting a regular node pool to an elastic node pool is not supported.
- For more information, see Enable node auto scaling.
## Supported operations
| Operation | Description |
|---|---|
| Create and manage node pools | Create a node pool and specify its configuration. |
| Modify a node pool | Modify node pool configuration. Changes apply only to newly added nodes in most cases. Exceptions include label and taint synchronization, node upgrades, node repair, vulnerability patching, and kubelet configuration — these changes also apply to existing nodes. |
| Manually scale a node pool | Adjust the desired node count. Increasing the count adds nodes; decreasing it releases nodes in descending order of creation time. |
| Add existing ECS instances to an ACK cluster | Add existing nodes that don't belong to any cluster to a node pool. Limits apply — see the Limits section of that topic. |
| Remove a node | Remove one or more nodes from a node pool. Removed nodes no longer belong to the cluster or node pool. You can drain the node and release the instance before removal. |
| Update a node pool | Update the OS image, runtime, and kubelet for all nodes in a node pool. Update in batches to minimize impact on running workloads. Managed node pools can update automatically within the maintenance window. |
| Repair nodes | Repair abnormal nodes one at a time. Managed node pools repair nodes automatically. |
| Patch OS CVE vulnerabilities for node pools | Patch vulnerabilities across nodes in a node pool. Patch in batches to minimize workload impact. Managed node pools patch automatically within the maintenance window. |
| Customize the kubelet configurations of a node pool | Modify kubelet configuration for all nodes in a node pool. Changes also apply to nodes added later. |
| Enable node auto scaling | Configure Auto Scaling to add standard CPU instances, GPU-accelerated instances, or preemptible instances based on actual load and your scaling policy. Supports multiple zones, instance types, and scaling modes. |
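The manual scaling operation above maps to the ACK OpenAPI, which can be called from the aliyun CLI. A hedged sketch, assuming the ScaleClusterNodePool endpoint shape shown here — the cluster ID, node pool ID, and request body should all be checked against the current API reference before use:

```shell
# Sketch: set the desired node count of a node pool through the ACK
# OpenAPI via the aliyun CLI. The cluster ID and node pool ID are
# placeholders; verify the request path and body fields against the
# current ScaleClusterNodePool API reference.
aliyun cs POST /clusters/c1234example/nodepools/np5678example \
  --header "Content-Type=application/json" \
  --body '{"count": 5}'
```

Increasing `count` adds nodes through the scaling group; decreasing it releases nodes, newest first, as described in the table above.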
## Billing
Node pools are free. You pay only for the cloud resources provisioned in the node pool — primarily ECS instances and scaling groups.
- For ECS billing details, see Billing of ECS.
- For Auto Scaling billing details, see Billing of Auto Scaling.
To change the billing method of existing nodes in a node pool (for example, from pay-as-you-go to subscription), use the ECS console. For more information, see Change the billing method of an instance from pay-as-you-go to subscription.
## Key concepts
The following terms describe the underlying infrastructure that node pools rely on.
| Term | Description |
|---|---|
| Scaling group | A collection of ECS instances used for auto scaling and management. Each node pool maps to one scaling group. All resources in a node pool — ECS instances and scaling groups — must belong to the same Alibaba Cloud account. **Important:** Always configure and manage nodes through node pools, not directly through scaling groups. Direct changes to scaling groups may break node pool features. |
| Scaling configuration | Defines the configuration template for ECS instances created during scale-out. When auto scaling triggers a scale-out, new instances are created based on this configuration. **Important:** Do not modify the scaling configuration through the Auto Scaling console or API. Manage node configuration through node pools only. |
| Scaling activity | The operation triggered when a node pool scales in or out. The system completes the operation automatically and records the activity. View historical scaling activities in the console. |
| Replace system disks | Some node pool operations — such as adding existing nodes or updating OS images — initialize nodes by replacing their system disks. After replacement, IaaS attributes (node name, instance ID, IP address) remain unchanged, but system disk data is deleted. Data disks are not affected. **Important:** Store persistent data on data disks, not system disks. |
| In-place upgrade | An alternative to replacing system disks. In-place upgrades update components without replacing the system disk, reinitializing the node, or destroying node data. |
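Because ACK labels every node with the ID of its node pool, a workload can be pinned to one pool with a plain nodeSelector. A minimal sketch — the deployment name, image, and node pool ID are placeholders for illustration:

```shell
# Pin a workload to one node pool via the node pool ID label that ACK
# sets on each node. Name, image, and pool ID are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      nodeSelector:
        alibabacloud.com/nodepool-id: np-example-id
      containers:
      - name: app
        image: nginx:stable
EOF
```

This keeps, for example, GPU workloads on a GPU node pool and everything else on standard CPU pools within the same cluster.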
## What's next
- Create and manage node pools — configure your first node pool
- Manually scale a node pool — add or remove nodes on demand
- Add existing ECS nodes to an ACK cluster — bring existing instances into a node pool
- Remove a node — safely remove nodes with drain and release options
- Node pool O&M — upgrade, repair, and patch node pools
- Best practices for nodes and node pools — deployment sets, preemptible instance node pools, and more
- Schedule application pods to a specific node pool — target workloads to specific node pools
- Migrate the container runtime from Docker to containerd — required for clusters running Kubernetes 1.24 or later
- FAQ about nodes and node pools — troubleshoot common issues