
Container Service for Kubernetes:Create a node pool

Last Updated:Dec 28, 2023

Nodes in Kubernetes are physical or virtual machines that run containerized applications. A node pool consists of nodes that have the same configuration or serve the same purpose. You can create node pools to easily manage and maintain nodes. This topic describes how to create a regular node pool or a managed node pool in a Container Service for Kubernetes (ACK) cluster.

Node pool types

  • Regular node pool: You can use a regular node pool to manage a set of nodes that have the same configuration, such as specifications, labels, and taints. For more information, see Node pool overview.

  • Managed node pool: Managed node pools provide automated O&M features, such as automatic CVE vulnerability patching and automatic node repair. For more information, see Overview of managed node pools.

    Note

    Only ACK Pro clusters support managed node pools.

For more information about the difference between managed node pools and regular node pools, see Comparison between managed node pools and regular node pools.

Procedure

Note

Creating a node pool in a cluster does not affect the nodes and applications deployed in other node pools of the cluster.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Nodes > Node Pools in the left-side navigation pane.

  3. On the Node Pools page, click Create Node Pool in the upper-right corner. In the Create Node Pool dialog box, configure the node pool parameters.

    Basic settings

    Parameter

    Description

    Node Pool Name

    Specify a node pool name.

    Managed node pool settings

    Managed Node Pool

    Specify whether to enable the managed node pool feature.

    Managed node pools are O&M-free node pools provided by ACK. Managed node pools support CVE vulnerability patching and auto recovery. They can efficiently reduce your O&M work and enhance node security. For more information, see Overview of managed node pools.

    Auto Recovery Rule

    This parameter is available after you select Managed Node Pool.

After you select Restart Faulty Node, the system drains nodes in the NotReady state and then automatically restarts the relevant components to repair them.

    Auto Update Rule

    This parameter is available after you select Managed Node Pool.

    After you select Automatically Update Kubelet, the system automatically updates the kubelet when a new version is available. For more information, see Node pool updates.

    Auto CVE Patching

    This parameter is available after you select Managed Node Pool.

    You can configure ACK to automatically patch high-risk, medium-risk, and low-risk vulnerabilities. For more information, see Auto repair and CVE patching.

    ACK may need to restart the nodes after patching certain vulnerabilities so that the patching can take effect. After you select Restart Nodes to Patch Vulnerabilities, the system restarts nodes on demand. If you do not select this option, you need to manually restart the nodes after the vulnerabilities are patched.

    Maintenance Window

    Image updates, runtime updates, and Kubernetes version updates are automatically performed during the maintenance window. For more information, see Overview of managed node pools.

    Region

    By default, the region in which the cluster resides is selected. You cannot change the region.

    Confidential Computing

    Important
    • To use confidential computing, submit a ticket to apply to be added to the whitelist.

    • This parameter is available when you select containerd for the Container Runtime parameter.

    Specify whether to enable confidential computing. ACK provides a cloud-native and all-in-one confidential computing solution based on hardware encryption technologies. Confidential computing ensures data security, integrity, and confidentiality. It simplifies the development and delivery of trusted or confidential applications at lower costs. For more information, see TEE-based confidential computing.

    Container Runtime

    Specify the container runtime based on the Kubernetes version. The following list describes the Kubernetes versions supported by different container runtimes:

    • containerd: containerd is recommended for all Kubernetes versions.

    • Sandboxed-Container: supports Kubernetes 1.24 and earlier.

    • Docker: supports Kubernetes 1.22 and earlier.

    For more information, see Comparison of Docker, containerd, and Sandboxed-Container.

    Network settings

    VPC

    By default, the virtual private cloud (VPC) in which the cluster resides is selected. You cannot change the VPC.

    vSwitch

    When the node pool is being scaled out, new nodes are created in the zones of the selected vSwitches based on the policy that you select for the Scaling Policy parameter. You can select vSwitches in the zones that you want to use.

    If no vSwitch is available, click Create vSwitch to create one. For more information, see Create and manage a vSwitch.

    Auto Scaling

    Specify whether to enable auto scaling. You can use the auto scaling feature of ACK to dynamically scale computing resources for your business based on business requirements and scaling policies in a cost-effective manner. For more information, see What is Auto Scaling?

    Before you enable auto scaling, you must configure auto scaling settings. For more information, see Step 1: Enable auto scaling for the cluster.

    Billing Method

    The following billing methods are supported for nodes in a node pool: pay-as-you-go, subscription, and preemptible instances.

    • If you select the pay-as-you-go billing method, Elastic Compute Service (ECS) instances in the node pool are billed on a pay-as-you-go basis. You are not charged for using the node pool.

    • If you select the subscription billing method, you must set the Duration and Auto Renewal parameters.

  • If you select the preemptible instances billing method, you must also set the following parameter.

      Upper Price Limit of Current Instance Spec: If the real-time market price of an instance type that you select is lower than the value of this parameter, a preemptible instance of this instance type is created. After the protection period (1 hour) ends, the system checks the spot price and resource availability of the instance type every 5 minutes. If the real-time market price exceeds your bid price or the resource inventory is insufficient, the preemptible instance is released.

      ACK supports only preemptible instances with a protection period. For more information, see Overview and Best practices for preemptible instance-based node pools.

    Important
    • If you change the billing method of a node pool, the change takes effect only on newly added nodes. The existing nodes in the node pool still use the original billing method. For more information about how to change the billing method of existing nodes in a node pool, see Change the billing method of an ECS instance from pay-as-you-go to subscription.

    • To ensure that all nodes use the same billing method, ACK does not allow you to change the billing method of a node pool from pay-as-you-go or subscription to preemptible instances, or change the billing method of a node pool from preemptible instances to pay-as-you-go or subscription.
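The preemptible-instance lifecycle described above (a spot-price and inventory check every 5 minutes after the 1-hour protection period) can be pictured as a small decision function. This is an illustrative sketch, not an ACK API: `should_release` and the integer-cent prices are invented for the example.

```shell
#!/bin/sh
# Hypothetical sketch of the check ACK performs every 5 minutes after the
# 1-hour protection period ends. Prices are integer cents so that POSIX sh
# integer arithmetic suffices; none of these names exist in ACK.
should_release() {
  market=$1  # current spot market price
  bid=$2     # your Upper Price Limit of Current Instance Spec
  stock=$3   # remaining inventory of the instance type
  if [ "$market" -gt "$bid" ] || [ "$stock" -eq 0 ]; then
    echo "release"   # bid exceeded or out of stock: instance is reclaimed
  else
    echo "keep"      # instance keeps running
  fi
}

should_release 120 100 5   # prints "release"
```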

    Instance settings

    Instance Type

    You can select multiple instance types. You can filter instance types by vCPU, memory, architecture, or category.

    Note

    The instance types that you select are displayed in the Selected Types section.

    When the node pool is scaled out, new nodes are created from the instance types that you select for the Instance Type parameter, and the scaling policy of the node pool determines which of those types are used first. Select multiple instance types to improve the success rate of scale-out operations.

    If the node pool fails to be scaled out because the instance types are unavailable or the instances are out of stock, you can specify more instance types for the node pool. The ACK console automatically evaluates the scalability of the node pool. You can view the scalability level when you create the node pool or after you create the node pool.

    Note

    ARM-based ECS instances support only images for ARM. For more information about ARM-based node pools, see Configure an ARM-based node pool.

    Selected Types

    The selected instance types are displayed.

    System Disk

    ESSD AutoPL, Enhanced SSD (ESSD), Standard SSD, and Ultra Disk are supported.

    The types of system disks that you can select depend on the instance types that you select. Disk types that are not displayed in the drop-down list are not supported by the instance types that you select. For more information about the disk types supported by different instance types, see Instance families.

    Note
    • If you select Enhanced SSD (ESSD) as the system disk type, you can set a custom performance level for the system disk. You can select higher performance levels for ESSDs with larger storage capacities. For example, you can select performance level 2 for an ESSD with a storage capacity of more than 460 GiB. You can select performance level 3 for an ESSD with a storage capacity of more than 1,260 GiB. For more information, see Capacities and performance levels.

    • You can select Encryption only if you set the system disk type to Enhanced SSD (ESSD). By default, the default service CMK is used to encrypt the system disk. You can also use an existing CMK generated by using BYOK in KMS.

    You can select More System Disk Types and select a disk type other than the current one in the System Disk section to improve the success rate of system disk creation. The system will attempt to create a system disk based on the specified disk types in sequence.

    Mount Data Disk

    ESSD AutoPL, Enhanced SSD (ESSD), SSD, and Ultra Disk are supported.

    The disk types that you can select depend on the instance types that you select. Disk types that are not displayed in the drop-down list are not supported by the instance types that you select. For more information about the disk types supported by different instance types, see Instance families.

    Note
    • If you select Enhanced SSD (ESSD) as the data disk type, you can set a custom performance level for the data disk. You can select higher performance levels for ESSDs with larger storage capacities. For example, you can select performance level 2 for an ESSD with a storage capacity of more than 460 GiB. You can select performance level 3 for an ESSD with a storage capacity of more than 1,260 GiB. For more information, see Capacities and performance levels.

    • You can select Encryption for all disk types when you specify the type of data disk. By default, the default service CMK is used to encrypt the data disk. You can also use an existing CMK generated by using BYOK in KMS.

    • The maximum number of data disks that can be mounted depends on the instance types that you select. You can view the selected data disks and the remaining number of data disks that you can mount on the right side of Mount Data Disk.

    Expected Nodes

    The number of nodes that you want the node pool to maintain. You can set the Expected Nodes parameter to adjust the number of nodes in the node pool. If you do not want to create nodes in the node pool, set this parameter to 0. For more information, see Scale a node pool.

    Operating System

    ACK supports images for the following operating systems:

    • Alibaba Cloud Linux 3 (the default OS used by ACK)

    • Alibaba Cloud Linux 3 for ARM

    • Alibaba Cloud Linux 2

    • Alibaba Cloud Linux UEFI 2

    • ContainerOS

    • Windows

    • Windows Core

    • CentOS

    For more information, see Overview of OS images.

    Note
    • After you change the OS image of the node pool, the change takes effect only on newly added nodes. The existing nodes in the node pool still use the original OS image. For more information about how to update the OS image of an existing node, see Node pool updates.

    • To ensure that all nodes in the node pool use the same OS image, ACK allows you to only update the node OS image to the latest version. ACK does not allow you to change the type of OS image.

    Logon settings

    Logon Type

    Valid values: Key Pair, Password, and Later.

    Note

    If you select Reinforcement based on classified protection for the Security Reinforcement parameter, only the Key Pair option is supported.

    • Configure the logon type when you create the node pool:

      • Key Pair: Alibaba Cloud SSH key pairs provide a secure and convenient method to log on to ECS instances. An SSH key pair consists of a public key and a private key. SSH key pairs support only Linux instances. For more information, see Overview.

      • Password: The password must be 8 to 30 characters in length, and must contain uppercase letters, lowercase letters, digits, and special characters.

    • Configure the logon type after you create the node pool: For more information, see Bind an SSH key pair to an instance and Reset the logon password of an instance.
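The password rule above (8 to 30 characters, with uppercase letters, lowercase letters, digits, and special characters) can be checked locally with a short sketch. `valid_password` is a helper written for this example, not part of ACK or ECS.

```shell
#!/bin/sh
# Illustrative check of the node logon password rules described above.
valid_password() {
  pw=$1
  len=${#pw}
  # 8 to 30 characters in length
  [ "$len" -ge 8 ] && [ "$len" -le 30 ] || return 1
  # must contain uppercase, lowercase, digit, and special characters
  printf '%s' "$pw" | grep -q '[A-Z]'         || return 1
  printf '%s' "$pw" | grep -q '[a-z]'         || return 1
  printf '%s' "$pw" | grep -q '[0-9]'         || return 1
  printf '%s' "$pw" | grep -q '[^A-Za-z0-9]'  || return 1
  return 0
}
```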

    Username

    If you select Key Pair or Password for Logon Type, you must select root or ecs-user as the username.

    Public IP

    Specify whether to assign a public IPv4 address to each node. If you clear the check box, no public IP address is allocated. If you select the check box, you must also set the Bandwidth Billing Method and Peak Bandwidth parameters.

    Note

    This parameter takes effect only on newly added nodes and does not take effect on existing nodes. If you want to enable an existing node to access the Internet, you must create an EIP and associate the EIP with the node. For more information, see Associate an EIP with an ECS instance.

    CloudMonitor Agent

    Specify whether to install the CloudMonitor agent. After you install the CloudMonitor agent on ECS nodes, you can view the monitoring information about the nodes in the CloudMonitor console.

    Advanced settings

    Parameter

    Description

    ECS Tags

    Add tags to the ECS instances that are automatically added during auto scaling. Tag keys must be unique. A key cannot exceed 128 characters in length. Keys and values cannot start with aliyun or acs:. Keys and values cannot contain https:// or http://.

    An ECS instance can have at most 20 tags. To increase the quota limit, submit an application in the Quota Center console. The following tags are automatically added to an ECS node by ACK and Auto Scaling. Therefore, you can add 17 tags to an ECS node.

    • The following two ECS tags are added by ACK:

      • ack.aliyun.com:<Cluster ID>

      • ack.alibabacloud.com/nodepool-id:<Node pool ID>

    • The following ECS tag is added by Auto Scaling: acs:autoscaling:scalingGroupId:<Scaling group ID>.

    Note
    • After you enable auto scaling, the following ECS tags are added to the node pool by default: k8s.io/cluster-autoscaler:true and k8s.aliyun.com:true.

    • The auto scaling component simulates scale-out activities based on node labels and taints. For this purpose, the format of node labels is changed to k8s.io/cluster-autoscaler/node-template/label/Label key:Label value and the format of taints is changed to k8s.io/cluster-autoscaler/node-template/taint/Taint key/Taint value:Taint effect.
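The rewriting described in the note can be illustrated with two local helpers. `as_label_tag` and `as_taint_tag` are invented names; only the tag formats they emit follow the note above.

```shell
#!/bin/sh
# Sketch of how a node label and a taint map to the ECS tags that the
# auto scaling component reads when it simulates scale-out activities.
as_label_tag() {
  # $1=label key  $2=label value
  echo "k8s.io/cluster-autoscaler/node-template/label/$1:$2"
}
as_taint_tag() {
  # $1=taint key  $2=taint value  $3=taint effect
  echo "k8s.io/cluster-autoscaler/node-template/taint/$1/$2:$3"
}

as_label_tag workload gpu
# prints "k8s.io/cluster-autoscaler/node-template/label/workload:gpu"
```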

    Taints

    Add taints to nodes. A taint consists of a key, a value, and an effect. A taint key can be prefixed. If you want to specify a prefixed taint key, add a forward slash (/) between the prefix and the remaining content of the key. For more information, see Taints and tolerations. The following limits apply to taints:

    • Key: A key must be 1 to 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). A key must start and end with a letter or digit.

      If you want to specify a prefixed taint key, the prefix must be a subdomain name. A subdomain name consists of DNS labels that are separated by periods (.), and cannot exceed 253 characters in length. It must end with a forward slash (/). For more information about subdomain names, see DNS subdomain names.

    • Value: A value cannot exceed 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). A value must start and end with a letter or digit. You can also leave a value empty.

    • You can specify the following effects for a taint: NoSchedule, NoExecute, and PreferNoSchedule.

      • NoSchedule: If a node has a taint whose effect is NoSchedule, the system does not schedule pods to the node.

      • NoExecute: Pods that do not tolerate this taint are evicted after this taint is added to a node. Pods that tolerate this taint are not evicted after this taint is added to a node.

      • PreferNoSchedule: The system attempts to avoid scheduling pods to nodes with taints that are not tolerated by the pods.
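The key rules above map to a simple pattern. The following sketch validates an unprefixed taint key locally; `valid_taint_key` is an illustrative helper, not a kubectl or ACK command.

```shell
#!/bin/sh
# Illustrative validation of an unprefixed taint key: 1 to 63 characters,
# letters, digits, hyphens (-), underscores (_), and periods (.), and it
# must start and end with a letter or digit.
valid_taint_key() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$'
}
```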

    Node Label

    Add labels to nodes. A label is a key-value pair. A label key can be prefixed. If you want to specify a prefixed label key, add a forward slash (/) between the prefix and the remaining content of the key. The following limits apply to labels:

    • The key of a label must be 1 to 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). It must start and end with a letter or a digit.

      If you want to specify a prefixed label key, the prefix must be a subdomain name. A subdomain name consists of DNS labels that are separated by periods (.), and cannot exceed 253 characters in length. It must end with a forward slash (/). For more information about subdomain names, see DNS subdomain names.

      The following prefixes are used by key Kubernetes components and cannot be used in node labels:

      • kubernetes.io/

      • k8s.io/

      • Prefixes that end with kubernetes.io/ or k8s.io/. Example: test.kubernetes.io/.

        However, you can still use the following prefixes:

        • kubelet.kubernetes.io/

        • node.kubernetes.io/

        • Prefixes that end with kubelet.kubernetes.io/.

        • Prefixes that end with node.kubernetes.io/.

    • The value of a label cannot exceed 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). The value of a label can be empty or start and end with a letter or a digit.

    • If you select Set to Unschedulable, nodes are unschedulable when they are added to the cluster. You can set an existing node to schedulable on the Nodes page in the ACK console.
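The reserved-prefix rule for label keys can be sketched as a small case statement. `reserved_prefix` is a helper invented for this example; it returns success when a prefix is reserved and therefore rejected.

```shell
#!/bin/sh
# Illustrative check of the reserved label-key prefixes described above:
# prefixes ending in kubernetes.io/ or k8s.io/ are reserved, except those
# ending in kubelet.kubernetes.io/ or node.kubernetes.io/.
reserved_prefix() {
  case $1 in
    *kubelet.kubernetes.io/|*node.kubernetes.io/) return 1 ;;  # allowed
    *kubernetes.io/|*k8s.io/)                     return 0 ;;  # reserved
    *)                                            return 1 ;;  # allowed
  esac
}
```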

    Scaling Policy

    • Priority: The system scales the node pool based on the priorities of the vSwitches that you select for the node pool. The vSwitches that you select are displayed in descending order of priority. If Auto Scaling fails to create ECS instances in the zone of the vSwitch with the highest priority, Auto Scaling attempts to create ECS instances in the zone of the vSwitch with a lower priority.

    • Cost Optimization: The system creates instances based on the vCPU unit prices in ascending order. Preemptible instances are preferentially created when multiple preemptible instance types are specified in the scaling configurations. If preemptible instances cannot be created due to reasons such as insufficient stocks, the system attempts to create pay-as-you-go instances.

      If you select Preemptible Instance for the Billing Method parameter, you must set the following parameters:

      • Percentage of Pay-as-you-go Instances: Specify the percentage of pay-as-you-go instances in the node pool. Valid values: 0 to 100.

      • Enable Supplemental Preemptible Instances: After you enable this feature, the system notifies Auto Scaling 5 minutes before it reclaims preemptible instances, and Auto Scaling automatically creates the same number of new preemptible instances before the existing ones are reclaimed.

      • Enable Supplemental Pay-as-you-go Instances: After you enable this feature, Auto Scaling attempts to create pay-as-you-go ECS instances to meet the scaling requirement if Auto Scaling fails to create preemptible instances for reasons such as that the unit price is too high or preemptible instances are out of stock.

    • Distribution Balancing: The even distribution policy takes effect only when you select multiple vSwitches. This policy ensures that ECS instances are evenly distributed among the zones (the vSwitches) of the scaling group. If ECS instances are unevenly distributed across the zones due to reasons such as insufficient stocks, you can perform a rebalancing operation.

    Important

    You cannot change the scaling policy of a node pool after the node pool is created.

    CPU Policy

    The CPU management policy for kubelet nodes.

    • None: The default CPU management policy.

    • Static: This policy allows pods with specific resource characteristics on the node to be granted enhanced CPU affinity and exclusivity.

    For more information, see CPU management policies.

    Resource Group

    The resource group to which the cluster belongs. Each resource can belong to only one resource group. You can regard a resource group as a project, an application, or an organization based on your business scenarios. For more information, see Resource groups.

    Deployment Set

    Important
    • To use the deployment set feature, apply to be added to the whitelist in Quota Center.

    • You cannot change the deployment set used by a node pool after the node pool is created.

    • After you select a deployment set, the maximum number of nodes that can be created in the node pool is limited. The default upper limit of nodes that you can create in a deployment set is calculated based on the following formula: Default upper limit = 20 × Number of zones. The number of zones is determined by the vSwitches that you select for the node pool. Exercise caution when you select the deployment set. To avoid node creation failures, make sure that the ECS quota of the deployment set that you select is sufficient.

    You must create a deployment set in the ECS console before you can select the deployment set for a node pool in the ACK console. For more information about how to create a deployment set, see Create a deployment set.

    You can use a deployment set to distribute your ECS instances to different physical servers to ensure high service availability and implement underlying disaster recovery. If you specify a deployment set when you create ECS instances, the instances are created and distributed based on the deployment strategy that you preset for the deployment set within the specified region. For more information, see Best practices for associating deployment sets with node pools.
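The default node cap mentioned in the note above (20 nodes per zone covered by the node pool's vSwitches) works out as in the following sketch; `deployment_set_cap` is a name invented for this example.

```shell
#!/bin/sh
# Default upper limit of nodes in a deployment set = 20 x number of zones,
# where the zones are determined by the vSwitches selected for the node pool.
deployment_set_cap() {
  zones=$1
  echo $((20 * zones))
}

deployment_set_cap 3   # prints "60"
```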

    Custom Security Group

    You can select Basic Security Group or Advanced Security Group. For more information about security groups, see Overview.

    Important
    • To use custom security groups, apply to be added to the whitelist in Quota Center.

    • The security groups that you select must be of the same type (basic security group or advanced security group).

    • Each ECS instance can be added to at most five security groups. Make sure that the security group quota is sufficient.

    • You cannot change the security groups of a node pool when you modify the node pool.

    • If you select an existing security group, the system does not automatically add additional rules to the security group. This may cause errors when you access the nodes in the cluster. To prevent access failures, you must manually configure security group rules. For more information about how to manage security group rules, see Configure security group rules to enforce access control on ACK clusters.

    Custom Image

    If you select a custom image, the default image is replaced by the custom image. For more information, see Use a custom image to create an ACK cluster.

    RDS Whitelist

    Click Select RDS Instance to add node IP addresses to the whitelist of an ApsaraDB RDS instance.

    Custom Node Name

    Specify whether to use a custom node name. If you choose to use a custom node name, the name of the node, the name of the ECS instance, and the hostname of the ECS instance are changed. For Windows instances, the node name and the ECS instance name are changed, but the hostname is handled differently, as described in the following rules.

    A custom node name consists of a prefix, an IP substring, and a suffix. The prefix is required and the suffix is optional for a custom node name.

    • A custom node name must be 2 to 64 characters in length. The prefix and suffix can contain letters, digits, hyphens (-), and periods (.). The prefix and suffix must start with a letter and cannot end with a hyphen (-) or period (.). The prefix and suffix cannot contain consecutive hyphens (-) or periods (.).

    • For a Windows instance that uses a custom node name, the hostname of the ECS instance is fixed to the IP address of the node. In the hostname, hyphens (-) are used to replace the periods (.) in the IP address. The hostname does not include the prefix or suffix.

    For example, the node IP address is 192.168.xx.xx, the prefix is aliyun.com, and the suffix is test.

    • If the node runs Linux, the name of the node, the name of the ECS instance, and the hostname of the ECS instance are aliyun.com192.168.xx.xxtest.

    • If the node runs Windows, the hostname of the ECS instance is 192-168-xx-xx, and the names of the node and ECS instance are aliyun.com192.168.xx.xxtest.
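The naming rules above can be sketched with two local helpers. `node_name` and `windows_hostname` are invented for this example; they only mirror the concatenation and dot-to-hyphen replacement described above.

```shell
#!/bin/sh
# Sketch of the custom node name rules: the name is prefix + node IP +
# suffix, and on Windows the hostname is the node IP with the periods
# replaced by hyphens (no prefix or suffix).
node_name() {
  # $1=prefix (required)  $2=node IP  $3=suffix (may be empty)
  echo "$1$2$3"
}
windows_hostname() {
  echo "$1" | tr '.' '-'
}

node_name aliyun.com 192.168.1.10 test   # prints "aliyun.com192.168.1.10test"
```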

    Pre-defined Custom Data

    To use this feature, submit an application in the Quota Center console.

    Nodes automatically run predefined scripts before they are added to the cluster. For more information about user-data scripts, see User-data scripts.

    For example, if you enter echo "hello world", a node runs the following script:

    #!/bin/bash
    echo "hello world"
    [Node initialization script]

    User Data

    Nodes automatically run user-data scripts after they are added to the cluster. For more information about user-data scripts, see User-data scripts.

    For example, if you enter echo "hello world", a node runs the following script:

    #!/bin/bash
    [Node initialization script]
    echo "hello world"
    Note

    After you create a cluster or add nodes, the execution of the user-data script on a node may fail. We recommend that you log on to the node and run the grep cloud-init /var/log/messages command to view the execution log and check whether the script succeeded on the node.

    Private Pool Type

    Valid values: Open, Do Not Use, and Specified.

    • Open: The system automatically matches an open private pool. If no match is found, resources in the public pool are used.

    • Do Not Use: No private pool is used. Only resources in the public pool are used.

    • Specified: Specify a private pool by ID. If the specified private pool is unavailable, ECS instances fail to start up.

    For more information, see Private pools.

  4. Click Confirm Order.

    If the Status column of the node pool in the node pool list displays Initializing, the node pool is being created. After the node pool is created, the Status column of the node pool displays Active.

What to do next

After the node pool is created, you can click an action in the Actions column of the node pool or click More in the Actions column to perform one of the following operations.

Action

Description

References

Details

View the details of the node pool.

None

Edit

Modify the configuration of the node pool. For example, you can modify the vSwitch, managed node pool settings, billing method, instance type, or auto scaling setting of the node pool.

Modify a node pool

Monitor

View basic monitoring information about ECS instances collected by Managed Service for Prometheus.

Monitored nodes

Scale

Adjust the expected number of nodes to scale the node pool. This helps reduce costs.

Scale a node pool

Configure Managed Node Pool

Configure managed node pool settings, such as the auto recovery rule, auto update rule, and auto CVE vulnerability patching.

Basic settings

Add Existing Node

Automatically or manually add existing ECS instances to the cluster.

Add existing ECS instances to an ACK cluster

Clone

Create a node pool that contains the expected number of nodes by cloning the configuration of the current node pool.

None

Node Repair

When nodes in a managed node pool encounter errors, ACK automatically repairs them.

Auto repair

CVE Patching

Patch high-risk CVE vulnerabilities in nodes with a few clicks.

CVE patching

Configure kubelet

Modify the kubelet configuration.

Customize the kubelet configuration of a node pool

Upgrade

Update the kubelet, operating system, or container runtime on demand.

Node pool updates

Delete

Delete the current node pool to save costs.

Delete a node pool
