Container Service for Kubernetes (ACK) allows you to create node pools and manage nodes in a cluster by node pool. For example, you can centrally manage the labels and taints of the nodes in a node pool. This topic describes how to create a node pool in an ACK cluster and how to adjust the number of nodes in a node pool.

Prerequisites

An ACK cluster whose Kubernetes version is later than 1.9 is created. For more information, see Create an ACK managed cluster.
Notice
  • Make sure that you have a sufficient node quota in the cluster. To increase the node quota, Submit a ticket. For more information about the resource quotas related to ACK clusters, see Limits.
  • When you add an existing Elastic Compute Service (ECS) instance to a node pool, make sure that the ECS instance can access the Internet, for example, by associating an elastic IP address (EIP) with the instance or by configuring a NAT gateway for the virtual private cloud (VPC) where the instance is deployed. Otherwise, you cannot add the ECS instance to the node pool.

Background information

ACK provides two types of node pools: regular node pools and managed node pools. Regular node pools include the default node pool and custom node pools. You can enable auto scaling for custom node pools and managed node pools. For more information, see Node pool overview.

Considerations

Some system components are installed in the default node pool. When the system automatically scales the default node pool, the system components may become unstable. If you want to use the auto scaling feature, we recommend that you create a new node pool that has auto scaling enabled.

Create a node pool

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
  4. In the left-side navigation pane of the details page, choose Nodes > Node Pools.
  5. In the upper-right corner of the Node Pools page, click Create Node Pool.
    In the upper-right corner of the Node Pools page, you can click Create Managed Node Pool to create a managed node pool, or click Configure Auto Scaling to create a node pool that has auto scaling enabled.
  6. In the Create Node Pool dialog box, configure the node pool.
    For more information about the parameters, see Create an ACK managed cluster. The following list describes some of the parameters:
    • Expected Nodes: The number of nodes that you want to keep in the node pool. You can change the value of this parameter to adjust the number of nodes in the node pool. If you do not want to add nodes to the node pool, set this parameter to 0. For more information, see Modify the expected number of nodes in a node pool.
    • Billing Method: You can select Pay-As-You-Go, Subscription, or Preemptible Instance. For more information, see Instance purchasing options.
    • Auto Scaling: Select Enable Auto Scaling to enable auto scaling for the node pool. For more information, see Auto scaling of nodes.
    • Operating System: Select an operating system for the nodes in the node pool. You can select CentOS, Alibaba Cloud Linux, or Windows.
    • Public IP: If you select Assign a Public IPv4 Address to Each Node, public IPv4 addresses are assigned to the nodes in the node pool. You can connect to the nodes by using the assigned IP addresses. For more information about public IP addresses, see Public IP addresses.
      Notice If you select Assign a Public IPv4 Address to Each Node, do not select Associate EIP when you configure auto scaling. Otherwise, nodes cannot be added to the node pool.
    • ECS Label: Add labels to the ECS instances.
    • Node Label: Add labels to the nodes in the node pool.
      Note If you select Set New Nodes to Unschedulable, nodes that are newly added to the cluster are marked as unschedulable. You can go to the Nodes page and set the nodes as schedulable, or uncordon them with kubectl as shown in the example at the end of this procedure.
    • Scaling Mode: You can select Standard or Swift.
      • Standard: the standard mode. Auto scaling is implemented by creating and releasing ECS instances based on resource requests and usage.
      • Swift: the swift mode. Auto scaling is implemented by creating, stopping, and starting ECS instances. This mode accelerates scaling activities.
      Note
      • The Scaling Mode parameter is available only if you select Enable Auto Scaling.
      • If a stopped ECS instance fails to be restarted in swift mode, the ECS instance is not released. You can manually release the ECS instance.
    • Scaling Policy: You can select one of the following policies:
      • Priority: scales the node pool based on the priorities of the vSwitches that you specify. If Auto Scaling fails to create ECS instances in the zone of the vSwitch with the highest priority, Auto Scaling attempts to create ECS instances in the zone of the vSwitch with a lower priority.
      • Cost Optimization: creates ECS instances in ascending order of vCPU unit price. When multiple instance types are specified, the system preferentially creates preemptible instances. If Auto Scaling fails to create preemptible instances, for example, because preemptible instances are out of stock, Auto Scaling attempts to create pay-as-you-go ECS instances instead. If you set the scaling policy to cost optimization, you can configure the following parameters:
        • Percentage of Pay-as-you-go Instances: Specify the percentage of pay-as-you-go instances in the node pool. Valid values: 0 to 100.
        • Enable Supplemental Preemptible Instances: After you enable this feature, the system notifies Auto Scaling 5 minutes before it reclaims existing preemptible instances, and Auto Scaling automatically creates the same number of new preemptible instances to replace them.
        • Enable Supplemental Pay-as-you-go Instances: After you enable this feature, Auto Scaling attempts to create pay-as-you-go ECS instances to meet the scaling requirement if it fails to create preemptible instances, for example, because the unit price is too high or preemptible instances are out of stock.
      • Distribution Balancing: evenly distributes ECS instances across the zones of the vSwitches that are specified for the scaling group. If the distribution of ECS instances across zones becomes unbalanced, for example, because ECS resources are out of stock in a zone, you can select this policy to evenly redistribute the ECS instances across zones.
        Note This policy takes effect only when you have specified multiple vSwitches in the VPC.
    • Resource Group: You can specify a resource group for scale-out activities. When the node pool is scaled out, nodes from the specified resource group are added to the node pool. By default, the resource group of the ACK cluster is specified.
    • Custom Security Group: You can select Basic Security Group or Advanced Security Group. For more information about security groups, see Overview.
      Note
      • To use custom security groups, Submit a ticket and apply to be added to the whitelist.
      • The security groups that you select must be of the same type (basic security group or advanced security group).
      • You can select at most five security groups.
      • You cannot change the security groups of a node pool when you modify the node pool.
      • If you specify an existing security group, the system does not automatically configure security group rules. This may cause errors when you access the nodes in the cluster. You must manually configure security group rules for the cluster. For more information, see Limits on ECS instances and Configure security group rules to enforce access control on ACK clusters.
  7. Click Confirm Order.
    On the Node Pools page, check the status of the node pool. If the node pool is in the Initializing state, the node pool is being created. After the node pool is created, the node pool is in the Active state.
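    After the node pool is created and nodes are added, you can verify the Node Label, Taints, and Set New Nodes to Unschedulable settings from the command line. The following commands are a minimal sketch that uses standard kubectl commands and assumes that kubectl is configured to access the cluster; the node name cn-hangzhou.i-xxxxxx is a placeholder.
      # List the nodes together with their labels to confirm that the node pool labels were applied.
      kubectl get nodes --show-labels
      # Inspect the taints that are set on a specific node (replace the node name with your own).
      kubectl describe node cn-hangzhou.i-xxxxxx | grep -A 3 Taints
      # Nodes that were set to unschedulable show SchedulingDisabled in the STATUS column.
      # Run the following command to make such a node schedulable again.
      kubectl uncordon cn-hangzhou.i-xxxxxx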

Modify the expected number of nodes in a node pool

The expected number of nodes specifies the number of nodes that a node pool must keep. After you specify the expected number of nodes in a node pool, the nodes in the node pool are automatically scaled to the specified number. For more information about the scaling rules, see Rules for changes to the expected number of instances.

Node pools that are configured with the Expected Nodes parameter and those that are not configured with the parameter have different reactions to operations such as removing nodes and releasing ECS instances. For more information, see What are the differences between node pools that are configured with the Expected Nodes parameter and those that are not configured with this parameter?.

You can scale in or out a node pool by changing the expected number of nodes:
  • Scale out the node pool: Set the expected number of nodes to a value that is greater than the current value. The node pool is then automatically scaled out. We recommend this method for scaling out a node pool, because the system keeps adding nodes until the expected number is reached even if some nodes fail to be added. For a command-line alternative, see the sketch after this list.
  • Scale in the node pool: Set the expected number of nodes to a value that is smaller than the current value. Then, the node pool is automatically scaled in.
    Notice If you use the ECS or Auto Scaling console or use the ECS or Auto Scaling API to remove nodes or release instances in a node pool, the node pool automatically scales to the expected number of nodes. A node pool also scales to the expected number of nodes when the subscription instances in the node pool expire and are automatically released. Therefore, if you want to change the number of nodes in a node pool, modify the expected number of nodes or manually remove nodes from the node pool in the ACK console. For more information, see Remove a node.
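If you prefer to manage node pools from the command line, you can also change the expected number of nodes through the ACK API. The following sketch assumes that the ModifyClusterNodePool operation (PUT /clusters/{cluster_id}/nodepools/{nodepool_id}) accepts a scaling_group.desired_size field and that the Alibaba Cloud CLI (aliyun) is installed and configured; the cluster ID and node pool ID are placeholders. Verify the operation path and field names against the ACK API reference before you use them.
  # Set the expected number of nodes in a node pool to 5.
  # The IDs are placeholders; the operation path and the desired_size field are assumptions to verify.
  aliyun cs PUT /clusters/c0123456789abcdef/nodepools/np-0123456789abcdef \
    --header "Content-Type=application/json" \
    --body '{"scaling_group": {"desired_size": 5}}'
To change the expected number of nodes in the console, perform the following steps: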
  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
  4. In the left-side navigation pane of the details page, choose Nodes > Node Pools.
  5. On the Node Pools page, find the node pool that you want to manage and click Scale in the Actions column.
  6. Grant ACK the permissions to access cloud resources.
    1. Click AliyunOOSLifecycleHook4CSRole.
      Note
      • If the current account is an Alibaba Cloud account, click AliyunOOSLifecycleHook4CSRole.
      • If the current account is a RAM user, make sure the Alibaba Cloud account is assigned the AliyunOOSLifecycleHook4CSRole role. Then, grant the AliyunRAMReadOnlyAccess permission to the RAM user. For more information, see Assign RBAC roles to RAM users or RAM roles.
    2. On the Cloud Resource Access Authorization page, click Agree to Authorization.
  7. In the node pool dialog box, configure the parameters.
    For more information about the parameters, see Create a node pool. The following list describes some of the parameters:
    • Expected Nodes: Specify the expected number of nodes in the node pool. You can add at most 500 nodes at a time.
    • ECS Label: Add labels to the ECS instances.
    • Node Label: Add labels to the nodes to be added to the cluster.
      Note
      • If you select Add Labels and Taints to Existing Nodes, the labels and taints that you specify are also applied to the existing nodes in the node pool. The labels and taints that you previously added to the existing nodes are not changed.
      • If you select Set New Nodes to Unschedulable, nodes that are newly added to the cluster are marked as unschedulable. You can go to the Nodes page and set the nodes as schedulable.

    • Taints: Add taints to the nodes to be added to the node pool.
  8. Click Confirm.
    • On the Node Pools page, the status of the node pool is Scaling Out. This indicates that the scale-out activity is in progress. After the scale-out activity is completed, the status of the node pool changes to Active.
    • On the Node Pools page, the status of the node pool is Removing. This indicates that the scale-in activity is in progress. After the scale-in activity is completed, the status of the node pool changes to Active.
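    After the scaling activity is completed, you can also confirm the result from the command line. The following check assumes that kubectl is configured to access the cluster; it simply lists the nodes that have joined the cluster so that you can compare the count with the expected number of nodes.
      # List all cluster nodes. The number of Ready nodes in the node pool should match the expected number of nodes.
      kubectl get nodes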

Modify a node pool

Note After you modify a node pool, the modifications take effect only on nodes that are newly added to the node pool. The modifications do not apply to the existing nodes in the node pool. If you want to change the billing method of the existing nodes in a node pool, log on to the ECS console and make the change. For more information, see Change the billing method of an ECS instance from pay-as-you-go to subscription.

Find the node pool that you want to modify and click Edit in the Actions column. For more information about the parameters, see Create a node pool. The following list describes some of the parameters:

  • Operating System: Change the operating system version for the nodes.
  • ECS Label: Add labels to the ECS instances.
  • Node Label: Add labels to the nodes to be added to the cluster.
    Note
    • If you select Add Labels and Taints to Existing Nodes, the specified labels and taints are synchronized to the existing nodes in the node pool. This does not change the labels and taints that you previously added to the existing nodes.
    • If you select Set New Nodes to Unschedulable, nodes that are newly added to the cluster are marked as unschedulable. You can go to the Nodes page and set the nodes as schedulable.

  • Taints: Add taints to the nodes to be added to the node pool.
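Because the modifications take effect only on newly added nodes by default, you may want to check which existing nodes already carry a given label or taint before you decide whether to select Add Labels and Taints to Existing Nodes. The following kubectl commands are a minimal sketch; the label workload-type=batch and the node name are placeholders.
  # List the nodes that already carry the label (placeholder key and value).
  kubectl get nodes -l workload-type=batch
  # Show the taints that are currently set on a specific node (placeholder node name).
  kubectl get node cn-hangzhou.i-xxxxxx -o jsonpath='{.spec.taints}'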

Add free nodes to a node pool

Free nodes are nodes that are not added to node pools. Free nodes exist in clusters that were created before the node pool feature was released.

  1. Create a node pool that uses the same configurations as the free nodes and set the expected number of nodes to the number of free nodes that you want to add.
    1. Log on to the ACK console.
    2. In the left-side navigation pane of the ACK console, click Clusters.
    3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
    4. In the left-side navigation pane of the details page, choose Nodes > Node Pools.
    5. In the upper-right corner of the Node Pools page, click Create Node Pool.
    6. In the Create Node Pool dialog box, configure the node pool and click Confirm Order.
      For more information, see Create a node pool.
  2. Remove free nodes
    Note You can remove free nodes in the ACK console. When you remove free nodes, you must select Release ECS Instance in the Remove Node dialog box. For more information, see Remove a node.
    1. Log on to the ACK console.
    2. In the left-side navigation pane of the ACK console, click Clusters.
    3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
    4. In the left-side navigation pane of the details page, choose Nodes > Nodes.
    5. On the Nodes page, find the node that you want to remove and choose More > Remove in the Actions column.
      Note To remove multiple nodes at a time, select the nodes that you want to remove on the Nodes page and click Batch Remove.
    6. In the Remove Node dialog box, select Release ECS Instance and Drain the Node, and then click OK.
      • Release ECS Instance:
        • Only pay-as-you-go ECS instances are released. The system continues to bill ECS instances that are not released.
        • Subscription ECS instances are automatically released after the subscription expires.
        • If you do not select Release ECS Instance, you are still charged for the ECS instance where the node is deployed.
      • Drain the Node: Select this option to migrate pods that run on the nodes to be removed to other nodes in the cluster. If you select this option, make sure that the other nodes have sufficient resources for these pods.
        You can also run the kubectl drain node-name [options] command to migrate pods that run on the nodes to be removed to other nodes in the cluster.
        Note
        • node-name must be in the format of your-region-name.node-id.

          your-region-name specifies the region where the cluster that you want to manage is deployed. node-id specifies the ID of the ECS instance where the node to be removed is deployed. Example: cn-hangzhou.i-xxx.

        • options specifies the optional parameters of the command. Example: --force --ignore-daemonsets --delete-local-data. You can run the kubectl drain --help command to view help information.
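        For example, to drain a node before you remove it, you might run the following commands. This is a minimal sketch; the node name cn-hangzhou.i-xxxxxx is a placeholder that follows the format described above.
          # Evict the pods from the node so that they are rescheduled onto other nodes in the cluster.
          kubectl drain cn-hangzhou.i-xxxxxx --force --ignore-daemonsets --delete-local-data
          # After the node is removed, confirm that it no longer appears in the node list.
          kubectl get nodes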

Other operations

On the Node Pools page, you can perform the following operations:
  • Enter a node pool name into the search box to the right of the Name drop-down list and click Search to find the node pool.
  • Click Details in the Actions column to view the node pool details.
  • Click Sync Node Pool to refresh the information and status of the node pools and nodes in the cluster. If you have modified the cluster nodes or the displayed node status differs from the actual status, click Sync Node Pool to update the status of the nodes.
  • Click the name of the node pool that you want to manage. On the details page of the node pool, you can perform the following operations:
    • Click the Overview tab to view information about the cluster, node pool, node configurations, and auto scaling settings.
    • Click the Nodes tab to view information about the nodes in the node pool. You can select multiple nodes and remove them at the same time.
    • Click the Nodes tab. In the upper-right corner of the tab, you can click Export to export node information to a comma-separated values (CSV) file.
    • Click the Nodes tab and select Display Only Failed Nodes to view nodes that failed to be created.
    • Click the Scaling Activities tab to view the latest scaling events of the node pool.
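In addition to the console, you can inspect node pool membership with kubectl. ACK typically adds a node pool label to each node, but the exact label key may vary, so verify it on your nodes first; the label key alibabacloud.com/nodepool-id and the node pool ID in the following sketch are assumptions.
  # Check the labels on your nodes to find the node pool label key that your cluster uses.
  kubectl get nodes --show-labels
  # List the nodes that belong to a specific node pool (label key and node pool ID are assumptions; adjust them to match your cluster).
  kubectl get nodes -l alibabacloud.com/nodepool-id=np-0123456789abcdef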