You can use the node auto scaling feature to enable Container Service for Kubernetes (ACK) to automatically scale nodes when resources in the current cluster cannot fulfill pod scheduling. The node auto scaling feature is suitable for small-scale scaling activities and workloads that require only one scaling activity at a time. For example, this feature is suitable for a cluster that contains fewer than 20 node pools with auto scaling enabled, or for node pools that have auto scaling enabled and each contain fewer than 100 nodes.
Before you start
To better work with the node auto scaling feature, we recommend that you read the Overview of node scaling topic and pay attention to the following items:
How node auto scaling works and its features
Use scenarios of node auto scaling
Usage notes for node auto scaling
Prerequisites
An ACK managed cluster or ACK dedicated cluster that runs Kubernetes 1.24 or later is created. For more information, see Create an ACK managed cluster, Create an ACK dedicated cluster, and Manually update ACK clusters.
Auto Scaling (formerly known as Elastic Scaling Service, or ESS) is activated, and the AliyunCSManagedAutoScalerRole Resource Access Management (RAM) role is assigned to ACK.
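To confirm that the cluster meets the version requirement, you can check the Kubernetes version reported by the nodes. A minimal sketch, assuming kubectl access to the cluster:

```bash
# The VERSION column shows the kubelet version of each node;
# it should be 1.24 or later for node auto scaling.
kubectl get nodes
```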
Step 1: Enable node auto scaling
Before you use node auto scaling, you must enable and configure this feature on the Node Pools page in the ACK console. When you configure this feature, set Node Scaling Method to Auto Scaling.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Nodes > Node Pools.
On the Node Pools page, click Enable next to Node Scaling.
If this is the first time you use this feature, follow the on-screen instructions to activate Auto Scaling and complete authorization. Skip this step if you have already completed authorization.
In the Node Scaling Configuration panel, set Node Scaling Method to Auto Scaling, configure scaling parameters, and then click OK.
| Parameter | Description |
| --- | --- |
| Node Pool Scale-out Policy | Random Policy: randomly scales out one of the node pools when multiple scalable node pools exist. Default Policy: scales out the node pool that wastes the least resources when multiple scalable node pools exist. Priority-based Policy: scales out node pools based on their scale-out priorities when multiple scalable node pools exist. You can specify a scale-out priority for a node pool only after the node pool is created. |
| Scan Interval | The interval at which the cluster is evaluated for scaling. Default value: 60s. |

The autoscaler triggers scale-out activities based on the actual scheduling status, so you need to configure only the scale-in conditions.

Important: For Elastic Compute Service (ECS) nodes, the autoscaler performs scale-in activities only when the Scale-in Threshold, Defer Scale-in For, and Cooldown conditions are all met. For GPU-accelerated nodes, the autoscaler performs scale-in activities only when the GPU Scale-in Threshold, Defer Scale-in For, and Cooldown conditions are all met.

| Parameter | Description |
| --- | --- |
| Allow Scale-in | Specifies whether to allow scale-in activities. The scale-in configuration does not take effect when this switch is turned off. Proceed with caution. |
| Scale-in Threshold | The ratio of the resources requested by pods to the resource capacity of a node in a node pool that has auto scaling enabled. A scale-in activity is performed only when both the CPU and memory utilization of a node are lower than this threshold. |
| GPU Scale-in Threshold | The scale-in threshold for GPU-accelerated nodes. A scale-in activity is performed only when the CPU, memory, and GPU utilization of a node are all lower than this threshold. |
| Defer Scale-in For | The interval between the time when the scale-in threshold is reached and the time when the scale-in activity (node removal) starts. Unit: minutes. Default value: 10. Important: The autoscaler performs scale-in activities only when Scale-in Threshold is configured and the Defer Scale-in For condition is met. |
| Cooldown | After the autoscaler performs a scale-out activity, it waits for a cooldown period before it can perform scale-in activities. During the cooldown period, the autoscaler cannot remove nodes but still checks whether nodes meet the scale-in conditions. For example, if Cooldown is set to 10 minutes and Defer Scale-in For is set to 5 minutes, the autoscaler cannot remove nodes within the 10 minutes after a scale-out activity, but it continues to evaluate nodes against the scale-in conditions during that time. After the cooldown period ends, a node that meets the scale-in conditions is removed after the additional 5-minute deferral. |
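To observe these scale-out and scale-in decisions as they happen, you can tail the logs of the cluster-autoscaler Deployment that ACK installs in the kube-system namespace (see Step 3). A minimal sketch, assuming kubectl access to the cluster:

```bash
# Follow the autoscaler logs to see scale-out triggers and nodes that are
# held back by the Cooldown or Defer Scale-in For settings.
kubectl -n kube-system logs deployment/cluster-autoscaler --tail=100 -f
```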
Step 2: Configure a node pool that has auto scaling enabled
The node auto scaling feature scales only nodes in node pools that have auto scaling enabled. Therefore, after you configure node auto scaling, you need to configure at least one node pool that has auto scaling enabled. You can create a node pool that has auto scaling enabled or enable auto scaling for an existing node pool.
The following table describes the key parameters. The term "node pool" in the following section refers to a node pool that has auto scaling enabled. For more information, see Create a node pool and Modify a node pool.
| Parameter | Description |
| --- | --- |
| Auto Scaling | Specifies whether to enable auto scaling, which provides cost-effective computing resource scaling based on resource demand and scaling policies. For more information, see Auto scaling overview. Before you enable this feature for a node pool, you must enable node auto scaling for the cluster. For more information, see Step 1: Enable node auto scaling. |
| Instance-related parameters | Select the ECS instance types used by the node pool. You can filter instance types by attributes such as vCPU, memory, instance family, and architecture. When the node pool is scaled out, ECS instances of the selected instance types are created, and the scaling policy of the node pool determines which of the selected instance types is used. If you select only one instance type, fluctuations in ECS instance stock affect the scaling success rate. We recommend that you select multiple instance types to increase the scaling success rate. If you select only GPU-accelerated instances, you can select Enable GPU Sharing on demand. For more information, see cGPU overview. |
| Instances | The number of instances in the node pool, excluding existing instances in the cluster. By default, the minimum number of instances is 0. If you specify one or more instances, the system adds the instances to the node pool. When a scale-out activity is triggered, the instances in the node pool are added to the associated cluster. |
| Operating System | When you enable auto scaling, you can select an image based on Alibaba Cloud Linux, Windows, or Windows Core. If you select an image based on Windows or Windows Core, the system automatically adds the { effect: 'NoSchedule', key: 'os', value: 'windows' } taint to the nodes in the node pool. |
| Node Label | Node labels are automatically added to nodes that are added to the cluster by scale-out activities. Important: Auto scaling can recognize node labels and taints only after they are mapped to node pool tags. A node pool can have only a limited number of tags, so keep the total number of ECS tags, taints, and node labels of a node pool that has auto scaling enabled under 12. |
| Scaling Policy | Priority: the system scales out the node pool based on the priorities of the vSwitches that you select for the node pool. Cost Optimization: the system creates instances in ascending order of vCPU unit price. Distribution Balancing: the system evenly distributes instances across the zones of the vSwitches that you select for the node pool. This policy takes effect only when you select multiple vSwitches. |
| Scaling Mode | You can select Standard or Swift. |
| Taints | After you add taints to a node, ACK no longer schedules pods that do not tolerate the taints to the node. |
After you create a node pool that has auto scaling enabled, you can refer to Step 1: Enable node auto scaling and select Priority-based Policy as needed. Valid priority values are integers from 1 to 100.
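When you rely on node labels and taints to steer workloads to a node pool that has auto scaling enabled, the pending pods must match those labels and tolerate those taints. The following is a minimal sketch; the workload-type: gpu-batch label and the dedicated=batch:NoSchedule taint are hypothetical placeholders for the labels and taints configured on your node pool:

```bash
# Hypothetical example: a Deployment whose pods can be scheduled only onto
# nodes that carry the node pool's label and taint. If no such node has
# spare capacity, the pending pods trigger a scale-out of that node pool.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-out-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: scale-out-demo
  template:
    metadata:
      labels:
        app: scale-out-demo
    spec:
      nodeSelector:
        workload-type: gpu-batch   # hypothetical node label set on the node pool
      tolerations:
      - key: dedicated             # hypothetical taint set on the node pool
        value: batch
        effect: NoSchedule
      containers:
      - name: app
        image: nginx:1.25
        resources:
          requests:
            cpu: "1"
            memory: 1Gi
EOF
```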
Step 3: (Optional) Verify node auto scaling
After you complete the preceding configuration, you can use the node auto scaling feature. You can verify that auto scaling is enabled for the node pool and that cluster-autoscaler is installed in the cluster.
Auto scaling is enabled for the node pool
The Node Pools page displays node pools with auto scaling enabled.
cluster-autoscaler is installed
In the left-side navigation pane of the details page, choose Workloads > Deployments.
Select the kube-system namespace. The cluster-autoscaler component is displayed.
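You can also verify the component from the command line. A minimal sketch, assuming kubectl access to the cluster:

```bash
# Confirm that the cluster-autoscaler Deployment exists and is available.
kubectl -n kube-system get deployment cluster-autoscaler
```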
FAQ
Why does the auto scaling component fail to add nodes after a scale-out activity is triggered?
Check whether the following scenarios exist:
The instance types of the node pool that has auto scaling enabled cannot fulfill the resource requests of pods. Resources provided by ECS instance types comply with the ECS specifications. ACK reserves a certain amount of node resources to run Kubernetes components and system processes, which ensures that the OS kernel, system services, and Kubernetes daemons can run normally. However, this causes the amount of allocatable resources of a node to differ from the resource capacity of the node. For more information, see Resource reservation policy.
In addition, system components are installed on nodes by default. Therefore, the resource requests of pods must be lower than the allocatable resources of the node, not merely the resource capacity of the instance type. To compare the two values, see the command sketch after this list.
Cross-zone scale-out activities cannot be triggered for pods that have limits on zones.
The RAM role does not have the permissions to manage the cluster. You must complete authorization for each cluster that is involved in the scale-out activity. For more information, see Step 1: Enable node auto scaling.
The following issues occurred during previous scale-out activities:
The instance fails to be added to the cluster and a timeout error occurs.
The node is not ready and a timeout error occurs.
To ensure that nodes can be accurately scaled, the auto scaling component does not perform any scaling activities before it fixes the abnormal nodes.
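To check how much of a node's capacity is actually allocatable to pods, you can compare the two values reported by the node object. A minimal sketch; replace &lt;node-name&gt; with the name of a node in the node pool:

```bash
# Print the node's total capacity and its allocatable resources; pods can only
# use the allocatable portion, which excludes ACK's resource reservations.
kubectl get node <node-name> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'
```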
Why does the auto scaling component fail to remove nodes after a scale-in activity is triggered?
Check whether the following scenarios exist:
The ratio of resources requested by pods on the node is higher than the scale-in threshold.
Pods in the kube-system namespace run on the node.
A scheduling policy forces the pods to run on the current node. Therefore, the pods cannot be scheduled to other nodes.
The pods on the node are restricted by a PodDisruptionBudget and the minimum limit specified by the PodDisruptionBudget has been reached, as in the example below.
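For example, a PodDisruptionBudget such as the following can block scale-in. This is a hypothetical sketch: the critical-web-pdb name and the app: critical-web label are placeholders for your own workload. If the matching Deployment runs exactly two replicas, evicting either pod would violate minAvailable, so the autoscaler cannot drain the node that hosts them:

```bash
# Hypothetical PodDisruptionBudget that prevents the autoscaler from
# evicting the matching pods, and therefore from removing their node.
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: critical-web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: critical-web
EOF
```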
For more information, see FAQ about cluster-autoscaler.
How do I choose between multiple node pools that have auto scaling enabled when I perform a scaling activity?
When pods fail to be scheduled, the scheduling simulation logic of the autoscaler makes decisions based on the labels, taints, and instance types of each node pool. If the simulation shows that the pods can be scheduled onto nodes of a node pool, the autoscaler scales out that node pool. If multiple node pools meet the scheduling conditions in the simulation, the least-waste principle is applied by default: the node pool whose nodes would have the least idle resources left after the pods are scheduled is selected. For example, if a pending pod requests 3 vCPUs, a node pool of 4-vCPU instances leaves less idle capacity than a node pool of 8-vCPU instances and is therefore selected.
What types of pods can prevent cluster-autoscaler from removing nodes?
Pods that are not managed by a controller, such as a Deployment, ReplicaSet, StatefulSet, or Job.
Pods that use local storage.
Pods that cannot be scheduled to other nodes because of scheduling constraints, such as node affinity rules.
Pods in the kube-system namespace that are not covered by a PodDisruptionBudget.
Pods whose eviction would violate a PodDisruptionBudget.
Pods that have the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation.
What scheduling policies does the node auto scaling feature use to determine whether the unschedulable pods can be scheduled to a node pool that has the auto scaling feature enabled?
The following list describes the scheduling policies used by cluster-autoscaler.
PodFitsResources
GeneralPredicates
PodToleratesNodeTaints
MaxGCEPDVolumeCount
NoDiskConflict
CheckNodeCondition
CheckNodeDiskPressure
CheckNodeMemoryPressure
CheckNodePIDPressure
CheckVolumeBinding
MaxAzureDiskVolumeCount
MaxEBSVolumeCount
MatchInterPodAffinity
NoVolumeZoneConflict
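To see why a specific pod is unschedulable before a scale-out is triggered, you can inspect its scheduling events, which report the failed checks. A minimal sketch; replace &lt;pod-name&gt; with the name of a pending pod:

```bash
# The Events section at the end of the output explains why the scheduler
# could not place the pod (for example, insufficient CPU or untolerated taints).
kubectl describe pod <pod-name>
```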
How do I attach the AliyunCSManagedAutoScalerRolePolicy policy to the worker RAM role?
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. On the Cluster Resources tab, click the hyperlink next to Worker RAM Role.
In the RAM console, click Precise Permission.
In the Precise Permission panel, System Policy is selected by default. Enter AliyunCSManagedAutoScalerRolePolicy into the Policy Name field and click OK. In the Precise Permission panel, click Close. Refresh the page. The page shows that the permissions are added.
Manually restart the cluster-autoscaler Deployment in the kube-system namespace so that the new RAM policy can take effect.
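A minimal sketch of the restart, assuming kubectl access to the cluster:

```bash
# Restart the cluster-autoscaler Deployment so that it picks up the new RAM policy.
kubectl -n kube-system rollout restart deployment cluster-autoscaler
```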