Container Service for Kubernetes: kube-scheduler

Last Updated: Dec 09, 2025

kube-scheduler is a control plane component that schedules pods to suitable nodes in a cluster based on node resource usage and pod scheduling requirements.

Introduction

About kube-scheduler

The kube-scheduler identifies viable nodes for each pod in the scheduling queue by comparing the Request values in the pod specification with the Allocatable resources of nodes. It then sorts the viable nodes and binds the pod to the most suitable one. By default, the kube-scheduler spreads pods evenly across nodes based on their Request values. For more information, see the official Kubernetes documentation for kube-scheduler.
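In other words, the feasibility check is simple arithmetic: the pod's total request, summed over its containers, must fit into whatever remains of a node's Allocatable resources. The following Go sketch only illustrates this idea; it is not the kube-scheduler implementation, it ignores init containers and pod overhead, and all values are made up for the example.

```go
package main

import "fmt"

// request is a simplified resource request: CPU in millicores, memory in MiB.
type request struct {
	cpuMilli int64
	memMiB   int64
}

// podRequest sums the requests of all containers in a pod. This total is what
// the scheduler compares against a node's Allocatable resources.
// (Simplification: init containers and pod overhead are ignored.)
func podRequest(containers []request) request {
	var total request
	for _, c := range containers {
		total.cpuMilli += c.cpuMilli
		total.memMiB += c.memMiB
	}
	return total
}

// fits reports whether the pod still fits on a node, given the node's
// Allocatable resources and the requests of pods already placed on it.
func fits(pod, allocatable, alreadyRequested request) bool {
	return pod.cpuMilli <= allocatable.cpuMilli-alreadyRequested.cpuMilli &&
		pod.memMiB <= allocatable.memMiB-alreadyRequested.memMiB
}

func main() {
	pod := podRequest([]request{
		{cpuMilli: 250, memMiB: 256}, // application container
		{cpuMilli: 100, memMiB: 128}, // sidecar container
	})
	allocatable := request{cpuMilli: 4000, memMiB: 8192}      // node Allocatable
	alreadyRequested := request{cpuMilli: 3700, memMiB: 4096} // requested by pods already on the node
	fmt.Println(fits(pod, allocatable, alreadyRequested))     // false: only 300m CPU remains
}
```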

Filter and Score plugins

The Kubernetes scheduling framework handles complex scheduling logic through plugins, which keeps scheduling flexible and extensible. Filter plugins remove the nodes that cannot run a given pod, and Score plugins then rank the remaining nodes; the score indicates how suitable a node is for running the pod.
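As an illustration of these two phases, the following Go sketch models a minimal Filter/Score pipeline: Filter plugins discard infeasible nodes, Score plugins rate the survivors, and the weighted sum of scores selects the node to bind. This is a simplified sketch with made-up types and plugin names (cpuFits, leastAllocated); it is not the real scheduling framework API or any ACK plugin.

```go
package main

import "fmt"

// node is a simplified node view: free and total allocatable CPU in millicores.
type node struct {
	name     string
	freeCPU  int64
	totalCPU int64
}

// pod carries only the CPU request of the pod being scheduled, in millicores.
type pod struct{ requestMilli int64 }

// filterPlugin removes nodes that cannot run the pod.
type filterPlugin interface{ filter(p pod, n node) bool }

// scorePlugin ranks the nodes that passed every filter; higher is better.
type scorePlugin interface{ score(p pod, n node) int64 }

// cpuFits is a Filter-style plugin: the node must have enough free CPU.
type cpuFits struct{}

func (cpuFits) filter(p pod, n node) bool { return n.freeCPU >= p.requestMilli }

// leastAllocated is a Score-style plugin: the more CPU remains after placing
// the pod, the higher the score (0-100). Preferring the least-allocated node
// spreads pods evenly by their requests.
type leastAllocated struct{}

func (leastAllocated) score(p pod, n node) int64 {
	return (n.freeCPU - p.requestMilli) * 100 / n.totalCPU
}

// schedule runs every Filter plugin, then sums the weighted Score plugin
// results for the surviving nodes and returns the highest-scoring one.
func schedule(p pod, nodes []node, filters []filterPlugin, scorers map[scorePlugin]int64) (node, bool) {
	var best node
	bestScore, found := int64(-1), false
	for _, n := range nodes {
		feasible := true
		for _, f := range filters {
			if !f.filter(p, n) {
				feasible = false
				break
			}
		}
		if !feasible {
			continue // removed by the Filter phase
		}
		var total int64
		for s, weight := range scorers {
			total += weight * s.score(p, n)
		}
		if !found || total > bestScore {
			best, bestScore, found = n, total, true
		}
	}
	return best, found
}

func main() {
	nodes := []node{
		{name: "node-a", freeCPU: 300, totalCPU: 4000}, // filtered out: less than 500m free
		{name: "node-b", freeCPU: 1500, totalCPU: 4000},
		{name: "node-c", freeCPU: 2500, totalCPU: 4000},
	}
	p := pod{requestMilli: 500}
	best, ok := schedule(p, nodes, []filterPlugin{cpuFits{}}, map[scorePlugin]int64{leastAllocated{}: 1})
	fmt.Println(best.name, ok) // node-c true: it has the most CPU left after placement
}
```

The default weights listed below play the same role as the weight values in this sketch: a plugin with a larger weight contributes proportionally more to a node's final score.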

The following list shows the Filter and Score plugins that are enabled by default, and their default weights, for each kube-scheduler version.


v1.30.1-aliyun.6.5.4.fcac2bdf

Filter plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Filter plugins for v1.30.1.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Score plugins for v1.30.1.

  • Default ACK plugins and their weights:

    • NodeNUMAResource (default weight: 1)

    • ipawarescheduling (default weight: 1)

    • gpuNUMAJointAllocation (default weight: 1)

    • PreferredNode (default weight: 10000)

    • gpushare (default weight: 20000)

    • gputopology (default weight: 1)

    • numa (default weight: 1)

    • EciScheduling (default weight: 2)

    • NodeAffinity (default weight: 2)

    • elasticresource (default weight: 1000000)

    • resourcepolicy (default weight: 1000000)

    • NodeBEResourceLeastAllocated (default weight: 1)

    • loadawarescheduling (default weight: 10)

v1.28.3-aliyun-6.5.2.7ff57682

Filter plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Filter plugins for v1.28.3.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Score plugins for v1.28.3.

  • Default ACK plugins and their weights:

    • NodeNUMAResource (default weight: 1)

    • ipawarescheduling (default weight: 1)

    • gpuNUMAJointAllocation (default weight: 1)

    • PreferredNode (default weight: 10000)

    • gpushare (default weight: 20000)

    • gputopology (default weight: 1)

    • numa (default weight: 1)

    • EciScheduling (default weight: 2)

    • NodeAffinity (default weight: 2)

    • elasticresource (default weight: 1000000)

    • resourcepolicy (default weight: 1000000)

    • NodeBEResourceLeastAllocated (default weight: 1)

    • loadawarescheduling (default weight: 10)

v1.26.3-aliyun-6.6.1.605b8a4f

Filter plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Filter plugins for v1.26.3.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins:

    Same as the open source community. For more information, see Default Score plugins for v1.26.3.

  • Default ACK plugins and their weights:

    • NodeNUMAResource (default weight: 1)

    • ipawarescheduling (default weight: 1)

    • gpuNUMAJointAllocation (default weight: 1)

    • PreferredNode (default weight: 10000)

    • gpushare (default weight: 20000)

    • gputopology (default weight: 1)

    • numa (default weight: 1)

    • EciScheduling (default weight: 2)

    • NodeAffinity (default weight: 2)

    • elasticresource (default weight: 1000000)

    • resourcepolicy (default weight: 1000000)

    • NodeBEResourceLeastAllocated (default weight: 1)

    • loadawarescheduling (default weight: 10)

Plugin features


The following list describes each ACK scheduler plugin and its related document.

  • NodeNUMAResource: Manages CPU topology-aware scheduling. Related document: Enable CPU topology-aware scheduling.

  • topologymanager: Manages node NUMA resource allocation. Related document: Enable NUMA topology-aware scheduling.

  • EciPodTopologySpread: Enhances topology spread constraints in virtual node scheduling scenarios. Related document: Enable virtual node scheduling policies for a cluster.

  • ipawarescheduling: Schedules pods based on the remaining IP addresses on nodes. Related document: Scheduling FAQ.

  • BatchResourceFit: Enables and manages the colocation of multi-type workloads. Related document: Best practices for colocation of multi-type workloads.

  • PreferredNode: Reserves nodes for node pools with auto scaling enabled. Related document: Node auto scaling.

  • gpushare: Manages shared GPU scheduling. Related document: Shared GPU scheduling.

  • NetworkTopology: Manages network topology-aware scheduling. Related document: Topology-aware scheduling.

  • CapacityScheduling: Manages capacity scheduling. Related document: Use Capacity Scheduling.

  • elasticresource: Manages ECI elastic scheduling. Related document: Use ElasticResource for ECI elastic scheduling (Discontinued).

  • resourcepolicy: Manages the scheduling of custom elastic resources. Related document: Custom elastic resource priority scheduling.

  • gputopology: Manages GPU topology-aware scheduling. Related document: GPU topology-aware scheduling.

  • ECIBinderV1: Binds virtual nodes in ECI elastic scheduling scenarios. Related document: Schedule pods to run on ECI.

  • loadawarescheduling: Manages load-aware scheduling. Related document: Use load-aware scheduling.

  • EciScheduling: Manages virtual node scheduling. Related document: Enable virtual node scheduling policies for a cluster.

Usage notes

kube-scheduler is automatically installed in Kubernetes clusters and requires no additional configuration. We recommend that you upgrade kube-scheduler to the latest version to get the most recent feature optimizations and bug fixes. To upgrade the component, log on to the Container Service Management Console, click the target cluster, and then choose Operations Management > Component Management in the left-side navigation pane.

Change history

Version 1.34 change history

Version

Change time

Change description

v1.34.0-apsara.6.11.7.43cab345

December 08, 2025

  • New features:

    • Network topology-aware scheduling now supports EP size-based scheduling. PyTorchJob pods are automatically placed contiguously based on their index during scheduling.

  • Bug fixes:

    • Improved the efficiency of auto scaling.

    • The scheduler no longer updates the PodScheduled condition when an ACS pod is scheduled. This prevents node pool auto scaling from being triggered.

    • Fixed an issue where the scheduler could not read the ACS GPU Partition of scheduled pods in the cluster after a restart.

v1.34.0-apsara.6.11.6.3c0b732b

November 10, 2025

  • Bug fixes:

    • Fixed a memory leak issue in IP-aware scheduling.

    • Fixed an issue where the CapacityScheduling quota was updated before the pod was bound, which could cause statistical errors.

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU.

    • Improved scheduling for PVCs to increase the scheduling speed for pods with disks.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang scheduling were used together.

v1.34.0-apsara.6.11.5.3c117f21

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using the `alibabacloud.com/acs: "true"` label for ACS or the `alibabacloud.com/eci: "true"` label for ECI did not take effect.

    • Fixed an issue that prevented a Pod from being scheduled when multiple containers in the Pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash during high-concurrency use of ACS computing power.

v1.34.0-apsara.6.11.3.ff6b62d8

September 17, 2025

Supported all previous features in ACK clusters of version 1.34.

Version 1.33 change history

Version

Change time

Change description

v1.33.0-apsara.6.11.7.4a6779f8

December 05, 2025

  • New features:

    • Network topology-aware scheduling now supports EP size-based scheduling. PyTorchJob pods are automatically placed contiguously based on their index during scheduling.

  • Bug fixes:

    • Improved the efficiency of auto scaling.

    • The scheduler no longer updates the PodScheduled condition when ACS instance provisioning is triggered. This prevents node pool auto scaling from being triggered.

    • Fixed an issue where the scheduler could not read the ACS GPU Partition of scheduled pods in the cluster after a restart.

    • Fixed an issue where having SelectedNode on a PVC affected pod scheduling.

v1.33.0-apsara.6.11.6.2fce98cb

November 10, 2025

  • Bug fixes:

    • Fixed a memory leak issue in IP-aware scheduling.

    • Fixed an issue where the CapacityScheduling quota was updated before the pod was bound, which could cause statistical errors.

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU.

    • Improved scheduling for PVCs to increase the scheduling speed for pods with disks.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang scheduling were used together.

v1.33.0-apsara.6.11.5.8dd6f5f4

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using the `alibabacloud.com/acs: "true"` label for ACS or the `alibabacloud.com/eci: "true"` label for ECI did not take effect.

v1.33.0-apsara.6.11.4.77470105

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling failure that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash during high-concurrency use of ACS computing power.

v1.33.0-apsara.6.11.3.ed953a31

September 08, 2025

  • New features:

    • ElasticQuotaTree can now use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in the quota.

    • NetworkTopology now supports declaring spread distribution in JobNetworkTopology using constraints.

  • Bug fixes:

    • Fixed an issue where the scheduler component might crash when PodTopologySpread was used.

v1.33.0-aliyun.6.11.2.330dcea7

August 19, 2025

  • Improved the scheduling determinism of GOAT. Nodes with the node.cloudprovider.kubernetes.io/uninitialized or node.kubernetes.io/unschedulable taint are no longer considered not ready.

  • Fixed an issue in the ElasticQuotaTree fairness check where quotas with an empty Min value or an empty Request value were incorrectly considered unsatisfied.

  • Fixed an issue where the scheduler component might crash when provisioning ACS instances.

  • Fixed an issue where the scheduler reported an error if an InitContainer had no resource requests. (29d1951)

v1.33.0-aliyun.6.11.1.382cd0a6

July 25, 2025

v1.33.0-aliyun.6.11.0.87e9673b

July 18, 2025

  • Improved the scheduling determinism of GOAT to prevent determinism failures caused by concurrent NodeReady states during pod scheduling.

  • Fixed an incorrect pod count for Gang scheduling that occurred when a PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. When a quota with unsatisfied resources has pending pods, the scheduler no longer schedules new pods to quotas that have met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true label. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type volume attached, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check to ResourcePolicy scheduling. If a Unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the Unit is skipped.

v1.33.0-aliyun.6.9.4.8b58e6b4

June 10, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly that occurred when ResourcePolicy was used.

  • Improved the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an incorrect pod count in ResourcePolicy for custom elastic resource priority scheduling.

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.33.0-aliyun.6.9.2.09bce458

April 28, 2025

Supported all previous features in ACK clusters of version 1.33.

Version 1.32 change history

Version

Change time

Change description

v1.32.0-apsara.6.11.6.03248691

November 10, 2025

  • Bug fixes:

    • Fixed a memory leak issue in IP-aware scheduling.

    • Fixed an issue where the CapacityScheduling quota was updated before the pod was bound, which could cause statistical errors.

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU.

    • Improved scheduling for PVCs to increase the scheduling speed for pods with disks.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang scheduling were used together.

v1.32.0-apsara.6.11.5.c774d3c3

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using the `alibabacloud.com/acs: "true"` label for ACS or the `alibabacloud.com/eci: "true"` label for ECI did not take effect.

v1.32.0-apsara.6.11.4.4a4f4843

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling failure that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash during high-concurrency use of ACS computing power.

v1.32.0-apsara.6.11.3.b651c575

September 12, 2025

  • New features:

    • ElasticQuotaTree can now use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in the quota.

    • NetworkTopology now supports declaring spread distribution in JobNetworkTopology using constraints.

v1.32.0-aliyun.6.11.2.58302423

August 21, 2025

  • Improved the scheduling determinism of GOAT. Nodes with the node.cloudprovider.kubernetes.io/uninitialized or node.kubernetes.io/unschedulable taint are no longer considered not ready.

  • Fixed an issue in the ElasticQuotaTree fairness check where quotas with an empty Min value or an empty Request value were incorrectly considered unsatisfied.

  • Fixed an issue where the scheduler component might crash when provisioning ACS instances.

v1.32.0-aliyun.6.11.1.ab632d8c

July 25, 2025

v1.32.0-aliyun.6.11.0.0350a0e7

July 18, 2025

  • Improved the scheduling determinism of GOAT to prevent determinism failures caused by concurrent NodeReady states during pod scheduling.

  • Fixed an incorrect pod count for Gang scheduling that occurred when a PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. When a quota with unsatisfied resources has pending pods, the scheduler no longer schedules new pods to quotas that have met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true label. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type volume attached, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check to ResourcePolicy scheduling. If a Unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the Unit is skipped.

v1.32.0-aliyun.6.9.4.d5a8a355

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly that occurred when ResourcePolicy was used.

  • Fixed a preemption anomaly in ElasticQuota.

v1.32.0-aliyun.6.9.3.515ac311

May 14, 2025

  • Improved the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an incorrect pod count in ResourcePolicy for custom elastic resource priority scheduling.

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.32.0-aliyun.6.9.2.09bce458

April 16, 2025

  • Fixed an anomaly in the ElasticQuota preemption feature.

  • Added support for scheduling pods to ACS GPU-HPN nodes in ACK clusters.

v1.32.0-aliyun.6.8.6.bd13955d

April 02, 2025

  • Fixed an issue where disks of the WaitForFirstConsumer type were not created by the CSI Plugin in ACK serverless clusters.

v1.32.0-aliyun.6.9.0.a1c7461b

February 28, 2025

  • Added support for IP-aware scheduling based on remaining IP addresses on nodes.

  • Added a plugin to support resource checks before a Kube-Queue job is dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.32.0-aliyun.6.8.5.28a2aed7

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max property was invalid after declaring PodLabels in custom elastic resource priority scheduling.

v1.32.0-aliyun.6.8.4.2b585931

January 17, 2025

Supported all previous features in ACK clusters of version 1.32.

Version 1.31 change history

Version

Change time

Change description

v1.31.0-apsara.6.11.5.28c6b51a

October 20, 2025

  • Bug fixes:

    • Fixed an issue where using the `alibabacloud.com/acs: "true"` label for ACS or the `alibabacloud.com/eci: "true"` label for ECI did not take effect.

v1.31.0-apsara.6.11.4.69d7e1fa

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling failure that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash during high-concurrency use of ACS computing power.

v1.31.0-apsara.6.11.3.9b41ad4a

September 12, 2025

  • New features:

    • ElasticQuotaTree can now use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in the quota.

    • NetworkTopology now supports declaring spread distribution in JobNetworkTopology using constraints.

    • Improved the scheduling determinism of GOAT. Nodes with the node.cloudprovider.kubernetes.io/uninitialized or node.kubernetes.io/unschedulable taint are no longer considered not ready.

  • Bug fixes

    • Fixed an issue in the ElasticQuotaTree fairness check where quotas with an empty Min value or an empty Request value were incorrectly considered unsatisfied.

    • Fixed an issue where the scheduler component might crash when provisioning ACS instances.

v1.31.0-aliyun.6.11.1.c9ed2f40

July 25, 2025

v1.31.0-aliyun.6.11.0.ea1f0f94

July 18, 2025

  • Improved the scheduling determinism of GOAT to prevent determinism failures caused by concurrent NodeReady states during pod scheduling.

  • Fixed an incorrect pod count for Gang scheduling that occurred when a PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. When a quota with unsatisfied resources has pending pods, the scheduler no longer schedules new pods to quotas that have met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true label. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type volume attached, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check to ResourcePolicy scheduling. If a Unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the Unit is skipped.

v1.31.0-aliyun.6.9.4.c8e540e8

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly that occurred when ResourcePolicy was used.

  • Fixed a preemption anomaly in ElasticQuota.

v1.31.0-aliyun.6.9.3.051bb0e8

May 14, 2025

  • Improved the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an incorrect pod count in ResourcePolicy for custom elastic resource priority scheduling.

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.31.0-aliyun.6.8.6.520f223d

April 02, 2025

  • Fixed an issue where disks of the WaitForFirstConsumer type were not created by the CSI Plugin in ACK serverless clusters.

v1.31.0-aliyun.6.9.0.8287816e

February 28, 2025

  • Added support for IP-aware scheduling based on remaining IP addresses on nodes.

  • Added a plugin to support resource checks before a Kube-Queue job is dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.31.0-aliyun.6.8.5.2c6ea085

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max property was invalid after declaring PodLabels in custom elastic resource priority scheduling.

v1.31.0-aliyun.6.8.4.8f585f26

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak that occurred when using PVCs in ACK serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.31.0-aliyun.6.8.3.eeb86afc

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type Units.

v1.31.0-aliyun.6.8.2.eeb86afc

December 05, 2024

Custom elastic resource priority scheduling: Added support for defining PodAnnotations in a Unit.

v1.31.0-aliyun.6.8.1.116b8e1f

December 02, 2024

  • Improved the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.31.0-aliyun.6.7.1.1943173f

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger elastic scaling.

    • The `resource: elastic` field in a Unit is deprecated. Use the k8s.aliyun.com/resource-policy-wait-for-ecs-scaling annotation in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly that occurred when the ECS instance type changed.

v1.31.0-aliyun.6.7.0.740ba623

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.31.0-aliyun.6.6.1.5bd14ab0

October 22, 2024

  • Fixed an occasional Invalid Score issue in PodTopologySpread.

  • Improved the event messages for Coscheduling. The number of Coscheduling failures is now included in the events.

  • Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.31.0-aliyun.6.6.0.ba473715

September 13, 2024

Supported all previous features in ACK clusters of version 1.31.

Version 1.30 change history

Version

Change time

Change description

v1.30.3-apsara.6.11.6.a298df6b

November 10, 2025

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU.

    • Improved scheduling for PVCs to increase the scheduling speed for pods with disks.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang scheduling were used together.

    • ElasticQuotaTree can now use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in the quota.

    • Improved the scheduling determinism of GOAT. Nodes with the node.cloudprovider.kubernetes.io/uninitialized or node.kubernetes.io/unschedulable taint are no longer considered not ready.

  • Bug fixes:

    • Fixed a memory leak issue in IP-aware scheduling.

    • Fixed an issue where the CapacityScheduling quota was updated before the pod was bound, which could cause statistical errors.

    • Fixed an issue in the ElasticQuotaTree fairness check where quotas with an empty Min value or an empty Request value were incorrectly considered unsatisfied.

v1.30.3-apsara.6.11.3.bc707580

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using the `alibabacloud.com/acs: "true"` label for ACS or the `alibabacloud.com/eci: "true"` label for ECI did not take effect.

v1.30.3-apsara.6.11.2.463d59c9

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling failure that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash during high-concurrency use of ACS computing power.

v1.30.3-aliyun.6.11.1.c005a0b0

July 25, 2025

v1.30.3-aliyun.6.11.0.84cdcafb

July 18, 2025

  • Improved the scheduling determinism of GOAT to prevent determinism failures caused by concurrent NodeReady states during pod scheduling.

  • Fixed an incorrect pod count for Gang scheduling that occurred when a PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. When a quota with unsatisfied resources has pending pods, the scheduler no longer schedules new pods to quotas that have met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true label. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type volume attached, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check to ResourcePolicy scheduling. If a Unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the Unit is skipped.

v1.30.3-aliyun.6.9.4.818b6506

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly that occurred when ResourcePolicy was used.

  • Fixed a preemption anomaly in ElasticQuota.

v1.30.3-aliyun.6.9.3.ce7e2faf

May 14, 2025

  • Improved the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an incorrect pod count in ResourcePolicy for custom elastic resource priority scheduling.

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.30.3-aliyun.6.8.6.40d5fdf4

April 02, 2025

  • Fixed an issue where disks of the WaitForFirstConsumer type were not created by the CSI Plugin in ACK serverless clusters.

v1.30.3-aliyun.6.9.0.f08e56a7

February 28, 2025

  • Added support for IP-aware scheduling based on remaining IP addresses on nodes.

  • Added a plugin to support resource checks before a Kube-Queue job is dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.30.3-aliyun.6.8.5.af20249c

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max property was invalid after declaring PodLabels in custom elastic resource priority scheduling.

v1.30.3-aliyun.6.8.4.946f90e8

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak that occurred when using PVCs in ACK serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.30.3-aliyun.6.8.3.697ce9b5

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type Units.

v1.30.3-aliyun.6.8.2.a5fa5dbd

December 05, 2024

Custom elastic resource priority scheduling

  • Added support for defining PodAnnotations in a Unit.

v1.30.3-aliyun.6.8.1.6dc0fd75

December 02, 2024

  • Improved the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.30.3-aliyun.6.7.1.d992180a

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger elastic scaling.

    • The `resource: elastic` field in a Unit is deprecated. Use the k8s.aliyun.com/resource-policy-wait-for-ecs-scaling annotation in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly that occurred when the ECS instance type changed.

v1.30.3-aliyun.6.7.0.da474ec5

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.30.3-aliyun.6.6.4.b8940a30

October 22, 2024

  • Fixed an occasional Invalid Score issue in PodTopologySpread.

v1.30.3-aliyun.6.6.3.994ade8a

October 18, 2024

  • Improved the event messages for Coscheduling. The number of Coscheduling failures is now included in the events.

  • Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.30.3-aliyun.6.6.2.0be67202

September 23, 2024

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.30.3-aliyun.6.6.1.d98352c6

September 11, 2024

  • Network topology-aware scheduling now supports preemption.

  • SlurmOperator

    • Supports hybrid scheduling in Kubernetes & Slurm clusters.

  • Coscheduling

    • Supports the latest CRD version from the community.

v1.30.3-aliyun.6.5.6.fe7bc1d5

August 20, 2024

Fixed a scheduling anomaly with PodAffinity/PodAntiAffinity introduced in v1.30.1-aliyun.6.5.1.5dad3be8.

v1.30.3-aliyun.6.5.5.8b10ee7c

August 01, 2024

  • Rebased to community version v1.30.3.

v1.30.1-aliyun.6.5.5.fcac2bdf

August 01, 2024

  • CapacityScheduling

    • Fixed a potential quota calculation error when Coscheduling and CapacityScheduling were used together.

  • GPUShare

    • Fixed an incorrect calculation of remaining resources on computing power scheduling nodes.

  • Custom elastic resource priority scheduling

    • Improved the node scale-out behavior when ResourcePolicy and Cluster Autoscaler are used together. Nodes are no longer scaled out when all pods in all Units have reached their Max value.

v1.30.1-aliyun.6.5.4.fcac2bdf

July 22, 2024

  • Coscheduling

    • Fixed a quota statistics error when using ECI.

  • Fixed the occasional "xxx is in cache, so can't be assumed" issue.

v1.30.1-aliyun.6.5.3.9adaeb31

July 10, 2024

Fixed an issue introduced in v1.30.1-aliyun.6.5.1.5dad3be8 where pods remained in the Pending state for a long time.

v1.30.1-aliyun.6.5.1.5dad3be8

June 27, 2024

  • Coscheduling

    • Improved Coscheduling speed.

  • Supports sequential pod scheduling.

  • Supports declaring equivalence classes to improve scheduling performance.

  • Used PreEnqueue to improve the performance of existing scheduler plugins.

v1.30.1-aliyun.6.4.7.6643d15f

May 31, 2024

  • Supported all previous features in ACK clusters of version 1.30.

Version 1.28 change history

Version

Change time

Change description

v1.28.12-apsara-6.11.5.db9be0f5

October 20, 2025

  • Bug fixes:

    • Fixed an issue where using the `alibabacloud.com/acs: "true"` label for ACS or the `alibabacloud.com/eci: "true"` label for ECI did not take effect.

v1.28.12-apsara-6.11.4.a48c5b6c

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling failure that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash during high-concurrency use of ACS computing power.

v1.28.12-apsara-6.11.3.1a06b13e

September 09, 2025

  • New features:

    • ElasticQuotaTree can now use the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in the quota.

v1.28.12-aliyun-6.11.1.f23c663c

July 25, 2025

v1.28.12-aliyun-6.11.0.4003ef92

July 18, 2025

  • Improved the scheduling determinism of GOAT to prevent determinism failures caused by concurrent NodeReady states during pod scheduling.

  • Fixed an incorrect pod count for Gang scheduling that occurred when a PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to nodes with insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. When a quota with unsatisfied resources has pending pods, the scheduler no longer schedules new pods to quotas that have met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the alibabacloud.com/eci=true, alibabacloud.com/acs=true, or eci=true label. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type volume attached, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check to ResourcePolicy scheduling. If a Unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the Unit is skipped.

v1.28.12-aliyun-6.9.4.206fc5f8

June 04, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might fail during continuous pod scheduling.

  • Fixed an occasional scheduling anomaly that occurred when ResourcePolicy was used.

  • Fixed a preemption anomaly in ElasticQuota.

v1.28.12-aliyun-6.9.3.cd73f3fe

May 14, 2025

  • Improved the scheduler's behavior when interacting with node pools that have auto scaling enabled.

  • Fixed an incorrect pod count in ResourcePolicy for custom elastic resource priority scheduling.

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.28.12-aliyun-6.8.6.5f05e0ac

April 02, 2025

  • Fixed an issue where disks of the WaitForFirstConsumer type were not created by the CSI Plugin in ACK serverless clusters.

v1.28.12-aliyun-6.9.0.6a13fa65

February 28, 2025

  • Added support for IP-aware scheduling based on remaining IP addresses on nodes.

  • Added a plugin to support resource checks before a Kube-Queue job is dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.28.12-aliyun-6.8.5.b6aef0d1

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max property was invalid after declaring PodLabels in custom elastic resource priority scheduling.

v1.28.12-aliyun-6.8.4.b27c0009

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak that occurred when using PVCs in ACK serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.28.12-aliyun-6.8.3.70c756e1

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type Units.

v1.28.12-aliyun-6.8.2.9a307479

December 05, 2024

Custom elastic resource priority scheduling

  • Added support for defining PodAnnotations in a Unit.

v1.28.12-aliyun-6.8.1.db6cdeb8

December 02, 2024

  • Improved the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.28.12-aliyun-6.7.1.44345748

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger elastic scaling.

    • The `resource: elastic` field in a Unit is deprecated. Use the k8s.aliyun.com/resource-policy-wait-for-ecs-scaling annotation in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly that occurred when the ECS instance type changed.

v1.28.12-aliyun-6.7.0.b97fca02

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.28.12-aliyun-6.6.4.e535a698

October 22, 2024

  • Fixed an occasional Invalid Score issue in PodTopologySpread.

v1.28.12-aliyun-6.6.3.188f750b

October 11, 2024

  • Improved the event messages for Coscheduling. The number of Coscheduling failures is now included in the events.

  • Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.28.12-aliyun-6.6.2.054ec1f5

September 23, 2024

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.28.12-aliyun-6.6.1.348b251d

September 11, 2024

  • Network topology-aware scheduling now supports preemption.

  • SlurmOperator

    • Supports hybrid scheduling in Kubernetes & Slurm clusters.

v1.28.12-aliyun-6.5.4.79e08301

August 20, 2024

Fixed a scheduling anomaly with PodAffinity/PodAntiAffinity introduced in v1.28.3-aliyun-6.5.1.364d020b.

v1.28.12-aliyun-6.5.3.aefde017

August 01, 2024

  • Rebased to community version v1.28.12.

v1.28.3-aliyun-6.5.3.79e08301

August 01, 2024

  • CapacityScheduling

    • Fixed a potential quota calculation error when Coscheduling and CapacityScheduling were used together.

  • GPUShare

    • Fixed an incorrect calculation of remaining resources on computing power scheduling nodes.

  • Custom elastic resource priority scheduling

    • Improved the node scale-out behavior when ResourcePolicy and Cluster Autoscaler are used together. Nodes are no longer scaled out when all pods in all Units have reached their Max value.

v1.28.3-aliyun-6.5.2.7ff57682

July 22, 2024

  • Coscheduling

    • Fixed a quota statistics error when using ECI.

  • Fixed the occasional "xxx is in cache, so can't be assumed" issue.

  • Fixed an issue introduced in v1.28.3-aliyun-6.5.1.364d020b where pods remained in the Pending state for a long time.

v1.28.3-aliyun-6.5.1.364d020b

June 27, 2024

  • Coscheduling

    • Improved Coscheduling speed.

  • Supports sequential pod scheduling.

  • Supports declaring equivalence classes to improve scheduling performance.

  • Used PreEnqueue to improve the performance of existing scheduler plugins.

v1.28.3-aliyun-6.4.7.0f47500a

May 24, 2024

  • Network topology-aware scheduling

    • Fixed an occasional scheduling failure in network topology-aware scheduling.

v1.28.3-aliyun-6.4.6.f32dc398

May 16, 2024

  • Shared GPU scheduling

    • Fixed a GPU scheduling anomaly that occurred in LINGJUN Clusters after the ack.node.gpu.schedule label of a node was changed from egpu to default.

  • CapacityScheduling

    • Fixed an issue that occasionally caused the "running AddPod on PreFilter plugin" error.

  • Elastic scheduling

    • Added a feature that generates a "wait for eci provisioning" event when an ECI instance is provisioned using the alibabacloud.com/burst-resource annotation.

v1.28.3-aliyun-6.4.5.a8b4a599

May 09, 2024

v1.28.3-aliyun-6.4.3.f57771d7

March 18, 2024

  • Shared GPU scheduling

    • Supports submitting a ConfigMap to specify card isolation.

  • Custom elastic resource priority scheduling

    • Added support for the elastic resource type.

v1.28.3-aliyun-6.4.2.25bc61fb

March 01, 2024

Disabled the SchedulerQueueingHints feature by default. For more information, see Pull Request #122291.

v1.28.3-aliyun-6.4.1.c7db7450

February 21, 2024

  • Added support for NUMA joint allocation.

  • Custom elastic resource priority scheduling

    • Added support for waiting between Units.

  • Fixed an issue in IP-aware scheduling where an incorrect remaining IP count reduced the number of schedulable pods.

v1.28.3-aliyun-6.3.1ab2185e

January 10, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where ECI zone affinity and spread constraints did not take effect when custom elastic resource priority scheduling was used.

  • CPU topology-aware scheduling

    • Prevents the same CPU core from being repeatedly assigned to a single pod, which caused pod startup failures on nodes.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when using the alibabacloud.com/burst-resource label to specify a policy, even if the label value was not eci or eci_only.

v1.28.3-aliyun-6.2.84d57ad9

December 21, 2023

Added support for MatchLabelKeys in custom elastic resource priority scheduling to automatically group different versions during application releases.

v1.28.3-aliyun-6.1.ac950aa0

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify a quota. You can specify the quota to which a pod belongs using the quota.scheduling.alibabacloud.com/name annotation on the pod.

    • Added a queue association feature. This feature supports counting only the resources of pods managed by Kube-Queue.

    • Improved the preemption logic. In the new version, CapacityScheduling preemption does not cause the resource usage of the preempted quota's pods to fall below the Min value, nor does it cause the resource usage of the preempting quota's pods to exceed the Min value.

  • Custom elastic resource priority

    • Added support for updating the Unit and node labels of a ResourcePolicy. After an update, the Deletion-Cost of pods is synchronized.

    • Added IgnoreTerminatingPod. This feature supports ignoring terminating pods when counting the number of pods in a Unit.

    • Added the IgnorePreviousPod option. This feature supports ignoring pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a Unit.

    • Added the PreemptPolicy option. This feature supports preemption attempts between Units.

  • GPUShare

    • Improved the GPUShare scheduling speed. The P99 scheduling latency of the Filter plugin is reduced from milliseconds to microseconds.

v1.28.3-aliyun-5.8-89c55520

October 28, 2023

Supported all previous features in ACK clusters of version 1.28.

Version 1.26 change history

Version

Change time

Change description

v1.26.3-aliyun-6.8.7.5a563072

November 27, 2025

Fixed a scheduling failure caused by NUMAAwareResource returning a score greater than 100.

v1.26.3-aliyun-6.8.7.fec3f2bc

May 14, 2025

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.26.3-aliyun-6.9.0.293e663c

February 28, 2025

  • Added support for IP-aware scheduling based on remaining IP addresses on nodes.

  • Added a plugin to support resource checks before a Kube-Queue job is dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.26.3-aliyun-6.8.5.7838feba

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max property was invalid after declaring PodLabels in custom elastic resource priority scheduling.

v1.26.3-aliyun-6.8.4.4b180111

January 02, 2025

  • Custom elastic resource priority scheduling:

    • Added support for ACS GPU.

    • Fixed a potential ECI instance leak that occurred when using PVCs in ACK serverless clusters.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.26.3-aliyun-6.8.3.95c73e0b

December 16, 2024

Custom elastic resource priority scheduling: Added support for multiple ACS-type Units.

v1.26.3-aliyun-6.8.2.9c9fa19f

December 05, 2024

Custom elastic resource priority scheduling

  • Added support for defining PodAnnotations in a Unit.

v1.26.3-aliyun-6.8.1.a12db674

December 02, 2024

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.26.3-aliyun-6.7.1.d466c692

November 06, 2024

  • Custom elastic resource priority scheduling

    • Added support for detecting the number of pods that trigger elastic scaling.

    • The `resource: elastic` field in a Unit is deprecated. Use the k8s.aliyun.com/resource-policy-wait-for-ecs-scaling annotation in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed a potential anomaly that occurred when the ECS instance type changed.

v1.26.3-aliyun-6.7.0.9c293fb7

November 04, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even without an ElasticQuotaTree.

  • Custom elastic resource priority scheduling

    • Added support for ACS-type units.

v1.26.3-aliyun-6.6.4.7a8f3f9d

October 22, 2024

Improved the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.26.3-aliyun-6.6.3.67f250fe

September 04, 2024

  • SlurmOperator

    • Improved the scheduling performance of the plugin.

v1.26.3-aliyun-6.6.2.9ea0a6f5

August 30, 2024

  • InterPodAffinity

    • Fixed an issue where removing a taint from a new node did not trigger pod rescheduling.

v1.26.3-aliyun-6.6.1.605b8a4f

July 31, 2024

  • SlurmOperator

    • Supports hybrid scheduling in Kubernetes & Slurm clusters.

  • Custom elastic resource priority scheduling

    • Improved the feature to prevent unnecessary node scale-out when used with node pools that have auto scaling enabled.

v1.26.3-aliyun-6.4.7.2a77d106

June 27, 2024

  • Coscheduling

    • Improved Coscheduling speed.

v1.26.3-aliyun-6.4.6.78cacfb4

May 16, 2024

  • CapacityScheduling

    • Fixed an issue that occasionally caused the "running AddPod on PreFilter plugin" error.

  • Elastic scheduling

    • Added a feature that generates a "wait for eci provisioning" event when an ECI instance is provisioned using the alibabacloud.com/burst-resource annotation.

v1.26.3-aliyun-6.4.5.7f36e9b3

May 09, 2024

v1.26.3-aliyun-6.4.3.e7de0a1e

March 18, 2024

  • Shared GPU scheduling

    • Supports submitting a ConfigMap to specify card isolation.

  • Custom elastic resource priority scheduling

    • Added support for the elastic resource type.

v1.26.3-aliyun-6.4.1.d24bc3c3

February 21, 2024

  • Improved the score of virtual nodes in the NodeResourceFit plugin. Virtual nodes now always receive a score of 0 in the NodeResourceFit plugin, which allows the Preferred type of NodeAffinity to prioritize scheduling to ECS nodes.

  • Added support for NUMA joint allocation.

  • Custom elastic resource priority scheduling

    • Added support for waiting between Units.

  • Fixed an issue in IP-aware scheduling where an incorrect remaining IP count reduced the number of schedulable pods.

v1.26.3-aliyun-6.3.33fdc082

January 10, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where ECI zone affinity and spread constraints did not take effect when custom elastic resource priority scheduling was used.

  • CPU topology-aware scheduling

    • Prevents the same CPU core from being repeatedly assigned to a single pod, which caused pod startup failures on nodes.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when using the alibabacloud.com/burst-resource label to specify a policy, even if the label value was not eci or eci_only.

  • CapacityScheduling

    • Added a feature that automatically enables job preemption in ACK LINGJUN Clusters.

v1.26.3-aliyun-6.2.d9c15270

December 21, 2023

Added support for MatchLabelKeys in custom elastic resource priority scheduling to automatically group different versions during application releases.

v1.26.3-aliyun-6.1.a40b0eef

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify a quota. You can specify the quota to which a pod belongs using the quota.scheduling.alibabacloud.com/name annotation on the pod.

    • Added a queue association feature. This feature supports counting only the resources of pods managed by Kube-Queue.

    • Improved the preemption logic. In the new version, CapacityScheduling preemption does not cause the resource usage of the preempted quota's pods to fall below the Min value, nor does it cause the resource usage of the preempting quota's pods to exceed the Min value.

  • Custom elastic resource priority

    • Added an update feature. This feature supports updating the Unit of a ResourcePolicy and the label of a node. After an update, the Deletion-Cost of pods is synchronized.

    • Added IgnoreTerminatingPod. This feature supports ignoring terminating pods when counting the number of pods in a Unit.

    • Added the IgnorePreviousPod option. This feature supports ignoring pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a Unit.

    • Added the PreemptPolicy option. This feature supports preemption attempts between Units.

  • GPUShare

    • Improved the GPUShare scheduling speed. The P99 scheduling latency of the Filter plugin is reduced from milliseconds to microseconds.
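
The following sketch shows how a pod might reference a quota using the quota.scheduling.alibabacloud.com/name key described above. The key comes from this change history (the v1.22 entries describe it as a label rather than an annotation, so check the CapacityScheduling documentation for the version you run); the quota name, pod name, and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo                                  # illustrative name
  annotations:
    # Associates the pod with an existing quota, for example a leaf quota in
    # an ElasticQuotaTree. "team-a" is a placeholder quota name.
    quota.scheduling.alibabacloud.com/name: team-a
spec:
  containers:
    - name: app
      image: nginx:1.25                             # placeholder image
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
```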

v1.26.3-aliyun-5.9-cd4f2cc3

November 16, 2023

  • Improved the display of reasons for scheduling failures due to unsatisfied disk types.

v1.26.3-aliyun-5.8-a1482f93

October 16, 2023

  • Added support for Windows node scheduling.

  • Improved the Coscheduling speed when multiple tasks are scheduled at the same time to reduce task blocking.

v1.26.3-aliyun-5.7-2f57d3ff

September 20, 2023

  • Fixed an occasional Admit failure when scheduling pods with GPUShare.

  • Added a plugin to the scheduler that is aware of remaining IP addresses on a node. Pods are no longer scheduled to a node if it has no remaining IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. This plugin supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler now updates the Usage and Request information of the ElasticQuotaTree every second.

v1.26.3-aliyun-5.5-8b98a1cc

July 05, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Improved the user experience when using Coscheduling with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out if some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.26.3-aliyun-5.4-21b4da4c

July 03, 2023

  • Fixed an issue where the Max property of ResourcePolicy did not take effect.

  • Reduced the impact of a large number of pending pods on scheduler performance. Scheduler throughput is now similar to that of a cluster with no pending pods.

v1.26.3-aliyun-5.1-58a821bf

May 26, 2023

Supports updating fields such as min-available and Matchpolicy for a PodGroup.
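
For context, the following sketch shows one common way to declare a gang and its min-available value, which this release makes updatable after creation. The pod label keys and values below are assumptions based on the label-based gang declaration used with Coscheduling and may differ from the PodGroup CRD form; verify them against the documentation for your scheduler version.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gang-member-0                                  # illustrative name
  labels:
    # Label-based gang declaration (assumed keys): all pods carrying the same
    # gang name are scheduled together once at least min-available of them
    # can be placed.
    pod-group.scheduling.sigs.k8s.io/name: demo-gang
    pod-group.scheduling.sigs.k8s.io/min-available: "3"
spec:
  containers:
    - name: app
      image: nginx:1.25                                # placeholder image
```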

v1.26.3-aliyun-5.0-7b1ccc9d

May 22, 2023

  • The custom elastic resource priority feature supports declaring the maximum number of replicas in the Unit field (see the sketch after this list).

  • Supports GPU topology-aware scheduling.
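
A minimal ResourcePolicy sketch follows, showing a max value declared on a Unit as described above. The apiVersion, kind, and field names (selector, strategy, units, resource, max, nodeSelector) are assumptions based on the custom elastic resource priority scheduling documentation; verify them against the CRD installed in your cluster.

```yaml
apiVersion: scheduling.alibabacloud.com/v1alpha1   # assumed CRD group/version
kind: ResourcePolicy
metadata:
  name: demo-resourcepolicy                        # illustrative name
  namespace: default
spec:
  selector:
    app: demo          # pods with this label follow the policy (placeholder)
  strategy: prefer     # try the units below in order
  units:
    - resource: ecs
      max: 4           # at most 4 replicas are placed in this unit
      nodeSelector:
        node.type: on-demand                       # placeholder node label
    - resource: eci    # remaining replicas overflow to ECI
```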

v1.26.3-aliyun-4.1-a520c096

April 27, 2023

The autoscaler no longer scales out nodes when the ElasticQuota limit is exceeded or the minimum number of pods required by a Gang cannot be met.

Version 1.24 change history

Version

Change time

Change description

v1.24.6-aliyun-6.4.7.e7ffcda5

May 06, 2025

  • Fixed an occasional Max count error in ResourcePolicy.

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.24.6-aliyun-6.5.0.37a567db (Available to allowlisted users)

November 04, 2024

Custom elastic resource priority scheduling

  • Added support for ACS-type units.

v1.24.6-aliyun-6.4.6.c4d551a0

May 16, 2024

  • CapacityScheduling

    • Fixed an issue that occasionally caused a "running AddPod on PreFilter plugin" error.

v1.24.6-aliyun-6.4.5.aab44b4a

May 09, 2024

v1.24.6-aliyun-6.4.3.742bd819

March 18, 2024

  • Shared GPU scheduling

    • Supports submitting a ConfigMap to isolate (blocklist) specific GPU cards from scheduling.

  • Custom elastic resource priority scheduling

    • Added support for the elastic resource type.

v1.24.6-aliyun-6.4.1.14ebc575

February 21, 2024

  • Improved how the NodeResourceFit plugin scores virtual nodes. Virtual nodes now always receive a score of 0, which allows the Preferred type of NodeAffinity to prioritize scheduling to ECS nodes.

  • Added support for NUMA joint allocation.

  • Custom elastic resource priority scheduling

    • Added support for waiting between Units.

  • Fixed an issue in IP-aware scheduling where an incorrect remaining IP count reduced the number of schedulable pods.

v1.24.6-aliyun-6.3.548a9e59

January 10, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where ECI zone affinity and spread constraints did not take effect when custom elastic resource priority scheduling was used.

  • CPU topology-aware scheduling

    • Fixed an issue where the same CPU core could be assigned to a single pod multiple times, which caused pod startup failures on nodes.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when using the alibabacloud.com/burst-resource label to specify a policy, even if the label value was not eci or eci_only.

  • CapacityScheduling

    • Added a feature that automatically enables job preemption in ACK LINGJUN Clusters.

v1.24.6-aliyun-6.2.0196baec

December 21, 2023

Added support for MatchLabelKeys in custom elastic resource priority scheduling to automatically group different versions during application releases.

v1.24.6-aliyun-6.1.1900da95

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify a quota. You can specify the quota to which a pod belongs using the quota.scheduling.alibabacloud.com/name annotation on the pod.

    • Added a queue association feature. This feature supports counting only the resources of pods managed by Kube-Queue.

    • Improved the preemption logic. In the new version, CapacityScheduling preemption does not cause the resource usage of the preempted quota's pods to fall below the Min value, nor does it cause the resource usage of the preempting quota's pods to exceed the Min value.

  • Custom elastic resource priority

    • Added an update feature. This feature supports updating the Unit of a ResourcePolicy and the label of a node. After an update, the Deletion-Cost of pods is synchronized.

    • Added IgnoreTerminatingPod. This feature supports ignoring terminating pods when counting the number of pods in a Unit.

    • Added the IgnorePreviousPod option. This feature supports ignoring pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a Unit.

    • Added the PreemptPolicy option. This feature supports preemption attempts between Units.

  • GPUShare

    • Improved the GPUShare scheduling speed. The P99 scheduling latency of the Filter plugin is reduced from milliseconds to microseconds.

v1.24.6-aliyun-5.9-e777ab5b

November 16, 2023

  • Improved the display of reasons for scheduling failures due to unsatisfied disk types.

v1.24.6-aliyun-5.8-49fd8652

October 16, 2023

  • Added support for Windows node scheduling.

  • Improved the Coscheduling speed when multiple tasks are scheduled at the same time to reduce task blocking.

v1.24.6-aliyun-5.7-62c7302c

September 20, 2023

  • Fixed an occasional Admit failure when scheduling pods with GPUShare.

v1.24.6-aliyun-5.6-2bb99440

August 31, 2023

  • Added a plugin to the scheduler that is aware of remaining IP addresses on a node. Pods are no longer scheduled to a node if it has no remaining IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. This plugin supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler now updates the Usage and Request information of the ElasticQuotaTree every second.

v1.24.6-aliyun-5.5-5e8aac79

July 05, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Improved the user experience when using Coscheduling with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out if some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.24.6-aliyun-5.4-d81e785e

July 03, 2023

  • Fixed an issue where the Max property of ResourcePolicy did not take effect.

  • Reduced the impact of a large number of pending pods on scheduler performance. Scheduler throughput is now similar to that of a cluster with no pending pods.

v1.24.6-aliyun-5.1-95d8a601

May 26, 2023

Supports updating fields such as min-available and Matchpolicy for Coscheduling.

v1.24.6-aliyun-5.0-66224258

May 22, 2023

  • The custom elastic resource priority feature supports declaring the maximum number of replicas in the Unit field.

  • Supports GPU topology-aware scheduling.

v1.24.6-aliyun-4.1-18d8d243

March 31, 2023

ElasticResource supports scheduling pods to Arm VK nodes.

v1.24.6-4.0-330eb8b4-aliyun

March 01, 2023

  • GPUShare:

    • Fixed an incorrect scheduler state that occurred when a GPU node was downgraded.

    • Fixed an issue where GPU nodes could not be allocated with full GPU memory.

    • Supports preempting GPU pods.

  • Coscheduling:

    • Supports declaring a Gang through PodGroup and Koordinator APIs.

    • Supports controlling the retry policy of a Gang through Matchpolicy.

    • Supports Gang Group.

    • The name of a Gang must comply with DNS subdomain rules.

  • Custom parameters: Supports Loadaware-related configuration parameters.

v1.24.6-3.2-4f45222b-aliyun

January 13, 2023

Fixed an issue where inaccurate GPUShare memory calculation prevented pods from using GPU memory properly.

v1.24.6-ack-3.1

November 14, 2022

  • The score feature for GPU shared scheduling is enabled by default (it was disabled by default in previous versions).

  • Supports load-aware scheduling.

v1.24.6-ack-3.0

September 27, 2022

Supports Capacity Scheduling.

v1.24.3-ack-2.0

September 21, 2022

  • Supports GPU shared scheduling.

  • Supports Coscheduling.

  • Supports ECI elastic scheduling.

  • Supports intelligent CPU scheduling.

Version 1.22 change history

Version

Change time

Change description

v1.22.15-aliyun-6.4.5.e54fd757

May 06, 2025

  • Fixed an occasional Max count error in ResourcePolicy.

  • Fixed a potential disk leak that occurred when WaitForFirstConsumer-type disks were used with serverless computing power.

v1.22.15-aliyun-6.4.4.7fc564f8

May 16, 2024

  • CapacityScheduling

    • Fixed an issue that occasionally caused a "running AddPod on PreFilter plugin" error.

v1.22.15-aliyun-6.4.3.e858447b

April 22, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where deleting a ResourcePolicy occasionally led to an abnormal state.

v1.22.15-aliyun-6.4.2.4e00a021

March 18, 2024

  • CapacityScheduling

    • Fixed an issue where preemption occasionally failed in ACK LINGJUN Clusters.

  • Added support for using a ConfigMap to manually blocklist specific GPU cards in a cluster.

v1.22.15-aliyun-6.4.1.1205db85

February 29, 2024

  • Custom elastic resource priority scheduling

    • Fixed occasional concurrency conflicts.

v1.22.15-aliyun-6.4.0.145bb899

February 28, 2024

  • CapacityScheduling

    • Fixed an issue where the feature for specifying quotas caused incorrect quota statistics.

v1.22.15-aliyun-6.3.a669ec6f

January 10, 2024

  • Custom elastic resource priority scheduling

    • Fixed an issue where ECI zone affinity and anti-affinity failed to take effect when using custom elastic resource priority scheduling.

    • Added support for MatchLabelKeys.

  • CPU topology-aware scheduling

    • Fixed an issue where a pod could fail to start on a node because the same CPU core was assigned to the pod multiple times.

  • ECI elastic scheduling

    • Fixed an issue where pods were scheduled to ECI even if the value of the alibabacloud.com/burst-resource label was not eci or eci_only.

  • CapacityScheduling

    • Enabled automatic Job preemption in ACK LINGJUN Clusters.

v1.22.15-aliyun-6.1.e5bf8b06

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify quotas. A pod can now specify its quota using the quota.scheduling.alibabacloud.com/name label.

    • Added a queue association feature. You can now configure a quota to count only the resource usage of pods that are managed by Kube Queue.

    • Optimized the preemption logic. Preemption by CapacityScheduling no longer causes the resource usage of a preempted quota to fall below its Min value. It also prevents the resource usage of the preempting quota from exceeding its Min value.

  • Custom elastic resource priority

    • Added an update feature. You can now update the Unit of a ResourcePolicy and the labels of a node. After an update, the pod's Deletion-Cost is updated accordingly.

    • Added the IgnoreTerminatingPod setting. When enabled, terminating pods are ignored when counting the number of pods in a Unit.

    • Added the IgnorePreviousPod option. This option lets you ignore pods with a CreationTimestamp that is earlier than the associated ResourcePolicy when counting the number of pods in a Unit.

    • Added the PreemptPolicy option. This option enables pod preemption attempts between Units.

  • GPUShare

    • Optimized GPUShare scheduling speed. This optimization reduces the P99 scheduling latency of the filter plugin from milliseconds to microseconds.

v1.22.15-aliyun-5.9-04a5e6eb

November 16, 2023

  • Improved the error message for scheduling failures caused by an unsupported disk type.

v1.22.15-aliyun-5.8-29a640ae

October 16, 2023

  • Added support for scheduling on Windows nodes.

  • Optimized the Coscheduling speed when multiple jobs are scheduled at the same time to reduce job blocking.

v1.22.15-aliyun-5.7-bfcffe21

September 20, 2023

  • Fixed occasional Admit failures during GPUShare pod scheduling.

v1.22.15-aliyun-5.6-6682b487

August 14, 2023

  • A new scheduler plugin tracks the remaining IP addresses on a node. The scheduler no longer schedules pods to a node that has no available IP addresses.

  • A new topology-aware scheduling plugin schedules pods to the same topology domain and automatically retries across multiple domains.

  • The scheduler now updates the Usage and Request information of the ElasticQuotaTree every second.

v1.22.15-aliyun-5.5-82f32f68

July 5, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Improved the use of PodGroups with elastic node pools. When some pods cannot be scheduled due to incorrect node selector configurations, other pods in the PodGroup no longer trigger a scale-out for the node pool.

v1.22.15-aliyun-5.4-3b914a05

July 03, 2023

  • Fixed an issue where the Max property of ResourcePolicy was ineffective.

  • Improved scheduler performance when handling many pending pods. The scheduler throughput is now comparable to its throughput in a cluster with no pending pods.

v1.22.15-aliyun-5.1-8a479926

May 26, 2023

Supports updating PodGroup fields such as min-available and Matchpolicy.

v1.22.15-aliyun-5.0-d1ab67d9

May 22, 2023

  • The custom elastic resource priority feature lets you declare the maximum number of replicas in the Unit field.

  • Supports GPU topology-aware scheduling.

v1.22.15-aliyun-4.1-aec17f35

March 31, 2023

ElasticResource supports scheduling pods to Arm VK nodes.

v1.22.15-aliyun-4.0-384ca5d5

March 3, 2023

  • GPUShare:

    • Fixed an issue where the scheduler reported an incorrect status when a GPU node was downgraded.

    • Fixed an issue that prevented GPU nodes from allocating their full GPU memory.

    • Added support for preempting GPU pods.

  • Coscheduling:

    • You can use a PodGroup and the Koordinator API to declare a Gang.

    • You can use Matchpolicy to control the retry policy for a gang.

    • Gang Groups are supported.

    • The Gang name must follow DNS subdomain rules.

  • Custom parameters: Support for Loadaware configuration parameters.

v1.22.15-2.1-a0512525-aliyun

January 10, 2023

Fixed an issue where inaccurate GPUShare memory calculation prevented Pods from using GPU memory normally.

v1.22.15-ack-2.0

November 30, 2022

  • The scheduler supports custom parameters.

  • Supports load-aware scheduling.

  • Supports elastic scheduling based on node pool priority.

  • Supports scheduling for shared GPU computing power.

v1.22.3-ack-1.1

February 27, 2022

Fixed an issue where shared GPU scheduling failed when the cluster had only one node.

v1.22.3-ack-1.0

January 04, 2021

  • Supports intelligent CPU scheduling.

  • Supports Coscheduling.

  • Supports Capacity Scheduling.

  • Supports ECI elastic scheduling.

  • Supports shared GPU scheduling.

Version 1.20 change history

Version

Change time

Change description

v1.20.11-aliyun-10.6-f95f7336

September 22, 2023

  • Fixed an occasional quota usage statistics error in ElasticQuotaTree.

v1.20.11-aliyun-10.3-416caa03

May 26, 2023

  • Fixed an occasional cache error in GPUShare in earlier Kubernetes versions.

v1.20.11-aliyun-10.2-f4a371d3

April 27, 2023

  • ElasticResource supports scheduling pods to Arm VK nodes.

  • Fixed a scheduling failure in load-aware scheduling caused by CPU usage exceeding the requested amount.

v1.20.11-aliyun-10.0-ae867721

April 03, 2023

Coscheduling supports MatchPolicy.

v1.20.11-aliyun-9.2-a8f8c908

March 08, 2023

  • CapacityScheduling: Fixed an incorrect scheduler state caused by quotas with the same name.

  • Supports disk scheduling.

  • Shared GPU scheduling:

    • Fixed an incorrect scheduler state that occurred when a GPU node was downgraded.

    • Fixed an occasional issue where GPU nodes could not be allocated with full GPU memory.

    • Supports preempting GPU pods.

  • CPU topology-aware scheduling: Pods that have CPU topology-aware scheduling enabled are no longer scheduled to nodes where NUMA is not enabled.

  • Supports custom parameters.

v1.20.4-ack-8.0

August 29, 2022

Fixed known bugs.

v1.20.4-ack-7.0

February 22, 2022

Supports elastic scheduling based on node pool priority.

v1.20.4-ack-4.0

September 02, 2021

  • Supports load-aware scheduling.

  • Supports ECI elastic scheduling.

v1.20.4-ack-3.0

May 26, 2021

Supports intelligent CPU scheduling based on Socket and L3 cache.

v1.20.4-ack-2.0

May 14, 2021

Supports Capacity Scheduling.

v1.20.4-ack-1.0

April 07, 2021

  • Supports intelligent CPU scheduling.

  • Supports Coscheduling.

  • Supports GPU topology-aware scheduling.

  • Supports GPU shared scheduling.

Version 1.18 change history

Version

Change time

Change description

v1.18-ack-4.0

September 02, 2021

Supports load-aware scheduling.

v1.18-ack-3.1

June 05, 2021

ECI scheduling is compatible with node pools.

v1.18-ack-3.0

March 12, 2021

Supports unified scheduling for ECI and ECS.

v1.18-ack-2.0

November 30, 2020

Supports GPU topology-aware scheduling and GPU shared scheduling.

v1.18-ack-1.0

September 24, 2020

Supports intelligent CPU scheduling and Coscheduling.

Version 1.16 change history

Version

Change time

Change description

v1.16-ack-1.0

July 21, 2020

  • Supports intelligent CPU scheduling in Kubernetes v1.16 clusters.

  • Supports Coscheduling in Kubernetes v1.16 clusters.