
Container Service for Kubernetes: kube-scheduler

Last Updated: Mar 25, 2026

kube-scheduler is a control plane component that assigns pods to nodes. It evaluates each pod's resource requests against each node's Allocatable capacity, then selects the best-fit node using a two-phase pipeline: filtering out ineligible nodes, then ranking the remaining candidates.

How it works

kube-scheduler processes pods in a scheduling queue. For each pod, it reads the pod's declared resource requests and the node's Allocatable property to identify valid nodes, scores all valid nodes, and binds the pod to the highest-scoring one. By default, kube-scheduler distributes pods evenly based on their resource requests.
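The filter-then-score flow can be sketched in a few lines of Python. This is an illustrative simplification only: the dictionaries and the free-CPU scoring heuristic are hypothetical stand-ins, not the scheduler's actual data model or plugin logic.

```python
# Minimal sketch of kube-scheduler's two-phase pipeline.
# Pod/node structures and the scoring heuristic are simplified illustrations.

def filter_nodes(pod, nodes):
    """Filter phase: keep only nodes whose Allocatable covers the pod's requests."""
    return [
        n for n in nodes
        if n["allocatable"]["cpu"] >= pod["requests"]["cpu"]
        and n["allocatable"]["memory"] >= pod["requests"]["memory"]
    ]

def score_node(pod, node):
    """Score phase (toy heuristic): prefer the node with the most free CPU left."""
    return node["allocatable"]["cpu"] - pod["requests"]["cpu"]

def schedule(pod, nodes):
    """Bind the pod to the highest-scoring candidate, or leave it Pending."""
    candidates = filter_nodes(pod, nodes)
    if not candidates:
        return None  # no eligible node: the pod stays Pending
    return max(candidates, key=lambda n: score_node(pod, n))

pod = {"requests": {"cpu": 2, "memory": 4}}
nodes = [
    {"name": "node-a", "allocatable": {"cpu": 1, "memory": 8}},   # filtered out: too little CPU
    {"name": "node-b", "allocatable": {"cpu": 4, "memory": 8}},
    {"name": "node-c", "allocatable": {"cpu": 8, "memory": 16}},  # most free CPU after placement
]
print(schedule(pod, nodes)["name"])  # node-c
```

The real scheduler runs many Filter and Score plugins per pod, but the shape of the decision is the same: eliminate ineligible nodes, rank the rest, bind to the winner.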

For the full upstream specification, see kube-scheduler in the Kubernetes documentation.

Filter and Score plugins

Scheduling runs in two phases:

  1. Filter — each Filter plugin eliminates nodes that cannot run the pod. A node must pass all Filter plugins to remain a candidate.

  2. Score — each Score plugin assigns a score to each remaining node. The final node score is the sum of plugin_score × plugin_weight across all Score plugins.

Weight dominance: Two ACK plugins — elasticresource and resourcepolicy — each have a default weight of 1,000,000. When either is active, its score effectively determines the outcome, making the contributions of other plugins negligible. The next-highest weights are gpushare (20,000) and PreferredNode (10,000). Keep this in mind when configuring workloads that use elastic scaling or node pool priority routing.
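The effect of these weights can be checked with quick arithmetic. The per-plugin scores below are hypothetical, but the weights match the defaults listed above: even a node that wins every low-weight plugin cannot beat a node favored by a 1,000,000-weight plugin.

```python
# Weighted scoring: final_score(node) = sum(plugin_score * plugin_weight).
# Plugin scores are hypothetical; weights are the ACK defaults named above.

def final_score(plugin_scores, weights):
    return sum(plugin_scores[p] * weights[p] for p in plugin_scores)

weights = {
    "resourcepolicy": 1_000_000,
    "gpushare": 20_000,
    "loadawarescheduling": 10,
}

# node_a is preferred only by resourcepolicy; node_b wins every other plugin.
node_a = {"resourcepolicy": 100, "gpushare": 0, "loadawarescheduling": 0}
node_b = {"resourcepolicy": 0, "gpushare": 100, "loadawarescheduling": 100}

print(final_score(node_a, weights))  # 100000000
print(final_score(node_b, weights))  # 2001000 -- resourcepolicy's choice wins
```

With resourcepolicy contributing 100 × 1,000,000 = 100,000,000, the combined 2,001,000 from all other plugins is negligible, which is exactly the dominance behavior described above.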

The default-enabled plugins for each kube-scheduler version are listed below. Open-source plugins follow upstream defaults; ACK-specific plugins are listed explicitly.

Default enabled plugins by version

Component version: v1.30.1-aliyun.6.5.4.fcac2bdf
Filter plugins: Open-source plugins are the same as upstream (see v1.30.1 default Filter plugins). ACK plugins: NodeNUMAResource, topologymanager, EciPodTopologySpread, ipawarescheduling, BatchResourceFit, PreferredNode, gpushare, NetworkTopology, CapacityScheduling, elasticresource, resourcepolicy, gputopology, ECIBinderV1, loadawarescheduling, EciScheduling.
Score plugins: Open-source plugins are the same as upstream (see v1.30.1 default Score plugins). ACK plugins and default weights: NodeNUMAResource (1), ipawarescheduling (1), gpuNUMAJointAllocation (1), PreferredNode (10,000), gpushare (20,000), gputopology (1), numa (1), EciScheduling (2), NodeAffinity (2), elasticresource (1,000,000), resourcepolicy (1,000,000), NodeBEResourceLeastAllocated (1), loadawarescheduling (10).

Component version: v1.28.3-aliyun-6.5.2.7ff57682
Filter plugins: Open-source plugins are the same as upstream (see v1.28.3 default Filter plugins). ACK plugins: NodeNUMAResource, topologymanager, EciPodTopologySpread, ipawarescheduling, BatchResourceFit, PreferredNode, gpushare, NetworkTopology, CapacityScheduling, elasticresource, resourcepolicy, gputopology, ECIBinderV1, loadawarescheduling, EciScheduling.
Score plugins: Open-source plugins are the same as upstream (see v1.28.3 default Score plugins). ACK plugins and default weights: NodeNUMAResource (1), ipawarescheduling (1), gpuNUMAJointAllocation (1), PreferredNode (10,000), gpushare (20,000), gputopology (1), numa (1), EciScheduling (2), NodeAffinity (2), elasticresource (1,000,000), resourcepolicy (1,000,000), NodeBEResourceLeastAllocated (1), loadawarescheduling (10).

Component version: v1.26.3-aliyun-6.6.1.605b8a4f
Filter plugins: Open-source plugins are the same as upstream (see v1.26.3 default Filter plugins). ACK plugins: NodeNUMAResource, topologymanager, EciPodTopologySpread, ipawarescheduling, BatchResourceFit, PreferredNode, gpushare, NetworkTopology, CapacityScheduling, elasticresource, resourcepolicy, gputopology, ECIBinderV1, loadawarescheduling, EciScheduling.
Score plugins: Open-source plugins are the same as upstream (see v1.26.3 default Score plugins). ACK plugins and default weights: NodeNUMAResource (1), ipawarescheduling (1), gpuNUMAJointAllocation (1), PreferredNode (10,000), gpushare (20,000), gputopology (1), numa (1), EciScheduling (2), NodeAffinity (2), elasticresource (1,000,000), resourcepolicy (1,000,000), NodeBEResourceLeastAllocated (1), loadawarescheduling (10).

Plugin descriptions

NodeNUMAResource: Schedules CPU-intensive workloads with NUMA topology awareness, so pods land on nodes where CPUs share the same memory bus. Use it for latency-sensitive workloads that benefit from CPU locality, such as real-time data processing or HPC (High-Performance Computing) jobs. See: Enable CPU topology-aware scheduling.

topologymanager: Allocates NUMA node resources to pods that request co-located CPU and memory, ensuring consistent memory bandwidth. Use it for workloads that require memory bandwidth consistency, such as AI inference or in-memory databases. See: Enable NUMA topology-aware scheduling.

EciPodTopologySpread: Extends topology spread constraints for virtual node scheduling, distributing pods across availability zones when using Elastic Container Instance (ECI) virtual nodes. Use it to balance pod distribution across availability zones in ECI virtual node deployments. See: Enable virtual node scheduling policies for a cluster.

ipawarescheduling: Filters out nodes with insufficient remaining IP addresses in the VPC (Virtual Private Cloud) subnet, preventing pod startup failures from IP exhaustion. Use it for VPC subnets with limited IP ranges where IP exhaustion is a risk. See: Scheduling FAQ.

BatchResourceFit: Enables colocation of multiple workload types on the same node, improving overall cluster utilization. Use it in mixed online and offline workload environments where you want to maximize node utilization. See: Best practices for colocation of multi-type workloads.

PreferredNode: Reserves nodes for node pools with auto scaling enabled, preventing regular pods from occupying capacity intended for elastic scale-out. Use it in clusters with auto scaling node pools where node reservation is needed. See: Node auto scaling.

gpushare: Schedules pods that share a single GPU card across multiple workloads. Default weight: 20,000. Use it for multiple inference tasks or lightweight GPU jobs that do not need exclusive GPU allocation. See: Shared GPU scheduling.

NetworkTopology: Schedules pods with awareness of physical network topology, placing distributed workload components on nodes with optimal interconnects. Use it for distributed AI training jobs where inter-node network bandwidth significantly affects performance. See: Topology-aware scheduling.

CapacityScheduling: Enforces elastic quota limits across namespaces, guaranteeing minimum resources and capping maximum usage per team. Use it in multi-tenant clusters where multiple teams share capacity and need resource isolation. See: Use Capacity Scheduling.

elasticresource: [Discontinued] Managed ECI elastic scheduling. Use resourcepolicy for new deployments. See: Use ElasticResource to implement ECI elastic scheduling (discontinued).

resourcepolicy: Schedules pods across node pools in priority order, with elastic overflow to ECI when Elastic Compute Service (ECS) capacity is insufficient. Default weight: 1,000,000, which dominates node selection when active. Use it for workloads that should prefer ECS nodes but burst to ECI when ECS capacity runs out. See: Priority-based scheduling of custom elastic resources.

gputopology: Schedules multi-GPU workloads to nodes where the required GPUs are connected with optimal interconnects such as NVLink, maximizing GPU-to-GPU bandwidth. Use it for large-scale distributed training that requires high GPU-to-GPU bandwidth. See: GPU topology-aware scheduling.

ECIBinderV1: Handles the binding phase for pods that target ECI capacity in elastic scheduling scenarios. Use it in ECI elastic scheduling deployments; it works alongside EciScheduling. See: Schedule pods to ECI.

loadawarescheduling: Distributes pods based on real-time CPU and memory utilization rather than declared requests, avoiding hot spots on over-utilized nodes. Use it for bursty or over-provisioned workloads where declared requests do not reflect actual usage. See: Use load-aware scheduling.

EciScheduling: Manages virtual node selection for pods targeting ECI. Works alongside ECIBinderV1 to route pods through the ECI scheduling path. Use it in ECI elastic scheduling deployments. See: Enable virtual node scheduling policies for a cluster.

Upgrade the component

kube-scheduler is installed by default and requires no configuration. Keep it up to date to get the latest scheduling optimizations and bug fixes.

To upgrade kube-scheduler, log on to the Container Service for Kubernetes (ACK) console, click the target cluster, then navigate to Operations Management > Component Management.

Change history

Version 1.34 change history

v1.34.0-apsara.6.11.8.a32868e8 (released January 5, 2026)
New features: Optimized shared GPU scheduling efficiency. Added metrics for serverless resource scheduling — processing latency, timestamp tracking, and concurrent configurations — to improve observability of serverless workloads.
Bug fixes: Fixed GPUShare pod annotation updates in the Reserve phase to ensure correct persistence of scheduling results. Fixed incorrect NUMA ID removal. Fixed a post-restart failure to reconstruct NUMA allocation results from pod annotations, which caused uneven resource allocation. Fixed an issue where NominatedNodeName was not cleared under specific conditions such as out-of-stock or concurrent preemption. Fixed an issue where resources for pods with a NominatedNodeName were not reserved by quota when Reservation was disabled. Fixed gang-scheduling behavior so that a gang fails as a whole when preemption fails, preventing multiple invalid scheduling attempts for multi-replica jobs. Fixed a preemption failure in NetworkTopology caused by incorrect StateData and Filter object calls. Fixed a scheduling issue with self-built Virtual Kubelets that have GPUs. Fixed a scheduling issue when multiple containers in a pod request a full GPU card. Optimized ElasticQuota Min/Max Guarantee logic so that a quota exceeding its Min value can only preempt itself after scheduling.

v1.34.0-apsara.6.11.7.43cab345 (released December 8, 2025)
New features: Network topology-aware scheduling now supports EP size scheduling. For PyTorchJob, pods are automatically placed contiguously based on their index.
Bug fixes: Improved auto scaling efficiency. The scheduler no longer updates the Pod Scheduled condition when ACS pod scheduling is triggered, preventing unintended node pool scale-out. Fixed a post-restart failure to read ACS GPU partitions of scheduled pods.

v1.34.0-apsara.6.11.6.3c0b732b (released November 10, 2025)
New features: Added support for __IGNORE__RESOURCE__. Added support for the alibabacloud.com/schedule-admission annotation to mark a pod as temporarily unschedulable. Added support for ACS shared GPU. Optimized PersistentVolumeClaim (PVC) scheduling to speed up pod creation with disks. Fixed incorrect ScheduleCycle updates when ResourcePolicy and Gang are used together.
Bug fixes: Fixed a memory leak in remaining IP-aware scheduling. Fixed a statistics error when CapacityScheduling quota is updated before a pod is bound.

v1.34.0-apsara.6.11.5.3c117f21 (released October 23, 2025)
Bug fixes: Fixed an issue where the alibabacloud.com/acs: "true" or alibabacloud.com/eci: "true" label did not take effect. Fixed a scheduling issue when multiple containers in a pod request nvidia.com/gpu. Fixed a potential scheduler crash under high-concurrency ACS compute requests.

v1.34.0-apsara.6.11.3.ff6b62d8 (released September 17, 2025)
Initial support for all previous features in ACK clusters of version 1.34.

Version 1.33 change history

v1.33.0-apsara.6.11.8.709bb6e6 (released January 5, 2026)
New features: Optimized shared GPU scheduling efficiency. Added metrics for serverless resource scheduling — processing latency, timestamp tracking, and concurrent configurations — to improve observability.
Bug fixes: Fixed GPUShare pod annotation updates in the Reserve phase. Fixed incorrect NUMA ID removal. Fixed post-restart NUMA allocation reconstruction failure. Fixed NominatedNodeName not being cleared under out-of-stock or concurrent preemption conditions. Fixed resources for pods with NominatedNodeName not being reserved by quota when Reservation was disabled. Fixed gang-scheduling failure behavior to prevent multiple invalid scheduling attempts. Fixed a NetworkTopology preemption failure caused by incorrect StateData and Filter object calls. Fixed scheduling issues with self-built GPU Virtual Kubelets and multiple containers requesting a full GPU card. Optimized ElasticQuota Min/Max Guarantee logic.

v1.33.0-apsara.6.11.7.4a6779f8 (released December 5, 2025)
New features: Network topology-aware scheduling now supports EP size scheduling; PyTorchJob pods are placed contiguously by index.
Bug fixes: Improved auto scaling efficiency. The scheduler no longer updates the Pod Scheduled condition when an ACS instance is created, preventing node pool scale-out. Fixed post-restart failure to read ACS GPU partitions. Fixed an issue where a pod with a PVC that had a SelectedNode could not be scheduled.

v1.33.0-apsara.6.11.6.2fce98cb (released November 10, 2025)
New features: Added support for __IGNORE__RESOURCE__. Added support for the alibabacloud.com/schedule-admission annotation. Added support for ACS shared GPU. Optimized PVC scheduling to speed up pod creation with disks. Fixed incorrect ScheduleCycle updates when ResourcePolicy and Gang are used together.
Bug fixes: Fixed a memory leak in remaining IP-aware scheduling. Fixed a statistics error when CapacityScheduling quota is updated before a pod is bound.

v1.33.0-apsara.6.11.5.8dd6f5f4 (released October 23, 2025)
Bug fixes: Fixed an issue where the alibabacloud.com/acs: "true" or alibabacloud.com/eci: "true" label did not take effect.

v1.33.0-apsara.6.11.4.77470105 (released September 15, 2025)
Bug fixes: Fixed a scheduling issue when multiple containers in a pod request nvidia.com/gpu. Fixed a potential scheduler crash under high-concurrency ACS compute requests.

v1.33.0-apsara.6.11.3.ed953a31 (released September 8, 2025)
New features: ElasticQuotaTree now supports the alibabacloud.com/ignore-empty-resource annotation to ignore undeclared resource limits in a quota. NetworkTopology now supports declaring discretized distribution via a constraint in JobNetworkTopology.
Bug fixes: Fixed a potential scheduler crash when PodTopologySpread was in use.

v1.33.0-aliyun.6.11.2.330dcea7 (released August 19, 2025)
Improved GOAT scheduling determinism to avoid treating nodes as not ready when node.cloudprovider.kubernetes.io/uninitialized and node.kubernetes.io/unschedulable taints are present. Fixed an ElasticQuotaTree fairness check that incorrectly marked quotas with an empty Min value or empty Request as unmet. Fixed a potential scheduler crash when creating ACS instances. Fixed a scheduler error when an init container's resources were empty (29d1951).

v1.33.0-aliyun.6.11.1.382cd0a6 (released July 25, 2025)
Fixed an issue where using ElasticResource to implement ECI elastic scheduling (discontinued) did not take effect.

v1.33.0-aliyun.6.11.0.87e9673b (released July 18, 2025)
Improved GOAT scheduling determinism to handle concurrent NodeReady state changes during pod scheduling. Fixed incorrect gang pod count when a PodGroup CR is deleted and recreated while scheduled pods exist. Fixed ElasticQuota preemption to prevent pods with the same policy from being preempted and to prevent intra-quota preemption when usage is below the Min value. Fixed remaining IP-aware scheduling to correctly block scheduling to nodes with insufficient IP addresses. Fixed TimeoutOrExceedMax and ExceedMax ResourcePolicy policies that were broken since version 6.9.x. Fixed incorrect MaxPod calculation after elastic scaling in ResourcePolicy. Added a scheduling fairness check to ElasticQuotaTree: when a quota with unmet requirements has pending pods, no new pods are scheduled for quotas that already meet their guarantees. Enable this with the StrictFairness parameter (on by default when the preemption algorithm is None). Added the ScheduleAdmission feature: pods with the alibabacloud.com/schedule-admission annotation are not scheduled. Added support for pods with alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true labels — the scheduler checks only volume and virtual node plugins for these pods and skips all other checks when no PVC mount is present. Added a ResourcePolicy security check: units are skipped if patching pod labels could affect a ReplicaSet or StatefulSet's MatchLabels.

v1.33.0-aliyun.6.9.4.8b58e6b4 (released June 10, 2025)
Fixed InterPodAffinity and PodTopologySpread becoming invalid during continuous pod scheduling. Fixed occasional scheduling anomalies with ResourcePolicy. Improved scheduler interaction with auto-scaling node pools. Fixed incorrect pod count in ResourcePolicy for priority-based elastic resource scheduling. Fixed a potential cloud disk leak when using WaitForFirstConsumer disks with serverless compute.

v1.33.0-aliyun.6.9.2.09bce458 (released April 28, 2025)
Initial support for all previous features in ACK clusters of version 1.33.

Version 1.32 change history

v1.32.0-apsara.6.11.8.df9f2fa6 (released January 5, 2026)
New features: Optimized shared GPU scheduling efficiency. Added serverless resource scheduling metrics for improved observability.
Bug fixes: Fixed GPUShare pod annotation updates in the Reserve phase. Fixed incorrect NUMA ID removal. Fixed post-restart NUMA allocation reconstruction failure. Fixed NominatedNodeName clearance and quota reservation issues. Fixed gang-scheduling failure behavior. Fixed a NetworkTopology preemption failure. Fixed scheduling issues with self-built GPU Virtual Kubelets and multi-container full GPU requests. Optimized ElasticQuota Min/Max Guarantee logic.

v1.32.0-apsara.6.11.7.4489ebf4 (released December 10, 2025)
Bug fixes: Improved auto scaling efficiency. The scheduler no longer updates the Pod Scheduled condition when an ACS instance is created. Fixed post-restart failure to read ACS GPU partitions.

v1.32.0-apsara.6.11.6.03248691 (released November 10, 2025)
New features: Added support for __IGNORE__RESOURCE__. Added support for the alibabacloud.com/schedule-admission annotation. Added support for ACS shared GPU. Optimized PVC scheduling for faster pod creation with disks. Fixed incorrect ScheduleCycle updates when ResourcePolicy and Gang are used together.
Bug fixes: Fixed a memory leak in remaining IP-aware scheduling. Fixed a statistics error when CapacityScheduling quota is updated before a pod is bound.

v1.32.0-apsara.6.11.5.c774d3c3 (released October 23, 2025)
Bug fixes: Fixed an issue where the alibabacloud.com/acs: "true" or alibabacloud.com/eci: "true" label did not take effect.

v1.32.0-apsara.6.11.4.4a4f4843 (released September 15, 2025)
Bug fixes: Fixed a scheduling issue when multiple containers in a pod request nvidia.com/gpu. Fixed a potential scheduler crash under high-concurrency ACS compute requests.

v1.32.0-apsara.6.11.3.b651c575 (released September 12, 2025)
New features: ElasticQuotaTree now supports the alibabacloud.com/ignore-empty-resource annotation. NetworkTopology now supports discretized distribution via JobNetworkTopology.

v1.32.0-aliyun.6.11.2.58302423 (released August 21, 2025)
Improved GOAT scheduling determinism for uninitialized and unschedulable node taints. Fixed an ElasticQuotaTree fairness check error for empty Min or Request quotas. Fixed a potential scheduler crash when creating ACS instances.

v1.32.0-aliyun.6.11.1.ab632d8c (released July 25, 2025)
Fixed an issue where using ElasticResource to implement ECI elastic scheduling (discontinued) did not take effect.

v1.32.0-aliyun.6.11.0.0350a0e7 (released July 18, 2025)
Improved GOAT scheduling determinism for concurrent NodeReady changes. Fixed incorrect gang pod count on PodGroup CR recreations. Fixed ElasticQuota preemption logic. Fixed remaining IP-aware scheduling. Fixed TimeoutOrExceedMax and ExceedMax ResourcePolicy policies broken since version 6.9.x. Fixed incorrect MaxPod calculation after elastic scaling. Added ElasticQuotaTree fairness check. Added ScheduleAdmission feature. Added support for alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true pod labels. Added ResourcePolicy security check.

v1.32.0-aliyun.6.9.4.d5a8a355 (released June 4, 2025)
Fixed InterPodAffinity and PodTopologySpread becoming invalid during continuous scheduling. Fixed ResourcePolicy scheduling anomalies. Fixed an ElasticQuota preemption issue.

v1.32.0-aliyun.6.9.3.515ac311 (released May 14, 2025)
Improved scheduler interaction with auto-scaling node pools. Fixed incorrect pod count in ResourcePolicy. Fixed a potential cloud disk leak with WaitForFirstConsumer disks and serverless compute.

v1.32.0-aliyun.6.9.2.09bce458 (released April 16, 2025)
Fixed an ElasticQuota preemption anomaly. Added support for scheduling pods to ACS GPU-HPN nodes.

v1.32.0-aliyun.6.8.6.bd13955d (released April 2, 2025)
Fixed an issue in ACK serverless clusters where WaitForFirstConsumer cloud disks were not created via the Container Storage Interface (CSI) plugin.

v1.32.0-aliyun.6.9.0.a1c7461b (released February 28, 2025)
Added support for node remaining IP-aware scheduling. Added a plugin to check resources before Kube Queue tasks are dequeued. Added support for switching the preemption algorithm via component configuration.

v1.32.0-aliyun.6.8.5.28a2aed7 (released February 19, 2025)
Fixed repeated cloud disk creation when using ECI or ACS. Fixed the ResourcePolicy Max value becoming invalid after declaring PodLabels.

v1.32.0-aliyun.6.8.4.2b585931 (released January 17, 2025)
Initial support for all previous features in ACK clusters of version 1.32.

Version 1.31 change history

v1.31.0-apsara.6.11.5.28c6b51a (released October 20, 2025)
Bug fixes: Fixed an issue where the alibabacloud.com/acs: "true" or alibabacloud.com/eci: "true" label did not take effect.

v1.31.0-apsara.6.11.4.69d7e1fa (released September 15, 2025)
Bug fixes: Fixed a scheduling issue when multiple containers in a pod request nvidia.com/gpu. Fixed a potential scheduler crash under high-concurrency ACS compute requests.

v1.31.0-apsara.6.11.3.9b41ad4a (released September 12, 2025)
New features: ElasticQuotaTree now supports the alibabacloud.com/ignore-empty-resource annotation. NetworkTopology now supports discretized distribution via JobNetworkTopology. Improved GOAT scheduling determinism for uninitialized and unschedulable node taints.
Bug fixes: Fixed an ElasticQuotaTree fairness check error for empty Min or Request quotas. Fixed a potential scheduler crash when creating ACS instances.

v1.31.0-aliyun.6.11.1.c9ed2f40 (released July 25, 2025)
Fixed an issue where using ElasticResource to implement ECI elastic scheduling (discontinued) did not take effect.

v1.31.0-aliyun.6.11.0.ea1f0f94 (released July 18, 2025)
Improved GOAT scheduling determinism for concurrent NodeReady changes. Fixed incorrect gang pod count on PodGroup CR recreations. Fixed ElasticQuota preemption logic. Fixed remaining IP-aware scheduling. Fixed TimeoutOrExceedMax and ExceedMax ResourcePolicy policies broken since version 6.9.x. Fixed incorrect MaxPod calculation after elastic scaling. Added ElasticQuotaTree fairness check with StrictFairness parameter. Added ScheduleAdmission feature. Added support for alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true pod labels. Added ResourcePolicy security check.

v1.31.0-aliyun.6.9.4.c8e540e8 (released June 4, 2025)
Fixed InterPodAffinity and PodTopologySpread becoming invalid. Fixed ResourcePolicy scheduling anomalies. Fixed an ElasticQuota preemption issue.

v1.31.0-aliyun.6.9.3.051bb0e8 (released May 14, 2025)
Improved scheduler interaction with auto-scaling node pools. Fixed incorrect pod count in ResourcePolicy. Fixed a potential cloud disk leak with WaitForFirstConsumer disks and serverless compute.

v1.31.0-aliyun.6.8.6.520f223d (released April 2, 2025)
Fixed an issue in ACK serverless clusters where WaitForFirstConsumer cloud disks were not created via the CSI plugin.

v1.31.0-aliyun.6.9.0.8287816e (released February 28, 2025)
Added support for node remaining IP-aware scheduling. Added a plugin to check resources before Kube Queue tasks are dequeued. Added support for switching the preemption algorithm via component configuration.

v1.31.0-aliyun.6.8.5.2c6ea085 (released February 19, 2025)
Fixed repeated cloud disk creation when using ECI or ACS. Fixed the ResourcePolicy Max value becoming invalid after declaring PodLabels.

v1.31.0-aliyun.6.8.4.8f585f26 (released January 2, 2025)
Priority-based scheduling of custom elastic resources: Added support for ACS GPU. Fixed a potential ECI instance leak when PVCs were used in an ACK serverless cluster. Capacity Scheduling: Fixed incorrect ElasticQuotaTree usage in ACS resource normalization scenarios.

v1.31.0-aliyun.6.8.3.eeb86afc (released December 16, 2024)
Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.

v1.31.0-aliyun.6.8.2.eeb86afc (released December 5, 2024)
Priority-based scheduling of custom elastic resources: Added support for defining PodAnnotations in a unit.

v1.31.0-aliyun.6.8.1.116b8e1f (released December 2, 2024)
Improved network topology-aware scheduling performance. Fixed an issue where ECI pods could be scheduled back to ECS nodes. Load-aware scheduling no longer restricts DaemonSet pods.

v1.31.0-aliyun.6.7.1.1943173f (released November 6, 2024)
Priority-based scheduling of custom elastic resources: Added support for trigger-based pod autoscaling. [Deprecated] The resource: elastic field in a unit is deprecated — use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead. CPU topology-aware scheduling: Fixed an issue that occurred when ECS instance types change.

v1.31.0-aliyun.6.7.0.740ba623 (released November 4, 2024)
Capacity Scheduling: Fixed elastic quota preemption being triggered even when ElasticQuotaTree was absent. Priority-based scheduling of custom elastic resources: Added support for ACS-type units.

v1.31.0-aliyun.6.6.1.5bd14ab0 (released October 22, 2024)
Fixed an occasional invalid score caused by PodTopologySpread. Improved Coscheduling event messages to include failure counts. Removed spurious warning events during virtual node scheduling. Network topology-aware scheduling: Fixed an issue where pods could not be scheduled after preemption. NUMA topology-aware scheduling: Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.31.0-aliyun.6.6.0.ba473715 (released September 13, 2024)
Initial support for all previous features in ACK clusters of version 1.31.

Version 1.30 change history

VersionRelease dateChanges
v1.30.3-apsara.6.11.7.3cfed0f9December 10, 2025Bug fixes: Improved auto scaling efficiency. The scheduler no longer updates the Pod Scheduled condition when an ACS instance is created. Fixed post-restart failure to read ACS GPU partitions.
v1.30.3-apsara.6.11.6.a298df6bNovember 10, 2025New features: Added support for __IGNORE__RESOURCE__. Added support for the alibabacloud.com/schedule-admission annotation. Added support for ACS shared GPU. Optimized PVC scheduling. Fixed incorrect ScheduleCycle updates when ResourcePolicy and Gang are used together. ElasticQuotaTree now supports the alibabacloud.com/ignore-empty-resource annotation. Improved GOAT scheduling determinism for uninitialized and unschedulable node taints. Bug fixes: Fixed a memory leak in remaining IP-aware scheduling. Fixed a statistics error when CapacityScheduling quota is updated before a pod is bound. Fixed an ElasticQuotaTree fairness check error for empty Min or Request quotas.
v1.30.3-apsara.6.11.3.bc707580October 23, 2025Bug fixes: Fixed an issue where the alibabacloud.com/acs: "true" or alibabacloud.com/eci: "true" label did not take effect.
v1.30.3-apsara.6.11.2.463d59c9September 15, 2025Bug fixes: Fixed a scheduling issue when multiple containers in a pod request nvidia.com/gpu. Fixed a potential scheduler crash under high-concurrency ACS compute requests.
v1.30.3-aliyun.6.11.1.c005a0b0July 25, 2025Fixed an issue where using ElasticResource to implement ECI elastic scheduling (discontinued) did not take effect.
v1.30.3-aliyun.6.11.0.84cdcafbJuly 18, 2025Improved GOAT scheduling determinism for concurrent NodeReady changes. Fixed incorrect gang pod count on PodGroup CR recreations. Fixed ElasticQuota preemption logic. Fixed remaining IP-aware scheduling. Fixed TimeoutOrExceedMax and ExceedMax ResourcePolicy policies broken since version 6.9.x. Fixed incorrect MaxPod calculation after elastic scaling. Added ElasticQuotaTree fairness check. Added ScheduleAdmission feature. Added support for alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true pod labels. Added ResourcePolicy security check.
v1.30.3-aliyun.6.9.4.818b6506June 4, 2025Fixed InterPodAffinity and PodTopologySpread becoming invalid. Fixed ResourcePolicy scheduling anomalies. Fixed an ElasticQuota preemption issue.
v1.30.3-aliyun.6.9.3.ce7e2fafMay 14, 2025Improved scheduler interaction with auto-scaling node pools. Fixed incorrect pod count in ResourcePolicy. Fixed a potential cloud disk leak with WaitForFirstConsumer disks and serverless compute.
v1.30.3-aliyun.6.8.6.40d5fdf4April 2, 2025Fixed an issue in ACK serverless clusters where WaitForFirstConsumer cloud disks were not created via the CSI plugin.
v1.30.3-aliyun.6.9.0.f08e56a7February 28, 2025Added support for node remaining IP-aware scheduling. Added a plugin to check resources before Kube Queue tasks are dequeued. Added support for switching the preemption algorithm via component configuration.
v1.30.3-aliyun.6.8.5.af20249cFebruary 19, 2025Fixed repeated cloud disk creation when using ECI or ACS. Fixed the ResourcePolicy Max value becoming invalid after declaring PodLabels.
v1.30.3-aliyun.6.8.4.946f90e8January 2, 2025Priority-based scheduling of custom elastic resources: Added support for ACS GPU. Fixed a potential ECI instance leak when PVCs were used in an ACK serverless cluster. Capacity Scheduling: Fixed incorrect ElasticQuotaTree usage in ACS resource normalization scenarios.
v1.30.3-aliyun.6.8.3.697ce9b5December 16, 2024Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.
v1.30.3-aliyun.6.8.2.a5fa5dbdDecember 5, 2024Priority-based scheduling of custom elastic resources: Added support for defining PodAnnotations in a unit.
v1.30.3-aliyun.6.8.1.6dc0fd75December 2, 2024Improved network topology-aware scheduling performance. Fixed an issue where ECI pods could be scheduled back to ECS nodes. Load-aware scheduling no longer restricts DaemonSet pods.
- v1.30.3-aliyun.6.7.1.d992180a (November 6, 2024): Priority-based scheduling of custom elastic resources: Added support for trigger-aware scaling based on pod count. [Deprecated] The `resource: elastic` field in a unit is deprecated; use `k8s.aliyun.com/resource-policy-wait-for-ecs-scaling` in PodLabels instead. CPU topology-aware scheduling: Fixed an issue that occurred when ECS instance types change.
- v1.30.3-aliyun.6.7.0.da474ec5 (November 4, 2024): Capacity Scheduling: Fixed elastic quota preemption being triggered even when ElasticQuotaTree was absent. Priority-based scheduling of custom elastic resources: Added support for ACS-type units.
- v1.30.3-aliyun.6.6.4.b8940a30 (October 22, 2024): Fixed an occasional invalid score caused by PodTopologySpread.
- v1.30.3-aliyun.6.6.3.994ade8a (October 18, 2024): Improved Coscheduling event messages to include failure counts. Removed spurious warning events during virtual node scheduling.
- v1.30.3-aliyun.6.6.2.0be67202 (September 23, 2024): Network topology-aware scheduling: Fixed an issue where pods could not be scheduled after preemption. NUMA topology-aware scheduling: Fixed an issue where it did not take effect.
- v1.30.3-aliyun.6.6.1.d98352c6 (September 11, 2024): Added support for preemption in network topology-aware scheduling. SlurmOperator: Added support for hybrid scheduling in Kubernetes and Slurm clusters. Coscheduling: Added support for the latest community CRD version.
- v1.30.3-aliyun.6.5.6.fe7bc1d5 (August 20, 2024): Fixed the abnormal pod affinity/anti-affinity scheduling issue introduced in v1.30.1-aliyun.6.5.1.5dad3be8.
- v1.30.3-aliyun.6.5.5.8b10ee7c (August 1, 2024): Rebased to community version v1.30.3.
- v1.30.1-aliyun.6.5.5.fcac2bdf (August 1, 2024): Capacity Scheduling: Fixed a quota calculation error when Coscheduling and CapacityScheduling are used together. GPUShare: Fixed an error calculating remaining resources on compute-power scheduling nodes. Priority-based scheduling of custom elastic resources: Optimized node scale-out behavior when ResourcePolicy and ClusterAutoscaler are used together; nodes are no longer scaled out when all units have reached their Max value.
- v1.30.1-aliyun.6.5.4.fcac2bdf (July 22, 2024): Coscheduling: Fixed a quota statistics error when using ECI. Fixed an occasional "xxx is in cache, so can't be assumed" error.
- v1.30.1-aliyun.6.5.3.9adaeb31 (July 10, 2024): Fixed pods staying in the Pending state for extended periods, introduced in v1.30.1-aliyun.6.5.1.5dad3be8.
- v1.30.1-aliyun.6.5.1.5dad3be8 (June 27, 2024): Coscheduling: Improved scheduling performance. Added support for sequential pod scheduling. Added support for declaring equivalence classes to improve scheduling performance. Optimized existing scheduler plugins using PreEnqueue.
- v1.30.1-aliyun.6.4.7.6643d15f (May 31, 2024): Initial support for all previous features in ACK clusters of version 1.30.
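
Several entries above deprecate the `resource: elastic` unit type in favor of the `k8s.aliyun.com/resource-policy-wait-for-ecs-scaling` pod label declared through a unit's PodLabels. The following is a minimal sketch of the replacement, assuming the usual ResourcePolicy shape; the API version, `selector`, `strategy`, and exact field casing are illustrative and should be checked against the ResourcePolicy reference.

```yaml
# Hypothetical ResourcePolicy sketch. The deprecated form declared a unit with
# `resource: elastic`; the replacement keeps an ordinary ECS unit and marks its
# pods with the label quoted in the changelog above.
apiVersion: scheduling.alibabacloud.com/v1alpha1   # assumed API group/version
kind: ResourcePolicy
metadata:
  name: app-policy
  namespace: default
spec:
  selector:
    app: my-app                # pods this policy applies to (illustrative)
  strategy: prefer
  units:
    - resource: ecs            # schedule to ECS nodes in this unit
      podLabels:
        # replaces the deprecated `resource: elastic` unit type
        k8s.aliyun.com/resource-policy-wait-for-ecs-scaling: "true"
```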

Version 1.28 change history

- v1.28.12-apsara-6.11.5.db9be0f5 (October 20, 2025): Bug fixes: Fixed an issue where the `alibabacloud.com/acs: "true"` and `alibabacloud.com/eci: "true"` labels did not take effect.
- v1.28.12-apsara-6.11.4.a48c5b6c (September 15, 2025): Bug fixes: Fixed a scheduling issue when multiple containers in a pod request `nvidia.com/gpu`. Fixed a potential scheduler crash under high-concurrency ACS compute requests.
- v1.28.12-apsara-6.11.3.1a06b13e (September 9, 2025): New features: ElasticQuotaTree now supports the `alibabacloud.com/ignore-empty-resource` annotation.
- v1.28.12-aliyun-6.11.1.f23c663c (July 25, 2025): Fixed an issue where using ElasticResource to implement ECI elastic scheduling (discontinued) did not take effect.
- v1.28.12-aliyun-6.11.0.4003ef92 (July 18, 2025): Improved GOAT scheduling determinism for concurrent NodeReady changes. Fixed incorrect gang pod counts on PodGroup CR recreation. Fixed ElasticQuota preemption logic. Fixed remaining-IP-aware scheduling. Fixed the TimeoutOrExceedMax and ExceedMax ResourcePolicy policies, broken since version 6.9.x. Fixed incorrect MaxPod calculation after elastic scaling. Added an ElasticQuotaTree fairness check. Added the ScheduleAdmission feature. Added support for the `alibabacloud.com/eci=true`, `alibabacloud.com/acs=true`, and `eci=true` pod labels. Added a ResourcePolicy security check.
- v1.28.12-aliyun-6.9.4.206fc5f8 (June 4, 2025): Fixed InterPodAffinity and PodTopologySpread becoming invalid. Fixed ResourcePolicy scheduling anomalies. Fixed an ElasticQuota preemption issue.
- v1.28.12-aliyun-6.9.3.cd73f3fe (May 14, 2025): Improved scheduler interaction with auto-scaling node pools. Fixed an incorrect pod count in ResourcePolicy. Fixed a potential cloud disk leak with WaitForFirstConsumer disks and serverless compute.
- v1.28.12-aliyun-6.8.6.5f05e0ac (April 2, 2025): Fixed an issue in ACK serverless clusters where WaitForFirstConsumer cloud disks were not created through the CSI plugin.
- v1.28.12-aliyun-6.9.0.6a13fa65 (February 28, 2025): Added support for node remaining-IP-aware scheduling. Added a plugin to check resources before Kube Queue tasks are dequeued. Added support for switching the preemption algorithm via component configuration.
- v1.28.12-aliyun-6.8.5.b6aef0d1 (February 19, 2025): Fixed repeated disk creation when using ECI or ACS. Fixed the ResourcePolicy Max value becoming invalid after declaring PodLabels.
- v1.28.12-aliyun-6.8.4.b27c0009 (January 2, 2025): Priority-based scheduling of custom elastic resources: Added support for ACS GPU. Fixed a potential ECI instance leak when PVCs were used in an ACK serverless cluster. Capacity Scheduling: Fixed incorrect ElasticQuotaTree usage in ACS resource normalization scenarios.
- v1.28.12-aliyun-6.8.3.70c756e1 (December 16, 2024): Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.
- v1.28.12-aliyun-6.8.2.9a307479 (December 5, 2024): Priority-based scheduling of custom elastic resources: Added support for defining PodAnnotations in a unit.
- v1.28.12-aliyun-6.8.1.db6cdeb8 (December 2, 2024): Improved network topology-aware scheduling performance. Fixed an issue where ECI pods could be scheduled back to ECS nodes. Load-aware scheduling no longer restricts DaemonSet pods.
- v1.28.12-aliyun-6.7.1.44345748 (November 6, 2024): Priority-based scheduling of custom elastic resources: Added support for detecting the pod count that triggers elastic scaling. [Deprecated] The `resource: elastic` field in a unit is deprecated; use `k8s.aliyun.com/resource-policy-wait-for-ecs-scaling` in PodLabels instead. CPU topology-aware scheduling: Fixed an issue that occurred when ECS instance types change.
- v1.28.12-aliyun-6.7.0.b97fca02 (November 4, 2024): Capacity Scheduling: Fixed elastic quota preemption being triggered even when ElasticQuotaTree was absent. Priority-based scheduling of custom elastic resources: Added support for ACS-type units.
- v1.28.12-aliyun-6.6.4.e535a698 (October 22, 2024): Fixed an occasional invalid score caused by PodTopologySpread.
- v1.28.12-aliyun-6.6.3.188f750b (October 11, 2024): Improved Coscheduling event messages to include failure counts. Removed spurious warning events during virtual node scheduling.
- v1.28.12-aliyun-6.6.2.054ec1f5 (September 23, 2024): Network topology-aware scheduling: Fixed an issue where pods could not be scheduled after preemption. NUMA topology-aware scheduling: Fixed an issue where it did not take effect.
- v1.28.12-aliyun-6.6.1.348b251d (September 11, 2024): Added support for preemption in network topology-aware scheduling. SlurmOperator: Added support for hybrid scheduling in Kubernetes and Slurm clusters.
- v1.28.12-aliyun-6.5.4.79e08301 (August 20, 2024): Fixed the abnormal pod affinity/anti-affinity scheduling issue introduced in v1.28.3-aliyun-6.5.1.364d020b.
- v1.28.12-aliyun-6.5.3.aefde017 (August 1, 2024): Rebased to community version v1.28.12.
- v1.28.3-aliyun-6.5.3.79e08301 (August 1, 2024): Capacity Scheduling: Fixed a quota calculation error when Coscheduling and CapacityScheduling are used together. GPUShare: Fixed an error calculating remaining resources on compute-power scheduling nodes. Priority-based scheduling of custom elastic resources: Optimized node scale-out behavior when ResourcePolicy and ClusterAutoscaler are used together.
- v1.28.3-aliyun-6.5.2.7ff57682 (July 22, 2024): Coscheduling: Fixed a quota statistics error when using ECI. Fixed an occasional "xxx is in cache, so can't be assumed" error. Fixed pods staying in the Pending state for extended periods, introduced in v1.28.3-aliyun-6.5.1.364d020b.
- v1.28.3-aliyun-6.5.1.364d020b (June 27, 2024): Coscheduling: Improved scheduling performance. Added support for sequential pod scheduling. Added support for declaring equivalence classes. Optimized existing scheduler plugins using PreEnqueue.
- v1.28.3-aliyun-6.4.7.0f47500a (May 24, 2024): Network topology-aware scheduling: Fixed an occasional scheduling failure.
- v1.28.3-aliyun-6.4.6.f32dc398 (May 16, 2024): Shared GPU scheduling: Fixed an issue in LINGJUN clusters where GPU scheduling became abnormal after changing the `ack.node.gpu.schedule` label from `egpu` to `default`. Capacity Scheduling: Fixed an occasional "running AddPod on PreFilter plugin" error. Elastic scheduling: Added the "wait for eci provisioning" event when an ECI instance is created using `alibabacloud.com/burst-resource`.
- v1.28.3-aliyun-6.4.5.a8b4a599 (May 9, 2024): NUMA co-scheduling: Updated the NUMA joint allocation API. See Enable NUMA topology-aware scheduling.
- v1.28.3-aliyun-6.4.3.f57771d7 (March 18, 2024): Shared GPU scheduling: Added support for using a ConfigMap to specify card isolation. Priority-based scheduling of custom elastic resources: Added support for the elastic resource type.
- v1.28.3-aliyun-6.4.2.25bc61fb (March 1, 2024): Disabled the SchedulerQueueingHints feature by default. See Pull Request #122291.
- v1.28.3-aliyun-6.4.1.c7db7450 (February 21, 2024): Added support for NUMA joint allocation. Priority-based scheduling of custom elastic resources: Added support for waiting between units. Fixed a remaining-IP-aware scheduling issue where an incorrect IP count reduced the number of schedulable pods.
- v1.28.3-aliyun-6.3.1ab2185e (January 10, 2024): Priority-based scheduling of custom elastic resources: Fixed ECI zone affinity and discretization not taking effect. CPU topology-aware scheduling: Fixed repeated allocation of the same CPU core to a single pod, which caused pods to fail to start. ECI elastic scheduling: Fixed pods being scheduled to ECI when the `alibabacloud.com/burst-resource` label value was neither `eci` nor `eci_only`.
- v1.28.3-aliyun-6.2.84d57ad9 (December 21, 2023): Priority-based scheduling of custom elastic resources: Added support for MatchLabelKeys to automatically group different application versions during rollouts.
- v1.28.3-aliyun-6.1.ac950aa0 (December 13, 2023): Capacity Scheduling: Added quota assignment via the `quota.scheduling.alibabacloud.com/name` pod annotation. Added queue association to count only resources managed by Kube Queue. Optimized preemption logic: preemption no longer drops the preempted quota below its Min value or lifts the preempting quota above its Min value. Priority-based scheduling of custom elastic resources: Added support for updating ResourcePolicy unit and node labels with Deletion-Cost synchronization. Added IgnoreTerminatingPod to exclude terminating pods from unit pod counts. Added IgnorePreviousPod to exclude pods older than the associated ResourcePolicy from unit counts. Added PreemptPolicy for inter-unit pod preemption attempts. GPUShare: Reduced P99 Filter plugin scheduling latency from milliseconds to microseconds.
- v1.28.3-aliyun-5.8-89c55520 (October 28, 2023): Initial support for all previous features in ACK clusters of version 1.28.
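
The 6.1 entries above add quota assignment through a pod annotation. The following is a minimal sketch of a pod opting into a named quota, assuming an ElasticQuotaTree already defines a matching leaf quota; the quota name `team-a` and the pod spec are illustrative.

```yaml
# Sketch: binding a pod to a named quota via the annotation quoted in the
# changelog. The scheduler counts this pod's requests against that quota.
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
  annotations:
    quota.scheduling.alibabacloud.com/name: team-a   # leaf quota name (assumed)
spec:
  containers:
    - name: main
      image: registry.k8s.io/pause:3.9
      resources:
        requests:
          cpu: "1"        # counted against the team-a quota's usage
          memory: 1Gi
```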

Version 1.26 change history

- v1.26.3-aliyun-6.8.7.5a563072 (November 27, 2025): Fixed a scheduling issue caused by NUMAAwareResource returning a score greater than 100.
- v1.26.3-aliyun-6.8.7.fec3f2bc (May 14, 2025): Fixed a potential cloud disk leak when using WaitForFirstConsumer disks with serverless compute.
- v1.26.3-aliyun-6.9.0.293e663c (February 28, 2025): Added support for node remaining-IP-aware scheduling. Added a plugin to check resources before Kube Queue tasks are dequeued. Added support for switching the preemption algorithm via component configuration.
- v1.26.3-aliyun-6.8.5.7838feba (February 19, 2025): Fixed repeated disk creation when using ECI or ACS. Fixed the ResourcePolicy Max value becoming invalid after declaring PodLabels.
- v1.26.3-aliyun-6.8.4.4b180111 (January 2, 2025): Priority-based scheduling of custom elastic resources: Added support for ACS GPU. Fixed a potential ECI instance leak when PVCs were used in an ACK serverless cluster. Capacity Scheduling: Fixed incorrect ElasticQuotaTree usage in ACS resource normalization scenarios.
- v1.26.3-aliyun-6.8.3.95c73e0b (December 16, 2024): Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.
- v1.26.3-aliyun-6.8.2.9c9fa19f (December 5, 2024): Priority-based scheduling of custom elastic resources: Added support for defining PodAnnotations in a unit.
- v1.26.3-aliyun-6.8.1.a12db674 (December 2, 2024): Fixed an issue where ECI pods could be scheduled back to ECS nodes. Load-aware scheduling no longer restricts DaemonSet pods.
- v1.26.3-aliyun-6.7.1.d466c692 (November 6, 2024): Priority-based scheduling of custom elastic resources: Added support for detecting the pod count that triggers elastic scaling. [Deprecated] The `resource: elastic` field in a unit is deprecated; use `k8s.aliyun.com/resource-policy-wait-for-ecs-scaling` in PodLabels instead. CPU topology-aware scheduling: Fixed an issue that occurred when ECS instance types change.
- v1.26.3-aliyun-6.7.0.9c293fb7 (November 4, 2024): Capacity Scheduling: Fixed elastic quota preemption being triggered even when ElasticQuotaTree was absent. Priority-based scheduling of custom elastic resources: Added support for ACS-type units.
- v1.26.3-aliyun-6.6.4.7a8f3f9d (October 22, 2024): Removed spurious warning events during virtual node scheduling.
- v1.26.3-aliyun-6.6.3.67f250fe (September 4, 2024): SlurmOperator: Improved plugin scheduling performance.
- v1.26.3-aliyun-6.6.2.9ea0a6f5 (August 30, 2024): InterPodAffinity: Fixed an issue where removing taints from a new node did not trigger pod rescheduling.
- v1.26.3-aliyun-6.6.1.605b8a4f (July 31, 2024): SlurmOperator: Added support for hybrid scheduling in Kubernetes and Slurm clusters. Priority-based scheduling of custom elastic resources: Optimized behavior to avoid unnecessary node scale-out when used with auto-scaling node pools.
- v1.26.3-aliyun-6.4.7.2a77d106 (June 27, 2024): Coscheduling: Improved scheduling speed.
- v1.26.3-aliyun-6.4.6.78cacfb4 (May 16, 2024): Capacity Scheduling: Fixed an occasional "running AddPod on PreFilter plugin" error. Elastic scheduling: Added the "wait for eci provisioning" event when an ECI instance is created using `alibabacloud.com/burst-resource`.
- v1.26.3-aliyun-6.4.5.7f36e9b3 (May 9, 2024): NUMA co-scheduling: Updated the NUMA joint allocation API. See Enable NUMA topology-aware scheduling.
- v1.26.3-aliyun-6.4.3.e7de0a1e (March 18, 2024): Shared GPU scheduling: Added support for using a ConfigMap to specify card isolation. Priority-based scheduling of custom elastic resources: Added support for the elastic resource type.
- v1.26.3-aliyun-6.4.1.d24bc3c3 (February 21, 2024): Optimized NodeResourceFit scoring for virtual nodes so that preferred-type node affinity correctly prioritizes ECS nodes. Added support for NUMA joint allocation. Priority-based scheduling of custom elastic resources: Added support for waiting between units. Fixed a remaining-IP-aware scheduling issue where an incorrect IP count reduced the number of schedulable pods.
- v1.26.3-aliyun-6.3.33fdc082 (January 10, 2024): Priority-based scheduling of custom elastic resources: Fixed ECI zone affinity and discretization not taking effect. CPU topology-aware scheduling: Fixed repeated allocation of the same CPU core to a single pod. ECI elastic scheduling: Fixed pods being scheduled to ECI when the `alibabacloud.com/burst-resource` label value was neither `eci` nor `eci_only`. Capacity Scheduling: Automatically enabled job preemption in ACK LINGJUN clusters.
- v1.26.3-aliyun-6.2.d9c15270 (December 21, 2023): Priority-based scheduling of custom elastic resources: Added support for MatchLabelKeys to automatically group different application versions during rollouts.
- v1.26.3-aliyun-6.1.a40b0eef (December 13, 2023): Capacity Scheduling: Added quota assignment via `quota.scheduling.alibabacloud.com/name`. Added queue association for Kube Queue. Optimized preemption logic. Priority-based scheduling of custom elastic resources: Added ResourcePolicy unit and node label update support. Added the IgnoreTerminatingPod and IgnorePreviousPod options. Added PreemptPolicy for inter-unit preemption. GPUShare: Reduced P99 Filter plugin scheduling latency from milliseconds to microseconds.
- v1.26.3-aliyun-5.9-cd4f2cc3 (November 16, 2023): Improved the display of scheduling failure reasons for unsatisfied cloud disk types.
- v1.26.3-aliyun-5.8-a1482f93 (October 16, 2023): Added support for Windows node scheduling. Improved Coscheduling speed when handling concurrent multi-task scheduling to reduce blocking.
- v1.26.3-aliyun-5.7-2f57d3ff (September 20, 2023): Fixed GPUShare occasionally failing to admit pods. Added remaining-IP-aware scheduling; pods are no longer scheduled to nodes with no remaining IP addresses. Added topology-aware scheduling with cross-topology-domain retry support. The scheduler now updates ElasticQuotaTree Usage and Request information at 1-second intervals.
- v1.26.3-aliyun-5.5-8b98a1cc (July 5, 2023): Fixed pods occasionally staying in the Pending state during Coscheduling. Improved Coscheduling behavior with elastic node pools; other pods in a PodGroup no longer trigger scale-out when some pods fail to schedule due to incorrect node selector configurations.
- v1.26.3-aliyun-5.4-21b4da4c (July 3, 2023): Fixed an issue where the ResourcePolicy Max property was invalid. Improved scheduler throughput with many pending pods to match performance when no pending pods are present.
- v1.26.3-aliyun-5.1-58a821bf (May 26, 2023): Added support for updating PodGroup fields such as min-available and MatchPolicy.
- v1.26.3-aliyun-5.0-7b1ccc9d (May 22, 2023): Priority-based scheduling of custom elastic resources: Added support for declaring maximum replica counts in the Unit field. Added support for GPU topology-aware scheduling.
- v1.26.3-aliyun-4.1-a520c096 (April 27, 2023): The autoscaler no longer scales out nodes when the ElasticQuota limit is exceeded or the gang pod count is not met.
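
Several entries above fix ECI elastic scheduling so that only the `eci` and `eci_only` values of the `alibabacloud.com/burst-resource` label take effect. The following is a minimal sketch of opting a pod into ECI bursting; the pod spec is illustrative, and the comment reflects commonly documented semantics that should be verified against the ECI elastic scheduling guide.

```yaml
# Sketch: per-pod opt-in to ECI elastic scheduling via the label quoted in the
# changelog. Per the fixes above, values other than eci and eci_only are ignored.
apiVersion: v1
kind: Pod
metadata:
  name: burst-demo
  labels:
    # eci: burst to ECI when regular node capacity is insufficient;
    # eci_only: schedule to ECI only.
    alibabacloud.com/burst-resource: eci
spec:
  containers:
    - name: main
      image: registry.k8s.io/pause:3.9
```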

Version 1.24 change history

- v1.24.6-aliyun-6.4.7.e7ffcda5 (May 6, 2025): Fixed an occasional incorrect Max count in ResourcePolicy. Fixed a potential cloud disk leak when using WaitForFirstConsumer disks with serverless compute.
- v1.24.6-aliyun-6.5.0.37a567db (November 4, 2024, available on allowlist): Priority-based scheduling of custom elastic resources: Added support for ACS-type units.
- v1.24.6-aliyun-6.4.6.c4d551a0 (May 16, 2024): Capacity Scheduling: Fixed an occasional "running AddPod on PreFilter plugin" error.
- v1.24.6-aliyun-6.4.5.aab44b4a (May 9, 2024): NUMA joint allocation: Updated the NUMA joint allocation API. See Enable NUMA topology-aware scheduling.
- v1.24.6-aliyun-6.4.3.742bd819 (March 18, 2024): Shared GPU scheduling: Added support for using a ConfigMap to specify card isolation. Priority-based scheduling of custom elastic resources: Added support for the elastic resource type.
- v1.24.6-aliyun-6.4.1.14ebc575 (February 21, 2024): Optimized NodeResourceFit scoring for virtual nodes: a virtual node is always assigned a score of 0, ensuring that a preferred node affinity rule correctly prioritizes scheduling to ECS nodes. Added support for NUMA joint allocation. Priority-based scheduling of custom elastic resources: Added support for waiting between units. Fixed a remaining-IP-aware scheduling issue where an incorrect IP count reduced the number of schedulable pods.
- v1.24.6-aliyun-6.3.548a9e59 (January 10, 2024): Priority-based scheduling of custom elastic resources: Fixed ECI zone affinity and discretization not taking effect. CPU topology-aware scheduling: Fixed repeated CPU core allocation to a single pod. ECI elastic scheduling: Fixed pods being scheduled to ECI when the `alibabacloud.com/burst-resource` label value was neither `eci` nor `eci_only`. Capacity Scheduling: Automatically enabled job preemption in ACK LINGJUN clusters.
- v1.24.6-aliyun-6.2.0196baec (December 21, 2023): Priority-based scheduling of custom elastic resources: Added support for MatchLabelKeys.
- v1.24.6-aliyun-6.1.1900da95 (December 13, 2023): Capacity Scheduling: Added quota assignment via `quota.scheduling.alibabacloud.com/name`. Added queue association for Kube Queue. Optimized preemption logic. Priority-based scheduling of custom elastic resources: Added ResourcePolicy unit and node label update support. Added the IgnoreTerminatingPod and IgnorePreviousPod options. Added PreemptPolicy for inter-unit preemption. GPUShare: Reduced P99 Filter plugin scheduling latency from milliseconds to microseconds.
- v1.24.6-aliyun-5.9-e777ab5b (November 16, 2023): Improved the display of scheduling failure reasons for unsatisfied cloud disk types.
- v1.24.6-aliyun-5.8-49fd8652 (October 16, 2023): Added support for Windows node scheduling. Improved Coscheduling speed for concurrent multi-task scheduling.
- v1.24.6-aliyun-5.7-62c7302c (September 20, 2023): Fixed GPUShare occasionally failing to admit pods.
- v1.24.6-aliyun-5.6-2bb99440 (August 31, 2023): Added remaining-IP-aware scheduling. Added topology-aware scheduling with cross-topology-domain retry support. The scheduler now updates ElasticQuotaTree information at 1-second intervals.
- v1.24.6-aliyun-5.5-5e8aac79 (July 5, 2023): Fixed pods occasionally staying in the Pending state during Coscheduling. Improved Coscheduling behavior with elastic node pools.
- v1.24.6-aliyun-5.4-d81e785e (July 3, 2023): Fixed an issue where the ResourcePolicy Max property was invalid. Improved scheduler throughput with many pending pods.
- v1.24.6-aliyun-5.1-95d8a601 (May 26, 2023): Added support for updating PodGroup fields such as min-available and MatchPolicy.
- v1.24.6-aliyun-5.0-66224258 (May 22, 2023): Priority-based scheduling of custom elastic resources: Added support for declaring maximum replica counts in the Unit field. Added support for GPU topology-aware scheduling.
- v1.24.6-aliyun-4.1-18d8d243 (March 31, 2023): ElasticResource now supports scheduling pods to Arm virtual nodes (VK).
- v1.24.6-4.0-330eb8b4-aliyun (March 1, 2023): GPUShare: Fixed an incorrect scheduler state when a GPU node is downgraded. Fixed GPU nodes that could not be fully allocated with GPU memory. Added support for preempting GPU pods. Coscheduling: Added support for declaring a gang using PodGroup and Koordinator APIs. Added support for controlling gang retry policy using MatchPolicy. Added support for Gang Group. Gang names must comply with DNS subdomain naming rules. Added support for load-aware scheduling configuration parameters.
- v1.24.6-3.2-4f45222b-aliyun (January 13, 2023): Fixed inaccurate GPUShare memory calculation that prevented pods from using GPU memory properly.
- v1.24.6-ack-3.1 (November 14, 2022): The GPU sharing score feature is now enabled by default. Added support for load-aware scheduling.
- v1.24.6-ack-3.0 (September 27, 2022): Added support for Capacity Scheduling.
- v1.24.3-ack-2.0 (September 21, 2022): Added support for shared GPU scheduling, Coscheduling, ECI elastic scheduling, and CPU topology-aware scheduling.

Version 1.22 change history

- v1.22.15-aliyun-6.4.5.e54fd757 (May 6, 2025): Fixed an occasional incorrect Max count in ResourcePolicy. Fixed a potential cloud disk leak when using WaitForFirstConsumer disks with serverless compute.
- v1.22.15-aliyun-6.4.4.7fc564f8 (May 16, 2024): Capacity Scheduling: Fixed an occasional "running AddPod on PreFilter plugin" error.
- v1.22.15-aliyun-6.4.3.e858447b (April 22, 2024): Priority-based scheduling of custom elastic resources: Fixed an occasional abnormal status when deleting a ResourcePolicy.
- v1.22.15-aliyun-6.4.2.4e00a021 (March 18, 2024): Capacity Scheduling: Fixed an occasional preemption failure in ACK LINGJUN clusters. Added support for manually blocklisting specific GPU cards in a cluster via a ConfigMap.
- v1.22.15-aliyun-6.4.1.1205db85 (February 29, 2024): Priority-based scheduling of custom elastic resources: Fixed an occasional concurrency conflict.
- v1.22.15-aliyun-6.4.0.145bb899 (February 28, 2024): Capacity Scheduling: Fixed an issue where specifying a quota caused incorrect quota statistics.
- v1.22.15-aliyun-6.3.a669ec6f (January 10, 2024): Priority-based scheduling of custom elastic resources: Fixed ECI zone affinity and discretization not taking effect. Added support for MatchLabelKeys. CPU topology-aware scheduling: Fixed repeated CPU core allocation to a single pod. ECI elastic scheduling: Fixed pods being scheduled to ECI when the `alibabacloud.com/burst-resource` label value was neither `eci` nor `eci_only`. Capacity Scheduling: Automatically enabled job preemption in ACK LINGJUN clusters.
- v1.22.15-aliyun-6.1.e5bf8b06 (December 13, 2023): Capacity Scheduling: Added quota assignment via `quota.scheduling.alibabacloud.com/name`. Added queue association for Kube Queue. Optimized preemption logic. Priority-based scheduling of custom elastic resources: Added ResourcePolicy unit and node label update support. Added the IgnoreTerminatingPod and IgnorePreviousPod options. Added PreemptPolicy for inter-unit preemption. GPUShare: Reduced P99 Filter plugin scheduling latency from milliseconds to microseconds.
- v1.22.15-aliyun-5.9-04a5e6eb (November 16, 2023): Improved the display of scheduling failure reasons for unsatisfied cloud disk types.
- v1.22.15-aliyun-5.8-29a640ae (October 16, 2023): Added support for Windows node scheduling. Improved Coscheduling speed for concurrent multi-task scheduling.
- v1.22.15-aliyun-5.7-bfcffe21 (September 20, 2023): Fixed GPUShare occasionally failing to admit pods.
- v1.22.15-aliyun-5.6-6682b487 (August 14, 2023): Added remaining-IP-aware scheduling. Added topology-aware scheduling with cross-topology-domain retry support. The scheduler now updates ElasticQuotaTree information at 1-second intervals.
- v1.22.15-aliyun-5.5-82f32f68 (July 5, 2023): Fixed pods occasionally staying in the Pending state during Coscheduling. Improved Coscheduling behavior with PodGroup and elastic node pools.
- v1.22.15-aliyun-5.4-3b914a05 (July 3, 2023): Fixed an issue where the ResourcePolicy Max property was invalid. Improved scheduler throughput with many pending pods.
- v1.22.15-aliyun-5.1-8a479926 (May 26, 2023): Added support for updating PodGroup fields such as min-available and MatchPolicy.
- v1.22.15-aliyun-5.0-d1ab67d9 (May 22, 2023): Priority-based scheduling of custom elastic resources: Added support for declaring maximum replica counts in the Unit field. Added support for GPU topology-aware scheduling.
- v1.22.15-aliyun-4.1-aec17f35 (March 31, 2023): ElasticResource now supports scheduling pods to Arm virtual nodes (VK).
- v1.22.15-aliyun-4.0-384ca5d5 (March 3, 2023): GPUShare: Fixed an incorrect scheduler state when a GPU node is downgraded. Fixed GPU nodes that could not be fully allocated with GPU memory. Added support for preempting GPU pods. Coscheduling: Added support for the PodGroup and Koordinator APIs. Added MatchPolicy-based gang retry control. Added support for Gang Group. Gang names must comply with DNS subdomain naming rules. Added support for load-aware scheduling configuration parameters.
- v1.22.15-2.1-a0512525-aliyun (January 10, 2023): Fixed inaccurate GPUShare memory calculation that prevented pods from using GPU memory properly.
- v1.22.15-ack-2.0 (November 30, 2022): Added support for custom parameters. Added support for load-aware scheduling, elastic scheduling based on node pool priority, and shared GPU compute scheduling.
- v1.22.3-ack-1.1 (February 27, 2022): Fixed a shared GPU scheduling failure when the cluster had only one node.
- v1.22.3-ack-1.0 (January 4, 2021): Added support for CPU topology-aware scheduling, Coscheduling, Capacity Scheduling, ECI elastic scheduling, and shared GPU scheduling.

Version 1.20 change history

- v1.20.11-aliyun-10.6-f95f7336 (September 22, 2023): Fixed an occasional incorrect quota usage calculation in ElasticQuotaTree.
- v1.20.11-aliyun-10.3-416caa03 (May 26, 2023): Fixed a GPUShare cache error that occurred on earlier Kubernetes versions.
- v1.20.11-aliyun-10.2-f4a371d3 (April 27, 2023): ElasticResource now supports scheduling pods to Arm virtual nodes (VK). Fixed a load-aware scheduling failure caused by CPU usage exceeding the requested amount.
- v1.20.11-aliyun-10.0-ae867721 (April 3, 2023): Added support for MatchPolicy in Coscheduling.
- v1.20.11-aliyun-9.2-a8f8c908 (March 8, 2023): Capacity Scheduling: Fixed an incorrect scheduler state caused by duplicate quota names. Added support for cloud disk scheduling. Shared GPU scheduling: Fixed an incorrect scheduler state when a GPU node is downgraded. Fixed GPU nodes that could occasionally not be fully allocated with GPU memory. Added support for preempting GPU pods. CPU topology-aware scheduling: Pods with CPU scheduling enabled are no longer scheduled to nodes without NUMA enabled. Added support for custom parameters.
- v1.20.4-ack-8.0 (August 29, 2022): Fixed known bugs.
- v1.20.4-ack-7.0 (February 22, 2022): Added support for elastic scheduling based on node pool priority.
- v1.20.4-ack-4.0 (September 2, 2021): Added support for load-aware scheduling and ECI elastic scheduling.
- v1.20.4-ack-3.0 (May 26, 2021): Added support for CPU topology-aware scheduling based on socket and L3 cache topology.
- v1.20.4-ack-2.0 (May 14, 2021): Added support for Capacity Scheduling.
- v1.20.4-ack-1.0 (April 7, 2021): Added support for CPU topology-aware scheduling, Coscheduling, GPU topology-aware scheduling, and shared GPU scheduling.

Version 1.18 change history

- v1.18-ack-4.0 (September 2, 2021): Added support for load-aware scheduling.
- v1.18-ack-3.1 (June 5, 2021): Made ECI scheduling compatible with node pools.
- v1.18-ack-3.0 (March 12, 2021): Added support for unified scheduling of ECI and ECS.
- v1.18-ack-2.0 (November 30, 2020): Added support for GPU topology-aware scheduling and shared GPU scheduling.
- v1.18-ack-1.0 (September 24, 2020): Added support for CPU topology-aware scheduling and Coscheduling.

Version 1.16 change history

- v1.16-ack-1.0 (July 21, 2020): Added support for CPU topology-aware scheduling and Coscheduling in Kubernetes v1.16 clusters.