
Container Service for Kubernetes: kube-scheduler

Last Updated: Dec 19, 2023

kube-scheduler is a control plane component that schedules pods to nodes that satisfy the pods' resource requests and scheduling constraints.

Introduction

kube-scheduler selects a feasible node for each pod in the scheduling queue based on the resource requests of the pod and the allocatable resources on each node. kube-scheduler then ranks the feasible nodes and selects the most suitable node to host the pod. By default, kube-scheduler spreads pods across nodes based on their resource requests.
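For example, kube-scheduler uses the resource requests declared in a pod spec to filter out nodes whose allocatable resources are insufficient. The following is a minimal sketch of such a pod; the name, image, and request values are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: request-demo          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25         # any image works for this example
    resources:
      requests:
        cpu: "500m"           # nodes with less than 0.5 vCPU allocatable are filtered out
        memory: 512Mi         # nodes with less than 512 MiB allocatable are filtered out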

Usage notes

kube-scheduler is automatically installed in a Kubernetes cluster, and you can use it without additional configuration. To update kube-scheduler, log on to the Container Service for Kubernetes (ACK) console, click the cluster that you want to manage, and then choose Operations > Add-ons.

Release notes

Release notes for V1.28

v1.28.3-aliyun-5.8-89c55520 (released 2023-10-28)

All features provided by earlier versions are supported in kube-scheduler V1.28.

Release notes for V1.26

v1.26.3-aliyun-5.8-a1482f93 (released 2023-10-16)

  • Pods can be scheduled to Windows nodes.

  • Coscheduling is optimized to accelerate the scheduling of multiple concurrent tasks and reduce blocked tasks.

v1.26.3-aliyun-5.7-2f57d3ff (released 2023-09-20)

  • The following issue is fixed: When GPU sharing is used to schedule pods, kube-scheduler occasionally fails to admit pods.

  • A plug-in is added to kube-scheduler to detect the available IP addresses on a node. If no IP addresses are available on the node, pods are no longer scheduled to the node.

  • A topology-aware scheduling plug-in is added to kube-scheduler. This plug-in schedules pods to the same topology domain and automatically retries scheduling across multiple topology domains.

  • kube-scheduler updates the usage and request information about ElasticQuotaTree every second.

v1.26.3-aliyun-5.5-8b98a1cc (released 2023-07-05)

  • The following issue is fixed: Pods occasionally remain in the Pending state for a long time when Coscheduling is used.

  • User experience is optimized when Coscheduling and elastic node pools are used at the same time. If some pods in a PodGroup cannot be scheduled or created due to incorrect node selector configurations, the other pods in the PodGroup no longer trigger scale-out activities.

v1.26.3-aliyun-5.4-21b4da4c (released 2023-07-03)

  • The issue that the max parameter in a ResourcePolicy does not take effect is fixed.

  • The impact of a large number of pending pods on kube-scheduler performance is mitigated. This update brings the throughput of kube-scheduler close to the throughput of a cluster without pending pods.

v1.26.3-aliyun-5.1-58a821bf (released 2023-05-26)

Fields such as min-available and Matchpolicy can be updated for PodGroups.

v1.26.3-aliyun-5.0-7b1ccc9d (released 2023-05-22)

  • The maximum number of replicated pods can be specified in the Unit field when you configure priority-based resource scheduling (see the ResourcePolicy sketch after this list).

  • Topology-aware GPU scheduling is supported.

v1.26.3-aliyun-4.1-a520c096 (released 2023-04-27)

Nodes are not added by cluster-autoscaler when the elastic quota is exhausted or when a gang does not contain enough pods for gang scheduling.
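Several entries above refer to priority-based resource scheduling, which ACK configures through a ResourcePolicy object, and to the max parameter of a unit. The following is a minimal, hedged sketch of what such a policy can look like; the API version, field names, and the example node label are assumptions drawn from the ACK priority-based resource scheduling feature, so treat it as an illustration rather than a definitive reference:

# Hypothetical ResourcePolicy sketch: place matching pods in the preferred unit
# first, then fall back to elastic container instances when that unit is full.
apiVersion: scheduling.alibabacloud.com/v1alpha1   # assumed API group/version
kind: ResourcePolicy
metadata:
  name: web-priority-demo            # illustrative name
  namespace: default
spec:
  selector:
    app: web                         # pods with this label are governed by the policy
  strategy: prefer                   # try the units in the listed order
  units:
  - resource: ecs                    # first choice: existing ECS nodes
    nodeSelector:
      example.com/pool: high-priority   # hypothetical node label, for illustration only
    max: 10                          # the max field caps the number of pods placed in this unit
  - resource: eci                    # overflow to elastic container instances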

Release notes for V1.24

v1.24.6-aliyun-5.8-49fd8652 (released 2023-10-16)

  • Pods can be scheduled to Windows nodes.

  • Coscheduling is optimized to accelerate the scheduling of multiple concurrent tasks and reduce blocked tasks.

v1.24.6-aliyun-5.7-62c7302c (released 2023-09-20)

The following issue is fixed: When GPU sharing is used to schedule pods, kube-scheduler occasionally fails to admit pods.

v1.24.6-aliyun-5.6-2bb99440 (released 2023-08-31)

  • A plug-in is added to kube-scheduler to detect the available IP addresses on a node. If no IP addresses are available on the node, pods are no longer scheduled to the node.

  • A topology-aware scheduling plug-in is added to kube-scheduler. This plug-in schedules pods to the same topology domain and automatically retries scheduling across multiple topology domains.

  • kube-scheduler updates the usage and request information about ElasticQuotaTree every second.

v1.24.6-aliyun-5.5-5e8aac79 (released 2023-07-05)

  • The following issue is fixed: Pods occasionally remain in the Pending state for a long time when Coscheduling is used.

  • User experience is optimized when Coscheduling and elastic node pools are used at the same time. If some pods in a PodGroup cannot be scheduled or created due to incorrect node selector configurations, the other pods in the PodGroup no longer trigger scale-out activities.

v1.24.6-aliyun-5.4-d81e785e (released 2023-07-03)

  • The issue that the max parameter in a ResourcePolicy does not take effect is fixed.

  • The impact of a large number of pending pods on kube-scheduler performance is mitigated. This update brings the throughput of kube-scheduler close to the throughput of a cluster without pending pods.

v1.24.6-aliyun-5.1-95d8a601 (released 2023-05-26)

Fields such as min-available and Matchpolicy can be updated for Coscheduling.

v1.24.6-aliyun-5.0-66224258 (released 2023-05-22)

  • The maximum number of replicated pods can be specified in the Unit field when you configure priority-based resource scheduling.

  • Topology-aware GPU scheduling is supported.

v1.24.6-aliyun-4.1-18d8d243 (released 2023-03-31)

Elastic resources can be used to schedule pods to ARM-based virtual nodes.

v1.24.6-4.0-330eb8b4-aliyun (released 2023-03-01)

  • GPU sharing:

    • The kube-scheduler status error during GPU-accelerated node downgrades is fixed.

    • The issue that you cannot allocate all GPU memory of a GPU-accelerated node is fixed.

    • Pods on GPU-accelerated nodes can be preempted.

  • Coscheduling:

    • Gangs can be declared by using PodGroups or by calling the Koordinator API (see the PodGroup sketch after this list).

    • Gang scheduling retries can be controlled by declaring a matchpolicy.

    • Gang groups are supported.

    • Gang names must follow the naming rules for DNS subdomains.

  • Custom parameters are added to support load-aware scheduling configurations.

v1.24.6-3.2-4f45222b-aliyun (released 2023-01-13)

The issue that the GPU memory used by pods is displayed incorrectly due to incorrect shared GPU memory information is fixed.

v1.24.6-ack-3.1 (released 2022-11-14)

  • The score feature is enabled for GPU sharing by default. In earlier versions, the score feature is disabled by default.

  • Load-aware scheduling is supported.

v1.24.6-ack-3.0 (released 2022-09-27)

Capacity scheduling is supported.

v1.24.3-ack-2.0 (released 2022-09-21)

  • GPU sharing is supported.

  • Coscheduling is supported.

  • Elastic Container Instance-based scheduling is supported.

  • Intelligent CPU scheduling is supported.
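Several of the Coscheduling entries above refer to declaring a gang through a PodGroup. The following is a minimal sketch of that pattern, based on the open source coscheduling PodGroup API; the API version and the label key that binds pods to the gang are assumptions and may differ in your cluster:

# Hypothetical gang-scheduling sketch: the gang is admitted only when at least
# minMember pods of the group can be scheduled together.
apiVersion: scheduling.sigs.k8s.io/v1alpha1     # assumed API version of the PodGroup CRD
kind: PodGroup
metadata:
  name: training-gang             # illustrative name
  namespace: default
spec:
  minMember: 3                    # corresponds to the min-available semantics mentioned above
---
apiVersion: v1
kind: Pod
metadata:
  name: worker-0                  # illustrative name
  namespace: default
  labels:
    pod-group.scheduling.sigs.k8s.io/name: training-gang   # assumed label that binds the pod to the gang
spec:
  containers:
  - name: worker
    image: nginx:1.25             # placeholder image
    resources:
      requests:
        cpu: "1"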

Release notes for V1.22

v1.22.15-aliyun-5.8-29a640ae (released 2023-10-16)

  • Pods can be scheduled to Windows nodes.

  • Coscheduling is optimized to accelerate the scheduling of multiple concurrent tasks and reduce blocked tasks.

v1.22.15-aliyun-5.7-bfcffe21 (released 2023-09-20)

The following issue is fixed: When GPU sharing is used to schedule pods, kube-scheduler occasionally fails to admit pods.

v1.22.15-aliyun-5.6-6682b487 (released 2023-08-14)

  • A plug-in is added to kube-scheduler to detect the available IP addresses on a node. If no IP addresses are available on the node, pods are no longer scheduled to the node.

  • A topology-aware scheduling plug-in is added to kube-scheduler. This plug-in schedules pods to the same topology domain and automatically retries scheduling across multiple topology domains.

  • kube-scheduler updates the usage and request information about ElasticQuotaTree every second.

v1.22.15-aliyun-5.5-82f32f68 (released 2023-07-05)

  • The following issue is fixed: Pods occasionally remain in the Pending state for a long time when Coscheduling is used.

  • The user experience of PodGroups in elastic node pools is optimized. If some pods in a PodGroup cannot be scheduled or created due to incorrect node selector configurations, the other pods in the PodGroup no longer trigger scale-out activities.

v1.22.15-aliyun-5.4-3b914a05 (released 2023-07-03)

  • The issue that the max parameter in a ResourcePolicy does not take effect is fixed.

  • The impact of a large number of pending pods on kube-scheduler performance is mitigated. This update brings the throughput of kube-scheduler close to the throughput of a cluster without pending pods.

v1.22.15-aliyun-5.1-8a479926 (released 2023-05-26)

Fields such as min-available and Matchpolicy can be updated for PodGroups.

v1.22.15-aliyun-5.0-d1ab67d9 (released 2023-05-22)

  • The maximum number of replicated pods can be specified in the Unit field when you configure priority-based resource scheduling.

  • Topology-aware GPU scheduling is supported.

v1.22.15-aliyun-4.1-aec17f35 (released 2023-03-31)

Elastic resources can be used to schedule pods to ARM-based virtual nodes.

v1.22.15-aliyun-4.0-384ca5d5 (released 2023-03-03)

  • GPU sharing:

    • The kube-scheduler status error during GPU-accelerated node downgrades is fixed.

    • The issue that you cannot allocate all GPU memory of a GPU-accelerated node is fixed.

    • Pods on GPU-accelerated nodes can be preempted.

  • Coscheduling:

    • Gangs can be declared by using PodGroups or by calling the Koordinator API.

    • Gang scheduling retries can be controlled by declaring a matchpolicy.

    • Gang groups are supported.

    • Gang names must follow the naming rules for DNS subdomains.

  • Custom parameters are added to support load-aware scheduling configurations.

v1.22.15-2.1-a0512525-aliyun (released 2023-01-10)

The issue that the GPU memory used by pods is displayed incorrectly due to incorrect shared GPU memory information is fixed.

v1.22.15-ack-2.0 (released 2022-11-30)

  • Custom parameter settings are supported.

  • Load-aware scheduling is supported.

  • Priority-based scheduling is supported. You can use this feature to schedule pods to node pools based on priorities.

  • The computing power of GPUs can be shared.

v1.22.3-ack-1.1 (released 2022-02-27)

The issue that GPU sharing and scheduling do not work when the cluster contains only one node is fixed.

v1.22.3-ack-1.0 (released 2021-01-04)

  • Intelligent CPU scheduling is supported.

  • Coscheduling is supported.

  • Capacity scheduling is supported.

  • Elastic Container Instance-based scheduling is supported.

  • GPU sharing is supported (see the GPU sharing sketch after this list).
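The GPU sharing entries above let multiple pods share one physical GPU by requesting a slice of its memory instead of a whole device. The following is a minimal sketch of that pattern; the aliyun.com/gpu-mem resource name and the GiB unit are assumptions drawn from the ACK GPU sharing feature, so verify them against your cluster before use:

# Hypothetical GPU sharing sketch: the pod requests 4 GiB of GPU memory
# instead of a whole GPU, so several such pods can share one device.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-share-demo            # illustrative name
spec:
  containers:
  - name: inference
    image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder image
    command: ["sleep", "infinity"]
    resources:
      limits:
        aliyun.com/gpu-mem: 4     # assumed extended resource exposed by GPU sharing, in GiB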

Release notes for V1.20

v1.20.11-aliyun-10.3-416caa03 (released 2023-05-26)

The cache error that occasionally occurs during GPU sharing in earlier Kubernetes versions is fixed.

v1.20.11-aliyun-10.2-f4a371d3 (released 2023-04-27)

  • Elastic resources can be used to schedule pods to ARM-based virtual nodes.

  • The issue that load-aware scheduling does not work as expected when the CPU usage of pods on a node exceeds the CPU request of the pods is fixed.

v1.20.11-aliyun-10.0-ae867721 (released 2023-04-03)

The Matchpolicy field is supported by Coscheduling.

v1.20.11-aliyun-9.2-a8f8c908 (released 2023-03-08)

  • Capacity scheduling: The kube-scheduler status error caused by quotas with the same name is fixed.

  • Cloud disk scheduling is supported.

  • GPU sharing and scheduling:

    • The kube-scheduler status error during GPU-accelerated node downgrades is fixed.

    • The occasional issue that you cannot allocate all GPU memory of a GPU-accelerated node is fixed.

    • Pods on GPU-accelerated nodes can be preempted.

  • Topology-aware CPU scheduling: Pods that have CPU scheduling enabled are not scheduled to nodes that have NUMA disabled.

  • Custom parameters are added.

v1.20.4-ack-8.0 (released 2022-08-29)

Bugs are fixed.

v1.20.4-ack-7.0 (released 2022-02-22)

Priority-based scheduling is supported. You can use this feature to schedule pods to node pools based on priorities.

v1.20.4-ack-4.0 (released 2021-09-02)

  • Load-aware scheduling is supported.

  • Elastic Container Instance-based scheduling is supported.

v1.20.4-ack-3.0 (released 2021-05-26)

Intelligent CPU scheduling based on sockets and L3 cache (last level cache) is supported.

v1.20.4-ack-2.0 (released 2021-05-14)

Capacity scheduling is supported.

v1.20.4-ack-1.0 (released 2021-04-07)

  • Intelligent CPU scheduling is supported (see the CPU pinning sketch after this list).

  • Coscheduling is supported.

  • Topology-aware GPU scheduling is supported.

  • GPU sharing is supported.
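Intelligent (topology-aware) CPU scheduling pins a pod's vCPUs according to the node's CPU topology. The sketch below shows how a workload typically opts in; the cpuset-scheduler annotation key and its value are assumptions drawn from the ACK topology-aware CPU scheduling feature, so treat the exact names as illustrative:

# Hypothetical topology-aware CPU scheduling sketch: the annotation asks the
# scheduler to pin CPUs, and the integer, equal request/limit makes the pod
# eligible for exclusive cores.
apiVersion: v1
kind: Pod
metadata:
  name: cpuset-demo                  # illustrative name
  annotations:
    cpuset-scheduler: required       # assumed annotation that enables CPU pinning
spec:
  containers:
  - name: app
    image: nginx:1.25                # placeholder image
    resources:
      requests:
        cpu: "4"                     # integer CPU count
        memory: 2Gi
      limits:
        cpu: "4"                     # limit equals request, as CPU pinning typically requires
        memory: 2Gi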

Release notes for V1.18

v1.18-ack-4.0 (released 2021-09-02)

Load-aware scheduling is supported.

v1.18-ack-3.1 (released 2021-06-05)

Node pools are supported by Elastic Container Instance-based scheduling.

v1.18-ack-3.0 (released 2021-03-12)

Scheduling based on both Elastic Container Instance and Elastic Compute Service (ECS) is supported (see the virtual node sketch after this list).

v1.18-ack-2.0 (released 2020-11-30)

Topology-aware GPU scheduling and GPU sharing are supported.

v1.18-ack-1.0 (released 2020-09-24)

Intelligent CPU scheduling and Coscheduling are supported.
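Elastic Container Instance-based scheduling places pods on virtual nodes that are backed by elastic container instances instead of ECS nodes. The sketch below shows the general pattern of steering a pod to a virtual node with a nodeSelector and a toleration; the node label and taint values are assumptions and depend on how the virtual node is provisioned in your cluster:

# Hypothetical virtual-node scheduling sketch: the nodeSelector targets a virtual
# node and the toleration lets the pod pass the taint that keeps ordinary pods off it.
apiVersion: v1
kind: Pod
metadata:
  name: eci-demo                         # illustrative name
spec:
  nodeSelector:
    type: virtual-kubelet                # assumed label carried by the virtual node
  tolerations:
  - key: virtual-kubelet.io/provider     # assumed taint key on the virtual node
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: nginx:1.25                    # placeholder image
    resources:
      requests:
        cpu: "1"
        memory: 2Gi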

Release notes for V1.16

v1.16-ack-1.0 (released 2020-07-21)

  • Intelligent CPU scheduling is supported by clusters that run Kubernetes 1.16.

  • Coscheduling is supported by clusters that run Kubernetes 1.16.