Horizontal Pod Autoscaler (HPA) is a component that can automatically scale the number of pods in Kubernetes clusters. This topic provides answers to some commonly asked questions about HPA.
If the current field displays <unknown> and a FailedGetResourceMetric warning is returned in the HPA conditions, as shown in the following example, Cloud Controller Manager (CCM) cannot collect monitoring metrics from the resources.
```
Name:                                                  kubernetes-tutorial-deployment
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 10 Jun 2019 11:46:48 +0530
Reference:                                             Deployment/kubernetes-tutorial-deployment
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 2%
Min replicas:                                          1
Max replicas:                                          4
Deployment pods:                                       1 current / 0 desired
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
  Type     Reason                   Age                      From                       Message
  ----     ------                   ----                     ----                       -------
  Warning  FailedGetResourceMetric  3m3s (x1009 over 4h18m)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
```
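For reference, conditions and events like these can be viewed by describing the HPA object. The HPA name and namespace below match the example output above:

```
kubectl describe hpa kubernetes-tutorial-deployment --namespace default
```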
Possible causes:
- Cause 1: The data source from which metrics are collected is unavailable.
Run the kubectl top pod command to check whether metric data of the monitored pods is returned. If no metric data is returned, run the kubectl get apiservice command to check whether the metrics-server component is available. Sample output:
```
NAME                                   SERVICE                      AVAILABLE   AGE
v1.                                    Local                        True        29h
v1.admissionregistration.k8s.io        Local                        True        29h
v1.apiextensions.k8s.io                Local                        True        29h
v1.apps                                Local                        True        29h
v1.authentication.k8s.io               Local                        True        29h
v1.authorization.k8s.io                Local                        True        29h
v1.autoscaling                         Local                        True        29h
v1.batch                               Local                        True        29h
v1.coordination.k8s.io                 Local                        True        29h
v1.monitoring.coreos.com               Local                        True        29h
v1.networking.k8s.io                   Local                        True        29h
v1.rbac.authorization.k8s.io           Local                        True        29h
v1.scheduling.k8s.io                   Local                        True        29h
v1.storage.k8s.io                      Local                        True        29h
v1alpha1.argoproj.io                   Local                        True        29h
v1alpha1.fedlearner.k8s.io             Local                        True        5h11m
v1beta1.admissionregistration.k8s.io   Local                        True        29h
v1beta1.alicloud.com                   Local                        True        29h
v1beta1.apiextensions.k8s.io           Local                        True        29h
v1beta1.apps                           Local                        True        29h
v1beta1.authentication.k8s.io          Local                        True        29h
v1beta1.authorization.k8s.io           Local                        True        29h
v1beta1.batch                          Local                        True        29h
v1beta1.certificates.k8s.io            Local                        True        29h
v1beta1.coordination.k8s.io            Local                        True        29h
v1beta1.events.k8s.io                  Local                        True        29h
v1beta1.extensions                     Local                        True        29h
...
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        29h
...
v1beta1.networking.k8s.io              Local                        True        29h
v1beta1.node.k8s.io                    Local                        True        29h
v1beta1.policy                         Local                        True        29h
v1beta1.rbac.authorization.k8s.io      Local                        True        29h
v1beta1.scheduling.k8s.io              Local                        True        29h
v1beta1.storage.k8s.io                 Local                        True        29h
v1beta2.apps                           Local                        True        29h
v2beta1.autoscaling                    Local                        True        29h
v2beta2.autoscaling                    Local                        True        29h
```
If the APIService for v1beta1.metrics.k8s.io does not point to the metrics-server Service in the kube-system namespace, check whether metrics-server has been overwritten by Prometheus Operator. If it has, use the following YAML template to redeploy the APIService for metrics-server:
```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```
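After you save the template to a file (the file name below is only an example), you can apply it and then verify that the APIService points to metrics-server again:

```
kubectl apply -f metrics-server-apiservice.yaml
kubectl get apiservice v1beta1.metrics.k8s.io
```

The second command should show kube-system/metrics-server in the SERVICE column and True in the AVAILABLE column.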
If no error is found after you perform the preceding checks, see the troubleshooting section of the metrics-server topic.
- Cause 2: Metrics cannot be collected during a rolling update or scale-out activity.
By default, metrics-server collects metrics at intervals of one second. However, after a rolling update or scale-out activity, metrics-server must wait a few seconds before it can collect metrics from the new pods. We recommend that you query metrics two seconds after a rolling update or scale-out activity. You can verify that metrics are available by using the check that follows.
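As an optional check (a sketch that is not part of the original procedure; the namespace is only an example), you can query the resource metrics API directly to confirm that metrics for the new pods are available:

```
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"
```

If the new pods do not appear in the response, wait a few seconds and query again.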
- Cause 3: The request field is not specified for the pod. HPA obtains the CPU or memory usage of the pod by calculating used resource/requested resource. If the requested resources are not specified in the pod configurations, HPA cannot calculate the resource usage. Therefore, you must ensure that the request field is specified in the pod configurations, as shown in the example after this list.
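The following Deployment manifest is a minimal sketch (the name, image, and values are examples only) that shows where to set the request field so that HPA can calculate CPU and memory usage:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example              # example name, not from the original topic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
      - name: nginx
        image: nginx:1.25          # example image
        resources:
          requests:                # required for HPA to compute used/requested
            cpu: 500m
            memory: 256Mi
          limits:
            cpu: "1"
            memory: 512Mi
```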
If HPA adds excess pods during a rolling update, you can modify the startup settings of metrics-server in either of the following ways (see the sketch after this list):
- Add the following configuration to the startup settings so that HPA scale-out activities are skipped during rolling updates: --enable-hpa-rolling-update-skipped=true
- Specify a longer metric collection interval by using the --metric_resolution parameter in the startup settings. Example: --metric_resolution=15s
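Where these parameters are set depends on how metrics-server is installed. As a sketch, assuming metrics-server runs as a Deployment named metrics-server in the kube-system namespace, the parameters are typically added to the container args, as in the following excerpt (all other fields are omitted):

```yaml
# Excerpt of the metrics-server container spec; merge into your existing Deployment.
spec:
  template:
    spec:
      containers:
      - name: metrics-server
        args:
        - --enable-hpa-rolling-update-skipped=true   # skip HPA scale-out during rolling updates
        - --metric_resolution=15s                    # lengthen the metric collection interval
```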
To enable CronHPA to work with HPA without conflicts, set the scaleTargetRef field in the CronHPA configuration to the scaling object of HPA. This way, only HPA scales the application that is specified by scaleTargetRef in the HPA configuration. This also makes CronHPA aware of the state of HPA. CronHPA does not directly scale the Deployment; the Deployment is scaled by HPA. This resolves the conflict between CronHPA and HPA. For more information about how to make CronHPA work with HPA without conflicts, see CronHPA in the Related topics section.
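The following CronHPA manifest is a hedged sketch of this setup. It assumes the kubernetes-cronhpa-controller CRD; the apiVersion, field names, and the HPA name (nginx-deployment-hpa) are examples and may differ in your controller version:

```yaml
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  name: cronhpa-sample
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler   # target the HPA object, not the Deployment
    name: nginx-deployment-hpa      # example HPA name
  jobs:
  - name: scale-up
    schedule: "0 0 8 * * *"         # 6-field cron expression: 08:00:00 every day
    targetSize: 10                  # CronHPA adjusts the HPA, which then scales the Deployment
```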