This topic provides answers to some frequently asked questions about auto scaling in Container Service for Kubernetes (ACK).
If a FailedGetResourceMetric warning appears in the Events section of the HPA conditions, as shown in the following example, the HPA controller cannot collect resource metrics from the monitored resources.
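You can view these conditions by describing the HPA object. For example, for the HPA shown in the sample output below:

```
kubectl describe hpa kubernetes-tutorial-deployment
```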
```
Name:                                                  kubernetes-tutorial-deployment
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Mon, 10 Jun 2019 11:46:48 +0530
Reference:                                             Deployment/kubernetes-tutorial-deployment
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 2%
Min replicas:                                          1
Max replicas:                                          4
Deployment pods:                                       1 current / 0 desired
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
  Type     Reason                   Age                      From                       Message
  ----     ------                   ----                     ----                       -------
  Warning  FailedGetResourceMetric  3m3s (x1009 over 4h18m)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
```
- Cause 1: The data sources from which resource metrics are collected are unavailable.
Run the `kubectl top pod` command to check whether metric data of the monitored pods is returned. If no metric data is returned, run the `kubectl get apiservice` command to check whether the metrics-server component is available.
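For reference, when metric collection works, `kubectl top pod` lists the CPU and memory usage of each pod. The pod name and values below are hypothetical and for illustration only:

```
NAME                                              CPU(cores)   MEMORY(bytes)
kubernetes-tutorial-deployment-5d4f9c7b8-abcde    3m           18Mi
```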
The following output shows an example of the data returned by the `kubectl get apiservice` command:
```
NAME                                   SERVICE                      AVAILABLE   AGE
v1.                                    Local                        True        29h
v1.admissionregistration.k8s.io        Local                        True        29h
v1.apiextensions.k8s.io                Local                        True        29h
v1.apps                                Local                        True        29h
v1.authentication.k8s.io               Local                        True        29h
v1.authorization.k8s.io                Local                        True        29h
v1.autoscaling                         Local                        True        29h
v1.batch                               Local                        True        29h
v1.coordination.k8s.io                 Local                        True        29h
v1.monitoring.coreos.com               Local                        True        29h
v1.networking.k8s.io                   Local                        True        29h
v1.rbac.authorization.k8s.io           Local                        True        29h
v1.scheduling.k8s.io                   Local                        True        29h
v1.storage.k8s.io                      Local                        True        29h
v1alpha1.argoproj.io                   Local                        True        29h
v1alpha1.fedlearner.k8s.io             Local                        True        5h11m
v1beta1.admissionregistration.k8s.io   Local                        True        29h
v1beta1.alicloud.com                   Local                        True        29h
v1beta1.apiextensions.k8s.io           Local                        True        29h
v1beta1.apps                           Local                        True        29h
v1beta1.authentication.k8s.io          Local                        True        29h
v1beta1.authorization.k8s.io           Local                        True        29h
v1beta1.batch                          Local                        True        29h
v1beta1.certificates.k8s.io            Local                        True        29h
v1beta1.coordination.k8s.io            Local                        True        29h
v1beta1.events.k8s.io                  Local                        True        29h
v1beta1.extensions                     Local                        True        29h
...
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        29h
...
v1beta1.networking.k8s.io              Local                        True        29h
v1beta1.node.k8s.io                    Local                        True        29h
v1beta1.policy                         Local                        True        29h
v1beta1.rbac.authorization.k8s.io      Local                        True        29h
v1beta1.scheduling.k8s.io              Local                        True        29h
v1beta1.storage.k8s.io                 Local                        True        29h
v1beta2.apps                           Local                        True        29h
v2beta1.autoscaling                    Local                        True        29h
v2beta2.autoscaling                    Local                        True        29h
```
If the APIService for v1beta1.metrics.k8s.io does not point to kube-system/metrics-server, check whether metrics-server has been overwritten by Prometheus Operator. If metrics-server has been overwritten by Prometheus Operator, use the following YAML template to redeploy the APIService for metrics-server:
```yaml
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
```
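For example, you can save the template to a file and apply it with kubectl. The file name below is a placeholder:

```
kubectl apply -f metrics-server-apiservice.yaml
```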
If no errors are found after you perform the preceding checks, see the troubleshooting section in the related metrics-server topic.
- Cause 2: Metrics cannot be collected during a rolling update or scale-out activity.
By default, metrics-server collects metrics at intervals of 1 second. However, after a rolling update or scale-out activity is performed, metrics-server must wait a few seconds before it can collect metrics from the new pods. We recommend that you query metrics 2 seconds after a rolling update or scale-out activity.
- Cause 3: The requests field is missing from the pod configuration.
HPA automatically obtains the CPU or memory usage of a pod by calculating the value of used resources/requested resources. If the requested resources are not specified in the pod configuration, HPA cannot calculate the resource usage. Therefore, you must make sure that the requests field is specified in the pod configuration, as shown in the example below.
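A minimal sketch of a Deployment whose pod template sets CPU and memory requests; the Deployment name, container name, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app      # placeholder name
        image: nginx:1.25     # placeholder image
        resources:
          requests:           # required for HPA to compute used/requested usage
            cpu: 250m
            memory: 128Mi
```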
To prevent HPA from performing unexpected scaling activities during a rolling update, add the following configuration to the startup settings of metrics-server: --enable-hpa-rolling-update-skipped=true
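The exact way to set this flag depends on how metrics-server is deployed in your cluster. The following snippet is an illustrative sketch only, assuming the flag is appended to the container command of a Deployment named metrics-server in the kube-system namespace:

```yaml
# Illustrative only: the actual command, args, and image of the metrics-server
# Deployment depend on the metrics-server version installed in your cluster.
containers:
- name: metrics-server
  image: registry.example.com/metrics-server:placeholder   # placeholder image
  command:
  - /metrics-server
  - --enable-hpa-rolling-update-skipped=true
```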
To change the interval at which metrics-server collects metrics, modify the --metric_resolution parameter in the startup settings. Example:
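The value below is illustrative only, not a recommendation from this topic; choose an interval that suits your workloads:

```
--metric_resolution=15s
```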