How to Monitor and Autoscale Cloud Native Applications in Kubernetes

This article describes how a cloud-native application seamlessly integrates monitoring and autoscaling capabilities in Kubernetes.

Preface

While an increasing number of developers accept and embrace the design philosophy of cloud-native applications, Kubernetes has become the center of the entire cloud-native stack. Cloud service capabilities are exposed through the standard Kubernetes interfaces to the service layer via the Cloud Provider, CRD controllers, and Operators. Developers build their own cloud-native applications and platforms on top of Kubernetes, which has thus become the platform for building platforms. Let's look at how a cloud-native application seamlessly integrates monitoring and autoscaling capabilities in Kubernetes.

This article is compiled from excerpts of the speech "Cloud-native Application Monitoring and Autoscaling in Kubernetes" delivered at KubeCon by Liu Zhongwei (Mo Yuan), a technical expert at Alibaba Cloud Container Platform.

Alibaba Cloud Container Service for Kubernetes: Monitoring Overview

Alibaba Cloud container service for Kubernetes mainly supports the following two types of integrations.

Cloud Service Integration

Alibaba Cloud container service for Kubernetes integrates with four cloud monitoring services: Simple Log Service (SLS), Application Real-Time Monitoring Service (ARMS), Application High Availability Service (AHAS), and CloudMonitor.

SLS is mainly responsible for collecting and analyzing logs. In the Alibaba Cloud container service for Kubernetes, SLS collects three different types of logs:

  • Logs of core components, such as APIServer
  • Logs of access layers, such as Service Mesh/Ingress
  • Standard logs of applications

In addition to the standard log collection pipeline, SLS also provides upper-layer log analysis capabilities. By default, it offers audit analysis based on APIServer logs, observability dashboards for the access layer, and log analysis for the application layer. In Alibaba Cloud container service for Kubernetes, the log component is installed by default; developers only need to select it when creating the cluster.
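
For reference, the following is a minimal sketch of how an application can opt into stdout collection, assuming the default Logtail-based log component is installed. The aliyun_logs_* environment-variable convention and the logstore name "app-stdout" are taken from the SLS container log collection approach and may differ by component version.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25
        env:
        # Ask Logtail to collect this container's stdout into an SLS
        # logstore named "app-stdout" (example name; adjust to your project).
        - name: aliyun_logs_app-stdout
          value: stdout
```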

ARMS is mainly responsible for collecting, analyzing, and displaying application performance metrics. It currently supports Java and PHP integration and collects JVM-layer metrics such as GC counts, slow SQL queries, and call stacks, which makes it very useful for performance tuning.
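
As a hedged illustration, enabling ARMS monitoring for a Java workload is typically a matter of pod-template annotations that the arms-pilot component picks up; the annotation keys and image below are assumptions based on that integration and should be checked against the component version in use.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-java-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-java-app
  template:
    metadata:
      labels:
        app: demo-java-app
      annotations:
        armsPilotAutoEnable: "on"                # assumed key: enable automatic ARMS agent injection
        armsPilotCreateAppName: "demo-java-app"  # assumed key: application name shown in ARMS
    spec:
      containers:
      - name: demo-java-app
        image: your-registry/your-java-app:latest   # hypothetical Java application image
```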

AHAS is an architecture-aware monitoring service. Most workloads in a Kubernetes cluster are microservices, and the call topology between microservices is complex, so when a network link in the cluster has problems, the biggest challenge is to quickly locate, discover, and diagnose the issue. AHAS displays the cluster topology based on network traffic and trends, providing a higher-level view for problem diagnosis.

Open-Source Solution Integration

The compatibility and integration of open-source solutions are also part of the monitoring capability of the Alibaba Cloud container service for Kubernetes. It mainly includes the following two parts.

Enhancement and Integration of Kubernetes Built-in Monitoring Components

In the Kubernetes community, heapster/metrics-server is the built-in monitoring solution, and core components such as the Dashboard and HPA depend on the metrics it provides. Because the release cycles of ecosystem components cannot be fully synchronized with Kubernetes releases, components that consume these metrics may run into compatibility problems on some Kubernetes versions. Alibaba Cloud therefore enhanced metrics-server to achieve version compatibility. In addition, for node diagnosis, the Alibaba Cloud container service extends the coverage of the Node Problem Detector (NPD), supporting FD (file descriptor) monitoring, NTP time synchronization checks, and inbound/outbound network connectivity checks. It has also open-sourced the eventer, which exports Kubernetes event data to SLS, Kafka, and DingTalk, enabling ChatOps.
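
As a hedged sketch of the ChatOps piece, the open-sourced eventer (kube-eventer) is deployed with a sink flag pointing at a DingTalk robot webhook; the image reference, access token, and exact sink parameters below are placeholders, and the flag syntax may differ across kube-eventer versions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-eventer
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-eventer
  template:
    metadata:
      labels:
        app: kube-eventer
    spec:
      serviceAccountName: kube-eventer   # assumes RBAC that allows reading cluster events
      containers:
      - name: kube-eventer
        image: registry.aliyuncs.com/acs/kube-eventer:latest   # hypothetical image reference; use the project's published image/tag
        command:
        - /kube-eventer
        - --source=kubernetes:https://kubernetes.default
        # Forward Warning-level events to a DingTalk robot for ChatOps.
        - --sink=dingtalk:https://oapi.dingtalk.com/robot/send?access_token=<token>&label=<cluster-id>&level=Warning
```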

Enhancement and Integration of Prometheus Ecosystem

To support Prometheus, the de facto standard third-party monitoring platform in the Kubernetes ecosystem, the Alibaba Cloud container service provides integrated charts that developers can install with one click; a sample scrape configuration is sketched after the following list. In addition, there are enhancements at the following three levels:

  • Enhanced Storage and Performance: Supports product-grade storage backends (TSDB and InfluxDB) for more durable and efficient storage and querying of monitoring data.
  • Enhanced Metric Collection: Fixes some inaccurate metrics caused by limitations in the Prometheus design, and provides exporters for single-GPU, multi-GPU, and shared-GPU scenarios.
  • Enhanced Upper-Layer Observability: Supports scenario-based integration of CRD monitoring metrics (cloud-native workloads such as Argo, Spark, and TensorFlow) to implement multi-tenant observability.
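
The sketch below shows the sample scrape configuration mentioned above: a standard Prometheus Operator ServiceMonitor, assuming the one-click chart installs the Prometheus Operator CRDs. The namespace and the release label used for selection are assumptions and must match your installation's serviceMonitorSelector.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app
  namespace: monitoring                # example namespace
  labels:
    release: ack-prometheus-operator   # assumed selector label; match your chart release
spec:
  selector:
    matchLabels:
      app: demo-app                    # scrape Services carrying this label
  namespaceSelector:
    matchNames:
    - default
  endpoints:
  - port: metrics                      # the Service port exposing /metrics
    path: /metrics
    interval: 30s
```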

Alibaba Cloud Container Service for Kubernetes: Autoscaling Overview

Alibaba Cloud container service for Kubernetes mainly includes the following two types of autoscaling components.

Scheduling Layer Autoscaling Components

At the scheduling layer, autoscaling components operate on pods only; they do not concern themselves with the underlying resource supply.

  • Horizontal Pod Autoscaling (HPA): HPA scales pods horizontally. In addition to the community-supported resource metrics and custom metrics, Alibaba Cloud container service for Kubernetes provides an external-metrics-adapter that lets cloud service metrics serve as criteria for autoscaling decisions. It currently supports metrics from multiple products in different dimensions, such as Ingress QPS and RT, and GC count and slow SQL count from ARMS.
  • Vertical Pod Autoscaling (VPA): VPA scales pods vertically. It is mainly used for scaling and upgrading stateful services.
  • cronHPA: A scheduled scaling component for periodic workloads. It identifies regular load cycles through resource profiling and cuts resource costs through scheduled scaling (a hedged manifest sketch follows this list).
  • Resizer: A scaling controller for the core components of the cluster. It scales them linearly or in steps based on the number of CPU cores and nodes in the cluster. Scaling core components such as CoreDNS is currently its key scenario.
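
The cronHPA item above refers to a manifest like the hedged sketch below; the API group, kind, and job fields follow the open-source kubernetes-cronhpa-controller and may differ by version (the schedule uses a six-field cron expression with seconds).

```yaml
apiVersion: autoscaling.alibabacloud.com/v1beta1
kind: CronHorizontalPodAutoscaler
metadata:
  name: demo-cronhpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  jobs:
  # Scale out before the daily traffic peak, scale back in afterwards.
  - name: scale-up-morning
    schedule: "0 0 9 * * *"     # 09:00 every day
    targetSize: 10
  - name: scale-down-night
    schedule: "0 0 21 * * *"    # 21:00 every day
    targetSize: 2
```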

Resource Layer Autoscaling Components

Resource-layer autoscaling components deal with the relationship between pods and the underlying resources.

  • Cluster-Autoscaler: Currently the most mature node scaling component. When pods cannot be scheduled because cluster resources are insufficient, it provisions new nodes and lets the pending pods be scheduled onto them.
  • Virtual-Kubelet-Autoscaler: An open-source component of Alibaba Cloud container service for Kubernetes. Its principle is similar to Cluster-Autoscaler, but when a pod cannot be scheduled due to insufficient resources, it does not provision a new node; instead, it binds the pod to a virtual node and runs the pod on ECI (Elastic Container Instance). A hedged example of placing a pod on the virtual node follows this list.
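
The virtual-kubelet-autoscaler binds pending pods to the virtual node automatically; for reference, manually placing a pod on the ECI-backed virtual node looks roughly like the sketch below. The node label and toleration follow common virtual-kubelet conventions and are assumptions to verify against your cluster.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: eci-demo
spec:
  nodeSelector:
    type: virtual-kubelet              # assumed label on the ECI virtual node
  tolerations:
  - key: virtual-kubelet.io/provider   # virtual nodes are usually tainted with this key
    operator: Exists
  containers:
  - name: nginx
    image: nginx:1.25
```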

Demo Showcase

[Demo architecture diagram]

The preceding diagram shows a simple demo. The main application is an API service that calls a database through a sub API service, and the access layer is managed through Ingress. PTS (Performance Testing Service) simulates upstream traffic, SLS collects access-layer logs, and ARMS collects application performance metrics. Finally, alibaba-cloud-metrics-adapter exposes these as external metrics, which trigger the HPA to recalculate the number of workload replicas. When the scaled-out pods exhaust the cluster's resources, virtual-kubelet-autoscaler kicks in and creates ECI instances to host the load beyond the cluster's planned capacity.
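
A minimal sketch of the HPA used in this demo, assuming alibaba-cloud-metrics-adapter exposes the Ingress QPS as an external metric; the metric name sls_ingress_qps, the sls.* selector labels, and the target value are assumptions and should be replaced with the metrics your adapter actually exposes.

```yaml
apiVersion: autoscaling/v2          # use autoscaling/v2beta2 on older clusters
kind: HorizontalPodAutoscaler
metadata:
  name: api-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-service               # the demo's API service Deployment (example name)
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metric:
        name: sls_ingress_qps       # assumed external metric name
        selector:
          matchLabels:
            sls.project: my-log-project   # example SLS project
            sls.logstore: nginx-ingress   # example logstore with Ingress access logs
      target:
        type: AverageValue
        averageValue: "10"          # target QPS per replica (example)
```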

Summary

Using monitoring and autoscaling capabilities on Alibaba Cloud Container Service for Kubernetes is very simple: developers only need to install the corresponding component's chart with one click to get full access. With multi-dimensional monitoring and autoscaling, cloud-native applications gain higher stability and robustness at minimal cost.
