This topic describes how to use Container Service for Kubernetes and Elastic Container Instance (ECI) to enable auto scaling for online business applications.

Step 1: Prepare application images

The preceding figure shows the three applications and how they work together. User requests are sent to biz-app, which simulates an entry application. Then, biz-app forwards the requests to cpu-app and mem-app, which simulate a CPU-intensive application and a memory-intensive application, respectively. Therefore, you need to prepare three Docker images.

  • biz-app:
  • mem-app:
  • cpu-app:

Step 2: Create a managed Kubernetes cluster

This topic uses a managed Kubernetes cluster as an example. This cluster uses nodes backed by Elastic Compute Service (ECS) instances to serve normal traffic, and uses a virtual node backed by ECI through the virtual-kubelet-autoscaler add-on to serve burst traffic.

Step 3: Install Virtual Kubelet and the virtual-kubelet-autoscaler add-on in the managed Kubernetes cluster

Log on to the Container Service console. In the left-side navigation pane, choose Marketplace > App Catalog. On the App Catalog page that appears, search for ack-virtual-node and click the add-on card to install it. After the ack-virtual-node add-on is installed, install the ack-virtual-kubelet-autoscaler add-on in the same way.

Step 4: Create the applications

Select the type of the applications based on your business requirements. This topic uses a Deployment as an example. In the left-side navigation pane, choose Applications > Deployments.

On the Deployments page, click Create from Image in the upper-right corner. On the page that appears, set the parameters as required, specify one of the prepared application images, and then configure the service. Repeat these steps to create the other two applications.
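Instead of the console, you can also define each application declaratively and apply it with kubectl. The following is a minimal sketch of such a Deployment; the image address, port, and resource values are placeholders, not the actual images prepared in Step 1.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-app
  labels:
    app: cpu-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cpu-app
  template:
    metadata:
      labels:
        app: cpu-app
    spec:
      containers:
      - name: cpu-app
        image: registry.example.com/demo/cpu-app:latest  # placeholder image address
        ports:
        - containerPort: 8080                            # placeholder port
        resources:
          requests:
            cpu: 500m      # requests/limits are required for CPU-based autoscaling
            memory: 256Mi
          limits:
            cpu: "1"
            memory: 512Mi
```

Setting resource requests on each container matters here: the Horizontal Pod Autoscaler configured in Step 5 computes utilization against these requests.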

When you create biz-app, you need to specify two environment variables, as shown in the following figures. Set the two environment variables to the internal service endpoints of cpu-app and mem-app, respectively.
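In manifest form, the two environment variables would appear in the biz-app container spec roughly as follows. The variable names below are hypothetical (the actual names are shown in the figures and depend on what the biz-app code reads); the values use the standard in-cluster service DNS form.

```yaml
# Fragment of the biz-app container spec; variable names are hypothetical.
env:
- name: CPU_APP_ENDPOINT
  value: "http://cpu-app.default.svc.cluster.local:8080"  # internal endpoint of cpu-app
- name: MEM_APP_ENDPOINT
  value: "http://mem-app.default.svc.cluster.local:8080"  # internal endpoint of mem-app
```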

In addition, you need to create an ingress for biz-app to expose biz-app to the Internet.
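A minimal Ingress for this purpose might look like the sketch below; the host name and service port are placeholders, and the API version should match the Kubernetes version of your cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: biz-app-ingress
spec:
  rules:
  - host: biz-app.example.com   # placeholder domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: biz-app       # the service created for biz-app in the previous step
            port:
              number: 80        # placeholder service port
```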

Step 5: Configure auto scaling policies

On the Deployments page, click the target application. On the application details page that appears, click the Horizontal Pod Autoscaler tab. Then, click Edit in the Actions column of an autoscaler to configure the policy.

By default, you can configure auto scaling policies based on the memory usage and CPU usage. You can also create custom auto scaling policies as required.
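The same policy can be expressed as a HorizontalPodAutoscaler manifest. The sketch below targets cpu-app on CPU utilization; the replica bounds and utilization threshold are illustrative values, and the autoscaling/v2 API assumes a reasonably recent Kubernetes version.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cpu-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cpu-app
  minReplicas: 2          # baseline capacity for off-peak traffic
  maxReplicas: 20         # upper bound reached during the stress test
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU usage exceeds 70% of requests
```

A policy for mem-app would look the same with `name: memory` in the metric.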

Step 6: Install third-party add-ons

On the App Catalog page, you can search for the required add-on and click its card to install it. In this example, the add-ons shown in the following figure are installed.

  • ack-virtual-node: uses Virtual Kubelet to schedule pods to ECI.
  • ack-virtual-kubelet-autoscaler: uses Virtual Kubelet to schedule pods to ECI when resources are insufficient on physical nodes. The scalability of the managed Kubernetes cluster described in this topic is achieved through this add-on.
  • ahas: provides features such as automatic detection of application architectures, high availability assessments based on fault injection, and one-click throttling and degradation.
  • arms-pilot: automatically detects application topologies, generates 3D topologies, detects and monitors interfaces, and captures abnormal and slow transactions.
  • arms-prometheus: supports the open-source Prometheus ecosystem and monitors a wide variety of components. The add-on offers a ready-to-use monitoring dashboard and provides fully managed Prometheus services.
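For pods to land on the virtual node when the autoscaler schedules them there, the pod spec must tolerate the virtual node's taint. The fragment below is a sketch: the taint key and value follow the common ack-virtual-node convention, but you should verify them against the taints actually set on the virtual node in your cluster.

```yaml
# Fragment of a pod spec; taint key/value assumed from the ack-virtual-node convention.
tolerations:
- key: virtual-kubelet.io/provider
  operator: Equal
  value: alicloud
  effect: NoSchedule
```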

Step 7: Use Performance Testing to run a stress test

Use Performance Testing to simulate a traffic change from off-peak hours to peak hours.

Step 8: Check the result

As shown in the preceding figure, when the queries per second (QPS) increase from 1 to 5, 10, and then 20, mem-app and cpu-app are scaled based on their loads.