Overview
This topic describes how to use the virtual-kubelet-autoscaler add-on to schedule pods to a virtual node when computing resources on the physical nodes of a Kubernetes cluster are insufficient. This provides elastic scalability for your business applications.

Before you begin
1. Log on to the Container Service console to view the Kubernetes cluster that you have created. If you do not have a cluster, create one. For more information, see Create a Kubernetes cluster.
2. Install the ack-virtual-node add-on. For more information, see Virtual nodes.
3. Install the virtual-kubelet-autoscaler add-on. To install the add-on, follow these steps: In the left-side navigation pane, choose Marketplace > App Catalog. On the App Catalog page that appears, search for ack-virtual-kubelet-autoscaler and click the card of the add-on.

On the page that appears, select the target cluster in the upper-right corner and click Create.

After the add-on is installed, run the following command to verify that it is deployed in the kube-system namespace:
kubectl get deploy -n kube-system
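To check only the autoscaler add-on, you can filter the output. The following command is a sketch; it assumes that the deployment name matches the add-on name ack-virtual-kubelet-autoscaler, which may differ in your cluster:
# List only the deployments whose names contain virtual-kubelet-autoscaler.
kubectl get deploy -n kube-system | grep virtual-kubelet-autoscaler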

Example
View the resource usage of physical nodes
Log on to the Container Service console to view the nodes of your Kubernetes cluster. In this example, two Elastic Compute Service (ECS) instances work as physical nodes, as shown in the following figure. Both ECS instances are of the ecs.c5.large type, which provides 2 vCPUs and 4 GiB of memory per instance. For more information about this instance type, see the c5, compute optimized instance family section in Compute optimized instance families. No pods are scheduled to the virtual-kubelet node, which is a virtual node.
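You can also inspect the allocatable resources of the nodes from the command line. The following commands are a minimal sketch; node names and the exact output depend on your cluster:
# List all nodes, including the virtual-kubelet node.
kubectl get nodes
# Show the allocatable CPU and memory of each node.
kubectl describe nodes | grep -A 5 Allocatable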

Create a deployment
Prepare the YAML file of the deployment. This topic uses the deployment-autoscaler.yaml file as an example. The deployment has 10 replicas, and the container of each replica requests 2 vCPUs and 4 GiB of memory. The total request of 20 vCPUs and 40 GiB of memory exceeds the capacity of the two physical nodes, so the pods cannot all be scheduled to the physical nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-autoscaler
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry-vpc.cn-beijing.aliyuncs.com/eci_open/nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 2
            memory: 4Gi
Run the following command to create the deployment based on the preceding YAML file:
kubectl create -f deployment-autoscaler.yaml
Alternatively, log on to the Container Service console. Choose Applications > Deployments in the left-side navigation pane. On the page that appears, click Create from Template in the upper-right corner to create the deployment.
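After the deployment is created, you can check its status from the command line. The following commands are a sketch that uses the deployment name from the preceding YAML file:
# Check how many replicas are ready.
kubectl get deploy nginx-deployment-autoscaler
# Report the progress of the rollout.
kubectl rollout status deployment/nginx-deployment-autoscaler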


Check the status of the pods
Run the following command to view the running status of pods:
kubectl get pods -o wide
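To list only the pods that run on the virtual node, you can filter the output. This assumes the virtual node is named virtual-kubelet, as in this example:
# Show only the pods whose node name contains virtual-kubelet.
kubectl get pods -o wide | grep virtual-kubelet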

You can also run the following command to view the running status of a specific pod:
kubectl describe pod nginx-deployment-autoscaler-786876b6b-5qtw4

As shown in the preceding figure, the pod is scheduled to the virtual-kubelet node by the virtual-kubelet-autoscaler add-on because computing resources on the physical nodes are insufficient.
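If you only want to check which node a pod was scheduled to, you can query the nodeName field of the pod. The pod name below is the example name used in this topic; replace it with a pod name from your cluster:
# Print the name of the node that the pod runs on.
kubectl get pod nginx-deployment-autoscaler-786876b6b-5qtw4 -o jsonpath='{.spec.nodeName}'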
Check the nodes of the Kubernetes cluster in the Container Service console. As shown in the following figure, 10 pods are scheduled to the virtual-kubelet node.
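You can also list the pods on the virtual node from the command line by using a field selector. The following commands are a sketch that assumes the virtual node is named virtual-kubelet; the scale command only illustrates that additional replicas are likewise scheduled to the virtual node while the physical nodes remain full:
# List all pods that are scheduled to the virtual-kubelet node.
kubectl get pods -o wide --field-selector spec.nodeName=virtual-kubelet
# Optionally scale out the deployment to verify that new pods also land on the virtual node.
kubectl scale deployment nginx-deployment-autoscaler --replicas=20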
