This topic describes virtual nodes and introduces how to deploy a virtual node and schedule a pod to a virtual node.
Virtual nodes and ECI
Elastic Container Instance (ECI) is a container-oriented serverless computing service. It provides maintenance-free container runtimes that support strong isolation and quick startup. ECI enables you to focus on container-based applications without the need to purchase or manage ECS instances, saving you the hassle of infrastructure maintenance. You can create ECIs based on the actual needs. Fees are charged based on the resource usage during container execution time.
Based on Virtual Kubelet, virtual nodes enable seamless integration between Kubernetes and ECI and drastically enhance the elasticity of Kubernetes clusters by eliminating the constraint of computing power provided by cluster nodes. For more information about how Virtual Kubelet works and its architecture, see Virtual Kubelet.
Virtual nodes are suitable for the following scenarios:
- Online businesses with periodic traffic patterns, such as online education and e-commerce websites. Virtual nodes eliminate the need to maintain a resource pool sized for peak traffic and can therefore significantly reduce computing costs.
- Data processing scenarios where Spark or Presto is used for computing. Virtual nodes can effectively lower costs.
- CI/CD pipelines, such as Jenkins and GitLab Runner.
- Jobs, such as scheduled jobs and AI jobs.
Install the ack-virtual-node add-on
Before you install the add-on, make sure that the following prerequisites are met:
- You have created a managed or dedicated Kubernetes cluster. For more information, see Create a managed ACK cluster.
- You have activated Elastic Container Instance. To activate the service, go to the ECI console.
- The region where the cluster is deployed must be supported by ECI. To view supported regions and zones, go to the ECI console.
- Log on to the Container Service console.
- In the left-side navigation pane, choose Marketplace > App Catalog. On the page that appears, click ack-virtual-node.
- On the App Catalog - ack-virtual-node page, click the Parameters tab and set the parameters.
| Parameter | Description | How to obtain the value |
| --- | --- | --- |
| ECI_REGION | The ID of the region where the cluster is deployed. | On the Clusters page, click the target cluster name. In the Basic Information section, copy the value of Region. For example, cn-hangzhou represents the China (Hangzhou) region. |
| ECI_VPC | The VPC of the cluster. | On the Clusters page, click the target cluster name. In the Cluster Resources section, copy the value of VPC. |
| ECI_VSWITCH | The VSwitches to which pods are attached. | On the Nodes page, click a node ID. In the Configuration Information section of the Instance Details page, copy the value of VSwitch. Make sure that the zone of each VSwitch is supported by ECI. To use multiple zones, specify multiple VSwitches in the format ECI_VSWITCH: "vsw-xxxxxxx1, vsw-xxxxxxx2, vsw-xxxxxxx3". |
| ECI_SECURITY_GROUP | The security group ID. | On the Nodes page, click a node ID. In the left-side navigation pane, click Security Groups. On the Security Groups tab, copy the value of Security Group ID. |
| ECI_ACCESS_KEY | Your AccessKey ID. | See How can I obtain an AccessKey pair?. |
| ECI_SECRET_KEY | Your AccessKey secret. | See How can I obtain an AccessKey pair?. |
| ALIYUN_CLUSTERID | The ID of the target cluster. | On the Clusters page, click the target cluster name. In the Basic Information section, copy the value of Cluster ID. |
- On the Deploy page at the right, select the target cluster and verify that Namespace is set to kube-system and Release Name is set to ack-virtual-node. Then click Create.
- In the left-side navigation pane, choose Nodes. On the page that appears, the virtual node virtual-node-eci is now displayed.
- Run the following commands to query the statuses of virtual-node-controller and virtual-node-admission-controller. For more information, see Use kubectl on Cloud Shell to manage Kubernetes clusters.
# kubectl -n kube-system get statefulset virtual-node-eci
NAME               READY   AGE
virtual-node-eci   1/1     1m
# kubectl -n kube-system get deploy ack-virtual-node-affinity-admission-controller
NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
ack-virtual-node-affinity-admission-controller   1/1     1            1           1m
# kubectl -n kube-system get pod|grep virtual-node-eci
virtual-node-eci-0   1/1   Running   0   1m
# kubectl get no|grep virtual-node-eci
virtual-node-eci-0   Ready   agent   1m   v1.11.2-aliyun-1.0.207
Schedule a pod to a virtual node
- Set nodeSelector and tolerations.
Virtual nodes have specific taints. You must set nodeSelector and tolerations for a pod before you can schedule the pod to a virtual node. The sample template is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
  nodeSelector:
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
- Add a label to the pod.
The virtual-node-affinity-admission-controller webhook automatically schedules correctly labeled pods to a virtual node. The following example uses the label eci=true. The sample commands are as follows:
# kubectl run nginx --image nginx -l eci=true
# kubectl get pod -o wide|grep virtual-node-eci
nginx-7fc9f746b6-r4xgx   0/1   ContainerCreating   0   20s   192.168.1.38   virtual-node-eci-0   <none>   <none>
- Add a label to the namespace.
The virtual-node-affinity-admission-controller webhook automatically schedules pods in a correctly labeled namespace to a virtual node. The following example uses the label virtual-node-affinity-injection=enabled. The sample commands are as follows:
# kubectl create ns vk
# kubectl label namespace vk virtual-node-affinity-injection=enabled
# kubectl -n vk run nginx --image nginx
# kubectl -n vk get pod -o wide|grep virtual-node-eci
nginx-6f489b847d-vgj4d   1/1   Running   0   1m   192.168.1.37   virtual-node-eci-0   <none>   <none>
Modify the configurations of the virtual node controller
The configurations of the virtual node controller determine how pods are scheduled to the node and the runtime environment of pods, such as VSwitch and security group settings. You can modify the configurations of the controller based on your needs. New configurations apply only to new pods and do not affect pods already running on the node.
# kubectl -n kube-system edit statefulset virtual-node-eci
- Upgrade the virtual node controller version.
To use the latest features of virtual nodes, you need to upgrade the virtual node controller to the latest version. For example, to enable pods to access ClusterIP services, the virtual node controller version must be later than v126.96.36.199-aliyun.
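Upgrading typically means updating the controller image in the StatefulSet. The following is only a hedged sketch of the field to change; the container name, image repository, and tag shown are placeholders, not the actual published image:

```yaml
# Edit with: kubectl -n kube-system edit statefulset virtual-node-eci
spec:
  template:
    spec:
      containers:
      - name: virtual-node-eci                 # container name is an assumption
        image: <controller-image>:<new-version> # placeholder; use the image published for your region
```

After you save the change, the StatefulSet rolls the controller pod to the new version; pods already running on the virtual node are not affected.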
- Modify security group (ECI_SECURITY_GROUP) settings.
You can modify this environment variable to change the security group associated with pods.
- Modify VSwitch (ECI_VSWITCH) settings.
You can modify this environment variable to change the VSwitch where pods belong. We recommend that you configure multiple VSwitches across zones to ensure high availability. When ECI resources are insufficient in one zone, the controller can create pods in another zone.
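The VSwitch setting is an environment variable on the controller container in the virtual-node-eci StatefulSet. A minimal sketch of the env entry with multiple zones configured (the VSwitch IDs are placeholders):

```yaml
# Fragment of the controller container spec in the virtual-node-eci StatefulSet.
env:
- name: ECI_VSWITCH
  # Comma-separated VSwitch IDs in different zones (placeholder IDs).
  # When ECI resources are insufficient in one zone, the controller
  # creates pods in a zone served by another VSwitch in this list.
  value: "vsw-xxxxxxx1, vsw-xxxxxxx2, vsw-xxxxxxx3"
```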
- Modify kube-proxy settings (ECI_KUBE_PROXY).
By default, this environment variable is set to true, which means that pods can access ClusterIP services. If pods do not need to access ClusterIP services, you can set the variable to false to disable kube-proxy. In large-scale scenarios, for example, when the cluster needs to start a large number of pods, kube-proxy opens many concurrent connections to the Kubernetes API server; disabling kube-proxy relieves this pressure on the API server. In that case, you can configure PrivateZone instead to enable pods to access services in the cluster.
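The corresponding env entry can be sketched as follows, assuming you edit the controller StatefulSet with the kubectl edit command shown earlier:

```yaml
# Fragment of the controller container spec in the virtual-node-eci StatefulSet.
env:
- name: ECI_KUBE_PROXY
  # "false" disables kube-proxy for ECI pods; they can no longer reach
  # ClusterIP services unless PrivateZone-based service discovery is configured.
  value: "false"
```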
- Create multiple virtual nodes.
We recommend that you deploy at most 3,000 pods on a single virtual node. To run more pods, create more virtual nodes by increasing the number of replicas in the StatefulSet configuration. The number of replicas equals the number of virtual nodes in the cluster. Each virtual node corresponds to a virtual node controller, which manages the pods on that node; the controllers do not interfere with each other. A sample configuration is as follows:
# kubectl -n kube-system scale statefulset virtual-node-eci --replicas=4
statefulset.apps/virtual-node-eci scaled
# kubectl get no
NAME                      STATUS   ROLES    AGE   VERSION
cn-hangzhou.192.168.1.1   Ready    <none>   63d   v1.12.6-aliyun.1
cn-hangzhou.192.168.1.2   Ready    <none>   63d   v1.12.6-aliyun.1
virtual-node-eci-0        Ready    agent    6m    v1.11.2-aliyun-1.0.207
virtual-node-eci-1        Ready    agent    1m    v1.11.2-aliyun-1.0.207
virtual-node-eci-2        Ready    agent    1m    v1.11.2-aliyun-1.0.207
virtual-node-eci-3        Ready    agent    1m    v1.11.2-aliyun-1.0.207
Delete a virtual node
Drain the pods from the virtual node, delete the virtual node controller, and then delete the node object. Sample commands:
# kubectl drain virtual-node-eci-0
...
# kubectl -n kube-system delete statefulset virtual-node-eci
# kubectl delete no virtual-node-eci-0
...