ACK One registered clusters use ACK virtual nodes to seamlessly integrate self-managed Kubernetes clusters with cloud-based serverless computing resources, giving those clusters access to elastic compute capacity such as CPU and GPU resources. You can use ACK virtual nodes to create serverless pods in a self-managed cluster and run workloads on cloud resources, which provides elastic scaling during business expansion and traffic peaks.
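To land a pod on a virtual node, the pod must target the node and tolerate its taint. The sketch below builds such a manifest as a Python dict; the `type: virtual-kubelet` label and the `virtual-kubelet.io/provider` taint key follow the common virtual-kubelet convention and are assumptions here, so verify the labels and taints your virtual nodes actually expose before using them.

```python
# Sketch of a pod manifest that schedules onto a virtual node.
# The nodeSelector label and toleration key below are assumptions based on
# the common virtual-kubelet convention; check your cluster's actual values.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "serverless-demo"},
    "spec": {
        # Assumed label exposed by the virtual node.
        "nodeSelector": {"type": "virtual-kubelet"},
        # Virtual nodes are typically tainted; tolerate the taint so the
        # scheduler can place the pod there (key is an assumption).
        "tolerations": [
            {"key": "virtual-kubelet.io/provider", "operator": "Exists"}
        ],
        "containers": [
            {
                "name": "app",
                "image": "nginx:stable",
                # Serverless pods are billed by the resources they request.
                "resources": {"requests": {"cpu": "2", "memory": "4Gi"}},
            }
        ],
    },
}
```

You would typically write this manifest out as YAML and apply it with `kubectl apply`, or submit it through a Kubernetes client library.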
Key benefits
Virtual nodes offer the following advantages:
O&M-free: You do not need to manage or maintain infrastructure resources. In addition, virtual nodes are hosted resources, meaning there is no need to perform regular node O&M operations for them, such as system updates and patch installation.
Ultra-large capacity: Scale out to 50,000 pods in a cluster at any time.
Important: If your pods are associated with a large number of Services, we recommend that you keep no more than 20,000 pods in the cluster.
Second-level scaling: Quickly create thousands of pods to handle traffic spikes.
Security isolation: Deploy pods on elastic container instances. Instances on which pods are deployed are isolated from each other by using lightweight virtual sandboxes.
Cost reduction: Pods are created on demand and billed on a pay-as-you-go basis. The serverless architecture helps prevent resource waste and reduce O&M costs.
Use cases
Virtual nodes are well-suited for the following scenarios:
Online businesses
For online businesses that need to frequently handle traffic spikes, such as online education and e-commerce, using virtual nodes can prevent system overloading caused by failures to scale out resources during peak hours and avoid resource waste during off-peak hours.
Data processing
If you use virtual nodes to handle many concurrent online tasks, such as Spark and Presto tasks, you no longer need to worry about the cost of underlying resources. You can deploy thousands of pods within a short period of time to handle big data workloads.
AI jobs
If you use virtual nodes, you do not need to reserve resources for long-running AI jobs that consume large amounts of compute resources, such as model training and model inference jobs. Resources can be deployed on demand and billed on a per-second basis to reduce costs. In addition, resources can be scaled out within seconds to handle unexpected jobs.
CI/CD testing
You can use virtual nodes to create and release container instances at any time to handle batch test tasks for CI/CD, such as CI packaging, stress tests, and simulation tests. Resources can be deployed on demand and billed on a per-second basis. This lets you provision many resources at a low cost.
Jobs and CronJobs
Jobs and CronJobs are automatically terminated after they are completed, and the pods created by them are also deleted. If you use virtual nodes, after a Job or CronJob is completed, resource billing automatically stops and the compute resources are released to avoid incurring additional costs.
Limits
Take note of the following limits before you use virtual nodes:
DaemonSets are not supported. You can replace DaemonSets with sidecar containers.
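Because a DaemonSet cannot place its pod on a virtual node, a per-node agent (for example, a log collector) can instead run inside every workload pod as a sidecar container. A minimal sketch of this substitution, assuming a hypothetical log-agent image:

```python
import copy


def add_sidecar(pod_template: dict, sidecar: dict) -> dict:
    """Return a copy of a pod template with an extra sidecar container.

    This replaces what a DaemonSet would provide on a regular node:
    the agent runs in every pod instead of once per node.
    """
    patched = copy.deepcopy(pod_template)
    patched["spec"]["containers"].append(sidecar)
    return patched


# Hypothetical log-collector sidecar; the image name is an assumption.
log_sidecar = {"name": "log-agent", "image": "example/log-agent:latest"}

template = {"spec": {"containers": [{"name": "app", "image": "nginx:stable"}]}}
patched = add_sidecar(template, log_sidecar)
```

The trade-off is that the agent now consumes resources in every pod rather than once per node, so size its requests accordingly.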
You cannot specify HostPath or HostNetwork in pod manifests.
Privileged containers are not supported. You can use a security context to add capabilities to a pod.
Note: The privileged container feature is in internal preview. To use this feature, submit a ticket.
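Instead of requesting a privileged container, you can grant only the specific Linux capabilities a container needs through its security context. A minimal sketch, assuming a container that needs `NET_ADMIN` for network configuration:

```python
# Sketch of a container spec that adds a single Linux capability via
# securityContext instead of running the container as privileged.
# The container name, image, and capability choice are illustrative.
container = {
    "name": "net-tool",
    "image": "busybox:stable",
    "securityContext": {
        "capabilities": {
            # Grant only what is needed, e.g. NET_ADMIN for
            # network configuration, rather than full privileges.
            "add": ["NET_ADMIN"],
        },
    },
}
```

This follows the standard Kubernetes `securityContext.capabilities` field, so the same spec works on regular nodes as well.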
NodePort Services and the Session Affinity feature are not supported.
The China South Finance and Alibaba Gov Cloud regions are not supported.
Billing information
The virtual node itself incurs no charges. However, pods running on virtual nodes are charged for the computing resources they use. For more information, see Elastic Container Instance billing and ACS billing.