This topic provides an overview of serverless Kubernetes (ASK) clusters, including their benefits and use scenarios, and compares ASK clusters with Container Service for Kubernetes (ACK) clusters to help you quickly get started.
- O&M-free: You can deploy an application in an ASK cluster within a few seconds. You can focus on application development without the need to manage nodes.
- Auto scaling: You do not need to perform capacity planning for the cluster. ASK automatically scales resources based on your workload requirements.
- Kubernetes compatibility: ASK supports Kubernetes-native resources such as Services, Ingresses, and Helm charts. This allows you to seamlessly migrate Kubernetes applications.
- Secure isolation: Pods are created based on elastic container instances. Pods of different applications are isolated from each other to prevent mutual interference.
- Cost-effectiveness: Pods are created based on your business requirements. You are charged based on the resources used by your applications. You are not charged for idle resources. In addition, the serverless architecture helps reduce O&M costs.
- Integration and interconnection: Containerized applications in ASK clusters can be seamlessly integrated with other Alibaba Cloud services. These applications can communicate with existing applications and databases in the virtual private cloud (VPC) where the cluster is deployed, and with VM-based applications.
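To illustrate the Kubernetes compatibility described above, a standard Deployment and Service manifest can be applied to an ASK cluster unchanged. This is a minimal sketch; the `demo-app` name and `nginx:1.25` image are placeholders, not values from this topic:

```yaml
# Sketch of a standard Kubernetes workload that runs on ASK as-is.
# "demo-app" and "nginx:1.25" are placeholder values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80
```

In an ASK cluster, each replica is scheduled onto an elastic container instance rather than a node that you manage, but the manifest itself is unmodified Kubernetes.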
When you use ASK clusters, you are charged for pods instead of nodes. The fees of pods are calculated based on the pricing of Elastic Container Instance. For more information, see Elastic Container Instance pricing overview.
Comparison between ASK and ACK
- Application management
In ASK clusters, you do not need to manage or maintain nodes, or perform capacity planning. This reduces the costs of infrastructure management and O&M.
- Dynamic scaling
For workloads that have periodic traffic patterns, such as online education and e-commerce applications, ASK clusters can automatically scale resources based on workload requirements. This reduces computing costs and idle resources, and allows traffic spikes to be handled more efficiently.
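One way to drive this kind of scaling is a standard HorizontalPodAutoscaler, which ASK supports through its Kubernetes compatibility. The following is a sketch only; the workload name, replica bounds, and CPU threshold are illustrative assumptions:

```yaml
# Sketch: scale a workload between 2 and 50 pods on CPU utilization.
# "demo-app" and the thresholds are placeholder assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Because ASK pods run on elastic container instances, scaling out does not require adding nodes first, and scaled-in pods stop incurring charges.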
- Data computing
To meet computing requirements of applications such as Spark, ASK clusters can start a large number of pods within a short period of time to process tasks. When the tasks are terminated, the pods are automatically released to stop billing. This dramatically reduces the overall computing costs. For more information, see Use ASK to create Spark tasks.
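A batch workload of this kind can be expressed as a standard Kubernetes Job that fans out many pods at once. This is a hedged sketch; the parallelism, image, and resource requests below are placeholders, not values from this topic:

```yaml
# Sketch: a Job that starts many short-lived worker pods in parallel.
# Image, parallelism, and resource values are placeholder assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-task
spec:
  parallelism: 100   # pods started concurrently
  completions: 100   # total pods that must finish
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/worker:latest
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
```

When the Job completes, its pods are released and billing for the underlying elastic container instances stops.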
You can use ASK clusters to build continuous integration (CI) environments by using tools such as Jenkins or GitLab Runner. You can set up an application delivery pipeline that covers stages such as source code compilation, image building and pushing, and application deployment. CI tasks are isolated from each other for enhanced security, and you do not need to maintain dedicated resource pools, which reduces computing costs. For more information, see Elastic and cost-effective CI/CD based on ASK.
You can run CronJobs in ASK clusters. Billing automatically stops when the jobs are terminated. You do not need to maintain specific resource pools. This avoids resource waste.
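A standard Kubernetes CronJob manifest works in ASK clusters without modification. The schedule and image below are illustrative placeholders:

```yaml
# Sketch: a CronJob that runs a task daily; pods are released when
# the job finishes. Name, schedule, and image are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-task
spec:
  schedule: "0 2 * * *"   # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: task
            image: registry.example.com/task:latest
```

Each scheduled run creates pods on elastic container instances only for the duration of the job, so no standing resource pool is needed.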