This topic provides an overview of serverless Kubernetes (ASK) clusters, including their benefits, pricing, and scenarios, and compares ASK with Container Service for Kubernetes (ACK) to help you quickly familiarize yourself with ASK clusters.

Overview

Serverless Kubernetes clusters enable you to deploy containerized applications without purchasing nodes, maintaining nodes, or planning capacity. Bills are generated based on the CPU and memory quotas that you configure for your applications. Serverless Kubernetes clusters are fully compatible with Kubernetes and make it much easier to get started with Kubernetes. You can focus on application design and development instead of managing the underlying infrastructure.
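
For example, billing is driven by the resource quotas in your pod specifications. The following Deployment is a minimal sketch; the name, image, and resource values are illustrative, and the exact way ASK maps these quotas to elastic container instance specifications may differ.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app                  # illustrative name
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: web
            image: nginx:1.25         # illustrative image
            resources:
              requests:               # the CPU and memory configured here
                cpu: "1"              # drive how each pod is billed
                memory: 2Gi
              limits:
                cpu: "1"
                memory: 2Gi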

Each pod in a serverless Kubernetes cluster runs in a secure, isolated container runtime that is built on Elastic Container Instance (ECI). The underlying resources of each elastic container instance are strongly isolated by a lightweight virtual sandbox, so pods do not affect one another.

Benefits

  • Easy to use: You can deploy an application in a serverless Kubernetes cluster within seconds and focus on application development instead of node management.
  • Quick scaling: You can scale out resources based on your workload requirements without planning node capacity.
  • Secure isolation: Pods are created on elastic container instances and are isolated from one another to prevent mutual interference.
  • Cost-effective: Pods are created based on workload needs and billed based on resource usage. No fees are incurred for idle resources, and the serverless architecture reduces operations and maintenance costs.
  • Kubernetes compatible: Serverless Kubernetes clusters support native Kubernetes objects such as Services and Ingresses, as well as Helm charts, so you can migrate Kubernetes applications seamlessly. See the sketch after this list.
  • Integration and interconnection: Applications in serverless Kubernetes clusters can use Alibaba Cloud services and communicate with existing applications and databases in your VPC, as well as with applications that run on virtual nodes.
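
As a sketch of this compatibility, the following manifest exposes an application through a standard Service and Ingress, exactly as in any other Kubernetes cluster. The names and hostname are illustrative.

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app                  # illustrative name
    spec:
      selector:
        app: demo-app
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo-app
    spec:
      rules:
      - host: demo.example.com        # illustrative hostname
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80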

Pricing

In serverless Kubernetes clusters, you are billed for pods instead of nodes. Pods are billed based on ECI pricing. For more information, see Pricing overview.

Fees will also be incurred for services such as Server Load Balancer and PrivateZone. For more information, see the corresponding pricing pages.

ASK and ACK comparison

Scenarios

  • Application hosting

    Serverless Kubernetes clusters free you from node management, maintenance, and capacity planning, which dramatically reduces the cost of infrastructure management and maintenance.

  • Dynamic scaling

    For workloads with periodic traffic patterns, such as online education and e-commerce applications, serverless Kubernetes clusters support dynamic scaling to reduce computing costs and idle resources while smoothly handling sudden traffic spikes. See the Horizontal Pod Autoscaler sketch after this list.

  • Data computing

    To meet the computing needs of applications such as Spark, serverless Kubernetes clusters can start a large number of pods in a short period of time to process tasks. After a task is complete, the pods are automatically released and billing stops, which dramatically reduces overall computing costs. See the batch Job sketch after this list.

  • CI/CD

    You can build continuous integration environments on serverless Kubernetes clusters with tools such as Jenkins or GitLab Runner, and set up an application delivery pipeline that includes stages such as source code compilation, image building and pushing, and application deployment. Continuous integration tasks are isolated from one another for enhanced security, and you do not need to maintain fixed resource pools, which reduces computing costs.

  • Cron jobs

    You can set up cron jobs in serverless Kubernetes clusters. Billing stops automatically after jobs are completed, and you do not need to maintain fixed resource pools, which avoids wasting resources. See the CronJob sketch after this list.

  • Test environment

    Serverless Kubernetes clusters provide ready-to-use resources, so you can quickly create and delete pods on demand at a low cost.
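
For the dynamic scaling scenario, a standard Horizontal Pod Autoscaler is one way to let the number of pods follow traffic. This is a minimal sketch; the target Deployment name, replica bounds, and CPU threshold are illustrative.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: demo-app                  # illustrative name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: demo-app                # the workload to scale
      minReplicas: 2
      maxReplicas: 50                 # scale out during traffic spikes
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70    # add pods above 70% average CPU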
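
For the data computing scenario, a plain Kubernetes Job with high parallelism illustrates the pattern of starting many pods at once and releasing them when the task completes. The name, image, and resource values are illustrative, and a real Spark workload would typically be submitted through a Spark operator or spark-submit instead.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: batch-compute             # illustrative name
    spec:
      parallelism: 100                # start a large number of pods at once
      completions: 100
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: registry.example.com/compute-worker:latest   # illustrative image
            resources:
              requests:
                cpu: "2"
                memory: 4Gi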
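
For the cron job scenario, a native CronJob works as usual; the schedule, name, and image below are illustrative.

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report            # illustrative name
    spec:
      schedule: "0 2 * * *"           # run at 02:00 every day
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: report
                image: registry.example.com/report:latest   # illustrative image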