Kubernetes is an open source orchestration platform that is commonly used to manage containerized applications and services. This topic describes the features, types, and limits of ACK clusters.

Background information

Alibaba Cloud provides different types of ACK clusters to meet the requirements of diverse scenarios.
  • ACK clusters are the most commonly used type and are suitable for most scenarios.
  • Serverless Kubernetes (ASK) clusters are suitable for handling agile workloads that require quick scaling, and processing individual tasks or multiple parallel tasks. For more information, see ASK overview.
  • ACK edge clusters are the most suitable option when you want to handle edge computing services such as Internet of Things (IoT) and Content Delivery Network (CDN). For more information, see ACK@Edge overview.

ACK also provides highly integrated solutions for sectors such as genomics computing and AI-empowered big data computing. ACK optimizes container performance based on the high-performance computing and networking capabilities of Infrastructure-as-a-Service (IaaS). ACK allows you to centrally manage clusters that are deployed in multi-cloud or hybrid cloud environments. You can log on to the ACK console to manage your Kubernetes clusters deployed in data centers or third-party clouds.


For more information about the limits of ACK clusters, see Limits.

Cluster lifecycle

The following table describes the states of a cluster and the following figure shows the transitions between the states.
Table 1. Cluster states
State Description
Initializing Creating the cloud resources that are used by the cluster.
Creation Failed Failed to create the cloud resources that are used by the cluster.
Running The cloud resources used by the cluster are created.
Updating Updating the metadata of the cluster.
Scaling Adding nodes to the cluster.
Removing Removing nodes from the cluster.
Upgrading Upgrading the cluster.
Draining Evicting pods from a node to other nodes. After all pods are evicted from the node, the node becomes unschedulable.
Deleting Deleting the cluster.
Deletion Failed Failed to delete the cluster.
Deleted (invisible to users) The cluster is deleted.
Figure 1. State transitions
State transitions

Cluster types

ACK clusters are classified into ACK Pro, ACK standard, and ACK dedicated.

Feature
  • ACK Pro and ACK standard: You need only to create worker nodes. ACK creates and manages the master nodes. ACK standard clusters are easy to use, cost-effective, and highly available.
  • ACK dedicated: You must create both master nodes and worker nodes. ACK dedicated clusters allow you to manage the cluster infrastructure in a more fine-grained manner. You must design, maintain, and upgrade the clusters on your own.

For more information about the differences between ACK standard clusters and ACK Pro clusters, see Comparison.

Billing method
  • ACK Pro: You are charged a cluster management fee based on the number of clusters. You are also charged for worker nodes and infrastructure resources. For more information, see Billing.
  • ACK standard: Cluster management is free of charge. You are charged for worker nodes and infrastructure resources. For more information, see Billing.
  • ACK dedicated: Cluster management is free of charge. You are charged for master nodes, worker nodes, and infrastructure resources. For more information, see Billing.

Use scenario
  • ACK Pro: applicable to the production and testing environments of enterprise users.
  • ACK standard: applicable to the learning and testing needs of individual users.
  • ACK dedicated: applicable to in-depth study and customization of Kubernetes.

Cluster creation procedure
  • ACK Pro and ACK standard: see Create an ACK managed cluster.
  • ACK dedicated: see Create an ACK dedicated cluster.


The following table describes the features of ACK clusters.
Feature Description
Cluster management
  • Cluster creation: You can create multiple types of clusters based on your requirements, choose from multiple types of worker nodes, and customize the configurations on demand. For more information, see Create a professional managed Kubernetes cluster, Create an ACK managed cluster, and Create an ACK dedicated cluster.
  • Cluster upgrade: You can upgrade Kubernetes with a few clicks and manage the upgrade of system components in a unified manner. For more information, see Upgrade the Kubernetes version of an ACK cluster.
  • Elastic scaling: You can scale up or scale down resources in the console with a few clicks based on your requirements. You can also use service-level affinity rules and perform horizontal scaling.
  • Multi-cluster management: You can manage applications in data centers and clusters in multiple clouds and regions in a unified manner.
  • Permission management: You can grant permissions to users in the Resource Access Management (RAM) console or by using role-based access control (RBAC) policies.
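
RBAC grants in an ACK cluster use the standard Kubernetes RBAC objects. The following sketch shows a Role and RoleBinding that give a RAM user read-only access to pods in one namespace; the namespace, names, and the RAM user ID are illustrative placeholders, not values from this document:

```yaml
# Illustrative example: grant read-only access to pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: "27xxxx"   # placeholder RAM user ID
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```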
Node pool management

You can manage the lifecycle of node pools. You can configure different specifications for node pools in a cluster, such as vSwitches, runtimes, operating systems, and security groups. For more information, see Node pool overview.

Application management
  • Application creation: You can create multiple types of applications from images and templates. You can configure environment variables, application health checks, data disks, and logging.
  • Application lifecycle management: You can view, update, and delete applications, roll back application versions, view application events, perform rolling updates of applications, use new application versions to replace earlier application versions, and use triggers to redeploy applications.
  • Application pod scheduling: You can schedule application pods based on the following three policies: pod affinity, node affinity, and pod anti-affinity.
  • Application pod scaling: You can scale the number of application pods manually or by using the Horizontal Pod Autoscaler (HPA).
  • Application release: Phased release and blue-green release are supported.
  • App Catalog: You can use App Catalog to simplify the integration of Alibaba Cloud services.
  • Application Center: After an application is deployed, the application center displays the topology of the application on one page. You can also manage and roll back the application version in scenarios such as continuous deployment.
  • Application backup and recovery: You can back up Kubernetes applications and restore applications from backup data. For more information, see Back up and restore applications.
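
The scheduling and scaling features above can be sketched with standard Kubernetes manifests. The following example pins a Deployment to nodes that carry a hypothetical workload-type=frontend label by using node affinity, and attaches a Horizontal Pod Autoscaler to it; all names, labels, and thresholds are illustrative assumptions:

```yaml
# Illustrative example: node affinity plus an HPA (all names are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: workload-type
                operator: In
                values: ["frontend"]
      containers:
      - name: web
        image: nginx:1.25
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```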
Storage
  • Storage plug-ins: FlexVolume and CSI are supported. For more information, see CSI overview and FlexVolume overview.
  • Volumes and persistent volume claims (PVCs):
    • You can create Block Storage volumes, Apsara File Storage NAS (NAS) volumes, Object Storage Service (OSS) volumes, and Cloud Paralleled File System (CPFS) volumes.
    • You can bind a volume to a PVC.
    • You can dynamically create and migrate volumes.
    • You can view and update volumes and PVCs by running scripts.
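
Dynamic volume creation is typically driven by a PersistentVolumeClaim that references a StorageClass. The following sketch requests a 20 GiB disk volume; the claim name is illustrative, and alicloud-disk-ssd is assumed to be a StorageClass available in the cluster:

```yaml
# Illustrative example: dynamically provision a disk volume via a PVC.
# "alicloud-disk-ssd" is assumed to be an available StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: alicloud-disk-ssd
  resources:
    requests:
      storage: 20Gi
```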
Network
  • You can set up container networks based on the Flannel or Terway plug-in. For more information, see Overview.
  • You can specify CIDR blocks for Services and pods.
  • You can use the NetworkPolicy feature. For more information, see Use network policies.
  • You can use Ingresses to route requests.
  • You can use DNS-based service discovery. For more information, see Overview.
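
Request routing with an Ingress follows the standard Kubernetes API. The following sketch routes HTTP traffic for one host to a backing Service; the hostname and Service name are illustrative placeholders:

```yaml
# Illustrative example: route HTTP requests by host and path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: demo.example.com      # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web           # placeholder Service name
            port:
              number: 80
```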
O&M and security
  • Observability
    • Monitoring: You can monitor clusters, nodes, applications, and pods. You can use the Prometheus plug-in.
    • Logging: You can view cluster logs, pod logs, and application logs.
    • Alerting: You can configure alerts to manage exceptions in the cluster based on various metrics for different scenarios. For more information, see Alert management.
  • Cost analysis: provides visualized analysis on resource usage and cost distribution to help improve resource utilization.
  • Runtime Security: allows you to manage security policies of the container runtime, configure routine inspections of application security, and configure security monitoring and alerting on the runtime. This enhances the overall security capabilities of containers.
  • Sandboxed-Container: allows you to run an application in a sandboxed and lightweight virtual machine. This virtual machine has a dedicated kernel, isolates applications from each other, and provides enhanced security. Sandboxed-Container is suitable in scenarios such as untrusted application isolation, fault isolation, performance isolation, and load isolation among multiple users.
  • TEE-based confidential computing: provides a cloud-native and all-in-one solution for developing, managing, and delivering trusted, confidential computing applications based on Intel Software Guard Extensions (SGX). This solution ensures data security, integrity, and confidentiality. Confidential computing allows you to isolate sensitive data and code by using a trusted execution environment.
Heterogeneous computing
  • GPU and Neural Processing Unit (NPU) computing: allows you to create clusters that use GPU-accelerated or NPU-accelerated instances as worker nodes. Supports scheduling, monitoring, auto scaling, and O&M management of GPU resources. For more information, see Create an ACK managed cluster with GPU-accelerated nodes.
  • GPU sharing: allows you to implement a GPU sharing framework in your cluster deployed in the cloud or in a data center to run multiple containers on a GPU-accelerated node. For more information, see cGPU overview.
  • Cloud-native AI: provides cloud-native AI computing capabilities and supports orchestration and management of data computing tasks. For more information, see Overview.
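
As a sketch of GPU sharing, a pod on a cGPU-enabled node requests a slice of GPU memory through an extended resource rather than a whole GPU. The resource name aliyun.com/gpu-mem below, the unit, and the image are assumptions drawn from common cGPU usage, not details confirmed by this document:

```yaml
# Illustrative example: request a shared slice of GPU memory on a cGPU node.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-share-demo
spec:
  containers:
  - name: worker
    image: nvidia/cuda:11.8.0-base-ubuntu22.04
    resources:
      limits:
        aliyun.com/gpu-mem: 4   # assumed cGPU extended resource, in GiB
```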
Developer services

Open source projects

For more information about the open source projects that are used by ACK, see Open source projects.