Kubernetes is an open source orchestration platform that is commonly used to manage containerized applications and services. Container Service for Kubernetes (ACK) combines native Kubernetes with the virtualization, storage, networking, and security capabilities of Alibaba Cloud, and provides secure, high-performance, and scalable ACK clusters. ACK makes it easy to manage containerized applications with Kubernetes on Alibaba Cloud and simplifies the process of creating and scaling clusters. This allows you to focus on the development and management of containerized applications. This topic describes the features, types, and limits of ACK clusters.

Background information

Alibaba Cloud provides different types of ACK clusters to meet the requirements of diverse scenarios.
  • ACK clusters are the most commonly used type and are suitable for most scenarios.
  • Serverless Kubernetes (ASK) clusters are suitable for agile workloads that require quick scaling and for processing individual tasks or multiple parallel tasks. For more information, see ASK overview.
  • Edge Kubernetes clusters are the most suitable option for edge computing services such as Internet of Things (IoT) and Content Delivery Network (CDN). For more information, see ACK@Edge overview.

ACK also provides highly integrated solutions for sectors such as genomics computing and AI-empowered big data computing. ACK optimizes container performance based on the high-performance computing and networking capabilities of Infrastructure-as-a-Service (IaaS). ACK allows you to centrally manage clusters that are deployed in multi-cloud or hybrid cloud environments. You can log on to the ACK console to manage your Kubernetes clusters deployed in data centers or third-party clouds.

Limits

For more information about the limits of ACK clusters, see Limits.

Cluster lifecycle

The following table describes the different states of a cluster and the figure shows the transitions between the states.
Table 1. Cluster states
  • Initializing: Creating the cloud resources that are used by the cluster.
  • Creation Failed: Failed to create the cloud resources that are used by the cluster.
  • Running: The cloud resources used by the cluster are created.
  • Updating: Updating the metadata of the cluster.
  • Scaling: Adding nodes to the cluster.
  • Removing: Removing nodes from the cluster.
  • Upgrading: Upgrading the cluster.
  • Deleting: Deleting the cluster.
  • Deletion Failed: Failed to delete the cluster.
  • Deleted (invisible to users): The cluster is deleted.
Figure 1. State transitions

Cluster type

ACK clusters are classified into professional, standard, and dedicated clusters.

Professional Kubernetes cluster
  • Features: You need only to create worker nodes; ACK creates and manages the master nodes. Managed Kubernetes clusters are easy to use, cost-effective, and highly available, and you do not need to manage master nodes.
  • Billing methods: You are charged for cluster management based on the number of clusters. You are also charged for worker nodes and infrastructure resources. For more information, see Billing.
  • Scenarios: Applicable to the production and testing environments of enterprise users.

Standard Kubernetes cluster
  • Features: You need only to create worker nodes; ACK creates and manages the master nodes. Managed Kubernetes clusters are easy to use, cost-effective, and highly available, and you do not need to manage master nodes.
  • Billing methods: Cluster management is free of charge. However, you are charged for worker nodes and infrastructure resources. For more information, see Billing.
  • Scenarios: Applicable to the learning and testing needs of individual users.

Dedicated Kubernetes cluster
  • Features: You must create both master nodes and worker nodes. Dedicated Kubernetes clusters allow you to manage the cluster infrastructure in a finer-grained manner, but you must design, maintain, and upgrade the clusters on your own.
  • Billing methods: Cluster management is free of charge. However, you are charged for master nodes, worker nodes, and infrastructure resources. For more information, see Billing.
  • Scenarios: Applicable to the studies and customization of Kubernetes.

For more information about the differences between standard and professional Kubernetes clusters, see Comparison. For the cluster creation procedures, see Managed Kubernetes cluster (professional and standard clusters) and Dedicated Kubernetes cluster.

Features

The following table describes the features of ACK clusters.
Feature Description
Cluster management
  • Create clusters: You can create multiple types of clusters based on your requirements, choose from multiple types of worker nodes, and flexibly customize the configurations.
  • Upgrade clusters: You can upgrade Kubernetes with a few clicks and manage the upgrades of system components in a unified manner.
  • Manage node pools: You can manage the lifecycle of node pools and configure different specifications for the node pools in a cluster, such as vSwitches, container runtimes, operating systems, and security groups.
  • Elastic scaling: You can scale resources in and out in the console with a few clicks based on your requirements. Service-level affinity rules and vertical (scale-up) scaling are also supported.
  • Manage multiple clusters: You can manage applications in data centers and clusters across multiple clouds and regions in a unified manner.
  • Manage permissions: You can grant permissions to users in the RAM console or by using role-based access control (RBAC) policies (see the RBAC sketch after this list).
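Inside an ACK cluster, the RBAC part of permission management uses the standard Kubernetes objects. The following is a minimal sketch that grants read-only access to pods in one namespace; the namespace "dev" and the subject "dev-reader" are placeholders that you would map to your own RAM users or groups. Cluster-wide permissions work the same way with ClusterRole and ClusterRoleBinding.

```yaml
# A minimal RBAC sketch that grants read-only access to pods in one namespace;
# the namespace "dev" and the subject "dev-reader" are placeholders that you
# would map to your own RAM users or groups.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: dev
subjects:
  - kind: User
    name: dev-reader           # placeholder subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```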
Node pool

You can manage the lifecycle of node pools. You can configure different specifications for node pools in a cluster, such as vSwitches, runtimes, operating systems, and security groups. For more information, see Node pool overview.
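Node pools themselves are created and managed in the ACK console or through the ACK API rather than with manifests, but workloads can target a specific pool with ordinary label selectors. The sketch below assumes you configured a custom node label pool=general-purpose on a node pool; the label, Deployment name, and image are placeholders, not names defined by ACK.

```yaml
# A minimal scheduling sketch, assuming a node pool on which you configured
# the custom node label pool=general-purpose; the label, Deployment name,
# and image are placeholders, not names defined by ACK.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pool-pinned-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pool-pinned-app
  template:
    metadata:
      labels:
        app: pool-pinned-app
    spec:
      nodeSelector:
        pool: general-purpose   # label configured on the node pool (placeholder)
      containers:
        - name: app
          image: nginx:1.25     # placeholder image
```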

Application management
  • Create applications: You can create multiple types of applications from images and templates, and configure environment variables, application health checks, data disks, and logging.
  • Manage lifecycles: You can view, update, and delete applications, roll back application versions, view application events, perform rolling updates to replace old application versions with new ones, and use triggers to redeploy applications.
  • Schedule applications: You can schedule application pods based on pod affinity, node affinity, and pod anti-affinity policies.
  • Scale applications: You can scale the number of application pods manually or by using the Horizontal Pod Autoscaler (HPA) (see the HPA sketch after this list).
  • Release applications: Phased release and blue-green release are supported.
  • App Catalog: You can use App Catalog to simplify the integration of Alibaba Cloud services.
  • Application center: After an application is deployed, the application center displays the topology of the application on one page. You can also manage and roll back the application version in scenarios such as continuous deployment.
  • Back up and restore applications: You can back up Kubernetes applications and restore applications from backup data. For more information, see Back up and restore applications.
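As a concrete illustration of HPA-based scaling, the following is a minimal sketch, assuming a Deployment named "web" already exists and that a metrics source such as metrics-server is installed in the cluster; the names and thresholds are placeholders.

```yaml
# A minimal HPA sketch, assuming a Deployment named "web" already exists and
# that a metrics source such as metrics-server is installed in the cluster;
# names and thresholds are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU usage exceeds 70%
```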
Storage
  • Storage plug-ins: FlexVolume and CSI are supported.
  • Volumes and persistent volume claims (PVCs):
    • You can create Block Storage volumes, Apsara File Storage NAS (NAS) volumes, Object Storage Service (OSS) volumes, and Cloud Paralleled File System (CPFS) volumes.
    • You can bind a volume to a PVC.
    • You can dynamically create and migrate volumes (a PVC provisioning sketch follows this list).
    • You can view and update volumes and PVCs by running scripts.
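For dynamic provisioning, a PersistentVolumeClaim that references a StorageClass is enough; the backing volume is then created and bound automatically. The following is a minimal sketch, assuming the CSI plug-in is installed and that a disk StorageClass named alicloud-disk-ssd exists in the cluster; the StorageClass name is an assumption and may differ in your cluster.

```yaml
# A minimal dynamic-provisioning sketch, assuming the CSI plug-in is installed
# and a disk StorageClass named "alicloud-disk-ssd" exists in the cluster;
# the StorageClass name is an assumption, so check `kubectl get storageclass`.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce            # block storage disks are single-node volumes
  storageClassName: alicloud-disk-ssd
  resources:
    requests:
      storage: 20Gi
```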
Networking
  • You can create clusters in virtual private clouds (VPCs) and use the Flannel or Terway network plug-in.
  • You can specify the CIDR blocks of Services and pods.
  • You can use the NetworkPolicy feature to control traffic between pods.
  • You can use Ingresses to route requests from outside a cluster to Services within the cluster (see the Ingress sketch after this list).
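A minimal Ingress sketch, assuming an ingress controller is installed in the cluster and that a Service named "web" listens on port 80; the host name, Service name, and ingress class are placeholders.

```yaml
# A minimal Ingress sketch, assuming an ingress controller is installed and a
# Service named "web" listens on port 80; the host name, Service name, and
# ingress class are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx      # placeholder; use the class of your controller
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```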
O&M and security
  • Monitoring: You can monitor clusters, nodes, applications, and pods. You can use the Prometheus plug-in.
  • Logging: You can view cluster logs, pod logs, and application logs.
  • The Runtime Security page allows you to manage security policies of the container runtime, configure routine inspections of application security, and configure security monitoring and alerting on the runtime. This enhances the overall security capabilities of containers.
  • Sandboxed-Container allows you to run an application in a sandboxed, lightweight virtual machine that has a dedicated kernel, isolates applications from each other, and provides enhanced security. Sandboxed-Container is suitable for scenarios such as untrusted application isolation, fault isolation, performance isolation, and load isolation among multiple users (see the RuntimeClass sketch after this list).
  • TEE-based confidential computing is a cloud-native and all-in-one solution based on Intel Software Guard Extensions (SGX). This solution ensures data security, integrity, and confidentiality. Confidential computing allows you to isolate sensitive data and code by using a trusted execution environment.
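For Sandboxed-Container, the isolation boundary is selected per pod through a RuntimeClass. The following is a minimal sketch; the runtime class name "runv" is an assumption, so check the RuntimeClass objects that exist in your cluster, and the image is a placeholder.

```yaml
# A minimal Sandboxed-Container sketch; the RuntimeClass name "runv" is an
# assumption, so check the RuntimeClass objects in your cluster, and the
# image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-demo
spec:
  runtimeClassName: runv       # selects the sandboxed (VM-isolated) runtime
  containers:
    - name: app
      image: nginx:1.25        # placeholder image
```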
Heterogeneous computing
  • GPU and NPU computing: allows you to create clusters that use GPU-accelerated or NPU-accelerated instances as worker nodes, and supports scheduling, monitoring, auto scaling, and O&M management of GPU resources (see the GPU request sketch after this list). For more information, see Create a managed Kubernetes cluster with GPU-accelerated nodes.
  • GPU sharing: allows you to implement a GPU sharing framework in your cluster deployed in the cloud or in a data center to run multiple containers on a GPU-accelerated node. For more information, see cGPU overview.
  • Cloud-native AI: provides cloud-native AI computing capabilities and supports orchestration and management of data computing tasks. For more information, see Overview.
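For GPU computing, a pod requests accelerator resources through the standard extended-resource syntax. The following minimal sketch asks for one whole GPU, assuming the cluster has GPU-accelerated nodes with the NVIDIA device plug-in installed; the image is a placeholder, and GPU sharing (cGPU) uses its own resource name, which is described in the cGPU overview.

```yaml
# A minimal GPU-request sketch, assuming the cluster has GPU-accelerated nodes
# with the NVIDIA device plug-in installed; the image is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1    # request one whole GPU
```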
Developer services

Open source projects

For more information about the open source projects that are used by ACK, see Open source projects.

FAQ