Basic terms


Cluster

A collection of cloud resources that are required to run containers. Several cloud resources, such as Elastic Compute Service (ECS) instances, Server Load Balancer (SLB) instances, and Virtual Private Clouds (VPCs), are associated with each other to form a cluster.

Managed Kubernetes cluster

A cluster for which you only need to create worker nodes; Container Service for Kubernetes creates and manages the master nodes for you. This type of Kubernetes cluster is easy to use, cost-effective, and highly available because you do not need to manage the master nodes yourself.

Dedicated Kubernetes cluster

A cluster for which you must create three master nodes and several worker nodes to achieve high availability. This type of Kubernetes cluster allows you to manage the cluster infrastructure in a more fine-grained manner. It requires you to plan, maintain, and upgrade the Kubernetes cluster on your own.

Serverless Kubernetes cluster

A cluster for which you do not need to create and manage any master nodes or worker nodes. You can use the Container Service console or command-line interface to configure resources for containers, specify container images for applications, provide methods for external access, and start applications.


Node

A virtual machine (VM) or a physical server that has Docker Engine installed and is used to deploy and manage containers. The Agent program of Container Service is installed on a node and registered with a cluster. The number of nodes in a cluster can be scaled based on your requirements.


Container

A runtime instance created from a Docker image. A single node can run multiple containers.


Image

A standard packaging format of a containerized application in Docker. An image from Docker Hub, Alibaba Cloud Container Registry, or your private registry can be specified to deploy its packaged containerized application. An image ID is a unique identifier composed of the image repository URI and image tag. The default tag is latest.

Kubernetes terms

Master node

The manager of a Kubernetes cluster. It runs components such as kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and the container networking components. Generally, three master nodes are deployed to ensure high availability.

Worker node

A node in a Kubernetes cluster that carries workloads. It can be either a VM or a physical server. A worker node runs the pods that are scheduled to it and communicates with the master nodes. Components running on a worker node include the Docker runtime environment, kubelet, kube-proxy, and other optional add-on components.


Namespace

A method used in Kubernetes to divide cluster resources between multiple users. By default, Kubernetes starts with three initial namespaces: default, kube-system, and kube-public. Administrators can also create new namespaces as required.
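As an illustration, a namespace can be created declaratively with a manifest such as the following; the name dev is an arbitrary example:

```yaml
# Example manifest that creates a namespace named "dev".
# Apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```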


Pod

The smallest deployable computing unit that can be created and managed in Kubernetes. A pod encapsulates one or more containers, storage resources, a unique network IP address, and options that specify how the containers run.
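A minimal pod manifest looks like the following; the pod name and container image are illustrative:

```yaml
# Minimal example pod that runs a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # example name
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # example image and tag
    ports:
    - containerPort: 80
```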

Replication controller (RC)

A feature that monitors running pods to ensure that a specified number of pod replicas are running at any given time. One or more pod replicas can be specified. If the number of pod replicas is smaller than the specified value, an RC starts new pod replicas. If the number of pod replicas exceeds the specified value, the RC stops the redundant pod replicas.

Replica set (RS)

The successor to the RC. Compared with RCs, RSs support set-based selectors in addition to equality-based ones. RS objects are rarely used on their own; ideally, they are created and managed by deployments.
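The following sketch shows an RS with a set-based selector (matchExpressions), which RCs do not support; all names and labels are illustrative:

```yaml
# Example ReplicaSet using a set-based selector, a capability
# that RCs lack. Maintains three replicas of the pod template.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: app
      operator: In
      values: [web, frontend]   # matches pods whose app label is web OR frontend
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```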


Deployment

An object that provides declarative updates for pods and manages RSs. Deployments are more widely used than RSs. You can use deployments to create, update, or perform rolling updates for services. When you perform a rolling update for a service, a new RS is created, and a compound operation gradually increases the number of replicas in the new RS to the desired value while decreasing the number of replicas in the original RS to zero. This compound operation is better performed by a deployment than by manipulating RSs directly. We do not recommend that you manage or use the RSs created by a deployment.
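A deployment that performs the rolling update described above can be sketched as follows; the names, labels, and update parameters are example values:

```yaml
# Example Deployment. A rolling update is triggered by changing
# spec.template (for example, the image tag below).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the update
      maxUnavailable: 0  # keep full capacity while updating
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25   # change this tag to roll out a new version
```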


Service

The basic operation unit of Kubernetes and an abstraction of real application services. Each service is backed by one or more pods. kube-proxy and the service selector determine the back-end pod to which a service request is forwarded, while a single access interface is provided externally. This way, the back end can be scaled or maintained without users being aware of the changes.
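A service that selects its back-end pods by label can be written as follows; the service name, label, and ports are example values:

```yaml
# Example Service that forwards traffic on port 80 to pods
# labeled app=web, on their container port 8080.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # selects the backing pods by label
  ports:
  - port: 80          # port exposed by the service
    targetPort: 8080  # container port on the pods
  type: ClusterIP
```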


Label

A collection of key-value pairs attached to resource objects. Labels are intended to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be attached to objects at creation time, and subsequently added and modified at any time. Each object can have a set of key-value labels, and each key must be unique for a specified object.
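For example, labels can be attached in an object's metadata at creation time and changed later from the command line; the pod name and label values here are illustrative:

```yaml
# Example pod with two labels attached at creation time.
# Labels can also be modified later, for example:
#   kubectl label pod web-pod tier=frontend --overwrite
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web          # each key must be unique on this object
    env: production
spec:
  containers:
  - name: nginx
    image: nginx:1.25
```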


Volume

Kubernetes volumes are similar to Docker volumes. However, they differ in one key aspect: Docker volumes are used to persist data in Docker containers, whereas Kubernetes volumes share the same lifetime as the pods that enclose them. The volumes declared in a pod are shared by all containers in the pod. The actual back-end storage technology used is irrelevant when you use persistent volume claim (PVC) logical storage. The specific configurations for persistent volumes (PVs) are completed by storage administrators.
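The following sketch shows a pod-scoped volume shared by two containers; the pod name and commands are illustrative:

```yaml
# Example pod in which two containers share an emptyDir volume.
# The volume lives and dies with the pod.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}            # created when the pod starts, deleted with it
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data      # same volume, visible to both containers
```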

PV and PVC

PVs and PVCs allow Kubernetes clusters to provide a logical abstraction over the storage resources, so that the actual configurations of back-end storage can be ignored in the pod configuration logic, and instead completed by the PV configurators. The relationship between PVs and PVCs is similar to that between nodes and pods. PVs and nodes are resource providers which can vary by cluster infrastructure, and are configured by the administrators of a Kubernetes cluster. PVCs and pods are resource consumers that can vary based on service requirements, and are configured by either the users or service administrators of a Kubernetes cluster.
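The consumer side of this abstraction can be sketched as follows: a PVC requests storage only by size and access mode, and a pod mounts the claim rather than the underlying storage. The claim and pod names are example values:

```yaml
# Example PVC: the consumer requests storage by size and access
# mode without referencing any back-end storage details; the
# matching PV is provided by a cluster administrator.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
# A pod then mounts the claim, not the underlying storage.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim
```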


Ingress

A collection of rules that allow inbound access to cluster services. An Ingress can be configured to provide services with externally-reachable URLs, load-balance traffic, terminate SSL, and offer name-based virtual hosting. You can request the Ingress by posting Ingress resources to API servers. An Ingress controller is responsible for fulfilling an Ingress, usually with a load balancer. It can also be used to configure your edge router or additional front ends to help handle the traffic.
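An Ingress combining name-based virtual hosting and TLS termination can be sketched as follows; the host name, secret, and service are illustrative:

```yaml
# Example Ingress with name-based virtual hosting and TLS
# termination, fulfilled by whichever Ingress controller runs
# in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls   # secret holding the TLS certificate
  rules:
  - host: www.example.com     # name-based virtual host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc     # service receiving the traffic
            port:
              number: 80
```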