Container Service for Kubernetes (ACK) provides high-performance management services for containerized applications. You can use ACK to manage containerized applications that run on the cloud in a convenient and efficient manner. This topic describes how to use kubectl to deploy, expose, and monitor a containerized application in an ACK cluster.
Background information
- This topic demonstrates how to deploy an application named ack-cube in an ACK Pro cluster by using a container image. The application provides an online magic cube game.
- The container image used to deploy the sample application is built based on an open source project. The image address is registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0.
- kubectl is a command-line tool provided by Kubernetes that you can use to connect to and manage Kubernetes clusters. For more information about kubectl, see kubectl.
- Cloud Shell is a web-based command-line tool provided by Alibaba Cloud. You can use kubectl in Cloud Shell in the ACK console to manage ACK clusters without installing or configuring anything.
Prerequisites
You are familiar with the basic concepts of Kubernetes. For more information, see Terms.
Procedure

Step 1: Activate and grant permissions to ACK
If this is the first time you are using ACK, you must activate ACK and grant it permissions to access your cloud resources.
- Go to the Container Service for Kubernetes page.
- Read and select Container Service for Kubernetes Terms of Service.
- Click Activate Now.
- Log on to the ACK console.
- On the Container service needs to create default roles page, click Go to RAM console. On the Cloud Resource Access Authorization page, click Confirm Authorization Policy. After you assign the Resource Access Management (RAM) roles to ACK, log on to the ACK console again to get started with ACK.
Step 2: Create an ACK Pro cluster
This step shows how to create an ACK Pro cluster. Default settings are used for most cluster parameters. For more information about cluster parameters, see Create an ACK Pro cluster.
- Log on to the ACK console.
- In the left-side navigation pane of the ACK console, click Clusters.
- In the upper-right corner of the Clusters page, click Create Kubernetes Cluster.
- On the Managed Kubernetes tab, configure the following cluster parameters. Use the default settings for the parameters that are not listed.
  - Cluster Name: Enter a name for the cluster. In this example, the name is set to ACK-Demo.
  - Cluster Specification: The type of the cluster. In this example, Professional is selected. For more information about ACK Pro clusters, see Overview of ACK Pro clusters.
  - Region: Select a region in which to deploy the cluster. In this example, the China (Beijing) region is selected.
  - VPC: ACK clusters can be deployed only in virtual private clouds (VPCs). You must specify a VPC in the same region as the cluster. In this example, a VPC named vpc-ack-demo is created in the China (Beijing) region. To create a VPC, click Create VPC. For more information, see Create and manage a VPC.
  - vSwitch: Select the vSwitches that nodes in the cluster use to communicate with each other. In this example, a vSwitch named vswitch-ack-demo is created in the vpc-ack-demo VPC and selected in the vSwitch list. To create a vSwitch, click Create vSwitch. For more information, see Create and manage a vSwitch.
  - Access to API Server: Specify whether to expose the Kubernetes API server of the cluster to the Internet. If you want to manage the cluster over the Internet, you must expose the API server with an elastic IP address (EIP). In this example, Expose API Server with EIP is selected.
- Click Next: Node Pool Configurations and configure the following parameters. Use the default settings for the remaining parameters.
  - Instance Type: Select the instance types that are used to deploy nodes. To ensure the stability of the cluster, we recommend that you select instance types with at least 4 vCPUs and 8 GiB of memory. For more information about Elastic Compute Service (ECS) instance types and how to select them, see Overview of instance families and Select ECS instances to create the master and worker nodes of an ACK cluster. You can filter instance types by the number of vCPUs and the amount of memory, or enter an instance type in the search box to find it.
  - Quantity: Specify the number of worker nodes. In this example, the number is set to 2 to avoid service interruptions caused by single points of failure (SPOFs).
  - System Disk: Set the system disk for the nodes. In this example, Enhanced SSD is selected and the disk size is set to 40 GiB, the smallest size available.
  - Logon Type: Select the logon type for the nodes. In this example, password logon is selected and a password is specified.
- Click Next: Component Configurations. Use the default settings for all component parameters.
- Click Next: Confirm Order, read and select the Terms of Service, and then click Create Cluster. Note: It takes approximately 10 minutes to create the cluster. After the cluster is created, you can view it on the Clusters page.
Step 3: Connect to the cluster
This step shows how to connect to the ACK cluster by using a kubectl client or Cloud Shell. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster and Use kubectl on Cloud Shell to manage ACK clusters.
Method 1: Connect to the cluster by using a kubectl client
- Log on to the ACK console.
- In the left-side navigation pane of the ACK console, click Clusters.
- On the Clusters page, click the name of the cluster.
- On the Cluster Information page, click the Connection Information tab. On the Public Access tab, click Copy to copy the credential that is used to access the cluster over the Internet.
- Paste the credential into the config file in the $HOME/.kube directory, save the file, and then exit. Note: If the .kube folder or the config file does not exist in the $HOME/ directory, create them manually (see the sketch after this procedure).
- Run a kubectl command to verify that you can connect to the cluster. For example, run the following command to query the namespaces of the cluster:
kubectl get namespace
Expected output:
NAME              STATUS   AGE
arms-prom         Active   4h39m
default           Active   4h39m
kube-node-lease   Active   4h39m
kube-public       Active   4h39m
kube-system       Active   4h39m
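The following sketch shows one way to complete the last two steps from a terminal, assuming a Linux or macOS client with kubectl installed and the credential already copied from the console:
mkdir -p $HOME/.kube            # Create the .kube folder if it does not exist yet.
vim $HOME/.kube/config          # Paste the copied credential into this file, then save and exit. Any text editor works.
kubectl get namespace           # Verify that kubectl can reach the cluster.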
Method 2: Connect to the cluster by using Cloud Shell
- Log on to the ACK console.
- In the left-side navigation pane of the ACK console, click Clusters.
- On the Clusters page, find the cluster that you want to manage and open Cloud Shell from the Actions column. It takes a few seconds to start Cloud Shell. After Cloud Shell starts, you can run kubectl commands in the Cloud Shell window to manage the cluster and the applications deployed in it, as shown below.
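After Cloud Shell opens, you can run a quick check to confirm that it is connected to the intended cluster. For example:
kubectl get nodes   # List the worker nodes of the cluster that Cloud Shell is connected to.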
Step 4: Deploy and expose an application
This step shows how to use kubectl to deploy a stateless application by creating a Deployment and use a LoadBalancer Service to expose the application. For more information about how to expose an application, see Use an automatically created SLB instance to expose an application.
- Use the following YAML template to create an ack-cube.yaml file:
apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: ack-cube # The name of the application.
  labels:
    app: ack-cube
spec:
  replicas: 2 # The number of replicated pods.
  selector:
    matchLabels:
      app: ack-cube # You must specify the same value for the selector of the Service that is used to expose the application.
  template:
    metadata:
      labels:
        app: ack-cube
    spec:
      containers:
      - name: ack-cube
        image: registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0 # Replace with the address of the image that you want to use, in the format <image_name:tags>.
        ports:
        - containerPort: 80 # The container port that you want to open.
        resources:
          limits: # The resource limits of the application.
            cpu: '1'
            memory: 1Gi
          requests: # The resource requests of the application.
            cpu: 500m
            memory: 512Mi
- Run the following command to deploy the ack-cube application:
kubectl apply -f ack-cube.yaml
- Run the following command to query the status of the application:
kubectl get deployment ack-cube
Expected output:
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
ack-cube   2/2     2            2           96s
- Use the following YAML template to create an ack-cube-svc.yaml file. Set selector to the value of matchLabels in the ack-cube.yaml file (in this example, app: ack-cube). This adds the application to the backend of the Service.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ack-cube
  name: ack-cube-svc
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ack-cube # Must be the same as the value of the matchLabels parameter in the YAML file that is used to create the Deployment.
  type: LoadBalancer
- Run the following command to create a Service named ack-cube-svc and use the Service to expose the application. ACK automatically creates an Internet-facing Server Load Balancer (SLB) instance and associates the instance with the Service.
kubectl apply -f ack-cube-svc.yaml
- Run the following command to verify that the LoadBalancer Service is created. The application is exposed through the IP address shown in the EXTERNAL-IP column of the output. A few optional checks are sketched after this list.
kubectl get svc ack-cube-svc
Expected output:
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
ack-cube-svc   LoadBalancer   172.16.72.161   47.94.xx.xx   80:31547/TCP   32s
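The following optional checks assume that the manifests above were applied unchanged, so the pods carry the label app=ack-cube:
kubectl get pods -l app=ack-cube -o wide   # List the pods created by the Deployment and the nodes they run on.
kubectl get svc ack-cube-svc --watch       # Watch the Service until EXTERNAL-IP changes from <pending> to a public IP address.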
Step 5: Test the application
Enter the IP address (EXTERNAL-IP) in the address bar of your browser and press Enter to start the magic cube game.
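You can also verify reachability from a terminal. Replace the placeholder below with the EXTERNAL-IP value obtained in the previous step; if the application is serving traffic, the command typically returns an HTTP 200 response:
curl -I http://<EXTERNAL-IP>   # Replace <EXTERNAL-IP> with the IP address from the EXTERNAL-IP column.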
Step 6: Monitor the application
For information about how to monitor the application, see Step 5: Monitor the application.
References
- To enable the auto scaling of application pods, you can configure the Horizontal Pod Autoscaler (HPA), Cron Horizontal Pod Autoscaler (CronHPA), and Vertical Pod Autoscaler (VPA). For more information, see Auto scaling overview. A minimal HPA sketch follows this list.
- In addition to exposing applications through Services, you can use Ingresses to enable application traffic routing at Layer 7. For more information, see Create an NGINX Ingress.
- In addition to monitoring container performance, you can also monitor the cluster infrastructure, application performance, and your business operations. For more information, see Observability overview.
- To avoid unnecessary costs, we recommend that you delete clusters that are no longer in use. For more information, see Delete a cluster.
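As a minimal illustration of the HPA item above, the following commands create an HPA for the ack-cube Deployment that is deployed in this topic. The thresholds are examples only, and the HPA relies on the CPU requests that are already defined in the Deployment manifest:
kubectl autoscale deployment ack-cube --cpu-percent=70 --min=2 --max=5   # Scale between 2 and 5 replicas, targeting about 70% average CPU utilization.
kubectl get hpa ack-cube                                                 # Check the current and target metrics of the HPA.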