Container Service for Kubernetes (ACK) provides high-performance management services for containerized applications. You can use ACK to manage containerized applications that run on the cloud in a convenient and efficient manner. This topic describes how to use kubectl to deploy, expose, and monitor a containerized application in an ACK cluster.

Background information

  • This topic demonstrates how to deploy an application named ack-cube in an ACK Pro cluster by using a container image. The application provides an online magic cube game.
  • The container image used to deploy the sample application is built based on an open source project. The image address is
  • kubectl is a command-line tool that Kubernetes provides for you to connect to and manage Kubernetes clusters. For more information about kubectl, see kubectl.
  • Cloud Shell is a web-based command-line tool provided by Alibaba Cloud. You can use kubectl in Cloud Shell in the ACK console to manage ACK clusters. Installation and configuration are not required.


Prerequisites

You are familiar with the basic concepts of Kubernetes. For more information, see Terms.



Step 1: Activate and grant permissions to ACK

If this is the first time you use ACK, you must activate ACK and grant ACK the permissions to access cloud resources.

  1. Go to the Container Service for Kubernetes page.
  2. Read and select Container Service for Kubernetes Terms of Service.
  3. Click Activate Now.
  4. Log on to the ACK console.
  5. On the Container service needs to create default roles page, click Go to RAM console. On the Cloud Resource Access Authorization page, click Confirm Authorization Policy.
    After you assign the Resource Access Management (RAM) roles to ACK, log on to the ACK console again to get started with ACK.

Step 2: Create an ACK Pro cluster

This step shows how to create an ACK Pro cluster. Default settings are used for most cluster parameters. For more information about cluster parameters, see Create an ACK Pro cluster.

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. In the upper-right corner of the Clusters page, click Create Kubernetes Cluster.
  4. On the Managed Kubernetes tab, configure the cluster parameters described below. Use default settings for the parameters that are not described.
    Cluster Name: Enter a name for the cluster. In this example, the name is set to ACK-Demo.
    Cluster Specification: The type of the cluster. In this example, Professional is selected. For more information about ACK Pro clusters, see Overview of ACK Pro clusters.
    Region: Select a region to deploy the cluster. In this example, the China (Beijing) region is selected.
    VPC: ACK clusters can be deployed only in virtual private clouds (VPCs). You must specify a VPC in the same region as the cluster. In this example, a VPC named vpc-ack-demo is created in the China (Beijing) region. To create a VPC, click Create VPC. For more information, see Create and manage a VPC.
    vSwitch: Select vSwitches for nodes in the cluster to communicate with each other. In this example, a vSwitch named vswitch-ack-demo is created in the vpc-ack-demo VPC and selected in the vSwitch list. To create a vSwitch, click Create vSwitch. For more information, see Create and manage a vSwitch.
    Access to API Server: Specify whether to expose the Kubernetes API server of the cluster to the Internet. If you want to manage the cluster over the Internet, you must expose the Kubernetes API server with an elastic IP address (EIP). In this example, Expose API Server with EIP is selected.

  5. Click Next:Node Pool Configurations. Configure the following parameters as described. Use default parameters for the remaining parameters.
    Instance Type: Select the instance types that are used to deploy nodes. To ensure the stability of the cluster, we recommend that you select instance types with at least 4 vCPUs and 8 GiB of memory. For more information about Elastic Compute Service (ECS) instance types and how to select them, see Overview of instance families and Select ECS instances to create the master and worker nodes of an ACK cluster. You can filter instance types by the number of vCPUs and the amount of memory, or search for a specific instance type in the search box.
    Quantity: Specify the number of worker nodes. In this example, the number is set to 2 to avoid service interruptions caused by single points of failure (SPOFs).
    System Disk: Set the system disk for nodes. In this example, Enhanced SSD (ESSD) is selected and the disk size is set to 40 GiB, the smallest size available.
    Logon Type: Select the logon type for nodes. In this example, password logon is selected and a password is specified.
  6. Click Next:Component Configurations. Use default settings for all component parameters.
  7. Click Next:Confirm Order, read and select Terms of Service, and then click Create Cluster.
    Note: It takes approximately 10 minutes to create a cluster. After the cluster is created, you can view it on the Clusters page.

Step 3: Connect to the cluster

This step shows how to connect to the ACK cluster by using a kubectl client or Cloud Shell. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster and Use kubectl on Cloud Shell to manage ACK clusters.

Method 1: Connect to the cluster by using a kubectl client

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, click the name of the cluster.
  4. On the Cluster Information page, click the Connection Information tab. Click Copy on the Public Access tab. This way, the credential used to access the cluster over the Internet is copied.
  5. Paste the credential to the config file in the $HOME/.kube directory, save the file, and then exit.
    Note If the .kube folder and the config file do not exist in the $HOME/ directory, you must manually create the folder and file.
  6. Run a kubectl command to connect to the cluster.
    Run the following command to query the namespaces of the cluster:
    kubectl get namespace
    Expected output:
    NAME              STATUS   AGE
    arms-prom         Active   4h39m
    default           Active   4h39m
    kube-node-lease   Active   4h39m
    kube-public       Active   4h39m
    kube-system       Active   4h39m
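    The kubeconfig setup in the steps above can be sketched in a terminal as follows. This is a minimal sketch: the credential itself still has to be pasted into the file manually.

    ```shell
    # Create the .kube directory and the config file if they do not exist,
    # then restrict access because the file holds the cluster credential.
    mkdir -p "$HOME/.kube"
    touch "$HOME/.kube/config"
    chmod 600 "$HOME/.kube/config"
    # Paste the credential copied from the Connection Information tab into
    # $HOME/.kube/config with a text editor, then verify the connection with:
    #   kubectl get namespace
    ls -l "$HOME/.kube/config"
    ```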

Method 2: Connect to the cluster by using Cloud Shell

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and choose More > Open Cloud Shell in the Actions column.
    It takes a few seconds to start Cloud Shell. After Cloud Shell starts, you can run kubectl commands in Cloud Shell to manage the cluster and the applications deployed in it.

Step 4: Deploy and expose an application

This step shows how to use kubectl to deploy a stateless application by creating a Deployment and use a LoadBalancer Service to expose the application. For more information about how to expose an application, see Use an automatically created SLB instance to expose an application.

  1. Use the following YAML template to create an ack-cube.yaml file:
    apiVersion: apps/v1 # for versions before 1.8.0 use apps/v1beta1
    kind: Deployment
    metadata:
      name: ack-cube # The name of the application. 
      labels:
        app: ack-cube
    spec:
      replicas: 2 # The number of replicated pods. 
      selector:
        matchLabels:
          app: ack-cube  # You must specify the same value for the selector of the Service that is used to expose the application. 
      template:
        metadata:
          labels:
            app: ack-cube
        spec:
          containers:
          - name: ack-cube
            image: # Replace with the address of the image that you want to use in the format of <image_name:tags>. 
            ports:
            - containerPort: 80 # The container port that you want to open. 
            resources:
              limits: # The resource limits of the application. 
                cpu: '1'
                memory: 1Gi
              requests: # The resource requests of the application.
                cpu: 500m
                memory: 512Mi
  2. Run the following command to deploy the ack-cube application:
    kubectl apply -f ack-cube.yaml
  3. Run the following command to query the status of the application:
    kubectl get deployment ack-cube
    Expected output:
    NAME       READY   UP-TO-DATE   AVAILABLE   AGE
    ack-cube   2/2     2            2           96s
  4. Use the following YAML template to create an ack-cube-svc.yaml file. Set selector to the value of matchLabels in the ack-cube.yaml file. In this example, the value is app: ack-cube. This adds the application to the backend of the Service.
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: ack-cube
      name: ack-cube-svc
      namespace: default
    spec:
      ports:
      - port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: ack-cube # You must specify the value of the matchLabels parameter in the YAML file that is used to create the Deployment. 
      type: LoadBalancer
  5. Run the following command to create a Service named ack-cube-svc and use the Service to expose the application.
    ACK automatically creates an Internet-facing Server Load Balancer (SLB) instance and associates the instance with the Service.
    kubectl apply -f ack-cube-svc.yaml
  6. Run the following command to verify that the LoadBalancer Service is created.
    The application that you created is exposed by using the IP address in the EXTERNAL-IP column in the output.
    kubectl get svc ack-cube-svc
    Expected output:
    NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    ack-cube-svc   LoadBalancer   192.168.xx.xx   47.94.xx.xx   80:31547/TCP   32s
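    When scripting against this output, the external IP can be read from the EXTERNAL-IP column. A minimal sketch using placeholder text (in a real cluster, substitute the actual `kubectl get svc ack-cube-svc` output; the IP values here are placeholders):

    ```shell
    # Placeholder output; in practice use: output=$(kubectl get svc ack-cube-svc)
    output='NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    ack-cube-svc   LoadBalancer   192.168.xx.xx   47.94.xx.xx   80:31547/TCP   32s'
    # The EXTERNAL-IP is the fourth column of the second row.
    external_ip=$(printf '%s\n' "$output" | awk 'NR==2 {print $4}')
    echo "$external_ip"
    ```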

Step 5: Test the application

Enter the IP address (EXTERNAL-IP) in the address bar of your browser and press Enter to start the magic cube game.
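If you prefer the command line, you can also check reachability with curl. A sketch with a hypothetical placeholder address (replace it with the EXTERNAL-IP value of your Service):

```shell
# Hypothetical placeholder; replace with the EXTERNAL-IP of ack-cube-svc.
EXTERNAL_IP="47.94.xx.xx"
# URL to open in a browser; from a terminal you could instead run:
#   curl -I "http://${EXTERNAL_IP}"   # a 200 response means the app is reachable
echo "http://${EXTERNAL_IP}"
```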

Step 6: Monitor the application

For more information, see Step 5: Monitor the application.

What to do next

  • To enable the auto scaling of application pods, you can configure the Horizontal Pod Autoscaler (HPA), Cron Horizontal Pod Autoscaler (CronHPA), and Vertical Pod Autoscaler (VPA). For more information, see Auto scaling overview.
  • In addition to exposing applications through Services, you can use Ingresses to enable application traffic routing at Layer 7. For more information, see Create an NGINX Ingress.
  • In addition to monitoring container performance, you can also monitor the cluster infrastructure, application performance, and your business operations. For more information, see Observability overview.
  • To avoid unnecessary costs, we recommend that you delete clusters no longer in use. For more information, see Delete a cluster.
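
As a sketch of the HPA mentioned above, a manifest like the following (the HPA name and threshold are illustrative, not from this topic) scales the ack-cube Deployment between 2 and 5 replicas based on CPU usage. Apply it with kubectl apply -f:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ack-cube-hpa   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ack-cube     # the Deployment created in Step 4
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU usage exceeds 70%
```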