Container Service for Kubernetes (ACK) provides high-performance management services for containerized applications. You can use ACK to manage containerized applications that run on the cloud in a convenient and efficient manner. This topic describes how to use the ACK console to deploy, expose, and monitor a containerized application in an ACK cluster.

Background information

  • This topic demonstrates how to deploy an application named ack-cube in a professional Kubernetes cluster by using a container image. The application provides an online magic cube game. After you complete the steps in this topic, you have a professional Kubernetes cluster that runs the magic cube game.
  • The container image used to deploy the sample application is built based on an open source project. The image address is registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0.
  • Standard Kubernetes clusters and professional Kubernetes clusters are both managed Kubernetes clusters. Compared with standard Kubernetes clusters, professional Kubernetes clusters provide higher stability and enhanced security, and are covered by the service level agreement (SLA) that includes compensation clauses. For more information about the billing of ACK clusters and the cloud resources used by ACK clusters, see Billing.

Prerequisites

You are familiar with the basic concepts of Kubernetes. For more information, see Basic concepts.

Procedure


Step 1: Activate and grant permissions to ACK

If this is the first time that you use ACK, you must activate ACK and grant it permissions to access your cloud resources.

  1. Go to the Container Service for Kubernetes page.
  2. Read and select Container Service for Kubernetes Terms of Service.
  3. Click Activate Now.
  4. Log on to the ACK console.
  5. On the Container service needs to create default roles page, click Go to RAM console. On the Cloud Resource Access Authorization page, click Confirm Authorization Policy.
    After you assign the Resource Access Management (RAM) roles to ACK, log on to the ACK console again to get started with ACK.

Step 2: Create a professional Kubernetes cluster

This step shows how to create a professional Kubernetes cluster. Default settings are used for most cluster parameters. For more information about cluster parameters, see Create a professional managed Kubernetes cluster.

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. In the upper-right corner of the Clusters page, click Create Kubernetes Cluster.
  4. On the Managed Kubernetes tab, set the cluster parameters as described below. Use default settings for the parameters that are not listed.
    Cluster configuration:
    • Cluster Name: Enter a name for the cluster. In this example, the name is set to ACK-Demo.
    • Cluster Specification: Select the cluster type. In this example, Professional is selected. For more information about professional Kubernetes clusters, see Introduction to professional managed Kubernetes clusters.
    • Region: Select the region in which to deploy the cluster. In this example, the China (Beijing) region is selected.
    • VPC: ACK clusters can be deployed only in virtual private clouds (VPCs). You must specify a VPC in the same region as the cluster. In this example, a VPC named vpc-ack-demo is created in the China (Beijing) region. To create a VPC, click Create VPC. For more information, see Work with VPCs.
    • vSwitch: Select the vSwitches that nodes in the cluster use to communicate with each other. In this example, a vSwitch named vswitch-ack-demo is created in the vpc-ack-demo VPC and selected in the vSwitch list. To create a vSwitch, click Create vSwitch. For more information, see Work with vSwitches.
    • Access to API Server: Specify whether to expose the cluster API server to the Internet. If you want to manage the cluster over the Internet, you must expose the API server with an elastic IP address (EIP). In this example, Expose API Server with EIP is selected.

  5. Click Next:Worker Configurations. Set the worker node parameters as described below. Use default settings for the parameters that are not listed.
    Worker node configuration:
    • Instance Type: Select the instance types that are used to deploy worker nodes. To ensure the stability of the cluster, we recommend that you select instance types with at least 4 vCPUs and 8 GiB of memory. For more information about Elastic Compute Service (ECS) instance types and how to select them, see Select ECS instance types and Select ECS instances to create the master and worker nodes of an ACK cluster. In this example, the ecs.g5.xlarge instance type is selected. You can enter ecs.g5.xlarge in the search box and click the search icon.
    • Quantity: Specify the number of worker nodes based on your business requirements. In this example, the number is set to 2 to avoid service interruptions caused by single points of failure (SPOFs).
    • System Disk: Select the system disk type and size for the worker nodes. In this example, Enhanced SSD is selected and the disk size is set to 40 GiB, which is the smallest size available.
    • Logon Type: Select the logon type for the worker nodes. In this example, password logon is selected and a password is specified.
  6. Click Next:Component Configurations. Use default settings for all component parameters.
  7. Click Next:Confirm Order. Read and select Terms of Service. Click Create Cluster.
    Note: It takes approximately 10 minutes to create the cluster. After the cluster is created, you can view it on the Clusters page.
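
If you prefer to verify the new cluster from the command line, a minimal check with kubectl is sketched below. It assumes that you have already obtained the cluster's kubeconfig from the cluster details page in the console and configured kubectl to use it; the exact console path for downloading the kubeconfig may vary.

  # Verify that the cluster is reachable and that both worker nodes are registered.
  # Requires a kubeconfig for the ACK-Demo cluster (see the cluster details page).
  kubectl get nodes -o wide

  # Both worker nodes should reach the Ready status a few minutes after the cluster is created.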

Step 3: Create and expose an application

This step shows how to deploy a stateless application by using a Deployment and expose the application to the Internet. This application provides a magic cube game. For more information about the parameters used to create a Deployment, see Create a stateless application by using a Deployment.

  1. On the Clusters page, click the name of the ACK-Demo cluster.
  2. In the left-side navigation pane of the details page, choose Workloads > Deployments.
  3. In the upper-right corner of the Deployments page, click Create from Image.
  4. On the Basic Information wizard page, set the application name to ack-cube.
  5. Click Next. On the Container wizard page, set the container parameters as described below. (A declarative equivalent of the resulting Deployment is sketched after this procedure.)
    Container configuration:
    • Image Name: You can enter an image address without a tag or click Select Image to select the image that you want to use. In this example, registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube is specified.
    • Image Version: Click Select Image Version and select an image version. If you do not specify a version, the latest version is used. In this example, 1.0 is specified.
    • Resource Limit: Specify the maximum amount of resources that the application can use. This prevents the application from occupying excessive resources. In this example, the limits are set to 1 CPU core and 1,024 MiB of memory. Ephemeral Storage is left empty.
    • Required Resources: Specify the amount of resources that are reserved for the application. This prevents application unavailability caused by insufficient resources. In this example, 0.5 CPU cores and 512 MiB of memory are specified. Ephemeral Storage is left empty.
    • Port: Configure the container port. In this example, TCP port 80 is configured and named ack-cube.
  6. Click Next. On the Advanced wizard page, click Create to the right side of Services.
  7. In the Create Service dialog box, set the Service parameters as described below and click Create. This creates a Service that exposes the ack-cube application. (A corresponding Service manifest is sketched after this procedure.)
    Service configuration:
    • Name: Enter a name for the Service. In this example, the name is set to ack-cube-svc.
    • Type: The type of Service, which determines how the Service is accessed. Select Server Load Balancer, then select Public Access and Create SLB Instance. You can click Modify to change the SLB instance specification based on your business requirements. In this example, the default specification Small I (slb.s1.small) is used.
    • Port Mapping: Specify a Service port and a container port. The container port must be the same as the port that is exposed by the backend pod. In this example, both the Service port and the container port are set to 80.
  8. In the lower-right corner of the Advanced wizard page, click Create.
    After the application is created, you are redirected to the Complete wizard page. You can view the resource objects that are included in the application and click View Details to view the application details.
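
For reference, the following manifest is a rough declarative equivalent of the Deployment that the wizard above configures (the sample image with tag 1.0, TCP port 80, and the resource requests and limits used in this example). It is a sketch rather than the exact object that the console generates: the labels, replica count, and field layout are illustrative assumptions, so compare it with the live object (for example, by running kubectl get deployment ack-cube -o yaml) before reusing it.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: ack-cube
    namespace: default
    labels:
      app: ack-cube              # illustrative label; the console may apply different labels
  spec:
    replicas: 2                  # illustrative; adjust to the replica count set in the wizard
    selector:
      matchLabels:
        app: ack-cube
    template:
      metadata:
        labels:
          app: ack-cube
      spec:
        containers:
        - name: ack-cube
          image: registry.cn-hangzhou.aliyuncs.com/acr-toolkit/ack-cube:1.0
          ports:
          - name: ack-cube       # container port named ack-cube, as in the wizard
            containerPort: 80
            protocol: TCP
          resources:
            requests:            # Required Resources in the wizard
              cpu: 500m          # 0.5 CPU cores
              memory: 512Mi
            limits:              # Resource Limit in the wizard
              cpu: "1"           # 1 CPU core
              memory: 1024Mi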
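
Similarly, the LoadBalancer Service created in the dialog box above roughly corresponds to the following manifest. The selector is an assumption and must match the labels that are actually applied to the ack-cube pods. The SLB instance specification (Small I in this example) is controlled through ACK-specific Service annotations that are omitted here; see the ACK documentation for the exact annotation names.

  apiVersion: v1
  kind: Service
  metadata:
    name: ack-cube-svc
    namespace: default
  spec:
    type: LoadBalancer           # corresponds to Server Load Balancer with Public Access
    selector:
      app: ack-cube              # assumption: must match the pod labels of the ack-cube Deployment
    ports:
    - name: ack-cube
      port: 80                   # Service port
      targetPort: 80             # container port exposed by the backend pod
      protocol: TCP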

Step 4: Test access to the application

This step shows how to access the application by using the Service.

  1. On the Clusters page, click the name of the ACK-Demo cluster.
  2. In the left-side navigation pane of the details page, choose Network > Services.
  3. On the Services page, find the ack-cube-svc Service and click the IP address in the External Endpoint column to start the magic cube game.
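
If you prefer to check the endpoint from the command line, a minimal sketch follows. The placeholder <EXTERNAL-IP> stands for the address that kubectl reports, which is the same address that the console shows in the External Endpoint column.

  # Look up the public endpoint of the LoadBalancer Service.
  kubectl get service ack-cube-svc -n default

  # The EXTERNAL-IP column shows the public SLB address. Quick reachability check:
  curl -I http://<EXTERNAL-IP>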

Step 5: Monitor the application

This step shows how to monitor the status of the application based on metrics such as the CPU usage, memory usage, and network I/O.

  1. On the Clusters page, click the name of the ACK-Demo cluster.
  2. In the left-side navigation pane of the details page, choose Operations > Prometheus Monitoring.
  3. On the Prometheus Monitoring page, click the Deployment tab. Set namespace to default and deployment to ack-cube.
    Then, you can view the resource usage of the selected application, including the requested resources and the resource limits.
  4. On the Prometheus Monitoring page, click the Pod tab. Set namespace to default and Pod to the pod that you want to monitor.
    The page displays the resource usage of the selected pod.
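
In addition to the Prometheus dashboards, you can spot-check resource usage from the command line with kubectl top, assuming that the metrics-server component is installed in the cluster. Note that these figures come from metrics-server rather than from Prometheus, so use them only as a quick sanity check.

  # Current CPU and memory usage of the pods in the default namespace (requires metrics-server).
  kubectl top pods -n default

  # Current CPU and memory usage of the worker nodes.
  kubectl top nodes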

References

  • To enable the auto scaling of application pods, you can configure the Horizontal Pod Autoscaler (HPA), Cron Horizontal Pod Autoscaler (CronHPA), and Vertical Pod Autoscaler (VPA). For more information, see Auto scaling overview. A minimal HPA example is sketched after this list.
  • In addition to exposing applications through Services, you can use Ingresses to enable application traffic routing at Layer 7. For more information, see Create an Ingress.
  • In addition to monitoring container performance, you can also monitor the cluster infrastructure, application performance, and your business operations. For more information, see Observability overview.
  • To avoid unnecessary costs, we recommend that you delete clusters no longer in use. For more information, see Delete an ACK cluster.
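
As an illustration of the HPA mentioned above, the following manifest is a minimal sketch that scales the ack-cube Deployment between 2 and 5 replicas based on average CPU utilization. The name, replica bounds, and utilization threshold are illustrative assumptions, and the HPA requires a metrics source such as metrics-server in the cluster. You can apply it with kubectl apply -f, or configure equivalent behavior in the console as described in Auto scaling overview.

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: ack-cube-hpa           # illustrative name
    namespace: default
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: ack-cube             # the Deployment created in Step 3
    minReplicas: 2               # illustrative lower bound
    maxReplicas: 5               # illustrative upper bound
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # scale out when average CPU utilization exceeds 70%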