Prerequisites

Create a Kubernetes cluster. For more information, see Create a cluster.

Procedure

  1. Log on to the Container Service console.
  2. Under Kubernetes, click Application > Deployment in the left-side navigation pane to go to the Deployment List page. Then, click Create by image in the upper-right corner.
  3. Enter the application Name, then select the Cluster and Namespace. Click Next to go to the Configuration step.
    If you do not configure a namespace, the system uses the default namespace.
  4. Configure the general settings for the application.
    • Image name: You can click Select image to select the image in the displayed dialog box and then click OK. In this example, the image name is nginx.

      You can also enter the address of a private registry in the format of domainname/namespace/imagename:tag.

    • Image version: Click Select image version to select the version. If the image version is not specified, the system uses the latest version by default.
    • Scale: Specify the number of pod replicas to start. In this example, the pod contains only one container. If you specify a scale greater than 1, the same number of identical pods is started. A minimal manifest sketch follows this list.
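
    The settings in this step correspond to fields of the Deployment object that the console creates. The following is a minimal sketch of such a manifest; the apps/v1 API version and the app: nginx label are assumptions, while the name, namespace, image, and scale values are taken from this example.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-default-deployment   # application name used in this example
        namespace: default               # default namespace is used if none is selected
      spec:
        replicas: 1                      # Scale: number of pods to start
        selector:
          matchLabels:
            app: nginx                   # assumed label linking the pods to the service
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:latest        # image name and version; latest is the default version
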
  5. Configure the resource limit and resource reserve for the container.
    • Resource Limit: Specify the upper limit for the resources (CPU and memory) that can be used by this application to avoid occupying excessive resources.
    • Resource Request: Specify the amount of resources (CPU and memory) reserved for the application, that is, resources that are dedicated to the container. Without a resource request, other services or processes compete for resources when resources are insufficient, and the application may become unavailable. Specifying a resource request prevents this.
    CPU is measured in millicores (one thousandth of a core). Memory is measured in bytes and can be expressed in Gi, Mi, or Ki. A manifest fragment illustrating these settings follows.
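
    In manifest form, the resource request and limit are set per container. The following fragment is a sketch with illustrative values, added under spec.template.spec.containers[0] of the Deployment sketched in step 4:

      resources:
        requests:
          cpu: 250m          # 250 millicores reserved for the container (example value)
          memory: 64Mi       # reserved memory; Gi, Mi, and Ki suffixes are accepted
        limits:
          cpu: 500m          # upper limit on CPU the application can use (example value)
          memory: 128Mi      # upper limit on memory the application can use (example value)
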
  6. Configure the data volumes.

    Local storage and cloud storage can be configured.

    • Local storage: Supports hostPath, ConfigMap, Secret, and temporary directories. A local data volume mounts the corresponding mount source to a path in the container. For more information, see volumes.
    • Cloud storage: Supports three types of cloud storage: cloud disk, Network Attached Storage (NAS), and Object Storage Service (OSS).
    In this example, a cloud disk data volume is configured and mounted to the /tmp container path. Data generated in this path is stored on the cloud disk. A volume manifest sketch follows this step.
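
    As a rough manifest equivalent, volumes are declared under the pod spec and mounted into the container with volumeMounts. The cloud disk volume uses an Alibaba Cloud storage plug-in whose exact fields are not shown here; the sketch below only illustrates the local variants, reusing the /tmp container path from this example with a temporary directory and assuming the hostPath /data:

      # added under spec.template.spec of the Deployment
      volumes:
      - name: host-data
        hostPath:
          path: /data            # hypothetical directory on the node
      - name: scratch
        emptyDir: {}             # temporary directory, deleted together with the pod
      # added under spec.template.spec.containers[0]
      volumeMounts:
      - name: scratch
        mountPath: /tmp          # container path used in this example
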
  7. Configure the environment variables.

    You can configure environment variables for the pod as key-value pairs to add environment labels or pass configuration to the pod, as in the manifest fragment below. For more information, see Pod variable.
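
    In the pod spec, these key-value pairs appear as an env list on the container. The variable names below are hypothetical:

      # added under spec.template.spec.containers[0]
      env:
      - name: DEPLOY_ENV         # hypothetical key used as an environment label
        value: "production"
      - name: NGINX_PORT         # hypothetical key that passes configuration to the pod
        value: "80"
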

  8. Configure the container.

    You can configure the Command, Args, and Container Config for the container running in the pod.

    • Command and Args: If not configured, the default settings of the image are used. If configured, they overwrite the default settings of the image. If only Args is configured, the default command runs with the new arguments when the container starts. Command and Args cannot be modified after the pod is created.
    • Container Config: Select the stdin check box to enable standard input for the container. Select the tty check box to assign a virtual terminal so that signals can be sent to the container. These two options are usually used together, which binds the terminal (tty) to the container's standard input (stdin). For example, an interactive program reads your standard input and then displays it in the terminal. A manifest fragment showing these fields follows this list.
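
    These settings map to the command, args, stdin, and tty fields of the container. The following fragment is a sketch; the command and arguments shown are the defaults of the public nginx image and are listed only to illustrate how the image defaults would be overwritten:

      # added under spec.template.spec.containers[0]
      command: ["nginx"]             # Command: overrides the image default command
      args: ["-g", "daemon off;"]    # Args: overrides the image default arguments
      stdin: true                    # stdin check box: keep standard input open
      tty: true                      # tty check box: allocate a virtual terminal
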
  9. Configure the health check.

    The health check function includes liveness probes and readiness probes. Liveness probes are used to detect when to restart the container. Readiness probes determine if the container is ready for receiving traffic. For more information about health checks, see https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes.

    The following describes the configuration for each request method.
    HTTP request: An HTTP GET request is sent to the container. The following parameters are supported:
    • Protocol: HTTP or HTTPS.
    • Path: The path to access on the HTTP server.
    • Port: The number or name of the port exposed by the container. The port number must be in the range of 1 to 65535.
    • HTTP Header: Custom headers in the HTTP request. HTTP allows repeated headers. You can configure the headers as key-value pairs.
    • Initial Delay (in seconds): Namely, initialDelaySeconds. The number of seconds to wait after the container has started before the first liveness or readiness probe is initiated.
    • Period (in seconds): Namely, periodSeconds. The interval at which the probe is performed. The default value is 10 seconds and the minimum value is 1 second.
    • Timeout (in seconds): Namely, timeoutSeconds. The number of seconds after which the probe times out. The default value is 1 second and the minimum value is 1 second.
    • Success Threshold: The minimum number of consecutive successes for the probe to be considered successful after it has failed. The default value is 1 and the minimum value is 1. This parameter must be 1 for liveness probes.
    • Failure Threshold: The number of consecutive failures after which the probe is considered failed following a success. The default value is 3 and the minimum value is 1.
    TCP connection: The kubelet attempts to open a TCP socket to your container on the specified port. If the connection can be established, the container is considered healthy. If not, the probe is considered failed. The following parameters are supported:
    • Port: The number or name of the port exposed by the container. The port number must be in the range of 1 to 65535.
    • Initial Delay (in seconds): Namely, initialDelaySeconds. The number of seconds to wait after the container has started before the first liveness or readiness probe is initiated.
    • Period (in seconds): Namely, periodSeconds. The interval at which the probe is performed. The default value is 10 seconds and the minimum value is 1 second.
    • Timeout (in seconds): Namely, timeoutSeconds. The number of seconds after which the probe times out. The default value is 1 second and the minimum value is 1 second.
    • Success Threshold: The minimum number of consecutive successes for the probe to be considered successful after it has failed. The default value is 1 and the minimum value is 1. This parameter must be 1 for liveness probes.
    • Failure Threshold: The number of consecutive failures after which the probe is considered failed following a success. The default value is 3 and the minimum value is 1.
    Command line: The health of the container is detected by running a probe command inside the container. The following parameters are supported:
    • Command: The probe command used to detect the health of the container.
    • Initial Delay (in seconds): Namely, initialDelaySeconds. The number of seconds to wait after the container has started before the first liveness or readiness probe is initiated.
    • Period (in seconds): Namely, periodSeconds. The interval at which the probe is performed. The default value is 10 seconds and the minimum value is 1 second.
    • Timeout (in seconds): Namely, timeoutSeconds. The number of seconds after which the probe times out. The default value is 1 second and the minimum value is 1 second.
    • Success Threshold: The minimum number of consecutive successes for the probe to be considered successful after it has failed. The default value is 1 and the minimum value is 1. This parameter must be 1 for liveness probes.
    • Failure Threshold: The number of consecutive failures after which the probe is considered failed following a success. The default value is 3 and the minimum value is 1.
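
    These settings map to the livenessProbe and readinessProbe fields of the container. The sketch below shows one probe of each kind for the nginx example; the port, path, header, and probe command are assumptions:

      # added under spec.template.spec.containers[0]
      livenessProbe:
        httpGet:                     # HTTP request method
          scheme: HTTP               # Protocol
          path: /                    # Path (assumed)
          port: 80                   # Port
          httpHeaders:
          - name: X-Custom-Header    # hypothetical custom header
            value: health-check
        initialDelaySeconds: 10      # Initial Delay
        periodSeconds: 10            # Period
        timeoutSeconds: 1            # Timeout
        successThreshold: 1          # Success Threshold (must be 1 for liveness probes)
        failureThreshold: 3          # Failure Threshold
      readinessProbe:
        tcpSocket:                   # TCP connection method
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      # a command-line probe would use exec instead, for example:
      #   exec:
      #     command: ["cat", "/tmp/healthy"]   # hypothetical probe command
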
  10. Select whether to enable Auto Scaling.
    To meet the demands of applications under different loads, Container Service supports container auto scaling, which automatically adjusts the number of containers based on CPU and memory usage.
    Note To enable auto scaling, you must configure the required resources for the deployment. Otherwise, auto scaling cannot take effect.
    • Metric: CPU or memory usage. Configure the resource type as needed.
    • Condition: The resource usage threshold, as a percentage. Scale-out starts when resource usage exceeds this value.
    • Maximum number of containers: The maximum number of containers that the deployment can scale out to.
    • Minimum number of containers: The minimum number of containers that the deployment can scale in to.
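
    Auto scaling corresponds to a HorizontalPodAutoscaler object that targets the deployment. The following is a minimal sketch based on CPU usage; the object name, the 70% threshold, and the replica bounds are example values:

      apiVersion: autoscaling/v1
      kind: HorizontalPodAutoscaler
      metadata:
        name: nginx-default-hpa              # hypothetical name
        namespace: default
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: nginx-default-deployment     # deployment created in this example
        minReplicas: 1                       # minimum number of containers
        maxReplicas: 10                      # maximum number of containers
        targetCPUUtilizationPercentage: 70   # condition: scale out above this CPU usage
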
  11. Click Next after completing the configurations.
  12. In the Access Control step, configure a service to bind with the backend pods. Click Create after completing the access control configurations.
    • Service: Select None to not create a service, or select a service type as follows:
      • ClusterIP: Exposes the service by using the internal IP address of your cluster. With this type selected, the service is accessible only within the cluster.
      • NodePort: Exposes the service by using the IP address and static port (NodePort) on each node. A ClusterIP service, to which the NodePort service is routed, is automatically created. You can access the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.
      • Server Load Balancer: Exposes the service by using Server Load Balancer, which is provided by Alibaba Cloud. Select public or inner to make the service accessible over the Internet or the internal network. Server Load Balancer can route to NodePort and ClusterIP services.
    • Name: By default, a service name composed of the application name and the suffix svc is generated. In this example, the generated service name is nginx-default-svc. You can modify the service name as needed.
    • Port Mapping: Add the service port and the container port. If NodePort is selected as the service type, you must also configure the node port to avoid port conflicts. Select TCP or UDP as the Protocol. A service manifest sketch follows this list.
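
    The access control settings correspond to a Service object that selects the backend pods of the deployment. The sketch below shows a NodePort service with one port mapping; the selector label and the node port value are assumptions:

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-default-svc        # service name generated in this example
        namespace: default
      spec:
        type: NodePort                 # ClusterIP, NodePort, or LoadBalancer (Server Load Balancer)
        selector:
          app: nginx                   # assumed label on the backend pods
        ports:
        - protocol: TCP
          port: 80                     # service port
          targetPort: 80               # container port
          nodePort: 30080              # node port (NodePort type only); example value
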
  13. After the creation succeeds, the Done step appears and displays the objects contained in the application. You can click View to go to the deployment list.
  14. The newly created deployment nginx-default-deployment is displayed on the Deployment page.
  15. Click Application > Service in the left-side navigation pane. The newly created service nginx-default-svc is displayed on the Service List page.
  16. Visit the external endpoint in your browser to view the Nginx welcome page.