Create a Kubernetes cluster. For more information, see Create a cluster.


  1. Log on to the Container Service console.
  2. Click Kubernetes > Application > Deployment in the left-side navigation pane. Then click Create by image in the upper-right corner.
  3. Enter the application Name, and then select the Cluster and Namespace. Click Next to go to the Configuration step.
    If no namespace is specified, the system uses the default namespace.

  4. Configure the general settings for the application.
    • Image Name: Click Select image, choose an image in the displayed dialog box, and then click OK. In this example, the image name is nginx.

      You can also enter the address of a private registry in the format domainname/namespace/imagename:tag.

    • Image Version: Click Select image version to select the image version. If the image version is not specified, the system uses the latest version by default.
    • Scale: Specify the number of pods. In this example, only one pod is started, and the pod contains a single container. If you specify a larger number, the same number of identical pods will be started.
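    For reference, the settings in this step correspond to fields of a Kubernetes Deployment manifest. The following is an illustrative sketch only; the console generates the actual manifest, and the app: nginx label is an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-default-deployment   # name used in this example
  namespace: default               # used when no namespace is selected
spec:
  replicas: 1                      # the Scale setting
  selector:
    matchLabels:
      app: nginx                   # assumed label; the console sets its own
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest        # Image Name and Version; latest if unspecified
```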

  5. Configure the resource limits and resource requests for the container.
    • Resource Limit: Specify the upper limit for the resources (CPU and memory) that can be used by this application to avoid occupying excessive resources.
    • Resource Request: Specify how many resources (CPU and memory) are reserved for the application; these resources are dedicated to the container. Without a resource request, other services or processes compete for resources when resources run short, which can make the application unavailable.
    CPU is measured in millicores (thousandths of a core). Memory is measured in bytes and can be expressed in Gi, Mi, or Ki.
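    In manifest terms, these settings map to the container's resources field. The values below are illustrative, not recommendations:

```yaml
resources:
  requests:          # Resource Request: reserved for this container
    cpu: 250m        # millicores: 250m = 0.25 core
    memory: 64Mi
  limits:            # Resource Limit: upper bound the container may use
    cpu: 500m
    memory: 128Mi
```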

  6. Configure the data volumes.

    You can configure hostPath data volumes, which mount files or directories from the host file system into the pod. For more information, see hostPath in Volumes.

    In this example, configure a hostPath data volume named data. Use the host directory /tmp and mount the data volume to the /var/lib/docker directory in the container.
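    The equivalent manifest fragment for this example might look as follows (the container name is an assumption):

```yaml
spec:
  volumes:
  - name: data
    hostPath:
      path: /tmp                     # directory on the host
  containers:
  - name: nginx                      # assumed container name
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /var/lib/docker     # mount point inside the container
```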

  7. Configure the environment variables.

    You can configure environment variables for the pod as key-value pairs to add environment labels or pass configurations to the pod. For more information, see Pod variable.
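    In a manifest, such key-value pairs appear under the container's env field; the variable below is hypothetical:

```yaml
env:
- name: ENVIRONMENT   # hypothetical key
  value: "test"       # hypothetical value
```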

  8. Configure the container.

    You can configure the Command, Args, and Container Config for the container running in the pod.

    • Command and Args: If not configured, the default settings of the image are used. If configured, they overwrite the default settings of the image. If only Args is configured, the default command runs with the new arguments when the container starts. Command and Args cannot be modified after the pod is created.
    • Container Config: Select the stdin check box to enable standard input for the container. Select the tty check box to assign a virtual terminal so that you can send signals to the container. These two options are usually used together, which binds the terminal (tty) to the container's standard input (stdin). For example, an interactive program obtains your standard input and then displays it in the terminal.
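    In manifest terms, these options correspond to the following container fields. The command and arguments shown are illustrative, not the only valid values for this image:

```yaml
containers:
- name: nginx
  image: nginx:latest
  command: ["nginx"]            # overrides the image's default entrypoint
  args: ["-g", "daemon off;"]   # overrides the image's default arguments
  stdin: true                   # enable standard input for the container
  tty: true                     # allocate a virtual terminal
```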
  9. Select whether to enable auto scaling.

    To meet the demands of applications under different loads, Container Service supports container auto scaling, which automatically adjusts the number of containers according to container CPU and memory usage.
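    This feature corresponds to Kubernetes horizontal pod autoscaling. A sketch of an equivalent HorizontalPodAutoscaler resource, assuming the deployment name from this example (the HPA name, replica bounds, and CPU target are assumptions):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                      # assumed name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-default-deployment     # deployment created in this example
  minReplicas: 1                       # illustrative bounds
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70   # scale out above 70% average CPU
```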

  10. Click Next after completing the configurations. In the Access Control step, configure a service to bind with the backend pods.

    • Service: Select None if you do not want to create a service, or select one of the following service types:
      • ClusterIP: Exposes the service by using the internal IP address of your cluster. With this type selected, the service is only accessible from within the cluster.
      • NodePort: Exposes the service by using the IP address and static port (NodePort) on each node. A ClusterIP service, to which the NodePort service is routed, is automatically created. You can access the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.
      • Server Load Balancer: Exposes the service by using Alibaba Cloud Server Load Balancer. Select public or inner to make the service accessible over the Internet or the intranet. Server Load Balancer can route to NodePort and ClusterIP services.
    • Name: By default, a service name composed of the application name and the suffix svc is generated. In this example, the generated service name is nginx-default-svc. You can modify the service name as needed.
    • Port Mapping: Add the service port and the container port. If NodePort is selected as the service type, you must also configure the node port to avoid port conflicts. Select TCP or UDP as the Protocol.
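    The service configured in this step corresponds to a manifest like the following sketch, here with NodePort chosen (the ports and selector label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-default-svc   # name generated in this example
  namespace: default
spec:
  type: NodePort            # or ClusterIP / LoadBalancer
  selector:
    app: nginx              # assumed pod label
  ports:
  - protocol: TCP
    port: 80                # service port
    targetPort: 80          # container port
    nodePort: 30080         # illustrative; must not conflict with other services
```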
  11. Click Create after completing the access control configurations.
    The Done step appears, indicating successful creation, and the objects contained in the application are displayed. You can click View to view the deployment list.

  12. The newly created deployment nginx-default-deployment is displayed on the Deployment page.

  13. Click Application > Service in the left-side navigation pane. The newly created service nginx-default-svc is displayed on the Service List page.

  14. Open the external endpoint in your browser to view the Nginx welcome page.