This topic describes how to use an image to create an NGINX application that runs in sandboxed containers. The NGINX application is accessible over the Internet.

Prerequisites

A Container Service for Kubernetes (ACK) cluster that runs sandboxed containers is created. For more information, see Create a managed Kubernetes cluster that runs sandboxed containers.

Procedure

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
  4. In the left-side navigation pane of the details page, choose Workloads > Deployments.
  5. On the Deployments tab, click Create from Image.
  6. On the Basic Information wizard page, set the basic information and parameters for the application, and then click Next.

    Set Name, Replicas, Type, Annotations, and Label. Specify whether to synchronize the time zone from the node to the container. The number of replicas equals the number of replicated pods provisioned for the application.

    Note

    In this example, Deployment is selected as the application type.

  7. Configure containers.
    Note In the upper part of the Container wizard page, click Add Container to add more containers for the application.
    The following parameters are required to configure the containers.
    • General settings
      Parameter Description
      Image Name Click Select Image. In the dialog box that appears, select an image and click OK. In this example, an NGINX image is selected.

      You can also enter the path of an image stored in a private registry. The image path must be in the domainname/namespace/imagename:tag format.

      Image Version

      • Click Select Image Version and select an image version. If you do not specify an image version, the latest image version is used.
      • You can select the following image pull policies:
        • IfNotPresent: If the image that you want to pull is found on your on-premises machine, the image on your on-premises machine is used. Otherwise, ACK pulls the image from the corresponding repository.
        • Always: ACK pulls the image from Container Registry each time the application is deployed or scaled out.
        • Never: ACK uses only images on your on-premises machine.
        Note If you do not set Image Pull Policy, the default image pull policy is applied.
      • To pull the image without a password, click Set Image Pull Secret to set a Secret that is used to pull the image. For more information, see Use the aliyun-acr-credential-helper component to pull images without a password.
      Resource Limit You can specify an upper limit for the CPU, memory, and ephemeral storage space that the container can consume. This prevents the container from occupying an excessive amount of resources. The CPU resource is measured in millicores (one thousandth of one core). The memory resource is measured in MiB. The ephemeral storage resource is measured in GiB.
      Required Resources The amount of CPU and memory resources that are reserved for this application. These resources are exclusive to the container. This prevents the application from becoming unavailable if other services or processes compete for computing resources.
      Container Start Parameter
      • stdin: Pass stdin to the container.
      • tty: Stdin is a TeleTYpewriter (TTY).
      Privileged Container
      • If you select Privileged Container, privileged=true is set for the container and the privilege mode is enabled.
      • If you do not select Privileged Container, privileged=false is set for the container and the privilege mode is disabled.
      Init Container If you select Init Container, an init container is created. An init container provides tools to manage pods. For more information, see Init Containers.
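      These general settings map to the container section of the Deployment YAML. The following is a minimal sketch of what they might produce, assuming an NGINX image; the tag and resource values are example values, not defaults:

      containers:
      - name: nginx
        image: nginx:1.7.9            # or domainname/namespace/imagename:tag for a private registry
        imagePullPolicy: IfNotPresent # Image Pull Policy
        resources:
          requests:                   # Required Resources reserved for the container
            cpu: 250m                 # millicores
            memory: 512Mi
          limits:                     # Resource Limit that the container cannot exceed
            cpu: 500m
            memory: 1Gi
            ephemeral-storage: 2Gi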
    • Optional: Ports
      Configure container ports.
      • Name: Enter a name for the port.
      • Container Port: Enter the container port that you want to open. Enter a port number from 1 to 65535.
      • Protocol: TCP or UDP.
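      As a sketch, these settings correspond to the ports field of the container; the port name and number below are examples:

      ports:
      - name: http          # Name
        containerPort: 80   # Container Port (1 to 65535)
        protocol: TCP       # Protocol: TCP or UDP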
    • Optional: Environments
      You can configure environment variables in key-value pairs for pods. Environment variables are used to apply pod configurations to containers. For more information, see Pod variables.
      Parameter Description

      Type

      Select the type of environment variable. Valid values:
      • Custom
      • ConfigMaps
      • Secrets
      • Value/ValueFrom
      • ResourceFieldRef
      If you select ConfigMaps or Secrets, you can pass all data in the selected ConfigMap or Secret to the container environment variables. In this example, Secrets is selected. Select Secrets from the Type drop-down list and select a Secret from the Value/ValueFrom drop-down list. By default, all data in the selected Secret is passed to the environment variables.
      In this case, the YAML file that is used to deploy the application contains the settings that reference all data in the selected Secret, as shown in the sketch after this table.

      Variable Key

      Specify the name of the environment variable.

      Value/ValueFrom

      Specify a reference that is used to define the environment variable.

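      The following is a minimal sketch of the resulting environment settings, assuming a Secret named nginx-secret (a placeholder name). A Custom variable is defined under env, and passing all data in a Secret uses envFrom:

      env:
      - name: LOG_LEVEL        # Variable Key of a Custom variable (example)
        value: "info"          # Value/ValueFrom
      envFrom:
      - secretRef:
          name: nginx-secret   # hypothetical Secret selected from Value/ValueFrom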
    • Optional: Health Check

      Health check settings include liveness, readiness, and startup probes. Liveness probes determine when to restart the container. Readiness probes determine whether the container is ready to accept network traffic. Startup probes detect whether the application in the container is started. For more information about health checks, see Configure Liveness, Readiness, and Startup Probes.

      Request type Description
      HTTP Sends an HTTP GET request to the container. You can configure the following parameters:
      • Protocol: HTTP or HTTPS.
      • Path: the requested path on the server.
      • Port: Enter the container port that you want to open. Enter a port number from 1 to 65535.
      • HTTP Header: Enter the custom headers in the HTTP request. Duplicate headers are allowed. Key-value pairs are supported.
      • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the time (in seconds) that the system must wait before it can send a probe to the container after the container is started. Default value: 3.
      • Period (s): the periodSeconds field in the YAML file. This field specifies the interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.
      • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.
      • Healthy Threshold: the minimum number of times that an unhealthy container must consecutively pass health checks before it is considered healthy. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.
      • Unhealthy Threshold: the minimum number of times that a healthy container must consecutively fail health checks before it is considered unhealthy. Default value: 3. Minimum value: 1.
      TCP Performs a TCP socket check on the container. The kubelet attempts to open a socket to the container on the specified port. If the connection can be established, the container is considered healthy. Otherwise, the container is considered unhealthy. You can set the following parameters:
      • Port: Enter the container port that you want to open. Enter a port number from 1 to 65535.
      • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the time (in seconds) that the system must wait before it can send a probe to the container after the container is started. Default value: 15.
      • Period (s): the periodSeconds field in the YAML file. This field specifies the interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.
      • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.
      • Healthy Threshold: the minimum number of times that an unhealthy container must consecutively pass health checks before it is considered healthy. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.
      • Unhealthy Threshold: the minimum number of times that a healthy container must consecutively fail health checks before it is considered unhealthy. Default value: 3. Minimum value: 1.
      Command Runs a probe command in the container to check the health status of the container. You can set the following parameters:
      • Command: the probe command that is run to check the health status of the container.
      • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the time (in seconds) that the system must wait before it can send a probe to the container after the container is started. Default value: 5.
      • Period (s): the periodSeconds field in the YAML file. This field specifies the interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.
      • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.
      • Healthy Threshold: the minimum number of times that an unhealthy container must consecutively pass health checks before it is considered healthy. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.
      • Unhealthy Threshold: the minimum number of times that a healthy container must consecutively fail health checks before it is considered unhealthy. Default value: 3. Minimum value: 1.
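      These probe settings map to the livenessProbe, readinessProbe, and startupProbe fields of the container. The following minimal sketch shows an HTTP readiness probe and a TCP liveness probe that use the default values listed above; the port and path are examples:

      readinessProbe:
        httpGet:
          path: /              # Path
          port: 80             # Port
          scheme: HTTP         # Protocol
        initialDelaySeconds: 3
        periodSeconds: 10
        timeoutSeconds: 1
        successThreshold: 1    # Healthy Threshold
        failureThreshold: 3    # Unhealthy Threshold
      livenessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 10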
    • Lifecycle

      You can configure the lifecycle of the container by using the following parameters: Start, Post Start, and Pre Stop. For more information, see Configure the lifecycle of a container.

      • Start: Set the command and parameters that are run to start the container.
      • Post Start: Set the command that takes effect after the container starts.
      • Pre Stop: Set the command that takes effect before the container stops.
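      As a sketch, these settings correspond to the command, args, and lifecycle fields of the container; the commands below are illustrative only:

      command: ["nginx"]                 # Start: container start command
      args: ["-g", "daemon off;"]        # Start: parameters
      lifecycle:
        postStart:                       # Post Start
          exec:
            command: ["/bin/sh", "-c", "echo started > /tmp/started"]
        preStop:                         # Pre Stop
          exec:
            command: ["/bin/sh", "-c", "nginx -s quit"]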
    • Optional: Volume
      You can mount local volumes and persistent volume claims (PVCs) to the container.
      • Add Local Storage: You can select HostPath, ConfigMap, Secret, and EmptyDir. The specified volume is mounted to a path in the container. For more information, see Volumes.
      • Add PVC: You can select Cloud Storage.
      In this example, a PVC named disk-ssd is mounted to the /tmp path of the container.
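      A minimal sketch of the corresponding YAML, assuming the disk-ssd PVC already exists in the namespace (volumeMounts belongs to the container and volumes belongs to the pod spec):

      volumeMounts:
      - name: data
        mountPath: /tmp            # mount path in the container
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: disk-ssd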
    • Optional: Log
      Configure Log Service. You can specify collection configurations and add custom tags.
      Notice Make sure that the Log Service agent is installed in the cluster.
      Parameter Description
      Collection Configuration Logstore: creates a Logstore in Log Service to store collected log data.
      Log Path in Container: specifies stdout or a path to collect log data.
      • stdout: specifies that the stdout files are collected.
      • Text Logs: specifies that log data in the specified path of the container is collected. In this example, /var/log/nginx is specified as the path. Wildcard characters can be used to specify the path.
      Custom Tag You can also add custom tags. Custom tags are added to the log data of the container when the log data is collected. Log data with tags is easier to aggregate and filter.
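      The console typically implements these settings by adding environment variables with the aliyun_logs_ prefix to the container. The following sketch assumes a Logstore named nginx-log and is illustrative only; verify the exact variables that the console generates for your cluster:

      env:
      - name: aliyun_logs_nginx-log        # collect logs into the nginx-log Logstore
        value: /var/log/nginx/*.log        # stdout or a Text Logs path; wildcards are allowed
      - name: aliyun_logs_nginx-log_tags   # Custom Tag added to the collected log data
        value: app=nginx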
  8. Set the preceding parameters based on your business requirements and click Next.
  9. Optional: Configure advanced settings.
    • Access Control
      Note

      You can configure the following access control settings based on your business requirements:

      • Internal applications: For applications that run inside the cluster, you can create a ClusterIP or NodePort Service to enable internal communication.
      • External applications: For applications that are open to the Internet, you can configure access control by using one of the following methods:
        • Create a LoadBalancer Service and enable access to your application over the Internet by using a Server Load Balancer (SLB) instance.
        • Create an Ingress and use the Ingress to expose your application to the Internet. For more information, see Ingress.

      You can also specify how the backend pods are exposed to the Internet. In this example, a ClusterIP Service and an Ingress are created to expose the NGINX application to the Internet.

      Parameter Description
      Services Click Create on the right side of Services. In the Create Service dialog box, set the parameters. For more information about the parameters that are required to create a Service, see Manage Services. Cluster IP is selected in this example.
      Ingresses Click Create on the right side of Ingresses. In the Create dialog box, set the parameters. For more information, see Create an Ingress.
      Note When you deploy an application from an image, you can create an Ingress only for one Service. In this example, a virtual hostname is used as the test domain name. You must add the following entry to the hosts file to map the domain name to the IP address of the Ingress. In actual scenarios, use a domain name that has obtained an Internet Content Provider (ICP) number.
      101.37.224.146   foo.bar.com    #The IP address of the Ingress.

      You can find the created Service and Ingress in the Access Control section. You can click Update or Delete to change the configurations.
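      As a sketch of the objects that these settings create (the Service and Ingress names and port numbers are placeholders, and the Ingress API version depends on the Kubernetes version of your cluster):

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-svc
      spec:
        type: ClusterIP
        selector:
          app: nginx
        ports:
        - port: 80
          targetPort: 80
      ---
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: nginx-ingress
      spec:
        rules:
        - host: foo.bar.com          # the test domain name used in this example
          http:
            paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: nginx-svc
                  port:
                    number: 80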

    • Scaling
      In the Scaling section, specify whether to enable HPA and CronHPA. This allows you to meet the resource requirements of the application at different load levels.
      • HPA can automatically scale the number of pods in an ACK cluster based on the CPU and memory usage.
        Note To enable HPA, you must configure required resources for the container. Otherwise, HPA does not take effect.
        Parameter Description
        Metric Select CPU Usage or Memory Usage. The selected resource type must be the same as that specified in the Required Resources field.
        Condition Specify the resource usage threshold. HPA triggers scaling activities when the threshold is exceeded.
        Max. Replicas Specify the maximum number of pod replicas to which the application can be scaled.
        Min. Replicas Specify the minimum number of pod replicas that must run.
      • CronHPA can scale an ACK cluster at a scheduled time. For more information about CronHPA, see Create CronHPA jobs.
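      A minimal sketch of the HorizontalPodAutoscaler that such settings produce, assuming CPU Usage is selected as the metric with a 70% threshold (names, numbers, and the autoscaling API version are examples that depend on your cluster):

      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: nginx-hpa
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: nginx
        minReplicas: 2                 # Min. Replicas
        maxReplicas: 10                # Max. Replicas
        metrics:
        - type: Resource
          resource:
            name: cpu                  # Metric: CPU Usage
            target:
              type: Utilization
              averageUtilization: 70   # Condition: resource usage threshold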
    • Scheduling
      You can set the following parameters: Update Method, Node Affinity, Pod Affinity, Pod Anti Affinity, and Toleration. For more information, see Affinity and anti-affinity.
      Note Node affinity and pod affinity affect pod scheduling based on node labels and pod labels. You can add node labels and pod labels that are provided by Kubernetes to configure node affinity and pod affinity. You can also add custom labels to nodes and pods, and then configure node affinity and pod affinity based on these custom labels.
      Parameter Description
      Update Method

      Select Rolling Update or OnDelete. For more information, see Deployments.

      Node Affinity Configure node affinity by selecting worker node labels.
      Node affinity supports required and preferred rules, and various operators, such as In, NotIn, Exists, DoesNotExist, Gt, and Lt.
      • Required: Specify the rules that must be matched for pod scheduling. In the YAML file, these rules are defined by the requiredDuringSchedulingIgnoredDuringExecution field of the nodeAffinity parameter. These rules have the same effect as the NodeSelector parameter. In this example, pods can be scheduled only to nodes with the specified labels. You can create multiple required rules. However, only one of them must be met.
      • Preferred: Specify the rules that are not required to be matched for pod scheduling. If multiple nodes match the required rules, pods are preferentially scheduled to a node that also matches the preferred rules. In the YAML file, these rules are defined by the preferredDuringSchedulingIgnoredDuringExecution field of the nodeAffinity parameter. In this example, the scheduler attempts to schedule a pod to a node that matches the preferred rules. You can also set weights for preferred rules. If multiple nodes match the rules, the node with the highest weight is preferred. You can create multiple preferred rules. They do not all have to be met for the pod to be scheduled.
      Pod Affinity Pod affinity rules specify how pods are deployed relative to other pods in the same topology domain. For example, you can use pod affinity to deploy services that communicate with each other to the same topological domain, such as a host. This reduces the network latency between these services.
      Pod affinity enables you to select nodes to which pods can be scheduled based on the labels of other running pods. Pod affinity supports required and preferred rules, and the following operators: In, NotIn, Exists, and DoesNotExist.
      • Required: Specify rules that must be matched for pod scheduling. In the YAML file, these rules are defined by the requiredDuringSchedulingIgnoredDuringExecution field of the podAffinity parameter. A node must match the required rules before pods can be scheduled to the node.
        • Namespace: Specify the namespace to apply the required rule. Pod affinity rules are defined based on the labels that are added to pods and therefore must be scoped to a namespace.
        • Topological Domain: Set the topologyKey. This specifies the key for the node label that the system uses to denote the topological domain. For example, if you set the parameter to kubernetes.io/hostname, topologies are determined by nodes. If you set the parameter to beta.kubernetes.io/os, topologies are determined by the operating systems of nodes.
        • Selector: Click Add to add pod labels.
        • View Applications: Click View Applications and set the namespace and application in the dialog box that appears. You can view the pod labels on the selected application and select the labels as selectors.
        • Required Rules: Specify labels on existing applications, the operator, and the label value. In this example, the required rule specifies that the application to be created is scheduled to a host that runs applications with the app:nginx label.
      • Preferred: Specify rules that are not required to be matched for pod scheduling. In the YAML file, preferred rules are defined by the preferredDuringSchedulingIgnoredDuringExecution field of the podAffinity parameter. The scheduler attempts to schedule the pod to a node that matches the preferred rules. You can set weights for preferred rules. The other parameters are the same as those of required rules.
        Note Weight: Set the weight of a preferred rule to a value from 1 to 100. The scheduler calculates the weight of each node that meets the preferred rule based on an algorithm, and then schedules the pod to the node with the highest weight.
      Pod Anti-affinity
      Pod anti-affinity rules specify that pods are not scheduled to topological domains where pods with matching labels are deployed. Pod anti-affinity rules apply to the following scenarios:
      • Schedule the pods of an application to different topological domains, such as multiple hosts. This allows you to enhance the stability of the service.
      • Grant a pod exclusive access to a node. This enables resource isolation and ensures that no other pod can share the resources of the specified node.
      • Schedule pods of an application to different hosts if the pods may interfere with each other.
      Note The parameters of pod anti-affinity rules are the same as those of pod affinity rules. You can create the rules for different scenarios.
      Toleration Configure toleration rules to allow pods to be scheduled to nodes with matching taints.
      Schedule to Virtual Nodes Specify whether to schedule pods to virtual nodes. This option is unavailable if the cluster does not contain a virtual node.
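      As a sketch, the pod affinity example above (schedule the pods to a host that already runs pods with the app: nginx label), together with an anti-affinity preferred rule and a toleration, could be expressed as follows. The toleration key and value are placeholders; affinity and tolerations belong to the pod spec:

      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:    # Required rule
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["nginx"]
            topologyKey: kubernetes.io/hostname              # Topological Domain
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:   # Preferred rule with a weight
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values: ["nginx"]
              topologyKey: kubernetes.io/hostname
      tolerations:
      - key: "dedicated"                                     # placeholder taint key
        operator: "Equal"
        value: "sandbox"                                     # placeholder taint value
        effect: "NoSchedule"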
    • Labels and Annotations
      • Pod Labels: Add a label to the pod. The label is used to identify the application.
      • Pod Annotations: Add an annotation to the pod.
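      These settings correspond to the metadata of the pod template in the Deployment, for example:

      template:
        metadata:
          labels:
            app: nginx                  # Pod Labels
          annotations:
            description: "nginx demo"   # Pod Annotations (example value)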
  10. Click Create.
    After the application is deployed, you are redirected to the Complete page. You can find the resource objects under the application and click View Details to view application details.
  11. Go to the details page of the cluster. In the left-side navigation pane, click Ingresses.
    You can find the newly created Ingress rule on the page.

Result

Enter the test domain name in the address bar of your browser and press Enter. The NGINX welcome page appears.