This topic describes how to use an image to create an NGINX application based on sandboxed containers and make it accessible to the Internet.

Prerequisites

You have created a Kubernetes cluster that supports sandboxed containers. For more information, see Create a managed ACK cluster that supports sandboxed containers.

Procedure

  1. Log on to the ACK console.
  2. In the left-side navigation pane, choose Applications > Deployments and then click Create from Image in the upper-right corner.
  3. Set the following parameters: Name, Cluster, Namespace, Replicas, Type, Container Runtime, Annotations, and Labels. The replicas parameter denotes the number of pods that run the application. Click Next.
    Note
    • In this example, select Deployment as the application type.
    • Container Runtime: You can select runc or runv. The default is runc. To create an application based on sandboxed containers, select runv.
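    In YAML terms, the runtime choice maps to the runtimeClassName field of the pod template. The following is a minimal sketch, assuming the cluster registers a RuntimeClass named runv for sandboxed containers:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 2                      # Replicas: the number of pods
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          runtimeClassName: runv       # run the pods in sandboxed containers
          containers:
          - name: nginx
            image: nginx:latest
    ```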

    If you do not specify the Namespace parameter, the default namespace is used.

  4. Configure containers.
    Note You can configure multiple containers for the pods of the application.
    1. Container general settings.
      • Image Name: You can click Select Image to select an image in the dialog box that appears. In this example, select NGINX and click OK.

        You can also specify a private registry in the format of domainname/namespace/imagename:tag.

      • Image Version: You can click Select Image Version to select a version. If you do not specify the image version, the latest version is used by default.
      • Always Pull Images: To improve loading efficiency, Container Service caches images. During deployment, if the specified image version is the same as the cached version, the cached image is reused. Therefore, if you update the application code but keep the image version unchanged, for example, to remain compatible with upper-layer workloads, the stale cached image is deployed. Select this check box to make Container Service always pull the image from the repository when it deploys the application. This ensures that the latest image and code are used.
      • Set Image Pull Secret: Click Set Image Pull Secret to set an image pull secret. You must set the secret if you need to access a private repository. For more information, see Use an image Secret.
      • Resource Limit: The upper limits of CPU and memory resources that can be used by this application. This prevents the application from using excessive resources. CPU usage is measured in cores. Memory usage is measured in MiB.
      • Required Resources: The amount of CPU and memory resources that are reserved for this application. These resources are exclusive to the container. This prevents the application from being unavailable when other services or processes compete for resources.
      • Init Container: When this option is selected, the system creates an Init Container that contains useful tools. For more information, see Init Containers.
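      The settings above map to the following fields of the pod spec. This is an illustrative sketch; the registry address, secret name, and resource values are placeholders:

      ```yaml
      spec:
        imagePullSecrets:
        - name: my-registry-secret      # Set Image Pull Secret (hypothetical name)
        containers:
        - name: nginx
          image: registry.example.com/ns/nginx:1.0   # domainname/namespace/imagename:tag
          imagePullPolicy: Always       # Always Pull Images
          resources:
            requests:                   # Required Resources (reserved for the container)
              cpu: 250m
              memory: 512Mi
            limits:                     # Resource Limit (upper bound)
              cpu: 500m
              memory: 1Gi
      ```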
    2. Optional: Set environment variables.

      You can use key-value pairs to set environment variables for pods. Environment variables are used to expose pod configurations to containers. For more information, see Pod variable.
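      In the pod spec, environment variables go into the env field of the container. A sketch with hypothetical variable names:

      ```yaml
      containers:
      - name: nginx
        image: nginx:latest
        env:
        - name: LOG_LEVEL               # hypothetical plain key-value pair
          value: "info"
        - name: POD_IP                  # expose a pod field to the container
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      ```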

    3. Optional: Configure health check settings.

      Health check settings include liveness and readiness probes. Liveness probes determine when to restart the container. Readiness probes determine if the container is ready to start accepting traffic. For more information about health checks, see configure-liveness-readiness-probes.

      The following request types are supported:
      HTTP request: Sends an HTTP GET request to the container. Supported parameters are as follows:
      • Protocol: HTTP or HTTPS
      • Path: The requested path on the server.
      • Port: The port exposed by the container. The port number must be in the range of 1 to 65535.
      • HTTP Header: The custom headers in the HTTP request. Duplicate headers are allowed. Key-value pairs are supported.
      • Initial Delay (s): The initialDelaySeconds field. The time (in seconds) to wait before performing the first probe after the container is started. Default is 3.
      • Period (s): The periodSeconds field. How often (in seconds) to perform the probe. Default is 10. Minimum is 1.
      • Timeout (s): The timeoutSeconds field. The time (in seconds) after which the probe times out. Default is 1. Minimum is 1.
      • Healthy Threshold: The minimum number of consecutive successes that must occur for the probe to be considered successful after having failed. Default is 1. Minimum is 1. For liveness probes, this parameter must be set to 1.
      • Unhealthy Threshold: The minimum number of consecutive failures that must occur for the probe to be considered failed after having succeeded. Default is 3. Minimum is 1.
      TCP connection: Attempts to open a TCP socket on the specified container port. If the connection can be established, the container is considered healthy. Otherwise, it is considered unhealthy. Supported parameters are as follows:
      • Port: The port exposed by the container. The port number must be in the range of 1 to 65535.
      • Initial Delay (s): The initialDelaySeconds field. The time (in seconds) to wait before performing the first probe after the container is started. Default is 15.
      • Period (s): The periodSeconds field. How often (in seconds) to perform the probe. Default is 10. Minimum is 1.
      • Timeout (s): The timeoutSeconds field. The time (in seconds) after which the probe times out. Default is 1. Minimum is 1.
      • Healthy Threshold: The minimum number of consecutive successes that must occur for the probe to be considered successful after having failed. Default is 1. Minimum is 1. For liveness probes, this parameter must be set to 1.
      • Unhealthy Threshold: The minimum number of consecutive failures that must occur for the probe to be considered failed after having succeeded. Default is 3. Minimum is 1.
      Command line: Runs a probe command in the container to check its health status. Supported parameters are as follows:
      • Command: The probe command that is used to check the health status of the container.
      • Initial Delay (s): The initialDelaySeconds field. The time (in seconds) to wait before performing the first probe after the container is started. Default is 5.
      • Period (s): The periodSeconds field. How often (in seconds) to perform the probe. Default is 10. Minimum is 1.
      • Timeout (s): The timeoutSeconds field. The time (in seconds) after which the probe times out. Default is 1. Minimum is 1.
      • Healthy Threshold: The minimum number of consecutive successes that must occur for the probe to be considered successful after having failed. Default is 1. Minimum is 1. For liveness probes, this parameter must be set to 1.
      • Unhealthy Threshold: The minimum number of consecutive failures that must occur for the probe to be considered failed after having succeeded. Default is 3. Minimum is 1.
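      As a sketch, an HTTP liveness probe and a TCP readiness probe with the defaults listed above would look as follows in YAML; the port and header are illustrative:

      ```yaml
      containers:
      - name: nginx
        image: nginx:latest
        livenessProbe:
          httpGet:                      # HTTP request
            scheme: HTTP
            path: /
            port: 80
            httpHeaders:
            - name: X-Probe             # hypothetical custom header
              value: check
          initialDelaySeconds: 3        # Initial Delay (s)
          periodSeconds: 10             # Period (s)
          timeoutSeconds: 1             # Timeout (s)
          successThreshold: 1           # Healthy Threshold (must be 1 for liveness)
          failureThreshold: 3           # Unhealthy Threshold
        readinessProbe:
          tcpSocket:                    # TCP connection
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
      ```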
    4. Configure the lifecycle.

      Specify the following parameters to configure the lifecycle of the container: Start, postStart, and preStop. For more information, see Attach-Handler-Lifecycle-Event.

      • Start: The pre-start command and parameter.
      • Post Start: The postStart command.
      • Pre Stop: The preStop command.
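      The three settings map to the container spec roughly as follows; the postStart and preStop commands are illustrative:

      ```yaml
      containers:
      - name: nginx
        image: nginx:latest
        command: ["nginx"]              # Start: command
        args: ["-g", "daemon off;"]     # Start: parameters
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo started > /tmp/started"]
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]   # graceful shutdown
      ```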
    5. Optional: Configure volumes.

      Local storage and cloud storage are supported.

      • Local Storage: Supports hostPath, ConfigMaps, secrets, and temporary directories. Mounts the source to a path in the container. For more information, see Volumes.
      • Cloud Storage: Supports three types of PVs: disks, NAS, and OSS.
      In this example, select a PV of disk type and mount the PV to the /tmp path in the container. Data generated in this path is stored in the disk.
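      A sketch of the disk mount in this example, assuming the console binds the disk through a PersistentVolumeClaim (the claim name below is hypothetical):

      ```yaml
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
          - name: disk-volume
            mountPath: /tmp             # data written here is stored on the disk
        volumes:
        - name: disk-volume
          persistentVolumeClaim:
            claimName: disk-pvc         # hypothetical PVC bound to a disk PV
      ```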
  5. Set other parameters based on your needs and then click Next.
  6. Configure advanced settings.
    1. Configure access control settings.
      You can configure how to expose pods and then click Create to create the application. This example creates a service of the Cluster IP type and an ingress to enable Internet access to the NGINX application.
      Note

      You can configure access control settings based on your needs:

      • Internal applications: For applications that run inside the cluster, you can create a service of the Cluster IP or Node Port type to enable internal communication as required.
      • External applications: For applications that need to be exposed to the Internet, you can configure access control by using one of the following methods:
        • Create a service of the Server Load Balancer (SLB) type and expose your application to the Internet through the SLB instance.
        • Create a service of the Cluster IP or Node Port type, create an ingress, and expose your application to the Internet through the ingress. For more information, see Ingress.
      1. To create a service, click Create in the Access Control section. Configure the service in the dialog box that appears, and then click Create.
        • Name: The service name. Default is applicationname-svc.
        • Type: Select one of the following three types.
          • Cluster IP: Expose the service through an internal IP address of the cluster. If you select this type, the service is only accessible within the cluster.
          • Node Port: Expose the service through the IP address and static port (NodePort) of each node. A NodePort service can route requests to a Cluster IP service, which is automatically created by the system. You can access a Node Port service from outside the cluster by requesting <NodeIP>:<NodePort>.
          • Server Load Balancer: Expose the service through an SLB instance, which supports Internet access or internal access. An SLB instance can route requests to Node Port and Cluster IP services.
        • Port Mapping: Set a service port and a container port. If the Type parameter is set to Node Port, you must set a node port to avoid port conflicts. TCP and UDP protocols are supported.
        • Annotations: Add annotations to the service. SLB parameters are supported. For more information, see Use SLB to access services.
        • Labels: Add labels to the service.
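        The service configured in this example corresponds to a manifest roughly like the following; the names and labels are illustrative:

        ```yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-svc               # defaults to applicationname-svc
        spec:
          type: ClusterIP               # or NodePort / LoadBalancer
          selector:
            app: nginx                  # pods of the application
          ports:
          - name: http
            protocol: TCP               # TCP and UDP are supported
            port: 80                    # service port
            targetPort: 80              # container port
        ```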
      2. To create an ingress, click Create in the Access Control section. Configure ingress rules in the dialog box that appears, and then click Create. For more information about ingress configuration, see Ingress configurations.
        Note When you create an application from an image, you can create an ingress for only one service. This example uses a virtual host name as the test domain. You need to add a record to the hosts file. In actual scenarios, use a domain that has obtained an ICP filing.
        101.37.224.146   foo.bar.com    # The IP address of the ingress
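        The ingress rule in this example corresponds to a manifest roughly like the following; the exact API version depends on the Kubernetes version of your cluster:

        ```yaml
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: nginx-ingress
        spec:
          rules:
          - host: foo.bar.com           # the test domain from the hosts entry above
            http:
              paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: nginx-svc     # the Cluster IP service of the application
                    port:
                      number: 80
        ```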
      3. You can find the newly created service and ingress in the Access Control section. Click Update or Delete to make changes.
    2. Optional: Configure Horizontal Pod Autoscaling (HPA).
      You can enable HPA to automatically scale the number of pods based on the CPU and memory utilization. This enables the application to run smoothly at different load levels.
      Note To enable HPA, you must specify the Required Resources of the container. Otherwise, HPA does not take effect. For more information, see the general settings section.
      • Metric: Supports CPU and memory. The resource type must be the same as the one you have specified in the Required Resources field.
      • Condition: Specify the resource usage threshold. HPA starts to scale out the pods when the threshold is exceeded.
      • Max. Replicas: The maximum number of replicas that the deployment can expand to.
      • Min. Replicas: The minimum number of replicas that the deployment keeps running.
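      As a sketch, the HPA settings map to a manifest such as the following; the autoscaling API version depends on your Kubernetes version, and the threshold and replica counts are illustrative:

      ```yaml
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: nginx-hpa
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: nginx-deployment
        minReplicas: 2                  # Min. Replicas
        maxReplicas: 10                 # Max. Replicas
        metrics:
        - type: Resource
          resource:
            name: cpu                   # Metric
            target:
              type: Utilization
              averageUtilization: 70    # Condition: scale out above 70% of requested CPU
      ```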
    3. Optional: Configure scheduling settings.

      You can specify the following parameters: update method, node affinity, pod affinity, and pod anti-affinity. For more information, see Affinity-and-anti-affinity.

      Note The affinity feature facilitates scheduling based on node labels and pod labels. You can use built-in labels or add custom labels based on needs.
      1. Set the Update Method.

        You can select between Rolling Update and OnDelete. For more information, see Deployments.

      2. Set Node Affinity rules
        Node scheduling supports required and preferred rules, and various operators such as In, NotIn, Exists, DoesNotExist, Gt, and Lt.
        • Required rules must be met and are specified in the requiredDuringSchedulingIgnoredDuringExecution field of nodeAffinity. These rules have the same effect as NodeSelector. In this example, pods can be scheduled only to nodes with specific labels. You can create multiple required rules, and a node needs to match only one of them.
        • Preferred rules are best-effort and are specified in the preferredDuringSchedulingIgnoredDuringExecution field of nodeAffinity. In this example, the scheduler tries to avoid scheduling pods to nodes with specific labels. You can set a weight for each preferred rule. If you create multiple preferred rules, the scheduler sums the weights of the rules that each candidate node matches and prefers the node with the highest total weight.
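        The two rule types go into the nodeAffinity field of the pod spec. A sketch with hypothetical node labels:

        ```yaml
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:        # terms are ORed: matching one term is enough
              - matchExpressions:
                - key: group            # hypothetical node label
                  operator: In
                  values:
                  - worker
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 50
              preference:
                matchExpressions:
                - key: disktype         # hypothetical node label
                  operator: NotIn
                  values:
                  - hdd
        ```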
      3. Set Pod Affinity rules. These rules specify how pods are deployed relative to other pods in the same topology domain. For example, you can use pod affinity rules to deploy services that communicate with one another to the same topology domain, such as a host. This helps reduce the network latency between these services.

        Pod affinity enables you to specify which nodes the pods can be scheduled to based on the labels on other pods. This feature supports required and preferred rules, and the following operators: In, NotIn, Exists, DoesNotExist.

        • Required rules must be met and are specified in the requiredDuringSchedulingIgnoredDuringExecution field of podAffinity. Required rules must be met before a pod can be scheduled to a node.
          • Namespace: The rules are defined based on the labels on pods and therefore must be scoped to a namespace.
          • Topological Domain: The topologyKey, which is the key for the node label that the system uses to denote the topology domain. For example, if you set the parameter to kubernetes.io/hostname, nodes are used to determine topologies. If set to beta.kubernetes.io/os, the operating systems of nodes are used to determine topologies.
          • Selector: Click Add to add multiple required rules.
          • View Applications: Click View Applications and specify the namespace and application in the dialog box that appears. You can view the labels on the selected application and add them to the rule configuration page.
          • Required rule: Specify the label on the existing application, the operator, and the label value. This example schedules the application to be created to a host that runs applications with the app: nginx label.
        • Preferred rules may not be met and are specified in the preferredDuringSchedulingIgnoredDuringExecution field of podAffinity. A preferred rule specifies that, if the rule is met, the scheduler tries to enforce the rule. You can set weights for preferred rules. The other parameters are the same as those of required rules.
          Note

          Weight: Set the weight of a preferred rule to a value between 1 and 100. The scheduler calculates a score for each node that meets the rule and schedules the pod to the node with the highest score.
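        As a sketch, the required rule in this example, which co-locates the new pods with pods labeled app: nginx on the same host, looks roughly as follows:

        ```yaml
        affinity:
          podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx               # co-locate with pods labeled app: nginx
              namespaces:
              - default                 # rules are scoped to a namespace
              topologyKey: kubernetes.io/hostname   # topology domain: host
        ```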

      4. Set Pod Anti Affinity rules to prevent pods from being scheduled to the same topology domain as other pods with specific labels. Pod anti-affinity rules can be used in the following scenarios:
        • Schedule the pods of a service to different topology domains, such as multiple hosts, to enhance the stability of the service.
        • Grant a pod exclusive access to a node. This ensures resource isolation and guarantees that no other pod can share the specified node.
        • Schedule the pods of a service to different hosts if these pods may interfere with each other.
        Note The parameters of pod anti-affinity rules are the same as those of pod affinity rules. Create the rules based on your scenario.
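        A pod anti-affinity rule that spreads the pods of this application across different hosts might look like the following:

        ```yaml
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx               # pods of the same application
              topologyKey: kubernetes.io/hostname   # must land on different hosts
        ```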
  7. Click Create.
  8. After the application is created, you are redirected to the Complete page. You can find the resource objects under the application and click View Details to view application details.

    The nginx-deployment details page appears by default.

    Note You can also create ingresses and services as follows: On the preceding page, click the Access Method tab.
    • Click Create next to Services. Follow the steps introduced in 6.a.1 to create a service.
    • Click Create next to Ingresses. Follow the steps introduced in 6.a.2 to create an ingress.
  9. In the left-side navigation pane, choose Ingresses and Load Balancing > Ingresses. You can find the newly created Ingress on this page.
  10. Enter the testing domain into your browser and press the Enter key. The NGINX welcome page appears.