
Container Service for Kubernetes:Create an application from an image

Last Updated:Mar 12, 2024

This topic describes how to create an application from an image in the Container Service for Kubernetes (ACK) console.

Prerequisites

An ACK Serverless cluster is created. For more information, see Create an ACK Serverless cluster.

Step 1: Configure basic settings

  1. Log on to the Container Service for Kubernetes (ACK) console.
  2. In the left-side navigation pane of the ACK console, click Clusters.

  3. On the Clusters page, click the name of a cluster or click Details in the Actions column.
  4. In the left-side navigation pane of the details page, choose Workloads > Deployments.

  5. In the upper-right corner of the Deployments page, click Create from Image.

  6. In the Basic Information step, configure the basic information about the application.

    Parameter

    Description

    Name

    Enter a name for the application.

    Replicas

    Specify the number of pods that are provisioned for the application.

    Type

    The type of resource object that you want to create. Valid values: Deployment, StatefulSet, Job, CronJob, and DaemonSet.

    Label

    Add labels to the application to identify it.

    Annotations

    Add annotations to the application.
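The basic settings above map to fields of the workload manifest that the console generates. A minimal Deployment sketch, with illustrative names and values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # Name
  labels:
    app: my-app             # Label
  annotations:
    description: demo       # Annotations
spec:
  replicas: 2               # Replicas
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: main
        image: nginx:latest
```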

  7. Click Next. Proceed to the Container step.

Step 2: Configure the containers

In the Container step, complete the configurations of the containers for the application. The configurations include the container image, computing resources, container ports, environment variables, health checks, lifecycle, and volumes.

Note

In the upper part of the Container step, click Add Container to add more containers for the application.

  1. In the General section, complete the basic configurations of the container.

    Parameter

    Description

    Image Name

    • Select images

      You can click Select images and select an image. The following types of images are supported:

      • Container Registry Enterprise Edition: Select an image stored on a Container Registry Enterprise Edition instance. You must select the region and the Container Registry instance to which the image belongs. For more information about Container Registry, see What is Container Registry?

      • Container Registry Personal Edition: Select an image stored on a Container Registry Personal Edition instance. You must select the region and the Container Registry instance to which the image belongs.

      • Artifact Center: The artifact center contains base OS images, base language images, and AI- and big data-related images for application containerization. In this example, an NGINX image is selected. For more information, see Artifact center.

        Note

        The artifact center of Container Registry provides base images that are updated and patched by Alibaba Cloud or OpenAnolis. If you have other requirements or questions, join the DingTalk group 33605007047 to request technical support.

      You can also enter the address of an image that is stored in a private registry. The image address must be in the following format: domainname/namespace/imagename:tag.

    • Image Pull Policy

      You can select the following image pulling policies:

      • IfNotPresent: If the image that you want to pull is found on your on-premises machine, the image on your on-premises machine is used. Otherwise, ACK pulls the image from the image registry.

      • Always: ACK pulls the image from the registry each time the application is deployed or expanded.

      • Never: ACK uses only images on your on-premises machine.

      Note

      If you do not select an image pull policy, IfNotPresent is used by default.

    • Set Image Pull Secret

      You can click Set Image Pull Secret to set a Secret for pulling images from a private registry.

      • You can use Secrets to pull images from Container Registry Personal Edition instances. For more information about how to set a Secret, see Manage Secrets.

      • You can pull images without using Secrets from Container Registry Enterprise Edition instances.

    Required Resources

    The amount of CPU and memory resources that are reserved for the application. The resources are exclusive to the containers of the application. This prevents the application from becoming unavailable when other services or processes compete for computing resources.

    Container Start Parameter

    • stdin: passes input in the ACK console to the container.

    • tty: passes start parameters that are defined in a virtual terminal to the ACK console.

    Init Containers

    If you select Init Containers, an init container is created. An init container provides tools for pod management. For more information, see Init Containers.
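The General settings correspond to fields of the pod spec. A hedged sketch (the registry address, Secret name, and resource values are placeholders):

```yaml
spec:
  containers:
  - name: main
    # Image Name: domainname/namespace/imagename:tag
    image: registry.example.com/demo/nginx:1.25
    # Image Pull Policy
    imagePullPolicy: IfNotPresent
    # Required Resources reserved for the container
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
    # Container Start Parameter
    stdin: true
    tty: true
  # Set Image Pull Secret (required for private registries)
  imagePullSecrets:
  - name: my-registry-secret
```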

  2. In the Ports section, click Add to add a container port.

    • Name: Enter a name for the container port.

    • Container Port: Enter the port number that you want to open. Valid values: 1 to 65535.

    • Protocol: TCP and UDP are supported.
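These settings map to the ports field of the container spec, for example:

```yaml
ports:
- name: http          # Name
  containerPort: 80   # Container Port (1 to 65535)
  protocol: TCP       # Protocol: TCP or UDP
```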

  3. In the Environments section, click Add to configure environment variables.

    Note

    To configure environment variables, make sure that the corresponding ConfigMaps or Secrets are created. For more information, see Manage ConfigMaps and Manage Secrets.

    You can configure environment variables in key-value pairs for pods. Environment variables are used to apply pod configurations to containers. For more information, see Pod variables.

    • Type: Specify the type of the environment variable. Valid values: Custom, ConfigMaps, Secrets, Value/ValueFrom, and ResourceFieldRef. If you select ConfigMaps or Secrets, you can pass all data in the selected ConfigMap or Secret to the container environment variables. In this example, Secrets is selected.

      Select Secrets from the Type drop-down list and select a Secret from the Value/ValueFrom drop-down list. By default, all data in the selected Secret is passed to the environment variable.

      In this case, the YAML file that is used to deploy the application contains the settings that reference all data in the specified Secret.

    • Variable Key: Specify the name of the environment variable.

    • Value/ValueFrom: Select the Secret from which the environment variable references data.
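The variable types map to the env and envFrom fields of the container spec. A sketch, assuming a Secret named my-secret already exists:

```yaml
env:
- name: DB_PASSWORD          # Variable Key: references a single Secret key
  valueFrom:
    secretKeyRef:
      name: my-secret        # Secret selected in Value/ValueFrom
      key: password
envFrom:
- secretRef:
    name: my-secret          # passes all data in the Secret to environment variables
```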

  4. In the Health Check section, enable liveness and readiness probes as needed.

    Note

    To configure health checks, do not select Init Containers in the General section.

    For more information about health checks, see Configure Liveness, Readiness, and Startup Probes.

    The following probe types are supported:

    • Liveness: Liveness probes are used to determine when to restart a container.

    • Readiness: Readiness probes are used to determine whether a container is ready to accept traffic.

    • Startup: Startup probes are used to determine whether the application in a container has started. Liveness and readiness probes are disabled until the startup probe succeeds.

      Note

      Startup probes are supported in Kubernetes 1.18 and later.

    The following request types are supported:

    HTTP

    Sends an HTTP GET request to the container. You can set the following parameters:

    • Protocol: HTTP or HTTPS.

    • Path: the requested HTTP path on the server.

    • Port: Enter the container port that you want to expose. Enter a port number that ranges from 1 to 65535.

    • HTTP Header: the custom headers in the HTTP request. Duplicate headers are allowed. You can set HTTP headers in key-value pairs.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the waiting time (in seconds) before the first probe is performed after the container is started. Default value: 3.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

    TCP

    Performs a TCP check: kubelet attempts to open a socket on the specified container port. If the connection can be established, the container is considered healthy. Otherwise, the container is considered unhealthy. You can configure the following parameters:

    • Port: Enter the container port that you want to open. Enter a port number that ranges from 1 to 65535.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the waiting time (in seconds) before the first probe is performed after the container is started. Default value: 15.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

    Command

    Runs a probe command in the container to check the health status of the container. You can configure the following parameters:

    • Command: the probe command that is run to check the health status of the container.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the waiting time (in seconds) before the first probe is performed after the container is started. Default value: 5.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.
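The probe settings above correspond to the probe fields of the container spec. A sketch combining an HTTP liveness probe with a TCP readiness probe (the path and port are illustrative):

```yaml
livenessProbe:
  httpGet:                   # HTTP request type
    path: /healthz
    port: 80
    scheme: HTTP
  initialDelaySeconds: 3     # Initial Delay (s)
  periodSeconds: 10          # Period (s)
  timeoutSeconds: 1          # Timeout (s)
  successThreshold: 1        # Healthy Threshold (must be 1 for liveness probes)
  failureThreshold: 3        # Unhealthy Threshold
readinessProbe:
  tcpSocket:                 # TCP request type
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 10
```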

  5. In the Lifecycle section, configure the lifecycle of the container.

    You can specify the following parameters to configure the lifecycle of the container: Start, Post Start, and Pre Stop. For more information, see Configure the lifecycle of a container.

    • Start: Specify the command and parameter that take effect before the container starts.

    • Post Start: Specify the command that takes effect after the container starts.

    • Pre Stop: Specify the command that takes effect before the container stops.
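In the pod spec, Start maps to the command and args fields of the container, and Post Start and Pre Stop map to lifecycle hooks. A sketch with illustrative commands:

```yaml
command: ["nginx"]                 # Start: command
args: ["-g", "daemon off;"]        # Start: parameters
lifecycle:
  postStart:                       # Post Start: runs after the container starts
    exec:
      command: ["/bin/sh", "-c", "echo started > /tmp/started"]
  preStop:                         # Pre Stop: runs before the container stops
    exec:
      command: ["/bin/sh", "-c", "nginx -s quit"]
```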

  6. In the Volume section, add on-premises storage volumes, persistent volume claims (PVCs), or Apsara File Storage NAS (NAS) volumes.

    The following types of storage volumes are supported:

    • On-premises storage volume

    • PVC

    • NAS

    • Disk

    For more information, see Use a statically provisioned disk volume, Use a dynamically provisioned disk volume, and Mount a statically provisioned NAS volume.
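Mounting a PVC, for example, maps to the volumes and volumeMounts fields of the pod spec. A sketch, assuming a PVC named my-pvc already exists:

```yaml
spec:
  containers:
  - name: main
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc     # an existing PVC backed by a disk or NAS volume
```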

  7. In the Log section, configure the log-related parameters. For more information, see Method 1: Create an application from an image and configure Simple Log Service to collect application logs.

  8. Click Next to configure advanced settings.

Step 3: Configure advanced settings

In the Advanced step, configure access control, scaling configurations, labels, and annotations.

  1. In the Access Control section, configure how to expose backend pods.

    Note

    You can configure the following access control settings based on your business requirements:

    • Internal applications: For applications that run inside the cluster, you can create a Service of the ClusterIP or NodePort type to enable internal communication.

    • External applications: For applications that are exposed to the Internet, you can configure access control by using one of the following methods:

      • Create a LoadBalancer Service. When you create a Service, set Type to Server Load Balancer. You can select or create a Server Load Balancer (SLB) instance for the Service and use the Service to expose your application to the Internet.

      • Create an Ingress and use it to expose your application to the Internet. For more information, see Ingress.

    You can specify how the backend pods are exposed to the Internet. In this example, a Service and an Ingress are created to expose an NGINX application to the Internet.

    • Click Create to the right side of Services. In the Create Service dialog box, configure the following parameters.

      Parameter

      Description

      Name

      Enter a name for the Service. In this example, nginx-svc is used.

      Type

      The type of Service. This parameter determines how the Service is accessed. In this example, select Server Load Balancer.

      • Cluster IP: The ClusterIP type Service. This type of Service exposes the Service by using an internal IP address of the cluster. If you select this type, the Service is accessible only within the cluster. This is the default type.

        Note

        The Headless Service check box is displayed only when you set Type to Cluster IP.

      • Server Load Balancer: The LoadBalancer type Service. This type of Service exposes the Service by using an SLB instance. If you select this type, you can enable internal or external access to the Service. SLB instances can be used to route requests to NodePort and ClusterIP Services.

        • Create SLB Instance: You can click Modify to change the specification of the SLB instance.

        • Use Existing SLB Instance: You can select an existing SLB instance.

        Note

        You can create an SLB instance or use an existing SLB instance. You can also associate an SLB instance with multiple Services. However, you must take note of the following limits:

        • If you use an existing SLB instance, the listeners of the SLB instance overwrite the listeners of the Service.

        • If an SLB instance is created along with a Service, you cannot reuse this SLB instance when you create other Services. Otherwise, the SLB instance may be deleted. Only SLB instances that are manually created in the SLB console or by calling the API can be used to expose multiple Services.

        • Services that share the same SLB instance must use different frontend ports. Otherwise, port conflicts may occur.

        • If multiple Services share the same SLB instance, you must use the listener names and the vServer group names as unique identifiers in Kubernetes. Do not modify the names of listeners or vServer groups.

        • You cannot share SLB instances across clusters.

      Port Mapping

      Specify a Service port and a container port. The container port must be the same as the one that is exposed in the backend pod. Examples:

      Service Port: 80

      Container Port: 80

      External Traffic Policy

      • Local: Traffic is routed only to pods on the node that receives the traffic.

      • Cluster: Traffic can be routed to pods on other nodes.

      Note

      The External Traffic Policy parameter is available only if you set Type to Node Port or Server Load Balancer.

      Annotations

      Add an annotation to the Service to modify the configurations of the SLB instance. For example, service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth:20 specifies that the maximum bandwidth of the Service is 20 Mbit/s. This limits the amount of traffic that flows through the Service. For more information, see Add annotations to the YAML file of a Service to configure CLB instances.

      Label

      Add labels to the Service to identify the Service.
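The Service settings above roughly correspond to the following manifest (the bandwidth annotation is taken from the description above; other values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc                  # Name
  labels:
    app: nginx                     # Label
  annotations:
    service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth: "20"
spec:
  type: LoadBalancer               # Type: Server Load Balancer
  externalTrafficPolicy: Local     # External Traffic Policy
  selector:
    app: nginx
  ports:
  - port: 80                       # Service Port
    targetPort: 80                 # Container Port
    protocol: TCP
```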

    • To create an Ingress, click Create to the right side of Ingresses. In the Create dialog box, set the parameters.

      For more information, see Create an Ingress.

      Important

      When you create an application from an image, you can create an Ingress only for one Service. In this example, the name of a virtual host is used as the test domain name. You must add a mapping in the following format to the hosts file: Ingress external endpoint + Ingress domain name. In actual scenarios, use a domain name that has an Internet Content Provider (ICP) number.

      101.37.xx.xx   foo.bar.com    # The IP address of the Ingress.

      To obtain the IP address of the Ingress, go to the application details page and click the Access Method tab. The IP address displayed in the External Endpoint column is the IP address of the Ingress.

      Parameter

      Description

      Name

      Enter a name for the Ingress. In this example, nginx-ingress is used.

      Rule

      Ingress rules are used to enable access to specific Services in a cluster. For more information, see Create an Ingress.

      • Domain: Enter the domain name of the Ingress. In this example, the test domain name foo.bar.com is used.

      • Path: Enter the Service URL. The default path is the root path /. The default path is used in this example. Each path is associated with a backend Service. SLB forwards traffic to a backend Service only when inbound requests match the domain name and the path.

      • Services: Select a Service and a Service port. In this example, nginx-svc is used.

      • EnableTLS: Select this check box to enable TLS. For more information, see Advanced NGINX Ingress configurations.

      Weight

      Set the weight for each Service in the path. Each weight is calculated as a percentage value. Default value: 100.

      Canary Release

      Enable or disable the canary release feature. We recommend that you select Open Source Solution.

      Ingress Class

      Specify the class of the Ingress.

      Annotations

      Click Add and enter a key and a value. For more information about Ingress annotations, see Annotations.

      Labels

      Add labels to describe the characteristics of the Ingress.
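The Ingress settings above roughly correspond to the following manifest (the names and domain are those used in this example; the class name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress              # Name
spec:
  ingressClassName: nginx          # Ingress Class
  rules:
  - host: foo.bar.com              # Domain
    http:
      paths:
      - path: /                    # Path
        pathType: Prefix
        backend:
          service:
            name: nginx-svc        # Services
            port:
              number: 80
```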

    You can find the created Service and Ingress in the Access Control section. You can click Update or Delete to change the configurations.

  2. In the Scaling section, select HPA and CronHPA based on your business requirements.

    ACK supports the auto scaling of pods. This allows you to automatically adjust the number of pods based on the CPU and memory usage.

    Note

    To enable HPA, you must configure the resources required by the container. Otherwise, HPA does not take effect.

    Parameter

    Description

    Metric

    Select CPU Usage or Memory Usage. The selected resource type must be the same as that specified in the Required Resources field.

    Condition

    Specify the resource usage threshold. HPA triggers scale-out events when the threshold is exceeded.

    Max. Replicas

    Specify the maximum number of replicated pods to which the application can be scaled.

    Min. Replicas

    Specify the minimum number of replicated pods that must run.
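The HPA settings above map to a HorizontalPodAutoscaler resource. A sketch using the autoscaling/v2 API (the target name and threshold are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1                   # Min. Replicas
  maxReplicas: 10                  # Max. Replicas
  metrics:
  - type: Resource
    resource:
      name: cpu                    # Metric: CPU Usage
      target:
        type: Utilization
        averageUtilization: 70     # Condition: scale out above 70% usage
```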

  3. In the Labels and Annotations section, click Add to add labels and annotations to the pod.

    • Pod Labels: Add a label to the pod. The label is used to identify the application.

    • Pod Annotations: Add an annotation to the pod.

  4. Click Create.

Step 4: Check the application

In the Complete step, you can view the created application.

  1. In the Complete step, click View Details. On the Deployments page, you can find the newly created application named serverless-app-svc.


  2. In the left-side navigation pane of the details page of the cluster, choose Network > Services. On the Services page, you can find the newly created Service named serverless-app-svc.


  3. To visit the NGINX welcome page, you can use your browser to access the external endpoint or domain name of the Service.

    Important
    • When you use a browser to access the Service, make sure that the Service type is Server Load Balancer.

    • If you use a domain name to access the Service, you must configure the hosts file. For more information, see the Important section of this topic.
