Enterprise Distributed Application Service (EDAS) integrates with Container Service for Kubernetes (ACK) on the basis of cloud-native Kubernetes. EDAS allows you to manage the full lifecycle of Kubernetes containerized applications. Serverless Kubernetes clusters are suitable for scenarios that require agility and elasticity, such as processing single or batch tasks. This topic describes how to use an image to deploy an application in a serverless Kubernetes cluster.

Prerequisites

Step 1: Create a serverless Kubernetes cluster

Log on to the ACK console and create a serverless Kubernetes cluster. For more information, see Create an ASK cluster.
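After the cluster is created, you can optionally verify that the kubeconfig file obtained from the ACK console can reach the new cluster. The following Python sketch uses the open source Kubernetes Python client (pip install kubernetes); the kubeconfig path is an assumption and depends on where you saved the file.

```python
# Minimal connectivity check for the new serverless Kubernetes (ASK) cluster.
# Assumption: the kubeconfig downloaded from the ACK console is saved at ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()          # or config.load_kube_config("/path/to/ask-kubeconfig")
v1 = client.CoreV1Api()

# A freshly created cluster is expected to contain at least the
# default, kube-system, and kube-public namespaces.
for ns in v1.list_namespace().items:
    print(ns.metadata.name)
```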

Step 2: Import the serverless Kubernetes cluster in the EDAS console

By default, the ack-ahas-sentinel-pilot, ack-arms-pilot, and ack-arms-prometheus components are installed when you import an ACK cluster to EDAS in the EDAS console. The ack-ahas-sentinel-pilot component is an application protection component for throttling and degradation. The ack-arms-pilot component is an Application Real-Time Monitoring Service (ARMS) monitoring component. The ack-arms-prometheus component is a Prometheus monitoring component.
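If you want to confirm that these components are running after the import, you can list the pods of the cluster and look for their workloads. The following Python sketch is based on the Kubernetes Python client; the pod name prefixes used for filtering are assumptions, because the exact workload names and namespaces depend on the component versions.

```python
# Hedged check: look for pods whose names suggest that they belong to the
# ack-ahas-sentinel-pilot, ack-arms-pilot, or ack-arms-prometheus components.
from kubernetes import client, config

config.load_kube_config()          # kubeconfig of the imported serverless cluster
v1 = client.CoreV1Api()

prefixes = ("ack-ahas", "ahas", "arms-pilot", "arms-prom")   # assumed name prefixes
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.metadata.name.startswith(prefixes):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```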

  1. Log on to the EDAS console.
  2. In the left-side navigation pane, choose Resource Management > Serverless Kubernetes Clusters.
  3. In the top navigation bar, select the region of the microservice namespace that you want to manage. From the Microservice Namespace drop-down list, select the namespace to which you want to import the cluster. Then, click Synchronize Serverless Kubernetes Cluster.
  4. Find the serverless Kubernetes cluster that you want to import and click Import in the Actions column.
    If the serverless Kubernetes cluster is in the Running state and the value in the Import Status column of the serverless Kubernetes cluster is Imported, the serverless Kubernetes cluster is imported to EDAS as expected.

Step 3: Deploy an application in the serverless Kubernetes cluster

  1. Log on to the EDAS console.
  2. In the left-side navigation pane, click Applications. In the top navigation bar, select a region. In the upper part of the page, select a namespace. In the upper-left corner of the Applications page, click Create Application.
  3. In the Basic Information step, set the parameters in the Cluster Type and Application Runtime Environment sections and click Next.
    Use an image to deploy a Kubernetes application
    GUI element Description
    Cluster Type The type of the cluster where you want to deploy the application. Select Kubernetes Clusters.
    Application Runtime Environment The runtime environment of the application. In this topic, Java is selected and a custom image is used for deployment. Valid values:
    • Custom: Select this option if you want to use a custom image to deploy the application in a Kubernetes cluster.
    • Java: Select this option if you want to use a universal JAR package to deploy the application as a Dubbo or a Spring Boot application. You can set the Java Environment parameter after you select this option.
    • Tomcat: Select this option if you want to use a universal WAR package to deploy the application as a Dubbo or a Spring application. You can set the Java Environment and Container Version parameters after you select this option.
    • EDAS-Container (HSF): Select this option if you want to use a WAR or FatJar package to deploy the application as a High-Speed Service Framework (HSF) application. You can set the Java Environment, Pandora Version, and Ali-Tomcat Version parameters after you select this option.
  4. In the Configurations step, configure the environment information, basic information, deployment method, and resource parameters for the application, and click Next.
    Application configuration - Kubernetes - Image
    Parameter Description
    Microservice Namespaces The microservice namespace of the serverless Kubernetes cluster. If you have not created a microservice namespace or do not select a microservice namespace, this parameter is set to Default.

    If you have not created a microservice namespace or want to create a new one, click Create Microservice Namespace to create one. For more information, see Create a microservice namespace.

    Cluster The cluster where you want to deploy the application. From the Cluster drop-down list, select the imported serverless Kubernetes cluster.

    If the Kubernetes cluster that you select is not imported to EDAS, select The cluster is used in EDAS for the first time. After you select this check box, the cluster is imported to EDAS when the application is created, which takes some time. You must also specify whether to enable the service mesh feature.

    Note You can select a cluster that does not belong to the microservice namespace in which you want to deploy the application.
    K8s Namespace The Kubernetes namespace of the cluster. Internal system objects are allocated to different Kubernetes namespaces to form logically isolated projects, groups, or user groups. This way, different groups can be separately managed and can also share the resources of the entire cluster. Valid values:
    • default: the default Kubernetes namespace. If no Kubernetes namespace is specified for an object, the default Kubernetes namespace is used.
    • kube-system: the Kubernetes namespace of the objects that are created by the system.
    • kube-public: the Kubernetes namespace that is automatically created by the system. This Kubernetes namespace can be read by all the users, including the users who are not authenticated.

    In this example, default is selected.

    If you want to create a custom Kubernetes namespace, click Create K8s Namespace and enter a name for the namespace. The name can contain only digits, lowercase letters, and hyphens (-), must start and end with a digit or a lowercase letter, and must be 1 to 63 characters in length.

    Application Name The name of the application. The name must start with a letter and can contain digits, letters, and hyphens (-). The application name can be up to 36 characters in length.
    Application Description The description of the application. The description can be up to 128 characters in length.
    Image Type The type of the image that is used to deploy the application. Valid values:
    • Configure Image: If you select this option, you can set the Alibaba Cloud Container Registry parameter to Current Account or Other Alibaba Cloud Accounts.

      If you set the Alibaba Cloud Container Registry parameter to Current Account, you must also set the Region, Container Registry, Image Repository Namespace, and Image Repository Name parameters, and then select an image version.

    • Demo Image: If you select this option, you must select a demo image provided by EDAS and then select an image version.

    Note Before you use the images in a repository of Container Registry Enterprise Edition to deploy applications as a RAM user, the RAM user must obtain the required permissions from the relevant Alibaba Cloud account. For more information, see Configure policies for RAM users to access Container Registry.
    Total Pods The number of pods on which the application is to be deployed.
    Single-pod Resource Quota The amount of CPU and memory resources that you want to reserve for a pod. To set a limit, enter a numeric value. The default value 0 indicates that no limit is set.
  5. Optional. In the Advanced Settings step, configure the advanced settings.
  6. Click Create Application.
    The application requires several minutes to be deployed. During the process, you can view the change records to track the deployment progress of the application. For more information, see View application overview. After the application is deployed, the Application Overview page appears. On this page, you can check the status of the pod. If the pod is in the Running state, the application is published. You can click the state of the pod to view the advanced settings of the application instance, such as Deployment, Pods, and Startup Command.
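To double-check the result outside the console, you can inspect the Deployment and pods that were created for the application. The following Python sketch uses the Kubernetes Python client; the namespace (default) and the application name (demo-app) are placeholders and must be replaced with the K8s Namespace and Application Name that you configured.

```python
# Hedged sketch: print the replica status, the per-pod resource limits, and the
# pod phases of the deployed application. "default" and "demo-app" are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

namespace = "default"      # the K8s Namespace selected in the Configurations step
app_name = "demo-app"      # the Application Name entered in the Configurations step

for dep in apps.list_namespaced_deployment(namespace).items:
    if app_name in dep.metadata.name:
        print(dep.metadata.name, "ready replicas:",
              dep.status.ready_replicas, "/", dep.spec.replicas)    # relates to Total Pods
        resources = dep.spec.template.spec.containers[0].resources
        print("single-pod limits:", resources.limits)               # relates to Single-pod Resource Quota

for pod in core.list_namespaced_pod(namespace).items:
    if app_name in pod.metadata.name:
        print(pod.metadata.name, pod.status.phase)                  # expect Running
```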

What to do next

After the application is deployed, you can add an Internet-facing Server Load Balancer (SLB) instance to enable access to the application over the Internet. You can also add an internal-facing SLB instance so that all nodes in the same VPC can access the application through this instance. For more information, see Bind SLB instances or Reuse an SLB instance.
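If you bind an SLB instance and want to verify the result from the cluster side, a Service of type LoadBalancer is typically created for the application. The following Python sketch lists such Services and prints their external addresses; the namespace (default) is an assumption and must match the K8s Namespace of the application.

```python
# Hedged sketch: list LoadBalancer Services in the application's namespace and
# print the addresses that the bound SLB instances expose.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for svc in v1.list_namespaced_service("default").items:          # assumed namespace
    if svc.spec.type == "LoadBalancer":
        ingress = svc.status.load_balancer.ingress or []
        addresses = [entry.ip or entry.hostname for entry in ingress]
        ports = [p.port for p in svc.spec.ports]
        print(svc.metadata.name, addresses, ports)
```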