Enterprise Distributed Application Service (EDAS) integrates with Container Service for Kubernetes (ACK) on the basis of cloud-native Kubernetes. EDAS allows you to manage the full lifecycle of Kubernetes containerized applications. Serverless Kubernetes clusters are applicable to agile business scenarios that require fast scaling. This topic describes how to use a demo JAR package or a demo WAR package provided by EDAS to deploy an application in a serverless Kubernetes cluster.
To deploy an application in a serverless Kubernetes cluster, create a serverless Kubernetes cluster in the ACK console. Then, import the cluster in the EDAS console, and use a deployment package or an image to deploy the application in the imported cluster.
- Both EDAS and ACK are activated for your Alibaba Cloud account.
- A microservice namespace is created. For more information, see the "Create a namespace" section of the Manage microservice namespaces topic.
- Role authorization is complete in ACK. For more information, see ACK default roles.
Step 1: Create a serverless Kubernetes cluster
Step 2: Import the serverless Kubernetes cluster in the EDAS console
By default, the ack-ahas-sentinel-pilot, ack-arms-pilot, and ack-arms-prometheus components are installed when you import an ACK cluster to EDAS in the EDAS console. The ack-ahas-sentinel-pilot component is an application protection component for throttling and degradation. The ack-arms-pilot component is an Application Real-Time Monitoring Service (ARMS) monitoring component. The ack-arms-prometheus component is a Prometheus monitoring component.
- Log on to the EDAS console.
- In the left-side navigation pane, choose .
- In the top navigation bar, select the region of the microservice namespace that you want to manage. From the Microservice Namespace drop-down list, select the namespace to which you want to import the cluster. Then, click Synchronize Serverless Kubernetes Cluster.
- In the Actions column of the serverless Kubernetes cluster that you want to import, click Import. If the serverless Kubernetes cluster is in the Running state and the value in the Import Status column of the serverless Kubernetes cluster is Imported, the serverless Kubernetes cluster is imported to EDAS as expected.
Step 3: Deploy the application in the serverless Kubernetes cluster
- Log on to the EDAS console.
- In the left-side navigation pane, click Applications. In the top navigation bar, select a region. In the upper part of the page, select a namespace. In the upper-left corner of the Applications page, click Create Application.
- In the Basic Information step, set the parameters in the Cluster Type and Application Runtime Environment sections and click Next.
- Cluster Type: The type of the cluster in which you want to deploy the application. Select Kubernetes Clusters.
- Application Runtime Environment: The runtime environment of the application. In this topic, Java is selected and a JAR package is used for deployment. Valid values:
- Custom: Select this option if you want to use a custom image to deploy the application in a Kubernetes cluster.
- Java: Select this option if you want to use a universal JAR package to deploy the application as a Dubbo or a Spring Boot application. You can set the Java Environment parameter after you select this option.
- Tomcat: Select this option if you want to use a universal WAR package to deploy the application as a Dubbo or a Spring application. You can set the Java Environment and Container Version parameters after you select this option.
- EDAS-Container (HSF): Select this option if you want to use a WAR or FatJar package to deploy the application as a High-Speed Service Framework (HSF) application. You can set the Java Environment, Pandora Version, and Ali-Tomcat Version parameters after you select this option.
- In the Configurations step, configure the environment information, basic information, deployment method, and resource parameters for the application, and click Next.
- Microservice Namespace: The microservice namespace of the serverless Kubernetes cluster. If you have not created a microservice namespace or do not select one, this parameter is set to Default. If the microservice namespace that you need does not exist, click Create Microservice Namespace to create one. For more information, see the "Create a namespace" section of the Manage microservice namespaces topic.
- Cluster: The cluster in which you want to deploy the application. From the Cluster drop-down list, select the imported serverless Kubernetes cluster. If the selected cluster has not been imported to EDAS, select The cluster is used in EDAS for the first time and specify whether to enable service mesh. In this case, the cluster is imported to EDAS when the application is created, which takes some time. Note: You can select a cluster that does not belong to the microservice namespace in which you want to deploy the application.
- K8s Namespace: The Kubernetes namespace of the cluster. Internal system objects are allocated to different Kubernetes namespaces to form logically isolated projects, groups, or user groups. This way, different groups can be separately managed and can also share the resources of the entire cluster. Valid values:
- default: the default Kubernetes namespace. If no Kubernetes namespace is specified for an object, the default Kubernetes namespace is used.
- kube-system: the Kubernetes namespace of the objects that are created by the system.
- kube-public: the Kubernetes namespace that is automatically created by the system. This Kubernetes namespace can be read by all the users, including the users who are not authenticated.
In this example, default is selected.
To create a custom Kubernetes namespace, click Create K8s Namespace and specify a name. The name can contain only digits, lowercase letters, and hyphens (-), must start and end with a digit or a lowercase letter, and must be 1 to 63 characters in length.
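This naming rule matches the standard Kubernetes convention for namespace names (an RFC 1123 DNS label). As an illustration only, the constraint can be expressed as a small Python check; the function name and regular expression below are ours, not part of EDAS or Kubernetes:

```python
import re

# RFC 1123 label: digits, lowercase letters, and hyphens (-);
# must start and end with a digit or lowercase letter; 1 to 63 characters.
K8S_NAMESPACE_NAME = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")

def is_valid_k8s_namespace_name(name: str) -> bool:
    """Return True if `name` satisfies the naming rule described above."""
    return bool(K8S_NAMESPACE_NAME.fullmatch(name))

print(is_valid_k8s_namespace_name("edas-demo"))   # True: lowercase and hyphen
print(is_valid_k8s_namespace_name("-bad-start"))  # False: starts with a hyphen
print(is_valid_k8s_namespace_name("UPPER"))       # False: uppercase not allowed
```

A name that fails this check is rejected by the console, so validating it up front avoids a round trip.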
- Application Name: The name of the application. The name must start with a letter and can contain digits, letters, and hyphens (-). The name can be up to 36 characters in length.
- Application Description: The description of the application. The description can be up to 128 characters in length.
- Source of Deployment Package: The way in which you want to specify the deployment package. Valid values:
  - Custom Program: If you select this option, the File Uploading Method parameter is required. Valid values of the File Uploading Method parameter:
    - Upload JAR Package: Select and upload the JAR package that you have downloaded.
    - JAR Package Address: Enter the address of the demo package.
  - Official Demo: EDAS provides the following demo types: Spring Cloud Server Application, Spring Cloud Client Application, Dubbo Server Application, and Dubbo Client Application. Select a demo type based on your needs.
- Version: The version of the application. You can specify a custom version number or click Use Timestamp as Version Number to generate a version number.
- Time Zone: The time zone for the application.
- Total Pods: The number of pods on which the application is to be deployed.
- Single-pod Resource Quota: The amount of CPU and memory resources that you want to reserve for a pod. To set a limit, enter a numeric value. The default value 0 indicates that no limit is set.
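The Application Name constraint above can likewise be sketched as a quick validation helper. This is a hypothetical illustration of the stated rule (start with a letter; digits, letters, and hyphens; at most 36 characters); whether uppercase letters are accepted is our assumption, not confirmed by the console:

```python
import re

# Application Name rule from the console: must start with a letter,
# may contain letters, digits, and hyphens (-), up to 36 characters.
# Allowing uppercase letters here is an assumption.
APP_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,35}$")

def is_valid_edas_app_name(name: str) -> bool:
    """Return True if `name` satisfies the Application Name rule above."""
    return bool(APP_NAME.fullmatch(name))

print(is_valid_edas_app_name("spring-cloud-demo"))  # True
print(is_valid_edas_app_name("1-starts-with-digit"))  # False
```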
- Optional. In the Advanced Settings step, configure the advanced settings.
- Click Create Application. The application requires several minutes to be deployed. During the process, you can view the change records to track the deployment progress of the application. For more information, see View application overview. After the application is deployed, the Application Overview page appears. On this page, you can check the status of the pod. If the pod is in the Running state, the application is published. You can click the state of the pod to view the advanced settings of the application instance, such as Deployment, Pods, and Startup Command.
What to do next
After the application is deployed, you can add an Internet-facing Server Load Balancer (SLB) instance to enable access to the application over the Internet. You can also add an internal-facing SLB instance so that all nodes in the same virtual private cloud (VPC) can access the application by using this internal-facing SLB instance. For more information, see Add an SLB instance to an application in a Kubernetes cluster.