This glossary defines terms specific to Enterprise Distributed Application Service (EDAS) or terms that have a specific meaning within the EDAS platform. For general Alibaba Cloud terminology, see the Alibaba Cloud glossary.
Platform and runtime
EDAS
Enterprise Distributed Application Service (EDAS) is a platform as a service (PaaS) for hosting applications and managing microservices. EDAS provides full-stack solutions to develop, deploy, monitor, and maintain applications, with built-in support for various microservice frameworks such as Dubbo and Spring Cloud. You can use EDAS to migrate existing applications to Alibaba Cloud.
EDAS Agent
A daemon process installed on ECS instances that serves as the communication channel between the EDAS console and your deployed applications. EDAS Agent manages applications, reports application status, and retrieves information within a cluster.
EDAS Container
The base runtime for HSF applications on EDAS. EDAS Container consists of Ali-Tomcat and Pandora.
Ali-Tomcat
A Servlet container built on Apache Tomcat that supports all core Tomcat features. Ali-Tomcat automatically loads a Pandora container at startup to provide class isolation between your application and middleware services.
Pandora
A lightweight class-isolation container (also known as taobao-hsf.sar) that isolates applications and middleware services, and also isolates different middleware services from each other. Pandora for EDAS integrates middleware plug-ins for service registration, configuration push, and trace collection, enabling you to monitor, process, trace, analyze, maintain, and manage EDAS applications.
Pandora Boot
A lighter-weight variant of Pandora that combines Pandora and FatJar technologies so that you can launch Pandora-based applications directly in your IDE.
Application runtime environment
The container or server that runs your application. HSF applications run in EDAS Container, while open source applications run in standard Apache Tomcat runtimes.
Alibaba Cloud Toolkit
Alibaba Cloud Toolkit is a free IDE plug-in for IntelliJ IDEA, Eclipse, and Maven that lets you develop, test, diagnose, and deploy applications to Alibaba Cloud services directly from your development environment.
Microservice frameworks
Dubbo
An open source distributed service framework that provides high-performance, transparent remote procedure calls (RPCs).
HSF
High-speed Service Framework (HSF) is a distributed service framework designed for enterprise-scale architectures. Built on a high-performance network communication layer, HSF supports service publishing and registration, service calls, routing, authentication, throttling, graceful degradation, and trace queries.
Infrastructure
Cluster
A collection of cloud resources that run your applications. EDAS supports the following cluster types:
ECS cluster: Each ECS instance runs one application.
Kubernetes cluster: Created in Container Service for Kubernetes (ACK). Kubernetes clusters have passed Cloud Native Computing Foundation (CNCF) conformance tests and integrate with services such as SLB and File Storage NAS (NAS). After you create a Kubernetes cluster in ACK, import it into EDAS to deploy applications.
Swarm cluster (being phased out): A Docker Swarm-based cluster that supports multiple Docker instances per ECS instance. Swarm is a container management tool released by Docker. Swarm clusters are being phased out and are unavailable to first-time users.
ECS
Elastic Compute Service (ECS) provides scalable, on-demand computing resources on Alibaba Cloud for building stable and secure applications.
VPC
Virtual Private Cloud (VPC) is a logically isolated private network on Alibaba Cloud where you provision and manage ECS instances, SLB instances, and ApsaraDB for RDS instances.
SLB
Server Load Balancer (SLB) distributes incoming network traffic across multiple backend servers to improve the responsiveness and availability of your applications and prevent service interruption caused by single points of failure (SPOFs).
Pod
The smallest deployable and billable unit in Kubernetes. A pod contains one or more containers that share computing resources, storage, an IP address, and a port. You can limit the proportion of computing resources allocated to each container. In EDAS, each pod serves as an application instance within a Kubernetes cluster.
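The per-container resource limits mentioned above are declared in the pod specification. The following is a minimal sketch; the pod name and image are illustrative placeholders, not values EDAS requires.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # illustrative name
spec:
  containers:
  - name: app
    image: demo-app:1.0     # illustrative image
    resources:
      requests:             # minimum resources guaranteed to the container
        cpu: "500m"
        memory: "1Gi"
      limits:               # hard caps the container may not exceed
        cpu: "1"
        memory: "2Gi"
```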
Kubernetes namespace
A logical partition within a Kubernetes cluster that isolates system objects into separate projects, teams, or user groups. Each namespace can be managed independently while still sharing the overall cluster resources.
CPU sharing ratio
An approach that increases the instance density of a Docker host by sharing its CPU cores among multiple instances, improving per-instance resource usage. For example, on a host with 2 CPU cores and 8 GB of memory:
A ratio of 1:2 allows up to four 1-core 2 GB Docker instances.
A ratio of 1:4 allows up to eight 1-core 1 GB Docker instances.
Memory is always exclusive and cannot be shared.
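The arithmetic above can be sketched as follows. The function is illustrative, not an EDAS API: CPU capacity is multiplied by the sharing ratio, while memory stays exclusive, and the smaller of the two limits wins.

```python
def max_instances(host_cores, host_mem_gb, ratio, inst_cores, inst_mem_gb):
    """Maximum Docker instances a host can hold under a CPU sharing ratio.

    CPU is oversubscribed by `ratio`; memory is exclusive and never shared.
    """
    by_cpu = (host_cores * ratio) // inst_cores   # oversubscribed CPU capacity
    by_mem = host_mem_gb // inst_mem_gb           # exclusive memory capacity
    return min(by_cpu, by_mem)

# 2-core 8 GB host, 1-core instances:
print(max_instances(2, 8, 2, 1, 2))  # ratio 1:2, 2 GB each -> 4
print(max_instances(2, 8, 4, 1, 1))  # ratio 1:4, 1 GB each -> 8
```

Note that in both examples memory, not CPU, is what ultimately caps the density: the oversubscribed CPU capacity exactly matches what the exclusive memory allows.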
Resource group
A logical grouping of EDAS resources -- such as ECS instances, clusters, and SLB instances -- used to organize access control. You can grant Resource Access Management (RAM) users permissions scoped to specific resource groups.
Application management
Application lifecycle
The full set of operations available for an EDAS application: create, deploy, start, roll back, scale out, scale in, stop, and delete. Each application is the basic manageable unit in EDAS and typically contains multiple application instances.
Application instance
An ECS instance or elastic container instance on which an application is deployed. In an ECS cluster, each ECS instance runs one application instance. In a Kubernetes cluster, each pod serves as an application instance.
Application instance group
A way to organize the ECS instances of an application into separate groups, each running a different version. Instance groups support beta releases, A/B testing, and phased rollouts. You can manage the application lifecycle, monitor resources, and handle alerts independently for each group.
Process
A record of a lifecycle operation -- such as deploy, start, or scale -- performed on an application. EDAS abstracts the business logic of each operation and saves it as an application change record, viewable in the EDAS console.
Auto scaling
Automatically scales a cluster in or out based on real-time instance metrics such as CPU utilization, response time (RT), and load. Auto scaling helps maintain service performance and cluster availability.
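A threshold-based decision of this kind can be sketched as below. This is an illustrative rule, not the EDAS scaling algorithm; the function name and default thresholds are assumptions chosen for the example.

```python
def scaling_decision(cpu_pct, rt_ms, current, min_inst, max_inst,
                     scale_out_cpu=75, scale_in_cpu=20, scale_out_rt=500):
    """Illustrative threshold rule: scale out when CPU utilization or
    response time (RT) exceeds its threshold, scale in when CPU is low,
    and always stay within [min_inst, max_inst] instances."""
    if (cpu_pct > scale_out_cpu or rt_ms > scale_out_rt) and current < max_inst:
        return "scale-out"
    if cpu_pct < scale_in_cpu and current > min_inst:
        return "scale-in"
    return "hold"

print(scaling_decision(85, 120, current=3, min_inst=2, max_inst=10))  # scale-out
```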
Batch operations
A feature in the EDAS console that lets you run commands at the same time across multiple ECS instances that have EDAS Agent installed -- whether all instances in a cluster, all instances hosting the same application, or a specific set of instances.
Microservice namespace
A logical isolation boundary for services within EDAS. Use microservice namespaces to separate runtime environments (for example, development, staging, and production) so that service calls and configuration pushes in one environment do not affect another.
Microservice namespaces are distinct from Kubernetes namespaces.
Observability and diagnostics
ARMS
Application Real-Time Monitoring Service (ARMS) is an application performance management (APM) service on Alibaba Cloud that lets you build a monitoring system capable of responding within a few seconds.
Application monitoring
Real-time and historical monitoring of application traffic. Use application monitoring data to assess application health and to locate and troubleshoot issues.
Service monitoring
Tracks the service calls made by your application, including metrics for queries per second (QPS), response time, and error rate.
Service statistics
An aggregated view of service status across all applications for the current tenant over the last 24 hours, including call counts, call durations, and error counts.
Infrastructure monitoring
Collects and aggregates system-level metrics -- CPU, memory, workload, network, and disk -- from the ECS instances that run your applications. Data is organized by application.
Service topology
A visual map that shows how services are connected and displays performance data for each connection, helping you understand dependencies at a glance.
Distributed tracing
EDAS EagleEye analyzes every service call, message submission, and database access in a distributed system to help you pinpoint bottlenecks and potential risks.
Trace query
A feature for querying the status of system traces, with a focus on identifying slow or abnormal services.
Method tracing
Uses JVM bytecode enhancement to record the elapsed time and call sequence of all method calls within a selected method. This lets you inspect execution order in real time.
Application diagnostics
Detailed per-instance troubleshooting and performance analysis, covering JVM heap and non-heap memory, class loaders, threads, Tomcat connector statistics, and method tracing.
Health check
Periodically inspects containers and applications, then reports the results to the EDAS console. Health checks keep you informed of application status within a cluster and help you locate problems quickly.
Log collector
An agent that collects system monitoring logs to generate monitoring data and trace information. Because instances in a VPC are network-isolated, you can install a log collector inside the VPC to gather data from all instances deployed there.
Real-time log
Standard runtime logs streamed from Docker containers.
Traffic management
Service throttling
Rate-limiting rules applied per application to control service traffic and protect service stability. EDAS supports QPS-based and thread-based throttling rules to keep services responsive during traffic spikes.
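The idea behind a QPS-based rule can be sketched with a fixed-window counter. This class is an illustration of the technique, not the EDAS implementation; the class name and injectable `clock` parameter are choices made for this example (the clock is injectable so the logic is deterministic to demonstrate).

```python
import time

class QpsThrottle:
    """Illustrative fixed-window QPS limiter: admit at most `max_qps`
    requests per one-second window, reject the rest."""

    def __init__(self, max_qps, clock=time.time):
        self.max_qps = max_qps
        self.clock = clock
        self.window = -1   # the current one-second window
        self.count = 0     # requests admitted in this window

    def allow(self):
        now = int(self.clock())
        if now != self.window:      # a new one-second window began
            self.window, self.count = now, 0
        if self.count < self.max_qps:
            self.count += 1
            return True
        return False                # reject: over the QPS limit

# With a frozen clock, a 2-QPS rule admits two calls and rejects the third:
throttle = QpsThrottle(max_qps=2, clock=lambda: 100.0)
print([throttle.allow() for _ in range(3)])  # [True, True, False]
```

Thread-based throttling works analogously, except the counter tracks concurrently executing requests rather than requests per second.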
Service degradation
Graceful service degradation pinpoints and blocks slow or underperforming services to protect overall application stability. Unlike service throttling, which controls service traffic, service degradation targets low-performance dependencies. Configure degradation rules based on service response time to block problematic services during peak hours.
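A response-time rule of this kind can be sketched as below. The function names, the 200 ms threshold, and the fallback value are illustrative assumptions, not part of the EDAS API.

```python
def should_degrade(avg_rt_ms, rt_threshold_ms, degradation_enabled=True):
    """Illustrative rule: block calls to a dependency whose average
    response time exceeds the configured threshold."""
    return degradation_enabled and avg_rt_ms > rt_threshold_ms

def call_dependency(avg_rt_ms):
    # Degraded dependencies are skipped and a fallback result is returned,
    # protecting the caller from the slow service.
    if should_degrade(avg_rt_ms, rt_threshold_ms=200):
        return "fallback"
    return "real call"

print(call_dependency(350))  # fallback: the dependency is too slow
print(call_dependency(50))   # real call
```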
Service search
A lookup tool for querying which services a specific machine provides or consumes.
Configuration and scheduling
Lightweight configuration center
An on-premises configuration center provided by EDAS that supports service discovery and configuration management in on-premises environments.
Distributed job management
A scheduled job service powered by SchedulerX, developed by the Alibaba Cloud middleware team. To use it, add SchedulerX-Client dependencies to your application, create a scheduled job in the SchedulerX console, and configure its parameters. After the application starts, the SchedulerX-Server cluster distributes and triggers jobs across the SchedulerX-Client cluster, ensuring that jobs are stably and reliably triggered and scheduled.
Continuous integration
A development practice where team members frequently merge code into a shared repository, triggering automated builds and tests with each integration.
Billing
Billing account
The Alibaba Cloud account used to purchase EDAS. A single billing account can be linked to up to five Alibaba Cloud accounts.