Enterprise Distributed Application Service: Choose between ECS and Kubernetes deployment environments

Last Updated: Mar 12, 2026

Enterprise Distributed Application Service (EDAS) supports application deployment in both Elastic Compute Service (ECS) and Kubernetes (K8s) clusters. Although both environments can host your applications, they offer different features. If you are selecting a technology or migrating your architecture, you may be unsure which environment to choose. This topic provides suggestions and a feature comparison to help you decide.

Introduction to deployment environments

EDAS provides two cluster deployment environments for applications: ECS and K8s.

  • Both ECS and K8s clusters support hosting, service administration, and observability for Java applications that use the Spring Cloud, Dubbo, or High-Speed Service Framework (HSF) microservice frameworks.

  • Only K8s clusters support hosting, service administration, and observability for polyglot applications.

Additionally, the two deployment environments have different requirements for applications and technology stacks. The application management capabilities that EDAS provides also vary between the two environments.

Recommendations for selecting a deployment environment

In most cases, we recommend using the K8s environment to deploy applications. EDAS is deeply integrated with Alibaba Cloud Container Service for Kubernetes (ACK), which provides a wide range of application management features and enables higher resource utilization.

Select the appropriate environment based on your scenario.

Choose the K8s environment if:

  • Your application is delivered as a container image or is not written in Java.

  • You need to deploy multiple instances on a single node or require high-density deployment.

  • You want to manage applications with K8s, for example with the kubectl tool, or want to use other K8s features.

Choose the ECS environment if:

  • Many of your applications are not containerized. The main advantage of the ECS environment is that it is friendlier to non-containerized deployments and makes it easier to reuse existing application O&M systems.

  • Your applications require extremely high single-instance performance and stability.

Note

If you already use the ECS environment to manage applications and require the advanced features that K8s provides, you can migrate the applications to a K8s environment.

Comparison of application hosting features

The following table compares the features of the ECS and K8s environments. 'Y' indicates that the feature is supported, and 'N' indicates that it is not.

| Feature | ECS environment | K8s environment | Remarks |
| --- | --- | --- | --- |
| Deploy application | Y | Y | The K8s environment supports more instance scheduling policies and lets you deploy multiple applications on a single node. |
| Start application | Y | Y | None |
| Stop application | Y | Y | None |
| Delete application | Y | Y | None |
| Application scaling | Y | Y | None |
| Reset application | Y | N | This feature is not required in the K8s environment. To reset an application, delete the pod. |
| Upgrade or downgrade container | Y | Y | None |
| Application rollback | Y | Y | None |
| Automatic horizontal scaling | Y | Y | The supported methods and rules are different. |
| Scheduled scaling | N | Y | None |
| Phased release | Y | Y | None |
| Application group | Y | N | None |
| Application group configuration | Y | N | None |
| Real-time log | Y | Y | None |
| Log directory | Y | Y | None |
| SLS log | Y | Y | None |
| Server Load Balancer | Y | Y | None |
| Health check | Y | Y | The K8s environment supports readiness and liveness probes, which are different from the health checks in the ECS environment. |
| JVM parameter settings | Y | Y | None |
| Tomcat configuration | Y | Y | None |
| Lifecycle hook | Y | Y | The K8s environment supports PostStart and PreStop hooks, which are different from the hooks in the ECS environment. |
| Environment variable | Y | Y | None |
| Canary release | Y | Y | None |
| Traffic monitoring | Y | Y | None |
| Throttling and degradation | Y | Y | In the K8s environment, this can be implemented without modifying the application code. |
| Service list query | Y | Y | None |
| Configuration push | Y | Y | None |
| Event Center | Y | Y | None |
| Notifications | Y | Y | None |
| Application diagnostics | Y | Y | The K8s environment provides more powerful, integrated monitoring, control, and diagnostics capabilities. |
| Resource purchase | Y | N | None |
| Service Mesh | N | Y | None |
| Image deployment support | N | Y | None |
| Polyglot support | N | Y | None |
| NAS support | N | Y | None |
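The comparison above notes that the K8s environment uses readiness and liveness probes for health checks. These probes typically call an HTTP endpoint inside the container. The following is a minimal sketch of such a probe target, using only the JDK's built-in com.sun.net.httpserver package; the path /healthz and port 8080 are illustrative assumptions, not EDAS defaults, and must match whatever path and port you configure in the probe.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HealthEndpoint {

    // Starts an HTTP server exposing /healthz. A K8s liveness or
    // readiness probe can be pointed at this path; a 200 response
    // tells the kubelet the container is healthy.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/healthz", exchange -> {
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length); // 200 = healthy
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080); // hypothetical port; match it to the probe configuration
    }
}
```

In a real application this handler would also check dependencies (database connections, downstream services) before returning 200, so that the probe reflects actual readiness rather than mere process liveness.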

FAQ

Can I deploy multiple application instances on a single node in an ECS environment?

No, you cannot. If you require this capability, use the K8s environment.

Can I deploy polyglot applications in an ECS environment?

No, you cannot. If you require this capability, use the K8s environment.

Are the OpenAPI operations for the ECS and K8s environments the same?

Do both ECS and K8s environments support developer tools?

Yes, they do. However, the configurations differ between the two environments, so be aware of these differences when you switch.

Do both ECS and K8s environments support Apsara DevOps?

How does the K8s environment scale out nodes by purchasing resources?

In a K8s environment, elastic scaling refers to the scaling of pods. It typically does not involve purchasing new ECS instances (nodes) or releasing existing ones.

To scale nodes elastically, use the node scaling features of Container Service for Kubernetes.

If I use the mount script feature in an ECS environment, how do I migrate to a K8s environment?

Mount scripts for ECS applications are used to run specified commands at specific stages of the deployment process. You can mount scripts to four lifecycle stages: Prepare Instance, Start Application, Stop Application, and Destroy Instance.

The lifecycle hooks provided by the K8s environment are limited to PostStart and PreStop and do not directly correspond to the ECS application lifecycle. Therefore, when you migrate an application that uses mount scripts to a K8s environment, you must make some modifications.

  • For mount scripts that run before the Prepare Instance stage, add them to a Dockerfile and build them into the image.

  • For mount scripts that run before the Start Application stage, you can also add them to a Dockerfile and build them into the image.

    For a pod, preparing the instance and starting the application are part of the same process.

  • For mount scripts that run after the Start Application stage, configure them in the PostStart hook.

  • For mount scripts that run before the Stop Application stage, configure them in the PreStop hook.

  • For mount scripts that run after the Stop Application stage, perform the cleanup tasks during the graceful shutdown process of the application. For example, you can use a Java ShutdownHook or listen for the SIGTERM signal. You can also move these tasks to the PreStop hook as needed.

  • For mount scripts that run before the Destroy Instance stage, handle the cleanup tasks the same way: perform them during the graceful shutdown process of the application, or move them to the PreStop hook as needed.

    For a pod, destroying the instance and stopping the instance are part of the same process.
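The graceful-shutdown approach described above can be sketched as follows. When K8s stops a pod it sends SIGTERM to the container's main process, and the JVM responds by running registered shutdown hooks before exiting. The releaseResources method here is a hypothetical placeholder for your own cleanup logic, not an EDAS API.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class GracefulShutdown {

    // Tracks whether cleanup has already run, so the hook is idempotent.
    static final AtomicBoolean cleaned = new AtomicBoolean(false);

    // Hypothetical cleanup logic standing in for what an ECS mount script
    // would do after Stop Application or before Destroy Instance:
    // deregister from the service registry, flush buffers, close pools.
    static void releaseResources() {
        if (cleaned.compareAndSet(false, true)) {
            System.out.println("cleanup done");
        }
    }

    public static void main(String[] args) {
        // On SIGTERM (sent by K8s when the pod is stopped), the JVM runs
        // registered shutdown hooks before the process exits.
        Runtime.getRuntime()
               .addShutdownHook(new Thread(GracefulShutdown::releaseResources));
        // ... application work ...
    }
}
```

Note that K8s only waits for the pod's termination grace period (30 seconds by default) after sending SIGTERM, so cleanup work in a shutdown hook must finish within that window or the process is killed.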