
Serverless App Engine:Get started with SAE

Last Updated:May 07, 2025

Serverless App Engine (SAE) is an application-oriented serverless Platform as a Service (PaaS). It lets you deploy applications without managing the underlying infrastructure. You can use SAE to deploy microservices applications developed in Java, PHP, and other programming languages. This topic describes how to use SAE and provides tutorials to help you get started quickly.

Background information

If you are new to SAE, we recommend that you watch the introductory video to learn about SAE and its basic operations. For more information, see What is Serverless App Engine.

Features

  • Deployment: Command and Arg, Database whitelist, CI/CD, Upgrade and rollback, Permission setting, Others

  • Network: Alibaba Cloud network infrastructure, Scenarios and methods of SAE network access, Comparison items in SAE networks, Optimized microservices

  • Multiple programming languages: Supported PHP runtime environments, Static file hosting, Perform remote debugging

  • Logs: Collection of text logs, Real-time logs

  • Storage: NAS, OSS, File upload and download

  • Monitoring and alerts: Basic monitoring, Application monitoring, Alert settings

  • High availability: Multi-vSwitch deployment, Graceful start and shutdown, Throttling and degradation

  • Elasticity and cost-effectiveness: Manual scaling, Scheduled scaling, Metric-based scaling, Hybrid scaling, Scheduled start and stop

  • Best practices: Best practices

Procedure

The following figure shows the procedure for using SAE.

(Figure: SAE workflow)

  1. Before you deploy an application to SAE for the first time, you must plan the virtual private cloud (VPC), vSwitch, and namespace (to distinguish between test, staging, and production environments).

    For more information, see Preparations.

  2. Deploy an application in the SAE console.

    For more information, see Application deployment. In addition to deploying applications in the console, SAE allows you to deploy applications using Jenkins, integrated development environment (IDE) plug-ins, Maven plug-ins, Terraform, OpenAPI, Apsara Devops, and kubectl-sae.

    Note

    If you are deploying an application to SAE for the first time, you must create an application in the SAE console.

  3. Access the SAE application over the internal network or the Internet.

    For more information, see the Network section of this topic.

  4. Configure advanced features for SAE applications.

    SAE provides the following advanced features and capabilities: enterprise-level permission control, elasticity and cost-effectiveness, optimized Java microservices, high availability, and storage.

[Back to top]

Deployment

SAE supports package-based deployment and image-based deployment. Package-based deployment supports Java WAR packages, Java JAR packages, PHP ZIP packages, and Python ZIP packages. When you create an SAE application, you must configure the VPC, vSwitch, and security group, either manually or automatically, and specify the instance type. You can modify the instance type after the application is created. The following sections describe common deployment configurations.

Command and Arg

  • Image-based Deployment

    You can write the startup command and related parameters in the Dockerfile. You can also override the startup command in the SAE console, as shown in the following figure.

    (Figure: setting the startup command for image-based deployment)

  • Package-based deployment

    For example, if you deploy a JAR package, you can configure startup parameters in the SAE console, as shown in the following figure.

    (Figure: configuring startup parameters for a JAR package)

[Back to top]

Database whitelist

Unlike Elastic Compute Service (ECS) instances, SAE applications run in containers. The IP addresses of the applications may change each time the applications are deployed. However, the CIDR blocks of the vSwitches to which the applications are connected remain unchanged. Therefore, you can add the CIDR blocks of the vSwitches to the database whitelist. For more information, see Access Alibaba Cloud databases from applications.
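The whitelisting logic above can be sketched in a few lines: because instance IP addresses are always allocated from the vSwitch CIDR block, whitelisting that block covers every current and future instance. A minimal illustration using Python's standard ipaddress module (the CIDR and addresses are made-up examples):

```python
import ipaddress

# CIDR block of the vSwitch that the SAE application is attached to
# (example value; use your actual vSwitch CIDR).
VSWITCH_CIDR = ipaddress.ip_network("192.168.1.0/24")

def allowed_by_whitelist(instance_ip: str) -> bool:
    """Return True if the instance IP falls inside the whitelisted CIDR."""
    return ipaddress.ip_address(instance_ip) in VSWITCH_CIDR

# Instance IPs change across deployments, but they stay inside the CIDR,
# so the whitelist entry never needs to be updated.
print(allowed_by_whitelist("192.168.1.25"))   # inside the vSwitch CIDR
print(allowed_by_whitelist("10.0.0.8"))       # outside: would be rejected
```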

[Back to top]

CI/CD

In addition to console-based deployment and API-based deployment, SAE is integrated with multiple CI/CD tools, such as Apsara Devops and Jenkins. You can use these tools to automatically deploy applications after code is committed. For more information, see Overview of application hosting.

  • Plug-in-based deployment

    If you use Java, SAE provides a variety of plug-ins for deployment, including Maven plug-ins, IntelliJ IDEA plug-ins, and Eclipse plug-ins. For more information, see Overview of application hosting.

  • Interconnection between on-premises and cloud applications

    In microservice scenarios, a service is separated into multiple applications. If your test environment is deployed on Alibaba Cloud, you must establish communication between on-premises applications and cloud applications. SAE works with Cloud Toolkit to provide remote interconnection between on-premises and cloud applications. You can directly call cloud-based consumers and providers on your on-premises machine. For more information, see Use Cloud Toolkit to implement interconnection between on-premises and cloud applications (IntelliJ IDEA).

  • Terraform

    SAE supports Terraform. For more information, see terraform-provider-alicloud and Overview of Terraform.

[Back to top]

Upgrade and rollback

SAE supports a variety of rollback policies, including single-batch release, phased release, canary release, and rollback to historical versions. For more information, see Upgrade and roll back an application.

[Back to top]

Permission setting

SAE supports fine-grained permission control. You can configure permissions based on namespaces, applications, and read and write operations. The permission assistant feature simplifies the configuration process. For more information, see SAE permission assistant.

[Back to top]

Others

  • Environment variables

    Before an application can run in a system, you must configure specific environment variables for the application. Then, you can run commands to manage the application. Environment variables are stored in the key-value pair format. Different applications use different environment variables and the application configurations do not affect each other. For more information, see Configure environment variables.
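As a minimal sketch of how an application consumes these settings at startup, the following reads key-value pairs from the environment with fallback defaults; the variable names and values are hypothetical examples, not SAE-defined names:

```python
import os

# Hypothetical variables, as they would be configured for the
# application in the SAE console.
os.environ.setdefault("APP_DB_HOST", "rds-internal.example.local")
os.environ.setdefault("APP_LOG_LEVEL", "INFO")

def load_config() -> dict:
    """Collect this application's settings from environment variables,
    falling back to defaults when a variable is not set."""
    return {
        "db_host": os.environ.get("APP_DB_HOST", "localhost"),
        "log_level": os.environ.get("APP_LOG_LEVEL", "WARN"),
    }

config = load_config()
print(config["db_host"], config["log_level"])
```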

  • Hosts binding

    SAE supports application-level instances. You can use hosts binding to resolve hostnames so that you can access application instances by using hostnames. For more information, see Configure hosts binding.
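A hosts binding behaves like an /etc/hosts entry: the configured hostname resolves to a fixed address before DNS is consulted. A simplified sketch of that lookup order (the hostnames and addresses are illustrative):

```python
# Illustrative hosts bindings configured for the application
# (hypothetical hostnames and addresses).
HOSTS = {
    "provider.internal": "192.168.1.10",
    "cache.internal": "192.168.1.11",
}

def resolve(hostname: str) -> str:
    """Resolve a hostname via the hosts bindings first; anything not
    bound would fall through to normal DNS resolution."""
    try:
        return HOSTS[hostname]
    except KeyError:
        raise LookupError(f"{hostname} is not bound; resolve via DNS")

print(resolve("provider.internal"))
```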

[Back to top]

Network

After you deploy an application to SAE, you may have different network access requirements. For more information, see Network-related concepts and capabilities of SAE.

Alibaba Cloud network infrastructure

(Figure: Alibaba Cloud network infrastructure)

  • Virtual private cloud (VPC): A VPC is a custom private network that you create on Alibaba Cloud. VPCs are logically isolated from each other.

    Note

    By default, access from VPCs to the Internet is denied.

  • vSwitch: A vSwitch is a basic network component that connects different cloud resources in a VPC. Each vSwitch belongs to a single zone. When you create a cloud resource in a VPC, you must specify the vSwitch to which the cloud resource is connected.

  • Elastic IP address (EIP): An EIP can be associated with only one resource, such as an ECS instance or SAE instance. After you associate an EIP with a resource, the resource can access and be accessed by other services over the Internet.

  • NAT gateway: The source network address translation (SNAT) feature of a NAT gateway allows all resources in a VPC to access the Internet. An Internet NAT gateway is suitable for all resources in a VPC, whereas an EIP is suitable for only one resource in a VPC.

[Back to top]

Scenarios and methods of SAE network access

After you deploy an application to SAE, you may have the following network access requirements, as shown in the following figure.

(Figure: SAE network access scenarios)

Mutual access between SAE applications over an internal network (non-microservices scenarios)

In serverless mode, new internal IP addresses are generated each time you deploy an application, so you cannot reliably access the application by using the IP addresses of its instances. You can use one of the following methods instead:

  • SAE Service (CLB): You can use an SAE service that is implemented based on an Alibaba Cloud Server Load Balancer (SLB) instance (internal CLB instance) to access applications. For more information, see Configure CLB-based application access.

  • SAE Ingress (ALB/CLB): You can use gateway routing that is implemented based on an Alibaba Cloud SLB instance (internal Application Load Balancer (ALB) or CLB instance) to route traffic to different SAE applications based on different domain names and paths. For more information, see Configure gateway routing (Ingress) access.

[Back to top]

Access to SAE applications from the Internet (inbound traffic)

You can use one of the following methods to enable access:

  • SAE Service (CLB): You can use an SAE service that is implemented based on an Alibaba Cloud SLB instance (Internet CLB instance) to access applications. For more information, see Configure CLB-based application access.

  • SAE Ingress (ALB/CLB): You can use gateway routing that is implemented based on an Alibaba Cloud SLB instance (Internet CLB or ALB instance) to route traffic to different SAE applications based on different domain names and paths. For more information, see Configure gateway routing (Ingress) access.

  • SAE EIP: You can associate an EIP with each instance of an SAE application. Then, the instance can access and be accessed by other services over the Internet. For more information, see Configure EIP-based Internet access for SAE instances.

[Back to top]

Access to the Internet from SAE applications (outbound traffic)

You can use one of the following methods to enable access:

  • SAE NAT gateway: You can configure an Internet NAT gateway and SNAT entries for the VPC so that all instances in the VPC can access the Internet. For more information, see the Differences between NAT-based Internet access and EIP-based Internet access section of this topic.

  • SAE EIP: You can associate an EIP with each instance of an SAE application. Then, the instance can access and be accessed by other services over the Internet. For more information, see Configure EIP-based Internet access for SAE instances.

[Back to top]

Access to ECS instances, ApsaraDB RDS, and Tair (Redis OSS-compatible) from SAE applications in the same VPC

  • SAE is based on Alibaba Cloud VPC networks. Therefore, you do not need to configure additional settings to access resources in the same VPC, such as ECS instances, ApsaraDB RDS, and Tair (Redis OSS-compatible). Similarly, Alibaba Cloud resources in the same VPC can access SAE.

  • You must check whether the related security groups and service whitelists are configured. If you encounter issues, follow the troubleshooting steps in FAQ.

[Back to top]

Access to registries from microservices applications and mutual access between instances

For more information, see Network-related concepts and capabilities of SAE.

[Back to top]

Comparison items in SAE networks

Differences between ServiceName and gateway routing in SAE

Gateway routing (Ingress) in SAE is implemented based on Alibaba Cloud SLB (CLB and ALB). Gateway routing can route traffic to different applications based on domain names and paths, as shown in the following figure. ServiceName does not have this capability. We recommend that you use gateway routing if it meets your business requirements. Use ServiceName in scenarios in which you need Layer 4 TCP access or cannot access applications by using domain names.

(Figure: routing traffic by domain name and path through SLB)
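Conceptually, gateway routing selects a backend application by matching the request's domain name and path prefix, which ServiceName cannot do. A simplified sketch of that matching logic (the rules and application names are made up):

```python
# Illustrative Ingress-style rules: (domain, path prefix) -> application.
ROUTES = [
    ("shop.example.com", "/cart", "cart-app"),
    ("shop.example.com", "/", "frontend-app"),
    ("admin.example.com", "/", "admin-app"),
]

def route(host: str, path: str) -> str:
    """Return the target application for a request: the first rule whose
    domain equals the host and whose path prefix matches wins."""
    for domain, prefix, app in ROUTES:
        if host == domain and path.startswith(prefix):
            return app
    raise LookupError("no matching rule")

print(route("shop.example.com", "/cart/items"))   # matched by path prefix
print(route("shop.example.com", "/home"))         # falls through to "/"
```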

[Back to top]

Differences between CLB-based application access and Kubernetes Service name-based application access

Kubernetes Services are classified into the following types: CLB-based Services and ClusterIP-based Services. SAE does not directly provide ClusterIP. Instead, SAE provides a domain name that can be accessed. The following list describes the differences between the Service types.

  • Billing: A CLB-based Service incurs CLB fees. A domain-based (ClusterIP) Service is free of charge.

  • O&M: CLB is an independent Alibaba Cloud service that provides monitoring, alerting, and log collection to Log Service, together with fine-grained troubleshooting capabilities. A domain-based (ClusterIP) Service does not provide independent monitoring, alerting, or access log collection; you must configure alerts and logs at the application level.

[Back to top]

Differences between ALB-based gateway routing and CLB-based gateway routing

ALB is a load balancing service that runs at the application layer, and supports protocols such as HTTP, HTTPS, and QUIC. We recommend that you use an ALB instance in gateway routing scenarios. For more information, see Introduction to Server Load Balancer (SLB) product family.

[Back to top]

Differences between NAT-based Internet access and EIP-based Internet access

The following figure shows how EIP-based Internet access works. Each instance is associated with an EIP. If no EIPs are available, new instances fail to be created and cannot provide services.

(Figure: EIP-based Internet access)

The following list describes the differences between NAT-based Internet access and EIP-based Internet access.

  • Effective scope: A NAT gateway takes effect for a VPC or a vSwitch. An Internet NAT gateway allows all instances in the VPC or vSwitch to access the Internet even if no public IP addresses are associated with the instances; a single NAT gateway serves all instances in the VPC. An EIP takes effect for a single instance: if you have 10 instances, you must configure 10 EIPs. After you associate an EIP with an instance, the instance can access and be accessed by other services over the Internet.

  • Fixed public IP address: NAT provides a fixed public IP address. EIP does not: SAE destroys the original instance and disassociates its EIP only after a new instance is successfully associated with an EIP, so you must prepare additional EIPs for new instances. EIPs are allocated from a pool of public IP addresses.

  • Common scenarios: NAT-based Internet access suits applications that use auto scaling policies, whose new instances require Internet access by default, and that require fixed public IP addresses. This method meets the business requirements of 95% of SAE users. EIP-based Internet access suits scenarios in which changing EIPs are acceptable, instances must be directly connected (for example, online conferencing), and the lifecycle of each instance must be managed in a fine-grained manner.

  • Billing: For NAT, see NAT Gateway billing. For EIP, see EIP billing. If an application has no more than 20 instances, EIP-based Internet access is more cost-efficient.

[Back to top]

Optimized microservices

SAE is a best practice product for serverless microservices architecture. SAE provides a variety of enhanced microservices capabilities. For more information, see Microservices-related concepts and capabilities of SAE.

Registries

Instructions

SAE provides a serverless Nacos registry. The registry is suitable for microservices applications that use the Nacos 1.x or 2.x client. For more information, see Use the built-in Nacos registry of SAE. Take note of the following points:

  • If you select the built-in Nacos of SAE, SAE automatically points your application at the built-in registry by injecting the relevant environment variables and using Java agent technology to modify bytecode. Therefore, you can deploy your application to SAE without modifying your code.

  • The built-in Nacos registry is not suitable for applications that use non-Nacos registries. The related logic is managed by your own code.

  • For production scenarios with higher requirements, we recommend that you use Microservices Engine (MSE) Nacos. For more information, see Nacos version features.

[Back to top]


Configuration center

Instructions

SAE provides a serverless Nacos configuration center. The configuration center is suitable for microservices applications that use the Nacos 1.x or 2.x client. For more information, see Use the built-in Nacos registry of SAE. Take note of the following points:

  • If you select the built-in Nacos of SAE, SAE automatically rewrites the configuration center address of your application by injecting the relevant environment variables and using Java agent technology to modify bytecode. Therefore, you can deploy your application to SAE without modifying your code.

  • The built-in Nacos configuration center is not suitable for applications that use non-Nacos configuration centers. The related logic is managed by your own code.

  • The configuration center was once provided as an independent Alibaba Cloud service, which is no longer available; you can still use the configuration feature in SAE. We recommend that you use Microservices Engine (MSE) Nacos 2.0 to manage configurations. For more information, see Nacos version features.

[Back to top]

Application configurations

For information about how application configurations take effect, see Use the built-in Nacos registry of SAE. For information about how to manage configurations in the console, see Distributed configuration management.

[Back to top]

Microservices development

IDE-based automatic deployment

You may need to upload a package each time you deploy an application in the SAE console. You can perform integrated development environment (IDE)-based deployment to improve development efficiency.

For more information, see Use Alibaba Cloud Toolkit to automatically deploy microservices to SAE.

[Back to top]

On-premises and cloud interconnection

After you adopt the microservices model, the number of your applications increases. In some cases, all related microservices applications need to be started during on-premises development and interconnection. To resolve this issue, you can use the Alibaba Cloud Toolkit plugin to perform interconnection between on-premises and cloud applications. For example, you can connect your on-premises consumer directly to a provider deployed in SAE. This way, you do not need to start the provider on premises, which significantly reduces the development and debugging costs.

For more information, see Use Cloud Toolkit to implement on-premises and cloud interconnection (IntelliJ IDEA).

[Back to top]

Service administration

Service list

For applications that use the built-in Nacos registry, SAE provides basic service list query capabilities. If you use a self-managed registry or an MSE registry, log on to the corresponding console to query services; they are not displayed in the SAE console.

For more information, see View service list.

[Back to top]

Graceful shutdown

Because the Consumer client caches the service list and cannot receive offline notifications from a microservices Provider in a timely manner, you typically need to remove the Provider instance from the registry and wait for the Consumer cache to refresh before you stop the instance. To address this issue, SAE integrates the graceful shutdown feature of Microservices Engine (MSE) to productize this process.

For more information about scenarios where this feature is not applicable and its advantages compared to open-source Spring Cloud and open-source Dubbo, see Configure graceful start and shutdown.

For information about how to configure graceful shutdown in SAE, see Configure graceful start and shutdown.
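The shutdown sequence described above (deregister first, wait for consumer caches to refresh, then stop serving) can be sketched as follows; the hook functions and the wait time are illustrative:

```python
import time

def graceful_shutdown(deregister, stop, cache_ttl_seconds: float) -> list:
    """Sketch of a graceful shutdown for a microservices provider:
    remove the instance from the registry, wait at least one cache
    refresh interval so consumers stop calling it, then stop serving."""
    events = []
    deregister()                    # 1. remove instance from the registry
    events.append("deregistered")
    time.sleep(cache_ttl_seconds)   # 2. wait for consumer caches to refresh
    events.append("caches refreshed")
    stop()                          # 3. now it is safe to stop the process
    events.append("stopped")
    return events

# Demo with no-op hooks and a short wait.
print(graceful_shutdown(lambda: None, lambda: None, cache_ttl_seconds=0.01))
```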

[Back to top]

Graceful start

A microservice Provider can be called by a Consumer as soon as it is registered with the registry. However, at this point, the Provider may still need further initialization, such as initializing the database connection pool. We recommend that you enable the graceful start feature for microservices applications that receive large amounts of traffic.

For more information, see Configure graceful start and shutdown.

[Back to top]

Canary release of microservices

SAE not only supports application lifecycle management but also provides canary (grayscale) release capabilities for microservices application deployment.

For more information, see Manage grayscale rules and Perform phased release of applications.

[Back to top]

Throttling and degradation

In burst traffic scenarios such as flash sales, sudden failures may occur in both microservices applications and monolithic applications. In microservices applications, a failure may also cascade into an avalanche effect. Therefore, protective measures are necessary. SAE integrates Alibaba Cloud Application High Availability Service (AHAS) so that you can easily configure and manage throttling and degradation rules.

For more information, see Traffic protection.

[Back to top]

Application monitoring

In a microservices architecture, issues may not be identified and diagnosed if no monitoring system is provided. SAE integrates with Application Real-Time Monitoring Service (ARMS) to provide application dashboards, JVM monitoring, slow call monitoring, trace analysis, and alerting capabilities, which helps reduce the barriers for enterprises to implement microservices architecture.

For more information, see Application monitoring.

[Back to top]

Multiple programming languages

Supported PHP runtime environments

SAE supports the following deployment methods.

  • Image: This method is suitable for PHP applications of any architecture.

  • PHP ZIP package: This method is suitable for online applications that combine PHP-FPM and NGINX.

SAE provides a default PHP runtime environment. For more information, see PHP runtime environment description.

[Back to top]

Static file hosting

With NAS and OSS, SAE supports independent hosting of static files; persistent storage of code, templates, and files uploaded at runtime; and cross-instance file sharing.

[Back to top]

Perform remote debugging

SAE supports the following debugging and file transfer capabilities.

  • PHP remote debugging

    SAE has a built-in Xdebug plugin that enables remote debugging.

  • File download

    SAE allows you to log on to instances through Webshell and download files using SAE or OSS features. For more information, see Upload and download files using Webshell.

  • File upload

    SAE facilitates code development and debugging with NAS and OSS.

[Back to top]

Logs

SAE integrates SLS-based and Kafka-based log collection. The real-time log feature of SAE displays up to 500 lines of logs. If you have higher requirements for log viewing, we recommend that you use the file log collection feature. SAE collects business file logs (log paths in containers) and container stdout logs, and then sends them to SLS or Kafka. This removes the limit on the number of log lines and allows you to aggregate and analyze logs on your own, which facilitates business log integration.

File logs

SAE integrates with SLS log collection, which you can enable in the SAE console. Unlike the ECS era when you had to manually maintain the list of collection machines, after you configure the collection directories or files in SAE, SAE automatically connects with SLS for log collection during subsequent deployments and scale-outs. You can search for logs by keywords in the SLS console. For more information, see Configure log collection to SLS.

Log sources support wildcard characters. For example, /tmp/log/*.log collects all files whose names end with .log in the /tmp/log directory and its subdirectories.
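Whether a file is picked up by such a rule comes down to wildcard matching on the file name. The following sketch approximates that behavior with Python's fnmatch module; it is an illustration, not the exact matcher that SLS uses:

```python
import fnmatch

# Candidate file paths inside the container (illustrative).
paths = [
    "/tmp/log/app.log",
    "/tmp/log/nginx/access.log",
    "/tmp/log/app.log.1",
    "/tmp/log/readme.txt",
]

def matches_collection_rule(path: str, pattern: str = "*.log") -> bool:
    """A file under the configured directory is collected when its
    file name matches the wildcard pattern."""
    filename = path.rsplit("/", 1)[-1]
    return fnmatch.fnmatch(filename, pattern)

# Files in the directory and its subdirectories are matched by name,
# so rotated files such as app.log.1 are not collected by *.log.
collected = [p for p in paths if matches_collection_rule(p)]
print(collected)
```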


If you cannot use SLS to collect logs or you cannot view logs in the SLS console as a RAM user, you can import logs to Kafka. Then, you can deliver data from Kafka to other persistent databases, such as Elasticsearch databases, based on your business requirements. This way, you can manage and analyze logs in a centralized manner. For more information, see Configure log collection to Kafka.

You can use environment variables to configure Logtail startup parameters. For more information, see Configure environment variables to improve Logtail collection performance.

[Back to top]

Real-time logs

SAE automatically collects stdout logs, retains the latest 500 entries, and allows you to view them in the SAE console. For more information, see View real-time logs.


If you want to collect stdout logs to SLS, you can export the logs as files, and then configure file collection. The following figure shows the parameters.

(Figure: parameters for collecting stdout logs as files)

[Back to top]

Storage

SAE provides 20 GB of system disk storage. If you need to read and write to external storage, we recommend using NAS and OSS. Diagnosing SAE applications involves two methods: routine checks and log uploads. For log uploads, you can use not only OSS but also the built-in one-click upload and download feature in SAE.

Note

We recommend that you use Simple Log Service instead of NAS or OSS in log collection and storage scenarios. For more information, see Configure log collection to SLS.

NAS

SAE supports NAS storage, which solves the problems of data persistence and data distribution between application instances. NAS storage can be accessed only when it is mounted to ECS or SAE. For more information, see Configure NAS storage.

[Back to top]

OSS

OSS provides convenient tools and a console that supports visual management of buckets. OSS is suitable for scenarios in which you need to perform more read operations than write operations, such as mounting configuration files or frontend static files. After you configure OSS storage when deploying applications in the SAE console, you can access the data through the OSS console. For more information, see Configure OSS storage.

Note

You cannot use the ossfs tool in log writing scenarios. For more information, see ossfs 1.0.

[Back to top]

File upload and download

If you want to download files from SAE to your local machine, you can use the file upload and download feature built into Webshell. For more information, see Use Webshell to upload and download files.

In addition to NAS and OSS storage, you can use the ossutil tool. For information about how to use OSS to upload and download logs, see Diagnose applications through routine checks.

[Back to top]

Monitoring and alerts

SAE provides built-in infrastructure monitoring and ARMS application monitoring (Java and PHP). The alert management module provides capabilities such as alert convergence, notification, and automatic escalation to help you quickly detect and resolve business alerts.

Infrastructure monitoring

Infrastructure monitoring covers CPU, load, memory, disk, network, and TCP connection metrics. For more information, see Infrastructure monitoring. The built-in infrastructure monitoring is provided by Alibaba Cloud CloudMonitor. You can also log on to the CloudMonitor console to configure custom dashboards.

[Back to top]

Application monitoring

Standard Edition applications integrate ARMS Basic Edition monitoring, which is free of charge after you enable application monitoring. You can also choose to enable ARMS Professional Edition monitoring; after activation, additional fees are incurred, and you view the data in the ARMS console. For more information, see Standard Edition application monitoring.

SAE Professional Edition applications integrate ARMS Professional Edition application monitoring. After you enable application monitoring, no additional fees are incurred, and the feature monitors your application in real time. For more information, see Professional Edition application monitoring.

[Back to top]

Alert settings

SAE supports setting alerts for all the monitoring metrics mentioned above. For more information, see Alert management.

[Back to top]

High availability

After you deploy an application in SAE, you can use the health check feature to verify whether application instances and business processes run properly, and to manage online and offline traffic gracefully. This helps you identify issues when exceptions occur. SAE also supports multi-vSwitch deployment to withstand data center-level failures, and uses AHAS to implement throttling and degradation for Java applications, comprehensively ensuring application availability.

Multi-vSwitch deployment

To withstand data center-level failures, we recommend that you configure multiple vSwitches for production SAE applications. You can specify multiple vSwitches when you create an application or add a vSwitch after the application is created. When you create a vSwitch, we recommend that you reserve more than 100 IP addresses. If IP addresses are insufficient, instance creation or auto scaling may fail. For more information, see Switch vSwitch.

With multiple vSwitches, SAE can automatically scale resources across multiple zones. Users do not need to monitor resource distribution because SAE ensures overall resource availability. For example, when a single point of failure or zone failure occurs, SAE migrates the failed instances to normal nodes or other zones within seconds.

  • Select multiple vSwitches when creating an application.


  • Add a vSwitch after an application is created.


    Note

    When adding a vSwitch for an application, ensure you add the vSwitch to the database whitelist. For more information, see Application access to ApsaraDB.
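The guideline of reserving more than 100 IP addresses can be sanity-checked against the vSwitch CIDR. The sketch below counts addresses with Python's standard ipaddress module; the number of system-reserved addresses per vSwitch is an assumption made for illustration:

```python
import ipaddress

def usable_addresses(cidr: str, reserved: int = 4) -> int:
    """Rough count of instance-usable IP addresses in a vSwitch CIDR.
    `reserved` is an assumed number of system-reserved addresses."""
    return ipaddress.ip_network(cidr).num_addresses - reserved

# A /24 vSwitch comfortably exceeds 100 usable addresses;
# a /26 may run short once auto scaling kicks in.
print(usable_addresses("192.168.0.0/24"))
print(usable_addresses("192.168.0.0/26"))
```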

[Back to top]

Graceful start and shutdown

When SAE deploys an application, it typically starts new instances before destroying old ones. Two major pain points exist for graceful online and offline traffic management:

  • Verifying whether traffic can be forwarded to newly added instances.

  • Performing graceful destruction on old instances.

SAE is based on Kubernetes and provides two health check methods: liveness probe (Liveness configuration) and readiness probe (Readiness configuration). For the pain points mentioned above, SAE supports configuring the Readiness health check. The Readiness probe periodically checks whether an instance is ready. After a new instance is ready, SAE forwards traffic to the instance. If the check fails, SAE does not forward traffic to the instance. Before destroying an old instance, SAE removes the instance from the traffic source. You can configure the shutdown script and specify the waiting period before SAE destroys the instance. For more information, see Configure health checks.

The Liveness health check also periodically verifies whether an instance has started. If the check fails, SAE automatically restarts the container. When exceptions occur, you can use the liveness check feature to perform automatic O&M. However, you cannot identify the failure cause because container data is lost after restart. Consider your business scenario when implementing the liveness check feature.

In microservice scenarios, service registries cache instance lists, so configuring a readiness or liveness probe alone is not enough: you must also configure graceful shutdown for microservices. For more information, see Configure lossless release and shutdown. In production environments, automatic scaling, rollback, or upgrade operations can briefly make services unavailable or cause business monitoring to report large numbers of errors. To address these issues, SAE also supports configuring graceful start for microservices. For more information, see Configure lossless release and shutdown.
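The readiness and graceful-shutdown flow described above can be sketched in a few lines. This is a minimal, hypothetical Python illustration of the probe semantics only; the class and method names are invented for this sketch and are not an SAE or Kubernetes API:

```python
import time


class GracefulService:
    """Minimal sketch of readiness gating and graceful shutdown semantics."""

    def __init__(self, drain_seconds=0):
        self.ready = False      # flipped once initialization completes
        self.draining = False   # set when a shutdown signal arrives
        self.drain_seconds = drain_seconds

    def readiness_probe(self):
        # The platform forwards traffic only while this returns True.
        return self.ready and not self.draining

    def liveness_probe(self):
        # If this ever returned False, the platform would restart the container.
        return True

    def start(self):
        # Simulate initialization: warm caches, register with the
        # service registry, then report ready.
        self.ready = True

    def shutdown(self):
        # Graceful shutdown: stop accepting new traffic first, then wait
        # for in-flight requests to finish before the process exits.
        self.draining = True
        time.sleep(self.drain_seconds)
```

The key point the sketch shows: a new instance receives no traffic until it reports ready, and an instance being destroyed stops reporting ready before it actually exits, so no request is routed to a dying instance.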

[Back to top]

Throttling and degradation

For high-traffic scenarios, SAE integrates AHAS throttling and degradation to ensure application availability. For more information, see Traffic protection.
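AHAS's actual throttling algorithms are not detailed here; as a generic illustration of the throttling concept, the sketch below implements a simple token bucket, a common rate-limiting technique (this is not the AHAS implementation):

```python
class TokenBucket:
    """Generic token-bucket rate limiter, for illustration only."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # timestamp of the last check

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # request admitted
        return False      # request throttled
```

Requests beyond the sustained rate plus the burst allowance are rejected, which protects downstream resources during traffic spikes.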

[Back to top]

Elasticity and cost-effectiveness

SAE supports elastic policies such as manual scaling, scheduled scaling, metric-based scaling, hybrid scaling, and scheduled start and stop. Elasticity is a defining characteristic of cloud-native architectures and applications. By configuring these settings, you can reduce machine costs and improve O&M efficiency.

Manual scaling

Manual scaling is suitable for manual O&M scenarios. Compared to the relatively complex and slow scaling process of ECS, SAE scaling is based on container images and is faster. For more information, see Manual scaling.

[Back to top]

Scheduled scaling

Scheduled scaling is suitable for scenarios in which traffic can be predicted. For example, the catering and education industries have clear morning and evening business peaks every day. Therefore, you can configure different numbers of instances to run at different time periods to make server resources match the actual business traffic as closely as possible. For more information, see Configure Auto Scaling policies.
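A scheduled scaling policy is essentially a mapping from time periods to instance counts. The sketch below shows that idea with a hypothetical morning/evening-peak schedule; the function, schedule entries, and counts are invented for illustration and do not reflect SAE's configuration format:

```python
def scheduled_instances(hour, schedule, default=2):
    """Return the instance count for the given hour from a schedule of
    (start_hour, end_hour, count) entries; fall back to a default."""
    for start, end, count in schedule:
        if start <= hour < end:
            return count
    return default


# Hypothetical schedule for a business with clear morning and evening peaks.
PEAK_SCHEDULE = [
    (7, 10, 10),   # morning peak: 10 instances
    (17, 21, 12),  # evening peak: 12 instances
]
```

Outside the configured peaks, the application falls back to a small baseline count, so capacity tracks the predictable traffic curve.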

[Back to top]

Metric-based scaling

Metric-based scaling is suitable for scenarios in which traffic cannot be predicted. Supported metrics include CPU utilization, memory usage, TCP connection count, QPS (queries per second), and RT (response time). For more information, see Configure Auto Scaling policies.
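SAE does not document its exact scaling formula here. As a sketch of how metric-based scaling typically works, the example below uses the target-tracking rule popularized by the Kubernetes Horizontal Pod Autoscaler (desired = ceil(current × metric / target)), which is an assumption for illustration, not SAE's documented behavior:

```python
import math


def desired_instances(current, metric_value, target_value, min_n=1, max_n=50):
    """Target-tracking scale rule: grow or shrink the instance count so the
    per-instance metric approaches the target, clamped to [min_n, max_n]."""
    desired = math.ceil(current * metric_value / target_value)
    return max(min_n, min(max_n, desired))
```

For example, if 4 instances each see 80% CPU against a 40% target, the rule doubles the count to 8; if load halves, the count shrinks accordingly, never leaving the configured bounds.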

[Back to top]

Hybrid scaling

Hybrid scaling is suitable for scenarios in which burst traffic and periodic traffic occur at the same time, such as in the Internet, education, and catering industries. You can specify the number of instances to run during specific periods in a fine-grained manner.

For example, on weekdays you can set the maximum and minimum numbers of elastic instances. If you do not need to keep as many instances running on weekends, you can configure a lower minimum for the weekend to reduce costs. For more information, see Configure Auto Scaling policies.

[Back to top]

Scheduled start and stop

You can use the scheduled start and stop feature to start and stop all applications in a namespace at specific points in time. For example, suppose your development and test environments are used only from 08:00 to 20:00 every day and sit idle the rest of the time. You can configure scheduled start and stop in SAE to shut them down outside business hours and reduce costs. For more information, see Create a scheduled start and stop rule.
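The cost effect of the 08:00–20:00 example is simple arithmetic, sketched below. The function is invented for illustration and is not an SAE API:

```python
def is_running(hour, start=8, stop=20):
    """True if the environment should be up at the given hour (24-hour clock)."""
    return start <= hour < stop


# With a 08:00-20:00 window, the environment runs 12 of 24 hours,
# roughly halving the instance-hours billed for dev/test namespaces.
daily_hours_on = sum(1 for h in range(24) if is_running(h))
```

Running 12 of 24 hours cuts dev/test instance-hours by about half compared with keeping the environments up around the clock.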

[Back to top]

Best practices

SAE provides best practices for a variety of business requirements. The preceding sections describe settings such as elasticity, networking, storage, and access control for Alibaba Cloud databases. SAE also lets you configure images, application acceleration, and JVM parameters. For more common scenarios, see Best practices.