
Edge Security Acceleration: Get started with traffic features

Last Updated: Jun 19, 2025

Edge Security Acceleration (ESA) reduces global network latency with smart routing and ensures business continuity and high availability during high concurrency through traffic control and load balancing.

Reduce network latency

Smart routing detects network conditions in real time across Alibaba Cloud's global edge points of presence (POPs). It intelligently selects the optimal route for request transmission and combines a high-performance protocol stack with other technologies to significantly reduce latency and request failure rates. This improves user experience and helps ensure business continuity.
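Conceptually, the route-selection step amounts to choosing the candidate path with the best measured conditions. The sketch below is purely illustrative (the POP names and probe values are hypothetical, and ESA performs this selection automatically inside its network); it picks the POP whose probed round-trip latency is lowest:

```python
def pick_optimal_route(probes: dict[str, float]) -> str:
    """Return the POP whose measured round-trip latency (in ms) is lowest.

    `probes` maps a candidate POP name to its latest latency measurement;
    in a real system these values would be refreshed continuously.
    """
    return min(probes, key=probes.get)

# Hypothetical measurements for a user in Southeast Asia.
measurements = {"pop-singapore": 38.2, "pop-frankfurt": 121.5, "pop-virginia": 95.0}
print(pick_optimal_route(measurements))  # -> pop-singapore
```

In practice the decision also weighs packet loss and path stability, not latency alone, but the principle is the same: continuously measure, then steer each request onto the currently best path.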

Scenarios

Smart routing is especially valuable in scenarios with extensive service coverage, where it provides a better access experience for users in geographically distant locations.

For example, for cross-region enterprises, smart routing can significantly enhance capabilities in the following aspects:

  • Network performance: Reduces latency and packet loss, improving data transmission efficiency.

  • Availability: Ensures continuous stable operation of critical business applications.

Improve availability

By traffic control

Traffic control is essential for ensuring high availability of origin servers. It effectively mitigates the impact of burst traffic and prevents server overload. ESA provides the waiting room feature to implement traffic control.

How it works

When many users attempt to access the origin server, the waiting room feature intelligently distributes requests, limits the number of simultaneous users, and manages waiting users to maintain the server's stability:

  • During traffic peaks, the waiting room data analytics module monitors key metrics, such as peak traffic and user waiting times, in real time. This information helps identify pressure points and optimize the waiting room settings.

  • The waiting room scheduled event feature allows you to adjust queuing rules and the number of active users, expanding service capabilities based on actual business needs.

  • Additionally, waiting room bypass rules can be established to exempt specific requests (such as critical traffic or high-priority users), allowing them to access server resources during peak periods without affecting core business functions.
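The mechanics above can be pictured as a simple admission gate: a cap on concurrent active users, a FIFO queue for the overflow, and bypass rules that exempt high-priority traffic. The following toy model is an assumption-laden sketch, not ESA's implementation (the real waiting room is configured in the console, and all names here are hypothetical):

```python
from collections import deque

class WaitingRoom:
    """Toy model of waiting-room admission logic."""

    def __init__(self, max_active: int, bypass_paths: set[str]):
        self.max_active = max_active      # cap on simultaneous active users
        self.bypass_paths = bypass_paths  # paths exempted by bypass rules
        self.active = 0
        self.queue: deque[str] = deque()

    def admit(self, user_id: str, path: str) -> str:
        if path in self.bypass_paths:
            # Bypass rule: critical traffic skips the queue entirely.
            return "admitted (bypass)"
        if self.active < self.max_active:
            # Capacity available: let the user through.
            self.active += 1
            return "admitted"
        # Otherwise the user waits in first-in, first-out order.
        self.queue.append(user_id)
        return f"queued at position {len(self.queue)}"

room = WaitingRoom(max_active=1, bypass_paths={"/checkout"})
print(room.admit("u1", "/sale"))      # admitted
print(room.admit("u2", "/sale"))      # queued at position 1
print(room.admit("u3", "/checkout"))  # admitted (bypass)
```

A scheduled event in this model would simply raise `max_active` for a time window, which is what expanding service capability for a planned promotion amounts to.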

Scenarios

During e-commerce promotions, online events, or sudden traffic surges, ESA's multi-layered approach effectively manages burst traffic through traffic distribution, data analytics, dynamic adjustments, and priority control. This ensures continuity for critical business operations while safeguarding overall stability and reliability.

By dynamic traffic allocation

Load balancing is a critical network management tool that intelligently distributes incoming network or application traffic across multiple origin pools or servers. By dynamically allocating traffic, load balancing enhances system performance, reliability, and user experience, ensuring high availability.

How it works

Load balancing operates by monitoring the status and load conditions of a server cluster in real time. It selects the optimal server to process user requests based on load balancing policies, such as primary/secondary, weighted round-robin, or area-based scheduling. This approach prevents traffic concentration on a single server, reducing the risk of service interruption or performance degradation due to overload.
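Of the policies mentioned, weighted round-robin is the easiest to illustrate: each origin receives traffic in proportion to its weight. The sketch below uses a simple list-expansion approach and hypothetical origin names; it shows the policy's distribution behavior, not ESA's internal scheduler (a primary/secondary policy would instead send all traffic to the primary until it fails):

```python
import itertools

def weighted_round_robin(servers: dict[str, int]):
    """Yield origin names cyclically, in proportion to their weights.

    A weight-3 origin appears three times per cycle, a weight-1 origin once,
    so traffic splits 3:1 between them over time.
    """
    expanded = [name for name, weight in servers.items() for _ in range(weight)]
    return itertools.cycle(expanded)

rr = weighted_round_robin({"origin-a": 3, "origin-b": 1})
picks = [next(rr) for _ in range(8)]
# Over 8 requests, origin-a receives 6 and origin-b receives 2.
```

Production schedulers typically use a smoothed variant that interleaves origins rather than sending weight-sized bursts, but the resulting traffic ratio is the same.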

The following are the benefits of load balancing:

  • Improved availability and stability: Reduces single points of failure and protects against overload.

  • Enhanced performance: Optimizes resource allocation, improving server response times and processing capacity.

  • Optimized resource utilization: Ensures efficient use of server resources, minimizing waste.

  • Better user experience: Reduces latency and enhances response speed for users.

Load balancing also supports health checks, which monitor server operational status in real time and automatically isolate faulty or underperforming servers so that traffic is redirected to functioning ones. In addition, custom rules allow for fine-grained traffic management: criteria such as the client source IP or HTTP request headers can be matched to apply tailored load balancing settings, further improving service reliability and fault tolerance.
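Putting health checks and custom rules together, origin selection can be sketched as: first try any custom rule that matches the request, then fall back to the first healthy origin. This is an illustrative assumption of how such logic composes (the function, rule format, and origin names are all hypothetical, not an ESA API):

```python
def select_origin(origins: list[str],
                  healthy: dict[str, bool],
                  request_headers: dict[str, str],
                  rules: list[tuple[str, str, str]]) -> str:
    """Pick an origin for a request.

    `rules` is a list of (header_name, expected_value, target_origin):
    if the request carries that header value and the target is healthy,
    route to it. Otherwise fall back to the first healthy origin.
    """
    for header, value, target in rules:
        if request_headers.get(header) == value and healthy.get(target, False):
            return target
    for name in origins:
        if healthy.get(name, False):  # unhealthy origins are skipped
            return name
    raise RuntimeError("no healthy origin available")

origins = ["origin-a", "origin-b"]
healthy = {"origin-a": False, "origin-b": True}  # origin-a failed its health check
rules = [("X-Region", "eu", "origin-b")]
print(select_origin(origins, healthy, {}, rules))  # -> origin-b (a is isolated)
```

The key property to notice is that rule-based routing never overrides health status: a rule's target is only used while it passes its health checks.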

Scenarios

Load balancers are commonly used in high-concurrency, large-scale scenarios, including web services, cloud platforms, microservices architectures, and content delivery networks.

For organizations of all sizes, load balancing is essential for building an efficient, reliable, and scalable network architecture.