This topic describes the architecture of Classic Load Balancer (CLB). CLB instances are deployed in clusters and serve as intermediaries between clients and backend servers. This design provides several advantages: sessions are synchronized among all instances in a CLB cluster, which eliminates single points of failure (SPOFs) and improves redundancy and service stability.
CLB provides Layer 4 load balancing for Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) applications and Layer 7 load balancing for HTTP and HTTPS applications.
- Layer 4 CLB balances loads by using the open source Linux Virtual Server (LVS) software together with the Keepalived framework. Both LVS and Keepalived are adapted to meet cloud computing requirements.
- Layer 7 CLB uses Tengine to balance loads. Tengine is a web server project developed by Taobao. Built on top of NGINX, Tengine provides various advanced features that are designed for high-traffic websites.
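Although the internal Tengine configuration that CLB uses is not published, the Layer 7 forwarding model can be pictured with a minimal NGINX-style configuration sketch. The upstream name, ports, and addresses below are invented for illustration and are not values used inside CLB:

```nginx
# Hypothetical Tengine/NGINX-style Layer 7 configuration (illustrative only).
upstream backend_ecs {
    server 192.168.0.10:8080;   # backend ECS instance 1
    server 192.168.0.11:8080;   # backend ECS instance 2
}

server {
    listen 80;                           # HTTP listener port
    location / {
        proxy_pass http://backend_ecs;   # forward requests to the backend pool
    }
}
```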
In each region, Layer 4 CLB runs in an LVS cluster that consists of multiple node servers. This cluster deployment mode enhances the availability, stability, and scalability of load balancing services across a variety of scenarios. The following figure shows a typical Layer 4 CLB implementation.
Each node server in an LVS cluster uses multicast packets to synchronize sessions across the cluster. For example, as shown in the following figure, after the client sends three packets to the backend server, Session A established on LVS1 is synchronized to the other node servers. Solid lines indicate the active connections. Dashed lines indicate that requests are sent to other healthy node servers (LVS2 in this case) if LVS1 fails or is under maintenance. This allows you to perform hot upgrades, troubleshoot servers, and take clusters offline for maintenance without affecting the services provided by your applications.
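The failover behavior described above can be pictured with a simplified sketch. The class names and replication logic below are invented for illustration; real LVS clusters synchronize connection state in the kernel by using multicast packets, not application code:

```python
# Simplified illustration of session synchronization across an LVS cluster.
# All names are invented for illustration only.

class LvsNode:
    def __init__(self, name):
        self.name = name
        self.sessions = {}   # session ID -> backend server
        self.healthy = True

class LvsCluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def establish(self, session_id, backend):
        # A session established on one node is replicated to every peer,
        # which mimics multicast-based session synchronization.
        for node in self.nodes:
            node.sessions[session_id] = backend

    def forward(self, session_id):
        # Packets of an existing session can be handled by any healthy node,
        # so taking one node offline does not break the session.
        for node in self.nodes:
            if node.healthy and session_id in node.sessions:
                return node.name, node.sessions[session_id]
        raise LookupError("no healthy node holds this session")

lvs1, lvs2 = LvsNode("LVS1"), LvsNode("LVS2")
cluster = LvsCluster([lvs1, lvs2])
cluster.establish("session-a", "ecs-backend-1")

lvs1.healthy = False                  # LVS1 fails or is under maintenance
node, backend = cluster.forward("session-a")
print(node, backend)                  # LVS2 still routes the session
```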
Inbound traffic flow
CLB distributes inbound traffic based on forwarding rules configured in the console or by using API operations. The following figure shows the inbound traffic flow.
- Inbound traffic that uses TCP, UDP, HTTP, or HTTPS must first pass through the LVS cluster.
- The large volume of inbound traffic is distributed evenly among all node servers in the LVS cluster, and the node servers synchronize sessions to ensure high availability.
- If Layer 4 listeners based on UDP or TCP are used by the CLB instance, the node servers in the LVS cluster distribute requests directly to backend ECS instances based on forwarding rules configured for the CLB instance.
- If Layer 7 listeners based on HTTP are used by the CLB instance, the node servers in the LVS cluster first distribute requests to the Tengine cluster. Then, the node servers in the Tengine cluster distribute the requests to backend ECS instances based on the forwarding rules configured for the CLB instance.
- If Layer 7 listeners based on HTTPS are used by the CLB instance, requests are distributed in a similar way as with HTTP listeners. The difference is that the system calls the Key Server to validate certificates and decrypt data packets before requests are distributed to backend ECS instances.
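The three forwarding paths above can be summarized in a short sketch. The function name and path labels are illustrative and are not part of any CLB API:

```python
# Illustrative summary of how a listener's protocol selects the forwarding
# path inside CLB. Names are invented for illustration only.

def forwarding_path(listener_protocol):
    protocol = listener_protocol.upper()
    if protocol in ("TCP", "UDP"):
        # Layer 4: LVS node servers forward directly to backend ECS instances.
        return ["LVS cluster", "backend ECS"]
    if protocol == "HTTP":
        # Layer 7: LVS forwards to Tengine, which forwards to backend ECS.
        return ["LVS cluster", "Tengine cluster", "backend ECS"]
    if protocol == "HTTPS":
        # Layer 7 with TLS: the Key Server validates certificates and
        # decrypts packets before requests reach the backend ECS instances.
        return ["LVS cluster", "Tengine cluster + Key Server", "backend ECS"]
    raise ValueError(f"unsupported listener protocol: {listener_protocol}")

print(forwarding_path("udp"))    # ['LVS cluster', 'backend ECS']
print(forwarding_path("https"))
```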
Outbound traffic flow
- If backend ECS instances handle only the traffic distributed from CLB, you do not need to configure public IP addresses for the ECS instances, or purchase public bandwidth resources, elastic IP addresses (EIPs), or NAT gateways for the ECS instances.
Note: Previously created ECS instances are directly allocated public IP addresses. You can view the public IP addresses by running the ipconfig command. If these ECS instances provide external services only through CLB, no fees are incurred for Internet traffic, even if traffic statistics are read at the elastic network interfaces (ENIs).
- If you want your backend ECS instances to directly provide external services or access the Internet, you must configure or purchase public bandwidth resources, public IP addresses, EIPs, or NAT gateways for the instances.
The following figure shows the outbound traffic flow.
Outbound traffic follows a general principle: traffic goes out from where it comes in.
- You are charged for outbound traffic from CLB instances, but not for inbound traffic to CLB instances or for internal communication between CLB instances and ECS instances. However, this billing policy may change in the future. You can throttle the traffic speed of CLB instances.
- You are charged for traffic from EIPs or NAT gateways. You can throttle traffic speed on EIPs or NAT gateways. If public bandwidth resources are configured for ECS instances, you are charged for traffic from the ECS instances, and you can throttle traffic speed on the ECS instances.
- CLB supports responsive access to the Internet. Backend ECS instances can access the Internet only if they need to respond to requests from the Internet. These requests are forwarded to them by CLB instances. If your backend ECS instances need to proactively access the Internet, you must configure or purchase public bandwidth resources, EIPs, or NAT gateways for the ECS instances.
- The public bandwidth resources configured for ECS instances, EIPs, and NAT gateways allow ECS instances to access the Internet or be accessed from the Internet, but they cannot forward traffic or balance traffic loads.
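The responsive-access rule described above can be pictured with a small sketch. The function name and parameters are invented for illustration and do not correspond to any CLB setting:

```python
# Illustrative sketch of CLB's "responsive access" rule: a backend ECS
# instance without its own public connectivity can answer Internet requests
# forwarded by CLB, but cannot initiate Internet connections itself.
# All names are invented for illustration only.

def internet_access_allowed(direction, has_public_connectivity):
    """direction: 'respond' (reply to a request forwarded by CLB) or
    'initiate' (the ECS instance proactively accesses the Internet).
    has_public_connectivity: whether an EIP, NAT gateway, or public
    bandwidth resource is configured for the ECS instance."""
    if direction == "respond":
        return True   # responses travel back through the CLB instance
    if direction == "initiate":
        return has_public_connectivity   # needs EIP, NAT gateway, or bandwidth
    raise ValueError(f"unknown direction: {direction}")

print(internet_access_allowed("respond", False))   # True
print(internet_access_allowed("initiate", False))  # False
```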