
Server Load Balancer: Service architecture

Last Updated: Jan 26, 2024

Classic Load Balancer (CLB) is deployed in clusters and provides Layer 4 (TCP and UDP) and Layer 7 (HTTP and HTTPS) load balancing services. CLB synchronizes sessions and eliminates single points of failure (SPOFs) to improve redundancy and ensure service stability.

Basic architecture

CLB clusters forward client requests to backend servers and receive responses from the backend servers over the internal network.

CLB provides load balancing services at Layer 4 and Layer 7.

  • CLB implements load balancing at Layer 4 by using a dedicated Layer 4 cluster and Keepalived.

  • CLB implements load balancing at Layer 7 by using a dedicated Layer 7 cluster. Compared with traditional self-managed NGINX clusters, dedicated Layer 7 clusters support advanced features and provide optimized performance for scenarios that need to process large amounts of traffic or require SSL offloading for HTTPS traffic.

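As a rough illustration of the Layer 4 mechanism named above, the following is a minimal Keepalived sketch in which two nodes share a virtual IP through VRRP and balance TCP traffic across two backends. All addresses, interface names, and parameters are hypothetical; the actual CLB cluster configuration is not public.

```
# Hypothetical Keepalived config: VRRP failover plus an LVS virtual server.
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the peer node
    interface eth0
    virtual_router_id 51
    priority 100            # lower value on the peer node
    advert_int 1
    virtual_ipaddress {
        192.0.2.10          # the virtual IP that clients connect to
    }
}

virtual_server 192.0.2.10 80 {
    delay_loop 6
    lb_algo wrr             # weighted round robin
    lb_kind DR              # direct routing (LVS mode)
    protocol TCP

    real_server 198.51.100.1 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 198.51.100.2 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
```

If the MASTER node fails, VRRP moves the virtual IP to the BACKUP node, which is the same failover idea the cluster deployment described here provides at larger scale.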

As shown in the following figure, CLB runs in a dedicated Layer 4 cluster that consists of multiple nodes in each region. The cluster deployment mode enhances the availability, stability, and scalability of load balancing services across various scenarios.

Figure: Deployment of the dedicated Layer 4 cluster in a region

Each node in the dedicated Layer 4 cluster uses multicast packets to synchronize sessions across the cluster. As shown in the following figure, after the client sends three packets to the server, Session A established on Server 1 is synchronized to the other nodes. Solid lines indicate active connections, and dashed lines indicate that requests are sent to another healthy node (Server 2 in this case) if Server 1 fails or is under maintenance. As a result, you can perform hot upgrades, troubleshoot servers, and take clusters offline for maintenance without affecting the services provided by your applications.

Note

If a connection is not established because the three-way handshake fails, or if a connection has been established but sessions are not synchronized during a hot upgrade, your services may be interrupted. In this case, you must re-initiate a connection request from the client.

Figure: Session synchronization within the Layer 4 cluster
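The failover behavior described above can be modeled in a short sketch. This is a toy model, not CLB's implementation: the class names, session IDs, and backend names are invented, and a plain loop stands in for the real multicast synchronization.

```python
class LvsNode:
    """One node in a Layer 4 cluster, holding a local session table."""

    def __init__(self, name):
        self.name = name
        self.sessions = {}   # session id -> backend server
        self.healthy = True


class Cluster:
    """Toy model: a session established on one node is replicated to all
    peers (the real cluster does this with multicast packets)."""

    def __init__(self, nodes):
        self.nodes = nodes

    def establish(self, node, session_id, backend):
        node.sessions[session_id] = backend
        for peer in self.nodes:          # stands in for multicast sync
            peer.sessions[session_id] = backend

    def forward(self, session_id):
        # Any healthy node can continue serving a synchronized session.
        for node in self.nodes:
            if node.healthy:
                return node.name, node.sessions.get(session_id)
        raise RuntimeError("no healthy node available")


s1, s2 = LvsNode("Server 1"), LvsNode("Server 2")
cluster = Cluster([s1, s2])
cluster.establish(s1, "session-a", "ecs-backend-1")
s1.healthy = False                       # Server 1 goes down for maintenance
node, backend = cluster.forward("session-a")
print(node, backend)                     # -> Server 2 ecs-backend-1
```

Because the session table was replicated before Server 1 failed, Server 2 can keep forwarding Session A without the client noticing, which is the property the figure illustrates.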

Inbound traffic flow

CLB distributes inbound traffic based on forwarding rules configured in the console or by using API operations. The following figure shows the inbound traffic flow.

Figure 1. Inbound traffic flow
  1. Inbound traffic that uses TCP, UDP, HTTP, or HTTPS is first forwarded through the Linux Virtual Server (LVS) cluster.
  2. Inbound traffic is distributed evenly among all node servers in the LVS cluster, and the node servers synchronize sessions to ensure high availability.
    • If Layer 4 listeners based on UDP or TCP are used by the CLB instance, the node servers in the LVS cluster distribute requests directly to backend ECS instances based on forwarding rules configured for the CLB instance.
    • If Layer 7 listeners based on HTTP are used by the CLB instance, the node servers in the LVS cluster first distribute requests to the Tengine cluster. Then, the node servers in the Tengine cluster distribute the requests to backend ECS instances based on the forwarding rules configured for the CLB instance.
    • If Layer 7 listeners based on HTTPS are used by the CLB instance, requests are distributed in a similar way to how requests are distributed by a CLB instance that uses listeners based on HTTP. The difference is that the system calls the Key Server to validate certificates and decrypt data packets before requests are distributed to backend ECS instances.
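The three routing cases above can be condensed into a small sketch. The component names follow the figure; the function itself is illustrative and is not part of any CLB API.

```python
def inbound_path(protocol):
    """Return the components a request traverses for a given listener
    protocol, per the CLB inbound traffic flow. Illustrative only."""
    path = ["LVS cluster"]                    # all inbound traffic enters here
    if protocol in ("TCP", "UDP"):
        path.append("backend ECS")            # Layer 4: straight to backends
    elif protocol == "HTTP":
        path += ["Tengine cluster", "backend ECS"]
    elif protocol == "HTTPS":
        # Same as HTTP, except the Key Server is called to validate
        # certificates and decrypt packets before backends are reached.
        path += ["Tengine cluster", "Key Server", "backend ECS"]
    else:
        raise ValueError(f"unsupported protocol: {protocol}")
    return path


print(inbound_path("HTTPS"))
# -> ['LVS cluster', 'Tengine cluster', 'Key Server', 'backend ECS']
```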

Outbound traffic flow

CLB and backend ECS instances communicate over the internal network.
  • If backend ECS instances handle only the traffic distributed from CLB, you do not need to configure public IP addresses for the ECS instances, or purchase public bandwidth resources, elastic IP addresses (EIPs), or NAT gateways for the ECS instances.
    Note ECS instances created earlier were directly assigned public IP addresses, which you can view by running the ipconfig command (Windows) or the ifconfig command (Linux). If these ECS instances provide external services only through CLB, no fees are incurred for Internet traffic, even though traffic statistics are still collected at their elastic network interfaces (ENIs).
  • If you want your backend ECS instances to directly provide external services or access the Internet, you must configure or purchase public bandwidth resources, public IP addresses, EIPs, or NAT gateways for the instances.

The following figure shows the outbound traffic flow.

Figure 2. Outbound traffic flow

A general principle for outbound traffic is that it goes out through the same path from which the corresponding inbound traffic arrived.

  • You are charged for outbound traffic from CLB instances, but not for inbound traffic to CLB instances and internal communication between CLB instances and ECS instances. However, this may change in the future. You can throttle traffic speed on CLB instances.
  • You are charged for traffic from EIPs or NAT gateways. You can throttle traffic speed on EIPs or NAT gateways. If public bandwidth resources are configured for ECS instances, you are charged for traffic from the ECS instances, and you can throttle traffic speed on the ECS instances.
  • CLB supports responsive access to the Internet. Backend ECS instances can access the Internet only if they need to respond to requests from the Internet. These requests are forwarded to them by CLB instances. If your backend ECS instances need to proactively access the Internet, you must configure or purchase public bandwidth resources, EIPs, or NAT gateways for the ECS instances.
  • The public bandwidth resources configured for ECS instances, EIPs, and NAT gateways allow ECS instances to access the Internet or be accessed from the Internet, but they cannot forward traffic or balance traffic loads.
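The billing and throttling rules above follow directly from the "traffic goes out from where it comes in" principle, which a small sketch can summarize. The scenario names are invented for illustration and are not CLB concepts.

```python
def outbound_exit(scenario):
    """Map an outbound-traffic scenario to the component the traffic exits
    through, which is also where it is billed and can be throttled.
    Scenario names are illustrative, not CLB API terms."""
    exits = {
        # Response to a request forwarded by CLB: leaves through CLB.
        "respond_via_clb": "CLB instance",
        # ECS proactively reaches the Internet through an EIP or NAT gateway.
        "proactive_via_eip_or_nat": "EIP or NAT gateway",
        # ECS with its own public bandwidth serves Internet clients directly.
        "direct_via_public_bandwidth": "ECS instance",
    }
    try:
        return exits[scenario]
    except KeyError:
        raise ValueError(f"unknown scenario: {scenario}") from None


print(outbound_exit("respond_via_clb"))   # -> CLB instance
```

Note that there is no scenario in which a backend without public bandwidth, an EIP, or a NAT gateway originates Internet traffic on its own: such an instance can only respond through CLB.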