PolarDB supports the Load Balance Based on Connections and Active Request-based Load Balancing policies to balance loads among multiple read-only nodes.

Load balancing policies

Note A PolarDB cluster endpoint in Read Only mode supports two load balancing policies: Load Balance Based on Connections and Active Request-based Load Balancing. A cluster endpoint in Read and Write (Automatic Read-write Splitting) mode supports only the Active Request-based Load Balancing policy.
Connections-based load balancing
  Differences:
  • Each connection from an application is established with only one read-only node behind the cluster endpoint. The total number of connections that an application can establish is the sum of the maximum numbers of connections allowed by all read-only nodes behind the cluster endpoint.
  • Advanced features such as consistency levels, transaction splitting, persistent connections, and automatic request distribution between row store and column store nodes are not supported.
  • Performance is high because the policy is applied only when a connection is established. No per-request load balancing is performed.
Active request-based load balancing
  Differences:
  • Each connection from an application is established with all read-only nodes behind the cluster endpoint. The total number of connections that an application can establish is the minimum value among the maximum numbers of connections allowed by the read-only nodes behind the cluster endpoint.
  • Advanced features such as consistency levels, transaction splitting, persistent connections, and automatic request distribution between row store and column store nodes are supported.
  • Performance is lower because each request must be parsed and a routing decision must be made for it.
Similarity: For a cluster endpoint in Read Only mode, no read requests are forwarded to the primary node, regardless of which policy is selected.
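The difference between the two policies can be illustrated with a minimal sketch. The function and node names below are hypothetical and only show the contrast: connections-based balancing chooses a node once per connection, while active request-based balancing chooses a node per request based on in-flight request counts.

```python
import random

def route_connection_based(nodes, rng=random):
    # Connections-based: a node is chosen once, when the connection is
    # established; every subsequent statement on that connection goes there.
    return rng.choice(nodes)

def route_active_request_based(nodes, active_requests):
    # Active request-based: for each request, pick the node that
    # currently has the fewest in-flight requests.
    return min(nodes, key=lambda n: active_requests[n])

nodes = ["ro-1", "ro-2", "ro-3"]
active = {"ro-1": 5, "ro-2": 2, "ro-3": 7}
print(route_active_request_based(nodes, active))  # ro-2
```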

Primary node accepts read requests

After you set Primary Node Accepts Read Requests to No, common read requests are no longer forwarded to the primary node. In a transaction, read requests that require high consistency are still forwarded to the primary node to meet business requirements. If all read-only nodes fail, read requests are forwarded to the primary node. If your workloads do not require high consistency, you can set the consistency level to eventual consistency to reduce the number of read requests that are forwarded to the primary node. You can also use the transaction splitting feature to reduce the number of read requests that are forwarded to the primary node before a transaction is started. However, broadcast requests such as SET and PREPARE requests are forwarded to the primary node.
Note
  • The Primary Node Accepts Read Requests parameter is available only if the Read/Write parameter is set to Read and Write (Automatic Read-write Splitting). The Primary Node Accepts Read Requests feature is disabled by default. For information about how to modify Primary Node Accepts Read Requests settings, see Configure PolarProxy.
  • If your PolarProxy version is 1.x.x, or 2.5.1 or later, the new Primary Node Accepts Read Requests value takes effect immediately.
  • If your PolarProxy version is 2.x.x but earlier than 2.5.1 and a persistent connection is used, you must re-establish the connection for the new Primary Node Accepts Read Requests value to take effect. If a short-lived connection is used, the new value takes effect immediately.
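The routing rules described above can be summarized in a small decision sketch. This is an illustrative simplification, not PolarProxy's actual implementation; the function name and parameters are hypothetical.

```python
def route_read(is_broadcast, needs_strong_consistency, readonly_available):
    # Hypothetical decision sketch when Primary Node Accepts Read
    # Requests is set to No. Broadcast requests (such as SET and PREPARE)
    # and reads that require high consistency still go to the primary
    # node; so do all reads if every read-only node has failed.
    if is_broadcast or needs_strong_consistency or not readonly_available:
        return "primary"
    return "read-only"

print(route_read(False, False, True))   # read-only
print(route_read(True, False, True))    # primary
print(route_read(False, False, False))  # primary
```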

Transaction splitting

If the cluster endpoint that is used to connect to the PolarDB cluster is in read/write mode, PolarProxy forwards read and write requests to the primary node and read-only nodes. To ensure data consistency within a session, PolarProxy sends all requests in the transactions of the session to the primary node. For example, database client drivers such as Java Database Connectivity (JDBC) drivers encapsulate requests in transactions. In this case, all requests from applications are sent to the primary node, which results in heavy loads on the primary node, while no requests are sent to the read-only nodes. The following figure shows the process.
Figure: Without transaction splitting
To fix this issue, PolarDB provides the transaction splitting feature. This feature ensures data consistency in a session and allows PolarDB to send read requests to read-only nodes to reduce the loads on the primary node. You can reduce the read loads on the primary node without the need to modify the code or configuration of your application. This way, the stability of the primary node is improved.
Note Only transactions in the sessions that are at the Read Committed isolation level can be split.

To reduce the load on the primary node, PolarProxy sends read requests that are received before the first write request in a transaction to read-only nodes. Uncommitted data in transactions cannot be queried from read-only nodes. Therefore, to ensure data consistency within a transaction, all read and write requests that are received after the first write request are still forwarded to the primary node. For more information about how to enable transaction splitting, see Configure PolarProxy.
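The splitting rule above can be sketched as follows. This is a simplified illustration, not PolarProxy's actual parser; it treats any statement that does not start with SELECT as a write, which is an assumption made only for this example.

```python
def split_transaction(statements):
    # Route statements within one transaction: reads that arrive before
    # the first write go to a read-only node; once a write is seen, all
    # subsequent statements (reads and writes) go to the primary node.
    routes, seen_write = [], False
    for stmt in statements:
        # Simplified classification: anything that is not a SELECT
        # counts as a write (assumption for illustration only).
        if not stmt.lstrip().upper().startswith("SELECT"):
            seen_write = True
        routes.append("primary" if seen_write else "read-only")
    return routes

txn = ["SELECT 1", "SELECT 2", "UPDATE t SET x = 1", "SELECT 3"]
print(split_transaction(txn))
# ['read-only', 'read-only', 'primary', 'primary']
```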

Weight-based load balancing

By default, PolarDB for MySQL PolarProxy routes each request to the node that has the fewest concurrent requests. This policy distributes traffic across backend nodes in a roughly balanced manner. However, business loads vary, and different customers have different requirements for traffic distribution.

To meet these requirements, PolarDB for MySQL provides the weight-based load balancing feature. You can configure different weights for nodes. Both the weight and the number of concurrent requests are then taken into account in the final routing decision.

Precautions
  • To use this feature, PolarProxy must be 2.8.3 or later. The feature is supported by clusters of PolarDB for MySQL 5.6, 5.7, and 8.0.
  • Because both the current node load and the custom weights are taken into account, the actual traffic ratio may slightly differ from the specified ratio. However, the actual ratio gradually converges to the specified ratio.

How it works

All backend nodes have the same initial weight. In the Database Proxy Enterprise universal version section of the Basic Information page, click Database proxy service Configuration. In the dialog box that appears, configure a weight for each node based on your business requirements. The weight ranges from 0 to 100. If the weight of a node is set to 0, no requests are routed to the node as long as any other node is available.

The final weight of each node is dynamically adjusted based on the weight you specify and the number of concurrent requests on each node. A simplified formula can be used:

Dynamic weight = Custom weight/Number of concurrent requests

A node with a higher dynamic weight is more likely to be selected to serve requests. The weight-based load balancing policy provides a flexible routing method. In practice, business traffic gradually converges to the distribution implied by the weights you specify. This convergence takes more time than a pure weight-based polling method.
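The formula above can be sketched as a simple selection function. This is an illustrative simplification under stated assumptions: the function name is hypothetical, and the `+ 1` in the denominator is added only to avoid division by zero on an idle node; the actual PolarProxy implementation is not documented here.

```python
def pick_node(custom_weights, concurrent_requests):
    # Dynamic weight = custom weight / number of concurrent requests.
    # A node with weight 0 is skipped as long as any other node is
    # available (fall back to all nodes if every weight is 0).
    candidates = {n: w for n, w in custom_weights.items() if w > 0} or custom_weights

    def dynamic_weight(node):
        # + 1 avoids division by zero on an idle node (assumption).
        return candidates[node] / (concurrent_requests.get(node, 0) + 1)

    # Pick the node with the highest dynamic weight.
    return max(candidates, key=dynamic_weight)

weights = {"primary": 1, "ro-1": 2, "ro-2": 3}
load = {"primary": 3, "ro-1": 1, "ro-2": 1}
print(pick_node(weights, load))  # ro-2
```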

Data for testing

If the weight ratio of the three nodes is set to 1:2:3 (1 for the primary node), the test results match expectations. The sysbench oltp_read_only test set is used.

Note The pi-bp1d1mtcobuzv**** and pcbp14vvpolardbma23957**** internal nodes are not involved in routing requests. Their metrics can be skipped.