PolarDB for MySQL clusters provide read/write splitting. This feature enables PolarDB clusters to distribute read and write requests from applications through cluster endpoints. The built-in proxy of a PolarDB cluster forwards write requests to the primary node and forwards read requests to the primary node or read-only nodes based on node loads. The load on a node is measured by the number of requests that are pending on the node.
- Read consistency
PolarDB uses a proxy to implement read/write splitting. The proxy tracks the log sequence number (LSN) of the redo log that has been replayed on each node. Each time data on the primary node is updated, PolarDB records the update checkpoint as the session LSN. When a new read request arrives, the proxy compares the session LSN with the replayed LSN on each node and forwards the request to a node whose replayed LSN is equal to or greater than the session LSN. This ensures session consistency. This approach may slightly increase the load on the primary node, but the increase is negligible because PolarDB performs physical replication at a high rate: updates are replicated to the read-only nodes while the update result is being returned to the client, so the read-only nodes are highly likely to catch up before the next read request arrives. In most application scenarios, PolarDB processes far more read requests than write requests. In practice, this approach delivers session consistency, read/write splitting, and load balancing at the same time.
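The LSN comparison described above can be sketched as follows. This is a minimal illustration, not PolarDB code; the node names, LSN values, and the `pick_node` helper are all hypothetical.

```python
# Hypothetical sketch of session-consistency routing: pick a node whose
# replayed redo-log LSN has caught up to the session LSN.

def pick_node(session_lsn: int, nodes: dict, primary: str) -> str:
    """Return a read-only node whose replayed LSN is >= the session LSN;
    fall back to the primary node, which is always up to date."""
    candidates = [name for name, lsn in nodes.items()
                  if name != primary and lsn >= session_lsn]
    # Prefer a caught-up read-only node; otherwise read from the primary.
    return candidates[0] if candidates else primary

# After a write, the session LSN is the primary node's update checkpoint.
nodes = {"primary": 120, "ro1": 118, "ro2": 121}
print(pick_node(120, nodes, "primary"))  # ro2 has replayed past LSN 120
print(pick_node(122, nodes, "primary"))  # no replica caught up -> primary
```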
- Built-in proxy for read/write splitting
You can build your own proxy on the cloud to achieve read/write splitting. However, high latency may occur because requests are parsed and forwarded by multiple components before they reach the database. PolarDB uses a built-in proxy that is deployed in the existing secure links to achieve read/write splitting, so requests do not need to pass through extra components. This reduces latency and speeds up request processing.
- O&M efficiency
For traditional database services, read/write splitting can be time-consuming. You must specify the endpoints of the primary node and each read-only node in applications. You also need to configure the forwarding logic to distribute write requests to the primary node and read requests to read-only nodes.
PolarDB provides an optimal solution by using cluster endpoints. After applications connect to the cluster endpoints, write requests are automatically forwarded to the primary node, and read requests are automatically forwarded to read-only nodes. You can view the forwarding results in the console. This reduces maintenance costs.
You need only to add read-only nodes to improve the processing capabilities of clusters, and do not need to modify your applications.
- Health checks for nodes
The read/write splitting module of PolarDB automatically performs health checks on all the nodes in a cluster. If a node fails or its latency exceeds a specified threshold, PolarDB stops distributing read requests to this node, and distributes write and read requests to other healthy nodes. This ensures that applications can access the cluster even if a single read-only node fails. After the node recovers, PolarDB automatically adds the node into the list of nodes that are available to receive requests.
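The health-check behavior described above can be sketched as a simple filter. The threshold value and the status map below are assumptions for illustration, not PolarDB's actual implementation.

```python
# Minimal sketch of node health checks: exclude failed nodes and nodes
# whose replication latency exceeds a (hypothetical) threshold.

LATENCY_THRESHOLD_MS = 1000  # assumed configurable threshold

def healthy_nodes(nodes):
    """Keep nodes that are alive and within the latency threshold.
    `nodes` maps node name -> (alive, replication_latency_ms)."""
    return [name for name, (alive, latency) in nodes.items()
            if alive and latency <= LATENCY_THRESHOLD_MS]

status = {
    "ro1": (True, 200),
    "ro2": (False, 0),      # failed node: removed from routing
    "ro3": (True, 5000),    # latency over threshold: removed as well
}
print(healthy_nodes(status))  # ['ro1']
```

When a removed node recovers (alive again, latency back under the threshold), it reappears in the result and can receive requests again.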
- Free of charge
The read/write splitting feature is available for free.
- Forwarding logic in read/write splitting mode
- The following requests are forwarded to only the primary node:
- All data manipulation language (DML) operations, such as INSERT, UPDATE, DELETE, and SELECT ... FOR UPDATE
- All data definition language (DDL) operations, such as creating or dropping databases or tables and changing table schemas or permissions
- All the requests in transactions
- Queries that use user-defined functions
- Queries by using stored procedures
- EXECUTE statements
- Requests that involve temporary tables
- SELECT last_insert_id()
- All the requests to query or modify user environment variables
- SHOW PROCESSLIST statements
- KILL statements in SQL (not the kill command in Linux)
- The following requests are forwarded to the primary node or read-only nodes:
- Non-transactional read requests
- COM_STMT_EXECUTE commands
- The following requests are forwarded to all the nodes:
- All the requests to change system variables
- USE statements
- COM_STMT_PREPARE commands
- COM_CHANGE_USER, COM_QUIT, COM_SET_OPTION, and other such commands
- Forwarding logic in read-only mode
- DDL and DML operations are not supported.
- Requests are forwarded to read-only nodes in load balancing mode.
- Requests are not forwarded to the primary node.
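The forwarding rules for read/write splitting mode can be sketched as a toy statement classifier. Real SQL routing in the proxy is far more involved; this sketch only pattern-matches a few statement prefixes for illustration, and the prefix list is an assumption drawn from the rules above.

```python
# Rough sketch of the read/write splitting forwarding rules above.
# Illustrative only: a real proxy parses SQL instead of matching prefixes.

PRIMARY_ONLY_PREFIXES = (
    "insert", "update", "delete", "create", "drop", "alter",
    "begin", "start transaction", "kill", "show processlist", "execute",
)

def route(sql: str) -> str:
    s = sql.strip().lower()
    if s.startswith(PRIMARY_ONLY_PREFIXES) or "last_insert_id()" in s:
        return "primary"                 # forwarded to only the primary node
    if s.startswith(("set", "use")):
        return "all"                     # broadcast to all nodes
    return "primary-or-readonly"         # plain reads are load balanced

print(route("UPDATE t1 SET v = 1"))      # primary
print(route("SELECT * FROM t1"))         # primary-or-readonly
print(route("USE mydb"))                 # all
```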
- If you execute multi-statements or call stored procedures, all subsequent requests on the current connection are routed to the primary node. For more information about multi-statements, see Multi Statements. To use the read/write splitting feature again, close the current connection and establish a new one.
- Do not change environment variables before you call stored procedures or within multi-statements, such as `set names utf8mb4;select * from t1;`. Otherwise, the session states of the primary node and read-only nodes may become inconsistent.
- In read-only mode, you cannot use cluster endpoints to configure environment variables. For example, if you execute `set @endtime=now(); select * from tab where dt < @endtime;`, the variable may not be set as expected and the query may not return the intended result.
- When you use views, session consistency cannot be ensured. For example, if you execute `CREATE VIEW tab2 AS SELECT * FROM tab1; INSERT INTO tab1(key, value) VALUES (1, 1); SELECT * FROM tab2 WHERE key=1;`, the query may time out before the result is returned.
- `SHOW FULL PROCESSLIST;` statements are not supported.
PolarDB has the following read/write splitting features:
- Load balancing
PolarDB supports an automatic load-based scheduling policy. Read requests are automatically forwarded to read-only nodes based on the number of active connections. This ensures load balancing among read-only nodes.
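A minimal sketch of load balancing by active connections, assuming a hypothetical `least_loaded` helper and illustrative connection counts:

```python
# Sketch of load-based scheduling: route the next read request to the
# read-only node with the fewest active connections. Illustrative only.

def least_loaded(active_connections: dict) -> str:
    """Pick the read-only node with the fewest active connections."""
    return min(active_connections, key=active_connections.get)

conns = {"ro1": 12, "ro2": 4, "ro3": 9}
print(least_loaded(conns))  # ro2
```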
- Consistency levels
PolarDB implements asynchronous replication to replicate the updates from the primary node to read-only nodes. In read/write splitting mode, a read request that follows a write request may fail to obtain the latest write result, which causes data inconsistency. PolarDB provides the eventual consistency, session consistency, and global consistency options. For more information, see Consistency levels.
- Transaction splitting
At the default Read Committed isolation level, PolarDB does not start a transaction immediately after it receives a transaction statement. For example, after you execute `SET AUTOCOMMIT=0`, the transaction is started only when a write operation is performed.
In most cases, PolarProxy sends all requests in a transaction to the primary node. However, some frameworks encapsulate requests in transactions, which places heavy loads on the primary node. To resolve this issue, you can enable the transaction splitting feature. This feature allows PolarProxy to identify the current transaction status and distribute read requests to read-only nodes through the load balancing module before the transaction is started.
Note:
- By default, the transaction splitting feature is enabled in read/write splitting mode. For more information about how to change the transaction splitting status, see Modify a cluster endpoint.
- The transaction splitting feature is not suitable for services that require high consistency. Therefore, before you enable transaction splitting, make sure that you are fully aware of the impacts of transaction splitting on your services.
- The configuration takes effect only on connections that are created after the configuration is changed.
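The transaction-splitting behavior described above can be sketched as a small per-connection state machine: reads are still load balanced until the first write of the transaction, after which the connection sticks to the primary node. This mirrors only the description; it is not PolarProxy code, and the class and prefix checks are illustrative assumptions.

```python
# Sketch of transaction splitting: before the first write, reads can go
# to read-only nodes; from the first write on, the transaction is pinned
# to the primary node. Illustrative only.

class Connection:
    def __init__(self):
        self.in_write_transaction = False

    def route(self, sql: str) -> str:
        s = sql.strip().lower()
        is_write = s.startswith(("insert", "update", "delete"))
        if is_write:
            self.in_write_transaction = True
        # Reads before the first write are still load balanced.
        if is_write or self.in_write_transaction:
            return "primary"
        return "readonly"

conn = Connection()
print(conn.route("SELECT * FROM t1"))     # readonly (before first write)
print(conn.route("UPDATE t1 SET v = 1"))  # primary (transaction starts)
print(conn.route("SELECT * FROM t1"))     # primary (transaction is pinned)
```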
- Offload Reads from Primary Node
If you set the Offload Reads from Primary Node parameter to On, normal read requests are no longer routed to the primary node. Read requests that require consistency, such as reads in transactions, are still routed to the primary node. In addition, if all read-only nodes fail, read requests are routed to the primary node. If your workloads do not require high consistency, you can set Consistency Level to Eventual Consistency to reduce the read requests that are routed to the primary node. You can also set the Transaction Splitting parameter to On to reduce the read requests that are routed to the primary node before a transaction is started. Broadcast requests, such as SET and PREPARE statements, are still routed to the primary node.
Note:
- By default, the Offload Reads from Primary Node parameter is set to On in read/write splitting mode. For more information about how to configure this parameter, see Modify a cluster endpoint.
- The configuration takes effect immediately after you change the parameter.
- Hints
You can add the hint `/*FORCE_MASTER*/` or `/*FORCE_SLAVE*/` as a prefix to an SQL statement to specify the routing destination of the statement. For example, `SELECT * FROM test` is routed to a read-only node. If the statement is changed to `/*FORCE_MASTER*/ SELECT * FROM test`, it is routed to the primary node.
Note:
- If you execute statements that contain the preceding hints in the official MySQL command-line client, add the -c option. Otherwise, the client strips comments from the statements and the hints become invalid. For more information, see mysql Client Options.
- Hints have the highest routing priority and are not constrained by consistency levels or transaction splitting. Evaluate the routing impact of a hint before you execute the statement.
- The hints cannot contain the settings that are used to change environment variables.
For example, if you use the `/*FORCE_SLAVE*/ set names utf8;` hint, the query result may be unexpected.
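Hint-based routing can be sketched as a small parser. The hint names come from the text above; the regular expression and the `hint_route` helper are illustrative assumptions, not the proxy's actual parsing logic.

```python
# Sketch of hint-based routing: detect a /*FORCE_MASTER*/ or
# /*FORCE_SLAVE*/ prefix and force the destination accordingly.
import re

def hint_route(sql: str):
    """Return the forced destination, or None if the statement has no hint."""
    m = re.match(r"\s*/\*\s*(FORCE_MASTER|FORCE_SLAVE)\s*\*/", sql)
    if not m:
        return None  # fall back to normal read/write splitting
    return "primary" if m.group(1) == "FORCE_MASTER" else "readonly"

print(hint_route("/*FORCE_MASTER*/ SELECT * FROM test"))   # primary
print(hint_route("/* FORCE_SLAVE */ SELECT * FROM test"))  # readonly
print(hint_route("SELECT * FROM test"))                    # None
```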