PolarDB-O supports the read/write splitting feature. This feature allows a PolarDB-O cluster to distribute read and write requests from applications through a single cluster endpoint. The built-in proxy of each cluster forwards write requests to the primary node, and forwards read requests to the primary node or read-only nodes based on the load of each node. The load on a node is measured by the number of requests that the node is currently processing.
- Easy maintenance based on a unified endpoint
If you do not use a cluster endpoint whose read/write mode is Read and Write (Automatic Read-write Splitting), you must configure the endpoints of the primary node and each read-only node separately in your application so that write requests reach the primary node and read requests reach read-only nodes. If you connect your application to a cluster endpoint whose read/write mode is Read and Write (Automatic Read-write Splitting), the endpoint automatically forwards read and write requests to the appropriate nodes. This reduces maintenance costs: to improve the processing capability of a cluster, you only need to add read-only nodes, without modifying your application.
- Session-level read consistency
When a client connects to the backend through the cluster endpoint, the built-in proxy for read/write splitting automatically establishes a connection to the primary node and to each read-only node. In the same session, the proxy first selects an appropriate node based on the data synchronization progress of each database node, and then forwards read and write requests to nodes whose data is up-to-date. This ensures read consistency within the session while balancing requests among the nodes.
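As a minimal sketch of this behavior, consider a read that immediately follows a write in the same session (the table name below is hypothetical):

```sql
-- Both statements are sent through the same cluster endpoint connection.
CREATE TABLE t_orders (id integer, note text);
INSERT INTO t_orders VALUES (1, 'first');
-- The proxy routes this query only to a node that has already replayed
-- the INSERT above, so the new row is always visible in this session:
SELECT * FROM t_orders WHERE id = 1;
```

A different session that connects at the same moment has no such guarantee and may briefly be served by a read-only node that has not yet replayed the INSERT.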
- Even distribution of PREPARE statements
The PREPARE statements that contain write operations, together with the related EXECUTE statements, are sent only to the primary node. The PREPARE statements that contain read-only operations are broadcast to all nodes, and the related EXECUTE statements are routed based on the load on each node. This achieves load balancing for query requests.
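The routing rules above can be sketched as follows; the statement names and the table are hypothetical:

```sql
-- Read-only: the PREPARE is broadcast to all nodes, so each EXECUTE
-- can be routed to whichever node currently has the lowest load.
PREPARE get_order (integer) AS
    SELECT * FROM t_orders WHERE id = $1;
EXECUTE get_order(1);

-- Contains a write: the PREPARE and all related EXECUTE statements
-- are sent to the primary node only.
PREPARE add_order (integer, text) AS
    INSERT INTO t_orders VALUES ($1, $2);
EXECUTE add_order(2, 'second');
```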
- Support for native high-security links and improved performance
You can build your own proxy in the cloud to implement read/write splitting. However, this may introduce excessive latency because data is parsed and forwarded by multiple components before it reaches the database. PolarDB instead uses a built-in proxy that runs as a cluster component, which provides lower latency and higher data processing speed than an external proxy.
- Node health checks to enhance database availability
The read/write splitting module of PolarDB performs health checks on the primary node and read-only nodes of a cluster. If a node fails or its latency exceeds a specified threshold, PolarDB stops distributing read requests to the node and redistributes read and write requests to the remaining healthy nodes. This ensures that applications can still access the cluster even if a read-only node fails. After the node recovers, PolarDB automatically adds it back into the list of nodes that are available to receive requests.
- PolarDB does not support the following statements or features:
- Connecting to a cluster in replication mode. If you need to set up a dual-node cluster based on a primary/secondary replication architecture, use the endpoint of the primary node.
- Using the name of a temporary table to declare the %ROWTYPE attribute.

```sql
create temp table fullname (first text, last text);
select '(Joe,von Blow)'::fullname, '(Joe,d''Blow)'::fullname;
```
- Create temporary resources by using functions.
- If you create a temporary table inside a function and then execute an SQL statement that queries the temporary table, an error message indicating that the table does not exist may be returned.
- If your function contains a PREPARE statement, an error message indicating that the prepared statement does not exist may be returned when you run the related EXECUTE statement.
- Routing-related restrictions:
- Requests in a transaction are routed to the primary node, and load balancing resumes after the transaction ends.
- All statements that use functions, except aggregate functions such as COUNT() and SUM(), are routed to the primary node.
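The restrictions above can be illustrated with a short sketch; the table and function names are hypothetical:

```sql
-- 1. A temporary table created inside a function may not be visible to a
--    later query, because the two statements can run on different nodes:
CREATE FUNCTION make_tmp() RETURNS void AS $$
BEGIN
    CREATE TEMP TABLE tmp_data (id integer);
END;
$$ LANGUAGE plpgsql;

SELECT make_tmp();       -- may run on one node
SELECT * FROM tmp_data;  -- may be routed to another node:
                         -- ERROR: relation "tmp_data" does not exist

-- 2. Statements inside an explicit transaction all go to the primary node;
--    load balancing resumes after the transaction ends:
BEGIN;
SELECT * FROM t_orders;                       -- primary node
UPDATE t_orders SET note = 'x' WHERE id = 1;  -- primary node
COMMIT;

-- 3. Aggregate functions such as COUNT() and SUM() can still be
--    load-balanced, but other function calls go to the primary node
--    because they might write data:
SELECT COUNT(*) FROM t_orders;  -- eligible for read-only nodes
SELECT touch_order(1);          -- hypothetical function; primary node
```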
Create or modify a cluster endpoint
- For more information about how to create a custom cluster endpoint, see Create a custom cluster endpoint for a PolarDB-O cluster.
- For more information about how to modify a cluster endpoint, see Modify a cluster endpoint.
Configure transaction splitting
At the default Read Committed isolation level, PolarDB does not start a transaction immediately after it receives a transactional statement such as BEGIN or SET AUTOCOMMIT=0. The transaction is actually started only when the first write operation occurs.
By default, a PolarDB cluster sends all requests in the same transaction to the primary node to ensure transactional correctness. However, some frameworks encapsulate all requests in transactions, which places a heavy load on the primary node. To fix this issue, you can enable the transaction splitting feature. With this feature, PolarDB identifies the current transaction status and uses the load balancing module to distribute read requests to read-only nodes before the transaction is actually started.
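With transaction splitting enabled, the split point is the first write in the transaction; a hypothetical example:

```sql
BEGIN;
SELECT * FROM t_orders;        -- no write has occurred yet, so this read
                               -- can be distributed to a read-only node
UPDATE t_orders SET note = 'y' WHERE id = 1;
SELECT * FROM t_orders;        -- after the first write, all remaining
                               -- statements are sent to the primary node
COMMIT;
```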
To enable transaction splitting, perform the following steps:
- Log on to the PolarDB console.
- On the top of the page, select the region where the target cluster is located.
- Find the target cluster and click the cluster ID to go to the Overview page.
- In the Endpoints section, find the cluster endpoint that you want to modify, and on the right of the cluster endpoint, choose the option that opens the endpoint settings.
- In the Configure Nodes dialog box, enable Transaction Splitting.
Note The configuration takes effect only on connections that are established after you enable this feature. Connections that were established before you enable this feature must be re-established for the configuration to take effect.
- Click OK.
Specify a consistency level
For more information, see PolarDB-O consistency levels.
- Why am I unable to retrieve a record immediately after I insert the record?
In a read/write splitting architecture, a replication delay may occur when data is replicated from the primary node to read-only nodes. However, PolarDB supports session consistency, which ensures that read requests in a session are routed only to nodes that have already applied the updates made in that session. Therefore, you can retrieve the inserted record within the same session; in other sessions, the record becomes visible after the data replication is completed.
- Why do read-only nodes have no workloads?
By default, requests in transactions are routed only to the primary node. If you use sysbench for stress testing, you can add --oltp-skip-trx=on (sysbench 0.5) or --skip-trx=on (sysbench 1.0) to your command to skip the BEGIN and COMMIT statements. If a large number of transactions cause excessively low workloads on read-only nodes, you can submit a ticket to enable the transaction splitting feature.
- Why does a node receive more requests than other nodes?
Requests are distributed to nodes based on their workloads. A node with a lower workload receives more requests.
- Does PolarDB support zero-latency data access?
No. Even when the primary node and read-only nodes run under normal workloads, the replication delay between them is at the millisecond level, so cluster endpoints whose read/write mode is Read and Write (Automatic Read-write Splitting) cannot provide zero-latency data access. If you require zero-latency data access, connect your application to the primary endpoint so that all read and write requests are sent to the primary node.
- Are new read-only nodes automatically available to receive read requests?
Yes. Read/write splitting connections that are created after a read-only node is added forward requests to the new node. Connections that were created before the node was added do not. To use the new node on those connections, close them and establish them again, for example by restarting your application.