Without read/write splitting, you must configure separate connection strings for the primary node and each read-only node in your application. As you add read-only nodes to scale read capacity, your application code grows more complex. PolarDB for PostgreSQL (Compatible with Oracle) solves this with a built-in proxy that exposes a single cluster endpoint: write requests go to the primary node, and read requests are distributed across available nodes based on current load—with no application changes required.
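The difference shows up directly in connection configuration. A minimal sketch (the hostnames are placeholders invented for illustration, not real endpoints):

```python
# Without read/write splitting: the application must track one connection
# string per node and decide itself which one to use for each statement.
primary_dsn = "host=pc-primary.example.com dbname=app"
replica_dsns = [
    "host=pc-ro1.example.com dbname=app",
    "host=pc-ro2.example.com dbname=app",
]

# With read/write splitting: one cluster endpoint handles all traffic, so
# adding or removing read-only nodes never touches application configuration.
cluster_dsn = "host=pc-cluster.example.com dbname=app"

def dsns_to_manage(split_enabled: bool) -> list[str]:
    """Connection strings the application must configure in each case."""
    if split_enabled:
        return [cluster_dsn]
    return [primary_dsn] + replica_dsns
```

With splitting enabled the application manages exactly one connection string regardless of how many read-only nodes exist.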
How it works
Connect your application to the cluster endpoint with Read and Write (Automatic Read-write Splitting) mode enabled.
The built-in proxy inspects each incoming request and classifies it as a read or write.
Write requests go to the primary node. Read requests are distributed across available nodes by load, where load is measured by the number of requests queued on each node.
The proxy monitors node health continuously. If a node fails or its latency exceeds the configured threshold, the proxy stops sending requests to that node and redistributes traffic to healthy nodes. When the node recovers, the proxy automatically adds it back.
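The classification and load-based dispatch described above can be sketched as a toy router. This is illustrative only: the node names, the first-keyword classification, and the queue-depth model are simplified assumptions, not the proxy's actual implementation.

```python
def is_write(sql: str) -> bool:
    # Simplified classification by leading keyword; the real proxy
    # parses the full statement.
    return sql.lstrip().split(None, 1)[0].upper() in {
        "INSERT", "UPDATE", "DELETE", "CREATE", "ALTER", "DROP"}

def route(sql: str, queues: dict) -> str:
    """Route one statement given per-node queued request counts."""
    if is_write(sql):
        target = "primary"                    # writes always hit the primary
    else:
        target = min(queues, key=queues.get)  # reads go to the least-loaded node
    queues[target] += 1                       # statement now queued on that node
    return target
```

Because load is measured as queued requests per node, consecutive reads naturally spread across whichever nodes are least busy at that moment.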
Benefits
Unified endpoint, no application changes
A single connection string handles all traffic. Add read-only nodes to scale read capacity without modifying your application.
Session-level read consistency
Within a session, the proxy tracks the data synchronization progress of each node and routes reads only to nodes whose data is current. This prevents stale reads within the same session.
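One way to picture this: each node reports how far it has replayed the primary's log, and a session remembers the log position of its own latest write. The sketch below is a toy model of that idea (the numeric positions and node names are invented for illustration; the proxy's real bookkeeping may differ).

```python
# Replay progress per node: how far each has applied the primary's log.
node_replay_pos = {"primary": 100, "ro1": 100, "ro2": 95}

def eligible_readers(session_write_pos: int, progress=node_replay_pos):
    """Nodes whose data is current relative to this session's last write.

    A read is routed only to a node that has replayed at least up to the
    session's own latest write, which prevents stale reads in-session.
    """
    return [n for n, pos in progress.items() if pos >= session_write_pos]
```

A session that has not written yet (position 0) can read from any node; a session that just wrote at position 98 is steered away from the lagging node.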
Even distribution of PREPARE statements
PREPARE statements with write operations, and their corresponding EXECUTE statements, are sent to the primary node only. PREPARE statements with read-only operations are broadcast to all nodes, and the corresponding EXECUTE statements are routed by load.
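The rule can be modeled as a toy bookkeeping layer: a write PREPARE exists only on the primary, while a read-only PREPARE is broadcast so that any node can serve the matching EXECUTE. This is a sketch of the documented behavior, not the proxy's actual code; node names and the load dictionary are illustrative assumptions.

```python
NODES = ["primary", "ro1", "ro2"]
prepared_on = {}   # statement name -> nodes holding the prepared plan

def prepare(name: str, writes: bool) -> list:
    # A write PREPARE (and its EXECUTEs) stays on the primary only; a
    # read-only PREPARE is broadcast so every node can serve the EXECUTE.
    targets = ["primary"] if writes else NODES[:]
    prepared_on[name] = targets
    return targets

def execute(name: str, load: dict) -> str:
    targets = prepared_on[name]
    if targets == ["primary"]:
        return "primary"
    # Otherwise pick the least-loaded node that holds the plan.
    return min(targets, key=lambda n: load[n])
```

Broadcasting read-only PREPAREs is what lets the later EXECUTEs participate in load balancing instead of being pinned to one node.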
Lower latency than external proxies
The built-in proxy runs as a native cluster component, eliminating the extra parsing and forwarding hops that external proxy setups introduce.
Automatic node health management
Health checks run continuously on all nodes. If a node fails or exceeds the latency threshold, traffic is redirected immediately to healthy nodes, keeping your application available even when individual read-only nodes fail.
Limitations
Unsupported connection methods and statements:
Connecting via the replication mode. To set up dual-node clusters with a primary/secondary replication architecture, use the primary node endpoint directly.
Declaring the %ROWTYPE attribute using a temporary table name. For example:
create temp table fullname (first text, last text);
select '(Joe,von Blow)'::fullname, '(Joe,d''Blow)'::fullname;
Creating temporary resources inside functions:
If a function creates a temporary table and an SQL statement later queries that table, you may get a "table does not exist" error.
If a function contains a PREPARE statement, the corresponding EXECUTE may fail with a "PREPARE statement name does not exist" error.
Routing restrictions:
All requests within a transaction are routed to the primary node. Load balancing resumes after the transaction ends.
All statements that use functions, except aggregate functions such as COUNT() and SUM(), are routed to the primary node.
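The transaction rule above can be sketched as a per-session state machine: once a transaction opens, every statement is pinned to the primary until it ends. This is a toy model of the documented routing rule; the "read-only node" return value stands in for the load-balanced choice and is an illustrative placeholder.

```python
class SessionRouter:
    """Toy model: all statements inside an open transaction go to the
    primary; load balancing resumes after COMMIT/ROLLBACK."""

    def __init__(self):
        self.in_txn = False

    def route(self, sql: str) -> str:
        word = sql.lstrip().split(None, 1)[0].upper()
        if word == "BEGIN":
            self.in_txn = True
            return "primary"
        if word in ("COMMIT", "ROLLBACK"):
            self.in_txn = False
            return "primary"
        if self.in_txn or word != "SELECT":
            return "primary"
        return "read-only node"   # placeholder for the load-balanced choice
```

This pinning is also why transaction-wrapped benchmarks show idle read-only nodes, as discussed in the FAQ below.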
Set up read/write splitting
To use read/write splitting, configure a cluster endpoint with Read and Write (Automatic Read-write Splitting) mode:
To create a custom cluster endpoint, see Create and modify a custom cluster endpoint.
To modify an existing cluster endpoint (change its read/write mode, add or remove nodes, and so on), see FAQ.
Advanced settings
Transaction splitting — Routes read-only statements inside transactions to read-only nodes instead of always sending them to the primary node. Enable this when your workload is transaction-heavy and read-only nodes are underutilized. See Transaction splitting.
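A toy model of the idea, assuming (as a simplification of the feature's documented intent) that read-only statements before a transaction's first write may still go to read-only nodes, after which the transaction is pinned to the primary; the exact consistency rules the real feature applies may be stricter.

```python
class SplittingRouter:
    """Toy model of transaction splitting: reads issued before the first
    write in a transaction may go to read-only nodes; once the transaction
    writes, the remainder is pinned to the primary."""

    def __init__(self):
        self.in_txn = False
        self.txn_wrote = False

    def route(self, sql: str) -> str:
        word = sql.lstrip().split(None, 1)[0].upper()
        if word == "BEGIN":
            self.in_txn, self.txn_wrote = True, False
            return "primary"
        if word in ("COMMIT", "ROLLBACK"):
            self.in_txn = False
            return "primary"
        if word == "SELECT" and not (self.in_txn and self.txn_wrote):
            return "read-only node"   # placeholder for the load-balanced choice
        if self.in_txn:
            self.txn_wrote = True     # first write pins the transaction
        return "primary"
```

Compared with the default behavior, read-heavy transactions offload their leading reads instead of sending everything to the primary.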
Consistency levels — Controls how strictly the proxy enforces read consistency across nodes. Relaxing the level can improve read throughput; tightening it reduces the risk of stale reads. See Consistency levels.
FAQ
Why can't I immediately read a record after inserting it?
A replication delay between the primary node and read-only nodes means the inserted row may not yet be visible on the read-only node that handled your query. PolarDB supports session consistency, which ensures that a session can read its own writes.
Why are my read-only nodes showing no load?
By default, all requests inside transactions go to the primary node. If your benchmark tool (such as sysbench) wraps every query in a transaction, all traffic lands on the primary node and read-only nodes appear idle.
To work around this in sysbench, skip transactions: use --oltp-skip-trx=on in sysbench 0.5 or --skip-trx=on in sysbench 1.0.
For production workloads with many transactions, enable transaction splitting in the console so that read-only statements inside transactions are routed to read-only nodes. See Transaction splitting.
Why does one node receive more requests than the others?
Requests are distributed by load (queued request count). A node with a lower queue depth receives more incoming requests. This is expected behavior—the proxy continuously rebalances traffic as load fluctuates.
Does PolarDB support zero-delay reads?
No. Under normal load, replication from the primary node to read-only nodes lags by several milliseconds, so a cluster endpoint in Read and Write (Automatic Read-write Splitting) mode cannot return freshly written data with zero delay. If your application requires zero-delay reads after writes, connect to the primary endpoint, which always points to the primary node and returns the latest data immediately.
Are new read-only nodes automatically included in read/write splitting?
Sessions created after you add a read-only node will route reads to the new node. Sessions created before you add the node will not. To include the new node for an existing session, close and reopen the session—for example, by restarting your application.
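This behavior can be pictured as each session snapshotting the node list when it is created. Whether the proxy literally stores a per-session copy is an assumption made for illustration; the sketch only models the documented visible behavior.

```python
cluster_nodes = ["primary", "ro1"]   # current node list, shared by the proxy

class Session:
    """Toy model: a session captures the node list at creation time, so a
    read-only node added later is seen only by sessions opened afterwards."""

    def __init__(self):
        self.read_pool = list(cluster_nodes)

old_session = Session()
cluster_nodes.append("ro2")          # a read-only node is added to the cluster
new_session = Session()
```

Reopening a session (for example, by restarting the application) recreates the snapshot and therefore picks up the new node.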