On the Node Management page, you can adjust the number of replica nodes to improve disaster recovery, and enable read/write splitting to boost read performance in high-concurrency workloads.
Prerequisites
Before you begin, make sure that:
The instance is deployed in cloud-native mode
The instance is a Redis Open-Source Edition instance, or a Tair (Enterprise Edition) DRAM-based or persistent memory-optimized instance
The instance has at least 1 GB of memory
The instance is a high availability instance
Supported limits
The following table shows the replica node and read replica limits by instance type.
| Instance type | Replica nodes | Read replicas |
|---|---|---|
| Standard instance | 1–9 | 1–9 |
| Cluster instance (per shard) | 1–4 | 1–4 |
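The limits in the table above can be sketched as a simple validation check. This is an illustrative helper only; the `LIMITS` dictionary and `is_valid_count` function are hypothetical names, not part of any SDK.

```python
# Illustrative encoding of the node limits from the table above.
# Keys and function names are hypothetical, for demonstration only.
LIMITS = {
    "standard": (1, 9),           # replica nodes or read replicas
    "cluster_per_shard": (1, 4),  # replica nodes or read replicas per shard
}

def is_valid_count(instance_type, count):
    """Return True if the requested node count is within the allowed range."""
    low, high = LIMITS[instance_type]
    return low <= count <= high

print(is_valid_count("standard", 9))           # prints True
print(is_valid_count("cluster_per_shard", 5))  # prints False
```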
Adjust the number of replica nodes
Log on to the console and go to the Instances page. In the top navigation bar, select the region where your instance resides. Find the instance and click its ID.
In the left-side navigation pane, click Node Management.
On the Node Management page, click Modify in the Actions column.
In the panel that appears, set the number of replica nodes.
Standard instance: 1–9 replica nodes
Cluster instance: 1–4 replica nodes per shard
Follow the instructions to complete the payment.
After payment, the instance status changes to Changing Configuration. The status returns to Running within 1–5 minutes. Track progress on the instance details page.
Dual-zone configuration
For dual-zone instances, use the following node distribution to minimize failover latency:
| Zone | Recommended configuration |
|---|---|
| Primary zone | 1 master node + 1 replica node or read replica |
| Secondary zone | 2 replica nodes or read replicas |
When a high availability (HA) switchover is triggered, the system performs the switchover within the primary zone first to avoid the latency increase that comes with a cross-zone failover.
Enable read/write splitting
Read/write splitting uses a star replication architecture in which all read replicas synchronize directly from the master node, keeping data synchronization latency low. After you enable it, the instance automatically routes read and write requests without requiring code changes, which makes it well suited for high-concurrency, read-heavy workloads.
Enabling or disabling read/write splitting causes a transient disconnection from the instance and triggers data migration in the background. Adjusting the number of read replicas does not cause a transient disconnection. Perform this operation during off-peak hours when write traffic is low, and make sure that your application can automatically reconnect.
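The reconnection requirement above can be handled with a simple retry wrapper. The sketch below is illustrative: `ConnectionError` stands in for whatever connection exception your client library raises (for example, the connection error class in a Redis client), and `call_with_reconnect` is a hypothetical helper name, not a real API.

```python
import time

# Stand-in for a client library's connection error
# (for example, a Redis client's connection exception).
class ConnectionError(Exception):
    pass

def call_with_reconnect(operation, retries=5, backoff_seconds=0.0):
    """Retry an operation that may fail during a transient disconnection.

    operation: a zero-argument callable that raises ConnectionError on failure.
    """
    for attempt in range(retries):
        try:
            return operation()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            time.sleep(backoff_seconds * (2 ** attempt))  # exponential backoff

# Simulated flaky operation: fails twice, then succeeds, mimicking
# a brief disconnection while the configuration change is applied.
attempts = {"count": 0}
def flaky_get():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("connection reset")
    return "value"

print(call_with_reconnect(flaky_get))  # prints value
```

In production, the same pattern applies with your actual client calls in place of `flaky_get`, and a nonzero backoff to avoid hammering the instance while it is changing configuration.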
Log on to the console and go to the Instances page. In the top navigation bar, select the region where your instance resides. Find the instance and click its ID.
In the left-side navigation pane, click Node Management.
Turn on the Read/Write Splitting switch.
In the panel that appears, confirm the instance configuration and order cost, then click Pay.
Note: New read replicas have the same specifications as the master node (the original instance).
Follow the instructions to complete the payment.
After payment, the instance status changes to Changing Configuration. The status returns to Running within 1–5 minutes. Track progress on the instance details page.
(Optional) To adjust the number of read replicas, click Modify in the Actions column on the Node Management page.
Standard instance: 1–9 read replicas
Cluster instance: 1–4 read replicas per shard
If the instance spans two zones, it provides separate endpoints for the primary zone and the secondary zone. Both endpoints support read and write operations. Direct requests from applications in the secondary zone to the secondary zone endpoint to achieve proximity-based access and load balancing.
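The proximity-based routing described above amounts to choosing the endpoint in the same zone as the application. The sketch below is illustrative only: the endpoint strings are placeholders (use the endpoints shown on your instance details page), and `endpoint_for_zone` is a hypothetical helper name.

```python
# Placeholder endpoints for illustration; copy the real endpoints
# from your instance details page.
ENDPOINTS = {
    "primary": "r-example-primary.redis.example.com:6379",
    "secondary": "r-example-secondary.redis.example.com:6379",
}

def endpoint_for_zone(app_zone, primary_zone):
    """Pick the endpoint in the application's own zone to avoid cross-zone hops."""
    if app_zone == primary_zone:
        return ENDPOINTS["primary"]
    return ENDPOINTS["secondary"]

# An application deployed in the secondary zone gets the secondary endpoint.
print(endpoint_for_zone("zone-b", primary_zone="zone-a"))
```

Both endpoints accept reads and writes, so this choice affects latency and load distribution rather than correctness.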