
Tair (Redis® OSS-Compatible): Manage nodes

Last Updated: Mar 28, 2026

On the Node Management page, you can adjust the number of replica nodes to improve disaster recovery, and enable read/write splitting to boost read performance in high-concurrency workloads.

Prerequisites

Before you begin, make sure that:

  • The instance is deployed in cloud-native mode

  • The instance is a Redis Open-Source Edition or Tair (Enterprise Edition) DRAM-optimized or persistent memory-optimized instance

  • The instance has at least 1 GB of memory

  • The instance is a high availability instance

Supported limits

The following table shows the replica node and read replica limits by instance type.

Instance type                  Replica nodes    Read replicas
Standard instance              1–9              1–9
Cluster instance (per shard)   1–4              1–4

Adjust the number of replica nodes

  1. Log on to the console and go to the Instances page. In the top navigation bar, select the region where your instance resides. Find the instance and click its ID.

  2. In the left-side navigation pane, click Node Management.

  3. On the Node Management page, click Modify in the Actions column.

  4. In the panel that appears, set the number of replica nodes.

    • Standard instance: 1–9 replica nodes

    • Cluster instance: 1–4 replica nodes per shard

  5. Follow the instructions to complete the payment.

After payment, the instance status changes to Changing Configuration. The status returns to Running within 1–5 minutes. Track progress on the instance details page.
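Once the instance returns to the Running state, you can confirm the new replica count from the output of the Redis `INFO replication` command, which the master reports as `connected_slaves` plus one `slaveN:` line per replica. A minimal stdlib-only sketch of parsing that output (the sample payload below is illustrative):

```python
# Count replicas reported as online in the text returned by INFO replication.
def count_online_replicas(info_text: str) -> int:
    online = 0
    for line in info_text.splitlines():
        line = line.strip()
        # Replica lines look like: slave0:ip=10.0.0.2,port=6379,state=online,...
        if line.startswith("slave") and "state=online" in line:
            online += 1
    return online

# Illustrative INFO replication payload, as a master with two replicas reports it.
sample = """# Replication
role:master
connected_slaves:2
slave0:ip=10.0.0.2,port=6379,state=online,offset=1024,lag=0
slave1:ip=10.0.0.3,port=6379,state=online,offset=1024,lag=1
"""
print(count_online_replicas(sample))  # 2
```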

Dual-zone configuration

For dual-zone instances, use the following node distribution to minimize failover latency:

Zone              Recommended configuration
Primary zone      1 master node + 1 replica node or read replica
Secondary zone    2 replica nodes or read replicas

When a high availability (HA) switchover is triggered, the system performs the switchover within the primary zone first to avoid the latency increase that comes with a cross-zone failover.
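The in-zone-first preference above can be sketched as a simple selection rule. This is an illustration of the stated policy only, not the service's actual failover logic, and the node and zone names are hypothetical:

```python
# Pick a failover target, preferring a replica in the primary zone so the
# switchover stays in-zone and avoids cross-zone latency.
def pick_failover_target(replicas, primary_zone):
    """replicas: list of (node_id, zone) tuples for healthy replicas."""
    same_zone = [n for n, z in replicas if z == primary_zone]
    if same_zone:
        return same_zone[0]  # in-zone switchover: lowest latency impact
    cross_zone = [n for n, z in replicas if z != primary_zone]
    return cross_zone[0] if cross_zone else None

replicas = [("r-secondary-1", "zone-b"), ("r-primary-1", "zone-a")]
print(pick_failover_target(replicas, "zone-a"))  # r-primary-1
```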

Enable read/write splitting

Read/write splitting uses a star replication architecture in which all read replicas sync directly from the master node, keeping data synchronization latency low. Once enabled, the instance automatically routes read and write requests without any code changes, making it well suited for high-concurrency, read-heavy workloads.
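Conceptually, the automatic routing means read-only commands go to read replicas while everything else goes to the master. The sketch below illustrates that idea with a small, made-up subset of commands; it is not the service's actual routing table:

```python
# Illustrative routing rule: read-only commands can be served by read replicas,
# all other commands must go to the master. The command set is a small sample.
READ_COMMANDS = {"GET", "MGET", "EXISTS", "TTL", "LRANGE", "SMEMBERS", "HGETALL", "ZRANGE"}

def route(command: str) -> str:
    return "read-replica" if command.upper() in READ_COMMANDS else "master"

print(route("GET"))  # read-replica
print(route("SET"))  # master
```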

Important

Enabling or disabling read/write splitting briefly interrupts connections to the instance and triggers data migration in the background. Adjusting the number of read replicas does not interrupt connections. Perform this operation during off-peak hours when write traffic is low, and make sure your application can automatically reconnect.
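To tolerate the brief interruption, clients should retry with backoff. Mature Redis clients ship built-in retry support, so treat this stdlib-only sketch as an illustration of the backoff policy rather than production code; `do_request` stands in for a single Redis operation:

```python
import time

# Retry a single operation with exponential backoff across transient
# connection errors, re-raising only after the final attempt fails.
def call_with_retry(do_request, attempts=4, base_delay=0.1):
    for attempt in range(attempts):
        try:
            return do_request()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Simulate a transient interruption: fail twice, then succeed.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("connection reset during configuration change")
    return "OK"

print(call_with_retry(flaky, base_delay=0.01))  # OK
```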

  1. Log on to the console and go to the Instances page. In the top navigation bar, select the region where your instance resides. Find the instance and click its ID.

  2. In the left-side navigation pane, click Node Management.

  3. Turn on the Read/Write Splitting switch.

  4. In the panel that appears, confirm the instance configuration and order cost, then click Pay.

    Note

    New read replicas have the same specifications as the master node (original instance).

  5. Follow the instructions to complete the payment.

After payment, the instance status changes to Changing Configuration. The status returns to Running within 1–5 minutes. Track progress on the instance details page.

  6. (Optional) To adjust the number of read replicas, click Modify in the Actions column on the Node Management page.

    • Standard instance: 1–9 read replicas

    • Cluster instance: 1–4 read replicas per shard

Note

If the instance spans two zones, it provides separate endpoints for the primary zone and the secondary zone (both support read and write operations). Have applications in the secondary zone connect to the secondary zone endpoint to achieve proximity-based access and load balancing.
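The zone-aware connection choice above can be expressed as a simple lookup. The endpoint hostnames and zone IDs below are made up for illustration; substitute the endpoints shown on your instance's details page:

```python
# Map each application zone to the instance endpoint in that zone.
# Hostnames and zone IDs are hypothetical placeholders.
ENDPOINTS = {
    "zone-a": "r-example-primary.redis.example.com",    # primary-zone endpoint
    "zone-b": "r-example-secondary.redis.example.com",  # secondary-zone endpoint
}

def endpoint_for(app_zone: str) -> str:
    # Fall back to the primary-zone endpoint for zones without a local endpoint.
    return ENDPOINTS.get(app_zone, ENDPOINTS["zone-a"])

print(endpoint_for("zone-b"))  # r-example-secondary.redis.example.com
```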