
Tair (Redis® OSS-Compatible): Cluster architecture

Last Updated: Mar 28, 2026

The cluster architecture scales Tair (Redis OSS-compatible) beyond the limits of a single-threaded Redis process by distributing data across multiple shards. Each shard runs an independent master-replica pair, so both storage capacity and throughput grow linearly as you add shards—up to 16 TB (64 GB × 256 shards) and millions of requests per second. Two connection modes are available: proxy mode and direct connection mode.
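Data is distributed across shards by mapping each key to one of 16384 hash slots, as in open-source Redis Cluster: the slot is CRC16 of the key (or of its hash tag, if present) modulo 16384. A minimal sketch of that calculation, using the CRC-16/XMODEM variant that Redis uses:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 Redis Cluster hash slots."""
    # If the key contains a non-empty {...} section, only that substring is hashed.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

For example, `hash_slot("foo")` is 12182, matching `CLUSTER KEYSLOT foo` on a real instance; a shard owns a contiguous range of these slots.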

Choose a connection mode

| | Proxy mode | Direct connection mode |
| --- | --- | --- |
| Client compatibility | Works with any Redis client; connects the same way as a standard master-replica instance | Requires a Redis Cluster-aware client library |
| Read/write splitting | Supported (cloud-native instances only) | Not supported |
| Advanced features | Hot key caching, enhanced failover | Standard Redis Cluster behavior |
| When to use | Recommended for most workloads | Use when your client requires native Redis Cluster protocol access |
Proxy mode and direct connection mode cannot be enabled at the same time on cloud-native cluster instances.

Proxy mode (recommended)

Proxy nodes sit between clients and data shards. Because proxy nodes expose a standard Redis endpoint, no client-side changes are required when migrating from a standard master-replica instance.

Architecture

Multi-replica cluster architecture in proxy mode

| Component | Description |
| --- | --- |
| Proxy node | Forwards client requests to the correct data shard. Multiple proxy nodes run in parallel to serve traffic and provide disaster recovery. |
| Data shard | Each shard uses a high availability (HA) architecture with one master node and up to four replica nodes, deployed on separate hosts. Replica nodes can be placed in a secondary zone. More replicas reduce the risk of data loss and improve disaster recovery coverage. |
| HA system | If a master node fails, the system automatically switches workloads to a replica node within 30 seconds. In dual-zone deployments, if a replica exists in the primary zone, workloads switch to that replica first to avoid cross-zone access latency. |

For a full list of capabilities provided by proxy nodes, see Features of proxy nodes.

Read/write splitting

Enable read/write splitting when read traffic saturates the master node of a cloud-native cluster instance in proxy mode. After enabling it, the proxy routes read requests to replica nodes, freeing the master node for write-heavy workloads.

For setup instructions, see Enable read/write splitting for cluster instances.

Read/write splitting is available only on cloud-native cluster instances in proxy mode. To use this feature on a non-cluster instance, migrate your data to a cloud-native cluster instance in proxy mode using Data Transmission Service (DTS).

Direct connection mode

In direct connection mode, clients connect to data shards directly over the Redis Cluster protocol. On the first connection, DNS resolves the instance's private endpoint to a random virtual IP address (VIP), which the client uses to discover the full cluster topology.

Architecture of the cluster in direct connection mode


Direct connection mode supports multiple replicas but does not support read/write splitting.

For connection requirements and usage examples, see Use the direct connection mode to connect to a cluster instance.
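In direct connection mode, the client library is responsible for tracking slot ownership and following the `MOVED` and `ASK` redirects defined by the Redis Cluster protocol; cluster-aware clients handle this internally. A minimal sketch of parsing a redirect error, assuming the standard reply format (`MOVED <slot> <host>:<port>`):

```python
def parse_redirect(err: str) -> tuple[str, int, str, int]:
    """Split a Redis Cluster redirect error into (kind, slot, host, port).

    `kind` is "MOVED" (slot permanently owned by another shard) or
    "ASK" (one-off redirect during slot migration).
    """
    kind, slot, addr = err.split()
    host, port = addr.rsplit(":", 1)  # rsplit tolerates IPv6-style hosts
    return kind, int(slot), host, int(port)
```

A cluster-aware client uses the slot and address from a `MOVED` reply to refresh its slot map, then retries the command against the correct shard.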

When to use the cluster architecture

Your dataset has outgrown a single instance

The cluster scales storage to 16 TB (64 GB × 256 shards). Use the cluster architecture when a single standard instance can no longer hold your data.

Read or write throughput is a bottleneck

Add shards to scale throughput linearly across multiple CPU cores. If read traffic on the master node is the bottleneck, enable read/write splitting on a cloud-native cluster instance in proxy mode.

Your workload is throughput-intensive

The cluster distributes requests across multiple shards and vCPUs, enabling efficient hot-data access and sustained high-throughput operation.

Your application uses few multi-key operations

The cluster uses a distributed architecture where all keys in a multi-key operation must reside in the same hash slot. If your workload relies heavily on multi-key commands, review the restrictions before choosing the cluster architecture. For a list of affected commands, see Limits on commands supported by cluster instances and read/write splitting instances.
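If you do need multi-key commands on a cluster instance, hash tags let you co-locate related keys: only the substring inside the first non-empty `{...}` section of a key name is hashed, so keys sharing a tag land in the same slot. A sketch of the tag-extraction rule (key names are illustrative):

```python
def hash_tag(key: str) -> str:
    """Return the portion of `key` that Redis Cluster actually hashes."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # tag must be non-empty
            return key[start + 1:end]
    return key                       # no tag: the whole key is hashed
```

Here `hash_tag("{user:42}:cart")` and `hash_tag("{user:42}:profile")` both return `user:42`, so the two keys map to the same slot and a command such as `MGET {user:42}:cart {user:42}:profile` succeeds.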

Your deployment is latency-sensitive and spans two zones

In a dual-zone cluster instance, add a replica node to the primary zone (for example, one master and one replica in the primary zone, and one replica in the secondary zone). If a master-replica switchover occurs, traffic stays within the primary zone and avoids cross-zone latency.

Usage notes

  • The classic cluster architecture supports the master-replica model but does not support read/write splitting.

  • Proxy mode and direct connection mode cannot be enabled simultaneously on cloud-native cluster instances.

Modify cluster instance configurations

| Task | Steps |
| --- | --- |
| Add replica nodes | Go to the Node Management page on the instance details page, then click Modify. |
| Add read replicas | Go to the Node Management page on the instance details page. Turn on Read/Write Splitting, then click Modify. |
| Add shards | In the upper-right corner of the instance details page, choose Shard Adjustment > Add Shards. |
| Change shard specifications | In the upper-right corner of the instance details page, choose Specification Adjustment > Specification Upgrade/Downgrade. |