
Tair (Redis® OSS-Compatible): Standard architecture

Last Updated: Mar 28, 2026

The standard architecture of Tair (Redis OSS-compatible) runs without cluster mode. All data is stored in a single shard, which makes the instance straightforward to operate, cost-efficient, and compatible with the full Redis protocol. Unlike the cluster architecture, the standard architecture does not support shard quantity adjustment, but it does provide a high availability (multi-replica) instance type.

Architecture types

Two deployment modes are available: cloud-native and classic.

Cloud-native standard architecture

A cloud-native instance supports one master node and up to 9 replica nodes. The master node handles all write and read workloads. Replica nodes stay in hot standby.

In a multi-zone deployment, when a failover is triggered, the system first attempts to switch to a replica within the same zone to avoid cross-zone access from your application.


Classic standard architecture

A classic instance supports one master node and one replica node. In a multi-zone deployment, the replica node is placed in the secondary zone.


How it works

The standard architecture uses a master-replica model. The master node handles daily workloads. Replica nodes remain in hot standby.

When the master node fails, the proprietary high availability (HA) system detects the failure and performs a failover:

  1. The HA system detects the master node failure (disk I/O failure, CPU failure, or similar).

  2. A replica node is promoted to become the new master node.

The entire failover completes within 30 seconds.
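Because a failover can take up to 30 seconds, clients should retry failed operations rather than surface the first connection error. A minimal retry sketch in Python (the `call_with_retry` helper, its parameters, and the simulated `flaky_get` operation are illustrative, not part of any Tair SDK):

```python
import time


def call_with_retry(op, retries=5, delay=1.0):
    """Run op(), retrying on connection errors to ride out a failover.

    With retries=5 and delay=1.0 the caller tolerates roughly 5 seconds
    of unavailability; tune both values to cover the documented
    30-second failover window.
    """
    for attempt in range(retries):
        try:
            return op()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # retry budget exhausted; surface the error
            time.sleep(delay)


# Simulated operation: fails twice (as during a failover), then succeeds.
state = {"calls": 0}

def flaky_get():
    state["calls"] += 1
    if state["calls"] <= 2:
        raise ConnectionError("master unavailable")
    return "value"

print(call_with_retry(flaky_get, retries=5, delay=0.01))  # value
```

Production Redis clients (for example, redis-py) ship their own retry and backoff mechanisms; the sketch only shows the pattern of waiting out the promotion of a replica.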

Standard architecture vs. cluster architecture

| Capability | Standard architecture | Cluster architecture |
| --- | --- | --- |
| Data distribution | Single shard | Multiple shards |
| Shard quantity adjustment | Not supported | Supported |
| Read replicas | Up to 9 (cloud-native) | Per shard |
| Read/write splitting | Supported | Supported |
| Redis protocol compatibility | Full | Partial (multi-key commands restricted) |
| Recommended max QPS | 100,000 | 200,000+ |
| Typical use case | Stable single-instance workloads | High-throughput or large-scale workloads |

Features

Reliability

Service reliability: The master and replica nodes run on separate physical hosts. If the master node fails, the proprietary HA system performs a failover automatically.

Data reliability: Data persistence is enabled by default, and all data is written to disk. Backup and restoration are supported: clone or roll back an instance from a backup set to recover from accidental operations. Instances in zones with disaster recovery capabilities (for example, Hangzhou Zone H and Zone I) support zone-level disaster recovery.

Redis protocol compatibility

The standard architecture is fully compatible with the Redis protocol. Migrate workloads from a self-managed Redis database to a standard instance without service interruption, using Alibaba Cloud Data Transmission Service (DTS) for incremental data migration.

Proprietary replication improvements

Alibaba Cloud's master-replica replication mechanism addresses several limitations of native Redis replication:

| Native Redis limitation | How Tair addresses it |
| --- | --- |
| Full sync requires a fork, causing latency of milliseconds to seconds | Non-blocking replication: the fork issue is resolved, eliminating sync-induced latency |
| PSYNC failure triggers a full Redis Database (RDB) sync, consuming disk I/O and CPU | Write-ahead logs (WALs) replicate data between nodes; replication interruptions have minimal impact on performance |
| Child processes performing copy-on-write (COW) consume master node memory, risking out-of-memory crashes | Memory pressure from COW is eliminated |
| GB-level RDB file transfer causes outbound traffic bursts and sequential I/O spikes | WAL-based replication avoids large file transfers |

When to use the standard architecture

QPS below 100,000 on a single instance

Native Redis uses a single-threaded model. The standard architecture fits most workloads under 100,000 queries per second (QPS). If your QPS requirements grow, enable read/write splitting or switch to the cluster architecture.

Full Redis protocol compatibility required

The standard architecture supports the full Redis protocol, including multi-key commands that are restricted in cluster mode. Migrate from a self-managed Redis database or Redis Sentinel without modifying application code.
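The cluster restriction exists because Redis Cluster hashes each key to one of 16,384 slots and rejects a multi-key command unless every key lands in the same slot; a single-shard standard instance has no such constraint. A sketch of the slot calculation, written from the publicly documented algorithm (CRC16/XMODEM over the key, or over its `{...}` hash tag if present):

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc


def hash_slot(key: str) -> int:
    """Map a key to one of 16384 slots, honoring {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag contents
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384


# Keys that share a hash tag map to the same slot, so multi-key commands
# such as MSET or SUNIONSTORE on them succeed even in cluster mode:
print(hash_slot("{user:1}:name") == hash_slot("{user:1}:email"))  # True
# Unrelated keys usually land in different slots:
print(hash_slot("foo"), hash_slot("bar"))
```

On a standard instance none of this matters: every key lives in the same shard, so multi-key commands work regardless of how keys are named.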

Redis used as persistent storage

Data persistence, backup, and point-in-time restoration are built in. Use backups to clone an instance or roll back after accidental data changes.

Read-heavy workload with few sorting or compute-intensive commands

Enable read/write splitting to scale read throughput. For CPU-intensive sorting or compute workloads at high scale, use the cluster architecture instead.
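Tair performs this routing server-side behind a single endpoint, so no client change is needed. Purely to illustrate the idea, a toy router that sends writes to the master and spreads reads round-robin across replicas (the class, the node names, and the command subset are hypothetical):

```python
import itertools


class ReadWriteSplitter:
    """Toy illustration of read/write splitting: writes go to the master,
    reads round-robin across replicas. Tair does this transparently on
    the server; this class exists only to show the routing concept."""

    WRITE_COMMANDS = {"SET", "DEL", "EXPIRE", "LPUSH"}  # illustrative subset

    def __init__(self, master: str, replicas: list[str]):
        self.master = master
        self._reads = itertools.cycle(replicas)

    def route(self, command: str) -> str:
        """Return the node a command would be sent to."""
        if command.upper() in self.WRITE_COMMANDS:
            return self.master
        return next(self._reads)


router = ReadWriteSplitter("master", ["replica-1", "replica-2"])
print(router.route("SET"))  # master
print(router.route("GET"))  # replica-1
print(router.route("GET"))  # replica-2
```

Adding replicas multiplies read throughput in this model, which is why read/write splitting helps read-heavy workloads but not CPU-bound write or compute workloads.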

FAQ

I'm running Redis in Sentinel mode. Which architecture should I choose when migrating to the cloud?

Choose the standard HA architecture, then enable Sentinel-compatible mode on the instance. This lets you connect to Tair the same way you connect to Redis Sentinel—no application code changes required.

My instance has 8 GB of memory with the standard architecture. How do I improve performance without upgrading memory?

Two options are available depending on your bottleneck:

  • Connections or bandwidth are the bottleneck: Enable read/write splitting. This requires no code changes and no endpoint modification, and can be disabled at any time. The instance automatically routes read and write requests to the appropriate nodes.

  • CPU utilization is consistently high: Upgrade to a cluster instance. Adding shards distributes the CPU load across multiple nodes. Before upgrading, check command compatibility between standard and cluster architectures—see Limits on commands supported by cluster instances and read/write splitting instances.

The following table compares performance for an 8 GB Redis Open-Source Edition instance across architectures:

| Architecture | Memory (GB) | CPU cores | Bandwidth (Mbit/s) | Maximum connections | QPS reference value |
| --- | --- | --- | --- | --- | --- |
| Standard architecture (master and replica nodes) | 8 | 2 | 96 | 20,000 | 100,000 |
| Standard architecture (read/write splitting enabled, one master node, one read replica) | 8 | 4 (2 × 2) | 192 (96 × 2) | 40,000 (20,000 × 2) | 200,000 |
| Cluster architecture (2 shards) | 8 (4 GB × 2 shards) | 4 (2 × 2) | 192 (96 × 2) | 40,000 (20,000 × 2) | 200,000 |

What's next

Manage nodes
