The standard architecture of Tair (Redis OSS-compatible) runs without cluster mode. All data resides in a single shard, which makes the instance straightforward to operate, cost-efficient, and compatible with the full Redis protocol. Unlike the cluster architecture, the standard architecture does not support shard quantity adjustment, but it does provide a high availability (multi-replica) instance type.
Architecture types
Two deployment modes are available: cloud-native and classic.
Cloud-native standard architecture
A cloud-native instance supports one master node and up to 9 replica nodes. The master node handles all write and read workloads. Replica nodes stay in hot standby.
In a multi-zone deployment, when a failover is triggered, the system first attempts to switch to a replica within the same zone so that your application avoids cross-zone access.
Classic standard architecture
A classic instance supports one master node and one replica node. In a multi-zone deployment, the replica node is placed in the secondary zone.
How it works
The standard architecture uses a master-replica model. The master node handles daily workloads. Replica nodes remain in hot standby.
When the master node fails, the proprietary high availability (HA) system detects the failure and performs a failover:
1. The HA system detects the master node failure (for example, a disk I/O failure or CPU failure).
2. A replica node is promoted to become the new master node.
3. The entire failover completes within 30 seconds.
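The failover sequence above can be sketched as a minimal state machine. This is illustrative only: the node names and health-tracking logic are hypothetical, not the proprietary HA system itself.

```python
# Illustrative sketch of the HA failover sequence described above.
# The real HA system is proprietary; this only models the steps.

class Shard:
    def __init__(self, master, replicas):
        self.master = master                  # node serving reads/writes
        self.replicas = list(replicas)        # hot-standby nodes
        self.healthy = {master: True, **{r: True for r in replicas}}

    def fail(self, node):
        # Step 1: the HA system detects a failure (disk I/O, CPU, ...).
        self.healthy[node] = False

    def failover(self):
        # Step 2: promote a healthy replica to become the new master.
        if self.healthy[self.master]:
            return self.master                # master is fine; nothing to do
        for r in self.replicas:
            if self.healthy[r]:
                self.replicas.remove(r)
                self.replicas.append(self.master)  # old master is demoted
                self.master = r
                return r
        raise RuntimeError("no healthy replica available")

shard = Shard("master-1", ["replica-1", "replica-2"])
shard.fail("master-1")
new_master = shard.failover()
print(new_master)  # replica-1 is promoted
```

In the managed service this all happens server-side; the endpoint your application connects to is unchanged after the switch.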
Standard architecture vs. cluster architecture
| Capability | Standard architecture | Cluster architecture |
|---|---|---|
| Data distribution | Single shard | Multiple shards |
| Shard quantity adjustment | Not supported | Supported |
| Read replicas | Up to 9 (cloud-native) | Per shard |
| Read/write splitting | Supported | Supported |
| Redis protocol compatibility | Full | Partial (multi-key commands restricted) |
| Recommended max QPS | 100,000 | 200,000+ |
| Typical use case | Stable single-instance workloads | High-throughput or large-scale workloads |
Features
Reliability
Service reliability: The master and replica nodes run on separate physical hosts. If the master node fails, the proprietary HA system performs a failover automatically.
Data reliability: Data persistence is enabled by default—all data is written to disk. Backup and restoration are supported: clone or roll back an instance from a backup set to recover from accidental operations. Instances in zones with disaster recovery capabilities (for example, Hangzhou Zone H and Zone I) support zone-disaster recovery.
Redis protocol compatibility
The standard architecture is fully compatible with the Redis protocol. Migrate workloads from a self-managed Redis database to a standard instance without service interruption, using Alibaba Cloud Data Transmission Service (DTS) for incremental data migration.
Proprietary replication improvements
Alibaba Cloud's master-replica replication mechanism addresses several limitations of native Redis replication:
| Native Redis limitation | How Tair addresses it |
|---|---|
| Full sync requires a fork, causing latency of milliseconds to seconds | Non-blocking replication eliminates the fork, removing sync-induced latency |
| PSYNC failure triggers a full Redis Database (RDB) sync, consuming disk I/O and CPU | Write-ahead logs (WALs) replicate data between nodes; replication interruptions have minimal impact on performance |
| Child processes performing copy-on-write (COW) consume master node memory, risking out-of-memory crashes | Because no child process is forked for replication, COW memory pressure is eliminated |
| GB-level RDB file transfer causes outbound traffic bursts and sequential I/O spikes | WAL-based replication avoids large file transfers |
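The WAL-based approach in the table can be illustrated with a toy append-only log: the replica replays entries from its last applied offset, so an interruption resumes incrementally instead of triggering a full RDB sync. This is a simplified sketch under assumed semantics; the actual Tair log format is not public.

```python
# Toy write-ahead log: the master appends operations, a replica
# replays from its own offset. After an interruption, the replica
# resumes where it stopped -- no full resync needed.

class Master:
    def __init__(self):
        self.wal = []          # append-only list of (key, value) ops
        self.data = {}

    def set(self, key, value):
        self.wal.append((key, value))
        self.data[key] = value

    def entries_since(self, offset):
        return self.wal[offset:]

class Replica:
    def __init__(self):
        self.offset = 0        # last replayed WAL position
        self.data = {}

    def sync(self, master):
        for key, value in master.entries_since(self.offset):
            self.data[key] = value
            self.offset += 1

m, r = Master(), Replica()
m.set("a", 1)
m.set("b", 2)
r.sync(m)                      # replica catches up (offset = 2)
m.set("c", 3)                  # writes continue after an "interruption"
r.sync(m)                      # only the one new entry is replayed
print(r.data == m.data)  # True
```

Contrast this with native Redis, where a broken PSYNC link can force the master to fork and ship a full RDB snapshot.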
When to use the standard architecture
QPS below 100,000 on a single instance
Native Redis uses a single-threaded model. The standard architecture fits most workloads under 100,000 queries per second (QPS). If your QPS requirements grow, enable read/write splitting or switch to the cluster architecture.
Full Redis protocol compatibility required
The standard architecture supports the full Redis protocol, including multi-key commands that are restricted in cluster mode. Migrate from a self-managed Redis database or Redis Sentinel without modifying application code.
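The cluster-mode restriction exists because Redis Cluster maps every key to one of 16,384 hash slots, and a multi-key command is rejected when its keys land in different slots. A single-shard standard instance has no such partitioning. The sketch below reimplements the standard CRC16/XMODEM slot mapping and hash-tag rule used by Redis Cluster, for illustration only.

```python
# How Redis Cluster assigns keys to hash slots: CRC16/XMODEM mod 16384.
# On a single-shard standard instance this partitioning does not apply,
# so multi-key commands work on any combination of keys.

def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # Hash-tag rule: if the key contains a non-empty {tag}, only the
    # tag is hashed, so related keys can be forced into the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag always map to the same slot, so a command
# like MSET over {user}:name and {user}:age works even in cluster mode.
print(hash_slot("{user}:name") == hash_slot("{user}:age"))  # True
```

On a cluster instance, keys without a common hash tag may hash to different slots, which is why commands such as MSET, SUNION, and Lua scripts spanning arbitrary keys are restricted there but fully supported on the standard architecture.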
Redis used as persistent storage
Data persistence, backup, and point-in-time restoration are built in. Use backups to clone an instance or roll back after accidental data changes.
Read-heavy workload with few sorting or compute-intensive commands
Enable read/write splitting to scale read throughput. For CPU-intensive sorting or compute workloads at high scale, use the cluster architecture instead.
FAQ
I'm running Redis in Sentinel mode. Which architecture should I choose when migrating to the cloud?
Choose the standard HA architecture, then enable Sentinel-compatible mode on the instance. This lets you connect to Tair the same way you connect to Redis Sentinel—no application code changes required.
My instance has 8 GB of memory with the standard architecture. How do I improve performance without upgrading memory?
Two options are available depending on your bottleneck:
Connections or bandwidth are the bottleneck: Enable read/write splitting. This requires no code changes and no endpoint modification, and can be disabled at any time. The instance automatically routes read and write requests to the appropriate nodes.
CPU utilization is consistently high: Upgrade to a cluster instance. Adding shards distributes the CPU load across multiple nodes. Before upgrading, check command compatibility between standard and cluster architectures—see Limits on commands supported by cluster instances and read/write splitting instances.
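The first option's routing behavior can be sketched as a simple command classifier. This is illustrative: the managed instance performs this routing transparently at the proxy layer, and the command set and node names here are assumptions, not the service's internal implementation.

```python
# Illustrative router for read/write splitting: writes go to the
# master; reads are spread round-robin across read replicas.
import itertools

# A small assumed subset of write commands, for illustration.
WRITE_COMMANDS = {"SET", "DEL", "EXPIRE", "LPUSH", "HSET", "INCR"}

class Router:
    def __init__(self, master, read_replicas):
        self.master = master
        # Round-robin over read replicas to spread read load.
        self.replicas = itertools.cycle(read_replicas)

    def route(self, command: str) -> str:
        if command.upper() in WRITE_COMMANDS:
            return self.master
        return next(self.replicas)

router = Router("master", ["replica-1", "replica-2"])
print(router.route("SET"))   # master
print(router.route("GET"))   # replica-1
print(router.route("GET"))   # replica-2
```

Because the proxy does this for you, enabling read/write splitting needs no application changes; the trade-off to keep in mind is that replica reads are eventually consistent.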
The following table compares performance for an 8 GB Redis Open-Source Edition instance across architectures:
| Architecture | Memory (GB) | CPU cores | Bandwidth (Mbit/s) | Maximum connections | QPS reference value |
|---|---|---|---|---|---|
| Standard architecture (master and replica nodes) | 8 | 2 | 96 | 20,000 | 100,000 |
| Standard architecture (read/write splitting enabled, one master node, one read replica) | 8 | 4 (2 × 2) | 192 (96 × 2) | 40,000 (20,000 × 2) | 200,000 |
| Cluster architecture (2 shards) | 8 (4 GB × 2 shards) | 4 (2 × 2) | 192 (96 × 2) | 40,000 (20,000 × 2) | 200,000 |
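As a sanity check on the table above, the scaled figures follow directly from multiplying the single-node baseline by the number of serving nodes (the baseline values come from the table; memory is deliberately excluded, since replicas hold a full copy of the data while cluster shards split it).

```python
# Baseline for one 8 GB standard master node (values from the table above).
baseline = {
    "cpu_cores": 2,
    "bandwidth_mbps": 96,
    "max_connections": 20000,
    "qps": 100000,
}

def scale(spec, nodes):
    """Aggregate capacity across `nodes` serving nodes.

    Memory is intentionally not included: read replicas duplicate the
    data set, whereas cluster shards partition it (4 GB x 2 = 8 GB).
    """
    return {k: v * nodes for k, v in spec.items()}

rw_split = scale(baseline, 2)       # one master + one read replica
print(rw_split["bandwidth_mbps"])   # 192
print(rw_split["max_connections"])  # 40000
print(rw_split["qps"])              # 200000
```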