Tair (Redis OSS-compatible) allows you to enable the read/write splitting feature for multi-replica cluster instances. Cluster instances can resolve the bottlenecks of the single-threaded model of open source Redis and meet the requirements for large capacity and high performance. Cluster instances support two connection modes: proxy mode and direct connection mode. You can select a connection mode based on your business requirements. This topic describes Tair (Redis OSS-compatible) cluster instances.
Usage notes
You cannot enable the proxy mode and the direct connection mode at the same time for cloud-native cluster instances. Only cloud-native cluster instances in proxy mode support the read/write splitting feature.
Classic cluster instances support only the master-replica architecture.
Proxy mode (recommended)
The proxy mode simplifies the use of cluster instances. You can connect to cluster instances in proxy mode in the same manner as you connect to standard master-replica instances. Proxy nodes automatically forward requests from clients to data shards and provide advanced features, such as hotkey data caching and failover. For more information, see Features of proxy nodes.
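Because the proxy layer presents the cluster as a single Redis endpoint, a regular (non-cluster) client works unchanged. The following is a minimal sketch in Python with redis-py; the endpoint and password are hypothetical placeholders:

```python
# Minimal sketch of connecting to a cluster instance in proxy mode.
# The endpoint and password below are hypothetical placeholders.
import redis

client = redis.Redis(
    host="r-example.redis.rds.aliyuncs.com",  # instance endpoint (placeholder)
    port=6379,
    password="your_password",                 # instance password (placeholder)
    decode_responses=True,
)

client.set("greeting", "hello")  # the proxy routes the key to the owning shard
print(client.get("greeting"))    # "hello"
```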
For more information about the architecture and components of a cluster instance in proxy mode, see the following figure and table.
Figure 1. Architecture of a cluster instance in proxy mode (multiple replicas)
Table 1. Components of a cluster instance in proxy mode
Component | Description |
Proxy node | Proxy nodes forward requests from clients to data shards. A cluster instance contains multiple proxy nodes. |
Data shard | Each data shard uses a high availability (HA) architecture in which a master node and multiple replica nodes are deployed on different hosts. The number of replica nodes ranges from 1 to 4. You can deploy replica nodes in the secondary zone. Multiple replica nodes enhance disaster recovery capabilities and reduce the risk of data loss. |
HA component | If the master node fails, the system automatically switches workloads to a replica node within 30 seconds to ensure high service availability and data reliability. If the instance is deployed in dual-zone mode and a replica node exists in the primary zone, workloads are preferentially switched to the replica node in the primary zone to prevent cross-zone access. |
The component quantities and configurations of a cluster instance vary based on the instance specifications. In classic deployment mode, you cannot change component quantities or configurations. However, you can change the specifications or architecture of the instance. For more information, see Change the configurations of an instance and Service architectures. In cloud-native deployment mode, you can change the number of shards on a per-shard basis within the range of 2 to 256. The number of proxy nodes is automatically increased or decreased to match the new configuration. For more information about how to change the number of shards, see Adjust the number of shards for an instance.
Enable read/write splitting
Figure 2. Architecture of a cluster read/write splitting instance
Table 2. Components of a cluster read/write splitting instance
Component | Description |
Proxy node | After a client connects to a proxy node, the proxy node automatically identifies the type of each request and forwards the request to the read and write nodes in each data shard. For example, write requests are forwarded to the master node, and read requests are forwarded to the master node and read replicas. |
Data shard | Each data shard consists of one master node and up to four read replicas. |
HA component | If the master node fails, the HA system performs a master-replica switchover. If a read replica fails, the HA system creates another read replica to process read requests. During the switchover, the HA system updates the routing information to ensure high service availability and data reliability. |
If the instance is deployed in single-zone mode, the master node and read replicas are all located in the primary zone. Endpoints are available only for the primary zone.
If the instance is deployed in dual-zone mode, separate endpoints are available for the primary and secondary zones. Each endpoint supports both read and write operations. Write requests are routed to the master node in the primary zone. Read requests are routed to the master node or read replicas within the same zone from which the requests originated, which ensures that requests are handled by the geographically closest nodes. If all read replicas in the secondary zone become unavailable, read requests from the secondary zone are routed to the master node without affecting business operations.
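As a sketch of how an application can take advantage of the per-zone endpoints, the snippet below binds each application instance to the endpoint of its own zone, so reads stay zone-local while the proxy still forwards writes to the master node in the primary zone. The endpoints, zone names, and password are hypothetical placeholders:

```python
# Minimal sketch: pick the endpoint that matches the application's zone.
# Endpoints, zone names, and password are hypothetical placeholders.
import redis

ENDPOINTS = {
    "primary-zone": "r-example-primary.redis.rds.aliyuncs.com",
    "secondary-zone": "r-example-secondary.redis.rds.aliyuncs.com",
}

def client_for(zone: str) -> redis.Redis:
    """Return a client bound to the caller's zone-local endpoint."""
    return redis.Redis(host=ENDPOINTS[zone], port=6379,
                       password="your_password", decode_responses=True)

client = client_for("secondary-zone")
client.set("counter", 1)      # write: forwarded to the master node in the primary zone
print(client.get("counter"))  # read: served by nodes in the secondary zone when available
```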
Direct connection mode
In direct connection mode, you can connect to a cluster instance in much the same way as you connect to an open source Redis cluster. The first time a client connects to the instance, the Domain Name System (DNS) resolves the private endpoint of the instance into a random virtual IP address (VIP). Then, the client can connect to the data shards of the instance over the Redis Cluster protocol. For more information about the architecture of a cluster instance in direct connection mode, see the following figure and description.
The direct connection mode and the proxy mode differ in supported features and usage. For more information about the usage notes and usage examples of these connection modes, see Use the direct connection mode to connect to a cluster instance.
You can also configure multiple replicas for cloud-native cluster instances in direct connection mode.
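As a sketch (with a hypothetical private endpoint and password), connecting in direct connection mode uses a cluster-aware client, which fetches the slot map and then talks to each data shard directly over the Redis Cluster protocol:

```python
# Minimal sketch of connecting in direct connection mode with a
# cluster-aware client. Endpoint and password are hypothetical placeholders.
from redis.cluster import RedisCluster

client = RedisCluster(
    host="r-example-direct.redis.rds.aliyuncs.com",  # private endpoint (placeholder)
    port=6379,
    password="your_password",                        # instance password (placeholder)
    decode_responses=True,
)

client.set("greeting", "hello")  # the client itself routes the key to its slot's shard
print(client.get("greeting"))
```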
Scenarios
Large volumes of data
Compared with a standard instance, a cluster instance supports a storage capacity of up to 16 TB (64 GB × 256 shards).
High request load
A standard instance cannot support a high request load. A cluster instance allows you to deploy multiple data shards. The data shards can work together to eliminate the performance bottlenecks of the single-threaded model used by open source Redis.
If the master node of a cluster instance is overloaded with read requests, you can enable the read/write splitting feature.
Note: Only cloud-native cluster instances in proxy mode support the read/write splitting feature. You can migrate data from non-cluster instances to cluster read/write splitting instances in proxy mode by creating instances and using Data Transmission Service (DTS) for data synchronization.
Throughput-intensive applications
Compared with a standard instance, a cluster instance can linearly scale its throughput over an internal network by increasing the number of shards. This allows you to efficiently read hot data and manage high-throughput workloads.
Applications that involve few multi-key operations
Cluster instances use a distributed architecture. In a distributed architecture, operations that involve multiple keys may be limited because all keys must reside in the same slot. For more information, see Limits on commands supported by cluster instances and read/write splitting instances.
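One common workaround is to use hash tags so that related keys hash to the same slot, which keeps multi-key commands such as MGET usable. The following is a minimal sketch with redis-py; the endpoint, password, and key names are illustrative placeholders:

```python
# Minimal sketch: hash tags force related keys into the same slot.
# Endpoint, password, and key names are illustrative placeholders.
import redis

client = redis.Redis(host="r-example.redis.rds.aliyuncs.com", port=6379,
                     password="your_password", decode_responses=True)

# Only the substring inside {...} is hashed, so both keys map to one slot.
client.set("{user:42}:name", "alice")
client.set("{user:42}:email", "alice@example.com")

# Safe on a cluster: both keys reside in the same slot.
print(client.mget("{user:42}:name", "{user:42}:email"))

# Without hash tags, keys such as user:42:name and user:43:name may hash to
# different slots, and a multi-key command on them can fail with a CROSSSLOT error.
```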
Latency-sensitive applications
For a dual-zone instance, you can increase the number of replica nodes in the primary zone. For example, you can include one master node and one replica node in the primary zone and one replica node in the secondary zone. This improves the reliability of disaster recovery and prevents increased latency caused by cross-zone access after a master-replica switchover.
Operation guide
Add replica nodes: On the Node Management page of the instance details page, click Modify.
Add read replicas: On the Node Management page of the instance details page, turn on Read/Write Splitting and click Modify.
Add shards: In the upper-right corner of the instance details page, choose the corresponding menu option.
Change shard specifications: In the upper-right corner of the instance details page, choose the corresponding menu option.