Read/write splitting instances of ApsaraDB for Redis are suitable for read-heavy workloads. These instances ensure high availability and high performance, and support multiple specifications. The read/write splitting architecture allows a large number of concurrent requests to read hot data from read replicas. This reduces the load on the master node and minimizes operations and maintenance (O&M) costs.
Overview
A read/write splitting instance of ApsaraDB for Redis consists of a master node, a replica node, read replicas, proxy servers, and a high-availability system.

Service | Description |
---|---|
Master node | The master node processes all write requests and, together with the read replicas, serves read requests. |
Replica node | The replica node serves as a hot standby and does not process requests. |
Read replica | Read replicas process only read requests. The read/write splitting architecture supports chain replication, which allows you to scale out read replicas to increase read capacity. Optimized binlog files are used to replicate data so that full synchronization can be avoided. |
Proxy server | When a client connects to a proxy server, the proxy server automatically identifies the request type and forwards the request to a node based on node weights. For example, the proxy server forwards write requests to the master node and forwards read requests to the master node or a read replica. Note: You cannot change the weights of the nodes. |
High-availability system | The high-availability system monitors the status of each node. If the master node fails, the high-availability system performs a failover between the master node and the replica node. If a read replica fails, the high-availability system creates another read replica to process read requests. During this process, the high-availability system updates the routing and weight information. |
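The routing behavior described in the table can be illustrated with a toy sketch. This is not the actual proxy implementation; the command set, node names, and weights below are illustrative assumptions. Write commands always go to the master, while read commands are distributed across the read-capable nodes in proportion to their weights.

```python
# Toy sketch of the proxy's routing logic (illustrative, not the real proxy):
# writes go to the master; reads are spread across nodes by weight.
import random

WRITE_COMMANDS = {"SET", "DEL", "EXPIRE", "INCR", "LPUSH"}  # illustrative subset

def route(command, read_nodes_with_weights, master="master"):
    """Return the node that a request for `command` would be forwarded to."""
    if command.upper() in WRITE_COMMANDS:
        return master                       # write requests always hit the master
    nodes = list(read_nodes_with_weights)   # [(node_name, weight), ...]
    total = sum(weight for _, weight in nodes)
    pick = random.uniform(0, total)         # weighted random selection
    for node, weight in nodes:
        pick -= weight
        if pick <= 0:
            return node
    return nodes[-1][0]

# Example: the master plus two read replicas, all with equal weights.
reads = [("master", 1), ("replica-1", 1), ("replica-2", 1)]
print(route("SET", reads))  # master
print(route("GET", reads))  # one of: master, replica-1, replica-2
```

In the real service the weights are managed by the system and cannot be changed, as noted in the table.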
Key features
- High availability
- Alibaba Cloud has developed a high-availability system for read/write splitting instances. The high-availability system monitors the status of all nodes on an instance to ensure high availability. If the master node fails, the high-availability system switches the workloads from the master node to the replica node and updates the instance topology. If a read replica fails, the high-availability system creates another read replica, synchronizes data to the new read replica, forwards read requests to it, and suspends the failed read replica.
- The proxy server monitors the service status of each read replica in real time. If a read replica is unavailable due to an exception, the proxy server reduces the weight of this read replica. If a read replica fails to be connected for a specified number of times, the system suspends the read replica and forwards read requests to available read replicas. If the unavailable read replica recovers, the system enables the read replica and continues to monitor the read replica.
- High performance
The read/write splitting architecture supports chain replication. This allows you to scale out read replicas to increase the read capacity and improve the usage of physical resources for each read replica. The replication process is optimized based on the Redis source code to maximize workload stability during replication.
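The health-check behavior described above (reducing a replica's weight and suspending it after repeated connection failures, then re-enabling it once it recovers) can be sketched as a simple failure counter. The threshold value is illustrative; the actual limit is managed by the service.

```python
# Toy sketch of the proxy's health-check behavior: suspend a read replica
# after a number of consecutive connection failures, re-enable it on recovery.
# The threshold is an assumed value, not the service's actual setting.
class ReplicaHealth:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0
        self.suspended = False

    def report(self, ok):
        """Record the result of one health check against the replica."""
        if ok:
            self.failures = 0
            self.suspended = False        # recovered: resume routing reads here
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.suspended = True     # stop routing reads to this replica

health = ReplicaHealth()
for check_ok in (False, False, False):
    health.report(check_ok)
print(health.suspended)  # True: three consecutive failures
health.report(True)
print(health.suspended)  # False: replica recovered and is re-enabled
```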
Scenarios
- High QPS
A standard instance of ApsaraDB for Redis provides limited queries per second (QPS) because Redis processes commands on a single thread. If your application is read-heavy, you can deploy multiple read replicas to resolve this performance bottleneck. You can configure one, three, or five read replicas for a read/write splitting instance. With five read replicas, an instance provides QPS up to five times that of a standard instance.
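As a back-of-the-envelope illustration of the scaling above: each read replica adds roughly one node's worth of read QPS on top of the master. The baseline figure below is an assumption for the sketch, not a measured or guaranteed value.

```python
# Rough read-capacity estimate: each read replica adds about one node's
# worth of read QPS. baseline_qps is an assumed figure, not a guarantee.
def extra_read_qps(replicas, baseline_qps=100_000):
    """Approximate additional read QPS contributed by `replicas` read replicas."""
    return replicas * baseline_qps

for replicas in (1, 3, 5):
    print(f"{replicas} read replica(s): up to ~{extra_read_qps(replicas):,} extra read QPS")
```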
- Support for more Redis-native features
Read/write splitting instances of ApsaraDB for Redis are compatible with Redis-native commands. You can migrate data from a self-managed Redis database to a read/write splitting instance. You can also upgrade a standard master-replica instance to a read/write splitting instance.
Note: Read/write splitting instances have limits on some commands. For more information, see Limits on the commands supported by read/write splitting instances.
- Use ApsaraDB for Redis instances to persist data
Read/write splitting instances support data persistence, backup, and recovery. This ensures data reliability.
Usage notes
- If a read replica fails, requests are forwarded to another read replica. If all read replicas are unavailable, requests are forwarded to the master node. Read replica failures may increase the load on the master node and increase response time. If a large number of read requests must be processed, we recommend that you use multiple read replicas.
- If an error occurs on a read replica, the high-availability system suspends the read replica and creates another read replica. This failover process involves resource allocation, data synchronization, and service loading. The amount of time required by a failover depends on the system workloads and data volume. ApsaraDB for Redis does not specify a time period within which a failed read replica can recover.
- Full synchronization between read replicas is triggered in specific scenarios, for example, when a failover occurs on the master node. During full synchronization, read replicas are unavailable. If your requests are forwarded to the read replicas, the following error message is returned: `-LOADING Redis is loading the dataset in memory\r\n`.
- The master node conforms to the Service Level Agreement (SLA) of ApsaraDB for Redis.
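On the client side, the -LOADING error above is transient, so a retry with back-off is a reasonable way to handle it. The sketch below uses a locally defined exception and stubbed fetch for illustration; with the redis-py client, the corresponding exception is `redis.exceptions.BusyLoadingError`, and the retry counts and back-off values are assumptions you should tune for your workload.

```python
# Sketch of client-side retry handling for the transient -LOADING reply.
# BusyLoadingError and fake_fetch are local stand-ins for illustration;
# with redis-py you would catch redis.exceptions.BusyLoadingError instead.
import time

class BusyLoadingError(Exception):
    """Raised when a node replies '-LOADING Redis is loading the dataset in memory'."""

def get_with_retry(fetch, retries=3, backoff=0.5):
    """Call fetch(); on a LOADING reply, wait with linear back-off and retry."""
    for attempt in range(retries):
        try:
            return fetch()
        except BusyLoadingError:
            if attempt == retries - 1:
                raise                      # still loading after all retries
            time.sleep(backoff * (attempt + 1))

# Usage with a stubbed fetch that fails once and then succeeds:
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    if calls["n"] == 1:
        raise BusyLoadingError()
    return "value"

print(get_with_retry(fake_fetch, backoff=0.1))  # value
```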