Read/write splitting instances of ApsaraDB for Redis are suitable for workloads in which read requests outnumber write requests. These instances ensure high availability (HA) and high performance, and are available in multiple specifications. The read/write splitting architecture allows a large number of concurrent requests to read hot data from read replicas. This reduces the load on the master node and minimizes operations and maintenance (O&M) costs.

Components

A read/write splitting instance of ApsaraDB for Redis consists of a master node, a replica node, read replicas, proxy servers, and the HA system.

Architecture of a read/write splitting instance of ApsaraDB for Redis
  • Master node: Processes all write requests and, together with the read replicas, serves read requests.
  • Replica node: Serves as a hot standby. It does not directly process requests.
  • Read replica: Processes only read requests. The read/write splitting architecture supports chain replication, which allows you to scale out read replicas to increase the read capacity.
  • Proxy server: After a client connects to a proxy server, the proxy server automatically identifies the type of each request and forwards the request to the appropriate node based on the specified weights. Write requests are forwarded to the master node, and read requests are forwarded to the master node or a read replica.
    Note
      • The client must connect to the proxy server. Direct connections to individual nodes are not supported.
      • The system evenly distributes read requests among the master node and the read replicas. Custom weights for read requests are not supported. For example, if you purchase an instance with three read replicas, the master node and each of the three read replicas receive a weight of 25%.
  • High availability (HA) system: Monitors the status of each node. If the master node fails, the HA system performs a failover from the master node to the replica node. If a read replica fails, the HA system creates a new read replica to process read requests. During this process, the HA system updates the routing and weight information.
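The routing behavior described above can be sketched in a few lines of Python. This is an illustrative model, not the actual ApsaraDB for Redis proxy implementation: the command table, class, and node names are all hypothetical, and reads are spread evenly across the master node and read replicas with fixed equal weights, as the note states.

```python
import itertools

# Commands treated as read-only in this sketch (illustrative subset).
READ_COMMANDS = {"GET", "MGET", "EXISTS", "TTL", "LRANGE", "HGETALL", "SMEMBERS"}

class ProxyRouter:
    """Hypothetical model of a read/write splitting proxy."""

    def __init__(self, master, read_replicas):
        self.master = master
        # Reads rotate evenly over the master and all read replicas
        # (fixed equal weights; custom weights are not supported).
        self._read_pool = itertools.cycle([master] + read_replicas)

    def route(self, command):
        if command.upper() in READ_COMMANDS:
            return next(self._read_pool)   # even round-robin for reads
        return self.master                 # all writes go to the master

router = ProxyRouter("master", ["replica-r1", "replica-r2", "replica-r3"])
print(router.route("SET"))   # always the master
print(router.route("GET"))   # one of the four read-serving nodes
```

With three read replicas, each of the four read-serving nodes handles roughly 25% of read traffic, matching the example in the note above.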

Key features

  • High availability
    • Alibaba Cloud has developed an HA system for read/write splitting instances. The HA system monitors the status of all nodes on an instance to guarantee high availability. If the master node fails, the HA system switches the workloads from the master node to the replica node and updates the instance topology. If a read replica fails, the HA system brings another read replica online, synchronizes data to it, forwards read requests to it, and suspends the failed read replica.
    • The proxy server detects the service status of each read replica in real time. If a read replica becomes unavailable due to an exception, the proxy server reduces the weight of that read replica. If connections to a read replica fail a specified number of consecutive times, the system suspends the read replica and forwards read requests to the available read replicas. When the unavailable read replica recovers, the system re-enables it and continues to monitor it.
  • High performance

    The read/write splitting architecture supports chain replication. This allows you to scale out read replicas to increase the read capacity and optimize the usage of physical resources for each read replica. The source code of the replication process has been customized and optimized by Alibaba Cloud specialists to maximize workload stability during replication.
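The health-driven weight handling described under High availability can be sketched as follows. This is a minimal illustrative model, not the actual proxy logic: the failure threshold, the halving policy, and all names are assumptions made for the example.

```python
import random

SUSPEND_AFTER = 3  # hypothetical threshold of consecutive connection failures

class ReplicaHealth:
    """Tracks one read replica's health and derives its routing weight."""

    def __init__(self, name):
        self.name = name
        self.failures = 0
        self.suspended = False

    def report_failure(self):
        self.failures += 1
        if self.failures >= SUSPEND_AFTER:
            self.suspended = True   # stop routing reads to this replica

    def report_success(self):
        self.failures = 0
        self.suspended = False      # a recovered replica is re-enabled

    @property
    def weight(self):
        if self.suspended:
            return 0
        # Each failed probe halves the routing weight (illustrative policy).
        return 1.0 / (2 ** self.failures)

def pick_read_node(replicas, master="master"):
    """Choose a read node by weight; fall back to the master if none is up."""
    candidates = [(r.name, r.weight) for r in replicas if r.weight > 0]
    if not candidates:
        return master
    names, weights = zip(*candidates)
    return random.choices(names, weights=weights)[0]
```

The fallback branch mirrors the behavior described in the Notes section: when every read replica is unavailable, read requests go to the master node.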

Scenarios

  • High QPS

    Standard instances of ApsaraDB for Redis cannot handle scenarios that require high queries per second (QPS). If your application processes more read requests than write requests, you can deploy multiple read replicas to resolve the performance bottleneck caused by the single-threaded model of Redis. You can attach one, three, or five read replicas to a read/write splitting instance. Compared with a standard instance, a read/write splitting instance improves QPS performance by about five times.

  • More native Redis features

    Read/write splitting instances of ApsaraDB for Redis are compatible with native Redis commands. You can migrate data from an on-premises Redis database to a read/write splitting instance. You can also upgrade a standard master-replica instance to a read/write splitting instance.

  • Persistent storage on ApsaraDB for Redis instances

    Read/write splitting instances support data persistence, backup, and recovery to ensure data reliability.

Notes

  • If a read replica fails, requests are forwarded to another read replica. If all read replicas are unavailable, requests are forwarded to the master node. Read replica failures may increase the load on the master node and increase response time. We recommend that you use multiple read replicas to process a large number of read requests.
  • If an error occurs on a read replica, the HA system suspends the read replica and creates a new one. This failover process involves resource allocation, data synchronization, and service loading. The amount of time that a failover takes depends on the system workload and the data size. ApsaraDB for Redis does not guarantee a recovery time for a faulty read replica.
  • Full synchronization between read replicas is triggered in specific scenarios, for example, when a failover occurs on the master node. During full synchronization, read replicas are unavailable. If your requests are forwarded to the read replicas, the following error message is returned: -LOADING Redis is loading the dataset in memory\r\n.
  • The master node conforms to the Service Level Agreement of ApsaraDB for Redis.
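A client can guard against the -LOADING error described above by retrying a read after a short delay. The following sketch uses a hypothetical retry helper and a fake connection that simulates a replica during full synchronization; a real client library may expose its own retry or error types instead.

```python
import time

class LoadingError(Exception):
    """Raised when a replica answers -LOADING during full synchronization."""

def get_with_retry(execute, retries=3, delay=0.05):
    """Retry a read a few times while the replica reloads its dataset."""
    for attempt in range(retries):
        try:
            return execute()
        except LoadingError:
            if attempt == retries - 1:
                raise                # still loading after all retries
            time.sleep(delay)        # brief backoff before the next attempt

# Fake replica that fails twice while loading, then serves the read.
state = {"calls": 0}
def fake_read():
    state["calls"] += 1
    if state["calls"] <= 2:
        raise LoadingError("-LOADING Redis is loading the dataset in memory")
    return "cached-value"

print(get_with_retry(fake_read))  # succeeds on the third attempt
```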