Tair DRAM-based instances are suitable for scenarios that involve a large number of highly concurrent read and write operations on hot data and require higher performance than what ApsaraDB for Redis Community Edition instances can provide. Compared with ApsaraDB for Redis Community Edition instances, DRAM-based instances provide more benefits, including enhanced multi-threading performance and integration with multiple extended data structures.
Typical scenarios
Tair DRAM-based instances can be used in scenarios such as live video streaming, flash sales, and online education. The following section describes typical scenarios:
Scenario 1: During flash sales, the number of QPS on some cached hotkeys may exceed 200,000. ApsaraDB for Redis Community Edition instances cannot meet this requirement.
Tair DRAM-based instances can efficiently process requests during these flash sales without performance issues.
Scenario 2: ApsaraDB for Redis Community Edition cluster instances have limits on database transactions and Lua scripts.
Tair DRAM-based instances provide high performance and eliminate the limits on the usage of commands in ApsaraDB for Redis Community Edition cluster instances.
Scenario 3: You have created a self-managed Redis instance that consists of one master node and multiple replica nodes. The number of replica nodes and O&M costs increase as your workloads increase.
Tair DRAM-based instances that use the read/write splitting architecture can provide one data node and up to five read replicas to help you handle millions of QPS.
Scenario 4: You have created a self-managed Redis cluster to handle tens of millions of QPS. The number of data shards and O&M costs increase as your workloads increase.
Tair DRAM-based instances can reduce the cluster size by about two thirds and significantly reduce O&M costs.
Comparison between threading models
ApsaraDB for Redis Community Edition instances and native Redis databases adopt the single-threading model. To handle a request, they must read the request, parse it, process the data, and then send the response. In this model, network I/O operations and request parsing consume most of the available resources.
To improve performance, each Tair DRAM-based instance runs on multiple threads to process the tasks in these steps in parallel.
Each DRAM-based instance reads and parses requests in I/O threads, places the parsed requests as commands in a queue, and then sends these commands to worker threads. Then, the worker threads run the commands to process the requests and send the responses to I/O threads by using a different queue.
A Tair DRAM-based instance can run up to four I/O threads concurrently. Lock-free queues and pipelines are used to transmit data between I/O threads and worker threads to improve multi-threading performance.
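The pipeline described above can be sketched in a few lines of Python. This is a toy model, not Tair's implementation: Tair uses lock-free queues in native code, while here Python's thread-safe `queue.Queue` stands in for them, and a two-command store stands in for the real engine.

```python
# Toy sketch of the Real Multi-I/O pipeline: I/O threads parse raw
# requests and enqueue commands; a single worker thread executes them
# against the keyspace and records responses.
import queue
import threading
import time

N_IO = 4                      # the document states up to four I/O threads
raw_requests = queue.Queue()  # simulated bytes arriving from clients
commands = queue.Queue()      # parsed commands handed to the worker
responses = {}                # request id -> response (written by the worker)

def io_loop():
    """Read and parse requests, then enqueue them for the worker."""
    while True:
        item = raw_requests.get()
        if item is None:                  # shutdown signal
            commands.put(None)
            return
        req_id, raw = item
        parts = raw.split()
        commands.put((req_id, parts[0].upper(), parts[1:]))

def worker_loop():
    """Single owner of the keyspace, so the dict needs no locking."""
    store = {}
    remaining = N_IO
    while remaining:
        item = commands.get()
        if item is None:                  # one I/O thread has stopped
            remaining -= 1
            continue
        req_id, cmd, args = item
        if cmd == "SET":
            store[args[0]] = args[1]
            responses[req_id] = "OK"
        elif cmd == "GET":
            responses[req_id] = store.get(args[0])

io_threads = [threading.Thread(target=io_loop) for _ in range(N_IO)]
worker = threading.Thread(target=worker_loop)
for t in io_threads + [worker]:
    t.start()

raw_requests.put((1, "SET greeting hello"))
while 1 not in responses:                 # wait so the GET observes the SET
    time.sleep(0.001)
raw_requests.put((2, "GET greeting"))
while 2 not in responses:
    time.sleep(0.001)

for _ in io_threads:                      # stop I/O threads, then the worker
    raw_requests.put(None)
for t in io_threads + [worker]:
    t.join()
```

The key design point mirrored here is that parsing is parallel while the keyspace has a single owner, so the data itself needs no locks.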
The multi-threading model of Redis 6.0 consumes large amounts of CPU resources to deliver up to twice the performance of the single-threading model used in versions earlier than Redis 6.0. The Real Multi-I/O model of DRAM-based instances provides fully accelerated I/O threads to sustain a large number of concurrent connections and offer a linear increase in throughput.
ApsaraDB for Redis instances and native Redis databases adopt the single-threading model. In the single-threading model, each data node supports 80,000 to 100,000 QPS. Tair DRAM-based instances adopt the multi-threading model. The multi-threading model allows the I/O, worker, and auxiliary threads to process requests in parallel. Each data node of a DRAM-based instance delivers performance that is about three times that delivered by each data node of an ApsaraDB for Redis Community Edition instance. The following table describes ApsaraDB for Redis and Tair DRAM-based instances of different architectures and their use cases.
| Architecture | ApsaraDB for Redis Community Edition | Tair DRAM-based instance |
| --- | --- | --- |
| Standard | These instances are not suitable if the QPS required on a single node exceeds 100,000. | These instances are suitable if the QPS required on a single node exceeds 100,000. |
| Cluster | A cluster instance consists of multiple data nodes, each of which provides performance similar to that of a standard instance. If a data node stores hot data and receives a large number of concurrent requests for that data, read and write operations on other data stored on the same node may be affected. As a result, the performance of the data node deteriorates. | These instances provide high performance for reading and writing hot data at reduced maintenance costs. |
| Read/write splitting | These instances provide high read performance and are suitable for scenarios in which reads far outnumber writes. However, they cannot support a large number of concurrent write operations. | These instances provide high read performance and can also support a large number of concurrent write operations. They are suitable for read-heavy scenarios that also involve many writes. |
Integration with multiple Redis modules
Similar to open source Redis, ApsaraDB for Redis Community Edition supports a variety of data structures such as strings, lists, hashes, sets, sorted sets, and streams. These data structures are sufficient to support common development workloads but not sophisticated workloads. To manage sophisticated workloads, you must modify your application data or run Lua scripts.
DRAM-based instances are integrated with multiple in-house Redis modules to expand the applicable scope of ApsaraDB for Redis. These modules include exString (including commands that enhance Redis string functionality), exHash, GIS, Bloom, Doc, TS, Cpc, exZset, Roaring, Vector, and Search. These modules simplify business development in complex scenarios and allow you to focus on your business innovation.
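As an illustration of what such a module adds over plain strings, exString (TairString) supports version-checked writes, a compare-and-swap pattern. The class below is a plain-Python sketch of that idea only; the names `VersionedString`, `read`, and `cas` are illustrative and are not the module's actual commands or API.

```python
# Minimal sketch of the idea behind exString's version-checked writes
# (compare-and-swap): a write succeeds only if the caller supplies the
# version it last read, so concurrent writers cannot silently overwrite
# each other. Plain Python, not the module's API.
class VersionedString:
    def __init__(self, value):
        self.value = value
        self.version = 1

    def read(self):
        """Return the value together with its current version."""
        return self.value, self.version

    def cas(self, new_value, expected_version):
        """Update the value only if no one else wrote in between."""
        if self.version != expected_version:
            return False            # stale version: caller must re-read
        self.value = new_value
        self.version += 1
        return True

s = VersionedString("a")
val, ver = s.read()
assert s.cas("b", ver) is True      # succeeds: version matched
assert s.cas("c", ver) is False     # fails: version has advanced
```

Without a module like this, the same guarantee would require a Lua script or client-side transactions, which is exactly the burden the modules remove.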
Data flashback
After you enable the data flashback feature, ApsaraDB for Redis retains append-only file (AOF) backup data for up to seven days. During the retention period, you can specify a point in time, accurate to the second, to create a new instance and restore the backup data at that point in time to the new instance.
Proxy query cache
After you enable the proxy query cache feature, the configured proxy nodes cache requests and responses for hotkeys. If the same request is received from a client within the validity period, ApsaraDB for Redis returns the cached response to the client without interacting with the backend data shards. For more information, see Use proxy query cache to address issues caused by hotkeys.
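In essence, the proxy query cache is a per-request response cache with a validity period. The sketch below shows that caching pattern in isolation; the class and parameter names (`QueryCache`, `ttl_seconds`, `backend`) are illustrative, not part of any ApsaraDB API.

```python
# Sketch of a response cache with a validity period: identical requests
# within the TTL are answered from the cache instead of the backend.
import time

class QueryCache:
    def __init__(self, ttl_seconds, backend):
        self.ttl = ttl_seconds
        self.backend = backend          # callable: request -> response
        self.entries = {}               # request -> (response, expires_at)
        self.backend_calls = 0          # counts trips to the data shards

    def get(self, request):
        now = time.monotonic()
        hit = self.entries.get(request)
        if hit and hit[1] > now:
            return hit[0]               # served from cache, no backend trip
        self.backend_calls += 1
        response = self.backend(request)
        self.entries[request] = (response, now + self.ttl)
        return response

cache = QueryCache(ttl_seconds=0.05, backend=lambda req: req.upper())
assert cache.get("get hotkey") == "GET HOTKEY"   # first call hits backend
assert cache.get("get hotkey") == "GET HOTKEY"   # cache hit within TTL
assert cache.backend_calls == 1
time.sleep(0.06)
cache.get("get hotkey")                          # expired: backend again
assert cache.backend_calls == 2
```

The trade-off to note is the validity period: a longer TTL absorbs more hotkey traffic but serves staler responses.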
Global Distributed Cache for Redis
Global Distributed Cache for Redis is an active geo-redundancy database system that is developed based on ApsaraDB for Redis. It supports business scenarios in which multiple sites in different regions provide services at the same time, and helps enterprises replicate the active geo-redundancy architecture of Alibaba.
Two-way data synchronization by using DTS
Data Transmission Service (DTS) supports two-way data synchronization between ApsaraDB for Redis Enhanced Edition (Tair) instances. This synchronization solution is suitable for scenarios such as active geo-redundancy and geo-disaster recovery. For more information, see Configure two-way synchronization between ApsaraDB for Redis Enhanced Edition (Tair) instances. For more information about DTS, see What is DTS?
Q: What do I do if a client does not support the commands that are provided by new data structures?
A: You can define the commands that are provided by the new data structures in your application code before you use them in your client. Alternatively, you can use Tair clients that already support these data structures.
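One common way to "define the commands in your application code" is to wrap them around the generic command interface that most clients expose (for example, `execute_command` in redis-py). The sketch below assumes that pattern; `EXHSET`/`EXHGET` are exHash (TairHash) commands, while `TairWrapper` and `FakeConnection` are illustrative names, the latter a stand-in so the example runs without a live server.

```python
# Sketch: mapping module commands to methods on top of a client's
# generic command interface. `conn` can be any object with an
# execute_command() method, such as a redis-py Redis instance.
class TairWrapper:
    def __init__(self, conn):
        self.conn = conn

    def exhset(self, key, field, value):
        # EXHSET is a command provided by the exHash (TairHash) module.
        return self.conn.execute_command("EXHSET", key, field, value)

    def exhget(self, key, field):
        return self.conn.execute_command("EXHGET", key, field)

class FakeConnection:
    """Stand-in for a real client so the example runs offline."""
    def __init__(self):
        self.hashes = {}

    def execute_command(self, name, *args):
        if name == "EXHSET":
            key, field, value = args
            self.hashes.setdefault(key, {})[field] = value
            return 1
        if name == "EXHGET":
            key, field = args
            return self.hashes.get(key, {}).get(field)

client = TairWrapper(FakeConnection())
client.exhset("user:1", "name", "alice")
print(client.exhget("user:1", "name"))  # alice
```

To target a real instance, replace `FakeConnection()` with a connected client object; the wrapper methods stay the same.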