Global Distributed Cache for Redis is an active geo-redundancy database system built on ApsaraDB for Redis. It supports business scenarios in which multiple sites in different regions provide services at the same time, and helps enterprises replicate the active geo-redundancy architecture of Alibaba.
If your business grows rapidly and expands into a wide range of regions, long-distance cross-region access can cause high latency and degrade user experience. The Global Distributed Cache for Redis feature of Alibaba Cloud helps you reduce the high latency caused by cross-region access. Global Distributed Cache for Redis has the following benefits:
- You can directly create child instances or specify the child instances that need to be synchronized without the need to implement redundancy in your business logic. This greatly reduces the complexity of business design and allows you to focus on the development of upper-layer business.
- The geo-replication capability is provided for you to implement geo-disaster recovery or active geo-redundancy.
This feature applies to cross-region data synchronization scenarios and global business deployment in industries such as multimedia, gaming, and e-commerce.
|Scenario|Description|
|---|---|
|Active geo-redundancy|Multiple sites in different regions provide services at the same time. Active geo-redundancy is a type of high-availability architecture. Unlike the traditional disaster recovery design, all sites provide services at the same time in the active geo-redundancy architecture. This allows applications to connect to nearby nodes.|
|Disaster recovery|Global Distributed Cache for Redis can synchronize data among child instances in a two-way manner to support disaster recovery scenarios, such as zone-disaster recovery, disaster recovery based on three data centers across two regions, and three-region disaster recovery.|
|Load balancing|In specific scenarios such as large promotional events, ultra-high queries per second (QPS) and a large amount of access traffic are expected. In such scenarios, you can balance loads among child instances to extend the load limit of a single instance.|
|Data synchronization|Global Distributed Cache for Redis can perform two-way data synchronization among child instances in a distributed instance. This capability can be used in scenarios such as data analysis and testing.|
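As an illustration of the "connect to nearby nodes" idea in the active geo-redundancy scenario, the following minimal Python sketch routes each application to the child instance in its own region. The region IDs and endpoint hostnames here are hypothetical placeholders, not real Global Distributed Cache endpoints:

```python
# Hypothetical per-region child-instance endpoints of one distributed instance.
CHILD_ENDPOINTS = {
    "cn-hangzhou": "r-child1.redis.example.com:6379",
    "us-west-1": "r-child2.redis.example.com:6379",
    "eu-central-1": "r-child3.redis.example.com:6379",
}

def pick_endpoint(app_region: str, default_region: str = "cn-hangzhou") -> str:
    """Return the child-instance endpoint closest to the application.

    Falls back to a default region when no child instance exists in the
    caller's region (such cross-region access incurs higher latency).
    """
    return CHILD_ENDPOINTS.get(app_region, CHILD_ENDPOINTS[default_region])

print(pick_endpoint("us-west-1"))   # nearby child instance in the same region
print(pick_endpoint("ap-south-1"))  # no local child instance: falls back
```

Because all child instances are readable and writable, the application can use whichever endpoint is returned for both reads and writes; two-way synchronization propagates the writes to the other regions.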
You are not charged for creating a distributed instance. Only child instances in the distributed instance are billed. Child instances and regular ApsaraDB for Redis instances are billed in the same manner. For more information, see Billable items and prices.
Supported instance series
Performance-enhanced instances of ApsaraDB for Redis Enhanced Edition (Tair)
Architecture of Global Distributed Cache for Redis
In the architecture of Global Distributed Cache for Redis, a distributed instance is a logical collection of distributed child instances and synchronization channels. Data is synchronized in real time among the child instances by using these synchronization channels. A distributed instance consists of the following components:
- Child instances
- A child instance is the basic service unit that constitutes a distributed instance. Each child instance is an independent ApsaraDB for Redis instance. For more information, see What is ApsaraDB for Redis? All child instances are readable and writable, and data is synchronized among them in real time in a two-way manner. A distributed instance supports geo-replication: you can create child instances in different regions to implement geo-disaster recovery or active geo-redundancy.
Note A child instance must be a performance-enhanced instance of ApsaraDB for Redis Enhanced Edition (Tair). For more information about performance-enhanced instances, see Performance-enhanced instances.
- Synchronization channels
- A synchronization channel is a one-way link that synchronizes data in real time from one child instance to another. Two opposite synchronization channels are required to implement two-way replication between two child instances.
Note In addition to append-only files (AOFs) supported by open source Redis, Global Distributed Cache for Redis also uses information such as server-id and opid to synchronize data. Global Distributed Cache for Redis transmits binlogs over synchronization channels to synchronize data.
- Channel manager
- The channel manager manages the lifecycle of synchronization channels and performs operations to handle exceptions that occur in child instances, such as a switchover between the primary and secondary databases and the rebuilding of secondary databases.
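The note above mentions that, beyond the append-only files of open source Redis, synchronization relies on information such as server-id and opid. One reason this matters is loop prevention: with two opposite channels between a pair of instances, a write must not be echoed back to the instance it came from. The following Python sketch illustrates that idea under assumed field names; the actual binlog format is internal to the service:

```python
from dataclasses import dataclass

@dataclass
class BinlogEntry:
    server_id: str   # instance that first accepted the write
    opid: int        # per-origin sequence number
    command: tuple   # e.g. ("SET", "key", "value")

def ops_to_replay(binlog, destination_id, applied_opids):
    """Select entries a channel should ship to `destination_id`.

    Skips ops that originated at the destination (loop prevention) and
    ops the destination already applied (resume via an opid checkpoint).
    """
    pending = []
    for entry in binlog:
        if entry.server_id == destination_id:
            continue  # came from the destination: replaying it would loop
        if entry.opid <= applied_opids.get(entry.server_id, 0):
            continue  # already applied on the destination
        pending.append(entry)
    return pending

binlog = [
    BinlogEntry("A", 1, ("SET", "x", "1")),
    BinlogEntry("B", 7, ("SET", "y", "2")),  # synced in from B earlier
    BinlogEntry("A", 2, ("SET", "x", "3")),
]
# Channel A -> B: B originated opid 7 itself and has already applied A's opid 1.
pending = ops_to_replay(binlog, "B", {"A": 1})
print([e.opid for e in pending])  # [2]
```

This is only a sketch of the filtering logic; in the service, the channel manager handles checkpointing and recovery automatically.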
The synchronization channels provide the following advantages:
- High synchronization reliability
- High synchronization performance
- High synchronization accuracy
Comparison between Global Distributed Cache for Redis and the two-way synchronization solution of DTS
The following figure compares Global Distributed Cache for Redis with the two-way synchronization solution of Data Transmission Service (DTS) in a scenario where data is synchronized in one direction, and shows where latency occurs along the link.
The overall performance of Global Distributed Cache for Redis is better than that of the two-way synchronization solution of DTS. For more information, see Configure two-way data synchronization between ApsaraDB for Redis Enhanced Edition (Tair) instances. The following table compares the solutions in detail.
|Item|Global Distributed Cache for Redis|Two-way synchronization solution of DTS|
|---|---|---|
|Price|You are not charged for creating a distributed instance. Only child instances in the distributed instance are billed. Child instances and regular ApsaraDB for Redis instances are billed in the same manner. For more information, see Billable items and prices.|You are charged for the data synchronization links. For more information, see Pricing.|
|Latency|The latency is consistent, with only small fluctuations.<br>The replicator in the T2 phase shown in the preceding figure has independent resources. Even if a large amount of data is written to the source, the replicator can still obtain the data to be synchronized quickly. The latency in the T1 and T2 phases is fixed at about 400 milliseconds in most cases.|The latency fluctuates based on the amount of data that is written to the source.<br>Binlogs accumulate in the T1 phase shown in the preceding figure, and the T2 phase does not have a Service Level Agreement (SLA) guarantee. If a large amount of data is written to the source, the latency in the T1 phase increases from 10 milliseconds to 400 milliseconds or even several seconds, which affects the efficiency of pulling data and the performance of the entire link.|
|Number of synchronization destinations|Data can be synchronized among up to three instances.|Data can be synchronized among more instances. DTS can synchronize data from one instance to multiple instances because the pulled binlogs can be consumed by multiple instances at the same time.|
|Scenarios|This feature is suitable for users who need to write a large amount of data to the source and have high requirements on the average latency, for example, to implement active geo-redundancy or modular business.<br>Note: Cross-region synchronization is affected by the latency of carrier networks. If your business must support instant responses, we recommend that you configure your business system to write data to multiple sources at the same time.|This solution is suitable for scenarios in which only a small amount of data is written and data is read from nearby nodes, such as the cache update scenario.|
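The recommendation above to write data to multiple sources at the same time can be sketched as follows. This is a minimal Python illustration of the dual-write pattern, not the service's own mechanism; the client class is a stand-in for a real Redis client connection:

```python
def dual_write(clients, key, value):
    """Write `key` to every child instance directly; report per-region success.

    Writing to all regions up front avoids waiting for cross-region channel
    replication when the business needs an instant response. A failed region
    is tolerated here because two-way synchronization repairs it later.
    """
    results = {}
    for region, client in clients.items():
        try:
            client.set(key, value)
            results[region] = True
        except ConnectionError:
            results[region] = False
    return results

class FakeClient:
    """Stand-in for a Redis client; stores data in a local dict."""
    def __init__(self):
        self.data = {}
    def set(self, key, value):
        self.data[key] = value

clients = {"cn-hangzhou": FakeClient(), "us-west-1": FakeClient()}
print(dual_write(clients, "session:42", "active"))
# {'cn-hangzhou': True, 'us-west-1': True}
```

In production, the dict of clients would hold real connections to each region's child-instance endpoint, and the application would decide how to handle a region that reports failure.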