Global Distributed Cache for Redis is an active geo-redundancy database system that is developed based on ApsaraDB for Redis. Global Distributed Cache for Redis supports business scenarios in which multiple sites in different regions provide services at the same time. It helps enterprises replicate the active geo-redundancy architecture of Alibaba.

Background information

If your business rapidly grows and branches out into a wide range of regions, cross-region and long-distance access can result in high latency and deteriorate user experience. The Global Distributed Cache for Redis feature of Alibaba Cloud can help you reduce the high latency caused by cross-region access. Global Distributed Cache for Redis has the following benefits:

  • You can directly create child instances or specify the child instances that need to be synchronized without the need to implement redundancy in your business logic. This greatly reduces the complexity of business design and allows you to focus on the development of upper-layer business.
  • The geo-replication capability is provided for you to implement geo-disaster recovery or active geo-redundancy.

This feature applies to cross-region data synchronization scenarios and global business deployment in industries such as multimedia, gaming, and e-commerce.

Scenarios

  • Active geo-redundancy: Active geo-redundancy is a high-availability architecture in which multiple sites in different regions provide services at the same time. Unlike a traditional disaster recovery design, all sites serve traffic simultaneously, which allows applications to connect to nearby nodes.
  • Disaster recovery: Global Distributed Cache for Redis synchronizes data among child instances in a two-way manner to support disaster recovery scenarios such as zone-disaster recovery, disaster recovery based on three data centers across two regions, and three-region disaster recovery.
  • Load balancing: In scenarios such as large promotional events, ultra-high queries per second (QPS) and heavy access traffic are expected. In such scenarios, you can balance loads among child instances to extend the load limit of a single instance.
  • Data synchronization: Global Distributed Cache for Redis performs two-way data synchronization among the child instances of a distributed instance. This can be used in scenarios such as data analysis and testing.
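Because every child instance is readable and writable, a client can simply connect to the closest one. The following Python sketch shows the idea of nearest-node routing; the region names, endpoints, and round-trip times are hypothetical and for illustration only, not actual Alibaba Cloud values:

```python
# Sketch: route each client to the nearest child instance.
# Endpoints and RTT values below are hypothetical, for illustration only.

CHILD_INSTANCES = {
    "cn-hangzhou": "r-hz-example.redis.rds.aliyuncs.com",
    "cn-beijing":  "r-bj-example.redis.rds.aliyuncs.com",
    "us-west-1":   "r-us-example.redis.rds.aliyuncs.com",
}

def nearest_endpoint(rtt_ms: dict) -> str:
    """Return the endpoint of the region with the lowest measured RTT."""
    region = min(rtt_ms, key=rtt_ms.get)
    return CHILD_INSTANCES[region]

# A client near Hangzhou might measure these round-trip times (ms):
rtts = {"cn-hangzhou": 8.0, "cn-beijing": 28.0, "us-west-1": 160.0}
print(nearest_endpoint(rtts))  # the Hangzhou child instance's endpoint
```

In practice the same selection is often done with DNS-based traffic routing rather than in application code, but the principle is the same: each site reads and writes locally while the distributed instance keeps the data in sync.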

Billing

You are not charged for creating a distributed instance. Only child instances in the distributed instance are billed. Child instances and regular ApsaraDB for Redis instances are billed in the same manner. For more information, see Billable items.

Supported instance series

Performance-enhanced instances of ApsaraDB for Redis Enhanced Edition (Tair)


Architecture of Global Distributed Cache for Redis

In the architecture of Global Distributed Cache for Redis, a distributed instance is a logical collection of distributed child instances and synchronization channels. Data is synchronized in real time among the child instances by using these synchronization channels. A distributed instance consists of the following components:

Child instances
A child instance is the basic service unit that constitutes a distributed instance. Each child instance is an independent ApsaraDB for Redis instance. For more information, see What is ApsaraDB for Redis? All child instances are readable and writable. Data is synchronized in real time among child instances in a two-way manner. A distributed instance supports geo-replication. You can create child instances in different regions to implement geo-disaster recovery or active geo-redundancy.
Note A child instance must be a performance-enhanced instance of ApsaraDB for Redis Enhanced Edition (Tair). For more information about performance-enhanced instances, see Performance-enhanced instances.
Synchronization channels
A synchronization channel is a one-way link that is used to synchronize data in real time from one child instance to another. Two opposite synchronization channels are required to implement two-way replication between two child instances.
Note In addition to the append-only files (AOFs) supported by open source Redis, Global Distributed Cache for Redis uses metadata such as server-id and opid to synchronize data. Binlogs are transmitted over synchronization channels to synchronize data.
Channel manager
The channel manager manages the lifecycle of synchronization channels and performs operations to handle exceptions that occur in child instances, such as a switchover between the primary and secondary databases and the rebuilding of secondary databases.
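The components above can be sketched in a few lines of Python. This is an illustrative model, not the actual Tair implementation: each child instance tags its writes with its server-id and a monotonically increasing opid, and a one-way channel replays the resulting binlog on the peer in generation order (two opposite channels give two-way replication):

```python
# Illustrative model of child instances and a one-way synchronization
# channel. Names and structures are assumptions, not an Alibaba Cloud API.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class BinlogRecord:
    server_id: str   # which child instance generated the write
    opid: int        # per-server sequence number
    command: tuple   # e.g. ("SET", "key", "value")

@dataclass
class ChildInstance:
    server_id: str
    data: dict = field(default_factory=dict)
    binlog: list = field(default_factory=list)
    next_opid: int = 1

    def write(self, key, value):
        """Apply a local write and append it to the binlog."""
        self.data[key] = value
        self.binlog.append(
            BinlogRecord(self.server_id, self.next_opid, ("SET", key, value)))
        self.next_opid += 1

    def apply(self, record: BinlogRecord):
        """Replay a binlog record received from a peer instance."""
        _, key, value = record.command
        self.data[key] = value

def sync_one_way(src: ChildInstance, dst: ChildInstance):
    """One synchronization channel: ship src's binlog to dst, in order."""
    for record in src.binlog:
        dst.apply(record)

hz = ChildInstance("cn-hangzhou")
us = ChildInstance("us-west-1")
hz.write("user:1", "alice")
sync_one_way(hz, us)      # an opposite channel would complete two-way sync
print(us.data["user:1"])  # alice
```

The channel manager's role, not modeled here, is to keep such channels alive across failovers and to rebuild them when a secondary database is rebuilt.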

Benefits

  • High synchronization reliability
    • Global Distributed Cache for Redis supports resumable transmission and tolerates synchronization interruptions that last up to a day. This overcomes the limitations of the native Redis replication architecture for incremental synchronization across data centers or regions.
    • Troubleshooting operations on child instances, such as a switchover between the primary and secondary databases and the rebuilding of secondary databases, are performed automatically.
  • High synchronization performance
    • High throughput
      • For child instances in the standard architecture, a synchronization channel supports up to 50,000 transactions per second (TPS) in one direction.
      • For child instances in the cluster or read/write splitting architecture, throughput increases linearly with the number of shards or nodes.
    • Low latency
      • For synchronization between regions on the same continent, the latency ranges from 100 milliseconds to a few seconds, with an average of about 1.2 seconds.
      • For synchronization between regions on different continents, the latency ranges from 1 second to 5 seconds. The latency is determined by the throughput and the round-trip time (RTT) of the links.
  • High synchronization accuracy
    • Binlogs are synchronized to the peer instance in the order in which they are generated.
    • Backloop control prevents binlogs from being synchronized in a loop.
    • An exactly-once mechanism ensures that each synchronized binlog is applied only once.
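The last two accuracy guarantees can be sketched together. In this illustrative Python model (assumed names, not the actual Tair implementation), a channel never ships a record back to the instance that generated it (backloop control), and the destination remembers the highest opid it has applied per server-id, so a redelivered record is applied only once:

```python
# Illustrative sketch of backloop control and exactly-once application
# in two-way synchronization. Record layout: (server_id, opid, key, value).
# These names are assumptions for illustration, not an Alibaba Cloud API.

def apply_record(state, applied_opids, record):
    """Apply a binlog record exactly once, using per-server opid watermarks."""
    server_id, opid, key, value = record
    if opid <= applied_opids.get(server_id, 0):
        return False              # duplicate delivery: already applied
    state[key] = value
    applied_opids[server_id] = opid
    return True

def forward(records, dst_server_id):
    """Backloop control: never ship a record back to its origin."""
    return [r for r in records if r[0] != dst_server_id]

state, applied = {}, {}
binlog = [("hz", 1, "k", "v1"), ("us", 1, "k", "v2"), ("hz", 2, "k", "v3")]

# Channel toward "us": the record that "us" itself generated is dropped,
# so it cannot loop back and be re-applied at its origin.
for rec in forward(binlog, "us"):
    apply_record(state, applied, rec)

# Redelivering the same batch changes nothing (exactly once):
for rec in forward(binlog, "us"):
    assert not apply_record(state, applied, rec)
print(state["k"])  # v3
```

Ordered delivery, backloop control, and the opid watermark together let two writable instances exchange binlogs without loops or duplicate application.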