
ApsaraDB for Redis: Overview

Last Updated: Sep 18, 2023

Global Distributed Cache for Redis is an active geo-redundancy database system that is developed in-house by Alibaba Cloud based on ApsaraDB for Redis. Global Distributed Cache for Redis supports business scenarios in which multiple sites in different regions simultaneously provide services. It helps enterprises replicate the active geo-redundancy architecture of Alibaba.

Background information

If your business rapidly grows and branches out into a wide range of regions, cross-region and long-distance access can result in high latency and degrade user experience. The Global Distributed Cache for Redis feature of Alibaba Cloud can help you address the high latency caused by cross-region access. Global Distributed Cache for Redis provides the following benefits:

  • You can directly create child instances or specify the child instances that need to be synchronized without the need to implement redundancy in your business logic. This greatly reduces the complexity of business design and allows you to focus on the development of upper-layer business.

  • The geo-replication capability is provided for you to implement geo-disaster recovery or active geo-redundancy.

This feature applies to cross-region data synchronization scenarios and global business deployment in industries such as multimedia, gaming, and e-commerce.

Scenarios

  • Active geo-redundancy: Multiple sites in different regions simultaneously provide services. Active geo-redundancy is a type of high-availability architecture. Unlike a traditional disaster recovery design, all sites in the active geo-redundancy architecture provide services at the same time. This allows applications to connect to nearby nodes.

  • Disaster recovery: Global Distributed Cache for Redis can synchronize data across child instances in both directions to support disaster recovery scenarios, such as zone-disaster recovery, disaster recovery based on three data centers across two regions, and three-region disaster recovery.

  • Load balancing: In specific scenarios such as large promotional events, ultra-high queries per second (QPS) and a large amount of access traffic are predicted to occur. In such scenarios, you can balance loads among child instances to exceed the load limit of a single instance.

  • Data synchronization: Global Distributed Cache for Redis can perform two-way data synchronization across child instances in a distributed instance. This feature can be used in scenarios such as data analysis and testing.
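Because every child instance is readable and writable, an application typically connects to the child instance in its own region and lets the synchronization channels propagate writes to the other regions. The following sketch shows such region-based routing; the region names and endpoints are hypothetical placeholders, not real instance addresses.

```python
# Hypothetical region -> child-instance endpoint map for one distributed
# instance. Real endpoints come from the consoles of your own child instances.
CHILD_ENDPOINTS = {
    "cn-hangzhou": "r-hangzhou.redis.rds.aliyuncs.com:6379",
    "ap-southeast-1": "r-singapore.redis.rds.aliyuncs.com:6379",
    "us-west-1": "r-us.redis.rds.aliyuncs.com:6379",
}

def nearest_child(app_region: str) -> str:
    """Return the endpoint of the child instance in the app's own region.

    Connecting to the local child instance avoids cross-region round trips;
    two-way synchronization makes the write visible in the other regions.
    """
    try:
        return CHILD_ENDPOINTS[app_region]
    except KeyError:
        # Fall back to a default child instance when no local one exists.
        return CHILD_ENDPOINTS["cn-hangzhou"]
```

An application in Singapore would then open its Redis connection against `nearest_child("ap-southeast-1")` instead of a single global endpoint.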

Billing

You are not charged for creating a distributed instance. Only child instances in the distributed instance are billed. Child instances and regular ApsaraDB for Redis instances are billed in the same manner. For more information, see Billable items.

Supported instance series

Classic DRAM-based instances

Global Distributed Cache for Redis

Figure: Active geo-redundancy architecture

In the architecture of Global Distributed Cache for Redis, a distributed instance is a logical collection of distributed child instances and synchronization channels. Data is synchronized in real time across child instances by using these synchronization channels. A distributed instance consists of the following components:

  • Child instances

    • A child instance is the basic service unit that constitutes a distributed instance. Each child instance is an independent ApsaraDB for Redis instance. For more information, see What is ApsaraDB for Redis? All child instances are readable and writable. Data is synchronized in real time across child instances in both directions. A distributed instance supports geo-replication. You can create child instances in different regions to implement geo-disaster recovery or active geo-redundancy.

      Note

      A child instance must be a DRAM-based instance of ApsaraDB for Redis Enhanced Edition (Tair).

  • Synchronization channels

    • A synchronization channel is a one-way link that is used to synchronize data in real time from one child instance to another. Two opposite synchronization channels are required to implement two-way replication between two child instances.

      Note

In addition to the append-only file (AOF) format supported by open source Redis, Global Distributed Cache for Redis records information such as the server-id and opid in binlogs, and transmits these binlogs over synchronization channels to synchronize data.

  • Channel manager

    • The channel manager manages the lifecycle of synchronization channels and performs operations to handle exceptions that occur in child instances, such as a switchover between the primary and secondary databases and the rebuilding of secondary databases.
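The components above can be sketched as a toy in-memory model: each child instance applies client writes locally and appends them to a binlog, and a one-way synchronization channel replays that binlog on the peer in generation order. The class and region names here are illustrative only, not the actual implementation.

```python
class ChildInstance:
    """Toy model of a child instance: a key-value store plus a binlog."""

    def __init__(self, name):
        self.name = name
        self.data = {}    # the key-value store of this child instance
        self.binlog = []  # ordered log of local writes, shipped to peers

    def write(self, key, value):
        """A client write: apply locally and append to the binlog."""
        self.data[key] = value
        self.binlog.append((key, value))

def sync_channel(src, dst):
    """One one-way channel: replay src's binlog on dst in generation order."""
    for key, value in src.binlog:
        dst.data[key] = value

# Two-way replication between two child instances is two opposite
# one-way channels.
hangzhou = ChildInstance("cn-hangzhou")
singapore = ChildInstance("ap-southeast-1")
hangzhou.write("user:1", "alice")
singapore.write("user:2", "bob")
sync_channel(hangzhou, singapore)
sync_channel(singapore, hangzhou)
# Both writes are now visible on both child instances.
```

In the real system the channel manager would keep these channels alive across failovers; the model above ignores failures entirely.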

Benefits


High synchronization reliability

  • Global Distributed Cache for Redis supports resumable transmission and can tolerate synchronization interruptions of up to a day. It is not subject to the limitation that the native Redis architecture imposes on incremental synchronization across data centers or regions.

  • Troubleshooting operations, such as a switchover between the primary and secondary databases and the rebuilding of secondary databases, are automatically performed on child instances.

High synchronization performance

  • High throughput

    • For child instances in the standard architecture, a synchronization channel supports up to 50,000 transactions per second (TPS) in one direction.

    • For child instances in the cluster or read/write splitting architecture, the throughput can linearly increase with the number of shards or nodes.

  • Low latency

    • The synchronization latency between regions on the same continent ranges from tens of milliseconds to a few seconds, with an average of about 1.2 seconds.

    • The synchronization latency between regions on different continents ranges from approximately 1 second to 5 seconds, depending on the throughput and round-trip time (RTT) of the links.

High synchronization accuracy

  • Binlogs are synchronized to the peer instance in the order in which they are generated.

  • Backloop control is supported to prevent binlogs from being synchronized in a loop.

  • An exactly-once mechanism ensures that each synchronized binlog is applied only once.
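The backloop-control and exactly-once behavior can be illustrated with a small sketch: each binlog entry carries the server-id of the instance that originated it and a monotonically increasing opid, and the receiver skips its own entries as well as any opid it has already applied. This is an illustrative model built on the terms above, not the actual protocol.

```python
def make_applier(server_id):
    """Return an apply function for a child instance identified by server_id.

    Each binlog entry is (origin_server_id, opid, key, value). The applier
    skips entries it originated (backloop control) and entries whose opid
    was already applied (exactly once), while applying the rest in order.
    """
    applied = {}  # origin server_id -> highest opid applied so far
    store = {}    # this instance's key-value data

    def apply(entry):
        origin, opid, key, value = entry
        if origin == server_id:
            return False  # backloop: never re-apply an entry we originated
        if opid <= applied.get(origin, 0):
            return False  # duplicate delivery: apply only once
        store[key] = value  # in-order apply
        applied[origin] = opid
        return True

    apply.store = store
    return apply

apply_b = make_applier("instance-b")
assert apply_b(("instance-a", 1, "k", "v1")) is True    # first delivery applies
assert apply_b(("instance-a", 1, "k", "v1")) is False   # retransmit is skipped
assert apply_b(("instance-b", 2, "k", "loop")) is False # own entry, loop prevented
assert apply_b.store == {"k": "v1"}
```

In this model a retransmitted entry after a day-level interruption is simply discarded by the opid check, which is why resuming a channel does not double-apply writes.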