
Tair (Redis® OSS-Compatible): Limits

Last Updated: Mar 28, 2026

Before deploying Global Distributed Cache, review these limits to avoid instance instability and data inconsistency.

Instance requirements

  • Each child instance must be a DRAM-based instance.

  • All child instances in a distributed instance must have the same specifications. If you change the specifications of one child instance, update all others to match. Mismatched specifications cause performance or capacity issues.

  • A distributed instance can contain up to three child instances. Child instances cannot be in the same zone. The first child instance can be converted from an existing instance; the second and third must be newly purchased.

  • All child instances must reside either entirely in the Chinese mainland or entirely in regions outside the Chinese mainland. To sync data from the Chinese mainland to regions outside the Chinese mainland, use Data Transmission Service (DTS). For more information, see Apply for permissions to synchronize data across borders.

  • Once a child instance joins a distributed instance, the following changes are not allowed:

    | Attribute | Change allowed? | Notes |
    | --- | --- | --- |
    | Zone | No | Cannot be reassigned after joining |
    | Architecture | No | For example, you cannot switch from cluster to standard architecture |
    | Shard configuration (cluster instances) | Partial | You can adjust either shard specifications or shard count, but not both simultaneously. See Why am I unable to change the configurations of a classic (local disk-based) cluster instance? |

Command restrictions

The following commands are not synchronized across child instances or have conditional behavior. Running them without understanding these restrictions can cause permanent data inconsistency.

  • FLUSHDB and FLUSHALL: These commands are not synchronized. Running either command on one child instance does not affect the others, leaving instances in an inconsistent state.

    • To clear all data across a distributed instance, run FLUSHALL on each child instance individually.

    • To resynchronize data from Instance A to Instance B, remove Instance B from the distributed instance, run FLUSHALL on Instance B, then reconnect Instance B.

  • Pub/Sub commands: Pub/Sub commands are not synchronized across child instances. Use the Stream data structure to replicate notification messages across zones or regions instead.

  • RESTORE: The RESTORE command applies on the destination instance only if no key with the same name already exists there.

  • DEL commands from data eviction and expiration: DEL commands triggered by data eviction or key expiration are not synchronized to other child instances. Keep memory usage within instance specifications to prevent unexpected eviction.
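Because FLUSHALL is not synchronized, clearing an entire distributed instance means running it against every child instance yourself. A minimal sketch, assuming redis-py-style clients with hypothetical endpoints:

```python
# A minimal sketch: FLUSHALL is not synchronized across child instances, so to
# clear the whole distributed instance you must run it on every child instance
# individually.

def flush_all_children(clients):
    """Run FLUSHALL on each child instance; `clients` can be redis-py clients
    (or anything exposing a .flushall() method)."""
    for client in clients:
        client.flushall()

# Example wiring (endpoints are hypothetical placeholders, requires `pip install redis`):
#
#   import redis
#   children = [
#       redis.Redis(host="r-child-a.redis.rds.aliyuncs.com", port=6379),
#       redis.Redis(host="r-child-b.redis.rds.aliyuncs.com", port=6379),
#   ]
#   flush_all_children(children)
```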

Synchronization granularity

Data is synchronized at the instance level. Each child instance is always synchronized as a whole — Global Distributed Cache cannot synchronize a subset of data from one child instance to another.

Note

To synchronize only a portion of data, modify your business logic to control which data is written to each child instance.
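One way to implement this in application code is a routing rule that decides, per key, whether a write goes to a replicated child instance or to a separate non-replicated instance. A minimal sketch, assuming a hypothetical "global:" key-prefix convention:

```python
# A minimal sketch: keys under the hypothetical "global:" prefix are written to
# the replicated child instance (and therefore synchronized); everything else
# goes to a separate, non-replicated local instance.

def pick_instance(key, replicated_client, local_client):
    """Return the client that a write for `key` should be sent to."""
    if key.startswith("global:"):
        return replicated_client
    return local_client
```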

Data consistency

Single writable instance

In scenarios where only one child instance is writable (for example, disaster recovery), the expected final state is that all child instances hold the same data. Data inconsistency can still occur in the following situations:

  • Keys with a TTL: Synchronization latency can cause a key renewal to arrive at another child instance after the key has already expired. For example, if the EXPIRE command successfully renews a key on Child Instance A but the key expires on Child Instance B before the synchronization command arrives, the key is lost on Child Instance B. Mitigation: If your application requires strong consistency, avoid setting a time to live (TTL) on keys in a Global Distributed Cache instance.

  • Data eviction: Each child instance selects keys for eviction independently and randomly. The default eviction policy is volatile-lru, and eviction-triggered DEL operations are not synchronized. Mitigation: Keep memory usage within instance specifications to prevent eviction from triggering.
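To keep eviction from triggering at all, you can monitor each child instance's memory headroom and scale up before the limit is reached. A minimal sketch, assuming the `used_memory` and `maxmemory` fields as returned by redis-py's `client.info("memory")`:

```python
# A minimal sketch: eviction-triggered DELs are not synchronized, so alert
# before any child instance approaches its memory limit. The `info` dict
# mirrors the fields returned by redis-py's client.info("memory").

def memory_headroom_ok(info, threshold=0.8):
    """Return True while used_memory stays below threshold * maxmemory."""
    maxmemory = info.get("maxmemory", 0)
    if not maxmemory:  # 0 means no memory limit is configured
        return True
    return info["used_memory"] < threshold * maxmemory
```

Run this check periodically against every child instance and scale up (or shed data) as soon as any of them reports False.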

Multiple writable instances

In scenarios where multiple child instances are writable simultaneously (for example, active geo-redundancy), avoid writing to the same key across multiple child instances at the same time or in quick succession.

Important

Global Distributed Cache does not support conflict-free replicated data types (CRDTs). Your application is responsible for managing data consistency. As a general rule, partition your data so that each key is owned and written by a single child instance. Use cross-instance reads but single-instance writes to eliminate the conflict scenarios described below.
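The single-writer rule can be made deterministic by hashing each key to one owning child instance. A minimal sketch, with hypothetical instance names:

```python
# A minimal sketch of single-instance writes: hash each key to one owning
# child instance and route every write for that key there. Instance names
# are hypothetical placeholders.
import zlib

INSTANCES = ["child-a", "child-b", "child-c"]

def owner_of(key, instances=INSTANCES):
    """Deterministically map a key to the one child instance allowed to write it."""
    return instances[zlib.crc32(key.encode("utf-8")) % len(instances)]
```

Because the mapping is deterministic, every application node routes writes for the same key to the same child instance, which rules out the concurrent-write conflicts in the table below.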

The following table describes the types of data inconsistency that can occur, how they arise, and how to avoid them.

| Inconsistency | Description | Mitigation |
| --- | --- | --- |
| Failed key renewal | 1. At time 1, both Child Instances A and B hold a key with a TTL expiring at time 3. 2. At time 2, Child Instance A runs PEXPIREAT to extend the key to time 5. 3. At time 3, the key in Child Instance B expires before the PEXPIREAT command arrives due to network latency. 4. At time 4, Child Instance B receives the extension command, but the key is already deleted. Child Instance A still holds the key; Child Instance B has lost it. | Avoid using TTLs on keys that are written from multiple instances, or ensure only one instance owns and renews a given key. |
| Inconsistent command results due to TTL | Using SMOVE as an example: 1. At time 1, both instances hold Key1 (expiring at time 3) and Key2 (no TTL). 2. At time 2, Child Instance A runs SMOVE Key1 Key2 "foo" to move the "foo" member. 3. At time 3, Key1 expires on both instances before the sync command reaches Child Instance B. 4. At time 4, Child Instance B receives the SMOVE command, but Key1 is already gone. Key2 in Child Instance A contains "foo"; Key2 in Child Instance B does not. | Avoid commands that operate on keys with a TTL across multiple writable instances. Complete these operations from a single designated writer instance. |
| Random data eviction | 1. At time 1, both instances hold Key1 and Key2. 2. At time 2, Child Instance A runs out of memory, triggers eviction, and randomly deletes Key2. 3. At time 3, Child Instance B runs out of memory, triggers eviction, and randomly deletes Key1. Because the volatile-lru eviction policy selects keys independently on each instance, and eviction-triggered deletions are not synchronized, the two instances end up with different key sets. | Keep memory usage within instance specifications. Monitor memory utilization and scale up before eviction can occur. |
| Concurrent writes: value swapping | If the same key is written to multiple child instances simultaneously, values may be swapped after synchronization. 1. At time 1, Key:valA is written to Child Instance A. 2. At time 2, Key:valB is written to Child Instance B. 3. At time 3, both instances synchronize to each other simultaneously. After synchronization, Child Instance A holds Key:valB and Child Instance B holds Key:valA. | Assign key ownership to a single instance. Do not write the same key from multiple instances simultaneously. |
| Concurrent writes: data type conflict | 1. At time 1, a key of the Hash data type is written to Child Instance A. 2. At time 2, a key of the String data type is written to Child Instance B. 3. At time 3, synchronization fails due to a data type conflict. | Ensure each key has a consistent data type across all instances. Do not write the same key name with different data types from different instances. |
| Concurrent writes: unsatisfied write conditions | 1. At time 1, SETNX Key valA is written to Child Instance A. 2. At time 2, SETNX Key valB is written to Child Instance B. 3. At time 3, synchronization fails because the write condition (key must not exist) is no longer satisfied on the receiving instance. Note: Commands such as HSETNX and SET with the NX or XX option can cause the same issue. | Use conditional write commands only from a single designated writer instance. Route all conditional writes for a given key to one instance. |
| Concurrent writes: disordered data or data loss | 1. At time 1, RPUSH Key valA is written to Child Instance A. 2. At time 2, RPUSH Key valB is written to Child Instance B. 3. At time 3, synchronization results in out-of-order or lost data. Note: Commands such as LPUSH, APPEND, DEL, HDEL, INCR, and XADD can cause the same issue. | For ordered or append-only data structures, designate a single writer instance per key. |