
Tair (Redis® OSS-Compatible): One-way synchronization between instances in the same Alibaba Cloud account

Last Updated: Mar 30, 2026

Data Transmission Service (DTS) replicates data between Tair (Redis OSS-compatible) instances in the same Alibaba Cloud account, combining a one-time full migration with continuous real-time incremental synchronization. Use this for active geo-redundancy or data disaster recovery.

How synchronization works

DTS runs in two sequential stages:

  1. Full migration — copies all existing data from the source to the destination. Free of charge.

  2. Incremental synchronization — after full migration completes, DTS continuously syncs writes from source to destination in real time. Charged based on duration of use, regardless of data volume. See Billing items.

Prerequisites

Before you begin, make sure you have:

  • A source Tair (Redis OSS-compatible) instance with data to sync

  • A destination Tair (Redis OSS-compatible) instance with more available memory than the source instance currently uses

We recommend keeping the destination instance's total memory at least 10% larger than the memory currently used by the source instance. If the destination runs out of memory during migration, data inconsistency or task failure can occur. If that happens, empty the destination database and reconfigure the synchronization task.
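The 10% guideline can be checked with a small helper. In practice the byte counts would come from `used_memory` in the source's INFO memory output and from the destination's memory limit; the function below is an illustrative sketch, not part of DTS:

```python
def has_headroom(source_used_bytes: int, dest_total_bytes: int, margin_pct: int = 10) -> bool:
    """Check that the destination's total memory exceeds the source's
    used memory by at least margin_pct percent. Integer math avoids
    floating-point rounding errors right at the boundary."""
    return dest_total_bytes * 100 >= source_used_bytes * (100 + margin_pct)

# Example: a source using 1 GB needs a destination with at least 1.1 GB.
print(has_headroom(1_000_000_000, 1_100_000_000))  # True
print(has_headroom(1_000_000_000, 1_050_000_000))  # False
```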

Usage notes

  • Do not scale, change specifications, or modify the endpoint of the source or destination instance while a sync task is running. If you do, the task fails and you must reconfigure it from scratch.

  • Run sync tasks during off-peak hours to minimize impact on production traffic.

  • If the destination uses a cluster architecture, only single-key Redis operations are supported during synchronization. Cross-slot operations within a single command cause link interruptions.

  • If transparent data encryption (TDE) is enabled on the destination instance, DTS cannot migrate data to it.

Set up one-way synchronization

Overview of steps:

  1. Open the Data Synchronization Tasks page and create a task.

  2. Configure source and destination instances.

  3. Select objects to sync and configure conflict handling.

  4. Review advanced settings.

  5. Configure data verification.

  6. Run the precheck and purchase the sync link.

Step 1: Open Data Synchronization Tasks

  1. Log in to the DMS console.

  2. In the top menu bar, choose Data + AI > Data Transmission Service (DTS) > Data Synchronization.

  3. To the right of Sync Tasks, select the region of the instance to sync.

Step 2: Create and configure the task

  1. Click Create Task.

  2. Configure the source and destination databases using the settings below, then click Test Connection And Proceed.

    The Task Name field is auto-generated. Use a descriptive name to make the task easier to identify. It does not need to be unique.

    Source Database

    • Select DMS Database Instance: If the source is already added to DMS, select it here. Otherwise, fill in the fields below.
    • Database Type: Select Tair/Redis.
    • Connection Type: Select Cloud Instance.
    • Instance Region: Select the region of the source instance.
    • Cross-Alibaba Cloud Account: Select No (same-account sync).
    • Instance ID: Select the source instance ID.
    • Authentication Method: Select Password Logon or Password-free Logon. If password-free access is not enabled over VPC, select Password Logon.
    • Database Password: Enter the password. If using a custom account, enter it as <user>:<password> (for example, admin:Rp829dlwa). The account must have read permissions. Leave blank if no password is set.

    Destination Database

    • Select DMS Database Instance: If the destination is already added to DMS, select it here. Otherwise, fill in the fields below.
    • Database Type: Tair/Redis (pre-selected).
    • Connection Type: Select Cloud Instance.
    • Instance Region: Select the region of the destination instance.
    • Instance ID: Select the destination instance ID.
    • Authentication Method: Select Password Logon or Password-free Logon. If password-free access is not enabled over VPC, select Password Logon.
    • Database Password: Enter the password. If using a custom account, enter it as <user>:<password>. The account must have write permissions.

Step 3: Select objects and configure conflict handling

Configure the task objects, then click Next: Advanced Configurations.

  • Synchronization Type: Full Synchronization + Incremental Synchronization is selected by default.

  • Processing Mode for Existing Destination Tables: Precheck And Block (default) fails the precheck and stops the task from starting if any keys already exist in the destination. Ignore Errors And Continue Execution skips the existence check; if a key with the same name exists in the destination, it is overwritten.

  • Source Objects and Selected Objects: In Source Objects, select the databases to sync (DB 0–255), then click the arrow to move them to Selected Objects. To remove a database, select it in Selected Objects and click the back arrow.

Step 4: Configure advanced settings

Review the advanced settings and click Next: Data Verification. The defaults work for most scenarios. For details, see Appendix: Advanced settings.

Step 5: Configure data verification

Review the data verification settings and click Next: Save Task Settings and Precheck. The defaults work for most scenarios. For details, see Configure data verification.

Step 6: Run the precheck

DTS validates the task configuration before starting. If any Warning or Failed items appear:

  • Click View Details to see the specific issue.

  • Fix the issue and run the precheck again, or click Confirm Alert Details to acknowledge and continue. Ignoring warnings may result in data inconsistencies.

For common precheck failures, see FAQ.

After the precheck passes, click Next: Purchase Instance.

Step 7: Purchase and start

On the Purchase page:

  1. (Optional) Under Resource Group Configuration, select a resource group for the DTS instance. If left blank, the default resource group is used.

  2. (Optional) Select a sync link specification. Higher specifications provide faster sync speeds at higher cost. The default is large. See Specifications of data synchronization links.

  3. Accept the terms of service, then click Purchase And Start.

The sync task starts automatically. Monitor progress on the Data Synchronization Tasks page.

What's next

To stop syncing, manually end or release the task in the DTS console.

FAQ

The connection test failed. What should I check?

The most common cause is an incorrect password format. If you use a custom account, the password must be in user:password format — for example, admin:Rp829dlwa. See Connect to an instance for details.
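As a quick local sanity check before pasting credentials into the console, you can verify that the value has the expected shape. The regex below only illustrates the <user>:<password> convention; it is not an official DTS validation rule:

```python
import re

# Matches the "<user>:<password>" shape DTS expects for custom accounts:
# a user name without colons or whitespace, one colon, then the password.
CUSTOM_ACCOUNT = re.compile(r"^[^:\s]+:\S+$")

def looks_like_custom_account(value: str) -> bool:
    """True if value follows the <user>:<password> convention."""
    return CUSTOM_ACCOUNT.match(value) is not None

print(looks_like_custom_account("admin:Rp829dlwa"))  # True
print(looks_like_custom_account("Rp829dlwa"))        # False: no user prefix
```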

If the source is a self-managed database in an on-premises data center or on a third-party cloud, a firewall may be blocking DTS server access. Add the CIDR blocks for DTS servers in your region to the allowlist. See Add DTS server IP addresses to a whitelist.

The sync task failed. What went wrong?

Common causes:

  • You scaled or changed the specifications or endpoint of the source or destination instance while the task was running. Reconfigure the task to account for the changes.

  • The destination instance ran out of memory, or a specific shard in a cluster instance hit its memory limit. This triggers an out of memory (OOM) error and fails the task.

  • Transparent data encryption (TDE) is enabled on the destination instance. DTS cannot migrate data to TDE-enabled instances.

The key counts differ between source and destination. Why?

There are several reasons this can happen:

  • Keys with expiration policies in the source may not be deleted immediately after they expire. The destination may have fewer keys than the source as a result.

  • When DTS uses PSYNC or SYNC to transmit list data, it does not FLUSH existing data in the destination first. This can leave duplicate entries.

  • If the network is interrupted during full migration, DTS may restart the full migration after reconnecting. It overwrites matching keys automatically, but delete operations that ran during the interruption are not replayed — leaving the destination with more keys than the source.
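The first cause, lazy expiration inflating the source's key count, can be illustrated with a toy store. This is a deliberately simplified model, not how Redis implements expiration internally:

```python
import time

class LazyExpiryStore:
    """Toy model: expired keys are removed only when they are next
    accessed, so a raw key count can exceed the number of live keys."""

    def __init__(self):
        self.data = {}  # key -> (value, expire_at or None)

    def set(self, key, value, ttl=None):
        expire_at = time.monotonic() + ttl if ttl is not None else None
        self.data[key] = (value, expire_at)

    def get(self, key):
        item = self.data.get(key)
        if item is None:
            return None
        value, expire_at = item
        if expire_at is not None and time.monotonic() >= expire_at:
            del self.data[key]  # lazy deletion happens here, on access
            return None
        return value

    def raw_key_count(self):
        return len(self.data)  # still counts expired-but-undeleted keys

store = LazyExpiryStore()
store.set("a", 1)
store.set("b", 2, ttl=0.01)
time.sleep(0.05)
print(store.raw_key_count())  # 2: "b" has expired but is still counted
print(store.get("b"))         # None: access triggers the lazy delete
print(store.raw_key_count())  # 1
```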

Should I set the eviction policy to `noeviction` on the destination?

Yes, if you want to prevent silent data loss. By default, Tair (Redis OSS-compatible) instances use volatile-lru as the maxmemory-policy. If the destination runs out of memory under this policy, Redis evicts keys silently — the sync task keeps running but the destination ends up with less data than the source.

Setting maxmemory-policy to noeviction changes this behavior: instead of evicting keys, the instance returns an error, which causes the sync task to fail. This makes data loss visible so you can act on it. See What is the default eviction policy?
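The difference between the two policies can be sketched with a toy bounded store. This is a simplified model (real eviction is per-byte and LRU-approximate, not per-key), but it shows why noeviction turns silent loss into a visible error:

```python
class TinyStore:
    """Toy model of maxmemory-policy: an evicting policy drops keys
    silently once capacity is reached, while 'noeviction' raises so
    the writer (here, the sync task) sees the failure."""

    def __init__(self, capacity, policy="volatile-lru"):
        self.capacity = capacity
        self.policy = policy
        self.data = {}

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            if self.policy == "noeviction":
                # Mirrors Redis's visible failure mode under noeviction.
                raise MemoryError("OOM command not allowed when used memory > 'maxmemory'")
            # Silent data loss: evict the oldest key and keep going.
            self.data.pop(next(iter(self.data)))
        self.data[key] = value

silent = TinyStore(capacity=2, policy="volatile-lru")
for k in ("a", "b", "c"):
    silent.set(k, 1)
print(sorted(silent.data))  # ['b', 'c']: "a" was lost without any error

strict = TinyStore(capacity=2, policy="noeviction")
strict.set("a", 1)
strict.set("b", 1)
try:
    strict.set("c", 1)
except MemoryError as err:
    print("write failed visibly:", err)
```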

There's a key with the prefix `DTS_REDIS_TIMESTAMP_HEARTBEAT` in my source database. What is it?

DTS inserts this key into the source database to track when updates occur and maintain sync efficiency. For cluster instances, it inserts one key per shard. DTS filters this key out during migration so it does not appear in the destination. The key expires automatically after migration completes.

I see the error `CROSSSLOT Keys in request don't hash to the same slot`. What does this mean?

This happens when the destination uses a cluster architecture. Redis cluster does not support operations that touch keys in different hash slots within a single command. During DTS synchronization with a cluster destination, use only single-key operations to avoid this error.
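To see why, you can compute the hash slot of each key yourself. Redis cluster assigns every key to one of 16384 slots using CRC16 (the XMODEM variant), honoring {...} hash tags; a minimal sketch:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM, the checksum Redis cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Slot for a key, honoring {...} hash tags as Redis cluster does."""
    s = key.find("{")
    if s != -1:
        e = key.find("}", s + 1)
        if e > s + 1:  # non-empty tag: hash only the tag content
            key = key[s + 1 : e]
    return crc16(key.encode()) % 16384

# Keys sharing the same {tag} land in the same slot, so a multi-key
# command on them succeeds even on a cluster destination:
print(key_hash_slot("{user42}:cart") == key_hash_slot("{user42}:profile"))  # True
```

If two keys have no common hash tag and land in different slots, any single command that touches both (for example SINTERSTORE or a multi-key DEL) is rejected with CROSSSLOT.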

Which Redis commands does DTS support for synchronization?

DTS supports the following commands:

  • APPEND

  • BITOP, BLPOP, BRPOP, BRPOPLPUSH

  • DECR, DECRBY, DEL

  • EVAL, EVALSHA, EXEC, EXPIRE, EXPIREAT

  • GEOADD, GETSET

  • HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, HSETNX

  • INCR, INCRBY, INCRBYFLOAT

  • LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, LTRIM

  • MOVE, MSET, MSETNX, MULTI

  • PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX

  • RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, RPUSHX

  • SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, SUNIONSTORE

  • ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZUNIONSTORE, ZREMRANGEBYRANK, ZREMRANGEBYSCORE

  • SWAPDB, UNLINK (supported only when the source engine is Redis 4.0 or later)

  • XADD, XCLAIM, XDEL, XAUTOCLAIM, XGROUP CREATECONSUMER, XTRIM

PUBLISH commands are not supported.

If you use EVAL or EVALSHA to call Lua scripts, DTS cannot confirm whether the script executed successfully on the destination. The destination does not return explicit execution results for Lua scripts during incremental synchronization.