Use this topic if you have already created a destination Tair (Redis OSS-compatible) instance and need to move data from a source instance into it. If you haven't created a destination instance yet, clone one directly from a backup instead—see Restore from a backup set or Restore to a point in time.
This topic describes how to use Data Transmission Service (DTS) to perform one-way data migration between Tair (Redis OSS-compatible) instances or self-managed Redis databases. It also compares DTS with the two backup-based alternatives to help you choose the right method.
Compare migration methods
| | DTS | Restore from a backup set | Restore to a point in time |
|---|---|---|---|
| Use when | Migrating data to an existing instance | Cloning a new instance from a backup | Cloning a new instance from a backup |
| Fees | Full migration: free. Incremental migration: pay-as-you-go by duration. Data transfer fees apply for Internet or cross-cloud traffic. | Data restoration: free. New instance creation: charged. | Data restoration: free during trial period (last 7 days). New instance creation: charged. |
| Migration granularity | Database level (DB 0–DB 255) | Instance level | Instance level or key level |
| Incremental migration | Supported | Not supported | Not supported |
| Cross-region migration | Supported | Not supported | Not supported |
| Different database versions | Supported¹ | Not supported | Not supported |
| Different architectures | Supported² | Partially supported² | Partially supported² |
¹ Using the same database version for source and destination is recommended to avoid compatibility issues.
² Before migrating from a standard architecture instance to a cluster or read/write splitting architecture instance, review the command limits for those architectures.
How DTS migrates data
DTS supports two migration types. You can select both when configuring a task to migrate without stopping the source database.
Full Migration + Incremental Migration (default): Uses native Redis SYNC/PSYNC logic to write data from a memory snapshot to the destination, then synchronizes incremental updates in real time. Full migration is free; incremental migration is billed by duration.
Full Data Migration: Uses the SCAN command to traverse the entire source database. Do not write to the source during this type of migration, as doing so can cause data inconsistency.
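The SCAN-based traversal can be sketched as a cursor loop. This is an illustrative sketch, not DTS internals: `MiniStore` is a hypothetical stand-in for a Redis client, and its `scan`/`dump`/`restore` methods only mimic the shape of the corresponding Redis commands.

```python
class MiniStore:
    """Minimal stand-in for a Redis database that supports SCAN-style paging."""

    def __init__(self, data=None):
        self.data = dict(data or {})

    def scan(self, cursor, count=2):
        """Return (next_cursor, batch); a next_cursor of 0 means traversal is done."""
        keys = sorted(self.data)  # stable order, for the sketch only
        batch = keys[cursor:cursor + count]
        next_cursor = cursor + count
        if next_cursor >= len(keys):
            next_cursor = 0
        return next_cursor, batch

    def dump(self, key):
        return self.data[key]

    def restore(self, key, value):
        self.data[key] = value


def full_migrate(source, destination):
    """Copy every key by iterating SCAN until the cursor wraps back to 0.

    Keys written to the source mid-traversal may be missed entirely, which
    is why writing during a Full Data Migration risks inconsistency.
    """
    cursor = 0
    while True:
        cursor, batch = source.scan(cursor)
        for key in batch:
            destination.restore(key, source.dump(key))
        if cursor == 0:
            break


src = MiniStore({"k1": "a", "k2": "b", "k3": "c"})
dst = MiniStore()
full_migrate(src, dst)
print(dst.data)  # all three keys copied
```

A real migration would use a Redis client library and the binary `DUMP`/`RESTORE` payloads; the loop structure is the same.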
Prerequisites
Before you begin, make sure that:
A destination Tair (Redis OSS-compatible) instance exists.
The destination instance has more total memory than the amount currently used by the source instance — at least 10% more. For example, if the source is using 10 GB, the destination needs at least 11 GB of total memory. Insufficient destination memory causes data inconsistency or task failure.
Transparent Data Encryption (TDE) is not enabled on the destination instance. DTS does not support migration to TDE-enabled instances.
The destination instance is not running Tair (Redis OSS-compatible) 2.8. DTS does not support 2.8 instances.
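The 10% headroom prerequisite above is a simple ratio check. The sketch below is illustrative (the function name and GB inputs are this example's, not a DTS API); it uses an integer-scaled comparison so floating-point rounding cannot flip the result at the boundary.

```python
def has_headroom(source_used_gb, dest_total_gb):
    """True if destination total memory is at least 110% of source usage.

    Comparing dest * 10 >= source * 11 is equivalent to dest >= source * 1.1
    without floating-point rounding at the boundary.
    """
    return dest_total_gb * 10 >= source_used_gb * 11


print(has_headroom(10, 11))    # True: 10 GB used, 11 GB total
print(has_headroom(10, 10.5))  # False: only 5% headroom
```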
Limitations
Do not scale, change the instance type, or change the endpoint of the source or destination database while a migration task is running. These operations cause the task to fail and require you to reconfigure it.
Perform migration during off-peak hours, as the process consumes resources on both instances.
The precheck verifies that the destination database's eviction policy is set to noeviction. The default eviction policy for Tair (Redis OSS-compatible) is volatile-lru. If the destination runs out of memory with volatile-lru, data is silently evicted, causing inconsistency. Setting noeviction instead means writes fail (and the task fails) if memory runs out, but no data is silently evicted. For more information, see Redis data eviction policies.
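If you can connect to the destination with `redis-cli`, you can confirm the eviction policy yourself before running the precheck. The endpoint and credentials below are placeholders, and `CONFIG` may be restricted on some managed instances; in that case, change the `maxmemory-policy` parameter in the console instead.

```shell
# Placeholders: replace the host and credentials with your destination's values.
redis-cli -h r-example.redis.rds.aliyuncs.com -a 'user:password' \
  CONFIG GET maxmemory-policy

# Set noeviction so that running out of memory fails writes (and the task)
# instead of silently evicting keys.
redis-cli -h r-example.redis.rds.aliyuncs.com -a 'user:password' \
  CONFIG SET maxmemory-policy noeviction
```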
Configure a migration task
Step 1: Open the migration task list
Log on to the Data Management (DMS) console.
In the top menu bar, click Data + AI > Data Transmission Service (DTS) > Data Migration.
On the right side of Migration Tasks, select the region where your destination instance is located.
Step 2: Create a task and configure databases
Click Create Task.
Configure the source and destination databases using the following parameters.
General settings

| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |

Source Database

| Parameter | Description |
|---|---|
| Select DMS Database Instance | If the source database is already added to DMS, select it here. The remaining source parameters are filled automatically. Otherwise, skip this field. |
| Database Type | Select Tair/Redis. |
| Connection Type | Select the type that matches where your source database is deployed. For a cloud instance, select Cloud Instance. |
| Instance Region | Select the region where the source instance resides. |
| Cross-account Migration | Select No if the source and destination are in the same Alibaba Cloud account. |
| Instance ID | Select the ID of the source instance. |
| Authentication Method | Select Password Login or Password-free Login. If password-free access over VPC is not enabled, select Password Login. |
| Database Password | Enter the password in `<user>:<password>` format, for example `admin:Rp829dlwa`. Leave blank if no password is set. |

Destination Database

| Parameter | Description |
|---|---|
| Select DMS Database Instance | If the destination database is already added to DMS, select it here. The remaining destination parameters are filled automatically. Otherwise, skip this field. |
| Database Type | Tair/Redis is selected by default. |
| Connection Type | Select Cloud Instance. |
| Instance Region | Select the region where the destination instance resides. |
| Instance ID | Select the ID of the destination instance. |
| Authentication Method | Select Password Login or Password-free Login. If password-free access over VPC is not enabled, select Password Login. |
| Database Password | Enter the password in `<user>:<password>` format, for example `admin:Rp829dlwa`. |

Click Test Connection And Proceed at the bottom of the page.
Step 3: Select migration objects
Configure the task objects, then click Next: Advanced Configuration.
| Parameter | Description |
|---|---|
| Migration Types | Choose based on your requirements. Full Migration + Incremental Migration (default): migrates data without stopping the source. Select Full Data Migration only if you lack SYNC or PSYNC permissions on the source. |
| Processing Mode for Existing Destination Tables | Precheck and Block on Error (default): checks if any keys exist in the destination database. If keys exist, an error is reported during the precheck phase and the migration task does not start. Ignore and Continue: skips the Check For Existing Objects In The Destination Database. If keys with the same names already exist in the destination database, they will be overwritten. |
| Source Objects / Selected Objects | Select the databases (DB 0–DB 255) to migrate, then click the arrow to move them to Selected Objects. |
Step 4: Configure advanced settings
In most cases, keep the default settings. For details, see Appendix: Advanced settings. Then click Next: Data Validation.
Step 5: Configure validation settings
In most cases, keep the default settings. For details, see Configure data validation for a DTS sync or migration instance. Then click Next: Save Task and Precheck.
Step 6: Run the precheck
After the precheck completes:
If all items pass, proceed to the next step.
If any item shows Warning or Failed, click View Details and resolve it based on the instructions. You can click Confirm Alert Details to acknowledge a warning, but doing so may cause data inconsistency. For help resolving precheck failures, see Precheck issues. Rerun the precheck after fixing all issues.
Click Next: Purchase.
Step 7: Purchase and start the task
On the Purchase page:
(Optional) Assign a Resource Group. If you do not select one, the default resource group is used.
(Optional) Select an instance class for the DTS migration link. A higher class provides a faster migration rate but costs more. The default is large. For specifications, see Data migration link specifications.
Read and accept the terms of service.
Click Purchase and Start.
The migration task starts. You can monitor its progress on the Data Migration page.
Step 8: Complete the migration
If you selected Full Migration + Incremental Migration, DTS continues synchronizing incremental updates after the initial migration completes. When you're ready to cut over, manually stop or release the task in the console.
To verify that all data migrated correctly, see Validate migrated data.
FAQ
Why does the connection test fail?
Check the following:
The account or password is incorrect. The Redis password format is `user:password`. For more information, see Logon methods for an instance.
If the source database is in an on-premises data center or a third-party cloud, a network firewall may be blocking DTS. Add the IP addresses of DTS servers for your region to the firewall whitelist. For instructions, see Add the CIDR blocks of DTS servers to a whitelist.
Why does the migration task fail?
Scaling, changing the instance type, or changing the endpoint of the source or destination instance during migration causes task failure. Reconfigure the task after the operation completes.
If the destination instance or a cluster shard runs out of memory (OOM error), the task fails. Make sure the destination has enough memory before starting.
If TDE is enabled on the destination instance, DTS migration is not supported.
Why is there a data inconsistency?
TTL keys: If some keys in the source have a time-to-live (TTL) policy, the destination may have fewer keys because expired keys may not be deleted promptly.
List objects: DTS does not perform a FLUSH on existing List data in the destination when using PSYNC or SYNC, which can result in duplicate entries.
Full migration retries: If the network is interrupted during full migration, DTS retries and overwrites keys with the same name. If the source deletes a key during a retry, that delete is not synchronized, so the destination may temporarily have more data than the source.
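The List case above can be sketched in a few lines. Plain Python lists stand in for Redis Lists; the variable names are this example's, not DTS identifiers.

```python
# Sketch of the List duplication pitfall: the destination already holds
# entries, and the migration replays the source's pushes without flushing
# the existing List first.
destination_list = ["a", "b"]        # pre-existing destination data
source_pushes = ["a", "b", "c"]      # entries replayed from the source

# Without a flush, replayed entries are appended after the existing ones,
# so entries that already matched end up duplicated.
destination_list.extend(source_pushes)
print(destination_list)  # ['a', 'b', 'a', 'b', 'c']
```

This is why using Ignore and Continue against a destination that already contains List data can leave duplicate entries.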
Why can't I select a Tair (Redis OSS-compatible) 2.8 instance?
DTS does not support Tair (Redis OSS-compatible) 2.8 instances.
Why does the precheck verify that the eviction policy is set to noeviction?
The default eviction policy (maxmemory-policy) for Tair (Redis OSS-compatible) is volatile-lru. If the destination runs out of memory, this policy triggers data eviction, causing inconsistency between source and destination without failing the task. Setting the policy to noeviction prevents silent data loss: if memory runs out, writes fail and the task fails, but no data is evicted. For more information, see Redis data eviction policies.
Why is there a DTS_REDIS_TIMESTAMP_HEARTBEAT key in the source database?
To monitor migration and sync quality, DTS inserts a key prefixed with DTS_REDIS_TIMESTAMP_HEARTBEAT into the source database to track update timestamps. For cluster architectures, DTS inserts this key into each shard. DTS filters this key out during migration, and the key automatically expires when the task ends.
Which commands are supported for incremental migration?
The following commands are supported:
APPEND
BITOP, BLPOP, BRPOP, BRPOPLPUSH
DECR, DECRBY, DEL
EVAL, EVALSHA, EXEC, EXPIRE, EXPIREAT
FLUSHALL, FLUSHDB
GEOADD, GETSET
HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, HSETNX
INCR, INCRBY, INCRBYFLOAT
LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, LTRIM
MOVE, MSET, MSETNX, MULTI
PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX, PUBLISH
RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, RPUSHX
SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, SUNIONSTORE
ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZUNIONSTORE, ZREMRANGEBYRANK, ZREMRANGEBYSCORE
XADD, XCLAIM, XDEL, XAUTOCLAIM, XGROUP CREATECONSUMER, XTRIM
Note: For Lua scripts called by EVAL or EVALSHA, DTS cannot confirm whether the script executed successfully on the destination, because the destination does not return an explicit execution result.
References
If you want to clone the full data of a Tair (Redis OSS-compatible) instance to a new instance rather than migrating into an existing one, use the backup and restore feature. See Restore from a backup set or Restore to a point in time.