Use Data Transmission Service (DTS) to migrate data from a Tair (Redis OSS-Compatible) instance owned by one Alibaba Cloud account to an instance owned by another. DTS supports full data migration and incremental data migration — run both together to keep your application online during the migration.
How it works
Cross-account migration requires coordination between two accounts:
| Step | Account | Action |
|---|---|---|
| 1 | Account A (source) | Log on to the Resource Access Management (RAM) console. Create a RAM role, set Account B as the trusted account, and grant the role access to Account A's cloud resources. |
| 2 | Account B (destination) | Log on to the DTS console and configure the migration task. DTS uses the RAM role to read the source instance across account boundaries. |
Prerequisites
Before you begin, make sure that you have:
A destination Tair (Redis OSS-Compatible) instance with available storage larger than the total data size of the source instance
Completed RAM authorization — see Configure RAM authorization for cross-account DTS tasks
For supported source and destination instance versions, see Overview of data migration scenarios.
Supported configurations
Use this table to confirm your source and destination instances are compatible before proceeding.
| Dimension | Requirement |
|---|---|
| Source version | Any version except 2.8 |
| Destination version | Same as or later than the source version |
| Instance type | Storage-optimized Tair (Redis OSS-Compatible) Enhanced Edition cannot be source or destination |
| Destination encryption | TDE (Transparent Data Encryption) must be disabled on the destination |
| Destination eviction policy | We recommend that you set maxmemory-policy to noeviction (default is volatile-lru, which risks data loss) |
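The version rules in this table reduce to a simple check. A minimal Python sketch (the helper names are illustrative, and the parser assumes plain dotted version strings such as 5.0):

```python
def version_tuple(version: str) -> tuple:
    # "5.0" -> (5, 0); real version strings with extra suffixes are not
    # handled by this illustrative parser.
    return tuple(int(part) for part in version.split("."))

def versions_compatible(source: str, destination: str) -> bool:
    # Source cannot be 2.8, and the destination must be the same as or
    # later than the source.
    if version_tuple(source) == (2, 8):
        return False
    return version_tuple(destination) >= version_tuple(source)
```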
Migration types
| Migration type | Description | Use when |
|---|---|---|
| Full data migration | Migrates all existing data from the source to the destination at a point in time. | Your application can tolerate downtime, or you do not have SYNC/PSYNC permissions on the source. |
| Incremental data migration | After full migration completes, continuously replicates changes from the source to the destination. | Your application must stay online. Combine with full migration for zero-downtime cutover. |
For most scenarios, select both types. If you do not have permissions to run SYNC or PSYNC on the source, select full data migration only.
Limitations
Source instance
The source instance version cannot be 2.8.
The source server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.
During full data migration only: do not write to the source instance. Writes during full-only migration cause data inconsistency. To avoid this, select both full and incremental migration types.
A storage-optimized Tair (Redis OSS-Compatible) Enhanced Edition instance cannot be the source or destination.
If the source is read-only, or the migration account lacks the SETEX permission, reported migration latency may be inaccurate.
DTS inserts a heartbeat key prefixed with DTS_REDIS_TIMESTAMP_HEARTBEAT into the source instance (one per shard in cluster mode) to track update timestamps. This key is filtered during migration and expires when the task completes.
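The heartbeat filtering described above can be sketched as follows. The prefix comes from the documentation; the shard suffix in the example keys is a hypothetical layout:

```python
HEARTBEAT_PREFIX = "DTS_REDIS_TIMESTAMP_HEARTBEAT"

def is_heartbeat_key(key: str) -> bool:
    # DTS heartbeat keys live in the source instance but must not be
    # replicated to the destination.
    return key.startswith(HEARTBEAT_PREFIX)

def filter_migratable_keys(keys):
    # Drop heartbeat keys before migrating the remaining keys.
    return [k for k in keys if not is_heartbeat_key(k)]
```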
Destination instance
The destination Redis version must be the same as or later than the source version. An older destination version causes compatibility issues.
The default data eviction policy (maxmemory-policy) is volatile-lru. If memory runs out, data inconsistency may occur. We recommend that you set the policy to noeviction so that DTS fails writes explicitly rather than silently evicting data.
If Transparent Data Encryption (TDE) is enabled on the destination, DTS cannot migrate data.
If a shard in the destination cluster runs out of memory, or available storage is insufficient, the migration task fails with an out of memory (OOM) error.
Other limitations
When DTS transfers LIST type data using PSYNC or SYNC, it does not flush existing data in the destination. Duplicate records may result.
DTS cannot confirm whether Lua scripts (run via EVAL or EVALSHA) execute successfully on the destination during incremental migration, because the destination does not return explicit execution results.
Keys with an expiration policy may not be deleted immediately after expiry. The number of keys in the destination may be temporarily fewer than in the source. Run the INFO command to check the key count.
After full data migration, the destination uses more storage than the source because concurrent writes during the migration cause fragmentation.
DTS retries failed migration tasks for up to seven days. Before switching traffic to the destination, stop or release any failed tasks — otherwise, DTS may overwrite destination data with source data when the task resumes.
If source or destination database configurations change (such as specifications or port number) during migration, the task is interrupted. Delete destination data and reconfigure the migration task after the change.
During full data migration, DTS consumes read and write resources on both instances and increases server load. Run migrations during off-peak hours.
If a transient connection on the source Redis database causes a resumable upload to fail, DTS may migrate the full data to the destination database again. This may cause data inconsistency between the source and destination databases.
Cluster architecture limitations
Each Redis command in a cluster can only operate on keys within a single slot. If a command targets keys across multiple slots, the following error occurs:
CROSSSLOT Keys in request don't hash to the same slot
Operate on one key at a time during migration to avoid this error.
When migrating from a standalone source to a cluster destination, commands that span multiple slots cause the migration task to stop.
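The CROSSSLOT error arises because Redis Cluster maps every key to one of 16,384 slots using CRC16-XMODEM of the key, or of its {hash tag} if one is present. A minimal sketch of that mapping:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
        crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Honor {hash tags}: if the key contains a non-empty {...} section,
    # only that substring is hashed, as in Redis Cluster.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Keys that share a hash tag, such as {user1000}.following and {user1000}.followers, land in the same slot, so multi-key commands on them do not raise CROSSSLOT.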
Self-managed Redis special cases
If the source database is a self-managed Redis database, take note of the following additional limitations:
If you perform a primary/secondary switchover on the source database while the data migration task is running, the migration task fails.
DTS calculates migration latency based on the timestamp of the latest migrated data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for an extended period of time, the migration latency may be inaccurate. Perform a DML operation on the source database to refresh the latency. If you selected an entire database as the migration object, you can create a heartbeat table that receives writes every second.
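The latency calculation described above reduces to simple clock arithmetic. A sketch, with timestamps as illustrative epoch seconds:

```python
def migration_latency_seconds(source_now: float, latest_migrated_ts: float) -> float:
    # DTS-style latency estimate: source clock minus the timestamp of the
    # newest record applied to the destination. With no recent DML the
    # migrated timestamp stops advancing, so this value grows stale.
    return max(0.0, source_now - latest_migrated_ts)
```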
Commands supported for incremental migration
DTS supports the following commands during incremental data migration:
APPEND
BITOP, BLPOP, BRPOP, BRPOPLPUSH
DECR, DECRBY, DEL
EVAL, EVALSHA, EXEC, EXPIRE, EXPIREAT
FLUSHALL, FLUSHDB
GEOADD, GETSET
HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, HSETNX
INCR, INCRBY, INCRBYFLOAT
LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, LTRIM
MOVE, MSET, MSETNX, MULTI
PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX, PUBLISH
RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, RPUSHX
SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, SUNIONSTORE
ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZUNIONSTORE, ZREMRANGEBYRANK, ZREMRANGEBYSCORE
XADD, XCLAIM, XDEL, XAUTOCLAIM, XGROUP CREATECONSUMER, XTRIM
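A client-side check against this allowlist can help predict which writes will be replicated. The sketch below uses a representative subset of the list above, not the full set:

```python
# Abbreviated, illustrative subset of the commands DTS replicates during
# incremental migration (see the full list above).
SUPPORTED_INCREMENTAL = {
    "APPEND", "DEL", "EXPIRE", "HSET", "LPUSH", "RPUSH",
    "SADD", "SET", "SETEX", "ZADD", "XADD",
}

def is_replicated(command: str) -> bool:
    # Command names are case-insensitive in Redis.
    return command.upper() in SUPPORTED_INCREMENTAL
```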
Configure the migration task
Use Account B (the destination account) to complete the following steps.
Step 1: Open the Data Migration Tasks page
Go to the Data Migration Tasks page using either console:
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance resides.
DMS console
The exact navigation path may vary depending on your DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.
From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance resides.
Step 2: Create a task
Click Create Task.
(Optional) If New Configuration Page appears in the upper-right corner, click it. Skip this step if Back to Previous Version is shown instead.
Specific parameters may differ between the new and previous configuration pages. Use the new version.
Step 3: Configure the source and destination databases
Review the Limits displayed at the top of the configuration page before proceeding. Skipping this may cause task failures or data inconsistency.
Configure the following parameters:
Source database
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique. |
| Select Existing Connection | Select a registered instance from the list, or leave blank and configure the parameters below manually. |
| Database Type | Select Tair/Redis. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region of the source instance. |
| Replicate Data Across Alibaba Cloud Accounts | Select Yes. |
| Alibaba Cloud Account | Enter the ID of Account A (the source account). |
| RAM Role Name | Enter the name of the RAM role you created in the prerequisites. |
| Instance ID | Enter the ID of the source Tair (Redis OSS-Compatible) instance. |
| Authentication Method | Select Password Login (used in this example) or Secret-free login. If you select Secret-free login, make sure password-free access is enabled on the instance. See Enable password-free access. |
| Database Password | Enter the password for the source instance. The account must have read permissions. To reset the password, see Change or reset the password. For the default account, enter the password only. For a custom account, use the format <username>:<password> — for example, testaccount:Test1234. Leave blank if no password is set. |
| Encryption | Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted for a self-managed database, upload a CA Certificate and enter a CA Key. |
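The Database Password formats described above can be captured in a small helper (the function name is illustrative):

```python
def dts_database_password(password: str, username: str = "") -> str:
    # Default account: enter the password only.
    # Custom account: use the <username>:<password> format.
    # No password set on the instance: leave the field blank.
    return f"{username}:{password}" if username else password
```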
Destination database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a registered instance, or configure parameters manually. |
| Database Type | Select Tair/Redis. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region of the destination instance. |
| Replicate Data Across Alibaba Cloud Accounts | Select No. |
| Instance ID | Enter the ID of the destination Tair (Redis OSS-Compatible) instance. |
| Authentication Method | Select Password Login or Secret-free login. See the note above for secret-free login requirements. |
| Database Password | Enter the password for the destination instance. For a custom account, use the format <username>:<password>. |
| Encryption | Select Non-encrypted or SSL-encrypted. |
Step 4: Test connectivity
Click Test Connectivity and Proceed at the bottom of the page.
DTS server CIDR blocks must be added to the security settings of both instances before the connectivity test succeeds. See Add the CIDR blocks of DTS servers. For self-managed databases not using Alibaba Cloud Instance access, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 5: Configure objects to migrate
On the Configure Objects page, set the following parameters:
| Parameter | Description |
|---|---|
| Migration Types | Select Full Data Migration and Incremental Data Migration (or Full Data Migration + Incremental Data Migration). If you lack SYNC or PSYNC permissions on the source, select Full Data Migration only. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: the precheck fails if the destination is not empty. Ignore Errors and Proceed: skips the empty-destination check. Use with caution — source data overwrites destination data with matching keys, which may cause data loss. |
| Source Objects | Select one or more databases from the Source Objects section, then click the arrow icon to move them to Selected Objects. You can only select databases, not individual keys. |
| Selected Objects | To migrate from specific database numbers (DB 0–255) or filter by key prefix, right-click the database in Selected Objects and configure settings in the Edit Schema dialog box. See Map object names and Set filter conditions. Object name mapping applies to one database at a time. |
Click Next: Advanced Settings.
Step 6: Configure advanced settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. Purchase a dedicated cluster for higher task stability. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid range: 10–1,440 minutes. Default: 720 minutes. We recommend that you set this parameter to a value greater than 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, the task fails. When multiple tasks share the same source or destination, the most recently set value applies. DTS charges for the instance during retry periods. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Valid range: 1–1,440 minutes. Default: 10 minutes. We recommend that you set this parameter to a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit DTS resource consumption during full migration by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limit resource consumption during incremental migration by setting RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Extend Expiration Time of Destination Database Key | Extend the validity of migrated keys by the specified number of seconds. Useful when your application uses commands like EXPIRE, PEXPIRE, EXPIREAT, or PEXPIREAT. If you use this parameter for distributed lock scenarios, you may not be able to release locks promptly. |
| Environment Tag | Tag the DTS instance by environment. Optional. |
| Configure ETL | Select Yes to configure extract, transform, and load (ETL) rules. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting. |
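The Extend Expiration Time of Destination Database Key setting adds a fixed number of seconds to each migrated key's TTL. A sketch of the arithmetic, assuming the Redis convention that a TTL of -1 means the key never expires:

```python
def extended_ttl(source_ttl_seconds: int, extension_seconds: int) -> int:
    # Keys without an expiry (TTL -1) are left unchanged; expiring keys
    # get the configured extension added on the destination.
    if source_ttl_seconds < 0:
        return source_ttl_seconds
    return source_ttl_seconds + extension_seconds
```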
Click Next Step: Data Verification to configure data verification. See Configure a data verification task.
Step 7: Save and run the precheck
To preview API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck to save the configuration and start the precheck.
The migration task cannot start until the precheck passes.
If a check item fails, click View Details next to it, fix the issue, then click Precheck Again.
If a check item raises an alert that you can safely ignore, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.
Step 8: Purchase an instance and start the task
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following:
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management?. |
| Instance Class | Select an instance class based on your required migration speed. See Instance classes of data migration instances. |

Read and agree to the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the checkbox.
Click Buy and Start, then click OK in the dialog box.
Monitor task progress on the Data Migration page.
If the task uses both full and incremental migration types, Incremental Data Migration is displayed as the active phase after full migration completes.
After migration completes
Before you switch application traffic to the destination instance, complete the following steps to verify data consistency and prevent accidental overwrites:
Verify data — Run the INFO command on both instances to compare key counts. Non-expiring keys and unexpired keys should match. Configure a data verification task if needed: Configure a data verification task.
Stop or release failed tasks — DTS retries failed tasks for up to seven days. If a paused task resumes after you have switched traffic, it overwrites destination data with source data. Stop or release all failed tasks before switching.
Switch traffic — Update your application's connection strings to point to the destination instance.
Stop the migration task — After confirming your application is running correctly on the destination, stop the migration task. If the task is incremental, stopping it ends replication from the source.
FAQ
Why might the number of keys in the destination be fewer than in the source?
Keys with an expiration policy may still exist in the source after their expiry time because Redis deletes expired keys lazily. The destination only receives keys that are still valid at migration time. Run the INFO command on the destination to check the current key count. The count of non-expiring keys and unexpired keys is the same in both instances.
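The INFO check above can be automated by parsing the Keyspace section of the INFO output, which lists one line per database in the form db0:keys=100,expires=5,avg_ttl=0. A minimal parser:

```python
def parse_keyspace(info_keyspace: str) -> dict:
    # Turn the Keyspace section of Redis INFO output into
    # {db_name: number_of_keys} for a source/destination comparison.
    counts = {}
    for line in info_keyspace.strip().splitlines():
        if not line.startswith("db"):
            continue  # skip the "# Keyspace" section header
        db, stats = line.split(":", 1)
        fields = dict(field.split("=") for field in stats.split(","))
        counts[db] = int(fields["keys"])
    return counts
```

Run this on the INFO output of both instances; the destination count may be lower by the number of keys that expired since migration.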
What happens to Lua scripts during incremental migration?
DTS migrates Lua script invocations (EVAL and EVALSHA commands), but cannot confirm whether the scripts execute successfully on the destination. The destination does not return explicit execution results for Lua scripts during incremental migration.
Why does the destination have duplicate records for LIST type data?
When DTS transfers LIST type data using PSYNC or SYNC, it does not flush existing LIST data in the destination before writing. If the destination already contains LIST keys, the migrated data is appended, which may produce duplicates.
What should I do if the migration task is interrupted after a database configuration change?
If the source or destination instance configuration changes (such as specifications or port number) during migration, DTS loses log continuity and connection information. Delete all data in the destination instance and reconfigure the migration task from the beginning.
Can I resume a failed migration task after switching workloads to the destination?
Stop or release failed tasks before switching. DTS retries failed tasks for up to seven days. If a task resumes after you have switched workloads, the source data overwrites the destination data, which may cause data loss. To prevent this, revoke write permissions from the DTS accounts on the destination database.
What if migration latency appears unusually high?
Latency is calculated from the timestamp of the latest migrated data versus the current source timestamp. If no DML operations occur on the source for a long period, the latency reading becomes stale. Perform a DML operation on the source to refresh the latency. If you selected the entire database as the migration object, create a heartbeat table that receives writes every second.