Use Data Transmission Service (DTS) to migrate data between Tair (Redis OSS-compatible) instances with minimal downtime. DTS supports full data migration and incremental data migration, so you can keep your source instance running while the migration completes.
Prerequisites
Before you begin, make sure that you have:

- A source and a destination Tair (Redis OSS-compatible) instance. To create instances, see Step 1: Create an instance.
- Enough free storage on the destination instance. The available storage must exceed the storage used by the source instance.
- A source instance account with read permissions, and a destination instance account with read and write permissions.

For supported database versions, see Overview of migration solutions.
Choose a migration type
Select the migration type based on your downtime tolerance:
| Migration type | How it works | When to use |
|---|---|---|
| Full data migration | Copies all existing data from source to destination. Free of charge. When the destination Access Method is set to Public IP Address, Internet traffic fees apply — see Billing overview. | You can tolerate a maintenance window and will stop writes to the source during migration. |
| Full + incremental data migration | Copies existing data, then continuously replicates writes until you cut over. Incremental phase is billed — see Billing overview. | You need zero or near-zero downtime. Recommended for production workloads. |
If you select full data migration only, stop all writes to the source instance during the migration. Writes to the source after the snapshot is taken are not reflected in the destination, causing data inconsistency.
Limitations
Review these limitations before starting your migration task.
Source instance limits:

- The source instance version cannot be 2.8.
- The server hosting the source instance must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
- DTS writes a heartbeat key prefixed with `DTS_REDIS_TIMESTAMP_HEARTBEAT` to the source instance (one per shard in cluster deployments). The key is filtered out during migration and expires after the task completes.
- If the source is a Tair (Enterprise Edition) instance whose Storage Medium is Persistent Memory, set the `appendonly` parameter to `yes` before migrating. See Disable AOF persistence.
- If the source is read-only, or the account lacks the `SETEX` permission, reported migration latency may be inaccurate.
- Cluster architecture: each Redis command operates on a single slot. Cross-slot operations on the source cause the following error and interrupt the migration task: `CROSSSLOT Keys in request don't hash to the same slot`. Operate on one key at a time during migration.
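To see why a multi-key operation can trip this error, you can compute each key's hash slot yourself. The sketch below implements the CRC16 (XModem) slot mapping described in the Redis Cluster specification, including hash-tag handling; it is an illustration for checking your own keys, not part of DTS.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM, the checksum Redis Cluster uses for key slotting."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the cluster slot (0-16383) for a key, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, so multi-key commands
# on them are safe in cluster mode; unrelated keys usually are not.
print(key_slot("{user:1}:cart") == key_slot("{user:1}:orders"))  # True
```

Commands touching keys that map to different slots are the ones that raise the `CROSSSLOT` error.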
Destination instance limits:

- Storage-optimized instances of Tair (Redis OSS-compatible) Enhanced Edition cannot be used as the source or destination.
- The destination instance version must be the same as or later than the source version.
- If Transparent Data Encryption (TDE) is enabled on the destination, DTS cannot migrate data to it.
- If a destination cluster shard runs out of memory or storage, the migration task fails with an out of memory (OOM) error.
- The default eviction policy (`maxmemory-policy`) is `volatile-lru`. If the destination runs low on memory, data inconsistency may result. Set the eviction policy to `noeviction` so that DTS fails writes explicitly rather than silently evicting data. For details, see How does Tair (Redis OSS-Compatible) evict data by default?
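To see why `noeviction` is the safer setting for a migration target, here is a toy simulation of the two failure modes. It is not Redis itself, just a minimal model of a memory-capped store:

```python
class ToyStore:
    """Toy model of a memory-capped key store; not real Redis internals."""

    def __init__(self, max_keys: int, policy: str = "noeviction"):
        self.max_keys = max_keys
        self.policy = policy
        self.data = {}

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.max_keys:
            if self.policy == "noeviction":
                # The write fails loudly, so the migration task reports
                # an error you can act on.
                raise MemoryError("OOM: used memory exceeds 'maxmemory'")
            # An LRU-style policy silently drops some other key instead,
            # which is how destination data can quietly diverge.
            self.data.pop(next(iter(self.data)))
        self.data[key] = value

store = ToyStore(max_keys=2, policy="noeviction")
store.set("a", 1)
store.set("b", 2)
try:
    store.set("c", 3)
except MemoryError as exc:
    print("write rejected:", exc)
```

With an eviction policy other than `noeviction`, the third write would succeed by evicting an earlier key, and the inconsistency would go unnoticed until verification.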
Other limits:

- DTS does not flush existing LIST data in the destination before writing. If the destination already has LIST keys, duplicate records may appear.
- `EVAL` and `EVALSHA` Lua scripts: DTS cannot confirm whether these scripts are executed on the destination during incremental migration.
- Keys with expiration policies may not be deleted immediately after they expire, so the destination key count may be lower than the source key count.
- If the Access Method for a self-managed Redis database is Public IP Address, you cannot set Instance Mode to Cluster.
- TLS: if TLS is enabled on a Tair instance, connect by using the SSL-encrypted option. TLSv1.3 is not supported, and you cannot connect to an SSL-enabled instance by using the Alibaba Cloud Instance access method.
- Changing the instance configuration (such as specifications or the port number) during migration interrupts the task. Delete the destination data and reconfigure the migration task after any configuration change.
- DTS retries failed migration tasks for up to seven days. Before you switch workloads to the destination, stop or release any failed DTS tasks. Otherwise, a resumed task may overwrite destination data with stale source data.
- If a data migration instance contains both full and incremental data migration tasks, DTS may restart both tasks after the instance restarts.
- If the instance fails to run, DTS technical support attempts to restore it within 8 hours. During restoration, the instance may be restarted, or DTS instance parameters may be adjusted (database parameters are not modified). For the parameters that may be adjusted, see Modify instance parameters.
- The following events may cause a full re-migration, leading to potential data inconsistency:
  - A resumable upload failure caused by transient connections
  - A primary/secondary switchover or failover on the source or destination
  - An endpoint change on the source or destination
Self-managed Redis only:

- A primary/secondary switchover on the source during migration causes the task to fail.
- Migration latency is calculated by comparing the timestamp of the latest migrated record against the current source time. If the source has no DML activity for an extended period, the reported latency may be inaccurate. To keep the latency accurate, create a heartbeat table that writes every second when you migrate an entire database.
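The heartbeat pattern is simple to reproduce: write a timestamped key with a short TTL once per second. The sketch below uses a `FakeRedis` stand-in so it runs without a server; the key name `app:heartbeat` and the TTL are arbitrary choices, and in practice you would substitute a real client (for example, redis-py's `setex`).

```python
import time

class FakeRedis:
    """Minimal stand-in for a Redis client, for demonstration only."""

    def __init__(self):
        self.store = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value with an absolute expiry, like SETEX would.
        self.store[key] = (value, time.time() + ttl_seconds)

def write_heartbeat(client, key="app:heartbeat", ttl=5):
    """Write the current timestamp so latency can be measured from it."""
    now = int(time.time())
    client.setex(key, ttl, now)
    return now

client = FakeRedis()
# In production, run this once per second (e.g. in a background thread)
# so the latest-record timestamp keeps advancing even when the app is idle.
ts = write_heartbeat(client)
```

Because the heartbeat key is constantly rewritten, the "latest migrated record" always has a fresh timestamp and the reported latency stays meaningful.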
Supported commands for incremental data migration
DTS supports the following Redis commands during incremental data migration:

- `APPEND`
- `BITOP`, `BLPOP`, `BRPOP`, `BRPOPLPUSH`
- `DECR`, `DECRBY`, `DEL`
- `EVAL`, `EVALSHA`, `EXEC`, `EXPIRE`, `EXPIREAT`
- `FLUSHALL`, `FLUSHDB`
- `GEOADD`, `GETSET`
- `HDEL`, `HINCRBY`, `HINCRBYFLOAT`, `HMSET`, `HSET`, `HSETNX`
- `INCR`, `INCRBY`, `INCRBYFLOAT`
- `LINSERT`, `LPOP`, `LPUSH`, `LPUSHX`, `LREM`, `LSET`, `LTRIM`
- `MOVE`, `MSET`, `MSETNX`, `MULTI`
- `PERSIST`, `PEXPIRE`, `PEXPIREAT`, `PFADD`, `PFMERGE`, `PSETEX`, `PUBLISH`
- `RENAME`, `RENAMENX`, `RESTORE`, `RPOP`, `RPOPLPUSH`, `RPUSH`, `RPUSHX`
- `SADD`, `SDIFFSTORE`, `SELECT`, `SET`, `SETBIT`, `SETEX`, `SETNX`, `SETRANGE`, `SINTERSTORE`, `SMOVE`, `SPOP`, `SREM`, `SUNIONSTORE`
- `ZADD`, `ZINCRBY`, `ZINTERSTORE`, `ZREM`, `ZREMRANGEBYLEX`, `ZREMRANGEBYRANK`, `ZREMRANGEBYSCORE`, `ZUNIONSTORE`
- `XADD`, `XAUTOCLAIM`, `XCLAIM`, `XDEL`, `XGROUP CREATECONSUMER`, `XTRIM`
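If your workload issues write commands outside this list, you can screen them ahead of time. The sketch below is a quick client-side check, not anything DTS does internally; `SUPPORTED` here holds only a subset of the commands above, so extend it before real use.

```python
# Subset of the incremental-migration command list above; extend as needed.
SUPPORTED = {
    "APPEND", "DEL", "EXPIRE", "HSET", "LPUSH", "RPUSH",
    "SADD", "SET", "SETEX", "ZADD", "XADD",
}

def is_replicable(command_line: str) -> bool:
    """Return True if the command's verb is in the supported set."""
    verb = command_line.split()[0].upper()
    return verb in SUPPORTED

print(is_replicable("SET session:42 abc"))  # True
print(is_replicable("COPY src dst"))        # False: not in the list above
```

Running such a check against a sample of your application's write traffic (for example, from `MONITOR` output captured in a test environment) helps surface commands that incremental migration would not replay.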
Create a migration task
Step 1: Go to the Data Migration page
Use either the DTS console or the DMS console.
DTS console
1. Log on to the DTS console.
2. In the left-side navigation pane, click Data Migration.
3. In the upper-left corner, select the region where your migration instance resides.
DMS console
Steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
1. Log on to the DMS console.
2. In the top navigation bar, go to Data + AI > DTS > Data Migration.
3. From the dropdown to the right of Data Migration Tasks, select the region where your migration instance resides.
Step 2: Configure the source and destination databases
Click Create Task to open the task configuration page.
Configure the source and destination databases using the parameters in the following table.
| Category | Parameter | Description |
|---|---|---|
| N/A | Task Name | Enter a descriptive name for the DTS task. The name does not need to be unique. |
| Source Database | Select Existing Connection | Select a registered database instance from the dropdown to auto-fill the fields below. If the instance is not registered, fill in the fields manually. |
| Source Database | Database Type | Select Tair/Redis. |
| Source Database | Access Method | Select Alibaba Cloud Instance. |
| Source Database | Instance Region | Select the region of the source Tair (Redis OSS-compatible) instance. |
| Source Database | Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| Source Database | Instance ID | Select the source instance ID. |
| Source Database | Authentication Method | Select an authentication method. This example uses Password Login. Account + Password Login requires Redis 6.0 or later. For Secret-free login, enable password-free access on the instance first; see Enable password-free access. |
| Source Database | Database Password | Enter the password for the source instance (read permissions required). For the default account, enter the password only. For a custom account, use the format `<account>:<password>`, for example `testaccount:Test1234`. Leave blank if no password is set. To reset a forgotten password, see Change or reset a password. |
| Source Database | Encryption | Select Non-encrypted or SSL-encrypted. For self-managed Redis with SSL, upload a CA Certificate and enter a CA Key. |
| Destination Database | Select Existing Connection | Same as the source: select a registered instance or fill in the fields manually. |
| Destination Database | Database Type | Select Tair/Redis. |
| Destination Database | Access Method | Select Alibaba Cloud Instance. |
| Destination Database | Instance Region | Select the region of the destination Tair (Redis OSS-compatible) instance. |
| Destination Database | Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| Destination Database | Instance ID | Select the destination instance ID. |
| Destination Database | Authentication Method | Same options as the source. |
| Destination Database | Database Password | Enter the password for the destination instance (read and write permissions required). Use the same password format as the source. |
| Destination Database | Encryption | Select Non-encrypted or SSL-encrypted. |
Click Test Connectivity and Proceed.
Add the CIDR blocks of DTS servers to the security settings of your source and destination instances. See Add the CIDR blocks of DTS servers. For self-managed databases not using Alibaba Cloud Instance access, click Test Connectivity in the CIDR Blocks of DTS Servers dialog.
Step 3: Select objects and configure advanced settings
On the Configure Objects page, set the following parameters.
| Parameter | Description |
|---|---|
| Migration Types | Select Full Data Migration and Incremental Data Migration for production workloads that require near-zero downtime. If the DTS account lacks `SYNC` or `PSYNC` permissions on the source, select Full Data Migration only. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: verifies that the destination is empty before starting, and fails the precheck if it is not. Ignore Errors and Proceed: skips the empty-destination check. Warning: source data overwrites destination data for matching keys, which may cause data loss. |
| Source Objects | Select one or more databases from the Source Objects list, then click the arrow icon to move them to Selected Objects. To migrate specific keys, use the data filtering feature in the Selected Objects box. |
| Selected Objects | To map to a specific destination database (DB 0 to DB 255) or filter by key prefix, right-click a database in the Selected Objects box, then configure the Edit Schema dialog. See Schema mapping and Set filter conditions. |

Click Next: Advanced Settings and configure the following.

| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses a shared cluster. Purchase a dedicated cluster for better migration stability. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set it to at least 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, the task fails. Note: when multiple tasks share the same source or destination, the most recently set retry time takes precedence, and DTS charges for the instance during retries. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML failures. Valid values: 1 to 1,440 minutes. Default: 10 minutes. Set it to greater than 10 minutes, but shorter than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits full migration throughput to reduce load on the source and destination. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Migration Types includes Full Data Migration. |
| Enable Throttling for Incremental Data Migration | Limits incremental migration throughput. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Migration Types includes Incremental Data Migration. |
| Extend Expiration Time of Destination Database Key | Adds extra expiration time to keys migrated to the destination. Use this when your application uses commands such as `EXPIRE`, `PEXPIRE`, `EXPIREAT`, or `PEXPIREAT`, to avoid keys expiring before cutover. Note: this may delay the release of distributed locks. |
| Environment Tag | Optional. Select a tag to label the instance. |
| Configure ETL | Select Yes to configure extract, transform, and load (ETL) rules, then enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to skip. |
| Monitoring and Alerting | Select Yes to receive alerts when the task fails or migration latency exceeds a threshold, then configure the alert threshold and notification settings. See Configure monitoring and alerting. |

Click Next Step: Data Verification to configure optional data verification. See Configure a data verification task.
Step 4: Run the precheck
Click Next: Save Task Settings and Precheck.
To view the corresponding API parameters before saving, hover over the button and click Preview OpenAPI parameters.
Wait for the precheck to complete. DTS must pass the precheck before the migration task starts.
If a check item fails, click View Details next to the failed item, fix the issue, then click Precheck Again.
If an alert is triggered and cannot be ignored, fix the issue and run the precheck again. If it can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Note that ignoring an alert may cause data inconsistency.
Step 5: Purchase and start the instance
Wait for Success Rate to reach 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following.
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| New Instance Class | Instance Class | Select an instance class based on the required migration speed. See Instance classes of data migration instances. |

Read and select the checkbox to agree to the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy And Start, then click OK in the confirmation dialog.
Track progress on the data migration page.
If the task includes both full and incremental migration, it appears as Incremental Data Migration on the task list.
After the migration
Once migration is running (or complete for full-only migrations), complete these steps before decommissioning the source instance.
1. Verify data consistency
Compare key counts and spot-check data in the destination instance. Keep in mind that expired keys may cause a slightly lower count on the destination.
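A minimal spot-check can compare per-database key counts while tolerating a slightly lower destination count caused by expired keys. The sketch below is a hypothetical helper; the counts are made-up examples, and in practice you would fill the dictionaries from `DBSIZE` on each instance.

```python
def check_key_counts(source_counts, dest_counts, tolerance=0.01):
    """Flag databases whose destination count deviates beyond tolerance.

    Expired keys may make the destination slightly lower than the source,
    so a small relative shortfall is accepted; any excess is flagged.
    """
    suspect = []
    for db, src in source_counts.items():
        dst = dest_counts.get(db, 0)
        if dst > src or (src - dst) > src * tolerance:
            suspect.append((db, src, dst))
    return suspect

# Hypothetical DBSIZE results per logical database.
source = {0: 100_000, 1: 5_000}
dest = {0: 99_950, 1: 5_200}  # db 1 exceeds the source count: suspicious
print(check_key_counts(source, dest))  # [(1, 5000, 5200)]
```

A destination count above the source (as in db 1 here) usually means pre-existing destination data, which is worth investigating before cutover.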
If you configured data verification during task setup, check the verification results in the DTS console.
2. Switch application traffic
Update your application's connection string to point to the destination instance. For incremental migrations, wait until the migration latency drops to zero or near zero before switching — this confirms the destination is fully caught up.
3. Stop and release the DTS task
After confirming your application works correctly with the destination instance, stop the DTS migration task. DTS retries failed tasks for up to seven days, so releasing the task prevents it from resuming and overwriting data on the destination.