Data Transmission Service (DTS) migrates data from a self-managed Redis database to a Tair (Redis OSS-Compatible) instance with minimal downtime. Run full data migration and incremental data migration together to keep your applications online throughout the migration.
Prerequisites
Before you begin, make sure that:
The source self-managed Redis database and the destination Tair (Redis OSS-Compatible) instance are both created. To create a Tair instance, see Step 1: Create a Tair instance.
The destination Tair (Redis OSS-Compatible) instance uses direct connection mode. DTS does not support other connection modes.
The PSYNC or SYNC command can be run on the source database. For incremental data migration, the source database account must also have PSYNC and SYNC permissions.
The available storage space of the destination instance is larger than the total data size of the source database.
For supported source and destination database versions, see Overview of data migration scenarios.
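The storage-space prerequisite can be checked by comparing the `used_memory` value from the source's `INFO memory` output against the destination's capacity. A minimal sketch, assuming you have captured the INFO output as text; the sample excerpt and the 4 GB capacity figure below are illustrative, not from a real instance:

```python
def parse_used_memory(info_text: str) -> int:
    """Extract used_memory (bytes) from the output of Redis INFO memory."""
    for line in info_text.splitlines():
        if line.startswith("used_memory:"):
            return int(line.split(":", 1)[1])
    raise ValueError("used_memory not found in INFO output")

# Illustrative INFO memory excerpt captured from the source database.
source_info = """# Memory
used_memory:1073741824
used_memory_human:1.00G
"""

destination_capacity = 4 * 1024**3  # assume a 4 GB destination instance

source_bytes = parse_used_memory(source_info)
if destination_capacity > source_bytes:
    print(f"OK: destination has headroom ({source_bytes} bytes used at source)")
else:
    print("WARNING: destination storage is not larger than the source data size")
```

Note that `used_memory` includes Redis overhead, so this is a conservative upper bound on the data size.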
Migration types
DTS supports two migration types for this scenario.
| Migration type | What DTS migrates | Billed |
|---|---|---|
| Full data migration | All existing data in the source database at the time the task starts | Free |
| Incremental data migration | Data changes that occur during migration, enabling near-zero-downtime cutover | Yes — see Billing overview |
Run Full Data Migration + Incremental Data Migration together to migrate existing data and keep the destination in sync until you're ready to cut over.
If you run only full data migration, stop writes to the source database during migration to prevent data inconsistency.
Internet traffic fees apply when the destination Access Method is set to Public IP Address.
Commands supported for incremental data migration
Incremental data migration supports the following Redis commands:
APPEND; BITOP, BLPOP, BRPOP, BRPOPLPUSH; DECR, DECRBY, DEL; EVAL, EVALSHA, EXEC, EXPIRE, EXPIREAT; FLUSHALL, FLUSHDB; GEOADD, GETSET; HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, HSETNX; INCR, INCRBY, INCRBYFLOAT; LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, LTRIM; MOVE, MSET, MSETNX, MULTI; PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX, PUBLISH; RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, RPUSHX; SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, SUNIONSTORE; ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZUNIONSTORE, ZREMRANGEBYRANK, ZREMRANGEBYSCORE; XADD, XCLAIM, XDEL, XAUTOCLAIM, XGROUP CREATECONSUMER, XTRIM
Usage notes
Review the following constraints before starting the migration.
Source database constraints
| Constraint | Details |
|---|---|
| Bandwidth | The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Writes during full migration | Do not write to the source database during a full-only migration. Concurrent writes cause data inconsistency between source and destination. |
| Cluster cross-slot operations | When migrating from a standalone Redis database to a cluster architecture, each command must operate on a single slot. Cross-slot commands cause the following error and interrupt migration: CROSSSLOT Keys in request don't hash to the same slot. Perform single-key operations during migration. |
| Key expiration | Keys with an expiration policy may not be immediately deleted after they expire. As a result, the destination key count may be lower than the source. Run the INFO command on the destination to check key counts. Keys with expiration disabled or keys still within their TTL are consistent between source and destination. |
| Heartbeat key | DTS inserts a key prefixed with DTS_REDIS_TIMESTAMP_HEARTBEAT into the source database to track data updates. In cluster mode, DTS inserts this key into each shard. The key is filtered during migration and expires after the task completes. |
| Latency accuracy | If the source database is read-only or the source account lacks SETEX permission, the reported migration latency may be inaccurate. |
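The cross-slot constraint can be checked locally: Redis Cluster assigns each key to one of 16384 slots by taking CRC16 (XMODEM variant) of the key modulo 16384, honoring `{...}` hash tags. A sketch of the standard slot computation from the Redis Cluster specification, useful for predicting whether a multi-key command would trip the CROSSSLOT error:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, initial value 0), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Compute the Redis Cluster hash slot for a key, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, so multi-key commands
# on them remain safe when the destination is a cluster.
print(key_slot("{user42}.following") == key_slot("{user42}.followers"))  # True
```

Keys without a common hash tag usually land in different slots, which is why multi-key operations such as SINTERSTORE across unrelated keys fail on a cluster destination.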
Other constraints
| Constraint | Details |
|---|---|
| Performance impact | Full data migration consumes resources on both source and destination. Run migration during off-peak hours for large datasets or under-provisioned servers. |
| Memory eviction | The default eviction policy (maxmemory-policy) of the destination Tair instance is volatile-lru. If the destination runs low on memory, data inconsistency may occur without failing the task. Set the eviction policy to noeviction to prevent silent data loss — DTS will then fail writes explicitly if memory is insufficient, rather than silently evicting data. For details on eviction policies, see What is the default eviction policy of Tair? |
| Lua scripts | DTS cannot confirm whether Lua scripts called via EVAL or EVALSHA are executed on the destination during incremental migration, because the destination does not return explicit execution results for these commands. |
| LIST type duplicates | When DTS transfers LIST data using PSYNC or SYNC, it does not flush existing data in the destination instance. This may result in duplicate records in the destination. |
| Reconfiguration triggers | You must reconfigure the migration task if any of the following occur during migration: the number of shards in the source cluster changes, the source database specifications change (such as a memory scale-up), or the source endpoint changes. Delete migrated data from the destination before reconfiguring to ensure data consistency. |
| Task resumption risk | If a failed task is automatically resumed, data from the source may overwrite data in the destination. Stop or release the task before switching workloads to the destination. |
| Destination OOM | If a shard in the destination cluster reaches its memory limit, or if available storage is insufficient, the task fails with an out of memory (OOM) error. |
| TDE | If Transparent Data Encryption (TDE) is enabled on the destination database, DTS cannot migrate data. |
| Full re-migration triggers | Full data may be re-migrated — potentially causing inconsistency — if a resumable upload fails due to transient connection errors, a primary/secondary switchover or failover occurs, or an endpoint changes on either side. |
| TLS | If TLS (Transport Layer Security) is enabled on the destination Tair instance, set Encryption to SSL-encrypted (TLSv1.3 is not supported). Tair instances with TLS enabled cannot connect to DTS using the Alibaba Cloud Instance access method. |
| Task restart | If a task containing both full and incremental migration is restarted, DTS may restart both subtasks. |
| DTS support recovery | If a DTS task fails, DTS technical support will attempt to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be modified. Database parameters are not modified. |
If you select Ignore Errors and Proceed for conflicting keys, source data overwrites destination data with matching keys. This may cause data loss in the destination. Proceed with caution.
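To guard against the silent-eviction risk described above, verify the destination's eviction policy before migrating. A sketch that parses the text returned by `redis-cli config get maxmemory-policy` when run non-interactively (parameter name on one line, value on the next); the sample output below is illustrative:

```python
def eviction_policy(config_get_output: str) -> str:
    """Parse non-interactive 'redis-cli config get maxmemory-policy' output,
    which prints the parameter name on one line and its value on the next."""
    lines = [ln.strip() for ln in config_get_output.strip().splitlines()]
    try:
        idx = lines.index("maxmemory-policy")
    except ValueError:
        raise ValueError("maxmemory-policy not found in output")
    return lines[idx + 1]

# Illustrative captured output from the destination instance.
sample = """maxmemory-policy
volatile-lru
"""

policy = eviction_policy(sample)
if policy != "noeviction":
    print(f"WARNING: destination policy is {policy}; "
          "set it to noeviction before migrating to avoid silent eviction")
```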
Prepare for incremental data migration
Skip this section if you are running only full data migration.
Remove the replication output buffer limit on the source database to ensure the incremental migration task runs without interruption.
Connect to the source Redis database using redis-cli. Install the Redis client first if needed. See Redis community official website.
redis-cli -h <host> -p <port> -a <password>
Replace the following:
| Placeholder | Description | Example |
|---|---|---|
| <host> | Endpoint of the source database | 127.0.0.1 |
| <port> | Service port of the source database | 6379 (default) |
| <password> | Password of the source database | Test123456 |
Run the following command to remove the replication output buffer limit:
config set client-output-buffer-limit 'slave 0 0 0'
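You can confirm the change took effect with `config get client-output-buffer-limit`, whose value is a flat string of class names followed by hard-limit, soft-limit, and soft-seconds triples, such as `normal 0 0 0 slave 0 0 0 pubsub 33554432 8388608 60`. A sketch that extracts the limits for one client class from that value (the sample string below is assumed):

```python
def buffer_limits(config_value: str, cls: str = "slave") -> tuple:
    """Extract (hard, soft, soft_seconds) for one client class from the
    client-output-buffer-limit config value string."""
    parts = config_value.split()
    i = parts.index(cls)
    return int(parts[i + 1]), int(parts[i + 2]), int(parts[i + 3])

# Illustrative value after running: config set client-output-buffer-limit 'slave 0 0 0'
value = "normal 0 0 0 slave 0 0 0 pubsub 33554432 8388608 60"

hard, soft, soft_secs = buffer_limits(value, "slave")
print((hard, soft, soft_secs) == (0, 0, 0))  # True: replication buffer limit removed
```

All-zero limits mean the replica output buffer can grow without the server disconnecting the DTS replication client mid-migration.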
Create a migration task
Step 1: Open the Data Migration page
Use one of the following consoles:
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance resides.
DMS console
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.
From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.
Step 2: Configure source and destination databases
Click Create Task, then configure the following parameters.
Task name
| Parameter | Description |
|---|---|
| Task Name | A name for the DTS task. DTS generates one automatically. Use a descriptive name to identify the task — it does not need to be unique. |
Source database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a database instance already registered with DTS to auto-populate connection parameters. See Manage database connections. If the instance is not registered, configure the parameters manually. |
| Database Type | Select Tair/Redis. |
| Access Method | Select Self-managed Database on ECS. For other access methods, see Preparation overview. |
| Instance Region | The region where the Elastic Compute Service (ECS) instance hosting the source Redis database resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| ECS Instance ID | The ID of the ECS instance hosting the source Redis database. For cluster-mode sources, select the ID of the ECS instance running a master node, and add the CIDR block of DTS servers to the security group rules of each ECS instance. See Create a security group to create one, Associate security groups with an instance (primary ENI) to associate it, and Add the CIDR blocks of DTS servers for the CIDR block list. |
| Instance Mode | The deployment architecture of the source Redis database: Standalone or Cluster. If Access Method is set to Public IP Address, Cluster is not supported. |
| Port | The service port of the source database. Default: 6379. For cluster-mode sources, enter the port of a master node. |
| Authentication Method | Select based on your setup. Account + Password Login requires Redis 6.0 or later. For Secret-free login, enable password-free access on the source database first. |
| Database Password | The password for the source database. Leave blank if no password is set. Format: <user>:<password> (for example, admin:Rp829dlwa). |
| Encryption | Select Non-encrypted or SSL-encrypted. If Access Method is not Alibaba Cloud Instance and you select SSL-encrypted, upload a CA Certificate and enter a CA Key. |
Destination database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a registered DTS instance to auto-populate connection parameters, or configure manually. |
| Database Type | Select Tair/Redis. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination Tair (Redis OSS-Compatible) instance resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| Instance ID | The ID of the destination Tair (Redis OSS-Compatible) instance. |
| Authentication Method | Select based on your setup. Account + Password Login requires Redis 6.0 or later. For Secret-free login, enable password-free access on the destination instance first. See Enable password-free access. |
| Database Password | The password for the destination instance. Format: <user>:<password> (for example, admin:Rp829dlwa). |
| Encryption | Select Non-encrypted or SSL-encrypted. |
Step 3: Test connectivity and proceed
Click Test Connectivity and Proceed at the bottom of the page.
DTS servers must be able to access both the source and destination databases. Add the CIDR blocks of DTS servers to the security settings of both databases before proceeding — see Add the CIDR blocks of DTS servers for instructions. If the source or destination uses an access method other than Alibaba Cloud Instance, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 4: Configure objects to migrate
On the Configure Objects page, set the following parameters.
| Parameter | Description |
|---|---|
| Migration Types | Select Full Data Migration + Incremental Data Migration. If the source account lacks SYNC or PSYNC permission, select Full Data Migration only. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: checks that the destination database is empty before starting. An error is returned if the destination is not empty. Ignore Errors and Proceed: skips the empty-destination check. Source data overwrites destination data with the same keys. |
| Source Objects | Select one or more databases from Source Objects, then click the right arrow to add them to Selected Objects. You can select databases but not individual keys. |
| Selected Objects | To map a database (DB 0 to DB 255) or filter by key prefix, right-click a database in Selected Objects and configure options in the Edit Schema dialog box. For instructions, see Map object names for renaming databases and Set filter conditions for prefix filtering. Object name mapping must be done one at a time. |
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1,440 minutes. Default: 720. Set to at least 30 minutes. If multiple tasks share the same source or destination, the value set last takes precedence. DTS charges for the instance during retry periods. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML failures. Valid values: 1–1,440 minutes. Default: 10. Set to at least 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read/write load on the source and destination during full migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits load on the destination during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Extend Expiration Time of Destination Database Key | Extends the TTL of migrated keys by the specified number of seconds. Set this when using commands such as expire key seconds, pexpire key milliseconds, expireat key timestamp, or pexpireat key timestampMs to maintain data consistency. Note that this may delay the release of distributed locks. |
| Use Slave Node | Available when the source Instance Mode is Cluster. Specifies whether to read data from slave (replica) nodes. Default: No (reads from master nodes). |
| Environment Tag | An optional tag to identify the DTS instance. |
| Configure ETL | Enables the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Configure the alert threshold and notification contacts. See Configure monitoring and alerting when you create a DTS task. |
Click Next Step: Data Verification to configure data verification. For details, see Configure a data verification task.
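The two retry-time parameters have interlocking constraints (valid ranges, recommended minimums, and a strict ordering). A small helper that encodes the documented rules, which may be handy when setting these values programmatically; the function name and structure are my own, not part of any DTS SDK:

```python
def validate_retry_times(conn_minutes: int, other_minutes: int) -> list:
    """Check DTS retry settings against the documented constraints.

    Returns a list of problems; an empty list means the values are acceptable.
    """
    problems = []
    if not 10 <= conn_minutes <= 1440:
        problems.append("Retry Time for Failed Connections must be 10-1440 minutes")
    if not 1 <= other_minutes <= 1440:
        problems.append("Retry Time for Other Issues must be 1-1440 minutes")
    if conn_minutes < 30:
        problems.append("Retry Time for Failed Connections should be at least 30 minutes")
    if other_minutes < 10:
        problems.append("Retry Time for Other Issues should be at least 10 minutes")
    if other_minutes >= conn_minutes:
        problems.append("Retry Time for Other Issues must be less than "
                        "Retry Time for Failed Connections")
    return problems

print(validate_retry_times(720, 10))  # [] -- the defaults pass every check
```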
Step 5: Save settings and run the precheck
To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck to save and start the precheck.
The task can only start after it passes the precheck.
If the precheck fails, click View Details next to each failed item. Fix the issues, then click Precheck Again.
If a precheck alert appears:
For alerts that cannot be skipped: fix the issue and rerun the precheck.
For alerts that can be skipped: click Confirm Alert Details > Ignore > OK, then click Precheck Again. Skipping an alert may result in data inconsistency.
Step 6: Purchase the migration instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the instance class.
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| New Instance Class | Instance Class | Determines migration speed. Select based on your requirements. See Instance classes of data migration instances. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
Track progress on the Data Migration page.
For tasks with both full and incremental migration selected, the Data Migration page displays Incremental Data Migration as the active status.
What's next
After incremental data migration is complete, stop or release the migration task before switching your applications to the destination. This prevents the source from overwriting data written to the destination after the cutover.
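A common sanity check before cutover is to compare per-database key counts from `INFO keyspace` on both sides, remembering from the usage notes that expired-but-not-yet-deleted keys can make the destination count slightly lower. A sketch, with illustrative INFO excerpts standing in for real captures:

```python
import re

def keyspace_counts(info_text: str) -> dict:
    """Parse 'dbN:keys=X,expires=Y,...' lines from Redis INFO keyspace output."""
    counts = {}
    for m in re.finditer(r"^(db\d+):keys=(\d+)", info_text, re.MULTILINE):
        counts[m.group(1)] = int(m.group(2))
    return counts

# Illustrative INFO keyspace excerpts from the source and destination.
source_info = "# Keyspace\ndb0:keys=1500,expires=200,avg_ttl=0\ndb1:keys=30,expires=0,avg_ttl=0\n"
dest_info = "# Keyspace\ndb0:keys=1498,expires=198,avg_ttl=0\ndb1:keys=30,expires=0,avg_ttl=0\n"

dest_counts = keyspace_counts(dest_info)
for db, n in keyspace_counts(source_info).items():
    m = dest_counts.get(db, 0)
    status = "OK" if m == n else "check (difference may be expired keys)"
    print(f"{db}: source={n} destination={m} {status}")
```

A small shortfall on the destination in databases that use TTLs is expected; a large or growing gap warrants investigation before you release the task.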