
Data Transmission Service: Migrate data from a self-managed Redis database to a Tair (Redis OSS-Compatible) instance

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) migrates data from a self-managed Redis database to a Tair (Redis OSS-Compatible) instance with minimal downtime. Run full data migration and incremental data migration together to keep your applications online throughout the migration.

Prerequisites

Before you begin, make sure that:

  • The source self-managed Redis database and the destination Tair (Redis OSS-Compatible) instance are both created. To create a Tair instance, see Step 1: Create a Tair instance.

  • The destination Tair (Redis OSS-Compatible) instance uses direct connection mode. DTS does not support other connection modes.

  • The PSYNC or SYNC command can be run on the source database. For incremental data migration, the source database account must also have PSYNC and SYNC permissions.

  • The available storage space of the destination instance is larger than the total data size of the source database.

  • For supported source and destination database versions, see Overview of data migration scenarios.
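
To check the storage-space prerequisite, you can compare the source database's used_memory (reported by the INFO memory command) against the destination instance's capacity. A minimal sketch that parses captured INFO output; the helper name and the 2 GB destination capacity are illustrative assumptions:

```python
def used_memory_bytes(info_text: str) -> int:
    """Extract used_memory (in bytes) from the output of INFO memory."""
    for line in info_text.splitlines():
        if line.startswith("used_memory:"):
            return int(line.split(":", 1)[1])
    raise ValueError("used_memory not found in INFO output")

# Example INFO memory excerpt captured from a source database.
sample = "# Memory\nused_memory:1073741824\nused_memory_human:1.00G\n"

dest_capacity = 2 * 1024 ** 3  # hypothetical 2 GB destination instance
assert used_memory_bytes(sample) <= dest_capacity  # enough headroom
```

If the assertion fails, scale up the destination instance before you configure the task.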

Migration types

DTS supports two migration types for this scenario.

| Migration type | What DTS migrates | Billed |
| --- | --- | --- |
| Full data migration | All existing data in the source database at the time the task starts | Free |
| Incremental data migration | Data changes that occur during migration, enabling near-zero-downtime cutover | Yes. See Billing overview. |

Run Full Data Migration + Incremental Data Migration together to migrate existing data and keep the destination in sync until you are ready to cut over.

If you run only full data migration, stop writes to the source database during migration to prevent data inconsistency.

Internet traffic fees apply when the destination Access Method is set to Public IP Address.

Commands supported for incremental data migration

Incremental data migration supports the following Redis commands:

  • APPEND
  • BITOP, BLPOP, BRPOP, BRPOPLPUSH
  • DECR, DECRBY, DEL
  • EVAL, EVALSHA, EXEC, EXPIRE, EXPIREAT
  • FLUSHALL, FLUSHDB
  • GEOADD, GETSET
  • HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, HSETNX
  • INCR, INCRBY, INCRBYFLOAT
  • LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, LTRIM
  • MOVE, MSET, MSETNX, MULTI
  • PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX, PUBLISH
  • RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, RPUSHX
  • SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, SUNIONSTORE
  • XADD, XAUTOCLAIM, XCLAIM, XDEL, XGROUP CREATECONSUMER, XTRIM
  • ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZREMRANGEBYRANK, ZREMRANGEBYSCORE, ZUNIONSTORE

Usage notes

Review the following constraints before starting the migration.

Source database constraints

| Constraint | Details |
| --- | --- |
| Bandwidth | The server that hosts the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Writes during full migration | Do not write to the source database during a full-only migration. Concurrent writes cause data inconsistency between the source and destination. |
| Cluster cross-slot operations | When you migrate from a standalone Redis database to a cluster architecture, each command must operate on a single slot. Cross-slot commands cause the following error and interrupt migration: CROSSSLOT Keys in request don't hash to the same slot. Perform only single-key operations during migration. |
| Key expiration | Keys with an expiration policy may not be deleted immediately after they expire. As a result, the key count on the destination may be lower than on the source. Run the INFO command on the destination to check key counts. Keys without an expiration policy, or keys still within their TTL, remain consistent between the source and destination. |
| Heartbeat key | DTS inserts a key prefixed with DTS_REDIS_TIMESTAMP_HEARTBEAT into the source database to track data updates. In cluster mode, DTS inserts this key into each shard. The key is filtered out during migration and expires after the task is complete. |
| Latency accuracy | If the source database is read-only or the source account lacks the SETEX permission, the reported migration latency may be inaccurate. |
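
The cross-slot constraint exists because Redis Cluster maps every key to one of 16384 slots: CRC16 of the key, or of the hash-tag substring between the first { and the next }, modulo 16384. A minimal sketch of that mapping, useful for checking whether two keys land in the same slot before you migrate; the function names are illustrative:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the Redis Cluster hash slot (0-16383) for a key,
    honoring the {hash tag} rule for multi-key operations."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag only
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys that share a hash tag always land in the same slot.
assert key_slot("{user1}:name") == key_slot("{user1}:age")
```

Multi-key commands such as MSET succeed in cluster mode only when all keys hash to the same slot, which is why hash tags like {user1} are the usual workaround.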

Other constraints

| Constraint | Details |
| --- | --- |
| Performance impact | Full data migration consumes resources on both the source and destination. For large datasets or under-provisioned servers, run the migration during off-peak hours. |
| Memory eviction | The default eviction policy (maxmemory-policy) of the destination Tair instance is volatile-lru. If the destination runs low on memory, data inconsistency may occur without the task failing. To prevent silent data loss, set the eviction policy to noeviction. DTS then fails writes explicitly when memory is insufficient instead of silently evicting data. For details on eviction policies, see What is the default eviction policy of Tair? |
| Lua scripts | DTS cannot confirm whether Lua scripts called by using EVAL or EVALSHA are executed on the destination during incremental migration, because the destination does not return explicit execution results for these commands. |
| LIST type duplicates | When DTS transfers LIST data by using PSYNC or SYNC, it does not flush existing data in the destination instance. This may result in duplicate records in the destination. |
| Reconfiguration triggers | You must reconfigure the migration task if any of the following occurs during migration: the number of shards in the source cluster changes, the source database specifications change (such as a memory scale-up), or the source endpoint changes. Delete migrated data from the destination before you reconfigure the task to ensure data consistency. |
| Task resumption risk | If a failed task is automatically resumed, data from the source may overwrite data in the destination. Stop or release the task before you switch workloads to the destination. |
| Destination OOM | If a shard in the destination cluster reaches its memory limit, or if available storage is insufficient, the task fails with an out of memory (OOM) error. |
| TDE | If Transparent Data Encryption (TDE) is enabled on the destination database, DTS cannot migrate data. |
| Full re-migration triggers | Full data may be re-migrated, potentially causing inconsistency, if a resumable upload fails because of transient connection errors, if a primary/secondary switchover or failover occurs, or if an endpoint changes on either side. |
| TLS | If TLS (Transport Layer Security) is enabled on the destination Tair instance, set Encryption to SSL-encrypted when you connect. TLSv1.3 is not supported. Tair instances with TLS enabled cannot be connected to DTS by using the Alibaba Cloud Instance access method. |
| Task restart | If a task that contains both full and incremental migration is restarted, DTS may restart both subtasks. |
| DTS support recovery | If a DTS task fails, DTS technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be modified. Database parameters are not modified. |
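
As the key-expiration constraint notes, you verify migration completeness by comparing key counts from the INFO command on the source and destination. A small sketch that parses the Keyspace section of INFO output into per-database key counts; the helper name is illustrative:

```python
def keyspace_counts(info_text: str) -> dict:
    """Parse the '# Keyspace' section of Redis INFO output into
    a mapping of database name to key count, e.g. {'db0': 120}."""
    counts = {}
    for line in info_text.splitlines():
        # Keyspace lines look like: db0:keys=120,expires=10,avg_ttl=0
        if line.startswith("db") and ":" in line:
            db, stats = line.split(":", 1)
            fields = dict(f.split("=") for f in stats.split(","))
            counts[db] = int(fields["keys"])
    return counts

sample = "# Keyspace\ndb0:keys=120,expires=10,avg_ttl=0\ndb1:keys=3,expires=0,avg_ttl=0\n"
assert keyspace_counts(sample) == {"db0": 120, "db1": 3}
```

A destination count slightly below the source count can be explained by expired keys that have not yet been reclaimed; a large gap warrants investigation.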
Warning

If you select Ignore Errors and Proceed for conflicting keys, source data overwrites destination data with matching keys. This may cause data loss in the destination. Proceed with caution.

Prepare for incremental data migration

Skip this section if you are running only full data migration.

Remove the replication output buffer limit on the source database to ensure the incremental migration task runs without interruption.

  1. Connect to the source Redis database by using redis-cli. If the Redis client is not installed, install it first. See the Redis community official website.

    redis-cli -h <host> -p <port> -a <password>

    Replace the following placeholders:

    | Placeholder | Description | Example |
    | --- | --- | --- |
    | <host> | Endpoint of the source database | 127.0.0.1 |
    | <port> | Service port of the source database | 6379 (default) |
    | <password> | Password of the source database | Test123456 |

  2. Run the following command to remove the replication output buffer limit:

    config set client-output-buffer-limit 'slave 0 0 0'
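
The client-output-buffer-limit value is a flat list of class / hard-limit / soft-limit / soft-seconds quadruples, and setting the slave class to 0 0 0 disables the replication output buffer limit. To sanity-check the value returned by CONFIG GET client-output-buffer-limit, a small sketch; the function name is illustrative:

```python
def parse_buffer_limits(value: str) -> dict:
    """Parse a client-output-buffer-limit config value into
    {class: (hard_limit_bytes, soft_limit_bytes, soft_seconds)}."""
    fields = value.split()
    return {
        fields[i]: (int(fields[i + 1]), int(fields[i + 2]), int(fields[i + 3]))
        for i in range(0, len(fields), 4)
    }

# Typical value after running the CONFIG SET command above.
limits = parse_buffer_limits("normal 0 0 0 slave 0 0 0 pubsub 33554432 8388608 60")
assert limits["slave"] == (0, 0, 0)  # replication buffer limit removed
```

If the slave entry is nonzero, the incremental migration task can be interrupted when the replication backlog outgrows the buffer.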

Create a migration task

Step 1: Open the Data Migration page

Use one of the following consoles:

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance resides.

DMS console

The actual steps vary depending on the DMS console mode and layout. For more information, see Simple mode in the DMS documentation, or Customize the layout and style of the DMS console if you use a custom layout.
  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

Step 2: Configure source and destination databases

Click Create Task, then configure the following parameters.

Task name

| Parameter | Description |
| --- | --- |
| Task Name | A name for the DTS task. DTS generates a name automatically. Use a descriptive name to identify the task. The name does not need to be unique. |

Source database

| Parameter | Description |
| --- | --- |
| Select Existing Connection | Select a database instance that is already registered with DTS to auto-populate the connection parameters. See Manage database connections. If the instance is not registered, configure the parameters manually. |
| Database Type | Select Tair/Redis. |
| Access Method | Select Self-managed Database on ECS. For other access methods, see Preparation overview. |
| Instance Region | The region where the Elastic Compute Service (ECS) instance that hosts the source Redis database resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| ECS Instance ID | The ID of the ECS instance that hosts the source Redis database. For cluster-mode sources, select the ID of an ECS instance that runs a master node, and add the CIDR blocks of DTS servers to the security group rules of each ECS instance. See Create a security group to create one, Associate security groups with an instance (primary ENI) to associate it, and Add the CIDR blocks of DTS servers for the CIDR block list. |
| Instance Mode | The deployment architecture of the source Redis database: Standalone or Cluster. If Access Method is set to Public IP Address, Cluster is not supported. |
| Port | The service port of the source database. Default: 6379. For cluster-mode sources, enter the port of a master node. |
| Authentication Method | Select a method based on your setup. Account + Password Login requires Redis 6.0 or later. For Secret-free login, enable password-free access on the source database first. |
| Database Password | The password of the source database. Leave this field empty if no password is set. Format: <user>:<password> (for example, admin:Rp829dlwa). |
| Encryption | Select Non-encrypted or SSL-encrypted. If Access Method is not Alibaba Cloud Instance and you select SSL-encrypted, upload a CA Certificate and enter a CA Key. |

Destination database

| Parameter | Description |
| --- | --- |
| Select Existing Connection | Select a registered DTS instance to auto-populate the connection parameters, or configure the parameters manually. |
| Database Type | Select Tair/Redis. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination Tair (Redis OSS-Compatible) instance resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No for same-account migration. |
| Instance ID | The ID of the destination Tair (Redis OSS-Compatible) instance. |
| Authentication Method | Select a method based on your setup. Account + Password Login requires Redis 6.0 or later. For Secret-free login, enable password-free access on the destination instance first. See Enable password-free access. |
| Database Password | The password of the destination instance. Format: <user>:<password> (for example, admin:Rp829dlwa). |
| Encryption | Select Non-encrypted or SSL-encrypted. |

Step 3: Test connectivity and proceed

Click Test Connectivity and Proceed at the bottom of the page.

DTS servers must be able to access both the source and destination databases. Add the CIDR blocks of DTS servers to the security settings of both databases before proceeding — see Add the CIDR blocks of DTS servers for instructions. If the source or destination uses an access method other than Alibaba Cloud Instance, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 4: Configure objects to migrate

  1. On the Configure Objects page, set the following parameters.

    | Parameter | Description |
    | --- | --- |
    | Migration Types | Select Full Data Migration + Incremental Data Migration. If the source account lacks the SYNC or PSYNC permission, select only Full Data Migration. |
    | Processing Mode of Conflicting Tables | Precheck and Report Errors: checks that the destination database is empty before the task starts. An error is returned if the destination is not empty. Ignore Errors and Proceed: skips the empty-destination check. Source data overwrites destination data that has the same keys. |
    | Source Objects | Select one or more databases in the Source Objects section, then click the rightwards arrow icon to add them to the Selected Objects section. You can select databases but not individual keys. |
    | Selected Objects | To map a database (DB 0 to DB 255) or filter by key prefix, right-click a database in the Selected Objects section and configure the options in the Edit Schema dialog box. For instructions, see Map object names for renaming databases and Set filter conditions for prefix filtering. Map object names one database at a time. |
  2. Click Next: Advanced Settings and configure the following parameters.

    | Parameter | Description |
    | --- | --- |
    | Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
    | Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10 to 1,440 minutes. Default: 720. Set this parameter to at least 30 minutes. If multiple tasks share the same source or destination, the value that is set last takes precedence. DTS charges for the instance during retry periods. |
    | Retry Time for Other Issues | How long DTS retries after DDL or DML failures. Valid values: 1 to 1,440 minutes. Default: 10. Set this parameter to at least 10 minutes. This value must be less than Retry Time for Failed Connections. |
    | Enable Throttling for Full Data Migration | Limits the read/write load on the source and destination during full migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
    | Enable Throttling for Incremental Data Migration | Limits the load on the destination during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
    | Extend Expiration Time of Destination Database Key | Extends the TTL of migrated keys by the specified number of seconds. Set this parameter if you use commands such as expire key seconds, pexpire key milliseconds, expireat key timestamp, or pexpireat key timestampMs, to maintain data consistency. Note that this may delay the release of distributed locks. |
    | Use Slave Node | Available when the source Instance Mode is Cluster. Specifies whether to read data from slave (replica) nodes. Default: No (reads from master nodes). |
    | Environment Tag | An optional tag that identifies the DTS instance. |
    | Configure ETL | Specifies whether to enable the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task. |
    | Monitoring and Alerting | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Configure the alert threshold and notification contacts. See Configure monitoring and alerting when you create a DTS task. |
  3. Click Next Step: Data Verification to configure data verification. For details, see Configure a data verification task.

Step 5: Save settings and run the precheck

  • To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

  • Click Next: Save Task Settings and Precheck to save and start the precheck.

The task can only start after it passes the precheck.

  • If the precheck fails, click View Details next to each failed item. Fix the issues, then click Precheck Again.

  • If a precheck alert appears:

    • For alerts that cannot be skipped: fix the issue and rerun the precheck.

    • For alerts that can be skipped: click Confirm Alert Details > Ignore > OK, then click Precheck Again. Skipping an alert may result in data inconsistency.

Step 6: Purchase the migration instance

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the instance class.

    | Section | Parameter | Description |
    | --- | --- | --- |
    | New Instance Class | Resource Group | The resource group of the migration instance. Default: the default resource group. See What is Resource Management? |
    | New Instance Class | Instance Class | Determines the migration speed. Select a class based on your requirements. See Instance classes of data migration instances. |
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.

Track progress on the Data Migration page.

For tasks with both full and incremental migration selected, the Data Migration page displays Incremental Data Migration as the active status.

What's next

After incremental data migration is complete, stop or release the migration task before switching your applications to the destination. This prevents the source from overwriting data written to the destination after the cutover.