
Tair (Redis® OSS-Compatible): Migrate from on-premises or other clouds to Alibaba Cloud Tair

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate a self-managed Redis database to Tair (Redis OSS-compatible) without interrupting your service. DTS supports both full and incremental migration, keeping your source database online throughout the process.

Choose a migration method

| Method | How it works | Best for | Downtime |
| --- | --- | --- | --- |
| Full + incremental migration (recommended) | Uses native Redis replication to stream a memory snapshot, then syncs incremental changes in real time | Production databases that cannot afford downtime | None |
| Full migration only | Uses the SCAN command to traverse and copy all keys | Databases where you cannot run PSYNC or SYNC, or where writes can be paused | Brief pause recommended during cutover |
| AOF file import | Imports an Append-Only File (AOF) using redis-cli | Offline migrations or lightweight scenarios | Required |

This topic covers DTS-based migration (full or full + incremental). For AOF import, see Migrate data from an AOF file.

How it works

Full migration copies all existing data from the source database to the destination. It is free of charge.

Incremental migration runs on top of full migration. After the initial copy, DTS continuously synchronizes new writes from the source to the destination using native Redis replication (PSYNC or SYNC). It is billed based on the duration of the migration, not data volume. For pricing details, see Billing items.

Before starting an incremental migration, disable the replication output buffer limit on the source database to prevent connection drops:
CONFIG SET client-output-buffer-limit 'slave 0 0 0'
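You can confirm the setting took effect by inspecting the value that `CONFIG GET client-output-buffer-limit` returns. A minimal sketch of that check in Python; the helper name and the idea of parsing the returned string yourself are illustrative, not part of DTS:

```python
def slave_buffer_limit_disabled(config_value: str) -> bool:
    """Check whether the 'slave' class in a client-output-buffer-limit
    value has all three fields (hard limit, soft limit, soft seconds)
    set to 0, i.e. the replication output buffer limit is disabled.

    config_value is the string returned by:
        CONFIG GET client-output-buffer-limit
    """
    parts = config_value.split()
    # The value is a flat list of 4-tuples: <class> <hard> <soft> <seconds>
    for i in range(0, len(parts), 4):
        cls, hard, soft, secs = parts[i:i + 4]
        if cls == "slave":
            return hard == soft == secs == "0"
    return False  # no 'slave' class found in the value

print(slave_buffer_limit_disabled(
    "normal 0 0 0 slave 0 0 0 pubsub 33554432 8388608 60"))  # True
```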

Prerequisites

Before you begin, make sure that:

Destination instance

  • A Tair (Redis OSS-compatible) instance is created. See Create an instance.

  • The destination instance has at least 10% more memory than the memory currently used by the source database. Insufficient memory causes data inconsistency or task failure.

  • Transparent Data Encryption (TDE) is not enabled on the destination instance. DTS does not support migrating to TDE-enabled instances.
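The 10% memory headroom rule above can be expressed as a quick arithmetic check. In practice the source's current usage would come from the `used_memory` field of its `INFO memory` output; the function and variable names here are illustrative:

```python
def has_headroom(source_used_bytes: int, destination_total_bytes: int,
                 margin: float = 0.10) -> bool:
    """Return True if the destination has at least `margin` (10% by
    default) more memory than the source currently uses."""
    return destination_total_bytes >= source_used_bytes * (1 + margin)

# An 8 GiB source working set needs at least ~8.8 GiB on the destination:
GiB = 1024 ** 3
print(has_headroom(8 * GiB, 16 * GiB))  # True
print(has_headroom(8 * GiB, 8 * GiB))   # False
```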

Source database

  • If using incremental migration, confirm that the source allows the PSYNC or SYNC command.

  • The source database account has read permissions. If using a custom account, the password format is <user>:<password> — for example, admin:Rp829dlwa.
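A common source of connection-test failures is a malformed credential string. A small sketch of building the `<user>:<password>` format DTS expects (the helper is illustrative; the `admin:Rp829dlwa` example matches the one above):

```python
def format_dts_password(user: str, password: str) -> str:
    """Build the <user>:<password> credential string DTS expects for a
    custom Redis account."""
    if ":" in user:
        raise ValueError("user must not contain ':'")
    return f"{user}:{password}"

print(format_dts_password("admin", "Rp829dlwa"))  # admin:Rp829dlwa
```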

Network

  • If the source database is deployed on-premises or in a third-party cloud and sits behind a firewall, add the CIDR blocks of the DTS servers in your region to the allowlist so DTS can reach it. See Add the CIDR blocks of DTS servers to a whitelist.

Usage notes

  • Do not scale the instance, change the instance type, or change the endpoint of the source or destination database during migration. These operations cause the task to fail and require you to reconfigure it from scratch.

  • Migration consumes resources on both databases. Run the migration during off-peak hours.

  • The destination database account must have write permissions.

Migrate your Redis database

Step 1: Open the migration task list

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, select Data + AI > Data Transmission Service (DTS) > Data Migration.

  3. To the right of Migration Tasks, select the region where your destination instance is located.

Step 2: Create a task

Click Create Task.

Step 3: Configure source and destination databases

Fill in the source and destination database information, then click Test Connection and Proceed.

Source Database

| Field | Value |
| --- | --- |
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |
| Select DMS database instance | If the source is already added to DMS, select it here and skip the remaining source fields. |
| Database Type | Select Tair/Redis. |
| Connection Type | Select based on where the source is deployed: Public IP for on-premises or third-party cloud; Self-managed Database On ECS for a database on an ECS instance. |
| Instance Region | Select the region of the ECS instance, or the region closest to the source if it is outside Alibaba Cloud. |
| Cross-Alibaba Cloud account | Select No for migrations within the same account. |
| ECS instance ID | Select the ECS instance that hosts the source database. For cluster architectures, select any master node's ECS instance. |
| Instance mode | Select Basic Edition for master-replica architecture, or Cluster Edition for cluster architecture. For cluster mode, enter the port of any master node. |
| Port | Enter the source Redis port. Default: 6379. |
| Authentication method | Select Password Logon or Password-free Logon. |
| Database Password | Enter the password in `<user>:<password>` format. Leave blank if no password is set. For SSL connections from non-cloud sources, also upload a CA Certificate and enter the CA Key. |
| Connection method | Select Unencrypted Connection or SSL Encrypted Connection. |

Destination Database

| Field | Value |
| --- | --- |
| Select DMS database instance | If the destination is already added to DMS, select it here. |
| Database Type | Tair/Redis (set by default). |
| Connection Type | Select Cloud Instance. |
| Instance Region | Select the region of the destination instance. |
| Instance ID | Select the destination instance. |
| Authentication method | Select Password Logon unless you have enabled password-free access for VPCs. |
| Database Password | Enter the password in `<user>:<password>` format. The account must have write permissions. |
| Connection method | Select Unencrypted Connection or SSL Encrypted Connection. |

Step 4: Select objects and migration type

Configure the task objects, then click Next: Advanced Configuration.

| Field | Value |
| --- | --- |
| Migration types | Full Migration + Incremental Migration (default): streams a memory snapshot and then syncs ongoing writes. Select Full Data Migration if you cannot run PSYNC or SYNC on the source. With Full Data Migration, DTS uses the SCAN command to traverse the entire source database. Do not write new data to the source during this migration, or the copy will not be consistent. |
| Processing Mode for Existing Destination Tables | Precheck and Block on Error (default): blocks the task if any keys already exist in the destination. Ignore and Continue: skips the check and overwrites any keys with matching names. |
| Source objects and Selected objects | Select the databases to migrate (DB 0–255) and move them to Selected Objects. To deselect, move them back using the reverse arrow. |
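Full Data Migration walks the source key space with SCAN, which is why concurrent writes are not captured. A minimal sketch of that cursor-style traversal, using plain dictionaries to stand in for the two databases (no live Redis involved; `scan_like` only mimics the cursor contract):

```python
def scan_like(keys, cursor, count=2):
    """Mimic Redis SCAN: return (next_cursor, batch) where a next_cursor
    of 0 means the traversal is complete."""
    batch = keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0
    return next_cursor, batch

source = {"k1": "a", "k2": "b", "k3": "c", "k4": "d", "k5": "e"}
destination = {}

keys = list(source)
cursor = 0
while True:
    cursor, batch = scan_like(keys, cursor)
    for k in batch:
        destination[k] = source[k]  # a real migration would DUMP/RESTORE
    if cursor == 0:
        break

print(destination == source)  # True
```

Any key written to `source` after its batch has been copied never reaches `destination`, which is the consistency gap the table above warns about.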

Step 5: Configure advanced settings (optional)

In most cases, keep the defaults. Click Next: Data Validation when done.

| Field | Default | Description |
| --- | --- | --- |
| Retry time for failed connections | 720 min (range: 10–1440 min) | DTS retries failed connections for this duration. Charges continue during retries. Set to 30 minutes or more. |
| Retry time for other issues | 10 min (range: 10–1440 min) | DTS retries non-connectivity errors for this duration. Set to 10 minutes or more. |
| Enable throttling for incremental data migration | No | Limits rows and data volume per second to reduce load on the destination instance. |
| Environment tag | None | Optional tag to identify the instance. |
| Extend key expiration time in the destination database | 1800s | Extends TTLs for migrated keys. Keys already expired at migration time are not migrated. |
| Use slave node | No (reads from master) | For cluster-mode sources, choose whether to read from a replica node instead. |
| Configure ETL | No | Enable to configure extract, transform, and load (ETL) processing. See Configure ETL in a DTS migration or sync task. |
| Monitoring and alerting | No | Enable to receive alerts when the migration fails or latency exceeds a threshold. See Configure monitoring and alerting during task configuration. |
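The TTL-extension setting can be read as a small piece of logic. The 1800 s default comes from the table above; the function, the `SKIP` marker, and the treatment of persistent keys as `None` are illustrative:

```python
SKIP = object()  # marker: key is not migrated at all

def migrated_ttl(source_ttl_s, extension_s=1800):
    """Compute the TTL a key receives in the destination.

    source_ttl_s: remaining TTL in seconds, or None for a persistent key.
    """
    if source_ttl_s is None:
        return None        # persistent key stays persistent
    if source_ttl_s <= 0:
        return SKIP        # already expired: not migrated
    return source_ttl_s + extension_s

print(migrated_ttl(60))  # 1860
```

The extension gives the incremental phase time to deliver the source's own expiry events before the destination deletes the key on its own.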

Step 6: Configure data validation (optional)

Keep the defaults in most cases. Click Next: Save Task and Precheck. For details, see Configure data validation for a DTS sync or migration instance.

Step 7: Run the precheck

Click Next: Purchase after the precheck completes successfully.

If any items show Warning or Failed:

  • Click View Details to see the fix instructions for each item.

  • Optionally, click Confirm Alert Details to skip a warning — but skipping warnings may cause data inconsistency.

  • After fixing all issues, run the precheck again.

For common precheck errors, see Precheck issues.

Step 8: Purchase and start

On the Purchase page:

  1. (Optional) Select a Resource Group for the migration link. The default resource group is used if not specified.

  2. (Optional) Select an instance class. A higher class provides faster migration but costs more. The default is large. For specifications, see Data migration link specifications.

  3. Read and accept the terms of service, then click Purchase and Start.

After purchase, the migration task starts. Track progress on the Data Migration page.

What's next

  • Incremental migration only: After all data is in sync, manually stop or release the task in the DTS console before switching your application to the new instance.

  • Verify migrated data: See Validate migrated data.

FAQ

Why does the connection test fail?

Check the password format — Redis credentials use user:password syntax. For more details on login methods, see Logon methods for an instance. If the source is behind a firewall (on-premises or third-party cloud), add the DTS server IP addresses for your region to the allowlist. See Add the CIDR blocks of DTS servers to a whitelist.

Why did the migration task fail?

The most common causes are:

  • Scaling, changing the instance type, or changing the endpoint during migration — these require you to reconfigure the task from scratch.

  • Destination instance out of memory, or a shard in a cluster instance reaching its memory limit. DTS reports an OOM (out of memory) error and stops.

  • TDE enabled on the destination instance — DTS does not support this configuration.

Why is there data inconsistency after migration?

A few known scenarios cause this:

  • Expired keys: Keys with a TTL (time-to-live) policy may expire before being deleted. The destination may have fewer keys than the source.

  • Duplicate List entries: For List objects, DTS does not flush existing data in the destination when using PSYNC or SYNC. This can produce duplicates.

  • Network retries during full migration: If the network is interrupted, DTS retries and overwrites keys with the same name. Delete operations from the source during that retry window are not synced, leaving the destination with more data than the source.
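A rough way to spot these drifts is to compare key counts on both sides, remembering that the source also holds the DTS heartbeat key (described in the FAQ below) that is never migrated. A sketch; in practice the counts would come from DBSIZE on each instance, and the helper name is illustrative:

```python
def count_gap(source_dbsize: int, dest_dbsize: int,
              heartbeat_keys: int = 1) -> int:
    """Estimate key-count drift between source and destination.

    Positive: destination is missing keys (e.g. TTL expiry in transit).
    Negative: destination has extra keys (e.g. a retried full migration
    that replayed writes but not deletes).
    """
    return (source_dbsize - heartbeat_keys) - dest_dbsize

# 1001 source keys include 1 heartbeat key, so 1000 migrated keys is a match:
print(count_gap(1001, 1000))  # 0
```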

Why does the precheck verify the Redis eviction policy?

The default maxmemory-policy for Tair (Redis OSS-compatible) is volatile-lru. If the destination runs out of memory, this policy evicts keys and causes data inconsistency. Setting the policy to noeviction instead means writes fail when memory is full — the task stops, but no data is silently dropped. For details on eviction policies, see Redis data eviction policies.
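The precheck's reasoning reduces to a single comparison on the destination's `maxmemory-policy` value. A sketch of that rule (the function and return strings are illustrative, not the actual DTS precheck output):

```python
SAFE_POLICY = "noeviction"

def eviction_precheck(maxmemory_policy: str) -> str:
    """Mirror the precheck on the destination's eviction policy.

    With noeviction, an out-of-memory destination rejects writes and the
    task stops; LRU/LFU/random policies silently drop keys instead.
    """
    if maxmemory_policy == SAFE_POLICY:
        return "pass"
    return f"warning: policy '{maxmemory_policy}' may evict keys under memory pressure"

print(eviction_precheck("volatile-lru"))
print(eviction_precheck("noeviction"))  # pass
```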

Why is there a `DTS_REDIS_TIMESTAMP_HEARTBEAT` key in my source database?

DTS writes a heartbeat key with the prefix DTS_REDIS_TIMESTAMP_HEARTBEAT to track update timestamps. For cluster sources, it writes one key per shard. The key is filtered out during migration and expires automatically after the task ends.
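The filtering DTS applies is a simple prefix check. A sketch (the key prefix comes from this FAQ entry; the sample keys and helper name are illustrative):

```python
HEARTBEAT_PREFIX = "DTS_REDIS_TIMESTAMP_HEARTBEAT"

def migratable_keys(keys):
    """Drop DTS heartbeat keys (one per shard on cluster sources) from a
    batch of source keys, as DTS does during migration."""
    return [k for k in keys if not k.startswith(HEARTBEAT_PREFIX)]

print(migratable_keys(["user:1", "DTS_REDIS_TIMESTAMP_HEARTBEAT", "cart:9"]))
# ['user:1', 'cart:9']
```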

Which commands does incremental migration support?

DTS captures all standard Redis write commands, including:

APPEND, BITOP, BLPOP, BRPOP, BRPOPLPUSH, DECR, DECRBY, DEL, EVAL, EVALSHA, EXEC, EXPIRE, EXPIREAT, FLUSHALL, FLUSHDB, GEOADD, GETSET, HDEL, HINCRBY, HINCRBYFLOAT, HMSET, HSET, HSETNX, INCR, INCRBY, INCRBYFLOAT, LINSERT, LPOP, LPUSH, LPUSHX, LREM, LSET, LTRIM, MOVE, MSET, MSETNX, MULTI, PERSIST, PEXPIRE, PEXPIREAT, PFADD, PFMERGE, PSETEX, PUBLISH, RENAME, RENAMENX, RESTORE, RPOP, RPOPLPUSH, RPUSH, RPUSHX, SADD, SDIFFSTORE, SELECT, SET, SETBIT, SETEX, SETNX, SETRANGE, SINTERSTORE, SMOVE, SPOP, SREM, SUNIONSTORE, ZADD, ZINCRBY, ZINTERSTORE, ZREM, ZREMRANGEBYLEX, ZUNIONSTORE, ZREMRANGEBYRANK, ZREMRANGEBYSCORE, XADD, XCLAIM, XDEL, XAUTOCLAIM, XGROUP CREATECONSUMER, XTRIM
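Conceptually, incremental migration filters the replication stream against this allowlist. A sketch of that membership check, using only a small subset of the commands listed above (the full set is as printed; the helper is illustrative):

```python
# Illustrative subset of the supported write commands listed above
SUPPORTED = {"SET", "DEL", "EXPIRE", "LPUSH", "HSET", "ZADD",
             "XADD", "SADD", "INCR", "RENAME"}

def replicable(command_line: str) -> bool:
    """Check whether a captured command is in the incremental-migration
    allowlist; command names are case-insensitive in Redis."""
    name = command_line.split()[0].upper()
    return name in SUPPORTED

print(replicable("set user:1 alice"))        # True
print(replicable("object encoding user:1"))  # False (read-only command)
```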

For Lua scripts executed via EVAL or EVALSHA, DTS cannot confirm whether the script succeeded because the destination does not return an explicit result.