Data Transmission Service:Migrate data from RDS MySQL to Tair/Redis

Last Updated:Mar 30, 2026

Data Transmission Service (DTS) migrates rows from an RDS MySQL instance to a Tair (Redis OSS-compatible) instance, converting each row to a Redis key-value pair. Use this to offload hot data to an in-memory cache and reduce load on your relational database.

How it works

DTS supports two migration modes:

Migration mode Use when
Full data migration You need a one-time initial load or cutover. DTS migrates all existing data once, then stops.
Full data migration + incremental data migration Your application is live and you need the destination kept in sync. DTS migrates existing data, then continuously replicates changes (INSERT, UPDATE, DELETE) from the source binary log.

Requirements

Source database

Requirement Details
Primary keys All tables to migrate must have primary keys. Tables without primary keys cannot be migrated.
Binary log (incremental migration only) binlog_format = row; binlog_row_image = full. RDS MySQL binary logs must be retained for at least 3 days (7 days recommended). Self-managed MySQL binary logs must be retained for at least 7 days.
Dual-primary MySQL cluster (incremental only) Set log_slave_updates = ON
MySQL 8.0.23+ invisible columns Make columns visible before migrating: ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;. Tables without primary keys auto-generate invisible primary keys — make these visible too.
Table-level migration with object name mapping A single task supports up to 1,000 tables. To migrate more, split across multiple tasks or migrate the entire database.
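The binary-log requirements above can be sanity-checked before you configure the task. A minimal sketch in Python — the `variables` dict stands in for the output of `SHOW VARIABLES`, which you would fetch with your own MySQL client:

```python
def check_binlog_requirements(variables):
    """Return a list of problems with the source's binlog settings
    for incremental migration (empty list means the settings pass)."""
    problems = []
    if variables.get("binlog_format", "").upper() != "ROW":
        problems.append("binlog_format must be ROW")
    if variables.get("binlog_row_image", "").upper() != "FULL":
        problems.append("binlog_row_image must be FULL")
    return problems

# A source using statement-based logging fails the check:
bad = {"binlog_format": "STATEMENT", "binlog_row_image": "FULL"}
good = {"binlog_format": "ROW", "binlog_row_image": "FULL"}
print(check_binlog_requirements(bad))   # ['binlog_format must be ROW']
print(check_binlog_requirements(good))  # []
```

This only validates the two variables named in the table; binlog retention must still be confirmed separately in the RDS console.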

Destination instance

Requirement Details
Storage capacity Must exceed the storage space used by the source RDS MySQL instance. See Create an instance.
Transparent Data Encryption (TDE) Must be disabled. DTS cannot migrate to a TDE-enabled Tair instance.
Memory eviction policy Set maxmemory-policy to noeviction to prevent silent data loss. The default volatile-lru policy evicts keys when memory runs low, causing data inconsistency without stopping the task. See What is the default eviction policy?
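The eviction-policy requirement can also be verified programmatically before the task starts. A hedged sketch in Python: the `reply` list mirrors the flat key/value response that `CONFIG GET maxmemory-policy` returns; fetching it is left to your Redis client of choice:

```python
def eviction_policy_ok(reply):
    """Check a flat ['maxmemory-policy', '<value>'] CONFIG GET reply.

    DTS needs noeviction on the destination so keys are never
    silently dropped mid-migration."""
    config = dict(zip(reply[::2], reply[1::2]))
    return config.get("maxmemory-policy") == "noeviction"

print(eviction_policy_ok(["maxmemory-policy", "noeviction"]))    # True
print(eviction_policy_ok(["maxmemory-policy", "volatile-lru"]))  # False
```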

Source database accounts

Database Required permissions How to configure
Source RDS MySQL Read on the objects to migrate Create an account, then modify account permissions
Destination Tair (Redis OSS-compatible) Read and write on the instance Create and manage accounts

Billing

Migration type Cost
Schema migration and full data migration Free
Incremental data migration Paid. See Billing overview.

SQL operations supported by incremental migration

Operation type SQL statement
DML INSERT, UPDATE, DELETE

Known behaviors and limitations

During full data migration:

  • Do not run DDL operations on the source database. Schema changes cause the task to fail.

    DTS places metadata locks on the source database during full data migration, which may also block DDL operations you run manually.
  • Do not write new data to the source if you select full migration only. For real-time consistency, enable both full and incremental migration.

  • If the always-encrypted (EncDB) feature is enabled on the RDS MySQL instance, full data migration is not supported.

    RDS MySQL instances with TDE enabled support both full and incremental data migration.

During the migration task:

  • Do not write to the destination from sources other than DTS. External writes cause data inconsistency.

  • If the destination Tair instance is a cluster and a shard reaches its memory limit or runs out of storage, the task fails with an Out of Memory error.

  • If the destination Tair instance experiences a transient connection loss, a primary/secondary switchover, or an endpoint change, full data may be re-migrated to the destination, which can cause data inconsistency.

  • If a primary/secondary switchover occurs on a self-managed MySQL source while the task runs, the task fails.

  • A read-only ApsaraDB RDS for MySQL V5.6 instance (which does not record transaction logs) cannot be used as the source for incremental data migration.

  • Changes that are not recorded in the binary log, such as data restored from a physical backup or changes produced by a cascade operation, are not replicated to the destination.

  • DTS periodically runs CREATE DATABASE IF NOT EXISTS `test` on the source to advance the binary log position.

  • DTS calculates migration latency using the timestamp of the latest migrated record versus the current source timestamp. If the source receives no DML activity for an extended period, the displayed latency may be inaccurate. Run any DML operation on the source to update the latency reading. If you migrate an entire database, create a heartbeat table that receives updates every second to keep the latency accurate.

  • If a DTS instance fails, the DTS team attempts recovery within 8 hours. During recovery, the instance may be restarted or its parameters adjusted. Only DTS instance parameters are modified — your database parameters are not changed.
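The heartbeat-table workaround for stale latency readings can be as small as one table plus a periodic upsert. A minimal sketch in Python that only builds the SQL strings — the table name `dts_heartbeat` is an assumption, and running the update every second with your MySQL client and a scheduler is up to you:

```python
# Heartbeat table: must have a primary key, like every migrated table.
HEARTBEAT_DDL = (
    "CREATE TABLE IF NOT EXISTS dts_heartbeat ("
    "id INT PRIMARY KEY, "
    "ts TIMESTAMP NOT NULL)"
)

def heartbeat_update_sql():
    """One DML statement per tick keeps the binlog moving, so the
    latency that DTS displays stays meaningful."""
    return ("INSERT INTO dts_heartbeat (id, ts) VALUES (1, NOW()) "
            "ON DUPLICATE KEY UPDATE ts = NOW()")

print(heartbeat_update_sql())
```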

Create a migration task

Step 1: Go to the data migration page

DTS console

  1. Log on to the DTS console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will reside.

DMS console

Steps may vary based on the Data Management (DMS) console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, choose Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list next to Data Migration Tasks, select the region where the migration instance will reside.

Step 2: Configure source and destination databases

Click Create Task and configure the following parameters.

Task name

Parameter Description
Task Name A name for this DTS task. DTS generates a name automatically. Use a descriptive name to identify the task. The name does not need to be unique.

Source database

Parameter Description
Select Existing Connection If the instance is registered with DTS, select it from the list — DTS fills in the remaining fields automatically. Otherwise, configure the fields below. See Manage database connections.
Database Type Select MySQL.
Access Method Select Alibaba Cloud Instance.
Instance Region Select the region where the source RDS MySQL instance resides.
Replicate Data Across Alibaba Cloud Accounts Select No if the source instance belongs to the current Alibaba Cloud account.
RDS Instance ID Select the ID of the source RDS MySQL instance.
Database Account The database account for the source instance. See Requirements for required permissions.
Database Password The password for the database account.
Encryption Select Non-encrypted or SSL-encrypted. To use SSL encryption, enable SSL on the RDS instance before configuring the task.

Destination database

Parameter Description
Select Existing Connection If the instance is registered with DTS, select it from the list. Otherwise, configure the fields below.
Database Type Select Tair/Redis.
Access Method Select Alibaba Cloud Instance.
Instance Region Select the region where the destination Tair (Redis OSS-compatible) instance resides.
Replicate Data Across Alibaba Cloud Accounts Select No if the destination instance belongs to the current Alibaba Cloud account.
Instance ID Select the ID of the destination Tair (Redis OSS-compatible) instance.
Authentication Method Select an authentication method. Password Login is used in this example. Account + Password Login requires Redis 6.0 or later. For Secret-free login, enable password-free access on the instance first — see Enable password-free access.
Database Password The password to connect to the destination instance. Use the format <user>:<password> — for example, admin:Rp829dlwa.

Step 3: Test connectivity

Click Test Connectivity and Proceed.

DTS server IP addresses must be added to the security settings (whitelists) of both the source and destination databases. See Add DTS server IP addresses to a whitelist.

Step 4: Configure migration objects

On the Configure Objects page, set the following options.

Parameter Description
Migration Types Select Full Data Migration for a one-time load. Select both Full Data Migration and Incremental Data Migration to keep the destination in sync while your application runs. If you select full migration only, avoid writing to the source during migration to prevent data inconsistency.
Processing Mode of Conflicting Tables Precheck and Report Errors: the precheck verifies the destination is empty. The task does not start if existing data is found. Ignore Errors and Proceed: skips the check — source data overwrites any destination keys with matching names. This can cause data loss.
Source Objects Select one or more objects from the Source Objects section, then click the arrow icon to move them to Selected Objects. Objects can be selected at the database, table, or column level.
Selected Objects To map migrated data to a specific Redis DB index, right-click a schema in Selected Objects and configure: Mapping Name of Redis/Tair Database (0 to 255) — the Redis DB index that receives the data (numeric only); Cache Mapping Mode — the key format for migrated data. If you select Key-Value Model Based on Database, Table, and Primary Key, also set Value Separation Method. To filter rows, right-click the table and set a WHERE clause. See Set a filter condition.
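The key-value mapping that DTS applies can be previewed locally. A hedged sketch of the Key-Value Model Based on Database, Table, and Primary Key — the `db:table:pk` key layout and the `:` separator here are illustrative only; the exact format depends on the Cache Mapping Mode and separator you select:

```python
def row_to_redis_hash(database, table, pk_value, row):
    """Map one MySQL row to a Redis hash: the key encodes database,
    table, and primary key; the hash fields mirror the columns."""
    key = f"{database}:{table}:{pk_value}"
    return key, {col: str(val) for col, val in row.items()}

key, fields = row_to_redis_hash(
    "shop", "orders", 42, {"id": 42, "status": "paid", "amount": 19.9}
)
print(key)     # shop:orders:42
print(fields)  # {'id': '42', 'status': 'paid', 'amount': '19.9'}
```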

Click Next: Advanced Settings.

Step 5: Configure advanced settings

Parameter Description
Dedicated Cluster for Task Scheduling By default, DTS schedules tasks to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
Retry Time for Failed Connections How long DTS retries when the source or destination connection fails. Valid range: 10–1,440 minutes. Default: 720 minutes. Set to a value greater than 30 minutes. If DTS reconnects within this window, the task resumes. Otherwise, the task fails. Note: when DTS retries a connection, you are charged for the DTS instance. Set this parameter based on your business requirements, and release the DTS instance promptly once the source and destination instances are no longer needed.
Retry Time for Other Issues How long DTS retries failed DDL or DML operations. Valid range: 1–1,440 minutes. Default: 10 minutes. Set to a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections.
Enable Throttling for Full Data Migration Limits the read/write rate during full data migration to reduce load on source and destination databases. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.
Enable Throttling for Incremental Data Migration Limits the rate during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.
Cache Expiration Time The TTL for migrated keys. Set to -1 to disable expiration (keys never expire); note that keys that never expire can fill Redis memory over time and cause task errors.
Whether to delete SQL operations on heartbeat tables of forward and reverse tasks Controls whether DTS writes heartbeat SQL to the source while the task runs. Yes: DTS does not write heartbeat SQL; as a result, the migration latency that DTS displays may be inaccurate. No: DTS writes heartbeat SQL, which may affect physical backups and clones of the source.
Environment Tag An optional tag to identify the environment (for example, production or staging).
Configure ETL Whether to enable extract, transform, and load (ETL). If enabled, enter data processing statements in the code editor. For setup details, see Configure ETL in a data migration or data synchronization task. For a feature overview, see What is ETL?
Monitoring and Alerting Whether to configure alerts for task failures or latency exceeding a threshold. If enabled, configure the alert threshold and notification settings. See Configure monitoring and alerting.
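The Cache Expiration Time setting maps directly onto Redis TTL semantics. A small sketch of that mapping — pure logic only; actually applying it would go through your Redis client's SET/EXPIRE calls:

```python
def set_args(key, value, cache_expiration_seconds):
    """Build SET arguments: -1 means no EX option (the key never
    expires); any positive value becomes an EX <seconds> TTL."""
    args = ["SET", key, value]
    if cache_expiration_seconds != -1:
        args += ["EX", str(cache_expiration_seconds)]
    return args

print(set_args("k", "v", 3600))  # ['SET', 'k', 'v', 'EX', '3600']
print(set_args("k", "v", -1))    # ['SET', 'k', 'v']
```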

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the task starts. If any check fails:

  1. Click View Details next to the failed item.

  2. Fix the issue based on the check results.

  3. Click Precheck Again.

If an alert (non-blocking warning) appears:

  1. Click Confirm Alert Details.

  2. In the View Details dialog, click Ignore > OK.

  3. Click Precheck Again.

Important

Ignoring alerts may result in data inconsistency. Proceed with caution.

Step 7: Purchase and start the instance

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, set the following parameters:

Parameter Description
Resource Group The resource group for the migration instance. Default: default resource group. See What is Resource Management?
Instance Class The instance class determines migration speed. See Instance classes of data migration instances.
  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start > OK.

Verify migration status

After the task starts, go to the Data Migration page to monitor progress.

  • Full data migration only: the task stops automatically when complete. The status shows Completed.

  • Full + incremental data migration: the task continues running after the initial load, replicating ongoing changes from the source. The status shows Running.

For supported source and destination database versions, see Migration solutions overview.