
Data Transmission Service: Migrate data between ApsaraDB RDS for MySQL instances

Last Updated: Mar 30, 2026

Data Transmission Service (DTS) lets you migrate data between ApsaraDB RDS for MySQL instances with minimal downtime. DTS supports three migration types — schema migration, full data migration, and incremental data migration — which you can combine to keep your source database running throughout the process.

Choose migration types

Select migration types based on your tolerance for downtime:

  • One-time offline migration (the source database can be stopped): Schema migration + Full data migration

  • Zero-downtime migration (the source database stays live): Schema migration + Full data migration + Incremental data migration
If you skip incremental data migration, do not write data to the source database during migration to keep data consistent.

What each migration type does

  • Schema migration: Migrates schemas of tables, views, triggers, stored procedures, and stored functions. DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and stored functions, and does not migrate user accounts. During schema migration, DTS also migrates foreign keys from the source database to the destination database.

  • Full data migration: Migrates all existing data from selected objects in the source database.

  • Incremental data migration: After full data migration completes, continuously replicates changes from the source database. This lets applications keep running against the source database while data catches up in the destination.

Supported source and destination databases

Both source and destination databases can be any of the following:

  • ApsaraDB RDS for MySQL instance

  • Self-managed MySQL databases:

    • Database with a public IP address

    • Database hosted on Elastic Compute Service (ECS)

    • Database connected over Express Connect, VPN Gateway, or Smart Access Gateway (SAG)

    • Database connected over Database Gateway

Prerequisites

Before you begin, make sure that you have:

  • Source and destination ApsaraDB RDS for MySQL instances created. See Create an ApsaraDB RDS for MySQL instance.

  • Destination instance with more available storage than the source instance

  • Matching MySQL versions on both source and destination instances

Limitations

Source database requirements

  • The source database server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, with all fields unique. Without these, the destination database may contain duplicate records.

  • If you select individual tables as objects to migrate and plan to rename tables or columns in the destination, a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database instead.

  • DTS temporarily disables foreign key constraint checks and cascade operations at the session level during full and incremental data migration. Avoid running CASCADE or DELETE operations on the source database during migration, as this can cause data inconsistency.

  • During schema migration and full data migration, do not run DDL statements that change database or table schemas. This will cause the task to fail.
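You can check the primary key requirement above before you start by querying `information_schema` on the source instance. The following is a sketch; `mydb` is a placeholder schema name to replace with your own.

```sql
-- List base tables in the placeholder schema 'mydb' that have neither a
-- PRIMARY KEY nor a UNIQUE constraint, and so need attention before migration.
SELECT t.table_name
FROM information_schema.tables t
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
        SELECT 1
        FROM information_schema.table_constraints c
        WHERE c.table_schema = t.table_schema
          AND c.table_name = t.table_name
          AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
      );
```

Any table this query returns should be given a primary key, or excluded from the migration, to avoid duplicate records in the destination.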

Binary log requirements for incremental data migration

Configure the following binary log settings on your source database before starting an incremental data migration task:

  • Binary logging: Enabled. DTS reads incremental changes from the binary logs.

  • binlog_format: ROW. Ensures DTS captures complete row-level changes.

  • binlog_row_image: FULL. Ensures DTS captures all column values.

  • log_slave_updates: ON. Required only for self-managed MySQL in a dual-primary cluster, so that DTS can access all binary logs.

  • Binary log retention: At least 7 days. If binary logs are purged before DTS consumes them, the task may fail and data loss may occur.
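You can verify these settings on the source with standard MySQL statements; the expected values are the ones listed above. For a self-managed source, the commented my.cnf fragment below is one way to set them (parameter names vary by MySQL version, e.g. binlog_expire_logs_seconds applies to MySQL 8.0).

```sql
-- Check current values on the source instance:
SHOW VARIABLES LIKE 'log_bin';            -- expect ON
SHOW VARIABLES LIKE 'binlog_format';      -- expect ROW
SHOW VARIABLES LIKE 'binlog_row_image';   -- expect FULL
SHOW VARIABLES LIKE 'log_slave_updates';  -- expect ON for dual-primary clusters

-- Example my.cnf settings for a self-managed source (restart required):
-- [mysqld]
-- log_bin                    = mysql-bin
-- binlog_format              = ROW
-- binlog_row_image           = FULL
-- log_slave_updates          = ON
-- binlog_expire_logs_seconds = 604800   -- 7 days (MySQL 8.0)

-- On MySQL 8.0 the retention period can also be adjusted at runtime:
SET GLOBAL binlog_expire_logs_seconds = 604800;
```

For ApsaraDB RDS for MySQL instances, change these parameters and the log retention policy in the RDS console rather than in my.cnf.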
Important

If binary log requirements are not met, the precheck fails and the task cannot start. In exceptional cases, missing logs can cause data loss.

Other limitations

  • Run migration during off-peak hours. Full data migration uses read and write resources on both databases, which increases server load.

  • After full data migration, the destination tablespace is larger than the source due to fragmentation from concurrent INSERT operations.

  • DTS uses ROUND(COLUMN, PRECISION) to read FLOAT and DOUBLE columns. Default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these match your requirements.

  • DTS retries failed tasks for up to 7 days. Before switching your workload to the destination database, stop or release the failed task, or run REVOKE to remove DTS's write access to the destination. Otherwise, the resumed task may overwrite data in the destination.

  • If DDL statements fail in the destination database, the task continues running. Check task logs to review failed DDL statements. See View task logs.

  • Column names that differ only in capitalization in the same destination table may produce unexpected results, because MySQL column names are case-insensitive.

  • If the source ApsaraDB RDS for MySQL instance has the EncDB feature enabled, full data migration is not supported.
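As a concrete example of the REVOKE option mentioned above, assuming DTS connects to the destination with a hypothetical account named 'dts_user' and the migrated database is 'mydb':

```sql
-- Remove write access for the placeholder DTS account on the destination
-- database so a resumed failed task cannot overwrite data after switchover.
REVOKE INSERT, UPDATE, DELETE, CREATE, ALTER, DROP
  ON mydb.* FROM 'dts_user'@'%';
```

Stopping or releasing the task in the DTS console achieves the same goal without changing account privileges.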

Special cases

  • Self-managed MySQL source: A primary/secondary switchover during the task causes the task to fail.

  • Self-managed MySQL (migration latency): Latency is calculated from the timestamp of the last migrated record versus the current source timestamp. If no DML operations run on the source for a long time, latency readings may be inaccurate. Run a DML operation on the source to refresh the reading. If you migrate an entire database, create a heartbeat table that is updated every second.

  • Self-managed MySQL (binary log position): DTS periodically runs CREATE DATABASE IF NOT EXISTS `test` on the source to advance the binary log position.

  • ApsaraDB RDS for MySQL V5.6 read-only source: Cannot be used as a source for incremental data migration, because read-only V5.6 instances do not record transaction logs.

  • Destination database naming: DTS automatically creates the database in the destination instance. If the source database name does not comply with ApsaraDB RDS for MySQL naming conventions, manually create the database in the destination before configuring the task. See Manage databases.
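A minimal heartbeat table for the latency scenario above might look like the following sketch. All names are placeholders, and it assumes the MySQL event scheduler is available and can be enabled on the source.

```sql
-- One-row table whose timestamp is refreshed every second, so that DTS
-- always sees recent DML on the source and reports accurate latency.
SET GLOBAL event_scheduler = ON;

CREATE TABLE mydb.dts_heartbeat (
  id INT PRIMARY KEY,
  ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
INSERT INTO mydb.dts_heartbeat (id) VALUES (1);

CREATE EVENT mydb.dts_heartbeat_tick
  ON SCHEDULE EVERY 1 SECOND
  DO UPDATE mydb.dts_heartbeat SET ts = CURRENT_TIMESTAMP WHERE id = 1;
```

Make sure the heartbeat table's database is included in the migration objects, and drop the event and table after switchover.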

Billing

  • Schema migration + Full data migration: No instance configuration fee. Internet traffic is charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview.

  • Incremental data migration: The instance configuration fee is charged. See Billing overview.

SQL operations supported for incremental data migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE
Important

RENAME TABLE operations may cause data inconsistency. If you selected a table as the object to migrate, and that table is renamed during migration, its data is not migrated to the destination. To avoid this, select the database (not individual tables) as the migration object, and make sure that the databases to which the table belongs both before and after the RENAME TABLE operation are included as migration objects.

Permissions required

  • Source ApsaraDB RDS for MySQL instance: SELECT permission for schema migration and full data migration; read and write permissions for incremental data migration.

  • Destination ApsaraDB RDS for MySQL instance: Read and write permissions for all migration types.

If the source account was not created through the ApsaraDB RDS for MySQL console, grant it the following MySQL permissions manually: REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT.
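For a manually created source account, the grants above can be applied as follows. 'dts_user' and 'mydb' are placeholders; the REPLICATION privileges are global in MySQL, so they must be granted on *.*.

```sql
-- SELECT and SHOW VIEW on the objects to migrate:
GRANT SELECT, SHOW VIEW ON mydb.* TO 'dts_user'@'%';

-- Replication privileges so DTS can read binary logs (global scope required):
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_user'@'%';
```

Narrow the host part ('%') to the DTS server CIDR blocks if your security policy requires it.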

To migrate account information from the source database, additional permissions are required. See Migrate database accounts.

For instructions on creating and configuring database accounts, see Create an account and Modify the permissions of an account.

Configure and run a migration task

Step 1: Go to the Data Migration page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, move the pointer over DTS, then select DTS (DTS) > Data Migration.

    You can also go directly to the Data Migration page in the new DTS console. Navigation may vary depending on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
  3. From the drop-down list on the right side of Data Migration Tasks, select the region where your migration instance resides.

    In the new DTS console, select the region in the upper-left corner.

Step 2: Create and configure the task

  1. Click Create Task.

  2. Optional: In the upper-right corner, click New Configuration Page to switch to the latest configuration UI.

    Skip this step if the page shows a Back to Previous Version button — you are already on the new page.
  3. Configure the source and destination databases using the following parameters.

    Warning

    After configuring source and destination databases, read the Limits section displayed at the top of the page before proceeding. Skipping this may cause task failure or data inconsistency.

    Source Database

    • Task Name: A descriptive name for the task. DTS assigns a default name automatically. Names do not need to be unique.
    • Select a DMS database instance: Skip for this example; configure the database information manually using the parameters below.
    • Database Type: Select MySQL.
    • Connection Type: Select Alibaba Cloud Instance.
    • Instance Region: The region where the source ApsaraDB RDS for MySQL instance resides.
    • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account migration. To migrate across Alibaba Cloud accounts, select Yes. See Configure a DTS task across Alibaba Cloud accounts.
    • RDS Instance ID: The ID of the source instance. The source and destination instances can be the same, which lets you migrate data within a single instance.
    • Database Account: The account for the source instance. See Permissions required.
    • Database Password: The password for the database account.
    • Connection Method: Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, enable SSL on the source instance first. See Configure the SSL encryption feature.

    Destination Database

    • Select a DMS database instance: Skip for this example; configure the database information manually using the parameters below.
    • Database Type: Select MySQL.
    • Connection Type: Select Alibaba Cloud Instance.
    • Instance Region: The region where the destination ApsaraDB RDS for MySQL instance resides.
    • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account migration.
    • RDS Instance ID: The ID of the destination instance.
    • Database Account: The account for the destination instance. See Permissions required.
    • Database Password: The password for the database account.
    • Connection Method: Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, enable SSL on the destination instance first. See Configure the SSL encryption feature.
  4. Click Test Connectivity and Proceed. DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances and to the security group rules of ECS-hosted databases. If the self-managed database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance. For self-managed databases in your own data center or from a third-party cloud provider, manually add DTS server CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers.

    Warning

    Adding DTS CIDR blocks to whitelists or security groups introduces potential security exposure. Take preventive measures such as strengthening password security, limiting exposed ports, authenticating API calls, and regularly auditing whitelist rules. Consider using Express Connect, VPN Gateway, or Smart Access Gateway to connect your database to DTS instead of exposing it over the public internet.

Step 3: Select objects and configure advanced settings

  1. On the Select Objects page, configure the following parameters.

    • Migration Types: Select migration types based on your downtime requirements. See Choose migration types.
    • Method to Migrate Triggers in source database: How to migrate triggers from the source. Available only when both Schema Migration and Incremental Data Migration are selected. See Synchronize or migrate triggers from the source database.
    • Enable Migration Assessment: Checks whether source and destination schemas, including index lengths, stored procedures, and dependent tables, meet migration requirements. Available only when Schema Migration is selected. Assessment results do not block the precheck.
    • Processing Mode of Conflicting Tables: Precheck and Report Errors (default) fails the precheck if the destination contains tables with the same names as source tables; use object name mapping to rename conflicting objects before starting. Ignore Errors and Proceed skips the check; records with matching primary keys are not migrated, and schema differences may cause partial migration or task failure.
    • Capitalization of object names in destination instance: Controls how database, table, and column names are capitalized in the destination. Default is DTS default policy. See Specify the capitalization of object names.
    • Source Objects: Select the databases, tables, or columns to migrate, then click the rightwards arrow icon to add them to Selected Objects. Selecting only tables or columns excludes views, triggers, and stored procedures.
    • Selected Objects: Right-click an object to rename it or set WHERE conditions to filter rows. Click Batch Edit to rename multiple objects at once. See Map object names and Set filter conditions. Renaming an object may cause dependent objects to fail migration.
  2. Click Next: Advanced Settings and configure the following parameters.

    • Dedicated Cluster for Task Scheduling: By default, DTS uses a shared cluster. To improve task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
    • Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database: Controls how DTS handles temporary tables generated by Online DDL operations (DMS or gh-ost only; pt-online-schema-change is not supported and causes the task to fail). Yes migrates the temporary tables, which may introduce latency for large operations. No, Adapt to DMS Online DDL skips temporary tables and migrates only the final DDL; tables in the destination may be locked. No, Adapt to gh-ost skips temporary tables and migrates only the final DDL, and supports custom regular expressions to filter shadow tables; tables in the destination may be locked.
    • Whether to Migrate Accounts: Select Yes to migrate source database account information. You must select the accounts to migrate and verify that source and destination account permissions are sufficient.
    • Retry Time for Failed Connections: How long DTS retries after a connection failure. Range: 10 to 1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes automatically; otherwise, the task fails.
    • Retry Time for Other Issues: How long DTS retries after DDL or DML failures. Range: 1 to 1,440 minutes. Default: 10 minutes. Set this to at least 10 minutes, and to a value less than Retry Time for Failed Connections.
    • Enable Throttling for Full Data Migration: Limits resource usage during full data migration by capping queries per second (QPS) to the source, rows per second (RPS), and bandwidth. Available only when Full Data Migration is selected.
    • Enable Throttling for Incremental Data Migration: Limits resource usage during incremental data migration by capping RPS and bandwidth. Available only when Incremental Data Migration is selected.
    • Environment Tag: An optional tag that identifies the environment of the DTS instance.
    • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Yes stops DTS from writing heartbeat operations to the source database; the displayed migration latency may be inaccurate. No lets DTS write heartbeat operations to the source, which may affect physical backup and cloning of the source database.
    • Configure ETL: Select Yes to configure extract, transform, and load (ETL) rules using the code editor. See Configure ETL in a data migration or data synchronization task.
    • Monitoring and Alerting: Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. See Configure monitoring and alerting.
  3. Click Next Step: Verification Configurations to configure data verification. See Configure a data verification task.

Step 4: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview API parameters for programmatic configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the task can start. If any item fails:

  1. Click View Details next to the failed item.

  2. Resolve the issue based on the check results.

  3. Click Precheck Again.

If an item generates an alert that you want to ignore, click Confirm Alert Details, then click Ignore in the dialog box, and click Precheck Again. Ignoring alerts may lead to data inconsistency.

Step 5: Purchase the migration instance and start the task

  1. Wait until the precheck success rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following parameters.

    • Resource Group Settings: The resource group for the migration instance. Default: the default resource group. See What is Resource Management?
    • Instance Class: The instance class determines migration speed. Select one based on your data volume and time requirements. See Instance classes of data migration instances.
  3. Select the Data Transmission Service (Pay-as-you-go) Service Terms checkbox.

  4. Click Buy and Start. The task appears in the task list where you can track its progress.

After migration

Once the migration task shows 100% progress (or latency drops to near zero for incremental migration), complete the following steps before switching your workload to the destination database:

  1. Verify data: Run ANALYZE TABLE <table_name> on the destination database to confirm data was written correctly. This is especially important after high-availability (HA) switchovers on the source, where data may have been written only to memory.

  2. Stop writes to the source: Redirect all application write traffic away from the source database.

  3. Wait for incremental migration to complete: If running incremental data migration, wait for the latency to reach zero.

  4. Stop or release the migration task: Before switching over, stop or release the task, or run REVOKE to remove DTS's write permissions on the destination. This prevents a resumed failed task from overwriting destination data.

  5. Restore permissions and accounts: If DTS changed SECURITY attributes for views, stored procedures, or stored functions (DEFINER to INVOKER), grant the required read and write permissions to INVOKER. If you did not migrate accounts, recreate application accounts on the destination.

  6. Switch your application: Update connection strings to point to the destination database.
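The verification step above can start with something as simple as spot-checking row counts on both instances and then running ANALYZE TABLE on the destination. Table names below are placeholders, and a row count is a sanity check, not a full data comparison.

```sql
-- Run on both source and destination and compare the results:
SELECT COUNT(*) FROM mydb.orders;

-- On the destination, confirm the table is healthy and rebuild
-- index statistics after the migration:
ANALYZE TABLE mydb.orders;
```

For stronger guarantees, use the DTS data verification task configured earlier rather than manual queries.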

What's next