
Data Transmission Service:Migrate data from a self-managed MySQL database to an RDS for MySQL instance

Last Updated:Feb 28, 2026

Data Transmission Service (DTS) migrates data from a self-managed MySQL database to an ApsaraDB RDS for MySQL instance without service downtime. The source database can be deployed on-premises, on an Elastic Compute Service (ECS) instance, or on a third-party cloud platform. DTS supports schema migration, full data migration, and incremental data migration. Select all three types to maintain service continuity during migration.

Prerequisites

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free of charge. | Charged when the destination Access Method is set to Public IP Address. For more information, see Billing overview. |
| Incremental data migration | Charged. For more information, see Billing overview. | Charged when the destination Access Method is set to Public IP Address. For more information, see Billing overview. |

Migration types

  • Schema migration DTS migrates the schemas of selected objects from the source to the destination database.

    • Supported object types: tables, views, triggers, stored procedures, and stored functions. > Note: The routine_body of stored procedures, routine_body of stored functions, and select_statement of views cannot be modified during migration.

    • DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions, and sets the DEFINER to the destination database account used in the migration. > Note: The SECURITY attribute and DEFINER cannot be modified during migration.

    • DTS does not migrate user information. To call a view, stored procedure, or stored function in the destination database, grant read and write permissions to INVOKER.

  • Full data migration DTS migrates historical data of selected objects from the source to the destination database.

  • Incremental data migration After full data migration completes, DTS continuously migrates incremental data from the source to the destination database, allowing migration without service interruption.
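Because DTS changes the SECURITY attribute of views, stored procedures, and functions to INVOKER, the account that calls these objects in the destination needs its own privileges on the underlying objects. A minimal sketch, assuming a hypothetical application account `app_user` and database `mydb`:

```sql
-- Hypothetical account and database names; adjust to your environment.
-- With SECURITY = INVOKER, the calling account itself needs privileges
-- on the base tables and routines, not just the object definer.
GRANT SELECT ON mydb.* TO 'app_user'@'%';   -- read views and base tables
GRANT EXECUTE ON mydb.* TO 'app_user'@'%';  -- call procedures and functions
```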

Supported SQL operations for incremental data migration

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, and DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, and TRUNCATE TABLE |
Important

RENAME TABLE operations may cause data inconsistency. If you select a table as the migration object and rename it during migration, data from that table is not migrated. To prevent this, select the database to which the table belongs as the migration object. Ensure both the pre-rename and post-rename databases are included in the migration objects.

Permissions required for database accounts

| Database type | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| Self-managed MySQL database | SELECT | SELECT | SELECT on migration objects, REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and permissions to create databases and tables (DTS creates a database named dts to store heartbeat data) |
| ApsaraDB RDS for MySQL instance | Read and write permissions | Read and write permissions | Read and write permissions |

To create a database account and grant it the required permissions, see the account management documentation for your database engine.

Note

Migrating account information from the source database requires additional permissions for both source and destination accounts. For more information, see Migrate database accounts.
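As a sketch, a source account that satisfies the incremental migration requirements above could be created as follows. The account name and password are hypothetical; tighten the host pattern (`'%'`) to your network where possible:

```sql
-- Hypothetical account; replace the name, password, and host pattern.
CREATE USER 'dts_migrator'@'%' IDENTIFIED BY '<strong-password>';

-- Incremental migration: object reads plus binary log access.
-- REPLICATION CLIENT and REPLICATION SLAVE are global-only privileges.
GRANT SELECT, SHOW VIEW, REPLICATION CLIENT, REPLICATION SLAVE
  ON *.* TO 'dts_migrator'@'%';

-- Let DTS create the `dts` heartbeat database and write to it.
GRANT ALL PRIVILEGES ON `dts`.* TO 'dts_migrator'@'%';
```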

Limitations

Foreign key behavior

  • During schema migration, DTS migrates foreign keys from the source to the destination database.

  • During full and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Performing cascade updates or deletes on the source database during migration may cause data inconsistency.

Source database limits

  • The source server must have sufficient outbound bandwidth. Insufficient bandwidth slows migration.

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and the constrained columns must contain unique values. Otherwise, the destination database may contain duplicate records.

  • A single migration task supports a maximum of 1,000 tables when you select tables as migration objects and modify them in the destination (for example, renaming tables or columns). To migrate more than 1,000 tables, split the task or migrate the entire database.

  • Binary logs for incremental data migration must meet these requirements:

    • Binary logging is enabled with binlog_format set to row and binlog_row_image set to full. Otherwise, errors occur during precheck and migration cannot start. > Important: For a self-managed MySQL database in a dual-primary cluster, set log_slave_updates to ON so that DTS can obtain all binary logs.

    • Binary logs must be retained for at least seven days. Otherwise, DTS may fail to obtain the logs, causing task failure. In exceptional cases, data inconsistency or loss may occur. Failure to meet this requirement may void the service reliability or performance stated in the DTS Service Level Agreement (SLA).

  • Operations on the source database during migration:

    • During schema migration and full data migration, do not execute DDL statements that change database or table schemas. Otherwise, the task fails.

    • For full-data-only migration, do not write data to the source database during migration. Otherwise, data inconsistency occurs. To ensure consistency, select all three migration types.

  • Data generated by binary log change operations, such as data restored from physical backups or cascade operations, is not recorded or migrated while the migration instance is running.

    Note

    If this data is not migrated, you can run full data migration again if your business allows it.

  • If the source database runs MySQL 8.0.23 or later and contains invisible columns, DTS cannot read data from those columns, which results in data loss.

    Note

    - To make columns visible, run `ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;`. For more information, see Invisible Columns.
    - Tables without primary keys auto-generate invisible primary keys, which must also be made visible. For more information, see Generated Invisible Primary Keys.
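The binary log requirements above can be checked on the source database before you configure the task. A quick sketch; the retention statements assume you are allowed to change global variables:

```sql
-- Binary logging must be enabled in row format with full row images:
SHOW VARIABLES WHERE Variable_name IN
  ('log_bin', 'binlog_format', 'binlog_row_image', 'log_slave_updates');
-- Expected: log_bin = ON, binlog_format = ROW, binlog_row_image = FULL
-- (log_slave_updates = ON for dual-primary clusters)

-- Retention must be at least 7 days. In MySQL 8.0:
SET GLOBAL binlog_expire_logs_seconds = 604800;  -- 7 days in seconds
-- In MySQL 5.7 and earlier:
-- SET GLOBAL expire_logs_days = 7;
```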

Other limits

  • Source and destination MySQL database versions must be the same for compatibility.

  • DTS does not migrate data where a parser defined by comments is used.

  • If the destination database runs MySQL 8.0.23 or later and contains invisible columns, DTS cannot write data to those columns, which causes the DTS instance to fail and results in data loss.

    Note

    - To make columns visible, run `ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;`. For more information, see Invisible Columns.
    - Tables without primary keys auto-generate invisible primary keys, which must also be made visible. For more information, see Generated Invisible Primary Keys.

  • If you skip schema migration, ensure field type compatibility between source and destination. For example, a text field in the source may be truncated when written to a varchar(255) field in the destination.

  • If migrated data contains four-byte characters (rare characters or emojis), the destination databases and tables must use the UTF8mb4 character set.

    Note

    When using schema migration, set the character_set_server parameter in the destination database to UTF8mb4.

  • Evaluate the performance impact on both databases before starting. Migrate during off-peak hours. During full data migration, DTS consumes read and write resources of both databases, which may increase server load.

  • Full data migration with concurrent INSERTs causes table fragmentation. The destination tablespace will be larger than the source tablespace after migration.

  • DTS uses the `ROUND(column, precision)` function to migrate FLOAT and DOUBLE columns. Default precision: 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these precision settings meet your requirements.

  • DTS attempts to resume failed tasks within seven days. Before switching workloads to the destination, stop or release any failed tasks, or use REVOKE to remove the write permissions of the DTS accounts on the destination database. Otherwise, a resumed task may overwrite destination data.

  • Failed DDL statements in the destination do not stop the DTS task. View failed DDL statements in task logs. For more information, see View task logs.

  • Column names that differ only in capitalization within the same table may produce unexpected results because MySQL column names are case-insensitive.

  • After migration completes (the Status changes to Completed), run `ANALYZE TABLE <table_name>` to verify that data is written to the destination table. A high-availability (HA) switchover in the destination may cause data to remain only in memory, resulting in data loss.

  • ApsaraDB RDS for MySQL instances with the EncDB feature enabled do not support full data migration.

    Note

    ApsaraDB RDS for MySQL instances with Transparent Data Encryption (TDE) enabled support schema migration, full data migration, and incremental data migration.

  • To migrate accounts, review the prerequisites and precautions in Migrate database accounts.

  • If a DTS task fails, DTS technical support attempts to restore it within 8 hours. The task may be restarted, and task parameters may be modified during restoration.

    Note

    Only task parameters may be modified; database parameters are not changed. Modified parameters include but are not limited to those in the Modify instance parameters section of the Modify the parameters of a DTS instance topic.
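For the four-byte character requirement in the limits above, the character set of the destination can be verified and adjusted before migration. A sketch, with a hypothetical database name `mydb`:

```sql
-- Check the server-level default and a specific database:
SHOW VARIABLES LIKE 'character_set_server';
SELECT DEFAULT_CHARACTER_SET_NAME
  FROM information_schema.SCHEMATA
 WHERE SCHEMA_NAME = 'mydb';

-- Switch a manually created destination database to utf8mb4
-- so that emojis and other 4-byte characters are stored intact:
ALTER DATABASE mydb CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
```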

Special cases

  • Self-managed MySQL database as source:

    • A primary/secondary switchover on the source database during migration causes the task to fail.

    • DTS calculates migration latency from the timestamp of the latest migrated data versus the current source timestamp. If no DML operations occur on the source for a long time, latency may be inaccurate. To update latency, perform a DML operation on the source database. > Note: If you select an entire database as the migration object, you can create a heartbeat table that is updated every second.

    • DTS periodically executes `CREATE DATABASE IF NOT EXISTS test` on the source database to advance the binary log file position.

  • ApsaraDB RDS for MySQL instance as source:

    • A read-only ApsaraDB RDS for MySQL V5.6 instance (or any instance that does not record transaction logs) cannot be used as the source for incremental data migration.

    • DTS periodically executes `CREATE DATABASE IF NOT EXISTS test` on the source database to advance the binary log file position.

  • ApsaraDB RDS for MySQL instance as destination: DTS auto-creates databases in the destination instance. However, if the source database name does not comply with ApsaraDB RDS for MySQL naming conventions, create the database manually before configuring the task. For more information, see Manage databases.
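The heartbeat-table workaround mentioned above (for accurate latency reporting when the source database is idle) can be sketched with the MySQL event scheduler. The table and event names are hypothetical, and this assumes the entire database is selected as the migration object:

```sql
-- Requires event_scheduler = ON on the source database.
CREATE TABLE IF NOT EXISTS heartbeat (
  id INT PRIMARY KEY,
  ts DATETIME NOT NULL
);

-- Touch the row every second so DTS always sees fresh DML:
CREATE EVENT IF NOT EXISTS heartbeat_tick
  ON SCHEDULE EVERY 1 SECOND
  DO REPLACE INTO heartbeat (id, ts) VALUES (1, NOW());
```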

Procedure

Step 1: Open the data migration page

Open the Data Migration page and select the region of the data migration instance using one of the following methods.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region of the data migration instance.

DMS console

Note

Operations may vary based on the DMS console mode and layout. For more information, see Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region of the data synchronization instance.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. (Optional) Click New Configuration Page in the upper-right corner.

    Note

    - Skip this step if Back to Previous Version is displayed in the upper-right corner. - The new configuration page is recommended. Some parameters differ between versions.

  3. Configure the source and destination databases.

    Warning

    After configuration, read the Limits in the upper part of the page. Otherwise, the task may fail or data inconsistency may occur.

    Task Name: DTS auto-generates a task name. Specify an informative name for easy identification. A unique name is not required.

    Source Database:

      • Select Existing Connection: In this example, no existing connection is selected. Configure the database information below.
      • Database Type: Select MySQL.
      • Access Method: Select an access method based on the deployment of the source database. In this example, select Public IP Address.
        > Important: For self-managed databases, prepare the required environment before migration. For more information, see Preparation overview.
      • Instance Region: The region of the source MySQL database.
      • Domain Name or IP: The endpoint of the source MySQL database. In this example, enter a public IP address.
      • Port Number: The service port of the source database. Must be accessible over the Internet. Default: 3306.
      • Database Account: The source database account. For required permissions, see the Permissions required for database accounts section.
      • Database Password: The password of the source database account.
      • Encryption: Select Non-encrypted or SSL-encrypted based on your requirements.
        - If SSL is not enabled on the source database, select Non-encrypted.
        - If SSL is enabled, select SSL-encrypted, upload a CA Certificate, and configure the CA Key.

    Destination Database:

      • Select Existing Connection: In this example, no existing connection is selected. Configure the database information below.
      • Database Type: Select MySQL.
      • Access Method: Select Alibaba Cloud Instance.
      • Instance Region: The region of the destination ApsaraDB RDS for MySQL instance.
      • Replicate Data Across Alibaba Cloud Accounts: Select No.
      • RDS Instance ID: The ID of the destination ApsaraDB RDS for MySQL instance.
      • Database Account: The destination database account. For required permissions, see the Permissions required for database accounts section.
      • Database Password: The password of the destination database account.
      • Encryption: Select Non-encrypted or SSL-encrypted. To use SSL-encrypted, enable SSL on the RDS instance first. For more information, see Use a cloud certificate to enable SSL encryption.
  4. Click Test Connectivity and Proceed, then click Confirm authorization and test link in the CIDR Blocks of DTS Servers dialog box.

    Note

    Ensure that DTS server CIDR blocks can be added (automatically or manually) to the security settings of both databases. For more information, see Add the CIDR blocks of DTS servers.

Step 3: Configure migration objects and advanced settings

  1. On the Configure Objects page, set the following parameters.

    Migration Types:
      • For full data migration only, select Schema Migration and Full Data Migration.
      • For zero-downtime migration, select Schema Migration, Full Data Migration, and Incremental Data Migration.
      > Note:
      > - If you skip Schema Migration, create the databases and tables in the destination first, and enable object name mapping in Selected Objects.
      > - If you skip Incremental Data Migration, do not write data to the source database during migration.

    Method to Migrate Triggers in Source Database: Select a trigger migration method based on your requirements. Skip this parameter if no triggers exist. For more information, see Synchronize or migrate triggers from the source database.
      > Note: Available only when both Schema Migration and Incremental Data Migration are selected.

    Enable Migration Assessment: Select Yes or No to check the compatibility of source and destination schemas (index lengths, stored procedures, dependent tables).
      > Note:
      > - Available only when Schema Migration is selected.
      > - Selecting Yes increases precheck time. View the Assessment Result during precheck. Assessment results do not affect precheck results.

    Processing Mode of Conflicting Tables:
      • Precheck and Report Errors: Checks for duplicate table names. If duplicates exist, the precheck fails and migration cannot start.
        > Note: If duplicate tables cannot be deleted or renamed, use object name mapping to rename migrated tables. For more information, see Map object names.
      • Ignore Errors and Proceed: Skips the duplicate table name check.
        > Warning: Selecting Ignore Errors and Proceed may cause data inconsistency:
        > - Same schema, same primary key: during full migration, the existing record is retained; during incremental migration, the existing record is overwritten.
        > - Different schemas: only specific columns are migrated, or the task fails.

    Whether to migrate Event: Select Yes to migrate events. For more information, see Synchronize or migrate events.

    Capitalization of Object Names in Destination Instance: Controls the capitalization of database, table, and column names. Default: DTS default policy. For more information, see Specify the capitalization of object names in the destination instance.

    Source Objects: Select objects from Source Objects and click the rightwards arrow to add them to Selected Objects.
      > Note: You can select columns, tables, or databases. Selecting tables or columns excludes views, triggers, and stored procedures.

    Selected Objects:
      • To rename a single object, right-click it. For more information, see Map the name of a single object.
      • To rename multiple objects, click Batch Edit. For more information, see Map multiple object names at a time.
      > Note:
      > - Renaming an object may cause dependent objects to fail migration.
      > - To filter data with a WHERE condition, right-click the table and set the filter. For more information, see Filter the data to be migrated.
      > - To select specific SQL operations for incremental migration, right-click the object and choose operations. For supported operations, see the Supported SQL operations for incremental data migration section.
  2. Click Next: Advanced Settings.

    Dedicated Cluster for Task Scheduling: By default, DTS uses the shared cluster. For better stability, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster.

    Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database: Controls the handling of temporary tables generated by online DDL operations that use DMS or gh-ost.
      > Important: Do not use pt-online-schema-change for online DDL operations. It causes the DTS task to fail.
      • Yes: Migrates temporary table data.
        > Note: Large amounts of online DDL data may increase migration latency.
      • No, Adapt to DMS Online DDL: Skips temporary tables. Only the original DDL operations from DMS are migrated.
        > Note: This option may lock destination tables.
      • No, Adapt to gh-ost: Skips temporary tables. Only the original DDL operations from gh-ost are migrated. Use the default or a custom regular expression to filter shadow tables.
        > Note: This option may lock destination tables.

    Whether to Migrate Accounts: Select Yes to migrate source database accounts. Verify the permissions of the source and destination accounts used in the task.

    Retry Time for Failed Connections: The retry duration when the connection to the source or destination database fails. Range: 10 to 1,440 minutes. Default: 720. Recommended: greater than 30. If the connection is re-established within this time, the task resumes. Otherwise, it fails.
      > Note:
      > - For multiple tasks that share a database, the most recently configured value takes precedence.
      > - DTS charges apply during retries. Release the instance promptly after the databases are released.

    Retry Time for Other Issues: The retry duration for DDL or DML failures. Range: 1 to 1,440 minutes. Default: 10. Recommended: greater than 10.
      > Important: This value must be smaller than Retry Time for Failed Connections.

    Enable Throttling for Full Data Migration: Limits resource consumption during full data migration. Set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s).
      > Note: Available only when Full Data Migration is selected.

    Enable Throttling for Incremental Data Migration: Limits resource consumption during incremental migration. Set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s).
      > Note: Available only when Incremental Data Migration is selected.

    Environment Tag: A tag that identifies the DTS instance. Not required in this example.

    Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Controls whether heartbeat SQL operations are written to the source database.
      • Yes: Does not write heartbeat SQL operations. Latency information may be affected.
      • No: Writes heartbeat SQL operations. This may affect physical backup and cloning of the source database.

    Configure ETL: Specifies whether to enable extract, transform, and load (ETL). For more information, see What is ETL?
      • Yes: Enter data processing statements in the code editor. For more information, see Configure ETL in a data migration or data synchronization task.
      • No: Skips ETL configuration.

    Monitoring and Alerting: Configure alerts for task failures or excessive migration latency.
      • No: Skips alerting.
      • Yes: Set alert thresholds and notification recipients. For more information, see Configure monitoring and alerting when you create a DTS task.
  3. Click Next Step: Data Verification to configure data verification. For more information, see Configure a data verification task.

Step 4: Run the precheck and purchase an instance

  1. Save the task settings and start the precheck.

    • To view API parameters, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • Otherwise, click Next: Save Task Settings and Precheck.

    Note

    - DTS runs a precheck before starting the task. The task starts only after the precheck passes.
    - If the precheck fails, click View Details next to each failed item. Fix the issues and run the precheck again.
    - If an alert is triggered during the precheck:
      - If the alert cannot be ignored, click View Details, fix the issue, and rerun the precheck.
      - If the alert can be ignored, click Confirm Alert Details, click Ignore in the dialog box, click OK, and then click Precheck Again. Ignoring alerts may cause data inconsistency.

  2. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  3. On the Purchase Instance page, configure the instance class.

    | Section | Parameter | Description |
    | --- | --- | --- |
    | New Instance Class | Resource Group | The resource group for the instance. Default: default resource group. For more information, see What is Resource Management? |
    | New Instance Class | Instance Class | Select an instance class based on the required migration speed. For more information, see Instance classes of data migration instances. |
  4. Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

  5. Click Buy and Start, then click OK in the confirmation dialog box.

Post-migration

View task progress on the Data Migration page.

Note

  • Tasks that do not include incremental migration stop automatically when they are complete. The status changes to Completed.
  • Tasks that include incremental migration run continuously. The status shows Running.
  • Switch workloads to the destination database during off-peak hours when migration latency is low. For more information, see Switch workloads.
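A simple post-migration spot check compares row counts between source and destination and persists destination statistics, as recommended in the limits section. The database and table names are hypothetical:

```sql
-- Run on both source and destination; counts should match once the task
-- status is Completed (or latency is near zero for incremental tasks):
SELECT COUNT(*) FROM mydb.orders;

-- On the destination, verify that data is persisted to the table:
ANALYZE TABLE mydb.orders;
```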