Use Data Transmission Service (DTS) to migrate data between ApsaraDB RDS for MySQL instances. DTS supports schema migration, full data migration, and incremental data migration. Selecting all three migration types keeps services running without interruption during migration.
This topic covers migration between RDS instances owned by the same Alibaba Cloud account. To migrate across accounts, see Configure a DTS migration task across Alibaba Cloud accounts. Before using DTS, make sure DTS has the permissions to access Alibaba Cloud resources. See What is DTS? and Authorize DTS to access Alibaba Cloud resources.
Prerequisites
Before you begin, make sure that:
Both the source and destination ApsaraDB RDS for MySQL instances are created. See Create an ApsaraDB RDS for MySQL instance.
The available storage of the destination instance is larger than the total size of the data in the source instance.
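To confirm the storage prerequisite, you can estimate how much space the source data occupies. A minimal sketch using the standard information_schema views (InnoDB sizes reported there are approximations):

```sql
-- Approximate per-database data size on the source, in GB.
-- information_schema statistics are estimates for InnoDB tables.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_gb DESC;
```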
Migration types
| Migration type | Description |
|---|---|
| Schema migration | Migrates schemas of selected objects (tables, views, triggers, stored procedures, stored functions) from the source to the destination. During migration, DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions, and sets the DEFINER to the destination database account. DTS does not migrate user information. To call a view, stored procedure, or stored function on the destination, grant read and write permissions to INVOKER. |
| Full data migration | Migrates all historical data of selected objects from the source to the destination. |
| Incremental data migration | After full data migration completes, migrates ongoing changes from the source to the destination. Use this to keep services running without interruption during migration. |
Note: During migration, the routine_body of stored procedures and stored functions and the select_statement of views cannot be modified, and the SECURITY attribute and DEFINER cannot be changed.
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free of charge | Charged when Connection Type is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when Connection Type is set to Public IP Address. See Billing overview. |
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE |
RENAME TABLE operations may cause data inconsistency. If you select a table as the migration object and rename it during migration, that table's data will not be migrated to the destination. To prevent this, select the database containing the table as the migration object, and make sure the database is included in the migration scope both before and after the RENAME TABLE operation.
Limitations
Review the following limitations before you configure a migration task. Most limitations apply at task creation time and cannot be changed afterward.
Source database requirements
| Limitation | Workaround |
|---|---|
| The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. | Schedule migration during off-peak hours and enable throttling in the advanced settings. |
| Tables to be migrated must have PRIMARY KEY or UNIQUE constraints with all fields unique. Without these constraints, the destination may contain duplicate records. | Add a primary key or unique constraint to the table before migration. You can find affected tables with the first query after this table. |
| If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. | For more than 1,000 tables, configure multiple tasks or migrate the entire database instead. |
| For incremental data migration: binary logging must be enabled, binlog_format must be set to row, and binlog_row_image must be set to full. If these conditions are not met, the precheck fails. | Enable binary logging and set the required parameters before starting the task (the queries after this table show how to check the current values). For a self-managed MySQL database in a dual-primary cluster, also set log_slave_updates to ON. |
| For incremental data migration: binary logs must be retained for at least 7 days. Shorter retention may cause DTS to fail obtaining logs, resulting in data inconsistency or loss, and the DTS SLA may not be guaranteed. | Extend the binary log retention period to at least 7 days before starting migration. |
| During schema migration and full data migration, do not run DDL statements to change database or table schemas. Doing so causes the task to fail. | Plan and complete all schema changes before starting the migration task. |
| If running full data migration only (without incremental), do not write data to the source during migration. | Select schema migration, full data migration, and incremental data migration together to maintain data consistency. |
| Data changes that are not recorded in binary logs, such as data restored from a physical backup or data generated by cascade operations, cannot be captured by DTS and will not be migrated. | If your business allows it, run another full data migration after such operations complete. |
| For MySQL 8.0.23 or later: invisible columns cause data loss because DTS cannot obtain that data. | Run ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; to make columns visible before migration. See Invisible Columns. Tables without primary keys automatically generate invisible primary keys; make these visible too. See Generated Invisible Primary Keys. |
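The following queries are a hedged sketch of how to verify these source requirements before starting the task, using standard MySQL system variables and information_schema views. On ApsaraDB RDS for MySQL, binary logging parameters are typically managed through the console rather than with SET GLOBAL.

```sql
-- Tables without a PRIMARY KEY or UNIQUE constraint (candidates for
-- duplicate records in the destination).
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema')
  AND c.constraint_name IS NULL;

-- Binary log settings required for incremental migration.
SHOW VARIABLES LIKE 'log_bin';           -- must be ON
SHOW VARIABLES LIKE 'binlog_format';     -- must be ROW
SHOW VARIABLES LIKE 'binlog_row_image';  -- must be FULL

-- Binary log retention on MySQL 8.0 (604800 seconds = 7 days).
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';

-- Invisible columns (MySQL 8.0.23 or later) that DTS cannot read.
SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE extra LIKE '%INVISIBLE%';
```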
General limitations
| Limitation | Workaround |
|---|---|
| The versions of the source and destination MySQL databases must be the same. | Verify version compatibility before starting migration. |
| DTS does not migrate data from tables that use a parser defined by using comments. | Remove comment-based parser definitions before migration. |
| For MySQL 8.0.23 or later destination databases: invisible columns in the destination cause the task to fail and can result in data loss. | Run ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; on the destination before starting the task. |
| If you do not use DTS schema migration, ensure field type compatibility manually. For example, if a source field is text but the destination field is varchar(255), data may be truncated. | Use DTS schema migration, or manually verify and align field types in the destination before migration. |
| If migration data includes 4-byte characters (rare characters or emojis), the destination database and tables must use the UTF8mb4 character set. | If using DTS schema migration, set character_set_server in the destination to UTF8mb4. If not using schema migration, set the character set manually before migration. |
| Full data migration consumes read and write resources on both source and destination, which may increase server load. | Schedule migration during off-peak hours and enable throttling in the advanced settings. |
| During full data migration, concurrent INSERT operations cause table fragmentation in the destination. After migration, the destination tablespace will be larger than the source. | Run OPTIMIZE TABLE on the destination after migration if tablespace size is a concern (see the example statements after this table). |
| For FLOAT or DOUBLE columns: DTS uses ROUND(COLUMN,PRECISION). If no precision is specified, DTS defaults to 38 digits for FLOAT and 308 digits for DOUBLE. | Verify that the default precision values meet your requirements before starting the task. |
| DTS attempts to resume failed tasks for up to 7 days. If not stopped before you switch workloads, a resumed task may overwrite destination data with source data. | Stop or release any failed tasks before switching workloads to the destination. Alternatively, run REVOKE to revoke write permissions from DTS accounts on the destination. |
| If DDL statements fail to run on the destination, the DTS task continues. | View failed DDL statements in task logs. See View task logs. |
| If column names within the same destination MySQL table differ only in capitalization, results may be unexpected because MySQL column names are not case-sensitive. | Normalize column name casing in the destination before migration. |
| After migration completes (instance Status changes to Completed), data written during a high-availability (HA) switchover may exist only in memory, causing data loss. | Run analyze table <table name> on the destination after migration to verify data was written successfully. |
| EncDB-enabled RDS MySQL instances do not support full data migration. | Disable EncDB or use a different migration approach. Note: TDE-enabled RDS MySQL instances do support schema migration, full data migration, and incremental data migration. |
| If a DTS task fails, DTS technical support will attempt to restore it within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified. | No action required; this is handled by DTS support. |
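The statements below sketch the workarounds referenced in this table. The database, table, and account names (mydb, orders, dts_user) are placeholders for illustration, not names that DTS creates for you.

```sql
-- Check that the destination can store 4-byte characters.
SHOW VARIABLES LIKE 'character_set_server';  -- should be utf8mb4

-- Defragment a table that grew during concurrent full-migration INSERTs.
OPTIMIZE TABLE mydb.orders;

-- Verify that data written during an HA switchover was persisted.
ANALYZE TABLE mydb.orders;

-- Before switching workloads, revoke write permissions from the DTS
-- account on the destination so a resumed task cannot overwrite data.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_user'@'%';
```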
Special cases
| Scenario | Limitation | Workaround |
|---|---|---|
| Self-managed MySQL source | A primary/secondary switchover while the task is running causes the task to fail. | Avoid switchovers during migration. |
| Self-managed MySQL source | DTS executes CREATE DATABASE IF NOT EXISTS \`test\` in the source on a schedule to advance the binary log position. If no DML operations run on the source for an extended period, migration latency readings may be inaccurate. | Run a DML operation on the source to refresh the latency value. If migrating an entire database, create a heartbeat table that receives data every second (a sketch follows this table). |
| ApsaraDB RDS for MySQL source | A read-only ApsaraDB RDS for MySQL V5.6 instance that does not record transaction logs cannot be used as the source for incremental data migration. | Use a different source instance or migration type. |
| ApsaraDB RDS for MySQL source | DTS executes CREATE DATABASE IF NOT EXISTS \`test\` in the source database as scheduled to advance the binary log file position. | No action required; this is handled automatically by DTS. |
| ApsaraDB RDS for MySQL destination | DTS automatically creates the destination database. However, if the source database name does not comply with ApsaraDB RDS for MySQL naming conventions, the automatic creation fails. | Manually create the database in the destination instance before configuring the migration task. See Manage databases. |
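As a sketch of the heartbeat workaround mentioned above (the table and event names are illustrative, and the MySQL event scheduler must be enabled for the event to run):

```sql
-- A one-row heartbeat table; writing to it every second keeps the
-- binary log advancing so DTS latency readings stay accurate.
CREATE TABLE IF NOT EXISTS dts_heartbeat (
  id TINYINT UNSIGNED NOT NULL PRIMARY KEY,
  ts DATETIME(3) NOT NULL
);

-- Requires event_scheduler = ON.
CREATE EVENT IF NOT EXISTS dts_heartbeat_event
ON SCHEDULE EVERY 1 SECOND
DO REPLACE INTO dts_heartbeat (id, ts) VALUES (1, NOW(3));
```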
To migrate accounts from the source database, review the prerequisites and requirements first. See Migrate database accounts.
Configure a migration task
The following steps walk you through creating and running a DTS migration task.
Step 1: Go to the Data Migration page
Use one of the following methods:
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance resides.
DMS console
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Migration.
From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance resides.
Step 2: Create a task
Click Create Task to go to the task configuration page.
Optional: If Back to Previous Version is not shown in the upper-right corner, click New Configuration Page to switch to the new configuration page.
Step 3: Configure source and destination databases
After configuring the source and destination databases, read the Limits displayed at the top of the page. Ignoring these limits may cause the task to fail or result in data inconsistency.
Configure the following parameters:
| Section | Parameter | Description |
|---|---|---|
| N/A | Task Name | The name of the DTS task. DTS auto-generates a name. Specify a descriptive name for easy identification. Uniqueness is not required. |
| Source Database | Select a DMS database instance. | In this example, no database instance is selected. Configure the database information below. |
| | Database Type | Select MySQL. |
| | Connection Type | Select Alibaba Cloud Instance. |
| | Instance Region | The region where the source ApsaraDB RDS for MySQL instance resides. |
| | Cross-account | Select No for same-account migration. To migrate across accounts, select Yes. See Configure a DTS task across Alibaba Cloud accounts. |
| | RDS Instance ID | The ID of the source ApsaraDB RDS for MySQL instance. The source and destination instances can be the same or different; DTS supports both within-instance and between-instance migrations. |
| | Database Account | The database account for the source instance. See the permissions section of this topic for required permissions (a sample GRANT sketch follows this table). |
| | Database Password | The password used to access the source instance. |
| | Connection Method | Select Non-encrypted or SSL-encrypted based on your requirements. To use SSL encryption, enable SSL on the RDS MySQL instance first. See Use a cloud certificate to enable SSL encryption. |
| Destination Database | Select a DMS database instance. | In this example, no database instance is selected. Configure the database information below. |
| | Database Type | Select MySQL. |
| | Connection Type | Select Alibaba Cloud Instance. |
| | Instance Region | The region where the destination ApsaraDB RDS for MySQL instance resides. |
| | Cross-account | Select No for same-account migration. |
| | RDS Instance ID | The ID of the destination ApsaraDB RDS for MySQL instance. |
| | Database Account | The database account for the destination instance. See the permissions section of this topic for required permissions. |
| | Database Password | The password used to access the destination instance. |
| | Connection Method | Select Non-encrypted or SSL-encrypted based on your requirements. If using SSL encryption, enable it on the destination instance first. |
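The authoritative privilege list is in the permissions section of this topic. As a rough, non-authoritative sketch, a source account for incremental migration typically needs read and binlog replication privileges, and the destination account needs read and write access to the target database (dts_migrator and mydb are placeholders; on ApsaraDB RDS, accounts are usually created in the console):

```sql
-- On the source: read access plus binlog replication privileges.
CREATE USER 'dts_migrator'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT ON *.* TO 'dts_migrator'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_migrator'@'%';

-- On the destination: read and write access to the target database.
GRANT ALL PRIVILEGES ON mydb.* TO 'dts_migrator'@'%';
```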
Step 4: Test connectivity and proceed
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the IP whitelist of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL and ApsaraDB for MongoDB) and to the security group rules of Elastic Compute Service (ECS) instances. For self-managed databases hosted on multiple ECS instances, manually add DTS CIDR blocks to the security group rules of each ECS instance. For self-managed databases in data centers or from third-party cloud providers, manually add DTS CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers.
Adding DTS CIDR blocks to a database whitelist or ECS security group rules introduces security risks. Before using DTS, acknowledge these risks and take preventive measures, such as: strengthening username and password security, limiting exposed ports, authenticating API calls, regularly auditing whitelist and security group rules, and removing unauthorized CIDR blocks. For more secure connectivity, connect your database to DTS through Express Connect, VPN Gateway, or Smart Access Gateway.
Step 5: Configure migration objects
On the Configure Objects page, set the following parameters:
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types based on your needs:<br>- For full migration only: select Schema Migration and Full Data Migration.<br>- To keep services running during migration: select Schema Migration, Full Data Migration, and Incremental Data Migration.<br><br> Note If Schema Migration is not selected, create the target database and table in the destination manually before starting (a sketch follows this table), and enable object name mapping in Selected Objects. If Incremental Data Migration is not selected, do not write to the source database during migration to maintain data consistency. |
| Method to Migrate Triggers in Source Database | The method for migrating triggers. Configure based on your requirements. Available only when both Schema Migration and Incremental Data Migration are selected. See Synchronize or migrate triggers from the source database. |
| Enable Migration Assessment | Whether to assess schema compatibility (index length, stored procedures, dependent tables, and so on) before migration. Available only when Schema Migration is selected. Selecting Yes may extend the precheck duration. Assessment results are visible during the precheck but do not affect precheck outcomes. |
| Processing Mode of Conflicting Tables | How to handle tables in the destination that share names with source tables:<br>- Precheck and Report Errors: fails the precheck if identical table names exist. To resolve, rename destination tables using object name mapping. See Map object names.<br>- Ignore Errors and Proceed: skips the name conflict check.<br><br> Warning If Ignore Errors and Proceed is selected: during full data migration, DTS skips conflicting records (existing destination records are kept); during incremental data migration, conflicting records overwrite destination records. If schemas differ, only some columns may be migrated or the task may fail. |
| Capitalization of Object Names in Destination Instance | The capitalization of database, table, and column names in the destination. Defaults to DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select objects from the Source Objects section and click the arrow icon to move them to Selected Objects. Selectable granularity: columns, tables, or databases. Selecting tables or columns excludes other objects such as views, triggers, and stored procedures. |
| Selected Objects | To rename a single object in the destination, right-click it in the Selected Objects section. For bulk renaming, click Batch Edit. See Map object names.<br><br> Note Renaming an object may cause dependent objects to fail migration. To filter rows by condition, right-click a table and specify the WHERE conditions. See Specify filter conditions. To scope incremental DML or DDL operations to specific objects, right-click the object and select the operations to include. |
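If you skip Schema Migration, the destination objects must exist before the task starts. A minimal placeholder sketch (the database, table, and columns are illustrative; match your real source schema, and use utf8mb4 if the data contains 4-byte characters):

```sql
CREATE DATABASE IF NOT EXISTS mydb CHARACTER SET utf8mb4;

CREATE TABLE IF NOT EXISTS mydb.orders (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  amount DECIMAL(10, 2) NOT NULL,
  created_at DATETIME NOT NULL
) CHARACTER SET utf8mb4;
```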
Click Next: Advanced Settings.
Step 6: Configure advanced settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. Purchase a dedicated cluster for better task stability. See What is a DTS dedicated cluster. |
| Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database. | Applies when using DMS or gh-ost for online DDL operations on the source. Options:<br>- Yes: migrates temporary table data (may cause latency).<br>- No, Adapt to DMS Online DDL: migrates only the original DDL operations from DMS; destination tables may be locked.<br>- No, Adapt to gh-ost: migrates only the original DDL from gh-ost; destination tables may be locked. Use default or custom regex to filter out gh-ost shadow tables.<br><br> Important Do not use pt-online-schema-change for online DDL on the source — it causes DTS task failure. |
| Whether to Migrate Accounts | Whether to migrate source database account information. If Yes, select the accounts to migrate and verify permissions for both source and destination accounts. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1,440 minutes. Default: 720 minutes. Set to at least 30 minutes. If reconnection succeeds within this window, DTS resumes the task. Otherwise, the task fails.<br><br> Note When multiple tasks share the same source or destination database and have different retry time ranges, the value specified last takes precedence. DTS charges for the instance during retry. Set the retry time based on your business needs and release the instance promptly when no longer needed. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Valid values: 1–1,440 minutes. Default: 10 minutes. Set to at least 10 minutes. This value must be smaller than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Whether to limit resource usage during full data migration to reduce load on database servers. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Whether to limit resource usage during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Whether to write heartbeat table SQL operations to the source database while the instance runs:<br>- Yes: heartbeat operations are not written to the source. A latency value may be displayed for the DTS instance.<br>- No: heartbeat operations are written to the source. Physical backup and cloning of the source may be affected. |
| Configure ETL | Whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task. For details about ETL, see What is ETL?. |
| Monitoring and Alerting | Whether to configure alerts. If Yes, set the alert threshold and notification settings. DTS sends alerts when the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting when you create a DTS task. |
Step 7: Configure data verification
Click Next Step: Data Verification to configure a data verification task. See Configure a data verification task.
Step 8: Save settings and run a precheck
To preview the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
To proceed, click Next: Save Task Settings and Precheck.
DTS runs a precheck before the task starts. The task can only start after passing the precheck.
If the precheck fails, click View Details next to each failed item. Fix the issues based on the check results, then click Precheck Again.
If an alert is triggered:
For alerts that cannot be ignored: click View Details, fix the issues, then click Precheck Again.
For alerts that can be ignored: click Confirm Alert Details > Ignore > OK, then click Precheck Again.
Note Ignoring alerts may cause data inconsistency and expose your business to potential risks.
Step 9: Purchase a migration instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following:
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management?. |
| New Instance Class | Instance Class | The instance class determines migration speed. Select based on your scenario. See Instance classes of data migration instances. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the checkbox.
Click Buy and Start. In the confirmation dialog, click OK.
Track migration progress on the Data Migration page.
What's next
After migration completes (instance Status changes to Completed):
Run analyze table <table name> on the destination to confirm that data was written successfully (a batch sketch follows).
Stop or release the DTS migration instance before switching workloads to the destination database.
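To run the check across every migrated table rather than one at a time, you can generate the statements from information_schema. A sketch with mydb as a placeholder database name:

```sql
-- Generate ANALYZE TABLE statements for all base tables in a database;
-- run the generated statements on the destination.
SELECT CONCAT('ANALYZE TABLE `', table_schema, '`.`', table_name, '`;')
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND table_type = 'BASE TABLE';
```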