Migrate data from a self-managed MySQL database to a PolarDB for MySQL cluster using Data Transmission Service (DTS). DTS supports schema migration, full data migration, and incremental data migration, so you can complete the migration with minimal or zero downtime.
How it works
All three migration phases run sequentially:
-
Schema migration — DTS copies the schemas of selected objects (tables, views, triggers, stored procedures, and stored functions) from the source database to the PolarDB for MySQL cluster.
-
Full data migration — DTS copies all existing data from the source database to the cluster.
-
Incremental data migration — After full data migration completes, DTS continuously replicates ongoing changes from the source database. This keeps the cluster in sync so you can cut over without stopping your application.
Prerequisites
Before you begin, make sure you have:
-
A self-managed MySQL database running version 5.1, 5.5, 5.6, 5.7, or 8.0. If the database is hosted on a third-party cloud platform, it must be accessible over the Internet.
-
A PolarDB for MySQL cluster with available storage larger than the total size of the source data. See Purchase an Enterprise Edition cluster or Purchase a subscription cluster.
-
A database account for the source MySQL database and an account for the destination PolarDB for MySQL cluster with the permissions listed in Required permissions.
-
Binary logging configured on the source database if you plan to use incremental data migration. See Binary logging requirements.
Required permissions
The following table lists the minimum permissions required for each migration type.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Self-managed MySQL database | SELECT | SELECT | SELECT on the objects to be migrated; REPLICATION CLIENT; REPLICATION SLAVE; SHOW VIEW; CREATE (for databases and tables, allowing DTS to create a dts database to store heartbeat data) |
| PolarDB for MySQL cluster | Read and write | Read and write | Read and write |
For instructions on creating accounts and granting permissions:
-
Self-managed MySQL database: Create an account for a self-managed MySQL database and configure binary logging
-
PolarDB for MySQL cluster: Create and manage a database account
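The exact statements depend on your MySQL version and security policy, but the following sketch shows one way to create a source account with the permissions listed above. The account name, password, and host pattern are placeholders.

```sql
-- A minimal sketch: create a migration account on the self-managed source database.
-- 'dts_user', the password, and the host pattern '%' are placeholders.
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'YourStrongPassword';

-- Permissions for incremental data migration (a superset of those needed for
-- schema migration and full data migration).
GRANT SELECT, SHOW VIEW ON *.* TO 'dts_user'@'%';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_user'@'%';
-- CREATE lets DTS create the dts database used to store heartbeat data.
GRANT CREATE ON *.* TO 'dts_user'@'%';
FLUSH PRIVILEGES;
```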
Binary logging requirements
Incremental data migration reads from the source database's binary logs. Before starting the migration task, verify the following settings on the source database:
| Parameter | Required value | Notes |
|---|---|---|
| Binary logging | Enabled | DTS reads incremental changes from binary logs. If disabled, the precheck fails and the task cannot start. |
| binlog_format | row | Required for row-level replication. Other formats may produce data inconsistencies. |
| binlog_row_image | full | Ensures complete row images are captured. |
| Binary log retention | At least 7 days | If logs are purged before DTS reads them, the task fails. In exceptional cases, data loss may occur. |
| log_slave_updates | ON (dual-primary clusters only) | Required when the source is a self-managed MySQL database in a dual-primary cluster, so DTS can obtain all binary logs. |
DTS periodically executes the statement CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log file position.
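You can verify these settings on the source database before you start the task. The following statements are a quick check; the required values from the table above are noted in comments. Changing binlog_format or binlog_row_image usually requires editing my.cnf and restarting the server, so treat this as a sketch rather than a complete procedure.

```sql
-- Verify binary logging settings on the source database.
SHOW VARIABLES LIKE 'log_bin';                    -- expected: ON
SHOW VARIABLES LIKE 'binlog_format';              -- expected: ROW
SHOW VARIABLES LIKE 'binlog_row_image';           -- expected: FULL (MySQL 5.6 and later)
-- Retention: expire_logs_days on MySQL 5.x, binlog_expire_logs_seconds on MySQL 8.0.
SHOW VARIABLES LIKE 'expire_logs_days';           -- expected: 7 or more
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds'; -- expected: 604800 or more
-- Only required when the source is a dual-primary cluster.
SHOW VARIABLES LIKE 'log_slave_updates';          -- expected: ON
```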
Billing
| Migration type | Instance fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE |
RENAME TABLE operations can cause data inconsistency. If you select a table as the migration object and rename it during migration, data for that table is not migrated to the destination. To avoid this, select the entire database as the migration object and make sure the databases the table belongs to — both before and after the rename — are included in the migration objects.
Configure and run the migration task
Step 1: Open the Data Migration page
Use one of the following methods:
DTS console
-
Log on to the DTS console.
-
In the left-side navigation pane, click Data Migration.
-
In the upper-left corner, select the region where the migration instance will reside.
DMS console
The exact steps may vary based on the mode and layout of the Data Management Service (DMS) console. See Simple mode and Customize the layout and style of the DMS console.
-
Log on to the DMS console.
-
In the top navigation bar, move the pointer over Data Development > DTS (DTS) > Data Migration.
-
From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance will reside.
Step 2: Create the migration task
-
Click Create Task.
-
If the New Configuration Page button appears in the upper-right corner, click it to switch to the new configuration page.
Note: If the Back to Previous Version button is displayed instead, you are already on the new configuration page. Skip this step.
Step 3: Configure the source and destination databases
After configuring the source and destination databases, read the Limits displayed at the top of the page. Skipping this step may cause the task to fail or result in data inconsistency.
Configure the parameters as described in the following table.
Source database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select an existing connection to auto-populate the database parameters, or leave blank to enter the connection details manually. To register a database, see Manage database connections. |
| Database Type | Select MySQL. |
| Access Method | Select Public IP Address for a source database accessible over the Internet. For other access methods, see Preparation overview. |
| Instance Region | Select the region closest to the physical location of the source database. |
| Hostname or IP address | The public IP address or hostname of the source database. |
| Port | The service port of the source MySQL database. Default: 3306. |
| Database Account | The account with the permissions listed in Required permissions. |
| Database Password | The password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted based on your source database configuration. If you select SSL-encrypted, upload a CA Certificate and set the CA Key. |
Destination database
| Parameter | Description |
|---|---|
| Select Existing Connection | Select an existing connection to auto-populate the database parameters, or leave blank to enter the connection details manually. |
| Database Type | Select PolarDB for MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the destination PolarDB for MySQL cluster. |
| Database Account | The account with the permissions listed in Required permissions. |
| Database Password | The password for the database account. |
| Encryption | Configure SSL encryption based on your requirements. See Configure SSL encryption. |
Step 4: Test connectivity
At the bottom of the page, click Test Connectivity and Proceed, then click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Make sure the CIDR blocks of DTS servers are added to the security settings of your source and destination databases. See Add the CIDR blocks of DTS servers.
Step 5: Select migration objects and types
On the Configure Objects page, configure the following parameters.
| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. To migrate with minimal downtime, also select Incremental Data Migration. If you do not select Incremental Data Migration, stop all writes to the source database during migration to maintain data consistency. |
| Method to Migrate Triggers in Source Database | Select how triggers are migrated. This parameter is available only when Schema Migration is selected. See Synchronize or migrate triggers from the source database. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if identical table names exist in the source and destination. Use object name mapping to rename conflicting tables before starting. Ignore Errors and Proceed: skips the check. During full data migration, existing records in the destination are retained; during incremental data migration, they are overwritten. Use with caution. |
| Whether to Migrate Events | Specify whether to migrate events. If yes, perform additional steps to avoid data inconsistency. See Synchronize or migrate events. |
| Capitalization of object names in destination instance | Set the capitalization policy for database names, table names, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select the objects to migrate and click the arrow icon to move them to the Selected Objects section. |
| Selected Objects | To rename a single object, right-click it. To rename multiple objects at once, click Batch Edit in the upper-right corner. See Map object names. To filter data by condition, right-click a table and specify the WHERE clause (an example follows this table). See Specify filter conditions. |
Renaming a migration object with object name mapping may cause other objects that depend on it to fail migration.
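For example, assuming a hypothetical orders table with a create_time column, a filter condition such as the following migrates only rows created on or after a given date. The column name and value are placeholders.

```sql
create_time >= '2024-01-01 00:00:00'
```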
Step 6: Configure advanced settings
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. Purchase a dedicated cluster to improve task stability. See What is a DTS dedicated cluster. |
| Select the engine type of the destination database | InnoDB (default storage engine) or X-Engine (an online transaction processing (OLTP) storage engine). |
| Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database. | Controls how DTS handles temporary tables from online DDL operations performed with DMS or gh-ost. Yes: migrates temporary table data (may increase latency). No, Adapt to DMS Online DDL: skips temporary table data; migrates only the original DDL statements from DMS (destination tables may be locked). No, Adapt to gh-ost: skips temporary table data; migrates only the original DDL statements from gh-ost (destination tables may be locked). Important: Do not use pt-online-schema-change for online DDL operations on the source database; otherwise, the DTS task fails. |
| Whether to Migrate Accounts | Specify whether to migrate source database account information. If Yes, select the accounts to migrate and verify the permissions of the DTS accounts. |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Range: 10–1,440 minutes. Default: 720 minutes. Set to at least 30 minutes. If DTS reconnects within this period, the task resumes automatically. Note: If multiple tasks share the same source or destination database and are set to different values, the most recently set value takes precedence. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Range: 1–1,440 minutes. Default: 10 minutes. Set to at least 10 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit the read/write rate during full data migration to reduce load on the database servers. Configure QPS to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limit the replication rate during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes: DTS does not write heartbeat SQL operations to the source database. Migration latency may be displayed. No: DTS writes heartbeat SQL operations to the source database. Physical backup and cloning of the source database may be affected. |
| Configure ETL | Enable extract, transform, and load (ETL) to process data during migration. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Yes: configure an alert threshold and notification settings so DTS notifies alert contacts when the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting. No: no alerting. |
| Environment Tag | An optional tag to identify the DTS instance. |
Step 7: Configure data verification (optional)
Click Next Step: Data Verification to set up a data verification task that checks data consistency between the source and destination. See Configure a data verification task.
Step 8: Run the precheck and purchase the instance
-
Click Next: Save Task Settings and Precheck.
Note: To view the API parameters for configuring the DTS task programmatically, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.
-
Wait for the precheck to complete. If any item fails, click View Details to see the cause, fix the issue, and run the precheck again. If a precheck item triggers an alert that can be ignored: click Confirm Alert Details, then Ignore, then OK, and run the precheck again. Ignoring an alert may cause data inconsistency.
-
When Success Rate reaches 100%, click Next: Purchase Instance.
-
On the Purchase Instance page, configure the following parameters.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines migration speed. Select based on your requirements. See Instance classes of data migration instances. |
-
Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
The migration task starts. Monitor its progress on the Data Migration page.
After migration
After the migration task completes, perform the following steps before switching your application to the PolarDB for MySQL cluster.
-
Wait for migration latency to reach 0 — If you selected incremental data migration, wait until the migration latency drops to 0 seconds and remains there for a period. This confirms the cluster is fully in sync with the source database.
-
Stop or release the DTS task — DTS automatically retries failed tasks for up to seven days. Stop or release the task before switching workloads to the destination. If you do not stop the task, a resumed migration may overwrite data in the destination database with data from the source. Alternatively, run REVOKE to revoke write permissions from the DTS accounts that access the destination database (a minimal example follows this list).
-
Rebuild permissions — During schema migration, DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions, but does not migrate user account information. Grant the required read and write permissions to INVOKER for any views, stored procedures, or stored functions in the destination cluster.
-
Verify your application — Run sanity tests against the PolarDB for MySQL cluster to confirm data integrity and application behavior before completing the cutover.
-
Switch your application — Update your application's connection string to point to the PolarDB for MySQL cluster.
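If you choose to revoke write permissions instead of stopping the task, the exact statements depend on how the DTS account was originally granted permissions. The following is a minimal sketch; 'dts_user' and 'your_database' are placeholders.

```sql
-- Revoke write permissions from the DTS account on the destination cluster.
-- 'dts_user' and 'your_database' are placeholders for your actual account and database.
REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, INDEX
  ON your_database.* FROM 'dts_user'@'%';
FLUSH PRIVILEGES;
```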
Limits
Source database limits
-
The source server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.
-
Tables to be migrated must have a primary key or a UNIQUE constraint with all fields unique. Without this, the destination database may contain duplicate records. A query that finds tables missing such a constraint appears after this list.
-
If you select tables as migration objects and plan to rename tables or columns in the destination, a single task can migrate up to 1,000 tables. Tasks exceeding this limit return a request error. Split large migrations into multiple tasks or migrate the entire database instead.
-
During schema migration and full data migration, do not execute DDL statements that change database or table schemas; otherwise, the migration task fails.
-
If you run only full data migration, do not write to the source database during migration. This ensures data consistency. For best results, select schema migration, full data migration, and incremental data migration together.
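The query below is a sketch against information_schema that lists base tables in a given database with neither a PRIMARY KEY nor a UNIQUE constraint; 'your_database' is a placeholder.

```sql
-- Find tables in the source database that lack a PRIMARY KEY or UNIQUE constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON  c.table_schema = t.table_schema
       AND c.table_name   = t.table_name
       AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'your_database'   -- placeholder: the database to migrate
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```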
Foreign key behavior
-
During schema migration, DTS migrates foreign keys from the source to the destination.
-
During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade update and delete operations performed on the source database during this time may cause data inconsistency.
General limits
-
The source and destination MySQL versions should match to ensure compatibility.
-
Migrate data during off-peak hours. Full data migration uses read and write resources on both databases and increases load on the database servers.
-
Concurrent INSERT operations during full data migration cause table fragmentation in the destination. After full data migration, the destination tablespace is larger than the source.
-
For columns of the FLOAT or DOUBLE type, DTS uses ROUND(COLUMN, PRECISION) to retrieve values. If no precision is specified, DTS uses 38 digits for FLOAT and 308 digits for DOUBLE. Verify that this meets your requirements before starting.
-
You cannot convert data from the DATETIME type to the VARCHAR type.
-
If DDL statements fail in the destination database, the DTS task continues running. View failed DDL statements in task logs.
-
If the source database is an ApsaraDB RDS for MySQL instance with the EncDB feature enabled, full data migration is not supported.
Special cases
-
Primary/secondary switchover — If a primary/secondary switchover occurs on the source database while the migration task is running, the task fails.
-
Low DML activity — Migration latency may be inaccurate if no DML operations are performed on the source for an extended period. Perform a DML operation to update the latency display. If you select an entire database as the migration object, you can create a heartbeat table that is updated every second (a sketch follows this list).
-
ApsaraDB RDS for MySQL read-only instances — A read-only ApsaraDB RDS for MySQL V5.6 instance that does not record transaction logs cannot be used as the source database for incremental data migration.
-
PolarDB for MySQL as the destination — DTS automatically creates the database in the destination cluster if the source database name complies with PolarDB naming conventions. If the name does not comply, manually create the database before configuring the migration task. See Manage databases. Throttling is not supported for full data migration to a PolarDB for MySQL cluster.
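For the low DML activity case above, one way to build a heartbeat table, assuming the event scheduler is enabled on the source database and you have privileges to create events, is sketched below. The table and event names are placeholders; include the table's database in the migration objects.

```sql
-- A heartbeat table updated every second so the source always produces fresh changes.
-- Requires: SET GLOBAL event_scheduler = ON;
CREATE TABLE IF NOT EXISTS dts_heartbeat (
  id INT PRIMARY KEY,
  ts DATETIME NOT NULL
);
INSERT INTO dts_heartbeat (id, ts) VALUES (1, NOW())
  ON DUPLICATE KEY UPDATE ts = NOW();

CREATE EVENT IF NOT EXISTS dts_heartbeat_event
  ON SCHEDULE EVERY 1 SECOND
  DO UPDATE dts_heartbeat SET ts = NOW() WHERE id = 1;
```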