Use Data Transmission Service (DTS) to migrate data from a PolarDB for MySQL cluster to a PolarDB-X 2.0 instance. DTS supports schema migration, full data migration, and incremental data migration, enabling a smooth cutover with minimal downtime.
Prerequisites
Before you begin, make sure that you have:
- A PolarDB-X 2.0 instance created
- Available storage space on the PolarDB-X 2.0 instance that is larger than the total data size of the source PolarDB for MySQL cluster
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free of charge | Charged when the Access Method parameter of the destination database is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Migration types
DTS supports three migration types that you can combine based on your requirements.
| Migration type | Description |
|---|---|
| Schema migration | Migrates schemas of selected objects (tables, views, triggers, stored procedures, and stored functions). The routine_body of stored procedures and functions, and the select_statement of views cannot be modified during migration. DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions, and sets the DEFINER to the destination database account. DTS does not migrate user information. To call a view, stored procedure, or stored function in the destination database, grant the required permissions to INVOKER. |
| Full data migration | Migrates historical data from selected objects in the source database to the destination database. |
| Incremental data migration | Migrates ongoing changes after full data migration completes, keeping source and destination databases in sync while your applications continue running. Supported SQL operations: INSERT, UPDATE, DELETE. |
Recommended configuration: Select Schema Migration, Full Data Migration, and Incremental Data Migration together to ensure data consistency and minimize downtime during cutover.
Limitations
Source database
| Limitation | Details |
|---|---|
| Outbound bandwidth | The source database server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Table constraints | Tables to be migrated must have PRIMARY KEY or UNIQUE constraints with all fields unique. Otherwise, the destination database may contain duplicate records. |
| Table count when renaming objects | If you select tables as migration objects and need to rename tables or columns in the destination database, a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database instead. |
| DDL during full data migration | Do not perform DDL operations on the source database during full data migration. DDL operations cause the migration task to fail. |
| Writes during full-only migration | If you run only full data migration (without incremental), do not write to the source database during migration. Concurrent writes cause data inconsistency. |
| Incremental DDL | Incremental DDL operations cannot be migrated. To perform DDL operations during incremental migration, run them in the destination database first, then in the source database. |
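The primary key requirement above can be verified before you start the task. The following is a minimal sketch that lists tables lacking both a PRIMARY KEY and a UNIQUE constraint; the schema name `mydb` is a placeholder for your own database.

```sql
-- List base tables in schema `mydb` (placeholder) that have neither
-- a PRIMARY KEY nor a UNIQUE constraint. Such tables may produce
-- duplicate records in the destination database after migration.
SELECT t.table_name
FROM information_schema.tables t
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );
```

Add a primary key or unique constraint to any table this query returns before migrating it.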
Binary logging requirements for incremental data migration:
| Parameter | Required value | Notes |
|---|---|---|
| Binary logging feature | Enabled | Must be enabled before starting the task. If not enabled, the precheck fails. |
| `loose_polar_log_bin` | on | Must be set to on. |
| Binary log retention period | At least 3 days (7 days recommended) | Logs retained for fewer than 3 days may cause task failure or data loss. |
Enabling binary logging on a PolarDB for MySQL cluster incurs storage charges for binary log files. For setup instructions, see Enable binary logging and Modify the retention period.
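You can confirm the binary logging state on the source cluster before the precheck. A minimal sketch (on PolarDB for MySQL the cluster parameter is `loose_polar_log_bin`; how it surfaces in `SHOW VARIABLES` can vary by engine version):

```sql
-- Check whether binary logging is enabled on the source cluster.
SHOW VARIABLES LIKE 'log_bin';
-- List the current binary log files to confirm logs are being produced
-- and retained.
SHOW BINARY LOGS;
```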
Other limitations
DTS does not migrate read-only nodes of the source PolarDB for MySQL cluster.
DTS does not migrate Object Storage Service (OSS) external tables from the source cluster.
During schema migration, DTS migrates foreign keys from the source database to the destination database. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If cascade update or delete operations are performed on the source database during migration, data inconsistency may occur.
During full data migration, concurrent INSERT operations cause tablespace fragmentation. The used tablespace of the destination database will be larger than that of the source database after migration completes.
DTS retrieves values from FLOAT and DOUBLE columns using `ROUND(COLUMN, PRECISION)`. If no precision is specified, DTS sets the precision to 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these defaults meet your requirements.
DTS attempts to resume failed tasks for up to 7 days. Before switching workloads to the destination database, stop or release any failed tasks, or run `REVOKE` to revoke write permissions from the DTS accounts. Otherwise, resumed failed tasks may overwrite data in the destination database.
DTS periodically runs ``CREATE DATABASE IF NOT EXISTS `test` `` on the source database to advance the binary log position.
If a task fails, DTS technical support will attempt to restore it within 8 hours. The task may be restarted, and task parameters (not database parameters) may be modified during restoration.
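Revoking write permissions from the DTS account before cutover could look like the following sketch, where the account name `dts_account` and the database name `mydb` are placeholders:

```sql
-- Strip write privileges from the DTS account (placeholder names)
-- so that a resumed failed task cannot overwrite destination data.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_account'@'%';
```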
Required permissions
| Database | Required permissions |
|---|---|
| PolarDB for MySQL cluster | Read permissions on the objects to be migrated |
| PolarDB-X 2.0 instance | Read and write permissions on the objects to be migrated |
For instructions on creating accounts and granting permissions, see:
- PolarDB for MySQL cluster: Create and manage database accounts
- PolarDB-X 2.0 instance: Manage database accounts
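If you grant permissions with SQL rather than through the console, the grants could be sketched as follows. All account, password, and database names are placeholders, and the exact privilege set you need should be confirmed against the linked account management guides.

```sql
-- Source PolarDB for MySQL cluster: read permissions on the objects
-- to be migrated (placeholder names throughout).
CREATE USER 'dts_source'@'%' IDENTIFIED BY '<strong-password>';
GRANT SELECT ON mydb.* TO 'dts_source'@'%';

-- Destination PolarDB-X 2.0 instance: read and write permissions on
-- the objects to be migrated.
CREATE USER 'dts_dest'@'%' IDENTIFIED BY '<strong-password>';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
  ON mydb.* TO 'dts_dest'@'%';
```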
Create a data migration task
Step 1: Go to the Data Migration page
Use one of the following methods to open the Data Migration page and select the region where your migration instance resides.
DTS console:
1. Log on to the DTS console.
2. In the left-side navigation pane, click Data Migration.
3. In the upper-left corner, select the region where the data migration instance resides.
DMS console:
The actual steps may vary depending on the mode and layout of your DMS console. See Simple mode and Customize the layout and style of the DMS console.
1. Log on to the DMS console.
2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.
3. From the drop-down list to the right of Data Migration Tasks, select the region where the data migration instance resides.
Step 2: Configure source and destination databases
Click Create Task to go to the task configuration page.
Configure the source and destination databases using the following parameters.
Warning: After configuring the source and destination databases, read the Limits displayed at the top of the page. Skipping this step may cause task failures or data inconsistency.
General settings
| Parameter | Description |
|---|---|
| Task Name | A name for the DTS task. DTS auto-generates a name. Specify a descriptive name to identify the task easily. The name does not need to be unique. |

Source Database
| Parameter | Description |
|---|---|
| Select Existing Connection | If the source instance is registered with DTS, select it from the drop-down list. DTS auto-fills the remaining parameters. In the DMS console, select the instance from Select a DMS database instance. If the instance is not registered, configure the parameters below manually. |
| Database Type | Select PolarDB for MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the source PolarDB for MySQL cluster resides. |
| PolarDB Instance ID | The ID of the source PolarDB for MySQL cluster. |
| Database Account | The account for the source cluster. See Required permissions. |
| Database Password | The password for the database account. |
| Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption. |

Destination Database
| Parameter | Description |
|---|---|
| Select Existing Connection | If the destination instance is registered with DTS, select it from the drop-down list. DTS auto-fills the remaining parameters. In the DMS console, select the instance from Select a DMS database instance. If the instance is not registered, configure the parameters below manually. |
| Database Type | Select PolarDB-X 2.0. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the PolarDB-X 2.0 instance resides. |
| Instance ID | The ID of the destination PolarDB-X 2.0 instance. |
| Database Account | The account for the destination instance. See Required permissions. |
| Database Password | The password for the database account. |

Click Test Connectivity and Proceed.
Make sure the CIDR blocks of DTS servers are added to the security settings of your source and destination databases. See Add the CIDR blocks of DTS servers.
Step 3: Configure migration objects
On the Configure Objects page, configure the following parameters.
Note:
- If you do not select Schema Migration, create the target database and tables in the destination database beforehand and enable object name mapping in Selected Objects.
- If you do not select Incremental Data Migration, do not write to the source database during migration to maintain data consistency.
- Renaming objects using the object name mapping feature may cause dependent objects to fail to migrate.
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types based on your needs:<br>- For a one-time migration, select Schema Migration and Full Data Migration.<br>- For continuous sync with minimal downtime (recommended), select Schema Migration, Full Data Migration, and Incremental Data Migration. |
| Processing Mode of Conflicting Tables | - Precheck and Report Errors (recommended): checks for tables in the destination database with the same names as source tables. The precheck fails if conflicts exist. If conflicting destination tables cannot be deleted or renamed, use the object name mapping feature to rename migrated tables. See Map object names.<br>- Ignore Errors and Proceed: skips the name conflict precheck. Use with caution: during full migration, DTS skips records that conflict with existing destination records; during incremental migration, DTS overwrites conflicting destination records. If schemas differ, only specific columns may be migrated or the task may fail. |
| Source Objects | Select objects from the Source Objects section, click the rightwards arrow icon, and add them to the Selected Objects section. Selectable objects include columns, tables, and schemas. Selecting tables or columns excludes other object types such as views, triggers, and stored procedures. |
| Selected Objects | - To rename a single object: right-click it and see Map the name of a single object.<br>- To rename multiple objects at once: click Batch Edit. See Map multiple object names at a time.<br>- To filter rows: right-click an object and specify WHERE conditions. See Specify filter conditions.<br>- To select specific SQL operations for a table: right-click the object and select the operations. |

Click Next: Advanced Settings and configure the following parameters.
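If you do not select Schema Migration, the destination objects must exist before the task starts. A minimal sketch with placeholder names, to be adapted to your source schema:

```sql
-- Hypothetical example: pre-create the destination database and a table
-- on the PolarDB-X 2.0 instance, matching the source schema.
-- `mydb` and `orders` are placeholders.
CREATE DATABASE IF NOT EXISTS mydb;
CREATE TABLE mydb.orders (
  id BIGINT NOT NULL AUTO_INCREMENT,
  customer_id BIGINT NOT NULL,
  amount DECIMAL(10, 2) NOT NULL,
  PRIMARY KEY (id)
);
```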
If multiple tasks share the same source or destination database and have different retry time settings, the most recently configured value applies. DTS charges for instance usage during retry periods. We recommend setting the retry time based on your business requirements and releasing the DTS instance promptly after migration completes.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries failed connections after the task starts. Valid values: 10–1,440 minutes. Default: 720 minutes. Set to at least 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, it fails. |
| Retry Time for Other Issues | How long DTS retries failed DDL or DML operations. Valid values: 1–1,440 minutes. Default: 10 minutes. Set to at least 10 minutes. This value must be smaller than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read/write load during full migration. When enabled, configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits load during incremental migration. When enabled, configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Controls whether DTS writes heartbeat table operations to the source database.<br>- Yes: does not write heartbeat operations (a latency display may appear).<br>- No: writes heartbeat operations (may affect physical backup and cloning of the source database). |
| Configure ETL | Whether to enable extract, transform, and load (ETL).<br>- Yes: opens the ETL code editor. See Configure ETL in a data migration or data synchronization task and What is ETL?<br>- No: disables ETL. |
| Monitoring and Alerting | Whether to receive alerts for task failures or latency thresholds.<br>- Yes: configure alert thresholds and notification settings. See Configure monitoring and alerting when you create a DTS task.<br>- No: no alerts. |

Click Next Step: Data Verification to set up a data verification task. See Configure a data verification task.
Step 4: Run the precheck
To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Click Next: Save Task Settings and Precheck to save settings and start the precheck.
DTS runs a precheck before starting the migration. The task starts only after the precheck passes.
If the precheck fails:
- Click View Details next to any failed item, review the cause, fix the issue, and click Precheck Again.
- For alert items that can be ignored: click Confirm Alert Details > View Details > Ignore > OK, then click Precheck Again.
Ignoring alert items may result in data inconsistency or other risks.
Step 5: Purchase and start the migration instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following parameters.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The migration instance class determines migration speed. Select a class based on your workload. See Instance classes of data migration instances. |

Read and agree to Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.
Click Buy and Start, then click OK in the confirmation dialog.
Monitor the migration task
After starting the task, monitor its progress on the Data Migration page.
| Migration configuration | Status behavior |
|---|---|
| Schema migration and full data migration only | The task stops automatically when complete. The Status column shows Completed. |
| Incremental data migration included | The task runs continuously and does not stop automatically. The Status column shows Running. |