
Data Transmission Service: Migrate data from a PolarDB-X 2.0 instance to a PolarDB for MySQL cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 2.0 instance to a PolarDB for MySQL cluster. DTS supports schema migration, full data migration, and incremental data migration, so you can migrate with zero downtime by keeping both databases in sync until you are ready to cut over.

Prerequisites

Before you begin, make sure that you have:

Limitations

Source database

  • Bandwidth: The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • Primary keys: Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.

  • Table count (with renaming): If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate at the schema level.

  • Binary logging (incremental migration): Binary logging must be enabled and binlog_row_image must be set to full. Otherwise, the precheck fails and the task cannot start.

  • Binary log retention: For incremental-only migration, retain binary logs for more than 24 hours. For full + incremental migration, retain them for at least seven days. If the retention period is too short, DTS may fail to read the binary logs, which can cause task failure or data loss. After full migration completes, you can reduce the retention period to more than 24 hours. If these retention requirements are not met, the DTS Service Level Agreement (SLA) does not apply.

  • MySQL compatibility: The PolarDB-X instance must be compatible with MySQL 5.7.
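
You can verify the binary logging and primary key requirements above directly on the source instance. The statements below are a sketch for a MySQL 5.7-compatible instance; the schema name your_db is a placeholder, and variable names may differ on PolarDB-X.

```sql
-- Confirm binary logging is enabled and row images are complete.
SHOW VARIABLES LIKE 'log_bin';            -- expect: ON
SHOW VARIABLES LIKE 'binlog_row_image';   -- expect: FULL

-- Check the binary log retention period (MySQL 5.7 uses expire_logs_days).
SHOW VARIABLES LIKE 'expire_logs_days';

-- List tables in a schema that lack a PRIMARY KEY or UNIQUE constraint.
-- 'your_db' is a placeholder for the schema you plan to migrate.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON  c.table_schema = t.table_schema
       AND c.table_name   = t.table_name
       AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'your_db'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```

Any table returned by the last query should be given a primary key, or excluded from the task, before migration.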

Other constraints

  • Migration window: Full data migration reads from the source and writes to the destination, increasing load on both database servers. Migrate during off-peak hours.

  • Destination tablespace: Concurrent INSERT operations during full data migration cause table fragmentation in the destination. After full migration, the destination tablespace is larger than the source.

  • Failed task resume: DTS retries failed tasks for up to seven days. Before switching workloads to the destination, stop or release any failed DTS tasks. Alternatively, run REVOKE to remove write permissions from the DTS database account on the destination. If a failed task resumes after you switch over, source data may overwrite destination data.

  • Heartbeat updates: DTS periodically updates the dts_health_check.ha_health_check table in the source database to advance the binary log position.

  • Destination database creation: DTS automatically creates the database in the destination PolarDB for MySQL cluster. If the source database name is invalid, manually create the database before configuring the task. See Database Management.
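
The REVOKE safeguard and the manual database creation described above might look like the following sketch. The account name dts_user, the host pattern, and the database name mydb are placeholders, not values DTS requires.

```sql
-- Before cutover, strip write permissions from the DTS account on the
-- destination so a resumed failed task cannot overwrite new data.
-- 'dts_user'@'%' is a placeholder for your DTS database account.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_user'@'%';

-- If the source database name is invalid on PolarDB for MySQL, create
-- the destination database manually before configuring the task.
CREATE DATABASE mydb CHARACTER SET utf8mb4;
```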

Operations to avoid during migration

To prevent task failures, do not perform the following operations while a migration task is running:

  1. Do not perform DDL operations (such as schema changes) during schema migration or full data migration.

  2. Do not write data to the source database if you are running full data migration only. Writes during this phase may cause inconsistencies between the source and destination.

  3. If you change the network type of the PolarDB-X instance during migration, update the network connection settings of the DTS task accordingly.

Billing

  • Schema migration and full data migration: The instance configuration fee is free. Internet traffic fees are charged only when data is transferred from Alibaba Cloud over the Internet. See Billing overview.

  • Incremental data migration: The instance configuration fee is charged. See Billing overview.

Migration types

  • Schema migration: Migrates the schemas of database objects (tables, indexes, and so on).

  • Full data migration: Migrates all existing data in the selected objects.

  • Incremental data migration: After full data migration is complete, DTS migrates incremental data from the source database to the destination database, keeping the two in sync so that you can migrate without interrupting your services. DML operations (INSERT, UPDATE, and DELETE) are supported.

Required database account permissions

  • PolarDB-X instance: the SELECT permission for schema migration and full data migration; the REPLICATION SLAVE, REPLICATION CLIENT, and SELECT permissions on the objects to be migrated for incremental data migration.

  • PolarDB for MySQL cluster: read and write permissions for all three migration types.

Grant permissions on the PolarDB-X instance

For more details on PolarDB-X permission management, see Data synchronization tools for PolarDB-X.
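
If your PolarDB-X instance accepts standard MySQL account statements, granting the permissions listed above for incremental migration might look like the sketch below. The account name, host pattern, and password are placeholders; follow Data synchronization tools for PolarDB-X for the authoritative procedure.

```sql
-- Placeholder account name and password.
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'YourStrongPassword';

-- SELECT on the objects to be migrated ('your_db' is a placeholder).
GRANT SELECT ON your_db.* TO 'dts_user'@'%';

-- Replication privileges are global, so they are granted ON *.*.
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';
```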

Grant permissions on the PolarDB for MySQL cluster

For instructions on creating a database account with read and write permissions, see Create a database account.
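
If you prefer SQL over the console, read and write permissions for the DTS account on the destination database could be sketched as follows. Account, password, and database names are placeholders; creating the account in the console as described in Create a database account is the supported path.

```sql
-- Placeholder names: adjust the account, password, and database.
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'YourStrongPassword';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
    ON mydb.* TO 'dts_user'@'%';
```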

Create a migration task

Step 1: Open the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

The console layout may vary. See Simple mode and Configure the DMS console based on your business requirements for details. Alternatively, go directly to the Data Migration Tasks page in the new DTS console.

Step 2: Select the region

From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.

In the new DTS console, select the region in the upper-left corner.

Step 3: Configure source and destination databases

Click Create Task, then configure the following parameters.

Warning

After configuring the source and destination databases, read the limits displayed at the top of the page before proceeding. Skipping this step may cause the task to fail or result in data inconsistency.

Source Database

  • Task Name: DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique.

  • Select an existing DMS database instance: Optional. Select an existing instance to auto-populate the parameters, or configure them manually.

  • Database Type: Select PolarDB-X 2.0.

  • Connection Type: Select Alibaba Cloud Instance.

  • Instance Region: The region where the source PolarDB-X instance resides.

  • Instance ID: The ID of the source PolarDB-X instance.

  • Database Account: The database account for the source instance. See Required database account permissions for details.

  • Database Password: The password of the database account.

Destination Database

  • Select an existing DMS database instance: Optional. Select an existing instance to auto-populate the parameters, or configure them manually.

  • Database Type: Select PolarDB for MySQL.

  • Connection Type: Select Alibaba Cloud Instance.

  • Instance Region: The region where the destination PolarDB for MySQL cluster resides.

  • PolarDB Cluster ID: The ID of the destination PolarDB for MySQL cluster.

  • Database Account: The database account for the destination cluster. See Required database account permissions for details.

  • Database Password: The password of the database account.

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances and to the security group rules of Elastic Compute Service (ECS) instances hosting self-managed databases. For databases hosted on-premises or by third-party providers, manually add the DTS server CIDR blocks to the database IP address whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases for the list of CIDR blocks.

Warning

Adding DTS CIDR blocks to whitelists or security group rules introduces security exposure. Before using DTS, take precautions, including securing your account credentials, limiting exposed ports, validating API calls, auditing whitelist rules regularly, and connecting DTS to your database over Express Connect, VPN Gateway, or Smart Access Gateway.

Step 5: Select migration objects and migration type

Configure the following parameters.

Migration type

  • Schema Migration + Full Data Migration: For a one-time, offline migration without service continuity requirements. Do not write to the source database during migration.

  • Schema Migration + Full Data Migration + Incremental Data Migration: For zero-downtime migration. DTS keeps the destination in sync until you cut over.

If you do not select Incremental Data Migration, do not write data to the source database during migration. This prevents data inconsistency.

Processing mode of conflicting tables

  • Precheck and Report Errors: Checks for tables with identical names in the source and destination. If duplicates exist, the precheck fails and the task cannot start. Use the object name mapping feature to rename conflicting tables. See Map object names.

  • Ignore Errors and Proceed: Skips the duplicate-name check. If schemas match, records with duplicate primary keys are skipped. If schemas differ, specific columns may not be migrated or the task may fail. Use with caution.
Warning

Selecting Ignore Errors and Proceed may cause data inconsistency.
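
Before relying on the precheck, you can look for name collisions yourself on the destination. This is an illustrative query; mydb and the listed table names are placeholders for your own schema and migration objects.

```sql
-- List tables that already exist in the destination schema and would
-- collide with same-named source tables.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND table_name IN ('orders', 'customers');  -- placeholder table names
```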

Source objects and selected objects

Select columns, tables, or schemas from Source Objects and move them to Selected Objects.

Selecting tables or columns excludes views, triggers, and stored procedures from migration.

To rename an object in the destination, right-click it in Selected Objects. To rename multiple objects at once, click Batch Edit in the upper-right corner of Selected Objects. See Map object names.

Renaming an object may cause dependent objects to fail migration. To filter rows using SQL conditions, right-click an object in Selected Objects and specify filter conditions. See Use SQL conditions to filter data.
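
A filter condition is written like the body of a WHERE clause. For example, to migrate only recent rows of a table, the condition might look like the sketch below; the column names are placeholders, and the exact syntax DTS accepts is described in Use SQL conditions to filter data.

```sql
-- Only rows matching this condition are migrated (WHERE keyword omitted).
order_id > 100000 AND gmt_create >= '2024-01-01 00:00:00'
```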

Step 6: Configure advanced settings

Click Next: Advanced Settings, then configure the following parameters.

  • Set Alerts: No disables alerts. Yes enables alerts; specify the alert threshold and contacts. See Configure monitoring and alerting when you create a DTS task.

  • Retry Time for Failed Connections: The duration for which DTS retries a connection after a failure. Range: 10–1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes automatically. Otherwise, the task fails. The shortest retry time across tasks sharing the same source or destination takes effect. You are charged for the DTS instance during retries.

  • Configure ETL: Yes enables the extract, transform, and load (ETL) feature; enter processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No disables ETL.

  • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Yes means DTS does not write heartbeat SQL operations to the source database; the displayed migration latency may be inaccurate. No means DTS writes heartbeat SQL operations; physical backup and cloning of the source database may be affected.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To view the OpenAPI parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

The task must pass the precheck before it can start. If the precheck fails:

  1. Click View Details next to each failed item.

  2. Fix the reported issues.

  3. Click Precheck Again.

If a precheck item generates an alert:

  • If the alert cannot be ignored, fix the issue and rerun the precheck.

  • If the alert can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alerts may lead to data inconsistency.

Step 8: Purchase a migration instance

Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.

On the Purchase Instance page, configure the following parameters.

In the New Instance Class section, configure the following parameters:

  • Resource Group: The resource group for the migration instance. Default: the default resource group. See What is Resource Management?.

  • Instance Class: The instance class determines migration speed. Select one based on your data volume and time requirements. See Specifications of data migration instances.

Read and select Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start.

The task appears in the task list. Full migration progress is shown as a percentage. Incremental migration shows the latency between source and destination.
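
After the full phase completes, a quick consistency spot check can be run on both sides. This is a sketch only: mydb.orders and order_id are placeholders, and it only catches gross differences on a quiescent table. Use the DTS data verification feature, if available for your task, for authoritative checks.

```sql
-- Run on both the source and the destination, then compare the results.
SELECT COUNT(*)      AS row_count,
       MAX(order_id) AS max_id   -- placeholder primary key column
FROM mydb.orders;
```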

What's next