Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 2.0 instance to a PolarDB for MySQL cluster. DTS supports schema migration, full data migration, and incremental data migration, so you can migrate with zero downtime by keeping both databases in sync until you are ready to cut over.
Prerequisites
Before you begin, make sure that you have:
A PolarDB-X 2.0 instance compatible with MySQL 5.7
A PolarDB for MySQL cluster (see Purchase a pay-as-you-go cluster or Purchase a subscription cluster)
Available storage on the destination PolarDB for MySQL cluster that exceeds the total data size on the source PolarDB-X instance
Limitations
Source database
| Constraint | Details |
|---|---|
| Bandwidth | The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Primary keys | Tables to be migrated must have a PRIMARY KEY or UNIQUE constraint, and all of the constrained fields must be unique. Otherwise, the destination database may contain duplicate records. |
| Table count (with renaming) | If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate at the schema level. |
| Binary logging (incremental migration) | Binary logging must be enabled and binlog_row_image must be set to full. Otherwise, the precheck fails and the task cannot start. |
| Binary log retention | For incremental-only migration: retain logs for more than 24 hours. For full + incremental migration: retain logs for at least seven days. If the retention period is too short, DTS may fail to read the binary logs, which can cause task failure or data loss. After full migration completes, you can reduce the retention period to more than 24 hours. If the retention requirements are not met, the DTS Service Level Agreement (SLA) does not apply. |
| MySQL compatibility | The PolarDB-X instance must be compatible with MySQL 5.7. |
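The binlog and primary-key requirements above can be checked before you configure the task. The sketch below is a hypothetical precheck helper, not part of DTS: the SQL query and variable names use standard MySQL system schemas, and `binlog_settings_ok` assumes you have collected `SHOW GLOBAL VARIABLES` output into a dict.

```python
# Query returning base tables that have neither a PRIMARY KEY nor a
# UNIQUE constraint; such tables can produce duplicate rows in the
# destination (see the "Primary keys" constraint above).
PKLESS_TABLES_SQL = """
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name  = t.table_name
 AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema',
                             'performance_schema', 'sys')
  AND c.constraint_name IS NULL;
"""

def binlog_settings_ok(variables: dict) -> bool:
    """Check the binlog settings required for incremental migration.

    `variables` maps variable names to values, e.g. the rows of
    SHOW GLOBAL VARIABLES LIKE 'binlog%' collected into a dict.
    Binary logging must be enabled and binlog_row_image must be FULL.
    """
    return (
        variables.get("log_bin", "").upper() == "ON"
        and variables.get("binlog_row_image", "").upper() == "FULL"
    )
```

Run the query and the check against the source instance with any MySQL client; if `binlog_settings_ok` returns `False` or the query returns rows, fix the source before starting the task.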
Other constraints
| Constraint | Details |
|---|---|
| Migration window | Full data migration reads from the source and writes to the destination, increasing load on both database servers. Migrate during off-peak hours. |
| Destination tablespace | Concurrent INSERT operations during full data migration cause table fragmentation in the destination. After full migration, the destination tablespace is larger than the source. |
| Failed task resume | DTS retries failed tasks for up to seven days. Before switching workloads to the destination, stop or release any failed DTS tasks. Alternatively, run REVOKE to remove write permissions from the DTS database account on the destination. If a failed task resumes after you switch over, source data may overwrite destination data. |
| Heartbeat updates | DTS periodically updates the dts_health_check.ha_health_check table in the source database to advance the binary log position. |
| Destination database creation | DTS automatically creates the database in the destination PolarDB for MySQL cluster. If the source database name is invalid, manually create the database before configuring the task. See Database Management. |
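The REVOKE option mentioned in the "Failed task resume" constraint can be sketched as follows. The account name `dts_user`, host, and database are placeholders; the statement itself is standard MySQL syntax, run on the destination cluster before you switch workloads over.

```python
def revoke_statement(account: str, host: str, database: str) -> str:
    """Build a REVOKE that strips write privileges from the DTS account
    on the destination, so a resumed failed task cannot overwrite data."""
    return (
        f"REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER "
        f"ON `{database}`.* FROM '{account}'@'{host}';"
    )

# Example (placeholder names):
# revoke_statement("dts_user", "%", "app_db")
```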
Operations to avoid during migration
To prevent task failures, do not perform the following operations while a migration task is running:
Do not perform DDL operations (such as schema changes) during schema migration or full data migration.
Do not write data to the source database if you are running full data migration only. Writes during this phase may cause inconsistencies between the source and destination.
If you change the network type of the PolarDB-X instance during migration, update the network connection settings of the DTS task accordingly.
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged only when data is transferred from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged only when data is transferred from Alibaba Cloud over the Internet. See Billing overview. |
Migration types
| Type | What it migrates |
|---|---|
| Schema migration | Database object schemas (tables, indexes, and so on) |
| Full data migration | All existing data in the selected objects |
| Incremental data migration | DML changes (INSERT, UPDATE, and DELETE operations) made to the source after full data migration completes. This keeps the source and destination in sync so that services are not interrupted during migration. |
Required database account permissions
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| PolarDB-X instance | SELECT | SELECT | REPLICATION SLAVE, REPLICATION CLIENT, SELECT on objects to be migrated |
| PolarDB for MySQL cluster | Read and write | Read and write | Read and write |
Grant permissions on the PolarDB-X instance
For more details on PolarDB-X permission management, see Data synchronization tools for PolarDB-X.
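As a concrete illustration of the permission table above, the sketch below builds the GRANT statements for the source account. The account name `dts_source`, host, and database `app_db` are placeholders; the statements use standard MySQL GRANT syntax.

```python
def source_grants(account: str = "dts_source", host: str = "%",
                  database: str = "app_db") -> list:
    """Return GRANT statements covering schema, full, and incremental
    migration for the source PolarDB-X instance."""
    return [
        # SELECT on the objects to be migrated (all three migration types).
        f"GRANT SELECT ON `{database}`.* TO '{account}'@'{host}';",
        # Global replication privileges needed to read the binary log
        # for incremental data migration.
        f"GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* "
        f"TO '{account}'@'{host}';",
    ]
```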
Grant permissions on the PolarDB for MySQL cluster
For instructions on creating a database account with read and write permissions, see Create a database account.
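If you prefer SQL over the console, a grant equivalent to the "Read and write" requirement can be sketched as below. The account `dts_dest` and database `app_db` are placeholders; the console method linked above remains the documented path.

```python
def destination_grant(account: str = "dts_dest", host: str = "%",
                      database: str = "app_db") -> str:
    """Return a GRANT giving the read/write access listed in the
    permission table for the destination database."""
    return f"GRANT ALL PRIVILEGES ON `{database}`.* TO '{account}'@'{host}';"
```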
Create a migration task
Step 1: Open the Data Migration Tasks page
Log on to the Data Management (DMS) console.
In the top navigation bar, click DTS.
In the left-side navigation pane, choose DTS (DTS) > Data Migration.
The console layout may vary. See Simple mode and Configure the DMS console based on your business requirements for details. Alternatively, go directly to the Data Migration Tasks page in the new DTS console.
Step 2: Select the region
From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Step 3: Configure source and destination databases
Click Create Task, then configure the following parameters.
After configuring the source and destination databases, read the limits displayed at the top of the page before proceeding. Skipping this step may cause the task to fail or result in data inconsistency.
Source Database
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique. |
| Select an existing DMS database instance | Optional. Select an existing instance to auto-populate the parameters, or configure them manually. |
| Database Type | Select PolarDB-X 2.0. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | The region where the source PolarDB-X instance resides. |
| Instance ID | The ID of the source PolarDB-X instance. |
| Database Account | The database account for the source instance. See Required database account permissions for details. |
| Database Password | The password of the database account. |
Destination Database
| Parameter | Description |
|---|---|
| Select an existing DMS database instance | Optional. Select an existing instance to auto-populate the parameters, or configure them manually. |
| Database Type | Select PolarDB for MySQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the destination PolarDB for MySQL cluster. |
| Database Account | The database account for the destination cluster. See Required database account permissions for details. |
| Database Password | The password of the database account. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances and to the security group rules of Elastic Compute Service (ECS) instances hosting self-managed databases. For databases hosted on-premises or by third-party providers, manually add the DTS server CIDR blocks to the database IP address whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases for the list of CIDR blocks.
Adding DTS CIDR blocks to whitelists or security group rules introduces security exposure. Before using DTS, take precautions, including securing your account credentials, limiting exposed ports, validating API calls, auditing whitelist rules regularly, and connecting DTS to your database over Express Connect, VPN Gateway, or Smart Access Gateway.
Step 5: Select migration objects and migration type
Configure the following parameters.
Migration type
| Option | When to use |
|---|---|
| Schema Migration + Full Data Migration | For a one-time, offline migration without service continuity requirements. Do not write to the source database during migration. |
| Schema Migration + Full Data Migration + Incremental Data Migration | For zero-downtime migration. DTS keeps the destination in sync until you cut over. |
If you do not select Incremental Data Migration, do not write data to the source database during migration. This prevents data inconsistency.
Processing mode of conflicting tables
| Option | Behavior |
|---|---|
| Precheck and Report Errors | Checks for tables with identical names in the source and destination. If duplicates exist, the precheck fails and the task cannot start. Use the object name mapping feature to rename conflicting tables. See Map object names. |
| Ignore Errors and Proceed | Skips the duplicate-name check. If schemas match, records with duplicate primary keys are skipped. If schemas differ, specific columns may not be migrated or the task may fail. Use with caution. |
Selecting Ignore Errors and Proceed may cause data inconsistency.
Source objects and selected objects
Select columns, tables, or schemas from Source Objects and move them to Selected Objects.
If you select tables or columns as migration objects, DTS does not migrate other objects such as views, triggers, or stored procedures to the destination.
To rename an object in the destination, right-click it in Selected Objects. To rename multiple objects at once, click Batch Edit in the upper-right corner of Selected Objects. See Map object names.
Renaming an object may cause dependent objects to fail migration.
To filter rows using SQL conditions, right-click an object in Selected Objects and specify filter conditions. See Use SQL conditions to filter data.
Step 6: Configure advanced settings
Click Next: Advanced Settings, then configure the following parameters.
| Parameter | Description |
|---|---|
| Set Alerts | No: disables alerts. Yes: enables alerts. Specify the alert threshold and contacts. See Configure monitoring and alerting when you create a DTS task. |
| Retry Time for Failed Connections | The duration DTS retries a connection after a failure. Range: 10–1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes automatically. Otherwise, the task fails. The shortest retry time across tasks sharing the same source or destination takes effect. You are charged for the DTS instance during retries. |
| Configure ETL | Yes: enables the extract, transform, and load (ETL) feature. Enter processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No: disables ETL. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes: DTS does not write heartbeat SQL operations to the source database. In this case, the task may display migration latency even when none exists. No: DTS writes heartbeat SQL operations to the source database, which may affect physical backups and clones of the source. |
Step 7: Run the precheck
Click Next: Save Task Settings and Precheck.
To view the OpenAPI parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
The task must pass the precheck before it can start. If the precheck fails:
Click View Details next to each failed item.
Fix the reported issues.
Click Precheck Again.
If a precheck item generates an alert:
If the alert cannot be ignored, fix the issue and rerun the precheck.
If the alert can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alerts may lead to data inconsistency.
Step 8: Purchase a migration instance
Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following parameters.
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management?. |
| New Instance Class | Instance Class | The instance class determines migration speed. Select based on your data volume and time requirements. See Specifications of data migration instances. |
Read and select Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start.
The task appears in the task list. Full migration progress is shown as a percentage. Incremental migration shows the latency between source and destination.