Use Data Transmission Service (DTS) to migrate data between PolarDB-X 2.0 instances with minimal downtime. DTS supports three migration types — schema migration, full data migration, and incremental data migration — so you can keep your application running throughout the migration.
Prerequisites
Before you begin, ensure that you have:
Two PolarDB-X 2.0 instances (source and destination), both compatible with MySQL 5.7
A destination instance with more available storage space than the total data size in the source instance
Database accounts with the permissions listed in Required permissions
Limitations
Source database
| Limitation | Details |
|---|---|
| Table constraints | Tables to be migrated must have a PRIMARY KEY or UNIQUE constraint, and the constrained fields must be unique. Otherwise, the destination database may contain duplicate records. |
| Table count (when selecting individual tables) | Up to 1,000 tables per task. Exceeding this limit causes a request error. To migrate more tables, create multiple tasks or migrate at the database level. |
| Bandwidth | The server hosting the source database must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| DDL operations | Do not run DDL operations on the source during schema migration or full data migration. Doing so causes the task to fail. |
| Writes during full-only migration | If you run only full data migration (without incremental), do not write to the source during migration. Concurrent writes can cause data inconsistency. |
| Network type changes | If you change the network type of the source PolarDB-X 2.0 instance during migration, update the network connection settings in the DTS task as well. |
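The primary key requirement above can be verified before you create the task. The following is a sketch query against MySQL's information_schema, which PolarDB-X 2.0 exposes as a MySQL-compatible engine; `your_database` is a placeholder for your schema name.

```sql
-- List base tables that lack both a PRIMARY KEY and a UNIQUE constraint.
-- Any table returned here should be fixed before migration.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema = 'your_database'   -- placeholder: replace with your schema
  AND c.constraint_name IS NULL;
```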
Binary logging requirements (for incremental data migration)
If you include incremental data migration, verify that the source database meets the following requirements:
| Parameter | Required value | Why |
|---|---|---|
| Binary logging | ON | DTS reads binary logs to replicate incremental changes |
| binlog_row_image | full | Captures all column values, enabling accurate conflict resolution |
| Binary log retention (incremental only) | More than 24 hours | Ensures DTS can access consecutive logs during migration |
| Binary log retention (full + incremental) | At least 7 days | Provides enough history for DTS to resume from the full migration start point |
If binary logs are not retained for the required duration, DTS may fail to obtain them and the task may fail. In exceptional cases, data inconsistency or loss may occur. After full data migration completes, you can reduce the retention period to more than 24 hours.
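You can confirm these settings from a client session before starting the task. This is a sketch for a MySQL 5.7-compatible engine; note that on PolarDB-X 2.0 these parameters may be managed through the console's parameter settings rather than with SET GLOBAL.

```sql
-- Binary logging must be enabled and row images must be complete.
SHOW VARIABLES LIKE 'log_bin';            -- required value: ON
SHOW VARIABLES LIKE 'binlog_row_image';   -- required value: FULL

-- Retention window. On MySQL 5.7-compatible engines this is
-- expire_logs_days; at least 7 days for full + incremental migration,
-- more than 24 hours for incremental-only.
SHOW VARIABLES LIKE 'expire_logs_days';
```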
DTS updates the dts_health_check.ha_health_check table in the source database at regular intervals to advance the binary log position.
Other limitations
The destination PolarDB-X 2.0 instance must be compatible with MySQL 5.7.
Run migrations during off-peak hours. Full data migration consumes read and write resources on both the source and destination, which increases server load.
After full data migration, the destination tablespace is larger than the source due to fragmentation from concurrent INSERT operations.
DTS automatically retries failed tasks for up to 7 days. Before you switch workloads to the destination, stop or release any failed tasks. Alternatively, run the REVOKE statement to remove DTS write permissions on the destination. Otherwise, a resumed task may overwrite destination data with source data.
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged only when migrating from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged only when migrating from Alibaba Cloud over the Internet. See Billing overview. |
Migration types
DTS supports three migration types. Combine them based on your requirements:
| Migration type | What it does | When to use |
|---|---|---|
| Schema migration | Migrates the schemas of selected objects to the destination | Required as the foundation for data migration |
| Full data migration | Migrates all existing data in the selected objects | Sufficient for one-time migrations with no write traffic during migration |
| Incremental data migration | Continuously replicates changes from source to destination after full migration completes | Use with full data migration to keep services running during migration |
Recommended: Select all three types (schema migration + full data migration + incremental data migration) to maintain service continuity and data consistency.
SQL operations supported in incremental data migration
| Operation type | Supported statements |
|---|---|
| DML (Data Manipulation Language) | INSERT, UPDATE, DELETE |
Required permissions
Grant the following permissions to the database accounts used by DTS before starting the task.
Source PolarDB-X 2.0 instance
| Permission | Required for | Purpose |
|---|---|---|
| SELECT | Schema migration, full data migration | Reads data and schema definitions from the source |
| REPLICATION SLAVE | Incremental data migration | Enables binary log streaming to replicate incremental changes |
| REPLICATION CLIENT | Incremental data migration | Provides access to binary log position and server status |
| SELECT (on objects to be migrated) | Incremental data migration | Reads the specific objects being migrated |
Destination PolarDB-X 2.0 instance
Read and write permissions are required for all migration types.
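The grants above can be issued as follows. This is an illustrative sketch: the account name `dts_migration`, the password placeholder, and the schema name `your_database` are all assumptions to adapt to your environment, and the destination grant lists common read/write privileges rather than an official minimum set.

```sql
-- On the source instance: schema migration and full data migration
GRANT SELECT ON your_database.* TO 'dts_migration'@'%';

-- On the source instance: incremental data migration
-- (REPLICATION privileges are global, hence *.*)
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_migration'@'%';

-- On the destination instance: read and write permissions
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
  ON your_database.* TO 'dts_migration'@'%';
```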
Create a migration task
Step 1: Go to the Data Migration Tasks page
Log on to the Data Management (DMS) console.
In the top navigation bar, click DTS.
In the left-side navigation pane, choose DTS (DTS) > Data Migration.
You can also go directly to the Data Migration Tasks page in the new DTS console. Console navigation may vary based on your DMS console mode. See Simple mode and Configure the DMS console based on your business requirements.
Step 2: Configure source and destination databases
From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Click Create Task.
Warning: After selecting the source and destination instances, read the limits displayed at the top of the page to prevent task failures or data inconsistency.
Configure the task and database connections:
| Section | Parameter | Description |
|---|---|---|
| N/A | Task Name | DTS auto-generates a name. Specify a descriptive name to identify the task. Names do not need to be unique. |
| Source Database | Select Instance | Select an existing instance to auto-fill its settings, or configure manually. |
| Source Database | Database Type | Select PolarDB-X 2.0. |
| Source Database | Connection Type | Select Alibaba Cloud Instance. |
| Source Database | Instance Region | The region where the source PolarDB-X 2.0 instance resides. |
| Source Database | Instance ID | The ID of the source PolarDB-X 2.0 instance. |
| Source Database | Database Account | The source database account. See Required permissions. |
| Source Database | Database Password | The password for the database account. |
| Destination Database | Select Instance | Select an existing instance to auto-fill its settings, or configure manually. |
| Destination Database | Database Type | Select PolarDB-X 2.0. |
| Destination Database | Connection Type | Select Alibaba Cloud Instance. |
| Destination Database | Instance Region | The region where the destination PolarDB-X 2.0 instance resides. |
| Destination Database | Instance ID | The ID of the destination PolarDB-X 2.0 instance. |
| Destination Database | Database Account | The destination database account. See Required permissions. |
| Destination Database | Database Password | The password for the database account. |
Step 3: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB) and to the security group rules of Elastic Compute Service (ECS) instances. For self-managed databases in data centers or hosted by third-party providers, manually add the DTS server CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
Adding DTS CIDR blocks to whitelists or security groups introduces security exposure. Before proceeding, take preventive measures such as: strengthening account credentials, restricting exposed ports, authenticating API calls, auditing whitelist and security group rules regularly, and using Express Connect, VPN Gateway, or Smart Access Gateway to connect the database to DTS.
Step 4: Select objects and configure migration settings
| Parameter | Description |
|---|---|
| Migration Type | Select the migration types to run. For service continuity, select Schema Migration, Full Data Migration, and Incremental Data Migration. For a one-time migration with no writes to the source, select Schema Migration and Full Data Migration. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: Fails the precheck if the destination has tables with the same names as source tables. Use the object name mapping feature to resolve conflicts. See Map object names. Ignore Errors and Proceed: Skips the name conflict check. If schemas match, DTS skips records with matching primary keys. If schemas differ, only some columns migrate or the task fails. Use with caution. |
| Source Objects | Select one or more objects from the Source Objects section and click the > icon to move them to the Selected Objects section. |
| Selected Objects | To rename a single migrated object, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects, click Batch Edit. See Map multiple object names at a time. To filter rows, right-click an object and specify SQL WHERE conditions. See Use SQL conditions to filter data. |
Renaming an object may cause dependent objects to fail migration.
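In the DTS filter dialog, a row filter is written as the body of an SQL WHERE clause without the WHERE keyword. The table and column below are hypothetical examples:

```sql
-- Entered as the filter condition for a selected table, e.g. to migrate
-- only rows of an orders table created after a cutoff date:
gmt_created > '2024-01-01 00:00:00'
```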
Step 5: Configure advanced settings
Click Next: Advanced Settings and configure:
| Parameter | Description |
|---|---|
| Set Alerts | Set to Yes to receive notifications when the task fails or migration latency exceeds a threshold. Specify the alert threshold and contacts. See Configure monitoring and alerting when you create a DTS task. |
| Retry Time for Failed Connections | How long DTS retries after losing connection to the source or destination. Valid values: 10–1,440 minutes. Default: 720 minutes. Set this to more than 30 minutes. If DTS reconnects within the retry window, the task resumes. Otherwise, the task fails. If multiple tasks share a source or destination, the shortest retry window applies to all. |
| Configure ETL | Set to Yes to transform data during migration using the extract, transform, and load (ETL) feature. Enter processing statements in the code editor. See Configure ETL in a data migration or data synchronization task and What is ETL?. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes: DTS does not write heartbeat SQL operations to the source. A latency value may appear on the DTS instance. No: DTS writes heartbeat SQL to the source. This may affect features such as physical backup and cloning. |
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
DTS runs a precheck before starting migration. The task only proceeds after passing the precheck.
If the precheck fails or returns alerts:
Failed items: Click View Details next to the failed item, resolve the issue, then click Precheck Again.
Alert items that cannot be ignored: Click View Details, resolve the issue, then rerun the precheck.
Alert items that can be ignored: Click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.
Step 7: Purchase an instance and start migration
Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.
Select an Instance Class for the migration instance. Instance class determines migration speed. See Specifications of data migration instances.
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start.
Monitor the migration progress in the task list.
Switch your workload to the destination
This is the highest-risk phase of migration. A DTS task can resume automatically for up to 7 days after failing. If you do not stop or release the task after switching, a resumed task will overwrite destination data with source data.
Complete these steps in order to switch your workload safely:
Wait for the incremental migration latency to reach 0 and remain stable. This confirms the destination is in sync with the source.
Stop all write traffic to the source database.
Wait for the migration latency to reach 0 again, confirming all remaining changes have replicated.
Update your application's connection string to point to the destination instance.
Stop or release the DTS migration task. Alternatively, run the REVOKE statement on the destination to remove DTS write permissions.
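If you choose the REVOKE route, the statement looks like the following sketch. The account name `dts_migration` and schema name `your_database` are placeholders for the account and objects you actually granted in Required permissions.

```sql
-- Remove DTS write permissions on the destination after switching
-- workloads, so a resumed task cannot overwrite destination data.
REVOKE INSERT, UPDATE, DELETE ON your_database.*
  FROM 'dts_migration'@'%';
```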