Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 2.0 instance to an ApsaraDB RDS for MySQL instance with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration, so you can keep the source database online throughout the process.
Prerequisites
Before you begin, make sure you have:
A PolarDB-X 2.0 instance compatible with MySQL 5.7
A destination ApsaraDB RDS for MySQL instance. For more information, see Create an ApsaraDB RDS for MySQL instance
Available storage space on the destination instance that exceeds the total data size of the source instance
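To check whether the destination has enough storage, you can estimate the total data size on the source with a generic MySQL-compatible query against information_schema. This is a sketch; the figures are based on table statistics and the actual physical size on PolarDB-X may differ:

```sql
-- Approximate data + index size per database, in GB.
-- Run on the source instance; results are estimates derived from
-- table statistics, not exact physical storage usage.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql',
                           'performance_schema', 'sys')
GROUP BY table_schema;
```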
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration + full data migration | Free | Charged only when migrating from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | See Billing overview. |
Required permissions
Grant the following permissions to the database accounts used by DTS.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| PolarDB-X 2.0 | SELECT | SELECT | REPLICATION SLAVE, REPLICATION CLIENT, SELECT |
| ApsaraDB RDS for MySQL | Read and write | Read and write | Read and write |
For instructions on creating accounts and granting permissions on the source PolarDB-X 2.0 instance and the destination ApsaraDB RDS for MySQL instance, see Create an account and Modify the permissions of an account.
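As an illustration, a dedicated migration account could be granted the permissions listed above with statements like the following. The account name, host pattern, and password are placeholders, and accounts on both instances are typically created through the console as described in the linked topics:

```sql
-- On the source PolarDB-X 2.0 instance: a DTS account with the
-- permissions required for incremental data migration.
-- 'dts_user' and the password are placeholders.
CREATE USER 'dts_user'@'%' IDENTIFIED BY '<your-password>';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';

-- On the destination ApsaraDB RDS for MySQL instance: read and
-- write permissions on the objects to be migrated.
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
ON *.* TO 'dts_user'@'%';
```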
Limitations
Source database
| Category | Limitation |
|---|---|
| Bandwidth | The source database server must have sufficient outbound bandwidth. Low bandwidth reduces migration speed. |
| Table structure | Tables must have PRIMARY KEY or UNIQUE constraints, and the columns that make up those constraints must contain unique values. Tables without these constraints may produce duplicate records in the destination database. |
| Table count | If you select tables as objects and need to rename them in the destination database, a single task supports up to 1,000 tables. For more than 1,000 tables, create multiple tasks or migrate at the database level. |
| DDL during migration | During schema migration and full data migration, do not run DDL operations that change database or table schemas. Doing so causes the migration task to fail. |
| Network changes | If you change the PolarDB-X instance network type during migration, update the network connection settings in the DTS task accordingly. |
| Writes during full migration | During full data migration only (without incremental migration), do not write to the source database. To guarantee consistency, select schema migration, full data migration, and incremental data migration together. |
Incremental data migration requirements
For incremental data migration, the source database must meet the following requirements:
Binary logging is enabled and binlog_row_image is set to full. If this is not configured, the precheck fails and the task cannot start.
Binary log retention: if binary logs are not retained long enough, DTS may fail to read them and the task fails. In exceptional cases, data loss or inconsistency may occur.
Incremental migration only: retain logs for more than 24 hours.
Full migration + incremental migration: retain logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.
DTS does not guarantee the Service Level Agreement (SLA) if the binary log retention period does not meet these requirements.
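The requirements above can be verified on a MySQL-compatible source with queries like the following. Variable names may differ on PolarDB-X, where these settings are often managed through the console rather than SQL:

```sql
-- Check that binary logging is enabled and full row images are logged.
SHOW VARIABLES LIKE 'log_bin';          -- expected: ON
SHOW VARIABLES LIKE 'binlog_row_image'; -- expected: FULL

-- Check the binary log retention period. MySQL 8.0 uses a value
-- in seconds; older versions use expire_logs_days instead.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
```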
Other limitations
Evaluate the performance impact on both source and destination databases before migrating. Run migrations during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases, which may increase load on the database servers.
During full data migration, concurrent INSERT operations cause fragmentation in destination tables. After full migration, the destination tablespace is larger than the source.
DTS retries failed tasks for up to 7 days. Before switching workloads to the destination database, stop or release any failed tasks, or run REVOKE to remove DTS write permissions on the destination database. Otherwise, a resumed failed task overwrites data in the destination.
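A REVOKE statement for this purpose might look like the following. The account name and host pattern are placeholders for whatever account DTS uses to write to the destination:

```sql
-- Remove DTS write permissions on the destination before switching
-- workloads, so a resumed failed task cannot overwrite data.
-- 'dts_user' is a placeholder account name.
REVOKE INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
ON *.* FROM 'dts_user'@'%';
FLUSH PRIVILEGES;
```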
Precautions
DTS periodically updates the dts_health_check.ha_health_check table in the source database to advance the binary log position.
DTS automatically creates databases in the destination ApsaraDB RDS for MySQL instance. If the source database name is invalid, manually create the database in the destination before configuring the migration task. For more information, see Manage databases.
Migration types
DTS supports three migration types that you can combine based on your requirements:
| Migration type | What it does | Recommended for |
|---|---|---|
| Schema migration | Migrates schemas of selected objects from source to destination | Always include this |
| Full data migration | Migrates all existing data from source to destination | Initial data copy |
| Incremental data migration | Continuously replicates changes from source to destination after full migration completes | Minimizing downtime |
To migrate with minimal downtime, select all three types. Incremental migration keeps the source and destination databases in sync so you can switch over when ready.
Supported migration objects
| Object type | Supported | Notes |
|---|---|---|
| Tables (with PRIMARY KEY or UNIQUE constraints) | Yes | |
| Indexes | Yes | Migrated as part of schema migration |
| Views, triggers, stored procedures | No | Not migrated when you select tables or columns as objects. To migrate these, select the database as the object. |
| Tables without PRIMARY KEY or UNIQUE constraints | No | May cause duplicate records in the destination |
SQL operations supported during incremental migration
| Type | Operations |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE; CREATE FUNCTION, CREATE INDEX, CREATE TABLE; DROP INDEX, DROP TABLE; RENAME TABLE; TRUNCATE TABLE |
RENAME TABLE operations can cause data inconsistency. If you select a table as a migration object and rename it during migration, that table's data is not migrated to the destination. To avoid this, select the entire database as the migration object instead of individual tables.
Create a migration task
Step 1: Go to Data Migration Tasks
Log on to the Data Management (DMS) console.
In the top navigation bar, move the pointer over DTS.
Choose DTS (DTS) > Data Migration.
You can also go directly to the Data Migration page of the new DTS console. The navigation may vary based on the DMS console mode. For more information, see Simple mode and Customize the layout and style of the DMS console.
Step 2: Select a region
From the drop-down list on the right side of Data Migration Tasks, select the region where your migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Step 3: Configure source and destination databases
Click Create Task.
On the Create Data Migration Task page, review the Limits displayed at the top of the page before proceeding.
Configure the source database:
| Parameter | Value |
|---|---|
| Task Name | Enter an informative name. The name does not need to be unique. |
| Select a DMS database instance | Select an existing database or configure a new one. |
| Database Type | PolarDB-X 2.0 |
| Connection Type | Alibaba Cloud Instance |
| Instance Region | The region of the source PolarDB-X instance |
| Instance ID | The ID of the source PolarDB-X instance |
| Database Account | The account with the required permissions (see Required permissions) |
| Database Password | The account password |
Configure the destination database:
| Parameter | Value |
|---|---|
| Select a DMS database instance | Select an existing database or configure a new one. |
| Database Type | MySQL |
| Connection Type | Alibaba Cloud Instance |
| Instance Region | The region of the destination ApsaraDB RDS for MySQL instance |
| RDS Instance ID | The ID of the destination ApsaraDB RDS for MySQL instance |
| Database Account | The account with read and write permissions |
| Database Password | The account password |
| Connection Method | Select Non-encrypted or SSL-encrypted. To use SSL encryption, enable SSL on the RDS instance first. See Use a cloud certificate to enable SSL encryption. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances. For self-managed databases, you may need to add these CIDR blocks manually. For more information, see CIDR blocks of DTS servers.
Adding DTS server CIDR blocks to your database whitelist or security group rules introduces security risks. Take preventive measures such as using strong credentials, limiting exposed ports, authenticating API calls, and regularly auditing whitelist rules. Alternatively, connect through Express Connect, VPN Gateway, or Smart Access Gateway.
Step 5: Select objects and migration types
Configure the following settings:
| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. Add Incremental Data Migration to keep data in sync and minimize downtime. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: fails the precheck if the source and destination have tables with identical names. Use object name mapping to resolve conflicts. Ignore Errors and Proceed: skips the check. During full migration, existing destination records with matching primary keys are retained. During incremental migration, they are overwritten. Proceed with caution. |
| Source Objects | Select the objects to migrate in the Source Objects section and move them to the Selected Objects section. |
| Selected Objects | Right-click an object to rename it, add filter conditions, or select specific SQL operations to migrate. To rename multiple objects at once, click Batch Edit. Note: renaming an object may cause dependent objects to fail. See Map object names and Specify filter conditions. |
Step 6: Configure advanced settings
Click Next: Advanced Settings and configure the following:
| Parameter | Description |
|---|---|
| Monitoring and Alerting | Select Yes to receive alerts when the task fails or migration latency exceeds your threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting. |
| Retry Time for Failed Connections | The time range for DTS to retry failed connections. Valid values: 10–1440 minutes. Default: 720 minutes. Set a value greater than 30 minutes. If DTS reconnects within this period, the task resumes. Otherwise, the task fails. Note: DTS charges continue during retries. |
| Configure ETL | Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes: DTS does not write heartbeat SQL to the source database, but migration latency may appear in metrics. No: DTS writes heartbeat SQL, which may affect physical backup and cloning operations on the source. |
Step 7: Run the precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck automatically. Resolve any issues before proceeding:
Failed items: Click View Details, fix the reported issue, then click Precheck Again.
Alert items that block the task: Click View Details, fix the issue, then run the precheck again.
Alert items that can be ignored: Click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.
Step 8: Purchase an instance
Wait until Success Rate reaches 100%.
Click Next: Purchase Instance.
On the Purchase Instance page, configure the instance:
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Select an instance class based on your required migration speed. See Instance classes of data migration instances. |
Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
Click Buy and Start, then click OK in the confirmation dialog.
Monitor task progress on the Data Migration page.
Switch to the destination database
Before switching your applications to the destination database:
Stop writes to the source database.
Wait until the incremental migration latency drops to 0 and remains stable, which confirms that the destination has caught up with the source.
Stop or release the migration task.
Update your application connection strings to point to the destination ApsaraDB RDS for MySQL instance.
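One way to stop writes on a MySQL-compatible source (step 1 above) is to make the instance read-only. Whether this is appropriate depends on your application, and PolarDB-X may expose this setting differently or only through the console:

```sql
-- Block writes from regular accounts on the source before switchover.
-- Accounts with SUPER privilege can still write; confirm with your
-- application teams that all writers have actually stopped.
SET GLOBAL read_only = ON;
```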
DTS retries failed tasks for up to 7 days. If you do not stop the task before switching, a resumed task may overwrite data in the destination database. Stop the task or run REVOKE to remove DTS write permissions before switching.