Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 1.0 instance to an AnalyticDB for MySQL 3.0 cluster. This topic covers prerequisites, source and destination database requirements, required permissions, and step-by-step configuration of the migration task.
Prerequisites
Before you begin, make sure that:
A PolarDB-X 1.0 source instance exists. See Create a PolarDB-X 1.0 instance.
Important: The storage type of the PolarDB-X 1.0 instance must be ApsaraDB RDS for MySQL (custom or purchased instances). PolarDB for MySQL is not supported as the storage type.
The source PolarDB-X 1.0 instance is version 5.2 or later and compatible with MySQL 5.7.
An AnalyticDB for MySQL 3.0 target cluster exists with storage space larger than the used storage of the source instance. See Create a cluster.
(Incremental migration only) The character set of the data to be migrated is not utf8mb3. Migrating utf8mb3 data incrementally causes the task to fail.
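Before you enable incremental migration, you can check the source for utf8mb3 data. A minimal diagnostic sketch, assuming the account can read information_schema; note that MySQL 5.7 reports utf8mb3 under the alias utf8:

```sql
-- List user columns whose character set is utf8mb3 (shown as utf8 in MySQL 5.7)
SELECT table_schema, table_name, column_name, character_set_name
FROM information_schema.columns
WHERE character_set_name = 'utf8'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
```

Convert any affected columns (for example, to utf8mb4) before starting an incremental task.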
Limitations
Source database limits
| Limit | Details |
|---|---|
| Bandwidth | The source server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Table structure | The tables to be migrated must have a PRIMARY KEY or UNIQUE constraint, and all fields must be unique. Otherwise, the destination database may contain duplicate records. |
| Table count (table-level migration) | When migrating individual tables with column or table renaming, a single task supports up to 1,000 tables. Exceeding this limit causes a request error. Split the work across multiple tasks, or migrate at the database level instead. |
| Binary logging (incremental migration) | Binary logging must be enabled and binlog_row_image must be set to full. If not configured, the precheck fails and the task cannot start. |
| Binary log retention (incremental migration) | Incremental-only tasks: retain logs for more than 24 hours. Full + incremental tasks: retain logs for at least seven days. If logs are purged before DTS reads them, the task fails and data loss or inconsistency may occur. After full migration completes, you can reduce the retention period to more than 24 hours. |
| Prohibited operations during migration | Do not perform scaling, shrinking, hot table migration, shard key changes, or DDL changes on the source instance while the task is running. These operations cause the task to fail. |
| Network type changes | If you switch the network type of the PolarDB-X 1.0 instance during migration, update the connection information in the migration task after the switch completes. |
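The primary key and binary logging requirements above can be verified before the precheck. A hedged sketch run against the source; in PolarDB-X 1.0, the binlog variables are read from the underlying ApsaraDB RDS for MySQL instances:

```sql
-- Tables without a PRIMARY KEY or UNIQUE constraint
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND c.constraint_name IS NULL;

-- Binary logging settings required for incremental migration
SHOW VARIABLES LIKE 'log_bin';          -- expect ON
SHOW VARIABLES LIKE 'binlog_row_image'; -- expect FULL
```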
Foreign key behavior
DTS does not migrate foreign keys during schema migration. During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If cascade updates or deletions occur on the source during this time, data inconsistency may result.
Other limits
| Limit | Details |
|---|---|
| Destination primary key | The destination table must have a primary key, or you must configure Primary Key Column in the Configure Database and Table Fields step. Without a primary key, migration may fail. |
| Migration timing | Run migration during off-peak hours. Full data migration uses read and write resources on both databases, which increases server load. |
| Tablespace size after full migration | Concurrent INSERT operations during full data migration cause fragmentation in the destination. After full migration, the destination tablespace is larger than the source. |
| Task auto-resume | DTS retries failed tasks for up to seven days. Before switching your workload to the destination, stop or release the task, or revoke write permissions from the DTS account using REVOKE. Otherwise, the resumed task overwrites destination data with source data. |
| Backup conflict | If the AnalyticDB for MySQL 3.0 cluster starts a backup while the DTS task is running, the DTS task fails. |
| Task restoration by support | If a task fails, DTS technical support attempts restoration within eight hours. The task may be restarted and task parameters may be modified during restoration. Database parameters are not changed. |
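Before switching your workload, the DTS account's write privileges on the destination can be revoked. A hypothetical example, assuming the DTS account is named dts_user and the destination database is adb_demo:

```sql
-- Revoke write privileges from the DTS account on the destination database
REVOKE INSERT, UPDATE, DELETE ON adb_demo.* FROM 'dts_user'@'%';
```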
Other precautions
DTS periodically writes to the dts_health_check.ha_health_check table in the source database to advance the binlog position. This is expected behavior.
Billing
| Migration type | Link configuration fee | Data transfer fee |
|---|---|---|
| Schema migration + full data migration | Free | Internet outbound traffic is charged when migrating data over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Migration types
DTS supports three migration types for this path. Select all three for a smooth migration with minimal service interruption.
| Type | What it does | Task behavior after completion |
|---|---|---|
| Schema migration | Migrates the schemas of selected objects to the destination. Foreign keys are not migrated. | Stops automatically. |
| Full data migration | Migrates all historical data from the source to the destination. | Stops automatically. |
| Incremental data migration | After full migration finishes, continuously replicates changes from the source to the destination. Keeps the destination in sync while your applications continue running. | Runs continuously — does not stop automatically. Stop the task manually before switching your workload to the destination. |
If you run only full data migration (without incremental), avoid writing to the source database during the task. Writes to the source after the task starts cause data inconsistency between source and destination.
SQL operations supported for incremental migration
| Operation type | Supported statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
When DTS writes to the AnalyticDB for MySQL 3.0 destination cluster, UPDATE statements are automatically converted to REPLACE INTO statements. If an UPDATE statement changes a primary key column, it is converted to DELETE plus INSERT.
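This conversion can be illustrated with a hypothetical orders table whose primary key is order_id; the column names and values below are illustrative only:

```sql
-- Source statement (non-key column changed):
UPDATE orders SET amount = 99.00 WHERE order_id = 1;
-- Written to the destination as a full-row replace:
REPLACE INTO orders (order_id, order_date, amount)
VALUES (1, '2024-01-01 00:00:00', 99.00);

-- Source statement (primary key column changed):
UPDATE orders SET order_id = 2 WHERE order_id = 1;
-- Written to the destination as delete + insert:
DELETE FROM orders WHERE order_id = 1;
INSERT INTO orders (order_id, order_date, amount)
VALUES (2, '2024-01-01 00:00:00', 99.00);
```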
Required permissions
Grant the following permissions to the database accounts used by DTS before starting the task.
| Database | Schema migration | Full migration | Incremental migration |
|---|---|---|---|
| PolarDB-X 1.0 | SELECT | SELECT | REPLICATION SLAVE, REPLICATION CLIENT, and SELECT on objects to migrate |
| AnalyticDB for MySQL 3.0 | Read and write permissions | Read and write permissions | Read and write permissions |
REPLICATION SLAVE and REPLICATION CLIENT are required only for incremental migration. Full-only tasks do not need these privileges.
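For the source, the minimum grants for a full plus incremental task can be sketched as follows. This uses a hypothetical account dts_user and database mydb; PolarDB-X 1.0 accounts are normally created and authorized through the console:

```sql
-- SELECT on the objects to migrate
GRANT SELECT ON mydb.* TO 'dts_user';
-- Replication privileges, required only for incremental migration
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user';
```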
For instructions on creating accounts and granting permissions:
PolarDB-X 1.0: See Account Management. For permission-related issues, see Account permission issues during data synchronization.
AnalyticDB for MySQL 3.0: See Create a database account.
Data type mappings
For the data type mappings between the source and destination databases, see Data type mappings between heterogeneous databases.
Create a migration task
Step 1: Go to the Data Migration page
Use either the DTS console or the DMS console.
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance resides.
DMS console
Steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Migration.
From the drop-down list to the right of Data Migration Tasks, select the region where the instance resides.
Step 2: Configure source and destination databases
Click Create Task.
Read the Limits displayed at the top of the page before proceeding.
Configure the source and destination databases using the following parameters.
Source database
| Parameter | Description |
|---|---|
| Task Name | A name for the DTS task. DTS generates a name automatically. Specify a descriptive name to identify the task easily. The name does not need to be unique. |
| Select Existing Connection | If the source instance is already registered with DTS, select it from the drop-down list — DTS populates the remaining parameters automatically. Otherwise, configure the parameters below. |
| Database Type | Select PolarDB-X 1.0. |
| Access Method | Select Cloud Instance. |
| Instance Region | Select the region where the source PolarDB-X 1.0 instance resides. |
| Replicate Data Across Alibaba Cloud Accounts | Select No if the source instance belongs to the current Alibaba Cloud account. |
| Instance ID | Select the ID of the source PolarDB-X 1.0 instance. |
| Database Account | Enter the database account. See Required permissions for the minimum permissions needed. |
| Database Password | Enter the password for the database account. |
Destination database
| Parameter | Description |
|---|---|
| Select Existing Connection | If the destination cluster is already registered with DTS, select it from the drop-down list. Otherwise, configure the parameters below. |
| Database Type | Select AnalyticDB MySQL 3.0. |
| Access Method | Select Cloud Instance. |
| Instance Region | Select the region where the target AnalyticDB for MySQL 3.0 cluster resides. |
| Instance ID | Select the ID of the target AnalyticDB for MySQL 3.0 cluster. |
| Database Account | Enter the database account. See Required permissions for the minimum permissions needed. |
| Database Password | Enter the password for the database account. |
Click Test Connectivity and Proceed.
DTS server IP address ranges must be added to the security settings of both databases. DTS can add these automatically, or you can add them manually. See Add the IP address ranges of DTS servers.
Step 3: Configure migration objects
On the Configure Objects page, set the following parameters.
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types to run. To minimize service interruption, select Schema Migration, Full Data Migration, and Incremental Data Migration. To run full migration only, select Schema Migration and Full Data Migration. Note: If you skip Schema Migration, create the target database and tables manually and enable the object name mapping feature in Selected Objects before starting the task. |
| DDL and DML Operations to Be Synchronized | Select the SQL operations for incremental migration at the instance level. See SQL operations supported for incremental migration. To configure at the database or table level, right-click the object in Selected Objects and select the operations in the dialog box. |
| Merge Tables | Select Yes to merge all selected source tables into a single destination table. DTS adds a __dts_data_source column to store the data source. To merge only specific tables, create two separate migration tasks. Select No (default) to migrate tables without merging. Warning: Do not perform DDL operations that change source database or table schemas during migration. Doing so may cause data inconsistency or task failure. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): checks whether destination tables have the same names as source tables. If identical names exist, the precheck fails and the task cannot start. Use the object name mapping feature to rename conflicting tables. Ignore Errors and Proceed: skips the name conflict check. If source and destination have the same schema and a record shares a primary key with an existing destination record — during full migration, DTS skips the record; during incremental migration, DTS overwrites it. If schemas differ, only matching columns are migrated or the task fails. Use with caution. |
| Capitalization of Object Names in Destination Instance | Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance. |
| Source Objects | Select the columns, tables, or databases to migrate and move them to the Selected Objects section. Note: If you select a table, its associated views, triggers, and stored procedures are not migrated. If you select a database, tables with a primary key use it as the distribution key, and tables without a primary key are given an auto-increment primary key, which may cause data inconsistency. |
| Selected Objects | To rename a single object, right-click it and follow the instructions in Rename objects one by one. To rename multiple objects at once, click Edit in the upper-right corner. See Rename objects in batches. Note: Renaming an object may break migration for objects that depend on it. To filter rows, right-click the table and set a WHERE condition. See Configure filter conditions. |
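A row filter uses standard SQL WHERE syntax without the WHERE keyword. A hypothetical condition for an orders table that migrates only recent rows:

```sql
-- Migrate only rows created in 2024 or later
order_date >= '2024-01-01'
```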
Step 4: Configure advanced settings
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. Purchase a dedicated cluster to improve task stability. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries when the source or destination database is unreachable. Valid values: 10–1,440 minutes. Default: 720 minutes. Set this to a value greater than 30 minutes. If DTS reconnects within the retry window, the task resumes. If not, the task fails. Note: When multiple tasks share a source or destination database, the most recently set retry time applies to all. DTS charges for the instance during retries. |
| Retry Time for Other Issues | How long DTS retries when DDL or DML operations fail. Valid values: 1–1,440 minutes. Default: 10 minutes. Set this to a value greater than 10 minutes. This value must be smaller than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits resource consumption during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits resource consumption during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | Tags the DTS instance for organization purposes. Optional. |
| Configure ETL | Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL. |
| Monitoring and Alerting | Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task. |
Step 5: Configure database and table fields (optional)
Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, Partition Key, Partitioning Rules, and Partition Lifecycle for tables in the destination database.
This step is available only when Schema Migration is selected. Set Definition Status to All to view all tables. You can select multiple columns as a composite Primary Key Column, then choose one or more of those columns as the Distribution Key and Partition Key. See CREATE TABLE.
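The settings in this step map to clauses in the destination table's DDL. A hedged sketch for a hypothetical orders table; the DISTRIBUTE BY, PARTITION BY, and LIFECYCLE clauses correspond to the Distribution Key, Partition Key with Partitioning Rules, and Partition Lifecycle settings:

```sql
CREATE TABLE orders (
  order_id BIGINT NOT NULL,
  order_date DATETIME NOT NULL,
  amount DECIMAL(10, 2),
  PRIMARY KEY (order_id, order_date)          -- composite Primary Key Column
)
DISTRIBUTE BY HASH(order_id)                  -- Distribution Key
PARTITION BY VALUE(DATE_FORMAT(order_date, '%Y%m%d'))  -- Partition Key and rule
LIFECYCLE 90;                                 -- Partition Lifecycle: keep 90 partitions
```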
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
Tip: Before saving, click Preview OpenAPI parameters to view the parameters that the corresponding API operation uses when configuring this task.
DTS runs a precheck before the task starts. If any item fails:
Click View Details next to the failed item.
Fix the underlying issue based on the check results.
Click Precheck Again.
If an item generates an alert:
If the alert cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then click Precheck Again.
If you want to ignore the alert, follow these steps:
Click Confirm Alert Details.
In the dialog box, click Ignore, then click OK.
Click Precheck Again.
Ignoring alert items may result in data inconsistency. Proceed only if you understand the risk.
Step 7: Purchase an instance and start the task
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following parameters.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines migration speed. Select a class based on your requirements. See Instance classes of data migration instances. |

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.
Click Buy and Start, then click OK in the confirmation dialog.
Track progress on the Data Migration page.
Schema migration and full data migration tasks stop automatically when complete. The Status column shows Completed.
Incremental data migration tasks run continuously and do not stop automatically. The Status column shows Running. Stop the task manually before switching your workload to the destination.
What's next
Map object names — rename source objects before they are written to the destination
Enable multi-table merge — merge data from multiple source tables into one destination table
Modify the parameters of a DTS instance — adjust task parameters after the task is created
Billing overview — understand DTS pricing