
Data Transmission Service: Migrate data from a PolarDB-X 1.0 instance to an AnalyticDB for MySQL 3.0 cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 1.0 instance to an AnalyticDB for MySQL 3.0 cluster. This topic covers prerequisites, source and destination database requirements, required permissions, and step-by-step configuration of the migration task.

Prerequisites

Before you begin, make sure that:

  • A PolarDB-X 1.0 source instance exists. See Create a PolarDB-X 1.0 instance.

    Important

    The storage type of the PolarDB-X 1.0 instance must be ApsaraDB RDS for MySQL (custom or purchased instances). PolarDB for MySQL is not supported as the storage type.

  • The source PolarDB-X 1.0 instance is version 5.2 or later and compatible with MySQL 5.7.

  • An AnalyticDB for MySQL 3.0 target cluster exists with storage space larger than the used storage of the source instance. See Create a cluster.

  • (Incremental migration only) The character set of the data to be migrated is not utf8mb3. Migrating utf8mb3 data incrementally causes the task to fail.
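
You can scan the source instance for utf8mb3 tables before enabling incremental migration. The following sketch is illustrative and not part of DTS: the query targets `information_schema`, and the helper name is hypothetical. Note that MySQL 5.7 reports utf8mb3 collations with the plain `utf8` prefix.

```python
# Sketch: find tables whose character set is utf8mb3, which would make
# incremental migration fail. Run the query with any MySQL client against
# the source instance and feed the rows to the helper below.
UTF8MB3_CHECK_SQL = """
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema',
                           'performance_schema', 'sys');
"""

def find_utf8mb3_tables(rows):
    """Given (schema, table, collation) rows, return the tables that use
    utf8mb3 (reported as plain 'utf8' on MySQL 5.7)."""
    flagged = []
    for schema, table, collation in rows:
        prefix = (collation or "").split("_", 1)[0].lower()
        if prefix in ("utf8", "utf8mb3"):
            flagged.append((schema, table))
    return flagged
```

Convert any flagged tables (for example, to utf8mb4) before starting an incremental task.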

Limitations

Source database limits

  • Bandwidth: The source server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • Table structure: Tables must have a PRIMARY KEY or UNIQUE constraint with all fields unique. Without this, the destination may contain duplicate records.

  • Table count (table-level migration): When migrating individual tables with column or table renaming, a single task supports up to 1,000 tables. Exceeding this limit causes a request error. Split the work across multiple tasks, or migrate at the database level instead.

  • Binary logging (incremental migration): Binary logging must be enabled and binlog_row_image must be set to full. If not configured, the precheck fails and the task cannot start.

  • Binary log retention (incremental migration): For incremental-only tasks, retain logs for more than 24 hours. For full + incremental tasks, retain logs for at least seven days. If logs are purged before DTS reads them, the task fails and data loss or inconsistency may occur. After full migration completes, you can reduce the retention period to more than 24 hours.

  • Prohibited operations during migration: Do not perform scaling, shrinking, hot table migration, shard key changes, or DDL changes on the source instance while the task is running. These operations cause the task to fail.

  • Network type changes: If you switch the network type of the PolarDB-X 1.0 instance during migration, update the connection information in the migration task after the switch completes.
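
The binlog requirements above can be checked programmatically. This is an illustrative helper, not part of DTS; the variable names (`log_bin`, `binlog_row_image`, `expire_logs_days`) come from MySQL's SHOW GLOBAL VARIABLES output.

```python
# Sketch: validate the binlog prerequisites listed above against a dict of
# MySQL global variables (e.g. collected via SHOW GLOBAL VARIABLES).
def check_binlog_prerequisites(variables, incremental_only=True):
    problems = []
    if variables.get("log_bin", "OFF").upper() != "ON":
        problems.append("binary logging is disabled (log_bin != ON)")
    if variables.get("binlog_row_image", "").upper() != "FULL":
        problems.append("binlog_row_image must be FULL")
    # Retention: more than 24 hours for incremental-only tasks,
    # at least seven days for full + incremental tasks.
    required_hours = 24 if incremental_only else 7 * 24
    retention_hours = int(variables.get("expire_logs_days", 0)) * 24
    if retention_hours < required_hours:
        problems.append(
            f"binlog retention is {retention_hours}h; "
            f"need at least {required_hours}h"
        )
    return problems
```

An empty result means the precheck's binlog items should pass; each string describes one item to fix before starting the task.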

Foreign key behavior

DTS does not migrate foreign keys during schema migration. During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If cascade updates or deletions occur on the source during this time, data inconsistency may result.

Other limits

  • Destination primary key: The destination database must have a custom primary key, or you must configure Primary Key Column in the Configurations for Databases, Tables, and Columns step. Without a primary key, migration may fail.

  • Migration timing: Run migration during off-peak hours. Full data migration uses read and write resources on both databases, which increases server load.

  • Tablespace size after full migration: Concurrent INSERT operations during full data migration cause fragmentation in the destination. After full migration, the destination tablespace is larger than the source tablespace.

  • Task auto-resume: DTS retries failed tasks for up to seven days. Before switching your workload to the destination, stop or release the task, or revoke write permissions from the DTS account using REVOKE. Otherwise, the resumed task overwrites destination data with source data.

  • Backup conflict: If the AnalyticDB for MySQL 3.0 cluster starts a backup while the DTS task is running, the DTS task fails.

  • Task restoration by support: If a task fails, DTS technical support attempts restoration within eight hours. The task may be restarted and task parameters may be modified during restoration. Database parameters are not changed.
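
The task auto-resume item above recommends revoking write privileges from the DTS account before cutover. A minimal sketch of the statement to run, assuming a placeholder account name (`dts_user` is hypothetical; substitute your own):

```python
# Sketch: build the REVOKE statement that strips write privileges from the
# DTS account before cutover, so an auto-resumed task cannot overwrite the
# destination. Account, host, and scope are placeholders.
WRITE_PRIVILEGES = ("INSERT", "UPDATE", "DELETE")

def revoke_write_sql(account, host="%", scope="*.*"):
    privs = ", ".join(WRITE_PRIVILEGES)
    return f"REVOKE {privs} ON {scope} FROM '{account}'@'{host}';"
```

For example, `revoke_write_sql("dts_user")` yields `REVOKE INSERT, UPDATE, DELETE ON *.* FROM 'dts_user'@'%';`, which you would execute on the destination.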

Other precautions

DTS periodically writes to the dts_health_check.ha_health_check table in the source database to advance the binlog position. This is expected behavior.

Billing

  • Schema migration + full data migration: The link configuration fee is free. Internet outbound traffic is charged when migrating data over the Internet. See Billing overview.

  • Incremental data migration: The link configuration fee is charged. See Billing overview.

Migration types

DTS supports three migration types for this path. Select all three for a smooth migration with minimal service interruption.

  • Schema migration: Migrates the schemas of selected objects to the destination. Foreign keys are not migrated. The task stops automatically after completion.

  • Full data migration: Migrates all historical data from the source to the destination. The task stops automatically after completion.

  • Incremental data migration: After full migration finishes, continuously replicates changes from the source to the destination, keeping the destination in sync while your applications continue running. The task runs continuously and does not stop automatically. Stop it manually before switching your workload to the destination.

If you run only full data migration (without incremental), avoid writing to the source database during the task. Writes to the source after the task starts cause data inconsistency between source and destination.

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, DELETE

Note

When writing to the AnalyticDB for MySQL 3.0 destination cluster, UPDATE is automatically converted to REPLACE INTO. If the UPDATE targets a primary key column, it is converted to DELETE + INSERT.
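
The conversion rule can be illustrated with a small simulation. This is a conceptual sketch of the rewriting described above, not DTS internals; table and column names are placeholders.

```python
# Conceptual sketch: how an incremental UPDATE is rewritten for
# AnalyticDB for MySQL 3.0. An UPDATE that leaves the primary key
# unchanged becomes a REPLACE INTO carrying the new row image; an
# UPDATE that changes a primary key column becomes DELETE + INSERT.
def rewrite_update(table, pk_cols, old_row, new_row):
    pk_changed = any(old_row[c] != new_row[c] for c in pk_cols)
    cols = ", ".join(new_row)
    vals = ", ".join(repr(v) for v in new_row.values())
    if not pk_changed:
        return [f"REPLACE INTO {table} ({cols}) VALUES ({vals});"]
    where = " AND ".join(f"{c} = {old_row[c]!r}" for c in pk_cols)
    return [
        f"DELETE FROM {table} WHERE {where};",
        f"INSERT INTO {table} ({cols}) VALUES ({vals});",
    ]
```

A practical consequence: UPDATE statements on the source consume write (REPLACE/INSERT) capacity on the destination cluster.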

Required permissions

Grant the following permissions to the database accounts used by DTS before starting the task.

  • PolarDB-X 1.0: SELECT for schema migration and full migration; REPLICATION SLAVE, REPLICATION CLIENT, and SELECT on the objects to migrate for incremental migration.

  • AnalyticDB for MySQL 3.0: Access control list permissions, for all migration types.

REPLICATION SLAVE and REPLICATION CLIENT are required only for incremental migration. Full-only tasks do not need these privileges.
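
A sketch of the GRANT statements matching the source-side permissions above. The account name `dts_user` and database `app_db` are placeholders; REPLICATION privileges are global in MySQL, so they are granted ON *.*.

```python
# Sketch: build GRANT statements for the PolarDB-X 1.0 source account
# per the permissions table above. Names are placeholders.
def source_grants(account, db, host="%", incremental=True):
    stmts = [f"GRANT SELECT ON {db}.* TO '{account}'@'{host}';"]
    if incremental:
        # Replication privileges can only be granted globally.
        stmts.append(
            f"GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* "
            f"TO '{account}'@'{host}';"
        )
    return stmts
```

For a full-only task, pass `incremental=False` and only the SELECT grant is needed.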

For instructions on creating accounts and granting permissions, see the account management documentation for PolarDB-X 1.0 and AnalyticDB for MySQL 3.0.

Data type mappings

See Data type mappings for schema migration.

Create a migration task

Step 1: Go to the Data Migration page

Use either the DTS console or the DMS console.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance resides.

DMS console

Steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, move your pointer over Data + AI and choose DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region where the instance resides.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. Read the Limits displayed at the top of the page before proceeding.

  3. Configure the source and destination databases using the following parameters.

Source database

  • Task Name: A name for the DTS task. DTS generates a name automatically. Specify a descriptive name to identify the task easily. The name does not need to be unique.

  • Select Existing Connection: If the source instance is already registered with DTS, select it from the drop-down list; DTS populates the remaining parameters automatically. Otherwise, configure the parameters below.

  • Database Type: Select PolarDB-X 1.0.

  • Access Method: Select Cloud Instance.

  • Instance Region: Select the region where the source PolarDB-X 1.0 instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No if the source instance belongs to the current Alibaba Cloud account.

  • Instance ID: Select the ID of the source PolarDB-X 1.0 instance.

  • Database Account: Enter the database account. See Required permissions for the minimum permissions needed.

  • Database Password: Enter the password for the database account.

Destination database

  • Select Existing Connection: If the destination cluster is already registered with DTS, select it from the drop-down list. Otherwise, configure the parameters below.

  • Database Type: Select AnalyticDB MySQL 3.0.

  • Access Method: Select Cloud Instance.

  • Instance Region: Select the region where the target AnalyticDB for MySQL 3.0 cluster resides.

  • Instance ID: Select the ID of the target AnalyticDB for MySQL 3.0 cluster.

  • Database Account: Enter the database account. See Required permissions for the minimum permissions needed.

  • Database Password: Enter the password for the database account.
  4. Click Test Connectivity and Proceed.

    DTS server IP address ranges must be added to the security settings of both databases. DTS can add these automatically, or you can add them manually. See Add the IP address ranges of DTS servers.

Step 3: Configure migration objects

On the Configure Objects page, set the following parameters.

  • Migration Types: Select the migration types to run. To minimize service interruption, select Schema Migration, Full Data Migration, and Incremental Data Migration. To run full migration only, select Schema Migration and Full Data Migration.

    Note

    If you skip Schema Migration, create the target database and tables manually and enable the object name mapping feature in Selected Objects before starting the task.

  • DDL and DML Operations to Be Synchronized: Select the SQL operations for incremental migration at the instance level. See SQL operations supported for incremental migration. To configure at the database or table level, right-click the object in Selected Objects and select the operations in the dialog box.

  • Merge Tables: Select Yes to merge all selected source tables into a single destination table. DTS adds a __dts_data_source column to store the data source. To merge only specific tables, create two separate migration tasks. Select No (default) to migrate tables without merging.

    Warning

    Do not perform DDL operations that change source database or table schemas during migration. Doing so may cause data inconsistency or task failure.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors (default) checks whether destination tables have the same names as source tables. If identical names exist, the precheck fails and the task cannot start. Use the object name mapping feature to rename conflicting tables. Ignore Errors and Proceed skips the name conflict check: if source and destination have the same schema and a record shares a primary key with an existing destination record, DTS skips the record during full migration and overwrites it during incremental migration; if schemas differ, only matching columns are migrated or the task fails. Use with caution.

  • Capitalization of Object Names in Destination Instance: Controls the capitalization of database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance.

  • Source Objects: Select the columns, tables, or databases to migrate, then click the rightwards arrow to add them to Selected Objects.

    Note

    If you select a table, associated views, triggers, and stored procedures are not migrated. If you select a database, tables with a primary key use it as the distribution key; tables without a primary key get an auto-increment primary key, which may cause data inconsistency.

  • Selected Objects: To rename a single object, right-click it and follow the instructions in Rename objects one by one. To rename multiple objects at once, click Edit in the upper-right corner. See Rename objects in batches.

    Note

    Renaming an object may break migration for objects that depend on it. To filter rows, right-click the table and set a WHERE condition. See Configure filter conditions.
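
The Merge Tables behavior can be pictured with a small simulation. This is a conceptual sketch, not DTS internals; in particular, the value stored in __dts_data_source is simplified here to just the source table name.

```python
# Conceptual sketch: the Merge Tables option routes rows from several
# source tables into one destination table, and DTS adds a
# __dts_data_source column recording where each row came from.
def merge_tables(sources):
    """sources: dict mapping source table name -> list of row dicts."""
    merged = []
    for table_name, rows in sources.items():
        for row in rows:
            merged.append({**row, "__dts_data_source": table_name})
    return merged
```

This is why merged source tables should share a compatible schema: every row lands in the same destination table.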

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task to the shared cluster. Purchase a dedicated cluster to improve task stability. See What is a DTS dedicated cluster.

  • Retry Time for Failed Connections: How long DTS retries when the source or destination database is unreachable. Valid values: 10–1,440 minutes. Default: 720 minutes. Set this to a value greater than 30 minutes. If DTS reconnects within the retry window, the task resumes. If not, the task fails.

    Note

    When multiple tasks share a source or destination database, the most recently set retry time applies to all of them. DTS charges for the instance during retries.

  • Retry Time for Other Issues: How long DTS retries when DDL or DML operations fail. Valid values: 1–1,440 minutes. Default: 10 minutes. Set this to a value greater than 10 minutes. This value must be smaller than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limits resource consumption during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.

  • Enable Throttling for Incremental Data Migration: Limits resource consumption during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.

  • Environment Tag: Tags the DTS instance for organization purposes. Optional.

  • Configure ETL: Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL.

  • Monitoring and Alerting: Select Yes to receive notifications when the task fails or migration latency exceeds a threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task.
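
The RPS throttling options above cap how many write requests per second DTS issues. Conceptually this works like a token-bucket rate limiter; the sketch below is illustrative only (DTS implements throttling internally) and can help when sizing an RPS limit.

```python
import time

# Conceptual sketch of an RPS cap such as "RPS of Full Data Migration":
# a token bucket that blocks once more than `rps` requests arrive per
# second. Illustrative only; not DTS code.
class RpsThrottle:
    def __init__(self, rps):
        self.rps = rps
        self.tokens = float(rps)       # bucket starts full
        self.last = time.monotonic()

    def acquire(self):
        """Block until a request token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at `rps`.
            self.tokens = min(self.rps,
                              self.tokens + (now - self.last) * self.rps)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rps)
```

A lower RPS value trades longer migration time for less load on the source and destination.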

Step 5: Configure database and table fields (optional)

Click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, Partition Key, Partitioning Rules, and Partition Lifecycle for tables in the destination database.

This step is available only when Schema Migration is selected. Set Definition Status to All to view all tables. You can select multiple columns as a composite Primary Key Column, then choose one or more of those columns as the Distribution Key and Partition Key. See CREATE TABLE.
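
For reference, the fields configured in this step correspond to clauses of an AnalyticDB for MySQL 3.0 CREATE TABLE statement. The sketch below builds such DDL; the clause names follow AnalyticDB's syntax as I understand it (verify against the CREATE TABLE reference), and all table and column names are placeholders.

```python
# Sketch: compose AnalyticDB for MySQL 3.0 style DDL from the fields set
# in Step 5 (Primary Key Column, Distribution Key, Partition Key,
# Partition Lifecycle). Illustrative; verify clauses against the
# CREATE TABLE reference.
def build_create_table(table, columns, primary_key, distribution_key,
                       partition_expr=None, lifecycle=None):
    cols = ",\n  ".join(f"{name} {ctype}" for name, ctype in columns)
    ddl = (f"CREATE TABLE {table} (\n  {cols},\n"
           f"  PRIMARY KEY ({', '.join(primary_key)})\n)\n"
           f"DISTRIBUTED BY HASH({', '.join(distribution_key)})")
    if partition_expr:
        ddl += f"\nPARTITION BY VALUE({partition_expr})"
        if lifecycle:
            ddl += f" LIFECYCLE {lifecycle}"
    return ddl + ";"
```

Note the constraint from this step: the distribution key and partition key must be chosen from the primary key columns.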

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

Tip: Before saving, click Preview OpenAPI parameters to view the parameters that the corresponding API operation uses when configuring this task.

DTS runs a precheck before the task starts. If any item fails:

  1. Click View Details next to the failed item.

  2. Fix the underlying issue based on the check results.

  3. Click Precheck Again.

If an item generates an alert:

  • If the alert cannot be ignored, click View Details next to the failed item and troubleshoot the issues. Then click Precheck Again.

  • If you want to ignore the alert, follow these steps:

    1. Click Confirm Alert Details.

    2. In the dialog box, click Ignore, then click OK.

    3. Click Precheck Again.

Warning

Ignoring alert items may result in data inconsistency. Proceed only if you understand the risk.

Step 7: Purchase an instance and start the task

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following parameters.

    • Resource Group: The resource group for the migration instance. Default: the default resource group. See What is Resource Management?

    • Instance Class: The instance class determines migration speed. Select a class based on your requirements. See Instance classes of data migration instances.
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Track progress on the Data Migration page.

  • Schema migration and full data migration tasks stop automatically when complete. The Status column shows Completed.

  • Incremental data migration tasks run continuously and do not stop automatically. The Status column shows Running. Stop the task manually before switching your workload to the destination.

What's next