
Data Transmission Service: Migrate data from a PolarDB-X 2.0 instance to an AnalyticDB for MySQL V3.0 cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 2.0 instance to an AnalyticDB for MySQL V3.0 cluster. After migration, use the cluster to build internal business intelligence (BI) systems, interactive query systems, and real-time report systems.

How it works

DTS supports three migration types for this path:

  • Schema migration: copies the schemas of selected objects to the destination cluster.

  • Full data migration: copies all historical data from the source instance to the destination cluster.

  • Incremental data migration: after full data migration, continuously syncs new changes so that applications keep running without downtime.

Run all three types together for zero-downtime migration. Run schema migration and full data migration only when a brief cutover window is acceptable.

Prerequisites

Before you begin, make sure that:

  • The source PolarDB-X 2.0 instance is compatible with MySQL 5.7.

  • The destination AnalyticDB for MySQL V3.0 cluster is created. See Create a cluster.

  • The available storage space of the destination cluster is larger than the total data size of the source instance.

  • The destination cluster disk usage is below 80%. If disk usage reaches 80%, the DTS task is delayed and returns errors.

  • No backup is running on the destination cluster. If the cluster is backed up while a DTS task is active, the task fails. Schedule migrations and backups so they do not overlap.

AnalyticDB for MySQL V3.0 destination constraints:

AnalyticDB for MySQL V3.0 is an OLAP database. Be aware of the following behaviors before you start:

  • Every table migrated to the destination cluster must have a primary key. Specify a custom primary key in the destination cluster, or configure Primary Key Column in the Configurations for Databases, Tables, and Columns step. Without a primary key, migration may fail.

  • During incremental data migration, DTS automatically converts UPDATE statements to REPLACE INTO. If the UPDATE targets a primary key column, DTS converts it to DELETE + INSERT.

  • Prefix indexes cannot be migrated. Tables with prefix indexes may fail to migrate.

  • Concurrent INSERT operations during full data migration cause tablespace fragmentation in the destination. After full data migration, the destination tablespace is larger than the source.
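The statement conversion described above can be illustrated with a hedged sketch (the `orders` table and its columns are hypothetical; DTS performs this rewrite internally):

```sql
-- Source statement captured during incremental migration:
UPDATE orders SET status = 'shipped' WHERE order_id = 42;

-- DTS writes it to the destination as a full-row replace:
REPLACE INTO orders (order_id, status) VALUES (42, 'shipped');

-- An UPDATE that changes a primary key column, such as
--   UPDATE orders SET order_id = 43 WHERE order_id = 42;
-- is instead rewritten as a delete followed by an insert:
DELETE FROM orders WHERE order_id = 42;
INSERT INTO orders (order_id, status) VALUES (43, 'shipped');
```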

Limitations

Source database

  • Bandwidth: the source database server must have enough outbound bandwidth. Insufficient bandwidth slows migration.

  • Primary key or unique constraint: every table to be migrated must have a PRIMARY KEY or UNIQUE constraint with all unique field values. Tables without these constraints may produce duplicate records in the destination.

  • Table rename limit: if you rename tables or columns during migration, one task supports up to 1,000 tables. For more than 1,000 tables, run multiple tasks in batches, or migrate at the database level.

  • Uppercase table names: tables whose names contain uppercase letters support schema migration only.

  • TABLEGROUP and Locality: tables in a TABLEGROUP or a database that has a Locality attribute are not supported.

  • DDL during schema and full data migration: do not run DDL operations that change database or table schemas while schema migration or full data migration is running. The task fails if you do.

  • Writes during full-only migration: if you run schema migration and full data migration without incremental data migration, do not write to the source database during migration. To maintain consistency, include incremental data migration.

  • Network type change: if you change the network type of the PolarDB-X 2.0 instance during migration, update the network connection settings in the DTS task as well.
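One way to check the primary key or unique constraint requirement above is to query `information_schema` on the source instance (standard MySQL-compatible syntax; the schema name `mydb` is a placeholder):

```sql
-- Lists base tables in schema `mydb` that have neither a PRIMARY KEY
-- nor a UNIQUE constraint and may therefore produce duplicates.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```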

For incremental data migration, also verify:

  • Binary logging is enabled and binlog_row_image is set to full. If not, the precheck fails and the task cannot start.

  • Binary log retention period: if DTS cannot read binary logs, the task fails and data loss may occur. After full data migration completes, you can shorten the retention period, but keep it longer than 24 hours.

    • Incremental migration only: retain logs for more than 24 hours.

    • Full and incremental migration: retain logs for at least seven days.
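You can verify the binary log settings above with standard MySQL commands (variable names follow community MySQL; PolarDB-X 2.0 may expose them slightly differently, so treat this as a sketch):

```sql
-- Confirm binary logging is enabled and full row images are recorded:
SHOW VARIABLES LIKE 'log_bin';           -- expect: ON
SHOW VARIABLES LIKE 'binlog_row_image';  -- expect: FULL
```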

DTS task behavior

  • DTS updates the dts_health_check.ha_health_check table in the source database at regular intervals to advance the binary log position.

  • DTS retries failed tasks for up to seven days. Before switching workloads to the destination, stop or release any failed DTS tasks, or run REVOKE to remove write permissions from the DTS accounts. Otherwise, the source data may overwrite destination data when a failed task resumes.

  • If DDL statements fail in the destination, DTS continues running them. View failed DDL statements in task logs. See View task logs.

  • If a DTS task fails, DTS technical support attempts to restore it within 8 hours. The task may be restarted and its parameters modified. Only task parameters are modified; database parameters are not. For the parameters that may be modified, see Modify instance parameters.

  • During schema migration, DTS does not migrate foreign keys. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade updates or deletes on the source during migration, data inconsistency may occur.

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged only when Access Method is set to Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged only when Access Method is set to Public IP Address. See Billing overview. |

SQL operations supported during incremental data migration

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, and DELETE. UPDATE is automatically converted to REPLACE INTO. An UPDATE on a primary key column is converted to DELETE + INSERT. |
| DDL | CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, ADD COLUMN, MODIFY COLUMN, and DROP COLUMN |

Required permissions

Grant the DTS database accounts the following permissions before starting the task.

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| PolarDB-X 2.0 | SELECT | SELECT | SELECT on the objects to migrate, plus REPLICATION SLAVE and REPLICATION CLIENT. See Data synchronization tools for PolarDB-X. |
| AnalyticDB for MySQL V3.0 | Read and write | Read and write | Read and write |

To create database accounts and grant permissions for AnalyticDB for MySQL V3.0, see Create a database account.
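A minimal sketch of granting the source-side permissions listed above, assuming a DTS account named `dts_user` and a source schema named `mydb` (both hypothetical):

```sql
-- Source PolarDB-X 2.0: read access on the objects to migrate,
-- plus replication privileges for incremental data migration.
GRANT SELECT ON mydb.* TO 'dts_user'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';
FLUSH PRIVILEGES;
```

The destination-side account is created and granted read and write permissions in the AnalyticDB for MySQL console, as described in Create a database account.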

Data type mappings

For data type mappings between source and destination, see Data type mappings for initial schema synchronization.

Create a migration task

Step 1: Go to the Data Migration page

Use one of the following consoles:

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the data migration instance resides.

DMS console

Note

The steps below may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region where the data migration instance resides.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. On the task configuration page, configure the source and destination databases.

    Warning

    After configuring the source and destination databases, read the Limits displayed at the top of the page before proceeding. Skipping this step may cause task failures or data inconsistency.

    | Section | Parameter | Description |
    | --- | --- | --- |
    | N/A | Task Name | The name of the DTS task. DTS generates a name automatically. Specify a descriptive name to identify the task easily. The name does not need to be unique. |
    | Source Database | Select Existing Connection | If the instance is registered with DTS, select it from the drop-down list. DTS auto-fills the parameters. Otherwise, configure the parameters manually. In the DMS console, select the instance from Select a DMS database instance. See Manage database connections. |
    |  | Database Type | Select PolarDB-X 2.0. |
    |  | Access Method | Select Alibaba Cloud Instance. |
    |  | Instance Region | The region where the source PolarDB-X 2.0 instance resides. |
    |  | Replicate Data Across Alibaba Cloud Accounts | Select No for migrations within the same account. |
    |  | Instance ID | The ID of the source PolarDB-X 2.0 instance. |
    |  | Database Account | The database account for the source instance. See Required permissions. |
    |  | Database Password | The password for the database account. |
    | Destination Database | Select Existing Connection | Same as the source. In the DMS console, select the instance from Select a DMS database instance. |
    |  | Database Type | Select AnalyticDB MySQL 3.0. |
    |  | Access Method | Select Alibaba Cloud Instance. |
    |  | Instance Region | The region where the destination AnalyticDB for MySQL V3.0 cluster resides. |
    |  | Instance ID | The ID of the destination AnalyticDB for MySQL V3.0 cluster. |
    |  | Database Account | The database account for the destination cluster. See Required permissions. |
    |  | Database Password | The password for the database account. |
  3. Click Test Connectivity and Proceed.

    DTS servers must be able to access the source and destination databases. Add the DTS server CIDR blocks to the security settings of your databases if needed. See Add the CIDR blocks of DTS servers.

Step 3: Configure objects to migrate

  1. On the Configure Objects page, set the following parameters.

    | Parameter | Description |
    | --- | --- |
    | Migration Types | Select the migration types based on your requirements: <br>- Schema Migration + Full Data Migration: migrates schema and historical data. Run a brief cutover after completion. <br>- Schema Migration + Full Data Migration + Incremental Data Migration: zero-downtime migration. Keeps the destination in sync until you cut over. <br><br>Note: Selecting Full Data Migration migrates tables created with CREATE TABLE. If Incremental Data Migration is not selected, do not write to the source database during migration. |
    | Processing Mode for Existing Destination Tables | Controls behavior when the destination already contains tables with the same names as the source: <br>- Precheck and Report Errors: fails the precheck if matching table names exist. To rename migrated tables, use the object name mapping feature. <br>- Ignore Errors and Proceed: skips the precheck. <br><br>Warning: This may cause data inconsistency. During full data migration, existing records in the destination are retained. During incremental data migration, existing records are overwritten. If schemas differ, only specific columns may be migrated or the task may fail. |
    | Table Merging | - Yes: merges selected source tables into one destination table and adds a __dts_data_source column to track data sources. See Enable the multi-table merging feature. Do not run DDL operations on source tables when this option is enabled. <br>- No (default): tables are migrated individually. |
    | Source Objects | Select the objects to migrate, then click the arrow icon to add them to Selected Objects. Select columns, tables, or databases. Selecting a database migrates tables only (not views, triggers, or stored procedures). <br><br>Default distribution key rules when migrating a database: <br>- Tables with a primary key: primary key columns become distribution keys. <br>- Tables without a primary key: an auto-increment primary key column is generated, which may cause inconsistency between source and destination. |
    | Selected Objects | - To rename a single object, right-click it and follow the prompts. See Rename objects to be migrated. <br>- To rename multiple objects, click Edit in the upper-right corner. See Rename multiple objects at a time. <br>- To filter rows, right-click an object and specify WHERE conditions. See Specify filter conditions. <br>- To select specific SQL operations for a table, right-click it and choose the operations. <br><br>Note: Renaming objects with the object name mapping feature may break dependent objects. |

  2. Click Next: Advanced Settings and configure the following parameters.

    | Parameter | Description |
    | --- | --- |
    | Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
    | Retry Time for Failed Connections | How long DTS retries when the source or destination cannot be reached. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set to at least 30 minutes. If the connection is restored within this window, the task resumes. Otherwise, the task fails. <br><br>Note: If multiple tasks share the same source or destination, the most recently set value applies. DTS charges for the instance during retry. |
    | Retry Time for Other Issues | How long DTS retries when DML or DDL operations fail after the task starts. Valid values: 1 to 1,440 minutes. Default: 10 minutes. Set to greater than 10 minutes. Must be less than Retry Time for Failed Connections. |
    | Enable Throttling for Full Data Migration | Limits the load on source and destination servers during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
    | Enable Throttling for Incremental Data Migration | Limits load during incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
    | Environment Tag | Tags the DTS instance for identification. Optional. |
    | Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Controls whether DTS writes heartbeat SQL to the source database: <br>- Yes: DTS does not write heartbeat SQL. A latency figure may appear on the task. <br>- No: DTS writes heartbeat SQL. Physical backup and cloning of the source may be affected. |
    | Configure ETL | Enables the extract, transform, and load (ETL) feature to transform data during migration. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
    | Monitoring and Alerting | Sends alerts when the task fails or migration latency exceeds a threshold. Select Yes to configure alert thresholds and notification settings. See Configure monitoring and alerting. |
  3. (Optional) Click Next: Configure Database and Table Fields. In the dialog box, configure Type, Primary Key Column, Distribution Key, Partition Key, Partitioning Rules, and Partition Lifecycle for the tables being migrated to the destination cluster.

    This step is available only when Schema Migration is selected. Set Definition Status to All to view all tables. In Primary Key Column, select one or more columns to form a composite primary key, then select one or more primary key columns as distribution keys and partition keys. See CREATE TABLE.
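    The fields in this dialog box correspond to clauses of the AnalyticDB for MySQL CREATE TABLE statement. A hedged sketch, with a hypothetical `orders` table and lifecycle value (see CREATE TABLE for the exact syntax your cluster version supports):

    ```sql
    CREATE TABLE orders (
      order_id   BIGINT NOT NULL,
      order_date DATETIME NOT NULL,
      status     VARCHAR(16),
      PRIMARY KEY (order_id, order_date)              -- Primary Key Column
    )
    DISTRIBUTE BY HASH (order_id)                      -- Distribution Key
    PARTITION BY VALUE (DATE_FORMAT(order_date, '%Y%m%d'))  -- Partition Key
    LIFECYCLE 30;                                      -- Partition Lifecycle
    ```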

Step 4: Run the precheck

  1. Click Next: Save Task Settings and Precheck.

    To view the OpenAPI parameters for this task configuration before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
  2. DTS runs a precheck automatically. Wait for the results.

    • If an item fails, click View Details next to the failed item, fix the issue, and click Precheck Again.

    • If an alert appears for an item you can safely ignore, click Confirm Alert Details, then click Ignore in the dialog box, click OK, and click Precheck Again. Ignoring alerts may cause data inconsistency.

Step 5: Purchase and start the instance

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following parameters.

    | Section | Parameter | Description |
    | --- | --- | --- |
    | New Instance Class | Resource Group | The resource group for the instance. Default: default resource group. See What is Resource Management?. |
    |  | Instance Class | The instance class determines migration speed. Select based on your data volume and time requirements. See Instance classes of data migration instances. |
  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

View task progress on the Data Migration page.

Tasks without incremental data migration stop automatically when full data migration completes. The status shows Completed.
Tasks with incremental data migration run continuously and never stop automatically. The status shows Running.

After migration

After the task reaches Completed, or before you switch workloads to the destination cluster, complete the following steps:

  1. Verify data consistency: compare row counts and sample data between the source and destination to confirm that migration transferred all data correctly.

  2. Stop or release the DTS task: DTS retries failed tasks for up to seven days. Stop or release any active or failed DTS tasks. Alternatively, run REVOKE to remove write permissions from the DTS accounts on the destination cluster. This prevents DTS from overwriting destination data if a task resumes unexpectedly.

  3. Update application connections: update your application connection strings to point to the destination AnalyticDB for MySQL V3.0 cluster.

  4. Test the application: run smoke tests against the destination cluster to verify that queries return expected results and application behavior is correct.
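The verification and permission steps above can be sketched with plain SQL (the `mydb.orders` table and `dts_user` account are hypothetical placeholders):

```sql
-- 1. Compare row counts per table; run the same query on the source
--    instance and on the destination cluster and compare the results:
SELECT COUNT(*) FROM mydb.orders;

-- 2. Remove write permissions from the DTS account on the destination
--    so a resumed task cannot overwrite destination data:
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_user'@'%';
```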

What's next