
Data Transmission Service:Migrate data from a PolarDB for MySQL cluster to an AnalyticDB for MySQL V3.0 cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB for MySQL cluster to an AnalyticDB for MySQL V3.0 cluster. After migration, you can use AnalyticDB for MySQL to build business intelligence (BI) systems, interactive query systems, and real-time report systems.

Prerequisites

Before you begin, make sure that:

Migration types

DTS supports the following migration types for this scenario:

Migration type | Description
Schema migration | Migrates the schemas of the selected objects from PolarDB for MySQL to AnalyticDB for MySQL V3.0. Because the two databases are heterogeneous, DTS does not guarantee schema consistency after migration. Evaluate the impact of data type conversion before proceeding. For data type mappings, see Data type mappings between heterogeneous databases.
Full data migration | Migrates all historical data of the selected objects to the destination cluster. If Full Data Migration is selected, you can also migrate the schema and data of tables created by using the CREATE TABLE statement to the destination database.
Incremental data migration | After full data migration completes, continuously migrates new data changes from the source to the destination. Select this type to migrate data without interrupting your applications.

Recommended combination: Select schema migration, full data migration, and incremental data migration together. This approach keeps the source database writable during migration and maintains data consistency.

SQL operations supported in incremental data migration

Type | Supported operations
DML | INSERT, UPDATE, DELETE
DDL | CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, ADD COLUMN, MODIFY COLUMN, DROP COLUMN

When writing to the destination AnalyticDB for MySQL V3.0 cluster, DTS automatically converts UPDATE to REPLACE INTO. If an UPDATE statement modifies the primary key, it is converted to a DELETE followed by an INSERT.
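The conversion described above can be illustrated with a short sketch. The table and column names below are hypothetical, and the statements shown for the destination are only an approximation of the rewrite DTS performs:

```sql
-- Source (PolarDB for MySQL): an UPDATE on a non-primary-key column.
UPDATE orders SET status = 'shipped' WHERE order_id = 1001;

-- Destination (AnalyticDB for MySQL V3.0): DTS rewrites the change as a
-- REPLACE INTO that carries the full row image.
REPLACE INTO orders (order_id, status, amount) VALUES (1001, 'shipped', 99.50);

-- If the UPDATE modifies the primary key itself, DTS instead issues a
-- DELETE for the old key followed by an INSERT with the new key.
DELETE FROM orders WHERE order_id = 1001;
INSERT INTO orders (order_id, status, amount) VALUES (2001, 'shipped', 99.50);
```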
Warning

If you change a field's data type in the source table during migration, the task fails. To recover:

  1. In the destination cluster, create a table (for example, customer_new) with the same schema as the original table.

  2. Run an INSERT INTO ... SELECT statement to copy the data from the original table to the new table.

  3. Rename or delete the original table, then rename the new table to the original name.

  4. Restart the data migration task in the DTS console.
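The recovery steps above can be sketched in SQL. The customer and customer_new names follow the example in the warning; whether CREATE TABLE ... LIKE is available depends on the destination, so you may need to re-issue the full CREATE TABLE DDL instead:

```sql
-- 1. Create a table with the same schema as the original
--    (or re-run the original CREATE TABLE DDL if LIKE is not supported).
CREATE TABLE customer_new LIKE customer;

-- 2. Copy the data across.
INSERT INTO customer_new SELECT * FROM customer;

-- 3. Swap the tables: move the original aside, take over its name,
--    then drop the old copy once you have verified the swap.
RENAME TABLE customer TO customer_old, customer_new TO customer;
DROP TABLE customer_old;
```

After the swap, restart the migration task in the DTS console as described in step 4.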

Billing

Migration type | Instance configuration fee | Internet traffic fee
Schema migration and full data migration | Free of charge | Charged when the destination Access Method is set to Public IP Address
Incremental data migration | Charged | Charged when the destination Access Method is set to Public IP Address

For pricing details, see Billing overview.

Required permissions

Database | Required permission
PolarDB for MySQL | Read permissions on the objects to be migrated
AnalyticDB for MySQL V3.0 cluster | Read and write permissions

For instructions on creating accounts and granting permissions, see:

Limitations

Review the following limitations before creating a migration task. The Applies to column identifies which migration types each limitation affects.

Source database limitations

Limitation | Applies to
The source server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. | All migration types
Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records. | All migration types
When migrating tables individually (rather than entire databases), a single task supports a maximum of 1,000 tables. Exceeding this limit causes a request error. To migrate more than 1,000 tables, split them across multiple tasks or migrate the entire database instead. | All migration types
Binary logging must be enabled, and the loose_polar_log_bin parameter must be set to on. Otherwise, the precheck fails. See Enable binary logging and Modify parameters. Note: Enabling binary logging incurs storage charges for the log files. | Incremental data migration only
Binary logs must be retained for at least 3 days; 7 days is recommended. Shorter retention periods may cause DTS to fail to retrieve the logs, resulting in task failure or data loss. See Modify the retention period. | Incremental data migration only
Do not run DDL statements that change database or table schemas during the task. Doing so causes the task to fail. | Schema migration and full data migration only
Do not run DDL statements that add comments (for example, ALTER TABLE table_name COMMENT='Table comment';). Doing so causes the task to fail. | All migration types
Do not write data to the source database during migration. Data written during migration causes inconsistency between the source and destination. (This restriction does not apply if you also select incremental data migration.) | Full data migration only
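To check the primary-key requirement above before you create the task, you can query information_schema on the source cluster. This is a sketch; 'mydb' is an example schema name:

```sql
-- List base tables in the given schema that have no PRIMARY KEY or
-- UNIQUE constraint. Any table returned here may produce duplicate
-- records in the destination after migration.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```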

Other limitations

Limitation | Applies to
Prefix indexes cannot be migrated. Source tables with prefix indexes may fail to migrate. | All migration types
Either configure a custom primary key in the destination table or set Primary Key Column in Configurations for Databases, Tables, and Columns. Otherwise, migration may fail. | All migration types
Read-only nodes of the source PolarDB for MySQL cluster are not migrated. | All migration types
OSS external tables in the source PolarDB for MySQL cluster are not migrated. | All migration types
If the disk usage of any node in the destination AnalyticDB for MySQL cluster exceeds 80%, an exception occurs and the DTS task is delayed. Estimate the required disk space before starting migration and make sure the destination cluster has enough storage. | All migration types
If the destination AnalyticDB for MySQL V3.0 cluster is being backed up while the DTS task runs, the task fails. | All migration types
DTS retrieves values from FLOAT and DOUBLE columns by using ROUND(COLUMN,PRECISION). If no precision is specified, the default is 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these defaults meet your requirements. | All migration types
DTS automatically retries failed tasks for up to 7 days. Before switching workloads to the destination, stop or release any failed tasks, or run REVOKE to revoke write permissions from the accounts that DTS uses. Otherwise, a resumed task may overwrite destination data. | All migration types
While the DTS instance runs, DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` in the source database to advance the binary log position. | Incremental data migration only
If a DDL statement fails in the destination, the DTS task continues running. Check the task logs to review the failed statements. See View task logs. | All migration types
During schema migration, DTS does not migrate foreign keys. During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade update or delete operations on the source during migration may cause data inconsistency. | All migration types
Concurrent INSERT operations during full data migration cause table fragmentation in the destination. After full migration completes, the destination tablespace is larger than that of the source. | Full data migration only
If a task fails, DTS technical support attempts to restore it within 8 hours. The task may be restarted and its parameters may be modified during restoration. Database parameters are not modified. | All migration types
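The REVOKE safeguard mentioned above can be sketched as follows. The account name dts_user and database name adb_demo are examples; run the statement in the destination cluster before switching workloads:

```sql
-- Revoke write privileges from the account that DTS uses on the
-- destination database, so that a resumed or retried task cannot
-- overwrite destination data after cutover.
REVOKE INSERT, UPDATE, DELETE ON adb_demo.* FROM 'dts_user'@'%';
```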

Create a migration task

Step 1: Go to the Data Migration page

Use one of the following consoles:

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will reside.

DMS console

Note

The exact steps may vary depending on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region where the instance will reside.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. Configure the following parameters.

    Warning

    After configuring the source and destination databases, read the Limits displayed at the top of the page before proceeding. Skipping this step may cause the task to fail or result in data inconsistency.

    Source Database

    Parameter | Description
    Task Name | A name for the DTS task. DTS generates a name automatically. Use a descriptive name for easy identification. Uniqueness is not required.
    Select Existing Connection | If the source cluster is already registered with DTS, select it from the drop-down list. DTS populates the remaining parameters automatically. Otherwise, configure the parameters below. In the DMS console, select the instance from Select a DMS database instance.
    Database Type | Select PolarDB for MySQL.
    Access Method | Select Alibaba Cloud Instance.
    Instance Region | Select the region where the source PolarDB for MySQL cluster resides.
    PolarDB Cluster ID | Select the ID of the source PolarDB for MySQL cluster.
    Database Account | Enter the database account of the source cluster. See Required permissions.
    Database Password | Enter the password for the database account.
    Encryption | Specify whether to encrypt the connection. For SSL encryption details, see Configure SSL encryption.

    Destination Database

    Parameter | Description
    Select Existing Connection | If the destination cluster is already registered with DTS, select it from the drop-down list. DTS populates the remaining parameters automatically. Otherwise, configure the parameters below. In the DMS console, select the instance from Select a DMS database instance.
    Database Type | Select AnalyticDB for MySQL V3.0.
    Access Method | Select Alibaba Cloud Instance.
    Instance Region | Select the region where the destination cluster resides.
    Instance ID | Select the ID of the destination AnalyticDB for MySQL V3.0 cluster.
    Database Account | Enter the database account of the destination cluster. See Required permissions.
    Database Password | Enter the password for the database account.
  3. Click Test Connectivity and Proceed.

    Make sure the CIDR blocks of DTS servers are added to the security settings of both the source and destination databases. See Add the CIDR blocks of DTS servers.

Step 3: Configure objects to migrate

  1. On the Configure Objects page, set the following parameters:

    Parameter | Description
    Migration Types | Select the migration types to run. To migrate without service downtime, select Schema Migration, Full Data Migration, and Incremental Data Migration. For a one-time migration only, select Schema Migration and Full Data Migration.
    Processing Mode for Existing Destination Tables | Precheck and Report Errors (default): checks whether the destination contains tables whose names are identical to those in the source. If identical names exist, the precheck fails and the task cannot start. To resolve name conflicts without deleting destination tables, use the object name mapping feature. Ignore Errors and Proceed: skips the precheck for identical table names. Use with caution, because data inconsistency may result: during full data migration, conflicting records in the destination are retained; during incremental data migration, conflicting records are overwritten.
    Select DDL and DML to Sync at the Instance Level | The SQL operations to migrate during incremental data migration, set at the instance level. To configure operations for a specific database or table, right-click the object in the Selected Objects section.
    Table Merging | Yes: DTS adds a __dts_data_source column to each table to record the data source, and all selected source tables are merged into a single destination table. For details, see Enable the multi-table merging feature. No (default): tables are not merged.
    Source Objects | Select the objects to migrate, then click the arrow to move them to the Selected Objects section. You can select columns, tables, or entire databases. If you select tables, DTS does not migrate views, triggers, or stored procedures. If you select an entire database, tables with a primary key use the primary key columns as distribution keys, and tables without a primary key are given an auto-increment primary key, which may cause data inconsistency.
    Selected Objects | To rename an object, right-click it and use individual name mapping. To rename multiple objects at once, click Batch Edit. To filter rows by condition, right-click the object and specify WHERE conditions. See Specify filter conditions. Renaming an object may cause dependent objects to fail to migrate.
  2. Click Next: Advanced Settings and configure the following optional parameters:

    Parameter | Description
    Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. To improve stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
    Copy the temporary table of the Online DDL tool | Applies when you use DMS or gh-ost to perform online DDL on the source. Yes: migrates the temporary tables generated by online DDL; large online DDL operations may increase migration latency. No, Adapt to DMS Online DDL: skips temporary tables and migrates only the original DDL from DMS; destination tables may be locked. No, Adapt to gh-ost: skips temporary tables and migrates only the original DDL from gh-ost; you can use the default or custom regular expressions to filter out shadow tables; destination tables may be locked.

    Important

    pt-online-schema-change is not supported. Using it causes the DTS task to fail.

    Retry Time for Failed Connections | How long DTS retries a failed connection before marking the task as failed. Valid values: 10 to 1,440 minutes. Default: 720 minutes. We recommend that you set this parameter to a value greater than 30 minutes. If the connection is restored within this period, the task resumes automatically.

    Note

    Multiple tasks that share a source or destination use the most recently configured retry time. DTS instance charges continue to apply during retries.

    Retry Time for Other Issues | How long DTS retries failed DDL or DML operations. Valid values: 1 to 1,440 minutes. Default: 10 minutes. We recommend that you set this parameter to a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections.
    Enable Throttling for Full Data Migration | Limits read and write resource usage during full data migration to reduce the load on the database servers. You can configure QPS to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.
    Enable Throttling for Incremental Data Migration | Limits resource usage during incremental data migration. You can configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.
    Whether to delete SQL operations on heartbeat tables | Yes: DTS does not write heartbeat table operations to the source; migration latency metrics may show a delay. No: DTS writes heartbeat operations to the source; physical backup and cloning of the source may be affected.
    Environment Tag | An optional tag that identifies the DTS instance.
    Configure ETL | Yes: enables extract, transform, and load (ETL). Enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No: ETL is disabled.
    Monitoring and Alerting | Yes: sends alerts when the task fails or migration latency exceeds the threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting. No: alerting is disabled.
  3. Click Next Step: Data Verification to configure a data verification task. For details, see Configure a data verification task.

  4. (Optional) Click Next: Configure Database and Table Fields. In the dialog box, configure the Type, Primary Key Column, Distribution Key, Partition Key, Partitioning Rules, and Partition Lifecycle for destination tables.

    This step is available only when Schema Migration is selected. Set Definition Status to All to view all tables. In the Primary Key Column field, you can select multiple columns to form a composite primary key. At least one primary key column must also be selected as a distribution key and partition key. See CREATE TABLE.
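As a hedged sketch of what the fields configured above correspond to in the destination, the following AnalyticDB for MySQL V3.0 table definition combines a composite primary key, a distribution key drawn from the primary key columns, and a value partition with a lifecycle. The table, columns, and lifecycle value are hypothetical; verify the exact syntax against the CREATE TABLE topic:

```sql
CREATE TABLE orders (
  order_id   BIGINT NOT NULL,
  order_date DATETIME NOT NULL,
  amount     DECIMAL(10, 2),
  PRIMARY KEY (order_id, order_date)       -- composite primary key
)
DISTRIBUTED BY HASH(order_id)              -- distribution key, chosen from the primary key columns
PARTITION BY VALUE(DATE_FORMAT(order_date, '%Y%m%d'))
LIFECYCLE 30;                              -- keep the most recent 30 partitions
```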

Step 4: Run the precheck

  1. Click Next: Save Task Settings and Precheck.

    To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
  2. Wait for the precheck to complete. If any item fails, click View Details next to the failed item, resolve the issue, and click Precheck Again. If an alert is triggered:

    • For alerts that cannot be ignored, resolve the issue and run the precheck again.

    • For alerts that can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alerts may result in data inconsistency.

Step 5: Purchase an instance and start the task

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following:

    Parameter | Description
    Resource Group | The resource group to which the migration instance belongs. Default: the default resource group. See What is Resource Management?
    Instance Class | The instance class determines the migration speed. Select a class based on your workload. See Instance classes of data migration instances.
  3. Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

  4. Click Buy and Start, then click OK in the confirmation message.

Monitor task progress

After the task starts, go to the Data Migration page to monitor progress.

  • Full data migration only (no incremental): The task stops automatically when complete. Status changes to Completed.

  • With incremental data migration: The task runs continuously and does not stop automatically. Status shows Running.

What to do after migration

After incremental data migration reaches a stable state and the migration latency drops to near zero:

  1. Stop any applications that write to the source PolarDB for MySQL cluster.

  2. Wait for the DTS task to process the remaining changes (latency reaches 0).

  3. Switch your application connections to the destination AnalyticDB for MySQL V3.0 cluster.

  4. Stop or release the DTS migration task to prevent resumed tasks from overwriting destination data.

  5. Verify data consistency between the source and destination databases.

  6. Update any connection strings, configuration files, or environment variables in your applications to point to the destination cluster.
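For the verification step above, a minimal consistency spot check is to run the same aggregate queries against both clusters and compare the results. The table and column names are examples:

```sql
-- Run on both the source and the destination and compare the output.
SELECT COUNT(*) AS row_count FROM orders;

-- An optional deeper check on a numeric column and the key range.
SELECT SUM(amount), MIN(order_id), MAX(order_id) FROM orders;
```

For a thorough comparison, use the data verification task configured in Step 3 rather than manual queries alone.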