Data Transmission Service: Migrate data between PolarDB-X 2.0 instances

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data between PolarDB-X 2.0 instances with minimal downtime. DTS supports three migration types — schema migration, full data migration, and incremental data migration — so you can keep your application running throughout the migration.

Prerequisites

Before you begin, ensure that you have:

  • Two PolarDB-X 2.0 instances (source and destination), both compatible with MySQL 5.7

  • A destination instance with more available storage space than the total data size in the source instance

  • Database accounts with the permissions listed in Required permissions

Limitations

Source database

  • Table constraints: Tables must have a PRIMARY KEY or UNIQUE constraint, and the values in the constrained columns must be unique. Otherwise, the destination database may contain duplicate records.

  • Table count (when selecting individual tables): Up to 1,000 tables per task. Exceeding this limit causes a request error. To migrate more tables, create multiple tasks or migrate at the database level.

  • Bandwidth: The server hosting the source database must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • DDL operations: Do not run DDL operations on the source during schema migration or full data migration. Doing so causes the task to fail.

  • Writes during full-only migration: If you run only full data migration (without incremental data migration), do not write to the source during migration. Concurrent writes can cause data inconsistency.

  • Network type changes: If you change the network type of the source PolarDB-X 2.0 instance during migration, update the network connection settings in the DTS task as well.
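
You can verify the table-constraint requirement before you configure the task. The following query is a sketch that assumes a MySQL 5.7-compatible information_schema and uses a placeholder database name (mydb); it lists base tables that have neither a PRIMARY KEY nor a UNIQUE constraint:

```sql
-- List base tables in `mydb` (placeholder name) that lack both a
-- PRIMARY KEY and a UNIQUE constraint. Such tables can produce
-- duplicate records in the destination after migration.
SELECT t.table_schema, t.table_name
FROM information_schema.tables AS t
LEFT JOIN information_schema.table_constraints AS c
       ON  c.table_schema = t.table_schema
       AND c.table_name   = t.table_name
       AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'          -- replace with your database name
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```

Any table returned by this query should be given a primary key or unique constraint, or excluded from the task, before migration starts.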

Binary logging requirements (for incremental data migration)

If you include incremental data migration, verify that the source database meets the following requirements:

  • Binary logging: Must be set to ON. DTS reads binary logs to replicate incremental changes.

  • binlog_row_image: Must be set to full so that binary logs capture all column values, enabling accurate conflict resolution.

  • Binary log retention (incremental only): More than 24 hours, so that DTS can access consecutive logs during migration.

  • Binary log retention (full + incremental): At least 7 days, to give DTS enough history to resume from the full migration start point.

If binary logs are not retained for the required duration, DTS may fail to obtain them and the task may fail. In exceptional cases, data inconsistency or loss may occur. After full data migration completes, you can reduce the retention period to more than 24 hours.
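
On a MySQL 5.7-compatible source, these settings can typically be checked with standard statements. The sketch below assumes the instance exposes the usual MySQL system variables; whether PolarDB-X 2.0 allows changing them at the instance level is an assumption, so confirm against the console as well:

```sql
-- Confirm binary logging is enabled and row images are complete.
SHOW GLOBAL VARIABLES LIKE 'log_bin';           -- expected: ON
SHOW GLOBAL VARIABLES LIKE 'binlog_row_image';  -- expected: FULL

-- Check and, if permitted, set the retention period (in days, MySQL 5.7).
SHOW GLOBAL VARIABLES LIKE 'expire_logs_days';
SET GLOBAL expire_logs_days = 7;  -- 7 days for a full + incremental task
```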
Important

DTS updates the dts_health_check.ha_health_check table in the source database at regular intervals to advance the binary log position.

Other limitations

  • The destination PolarDB-X 2.0 instance must be compatible with MySQL 5.7.

  • Run migrations during off-peak hours. Full data migration consumes read and write resources on both the source and destination, which increases server load.

  • After full data migration, the destination tablespace is larger than the source due to fragmentation from concurrent INSERT operations.

  • DTS automatically retries failed tasks for up to 7 days. Before you switch workloads to the destination, stop or release any failed tasks. Alternatively, run REVOKE to remove DTS write permissions on the destination. Otherwise, a resumed task may overwrite destination data with source data.

Billing

  • Schema migration and full data migration: The instance configuration fee is free. Internet traffic fees apply only when migrating from Alibaba Cloud over the Internet. See Billing overview.

  • Incremental data migration: The instance configuration fee is charged. See Billing overview.

Migration types

DTS supports three migration types. Combine them based on your requirements:

  • Schema migration: Migrates the schemas of the selected objects to the destination. Required as the foundation for data migration.

  • Full data migration: Migrates all existing data in the selected objects. Sufficient for one-time migrations with no write traffic during migration.

  • Incremental data migration: Continuously replicates changes from the source to the destination after full migration completes. Use together with full data migration to keep services running during migration.

Recommended: Select all three types (schema migration + full data migration + incremental data migration) to maintain service continuity and data consistency.

SQL operations supported in incremental data migration

  • DML (Data Manipulation Language): INSERT, UPDATE, DELETE

Required permissions

Grant the following permissions to the database accounts used by DTS before starting the task.

Source PolarDB-X 2.0 instance

  • SELECT (schema migration and full data migration): Reads data and schema definitions from the source.

  • REPLICATION SLAVE (incremental data migration): Enables binary log streaming to replicate incremental changes.

  • REPLICATION CLIENT (incremental data migration): Provides access to the binary log position and server status.

  • SELECT on the objects to be migrated (incremental data migration): Reads the specific objects being migrated.

Destination PolarDB-X 2.0 instance

Read and write permissions are required for all migration types.
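
The grants above can be issued with standard MySQL GRANT statements. This is a sketch only: the account name (dts_user), database name (mydb), and host pattern are placeholders, and PolarDB-X 2.0 accounts may instead be managed through the console:

```sql
-- Source account (placeholder dts_user): read access to the migrated
-- objects plus replication privileges for incremental migration.
GRANT SELECT ON mydb.* TO 'dts_user'@'%';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';

-- Destination account: read and write access to the target database.
-- The exact privilege list DTS needs for writes is an assumption here.
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
    ON mydb.* TO 'dts_user'@'%';
```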

Create a migration task

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

You can also go directly to the Data Migration Tasks page in the new DTS console. Console navigation may vary based on your DMS console mode. See Simple mode and Configure the DMS console based on your business requirements.

Step 2: Configure source and destination databases

  1. From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

    In the new DTS console, select the region in the upper-left corner.
  2. Click Create Task.

    Warning

    After selecting the source and destination instances, read the limits displayed at the top of the page to prevent task failures or data inconsistency.

  3. Configure the task and database connections:

    Task Name: DTS auto-generates a task name. Specify a descriptive name to identify the task. Names do not need to be unique.

    Source Database:

      • Select Instance: Select an existing instance to auto-fill its settings, or configure the connection manually.

      • Database Type: Select PolarDB-X 2.0.

      • Connection Type: Select Alibaba Cloud Instance.

      • Instance Region: The region where the source PolarDB-X 2.0 instance resides.

      • Instance ID: The ID of the source PolarDB-X 2.0 instance.

      • Database Account: The source database account. See Required permissions.

      • Database Password: The password for the database account.

    Destination Database:

      • Select Instance: Select an existing instance to auto-fill its settings, or configure the connection manually.

      • Database Type: Select PolarDB-X 2.0.

      • Connection Type: Select Alibaba Cloud Instance.

      • Instance Region: The region where the destination PolarDB-X 2.0 instance resides.

      • Instance ID: The ID of the destination PolarDB-X 2.0 instance.

      • Database Account: The destination database account. See Required permissions.

      • Database Password: The password for the database account.

Step 3: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB) and to the security group rules of Elastic Compute Service (ECS) instances. For self-managed databases in data centers or hosted by third-party providers, manually add the DTS server CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

Warning

Adding DTS CIDR blocks to whitelists or security groups introduces security exposure. Before proceeding, take preventive measures such as: strengthening account credentials, restricting exposed ports, authenticating API calls, auditing whitelist and security group rules regularly, and using Express Connect, VPN Gateway, or Smart Access Gateway to connect the database to DTS.

Step 4: Select objects and configure migration settings

  • Migration Type: Select the migration types to run. For service continuity, select Schema Migration, Full Data Migration, and Incremental Data Migration. For a one-time migration with no writes to the source, select Schema Migration and Full Data Migration.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors fails the precheck if the destination has tables with the same names as source tables; use the object name mapping feature to resolve conflicts (see Map object names). Ignore Errors and Proceed skips the name conflict check; if schemas match, DTS skips records with matching primary keys, and if schemas differ, only some columns migrate or the task fails. Use with caution.

  • Source Objects: Select objects from Source Objects and click the rightwards arrow icon to add them to Selected Objects. You can select columns, tables, or schemas. Selecting tables or columns excludes views, triggers, and stored procedures.

  • Selected Objects: To rename a single migrated object, right-click it in Selected Objects (see Map the name of a single object). To rename multiple objects, click Batch Edit (see Map multiple object names at a time). To filter rows, right-click an object and specify SQL WHERE conditions (see Use SQL conditions to filter data). Renaming an object may cause dependent objects to fail migration.

Step 5: Configure advanced settings

Click Next: Advanced Settings and configure:

  • Set Alerts: Set to Yes to receive notifications when the task fails or migration latency exceeds a threshold. Specify the alert threshold and contacts. See Configure monitoring and alerting when you create a DTS task.

  • Retry Time for Failed Connections: How long DTS retries after losing the connection to the source or destination. Valid values: 10 to 1,440 minutes. Default: 720 minutes. Set this to more than 30 minutes. If DTS reconnects within the retry window, the task resumes; otherwise, the task fails. If multiple tasks share a source or destination, the shortest retry window applies to all of them.

  • Configure ETL: Set to Yes to transform data during migration by using the extract, transform, and load (ETL) feature, and enter processing statements in the code editor. See Configure ETL in a data migration or data synchronization task and What is ETL?.

  • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: If you select Yes, DTS does not write heartbeat SQL operations to the source, and a latency value may appear on the DTS instance. If you select No, DTS writes heartbeat SQL operations to the source, which may affect features such as physical backup and cloning.

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

DTS runs a precheck before starting migration. The task only proceeds after passing the precheck.

If the precheck fails or returns alerts:

  • Failed items: Click View Details next to the failed item, resolve the issue, then click Precheck Again.

  • Alert items that cannot be ignored: Click View Details, resolve the issue, then rerun the precheck.

  • Alert items that can be ignored: Click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.

Step 7: Purchase an instance and start migration

  1. Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.

  2. Select an Instance Class for the migration instance. Instance class determines migration speed. See Specifications of data migration instances.

  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start.

Monitor the migration progress in the task list.

Switch your workload to the destination

Warning

This is the highest-risk phase of migration. A DTS task can resume automatically for up to 7 days after failing. If you do not stop or release the task after switching, a resumed task will overwrite destination data with source data.

Complete these steps in order to switch your workload safely:

  1. Wait for the incremental migration latency to reach 0 and remain stable. This confirms the destination is in sync with the source.

  2. Stop all write traffic to the source database.

  3. Wait for the migration latency to reach 0 again, confirming all remaining changes have replicated.

  4. Update your application's connection string to point to the destination instance.

  5. Stop or release the DTS migration task. Alternatively, run REVOKE on the destination to remove DTS write permissions.
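
The REVOKE alternative in step 5 can be expressed as a standard MySQL statement. This is a sketch: the account name (dts_user), database name (mydb), and host pattern are placeholders, and the privilege list should match whatever was originally granted to the DTS account:

```sql
-- Remove DTS write access on the destination so that a task resumed
-- within the 7-day retry window cannot overwrite data after cutover.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_user'@'%';
```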

What's next