Data Transmission Service: Migrate data from a PolarDB for MySQL cluster to a PolarDB-X 2.0 instance

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB for MySQL cluster to a PolarDB-X 2.0 instance. DTS supports schema migration, full data migration, and incremental data migration, enabling a smooth cutover with minimal downtime.

Prerequisites

Before you begin, make sure that:

  • A PolarDB-X 2.0 instance is created.

  • The available storage space of the PolarDB-X 2.0 instance is larger than the total size of the data in the source PolarDB for MySQL cluster.
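To check the storage prerequisite, you can estimate the total data size of the source cluster from information_schema. A minimal sketch; the values are approximations maintained by the storage engine, so leave headroom:

```sql
-- Run on the source PolarDB for MySQL cluster.
-- Approximate total size (data + indexes) of user schemas, in GB.
SELECT ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 2) AS total_size_gb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
```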

Billing

  • Schema migration and full data migration: The instance configuration fee is free of charge. An Internet traffic fee is charged when the Access Method parameter of the destination database is set to Public IP Address. See Billing overview.

  • Incremental data migration: The instance configuration fee is charged. See Billing overview.

Migration types

DTS supports three migration types that you can combine based on your requirements.

  • Schema migration: Migrates the schemas of the selected objects (tables, views, triggers, stored procedures, and stored functions). The routine_body of stored procedures and functions and the select_statement of views cannot be modified during migration. DTS changes the SECURITY attribute of views, stored procedures, and functions from DEFINER to INVOKER and sets the DEFINER to the destination database account. DTS does not migrate user information. To call a view, stored procedure, or stored function in the destination database, grant the required permissions to INVOKER.

  • Full data migration: Migrates the historical data of the selected objects from the source database to the destination database.

  • Incremental data migration: Migrates the changes that are made after full data migration is complete, keeping the source and destination databases in sync while your applications continue running. Supported SQL operations: INSERT, UPDATE, and DELETE.

Recommended configuration: Select Schema Migration, Full Data Migration, and Incremental Data Migration together to ensure data consistency and minimize downtime during cutover.

Limitations

Source database

  • Outbound bandwidth: The server on which the source database resides must have enough outbound bandwidth. Insufficient bandwidth reduces the migration speed.

  • Table constraints: The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.

  • Table count when renaming objects: If you select tables as the objects to be migrated and need to rename tables or columns in the destination database, a single task supports up to 1,000 tables. To migrate more than 1,000 tables, configure multiple tasks or migrate the entire database instead.

  • DDL during full data migration: Do not perform DDL operations on the source database during full data migration. Otherwise, the migration task fails.

  • Writes during full-only migration: If you run only full data migration (without incremental data migration), do not write to the source database during migration. Otherwise, the data becomes inconsistent.

  • Incremental DDL: Incremental DDL operations cannot be migrated. To perform DDL operations during incremental data migration, run them in the destination database first and then in the source database.
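To find tables that violate the table-constraints limitation before you start, you can query information_schema on the source cluster. A hedged sketch: it lists base tables in user schemas that have neither a PRIMARY KEY nor a UNIQUE constraint.

```sql
-- Run on the source PolarDB for MySQL cluster.
SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS c
       ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
      AND c.TABLE_NAME   = t.TABLE_NAME
      AND c.CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND c.CONSTRAINT_NAME IS NULL;
```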

Binary logging requirements for incremental data migration:

  • Binary logging feature: Must be enabled before you start the task. If it is not enabled, the precheck fails.

  • loose_polar_log_bin: Must be set to on.

  • Binary log retention period: At least 3 days (7 days recommended). If logs are retained for fewer than 3 days, the task may fail or data may be lost.
Enabling binary logging on a PolarDB for MySQL cluster incurs storage charges for binary log files. For setup instructions, see Enable binary logging and Modify the retention period.
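You can verify these settings from a MySQL client before you start the task. A hedged sketch; the retention variable name varies by engine version (expire_logs_days on 5.7-compatible engines, binlog_expire_logs_seconds on 8.0-compatible engines):

```sql
-- Expect ON once binary logging is enabled for the cluster.
SHOW VARIABLES LIKE 'log_bin';
-- Retention period; 3 days = 259200 seconds on 8.0-compatible engines.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
```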

Other limitations

  • DTS does not migrate read-only nodes of the source PolarDB for MySQL cluster.

  • DTS does not migrate Object Storage Service (OSS) external tables from the source cluster.

  • During schema migration, DTS migrates foreign keys from the source database to the destination database. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If cascade update or delete operations are performed on the source database during migration, data inconsistency may occur.

  • During full data migration, concurrent INSERT operations cause tablespace fragmentation. The used tablespace of the destination database will be larger than that of the source database after migration completes.

  • DTS retrieves values from FLOAT and DOUBLE columns using ROUND(COLUMN,PRECISION). If no precision is specified, DTS sets precision to 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these defaults meet your requirements.

  • DTS attempts to resume failed tasks for up to 7 days. Before switching workloads to the destination database, stop or release any failed tasks, or run REVOKE to revoke write permissions from the DTS accounts. Otherwise, resumed failed tasks may overwrite data in the destination database.

  • DTS runs CREATE DATABASE IF NOT EXISTS `test` on the source database periodically to advance the binary log position.

  • If a task fails, DTS technical support will attempt to restore it within 8 hours. The task may be restarted and task parameters (not database parameters) may be modified during restoration.
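For the failed-task limitation above, revoking write permissions from the DTS account on the destination database can be done with a REVOKE statement. A sketch with placeholder names ('dts_account' and 'mydb' are hypothetical; substitute the actual DTS account and database):

```sql
-- Run on the destination database before switching workloads over.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_account'@'%';
```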

Required permissions

  • PolarDB for MySQL cluster (source): read permissions on the objects to be migrated.

  • PolarDB-X 2.0 instance (destination): read and write permissions on the objects to be migrated.

For instructions on creating accounts and granting permissions, see the account and permission management documentation for PolarDB for MySQL and PolarDB-X 2.0.
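As an illustration of these permission requirements, the accounts could be created as follows. All names and passwords are placeholders ('dts_source', 'dts_dest', 'mydb'), and PolarDB accounts are typically managed in the console rather than with raw SQL:

```sql
-- Source PolarDB for MySQL cluster: read-only access.
CREATE USER 'dts_source'@'%' IDENTIFIED BY '<strong-password>';
GRANT SELECT ON mydb.* TO 'dts_source'@'%';

-- Destination PolarDB-X 2.0 instance: read and write access.
CREATE USER 'dts_dest'@'%' IDENTIFIED BY '<strong-password>';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
  ON mydb.* TO 'dts_dest'@'%';
```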

Create a data migration task

Step 1: Go to the Data Migration page

Use one of the following methods to open the Data Migration page and select the region where your migration instance resides.

DTS console:

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the data migration instance resides.

DMS console:

The actual steps may vary depending on the mode and layout of your DMS console. See Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region where the data migration instance resides.

Step 2: Configure source and destination databases

  1. Click Create Task to go to the task configuration page.

  2. Configure the source and destination databases using the following parameters.

    Warning

    After configuring the source and destination databases, read the Limits displayed at the top of the page. Skipping this step may cause task failures or data inconsistency.

    General settings

    • Task Name: The name of the DTS task. DTS automatically generates a name. We recommend that you specify a descriptive name to easily identify the task. The name does not need to be unique.

    Source Database

    • Select Existing Connection: If the source instance is registered with DTS, select it from the drop-down list. DTS automatically fills in the remaining parameters. In the DMS console, select the instance from Select a DMS database instance. If the instance is not registered, configure the following parameters manually.

    • Database Type: Select PolarDB for MySQL.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region where the source PolarDB for MySQL cluster resides.

    • PolarDB Instance ID: The ID of the source PolarDB for MySQL cluster.

    • Database Account: The account of the source cluster. See Required permissions.

    • Database Password: The password of the database account.

    • Encryption: Specifies whether to encrypt the connection. Configure this parameter based on your requirements. See Configure SSL encryption.

    Destination Database

    • Select Existing Connection: If the destination instance is registered with DTS, select it from the drop-down list. DTS automatically fills in the remaining parameters. In the DMS console, select the instance from Select a DMS database instance. If the instance is not registered, configure the following parameters manually.

    • Database Type: Select PolarDB-X 2.0.

    • Access Method: Select Alibaba Cloud Instance.

    • Instance Region: The region where the destination PolarDB-X 2.0 instance resides.

    • Instance ID: The ID of the destination PolarDB-X 2.0 instance.

    • Database Account: The account of the destination instance. See Required permissions.

    • Database Password: The password of the database account.
  3. Click Test Connectivity and Proceed.

    Make sure the CIDR blocks of DTS servers are added to the security settings of your source and destination databases. See Add the CIDR blocks of DTS servers.

Step 3: Configure migration objects

  1. On the Configure Objects page, configure the following parameters.

    Note the following:

    • If you do not select Schema Migration, create the destination databases and tables beforehand and enable object name mapping in the Selected Objects section.

    • If you do not select Incremental Data Migration, do not write to the source database during migration. Otherwise, the data becomes inconsistent.

    • Renaming objects by using the object name mapping feature may cause dependent objects to fail to migrate.

    • Migration Types: Select the migration types based on your needs.
      - For a one-time migration, select Schema Migration and Full Data Migration.
      - For continuous synchronization with minimal downtime (recommended), select Schema Migration, Full Data Migration, and Incremental Data Migration.

    • Processing Mode of Conflicting Tables:
      - Precheck and Report Errors (recommended): checks whether the destination database contains tables that have the same names as tables in the source database. If name conflicts exist, the precheck fails. If the conflicting destination tables cannot be deleted or renamed, use the object name mapping feature to rename the tables to be migrated. See Map object names.
      - Ignore Errors and Proceed: skips the name conflict precheck. Use with caution. During full data migration, DTS skips records that conflict with existing destination records. During incremental data migration, DTS overwrites conflicting destination records. If the schemas differ, only specific columns may be migrated or the task may fail.

    • Source Objects: Select objects in the Source Objects section and click the rightwards arrow icon to add them to the Selected Objects section. You can select columns, tables, or schemas as the objects to be migrated. If you select tables or columns, other object types such as views, triggers, and stored procedures are not migrated.

    • Selected Objects:
      - To rename a single object, right-click the object. See Map the name of a single object.
      - To rename multiple objects at a time, click Batch Edit. See Map multiple object names at a time.
      - To filter rows, right-click an object and specify WHERE conditions. See Specify filter conditions.
      - To migrate only specific SQL operations for a table, right-click the object and select the operations.
  2. Click Next: Advanced Settings and configure the following parameters.

    Note: If multiple tasks share the same source or destination database and have different retry time settings, the most recently configured value takes effect. DTS charges for instance usage during retry periods. We recommend that you set the retry time based on your business requirements and release the DTS instance promptly after the migration is complete.

    • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks to the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.

    • Retry Time for Failed Connections: The period during which DTS retries failed connections after the task starts. Valid values: 10 to 1,440 minutes. Default: 720 minutes. We recommend that you set this parameter to at least 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, it fails.

    • Retry Time for Other Issues: The period during which DTS retries failed DDL or DML operations. Valid values: 1 to 1,440 minutes. Default: 10 minutes. We recommend that you set this parameter to at least 10 minutes. This value must be smaller than the value of Retry Time for Failed Connections.

    • Enable Throttling for Full Data Migration: Limits the read and write load during full data migration. If you enable throttling, configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.

    • Enable Throttling for Incremental Data Migration: Limits the load during incremental data migration. If you enable throttling, configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.

    • Environment Tag: An optional tag that identifies the DTS instance.

    • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Specifies whether DTS writes heartbeat table operations to the source database. Yes: DTS does not write heartbeat operations, and a latency display may appear. No: DTS writes heartbeat operations, which may affect physical backup and cloning of the source database.

    • Configure ETL: Specifies whether to enable extract, transform, and load (ETL). Yes: opens the ETL code editor. See Configure ETL in a data migration or data synchronization task and What is ETL?. No: disables ETL.

    • Monitoring and Alerting: Specifies whether to receive alerts for task failures or latency thresholds. Yes: configure the alert thresholds and notification settings. See Configure monitoring and alerting when you create a DTS task. No: no alerts are sent.
  3. Click Next Step: Data Verification to set up a data verification task. See Configure a data verification task.

Step 4: Run the precheck

  • To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

  • Click Next: Save Task Settings and Precheck to save settings and start the precheck.

DTS runs a precheck before starting the migration. The task starts only after the precheck passes.

If the precheck fails:

  • Click View Details next to any failed item, review the cause, fix the issue, and click Precheck Again.

  • For alert items that can be ignored: click Confirm Alert Details > View Details > Ignore > OK, then click Precheck Again.

Ignoring alert items may result in data inconsistency or other risks.

Step 5: Purchase and start the migration instance

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following parameters.

    • Resource Group: The resource group to which the migration instance belongs. Default: the default resource group. See What is Resource Management?

    • Instance Class: The instance class determines the migration speed. Select a class based on your workload. See Instance classes of data migration instances.
  3. Read and agree to Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Monitor the migration task

After starting the task, monitor its progress on the Data Migration page.

  • Schema migration and full data migration only: The task stops automatically when it is complete, and the Status column shows Completed.

  • Incremental data migration included: The task runs continuously and does not stop automatically. The Status column shows Running.

What's next