Data Transmission Service:Migrate data between PolarDB for MySQL clusters

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data between PolarDB for MySQL clusters with minimal downtime. DTS supports three migration types—schema migration, full data migration, and incremental data migration—that you can combine based on whether your application needs to stay online during migration.

PolarDB for MySQL clusters cannot be upgraded directly to version 8.0. To upgrade, create a new version 8.0 cluster and migrate your data to it. Before migrating across major versions, create a pay-as-you-go cluster to test compatibility, then release it after testing.

Choose a migration strategy

Select a combination of migration types based on your requirements:

| Goal | Migration types to select | Downtime required |
| --- | --- | --- |
| Migrate data with the application offline | Schema migration + Full data migration | Yes. Stop writes to the source before starting. |
| Migrate data with the application staying online | Schema migration + Full data migration + Incremental data migration | No. Incremental migration keeps the destination in sync. |

Migration types

Schema migration

DTS migrates the schemas of tables, views, triggers, stored procedures, and stored functions from the source cluster to the destination cluster.

  • DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and stored functions.

  • DTS does not migrate user information. Grant read and write permissions to the INVOKER to call views, stored procedures, or stored functions on the destination cluster.

  • Foreign keys are migrated during schema migration. During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade or delete operations on the source during migration, data inconsistency may occur.
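Because DTS does not migrate user accounts, the account that invokes views, stored procedures, or stored functions on the destination needs explicit grants. A minimal sketch, assuming a hypothetical invoking account `app_user` and database `mydb` (replace with your own names):

```sql
-- Hypothetical names: replace 'app_user', '%', and mydb with your own.
-- Grants the invoker the read and write access it needs on the destination cluster.
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'app_user'@'%';
```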

Full data migration

DTS migrates all existing data from the source cluster to the destination cluster. Concurrent INSERT operations during this phase cause table fragmentation, so the destination tablespace will be larger than the source after migration completes.

Incremental data migration

After full data migration completes, DTS continuously applies changes from the source to the destination. This keeps both clusters in sync and lets your application stay online throughout the migration.

SQL operations supported in incremental migration

| Operation type | Supported SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE |
Important

RENAME TABLE operations can cause data inconsistency. If you rename a table during migration and the task scope is that table (not its database), DTS stops migrating data for that table. To avoid this, select the database as the migration object instead of individual tables, and make sure both the pre-rename and post-rename database names are included in the selected objects.

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration + Full data migration | Free | Charged when migrating from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when migrating from Alibaba Cloud over the Internet. See Billing overview. |

Limitations

Source database limitations

| Limitation | Detail | Workaround |
| --- | --- | --- |
| Outbound bandwidth | The source server must have enough outbound bandwidth. Low bandwidth reduces migration speed. | Upgrade bandwidth or enable throttling. |
| Primary key or unique key | Tables being migrated must have a PRIMARY KEY or UNIQUE constraint, with all fields unique. Without this, the destination may contain duplicate records. | Add a primary key or unique key to each table before migrating. |
| Table count per task (with renaming) | If you select tables (not databases) as migration objects and rename tables or columns in the destination, a single task supports up to 1,000 tables. More than 1,000 tables causes a request error. | Split the migration across multiple tasks, or select the entire database as the migration object. |
| Binary logging (incremental migration) | Binary logging must be enabled and loose_polar_log_bin must be set to on. If not configured, the precheck fails and the task cannot start. Enabling binary logging incurs storage charges for log files. | See Enable binary logging and Modify parameters. |
| Binary log retention | For incremental-only tasks, retain binary logs for more than 24 hours. For full + incremental tasks, retain binary logs for at least 7 days. After full migration completes, you can reduce retention to more than 24 hours. Insufficient retention may cause task failure or data loss; DTS Service Level Agreement (SLA) guarantees do not apply if this requirement is not met. | Set the log retention period before starting the migration task. |
| DDL during schema + full migration | Do not run DDL operations that change database or table schemas during schema migration or full data migration. Such operations cause the task to fail. | Schedule schema changes outside the migration window. |
| Writes during full-only migration | Do not write data to the source database during a full-data-migration-only task. This can cause data inconsistency between source and destination. | Add incremental data migration to keep both clusters in sync, or stop application writes before starting. |
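To find tables that would violate the primary key or unique key requirement above, you can query information_schema on the source before starting. A sketch using standard MySQL system tables (adjust the excluded system schemas as needed):

```sql
-- Lists base tables that have neither a PRIMARY KEY nor a UNIQUE constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema')
  AND c.constraint_type IS NULL;
```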

Other limitations

| Limitation | Detail |
| --- | --- |
| MySQL version | Use the same MySQL version for source and destination clusters to avoid compatibility issues. |
| Read-only nodes | Read-only nodes on the source cluster cannot be migrated. |
| Migration timing | Full data migration uses read and write resources on both clusters and may increase server load. Assess the performance impact before starting and run migrations during off-peak hours. |
| FLOAT and DOUBLE precision | DTS retrieves FLOAT and DOUBLE values using ROUND(COLUMN, PRECISION). If you do not specify a precision, DTS defaults to 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these defaults meet your requirements. |
| Failed task resumption | DTS automatically retries failed tasks for up to 7 days. Stop or release the DTS task before switching your application to the destination cluster. Alternatively, run REVOKE to remove DTS write permissions on the destination. If a failed task resumes after you switch over, the source data overwrites the destination data. |
| Online DDL tools | Do not use pt-online-schema-change for online DDL operations on the source. This causes the DTS task to fail. Use DMS or gh-ost instead. |
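To preview how the ROUND-based retrieval described above affects your FLOAT and DOUBLE values, you can run the equivalent query yourself. A sketch with placeholder table and column names:

```sql
-- my_table, f_col, and d_col are placeholders; 38 and 308 are the DTS defaults.
SELECT f_col,
       ROUND(f_col, 38)  AS float_as_migrated,
       d_col,
       ROUND(d_col, 308) AS double_as_migrated
FROM my_table;
```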

Required permissions

The permissions required depend on the migration types you select.

| Database | Full data migration only | Full data migration + Incremental data migration |
| --- | --- | --- |
| Source PolarDB for MySQL cluster | Read permissions on the objects to be migrated | Read permissions on the objects to be migrated |
| Destination PolarDB for MySQL cluster | Read and write permissions on the destination database | Read and write permissions on the destination database |
Use a privileged account for the destination cluster. For instructions on creating a database account, see Create a database account.

For incremental data migration, the source database account also needs binary logging access. Make sure binary logging is enabled and loose_polar_log_bin is set to on before starting.
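You can confirm on the source cluster that binary logging is active before configuring the task. A sketch using a standard MySQL status check (in PolarDB, loose_polar_log_bin itself is set in the console parameter list):

```sql
-- Should report log_bin = ON once loose_polar_log_bin is set to on.
SHOW VARIABLES LIKE 'log_bin';
```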

Migrate data between PolarDB for MySQL clusters

Prerequisites

Before you begin, make sure that you have:

  • Created the destination PolarDB for MySQL cluster.

  • A privileged account on the destination cluster. See Create a database account.

  • Enabled binary logging on the source cluster with loose_polar_log_bin set to on, if you plan to use incremental data migration.

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose Data Transmission Service (DTS) > Data Migration.

DMS console navigation varies by mode and layout. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Migration Tasks page in the new DTS console.

Step 2: Select a region

From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.

In the new DTS console, select the region in the upper-left corner.

Step 3: Configure source and destination databases

Click Create Task. On the Create Task page, configure the following parameters.

Warning

After configuring the source and destination, read the Limitations displayed at the top of the page before proceeding. Skipping this may cause the task to fail or data inconsistency.

Task settings

| Parameter | Description |
| --- | --- |
| Task Name | A name for the task. DTS assigns a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |

Source database

| Parameter | Description |
| --- | --- |
| Select an existing DMS database instance | Select an existing instance to have DTS auto-fill its parameters, or leave blank and configure parameters manually. |
| Database Type | Select PolarDB for MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the source PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the source PolarDB for MySQL cluster. |
| Database Account | The account for the source cluster. See Required permissions. |
| Database Password | The password for the database account. |
| Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption. |

Destination database

| Parameter | Description |
| --- | --- |
| Select an existing DMS database instance | Select an existing instance to have DTS auto-fill its parameters, or leave blank and configure parameters manually. |
| Database Type | Select PolarDB for MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the destination PolarDB for MySQL cluster. |
| Database Account | The account for the destination cluster. Use a privileged account. See Required permissions. |
| Database Password | The password for the database account. |
| Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption. |

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB) and to the security group rules of Elastic Compute Service (ECS) instances hosting self-managed databases. For self-managed databases spread across multiple ECS instances, add DTS CIDR blocks to each instance's security group rules manually. For on-premises databases or databases from third-party cloud providers, add DTS CIDR blocks to the database IP whitelist manually. For the full list of CIDR blocks, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.

Warning

Adding DTS server CIDR blocks to IP whitelists or security groups creates security exposure. Before proceeding, take preventive measures including: strengthening account and password security, restricting exposed ports, authenticating API calls, auditing whitelist and security group rules regularly, and connecting DTS to your database over Express Connect, VPN Gateway, or Smart Access Gateway instead of the public Internet.

Step 5: Select migration objects and types

Migration types

| Parameter | Description |
| --- | --- |
| Migration Types | Select Schema Migration and Full Data Migration for an offline migration. Add Incremental Data Migration to keep the application online during migration. |
| Method to Migrate Triggers in Source Database | The method for migrating triggers. This parameter appears only when Schema Migration is selected. See Synchronize or migrate triggers from the source database. |

Conflict handling

| Option | Behavior |
| --- | --- |
| Precheck and Report Errors | Before migration starts, DTS checks whether the destination has tables with the same names as the source. If name conflicts exist, the precheck fails and the task does not start. Use the object name mapping feature to rename objects if needed. See Map object names. |
| Ignore Errors and Proceed | Skips the name-conflict precheck. If the source and destination have identical schemas, DTS skips rows with matching primary key values. If schemas differ, only specific columns are migrated or the task fails. Use with caution. |

Select objects

In the Source Objects section, select the objects to migrate, then click the rightwards arrow icon to move them to Selected Objects. You can select columns, tables, or schemas. Selecting tables or columns excludes views, triggers, and stored procedures from migration.

In Selected Objects, you can rename objects in the destination by using the object name mapping feature. See Map object names. Renaming an object may cause dependent objects, such as views or stored procedures, to fail during migration.

Step 6: Configure advanced settings

Click Next: Advanced Settings, then configure the following.

Data verification

To verify data consistency between source and destination, configure the data verification feature. See Enable data verification.

Advanced settings

| Parameter | Description |
| --- | --- |
| Select the dedicated cluster used to schedule the task | By default, DTS uses the shared cluster. Purchase a dedicated cluster to isolate task resources. See What is a DTS dedicated cluster? |
| Set Alerts | Configure alerting for task failures or latency threshold breaches. Select Yes to specify alert thresholds and contacts. See Configure monitoring and alerting. |
| Select the engine type of the destination database | The storage engine for the destination cluster: InnoDB (default) or X-Engine (an online transaction processing (OLTP) engine). |
| Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database. | Controls whether DTS migrates temporary tables generated by online DDL tools. Important: pt-online-schema-change is not supported; using it causes the task to fail. Options: Yes (migrate temporary table data; may extend migration time); No, Adapt to DMS Online DDL (skip temporary tables and migrate only the original DDL from DMS; destination tables may be locked); No, Adapt to gh-ost (skip temporary tables and migrate only the original DDL from gh-ost; destination tables may be locked; supports custom regular expressions to filter shadow tables). |
| Retry Time for Failed Connections | How long DTS retries failed connections after the task starts. Range: 10 to 1,440 minutes. Default: 720. Set to at least 30 minutes. If multiple tasks share the same source or destination, the shortest retry time among them takes precedence. DTS charges for the instance during retry periods. |
| The wait time before a retry when other issues occur in the source and destination databases. | How long DTS retries failed DDL or DML operations. Range: 1 to 1,440 minutes. Default: 10. Set to at least 10 minutes. This value must be less than the Retry Time for Failed Connections value. |
| Enable Throttling for Full Data Migration | Limit read/write load on source and destination during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Appears only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limit load during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Appears only when Incremental Data Migration is selected. |
| Environment Tag | A tag to identify the DTS instance. Select based on your requirements. |
| Configure ETL | Whether to apply extract, transform, and load (ETL) transformations. Select Yes to enter data processing statements. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Controls whether DTS writes to heartbeat tables in the source database. Yes: DTS does not write to heartbeat tables; migration latency may appear in the console. No: DTS writes to heartbeat tables; physical backup and cloning of the source database may be affected. |

Step 7: Save settings and run the precheck

Click Next: Save Task Settings and Precheck.

Before clicking, you can hover over this button and click Preview OpenAPI parameters to view the API parameters for this task configuration.

DTS runs a precheck before the task starts. If the precheck fails:

  • Click View Details next to the failed item, fix the reported issue, then click Precheck Again.

  • If a precheck item raises an alert that can be ignored: click Confirm Alert Details, then in the View Details dialog box click Ignore > OK > Precheck Again. Ignoring alerts may result in data inconsistency.

Step 8: Wait for the precheck to pass

Wait until the success rate reaches 100%, then click Next: Purchase Instance.

Step 9: Select an instance class

On the Purchase Instance page, configure the instance for the migration task.

| Parameter | Description |
| --- | --- |
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines migration speed. Select based on your data volume and time requirements. See Specifications of data migration instances. |

Step 10: Accept the service terms

Read and accept Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

Step 11: Start the task

Click Buy and Start. Monitor progress in the task list.

Post-migration tasks

After the migration task completes:

  1. Verify data consistency: Use the data verification feature to confirm data integrity between source and destination. See Enable data verification.

  2. Stop or release the DTS task: Before or immediately after switching your application, stop or release the migration task. DTS automatically retries failed tasks for up to 7 days — if a resumed task runs after you switch over, it overwrites destination data with source data. Alternatively, run REVOKE to remove DTS write permissions from the destination as an additional safeguard.

  3. Switch your application: Update connection strings in your application to point to the destination cluster.

  4. Clean up: If you created a pay-as-you-go cluster for version compatibility testing, release it now.
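The REVOKE safeguard in step 2 can be sketched as follows, assuming a hypothetical DTS account name `dts_user` and destination database `mydb` (replace with your own names):

```sql
-- Hypothetical names: replace 'dts_user', '%', and mydb with your own.
-- Removes the write permissions a resumed DTS task would need on the destination.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_user'@'%';
```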

Usage notes

DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log position.