
PolarDB: Migrate data between PolarDB for MySQL clusters

Last Updated:Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data between PolarDB for MySQL clusters. DTS supports schema migration, full data migration, and incremental data migration, letting you move data with minimal downtime.

Note

PolarDB for MySQL does not support in-place upgrades to version 8.0. To upgrade, create a new version 8.0 cluster and migrate your data to it. Before migrating across versions, create a pay-as-you-go cluster to test compatibility, then release it after testing.

Prerequisites

Before you begin, ensure that the source and destination PolarDB for MySQL clusters are created, and that the available storage of the destination cluster is larger than the total size of the data in the source cluster.

Choose a migration type

Select migration types based on your scenario:

Scenario | Migration types to select | Notes
One-time data move with planned downtime | Schema migration + Full data migration | Stop writes to the source during migration to avoid inconsistency.
Live migration with minimal service interruption | Schema migration + Full data migration + Incremental data migration | DTS syncs ongoing changes after the full migration completes.

Billing

Migration type | Instance configuration fee | Internet traffic fee
Schema migration + Full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview.
Incremental data migration | Charged. See Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview.

Migration types

Schema migration

DTS migrates the schemas of selected objects from the source to the destination database. Supported object types: tables, views, triggers, stored procedures, and stored functions.

During schema migration, DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions. DTS does not migrate user information. Grant read and write permissions to INVOKER before calling views, stored procedures, or functions on the destination database.
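For example, after migration the account that invokes a view, stored procedure, or function needs privileges on the objects it references. A minimal sketch, assuming a placeholder invoking account app_user and database mydb (both names are illustrative):

```sql
-- Grant the invoking account read/write access to the base tables
-- referenced by migrated views, stored procedures, and functions.
-- 'app_user' and 'mydb' are placeholder names.
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'app_user'@'%';

-- Allow the account to call migrated stored procedures and functions.
GRANT EXECUTE ON mydb.* TO 'app_user'@'%';
```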

Full data migration

DTS migrates historical data of the selected objects from the source to the destination database.

After a full migration completes, the used tablespace on the destination database will be larger than on the source. This is caused by fragmentation from concurrent INSERT operations during migration.
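To gauge fragmentation on the destination after a full migration, you can compare allocated and reclaimable space per table. A sketch using standard information_schema columns; the schema and table names are placeholders:

```sql
-- Allocated vs. reclaimable space per table (values in MB).
SELECT table_name,
       ROUND(data_length / 1024 / 1024, 1) AS data_mb,
       ROUND(data_free / 1024 / 1024, 1)   AS free_mb
FROM information_schema.tables
WHERE table_schema = 'mydb'      -- placeholder schema name
ORDER BY data_free DESC;

-- Optionally rebuild a heavily fragmented table to reclaim space
-- (this rewrites the table and briefly locks it):
OPTIMIZE TABLE mydb.orders;      -- placeholder table name
```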

Incremental data migration

After full data migration completes, DTS migrates ongoing changes from the source to the destination database. This keeps the source database available during migration and lets you cut over with minimal downtime.

SQL operations supported for incremental migration

Operation type | SQL statements
DML | INSERT, UPDATE, DELETE
DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE
Important

RENAME TABLE operations may cause data inconsistency. If a selected table is renamed during migration, its data may not be migrated to the destination. To prevent this, select the entire database (not individual tables) as the migration object, and make sure both the pre- and post-rename database names are included in the selected objects.

Permissions required

Database | Required permissions
Source PolarDB for MySQL cluster | Read permissions on the objects to be migrated
Destination PolarDB for MySQL cluster | Read/write permissions on the destination database

For instructions on creating database accounts, see Create and manage a database account.

Limits

Foreign key behavior

  • During schema migration, DTS migrates foreign keys from the source to the destination database.

  • During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade update or delete operations on the source during migration, data inconsistency may occur.

Source database requirements

  • Bandwidth: The source database server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • Primary keys: Each table to be migrated must have a PRIMARY KEY or UNIQUE constraint, and the constrained fields must be unique. Without such a constraint, duplicate records may appear on the destination.

  • Table count limit: If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, either split the work across multiple tasks or migrate the entire database.

  • Binary logging for incremental migration: Binary logging must be enabled and the loose_polar_log_bin parameter must be set to on. Otherwise, the precheck fails and the task cannot start. See Enable binary logging and Modify parameters. Note that enabling binary logging incurs storage charges. Set the binary log retention period as follows; otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance:

    • Incremental migration only: retain binary logs for more than 24 hours.

    • Full + incremental migration: retain binary logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.

  • Schema and DDL changes: Do not run DDL statements that modify database or table schemas during schema migration or full data migration. This causes the task to fail.

  • Writes during full-only migration: If you run only full data migration (without incremental), do not write to the source database during migration. Writes cause data inconsistency between source and destination.
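Two of the requirements above can be checked directly on the source before you configure the task. The queries below are a sketch that relies only on information_schema and SHOW VARIABLES; schema names excluded from the check are the standard system schemas:

```sql
-- 1. Tables without a PRIMARY KEY or UNIQUE constraint; these may
--    produce duplicate records on the destination.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN
      ('mysql', 'sys', 'information_schema', 'performance_schema')
  AND c.constraint_name IS NULL;

-- 2. Confirm that binary logging is enabled before incremental migration.
SHOW VARIABLES LIKE '%log_bin%';
```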

Other limits

  • Cluster versions: Source and destination clusters must be the same version.

  • Read-only nodes: DTS does not migrate read-only nodes from the source cluster.

  • OSS external tables: DTS does not migrate Object Storage Service (OSS) external tables from the source cluster.

  • Two-way synchronization: You cannot migrate data from a cluster that has DTS two-way synchronization configured.

  • FLOAT and DOUBLE precision: DTS uses the ROUND(COLUMN,PRECISION) function to retrieve values from FLOAT and DOUBLE columns. Default precision: 38 digits for FLOAT, 308 digits for DOUBLE. Verify these settings meet your requirements before starting the migration.

  • Task auto-resume: DTS tries to resume failed tasks for up to 7 days. Before switching workloads to the destination cluster, stop or release the migration task, or run the revoke command to remove write permissions from the DTS accounts on the destination cluster. Otherwise, a resumed task may overwrite destination data with source data.

  • Failed DDL statements: If a DDL statement fails to execute on the destination, the task continues running. View failed DDL statements in the task logs. See View task logs.

  • DTS support intervention: If a DTS task fails, DTS technical support attempts to restore it within 8 hours. The task may be restarted and task-level parameters may be modified. Database parameters are not modified. For the parameters that may be modified, see the Modify instance parameters section of the Modify the parameters of a DTS instance topic.

  • Performance impact: Full data migration uses read and write resources on both source and destination databases, which increases load. Run migrations during off-peak hours.
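The write-permission revocation recommended above (to stop a resumed task from overwriting destination data) can be done with a statement like the following; dts_user and mydb are placeholders for the account used by your migration task and the migrated database:

```sql
-- Remove the DTS account's write access on the destination so that a
-- resumed task cannot overwrite data written after cutover.
-- 'dts_user' and 'mydb' are placeholder names.
REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER
    ON mydb.* FROM 'dts_user'@'%';
```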

Special cases

DTS periodically executes the CREATE DATABASE IF NOT EXISTS `test` statement on the source database to move the binary log file position forward.

Migrate data between PolarDB for MySQL clusters

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, hover over DTS and choose DTS (DTS) > Data Migration.

    Note

    Depending on the DMS console mode and layout, the steps may differ. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration page of the new DTS console.

  3. From the drop-down list on the right side of Data Migration Tasks, select the region where your migration instance resides.

    Note

    In the new DTS console, select the region in the upper-left corner.

Step 2: Create a task

Click Create Task.

Step 3: Switch to the new configuration page (optional)

In the upper-right corner, click New Configuration Page if it is available.

Note

Skip this step if the Back to Previous Version button is already displayed, which means you are already on the new configuration page. The new configuration page is recommended — some parameters differ between versions.

Step 4: Configure the source and destination databases

Warning

After configuring the source and destination databases, review the Limits displayed at the top of the page before proceeding. Skipping this step may cause task failures or data inconsistency.

Configure the parameters as described in the following table.

Task information

Parameter | Description
Task Name | The name of the task. DTS generates a name automatically. Specify a descriptive name to identify the task; unique names are not required.

Source Database

Parameter | Description
Select a DMS database instance | Select an existing instance or configure the database manually. If you select an existing instance, DTS populates the parameters automatically. To register a database, see Register an Alibaba Cloud database instance or Register a database hosted on a third-party cloud service or a self-managed database. In the DTS console, manage connections on the Database Connections page. See Manage database connections.
Database Type | Select PolarDB for MySQL.
Connection Type | Select Alibaba Cloud Instance.
Instance Region | The region where the source PolarDB for MySQL cluster resides.
PolarDB Cluster ID | The ID of the source cluster.
Database Account | The database account for the source cluster. See the Permissions required section for the required permissions.
Database Password | The password for the database account.
Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption.

Destination Database

Parameter | Description
Select a DMS database instance | Select an existing instance or configure the database manually. If you select an existing instance, DTS populates the parameters automatically. To register a database, see Register an Alibaba Cloud database instance or Register a database hosted on a third-party cloud service or a self-managed database. In the DTS console, manage connections on the Database Connections page. See Manage database connections.
Database Type | Select PolarDB for MySQL.
Connection Type | Select Alibaba Cloud Instance.
Instance Region | The region where the destination PolarDB for MySQL cluster resides.
PolarDB Cluster ID | The ID of the destination cluster.
Database Account | The database account for the destination cluster. See the Permissions required section. Use a privileged account.
Database Password | The password for the database account.
Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption.

Step 5: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances. For self-managed databases on Elastic Compute Service (ECS), DTS adds its CIDR blocks to the ECS security group. If the self-managed database spans multiple ECS instances, manually add the CIDR blocks to each instance's security group. For databases hosted in a data center or on a third-party cloud, manually add the DTS server CIDR blocks to the database whitelist. See CIDR blocks of DTS servers.

Warning

Adding DTS server CIDR blocks to a database whitelist or ECS security group introduces security risks. Before using DTS, understand and accept these risks, and take precautions such as: using strong credentials, limiting exposed ports, authenticating API calls, auditing whitelist and security group rules regularly, and connecting through Express Connect, VPN Gateway, or Smart Access Gateway.

Step 6: Configure migration objects

On the Configure Objects page, set the following parameters.

Migration types and objects

Parameter | Description
Migration Types | Select the migration types for your scenario. See the Choose a migration type section.
Method to Migrate Triggers in Source Database | The method for migrating triggers. Configure based on your requirements. Available only when Migration Types includes Schema Migration. See Synchronize or migrate triggers from the source database.
Processing Mode of Conflicting Tables | How DTS handles tables in the destination that share names with source tables. Precheck and Report Errors: the precheck fails if conflicting table names exist. To proceed, rename the conflicting tables by using the object name mapping feature. See Map object names. Ignore Errors and Proceed: skips the precheck. If schemas match, full migration skips conflicting records and incremental migration overwrites them. If schemas differ, only specific columns are migrated or the task fails. Use with caution.
Source Objects | Select objects from the Source Objects list and click the rightwards arrow icon to add them to Selected Objects. You can select columns, tables, or schemas. If you select tables or columns, views, triggers, and stored procedures are not migrated.
Selected Objects | Right-click an object to rename it, or click Batch Edit to rename multiple objects. Object name mapping may cause dependent objects to fail to migrate. Right-click a table to add WHERE filter conditions (see Specify filter conditions) or to select the SQL operations to migrate.
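A filter condition for a selected table is written like the body of a WHERE clause, without the WHERE keyword. An illustrative condition (the column name is a placeholder) that limits migration to rows created after a cutoff:

```sql
-- Entered in the DTS filter-condition dialog; no WHERE keyword.
gmt_created > '2024-01-01 00:00:00'
```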

Advanced settings

Click Next: Advanced Settings to configure the following parameters.

Parameter | Description
Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. Purchase a dedicated cluster for predictable performance. See What is a DTS dedicated cluster.
Select the engine type of the destination database | The storage engine for the destination database: InnoDB (default) or X-Engine (for online transaction processing (OLTP) workloads).
Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database | Applies when you use DMS or gh-ost to perform online DDL operations on the source. Yes: migrates the data of temporary tables, which may introduce latency if they are large. No, Adapt to DMS Online DDL: migrates only the original DDL; destination tables may be locked. No, Adapt to gh-ost: migrates only the original DDL and uses custom or default regular expressions to filter out shadow tables; destination tables may be locked. Do not use pt-online-schema-change; it causes the DTS task to fail.
Whether to Migrate Accounts | Whether to migrate account information from the source. If you select Yes, select the accounts and verify the permissions of the accounts used in the migration task. See Migrate database accounts.
Retry Time for Failed Connections | How long DTS retries failed connections after a task starts. Range: 10–1,440 minutes. Default: 720 minutes. Set a value greater than 30 minutes. If the connection is restored within this period, DTS resumes the task; otherwise, the task fails. If multiple tasks share a source or destination database, the most recently set value takes precedence. Note that retry time counts toward billing.
Retry Time for Other Issues | How long DTS retries failed DDL or DML operations after a task starts. Range: 1–1,440 minutes. Default: 10 minutes. Set a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections.
Enable Throttling for Full Data Migration | Whether to throttle full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce load on the destination server. Available only when Full Data Migration is selected.
Enable Throttling for Incremental Data Migration | Whether to throttle incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.
Environment Tag | An optional tag that identifies the cluster environment.
Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes: DTS does not write heartbeat SQL operations to the source, and task latency may appear elevated. No: DTS writes heartbeat SQL operations, which may affect features such as physical backup and cloning of the source database.
Configure ETL | Whether to enable the extract, transform, and load (ETL) feature. Yes: configure data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No: skip ETL. See What is ETL?
Monitoring and Alerting | Whether to configure alerts for the migration task. Yes: set an alert threshold and notification contacts. See Configure monitoring and alerting when you create a DTS task. No: no alerts are configured.

Data verification

Click Next Step: Data Verification to configure data verification. See Configure a data verification task.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

The task cannot start until it passes the precheck.

  • If a precheck item fails, click View Details next to the item, fix the issue, and click Precheck Again.

  • If a precheck item triggers an alert:

    • If the alert cannot be ignored, fix the issue and rerun the precheck.

    • If the alert can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alerts may result in data inconsistency.

Step 8: Purchase a DTS instance

Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

On the Purchase Instance page, configure the following parameters:

Parameter | Description
Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management?
Instance Class | The instance class determines migration speed. Select a class based on your data volume and time requirements. See Instance classes of data migration instances.

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.

Track migration progress on the Data Migration page.