
PolarDB:Migrate data from a PolarDB for MySQL cluster to an ApsaraDB RDS for MySQL instance

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB for MySQL cluster to an ApsaraDB RDS for MySQL instance. DTS supports schema migration, full data migration, and incremental data migration, so you can migrate with zero or minimal downtime depending on your requirements.

The same procedure applies if you migrate to a self-managed MySQL database instead of an ApsaraDB RDS for MySQL instance. Supported self-managed destinations include databases with a public IP address, databases hosted on Elastic Compute Service (ECS), and databases connected over Express Connect, VPN Gateway, Smart Access Gateway, or Database Gateway.

Choose a migration strategy

Select a migration strategy before you configure the task.

Goal | Migration types to select | Downtime
--- | --- | ---
Migrate with minimal downtime (recommended) | Schema migration + full data migration + incremental data migration | Near-zero: switch over after incremental migration catches up
Migrate during a maintenance window | Schema migration + full data migration | Required: stop writes to the source before migration starts

How incremental migration reduces downtime: After full migration completes, DTS continuously replicates changes from the source to the destination. When the replication latency approaches 0, switch your applications to the destination. Your source database remains available throughout.

Prerequisites

Before you begin, make sure that the destination ApsaraDB RDS for MySQL instance is created and that its available storage space is larger than the total size of the data in the source PolarDB for MySQL cluster.

Billing

Migration type | Instance configuration fee | Internet traffic fee
--- | --- | ---
Schema migration + full data migration | Free | Charged only when migrating from Alibaba Cloud over the Internet. See Billing overview.
Incremental data migration | Charged. See Billing overview. | Charged only when migrating from Alibaba Cloud over the Internet. See Billing overview.

Permissions required for database accounts

Database | Required permissions | Reference
--- | --- | ---
PolarDB for MySQL cluster (source) | Read permissions on the objects to be migrated | Create and manage a database account
ApsaraDB RDS for MySQL instance (destination) | Read and write permissions on the objects to be migrated | Create an account

Limitations

Source database requirements

  • Tables to be migrated must have a PRIMARY KEY or a UNIQUE constraint, and all of the constrained fields must be unique. Otherwise, the destination database may contain duplicate records.

  • The source database server must have sufficient outbound bandwidth. Low bandwidth reduces migration speed.

  • When migrating with object renaming (renaming tables or columns in the destination), a single task supports up to 1,000 tables. For more than 1,000 tables, either split into multiple tasks or migrate the entire database.

  • Read-only nodes of the source PolarDB for MySQL cluster cannot be migrated.
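The PRIMARY KEY or UNIQUE requirement above can be audited up front. The following is an illustrative sketch (not part of DTS) that lists the tables in a schema lacking either constraint, using any MySQL DB-API cursor such as one from pymysql; the schema name `mydb` in the docstring is a placeholder.

```python
# Sketch: find tables that lack a PRIMARY KEY or UNIQUE constraint,
# so they can be fixed before the DTS migration task is configured.
MISSING_UNIQUE_KEY_SQL = """
SELECT t.table_name
FROM information_schema.tables t
WHERE t.table_schema = %s
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  )
"""

def tables_without_unique_key(cursor, schema):
    """Return names of tables that risk duplicate rows in the destination.

    `cursor` is a MySQL DB-API cursor, e.g. pymysql_conn.cursor();
    call as tables_without_unique_key(cursor, "mydb").
    """
    cursor.execute(MISSING_UNIQUE_KEY_SQL, (schema,))
    return [row[0] for row in cursor.fetchall()]
```

Any table this query returns should be given a primary key (or migrated with the understanding that duplicates may appear).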

Binary log requirements (incremental migration only)

  • Binary logging must be enabled and loose_polar_log_bin must be set to on. If not configured, the precheck fails and the task cannot start.

  • Binary logs incur storage charges when enabled on a PolarDB for MySQL cluster.

  • Retain binary logs for at least the following minimum periods. If binary logs are purged before DTS reads them, the task fails. After full data migration completes, you can reduce the retention period to more than 24 hours.

    • Incremental-only migration: more than 24 hours

    • Full data + incremental migration: at least 7 days

Important

DTS does not guarantee service reliability or performance if binary log retention does not meet the minimum requirements.
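The retention minimums above can be expressed as a small helper for pre-flight checks. This is an illustrative sketch, not part of DTS; the thresholds are taken directly from the bullets above.

```python
# Minimum binlog retention required by DTS, per the rules above.
FULL_PLUS_INCREMENTAL_HOURS = 7 * 24  # full + incremental: at least 7 days
INCREMENTAL_ONLY_HOURS = 24           # incremental-only: more than 24 hours

def retention_is_sufficient(retention_hours, includes_full_migration):
    """Check a cluster's binlog retention against the DTS minimums.

    includes_full_migration: True for a full + incremental task,
    False for an incremental-only task.
    """
    if includes_full_migration:
        return retention_hours >= FULL_PLUS_INCREMENTAL_HOURS
    return retention_hours > INCREMENTAL_ONLY_HOURS
```

For example, a 72-hour retention passes for an incremental-only task but fails for a task that also performs full data migration.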

Operational restrictions during migration

  • During schema migration and full data migration, do not perform DDL operations on the source database. DDL changes during this phase cause the task to fail.

  • During full-only migration (no incremental), do not write to the source database. Writes cause data inconsistency between source and destination.

  • DTS temporarily disables foreign key constraint checks and cascade operations at the session level during full and incremental migration. If you perform cascade or delete operations on the source during this time, data inconsistency may occur.

Other usage notes

  • Full data migration uses concurrent INSERT operations, which creates table fragmentation in the destination. After migration, the destination tablespace is larger than the source tablespace.

  • DTS uses ROUND(COLUMN, PRECISION) to read FLOAT and DOUBLE columns. Default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these precision settings meet your requirements before migration.

  • DTS retries failed tasks for up to 7 days. Before switching workloads to the destination, stop or release the migration task. Alternatively, run REVOKE to remove write permissions from the DTS accounts on the destination. Otherwise, a resumed failed task overwrites destination data with source data.

  • DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log position.

  • During schema migration, DTS migrates foreign keys from the source database to the destination database. DTS also changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and stored functions. DTS does not migrate user information. To call these objects in the destination, grant the required read and write permissions to the INVOKER.

  • If the source database name is invalid, DTS cannot automatically create the destination database. Manually create it in the ApsaraDB RDS for MySQL instance before configuring the task. See Manage databases.

  • Migrate data during off-peak hours to reduce the impact on source and destination database performance.
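The REVOKE precaution above can be sketched as follows. This is illustrative, not part of DTS: the privilege list is an assumption (match it to what you actually granted to the DTS account), and the account name `dts_account` in the usage is a placeholder. Run it with a cursor connected to the destination instance.

```python
# Sketch: strip write privileges from the DTS account on the destination
# before cutover, so a retried task cannot overwrite destination data.
def revoke_dts_writes(cursor, account, host="%"):
    """Revoke write privileges from the DTS account.

    Adjust the privilege list and scope (*.* vs. a specific database)
    to mirror the grants you originally made for the DTS account.
    """
    stmt = (
        "REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER "
        f"ON *.* FROM '{account}'@'{host}'"
    )
    cursor.execute(stmt)
    return stmt

# Usage (placeholder account name):
#   revoke_dts_writes(destination_cursor, "dts_account")
```

Stopping or releasing the migration task remains the simpler option; this is the fallback when the task must stay in place.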

SQL operations supported for incremental migration

Operation type | Supported statements
--- | ---
DML | INSERT, UPDATE, DELETE
DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE
Important

RENAME TABLE operations may cause data inconsistency. If you rename a table during migration and the table was selected as the migration object (not its parent database), the renamed table's data is not migrated. To avoid this, select the database (not individual tables) as the migration object, and make sure that the databases to which the table belongs before and after the RENAME TABLE operation are both included in the objects to be migrated.

Migration types

Schema migration migrates the schemas of the selected objects from the source to the destination. DTS supports schema migration for the following types of objects: tables, views, triggers, stored procedures, and stored functions.

Full data migration migrates the existing data of the selected objects from the source to the destination. Incremental data migration then replicates changes that are made to the source while the task runs, which keeps the source database available throughout the migration.

Configure and run the migration task

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose DTS > Data Migration.

The navigation may vary depending on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console for details. You can also go directly to the Data Migration Tasks page in the DTS console.

Step 2: Select the region

From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

In the new DTS console, select the region in the upper-left corner instead.

Step 3: Create the task

Click Create Task to open the task configuration page.

Optional: If the Back to Previous Version button is not visible in the upper-right corner, click New Configuration Page to switch to the latest configuration UI. The new page is recommended because some parameters differ between versions.

Step 4: Configure source and destination databases

Warning

After configuring the source and destination databases, read the Limits displayed at the top of the page before proceeding. Skipping this may cause the task to fail or result in data inconsistency.

Configure the following parameters:

Source database (PolarDB for MySQL)

  • Task Name: Enter a descriptive name. Names don't need to be unique.
  • Select a DMS database instance: Select an existing registered instance, or configure a new connection. If you select an existing instance, DTS auto-populates the remaining parameters.
  • Database Type: Select PolarDB for MySQL.
  • Connection Type: Select Alibaba Cloud Instance.
  • Instance Region: Select the region where the source cluster resides.
  • Replicate Data Across Alibaba Cloud Accounts: Select No (same account).
  • PolarDB Cluster ID: Select the source PolarDB for MySQL cluster.
  • Database Account: Enter the database account. See Permissions required for database accounts.
  • Database Password: Enter the password for the account.
  • Encryption: Configure based on your requirements. See Configure SSL encryption.

Destination database (ApsaraDB RDS for MySQL)

  • Select a DMS database instance: Select an existing registered instance, or configure a new connection.
  • Database Type: Select MySQL.
  • Connection Type: Select Alibaba Cloud Instance.
  • Instance Region: Select the region where the destination instance resides.
  • Replicate Data Across Alibaba Cloud Accounts: Select No (same account).
  • RDS Instance ID: Select the destination ApsaraDB RDS for MySQL instance.
  • Database Account: Enter the database account. See Permissions required for database accounts.
  • Database Password: Enter the password for the account.
  • Encryption: Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, enable SSL encryption on the RDS instance first. See Configure the SSL encryption feature.

To register a database with DMS, click Create Template in the DMS console. To register with DTS, go to the Database Connections page. For registration instructions, see Register an Alibaba Cloud database instance. For self-managed or third-party cloud databases, see Register a database hosted on a third-party cloud service or a self-managed database. To manage DTS database connections, see Manage database connections.

Step 5: Test connectivity

Click Test Connectivity and Proceed at the bottom of the page.

DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances and to the security group rules of ECS-hosted databases. For self-managed databases in data centers or on third-party cloud providers, manually add the DTS server CIDR blocks to the database whitelist. See Add the CIDR blocks of DTS servers.

Warning

Adding DTS server CIDR blocks to whitelists or security group rules introduces security exposure. Before proceeding, take protective measures such as: using strong credentials, limiting exposed ports, authenticating API calls, regularly auditing whitelist rules, and removing unauthorized CIDR blocks. Alternatively, connect the database to DTS over Express Connect, VPN Gateway, or Smart Access Gateway.

Step 6: Select objects to migrate

On the Select Objects page, configure the migration scope.

Migration types

Goal | Migration types to select
--- | ---
Offline migration (maintenance window required) | Schema Migration + Full Data Migration
Online migration (minimal downtime) | Schema Migration + Full Data Migration + Incremental Data Migration

If you do not select Incremental Data Migration, do not write to the source database during migration to keep data consistent.

Other parameters

  • Method to Migrate Triggers in Source Database: Select how to handle triggers. Available only when both Schema Migration and Incremental Data Migration are selected. See Synchronize or migrate triggers from the source database.
  • Processing Mode of Conflicting Tables: Precheck and Report Errors fails the task if the source and destination share table names; use object name mapping to resolve conflicts. Ignore Errors and Proceed skips the check; use with caution, because it may cause data inconsistency or partial migration if schemas differ.
  • Capitalization of Object Names in Destination Instance: Set the capitalization of database, table, and column names. The default is DTS default policy. See Specify the capitalization of object names.
  • Source Objects: Select objects from the Source Objects panel and click the arrow icon to move them to Selected Objects. Selecting individual tables or columns excludes views, triggers, and stored procedures.
  • Selected Objects: Right-click an object to rename it or set filter conditions. Click Batch Edit to rename multiple objects at once. Note that renaming an object may break dependent objects. See Map object names and Set filter conditions.

Step 7: Configure advanced settings

Click Next: Advanced Settings and configure as needed.

  • Select the dedicated cluster used to schedule the task: DTS uses a shared cluster by default. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
  • Copy the temporary table of the Online DDL tool: If you use DMS or gh-ost to perform online DDL on the source, select how to handle temporary tables. Yes: migrates temporary table data (may increase latency). No, Adapt to DMS Online DDL: skips temporary tables and migrates only the original DDL from DMS (destination tables may be locked). No, Adapt to gh-ost: skips temporary tables and migrates only the original gh-ost DDL (destination tables may be locked). Do not use pt-online-schema-change on the source; it causes the DTS task to fail.
  • Retry Time for Failed Connections: The time range (in minutes) for which DTS retries after a connection failure. Valid values: 10-1440. Default: 720. Set to more than 30 minutes. If a later task specifies a different value for a shared source or destination, that value takes precedence.
  • The wait time before a retry when other issues occur: The time range (in minutes) for which DTS retries after DDL or DML failures. Valid values: 1-1440. Default: 10. Set to more than 10 minutes. Must be less than the Retry Time for Failed Connections value.
  • Enable Throttling for Full Data Migration: Throttle read and write operations during full migration to reduce load on the source and destination. Configure QPS, RPS, and migration speed (MB/s). Available only when Full Data Migration is selected.
  • Enable Throttling for Incremental Data Migration: Throttle incremental migration. Configure RPS and migration speed (MB/s). Available only when Incremental Data Migration is selected.
  • Environment Tag: Tag the DTS instance by environment (optional).
  • Whether to delete SQL operations on heartbeat tables: Yes: DTS does not write heartbeat SQL to the source (task latency metrics may be affected). No: DTS writes heartbeat SQL (physical backup and cloning of the source may be affected).
  • Configure ETL: Enable extract, transform, and load (ETL) to apply data transformations during migration. See What is ETL? and Configure ETL.
  • Monitoring and Alerting: Configure alerts for task failures or latency exceeding a threshold. See Configure monitoring and alerting.

Step 8: Configure data verification (optional)

Click Next Step: Verification Configurations to set up data verification. See Configure data verification.

Step 9: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the task starts. The task only starts after the precheck passes.

  • If an item fails: click View Details next to the failed item, fix the issue, then click Precheck Again.

  • If an alert is generated: if the alert cannot be ignored, fix the issue and rerun the precheck. If it can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring alerts may result in data inconsistency.

Step 10: Purchase the migration instance and start

Wait until the precheck success rate reaches 100%, then click Next: Purchase Instance.

  1. On the Purchase Instance page, configure the instance class:

    • Resource Group Settings: Select the resource group for the migration instance. Default: default resource group. See What is Resource Management?.
    • Instance Class: Select a class based on your required migration speed. See Specifications of data migration instances.
  2. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.
  2. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  3. Click Buy and Start. Monitor task progress in the task list.

What's next

Before you switch workloads to the destination, stop or release the migration task. Alternatively, run REVOKE to remove write permissions from the DTS accounts on the destination. Otherwise, a resumed failed task may overwrite destination data with source data.

After full data migration completes and incremental migration catches up (latency approaches 0), switch your applications to the destination ApsaraDB RDS for MySQL instance.
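The cutover described above can be sketched as a simple polling loop. This is an illustrative sketch, not a DTS API: `get_latency_seconds` is a placeholder for however you read the task's replication latency (DTS console, OpenAPI, or your monitoring system), and `switch_over` is your own cutover hook.

```python
import time

# Sketch: poll replication latency and switch applications to the
# destination only after latency stays near zero for several polls.
def wait_and_switch(get_latency_seconds, switch_over,
                    threshold=1.0, stable_polls=3, poll_interval=5.0,
                    sleep=time.sleep):
    """Call switch_over() once latency <= threshold for stable_polls checks.

    Requiring several consecutive low readings avoids cutting over on a
    momentary dip while the source is still receiving writes.
    """
    consecutive = 0
    while consecutive < stable_polls:
        if get_latency_seconds() <= threshold:
            consecutive += 1
        else:
            consecutive = 0  # latency spiked; start counting again
        if consecutive < stable_polls:
            sleep(poll_interval)
    switch_over()
```

Remember to stop or release the migration task (or revoke the DTS account's write permissions) before running the actual switchover.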

To validate migrated data row by row before stopping the task, configure data verification. See Configure data verification.