Data Transmission Service: Migrate data from a PolarDB for MySQL cluster to an RDS for MySQL instance

Last Updated: Mar 30, 2026

Data Transmission Service (DTS) migrates data from a PolarDB for MySQL cluster to an ApsaraDB RDS for MySQL instance with minimal downtime by combining schema migration, full data migration, and incremental data migration.

Note

The same procedure applies when you migrate to a self-managed MySQL database, whether it has a public IP address, is hosted on an Elastic Compute Service (ECS) instance, or is connected over Express Connect, VPN Gateway, Smart Access Gateway, or Database Gateway.

Prerequisites

Before you begin, ensure that you have:

  • A destination ApsaraDB RDS for MySQL instance whose available storage is no smaller than the data volume of the source PolarDB for MySQL cluster.

  • Database accounts with the permissions described in Required permissions.

  • Binary logging enabled on the source cluster, if the task includes incremental data migration.

Migration types

Choose one of the following combinations based on your requirements:

Combination When to use Downtime required
Schema migration + Full data migration One-time migration where the source can be taken offline Yes
Schema migration + Full data migration + Incremental data migration Live migration with minimal service interruption No

Schema migration copies the schemas of the selected objects (tables, views, triggers, stored procedures, and stored functions) from the source to the destination. DTS changes the SECURITY attribute of views, stored procedures, and stored functions from DEFINER to INVOKER. Because user information is not migrated, separately grant the accounts that invoke these objects read and write permissions on the destination database.
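Because the SECURITY attribute becomes INVOKER, the account that calls these objects on the destination needs its own privileges. A minimal sketch, assuming a hypothetical destination database mydb and invoking account app_user:

```sql
-- Hypothetical example: grant the invoking account the privileges that
-- the original DEFINER would otherwise have supplied.
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
```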

Full data migration copies historical data from the selected objects.

Incremental data migration continuously replicates changes from the source after full data migration completes.

Billing

Migration type Instance configuration fee Internet traffic fee
Schema migration + Full data migration Free Charged only when data is migrated over the Internet. See Billing overview.
Incremental data migration Charged. See Billing overview. Charged only when data is migrated over the Internet.

Required permissions

Grant the following permissions to the database accounts used by DTS before configuring the migration task.

Database Required permissions Reference
PolarDB for MySQL cluster Read permissions on the objects to be migrated Create and manage a database account
ApsaraDB RDS for MySQL instance Read and write permissions on the objects to be migrated Create an account
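The permission levels in the table above correspond roughly to GRANT statements like the following (account and database names are placeholders; on ApsaraDB instances, accounts are normally created and authorized in the console, so this only illustrates the required privilege level):

```sql
-- On the source PolarDB for MySQL cluster: read-only access to the objects to migrate.
GRANT SELECT ON mydb.* TO 'dts_source'@'%';

-- On the destination ApsaraDB RDS for MySQL instance: read and write access.
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
  ON mydb.* TO 'dts_dest'@'%';
```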

Limitations

Source database

  • Tables must have a PRIMARY KEY or UNIQUE constraint with all unique fields. Without this, the destination database may contain duplicate records.

  • If you select tables as the migration objects and need to rename tables or columns in the destination, a single task supports a maximum of 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate at the database level instead.

  • Do not perform DDL operations that change database or table schemas during schema migration or full data migration.

  • If the task does not include incremental data migration, do not write data to the source database during migration; otherwise, the source and destination become inconsistent. To keep the source writable during migration, include incremental data migration in the task.

  • The source server must have sufficient outbound bandwidth; otherwise, the migration slows down.

  • Read-only nodes of the source PolarDB for MySQL cluster cannot be used as the migration source.
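Tables that violate the primary-key requirement above can be found before you start with a query along these lines against the source (a sketch using information_schema; the schema name mydb is a placeholder, and the query does not check that UNIQUE columns are also NOT NULL):

```sql
-- Lists base tables in `mydb` that have neither a PRIMARY KEY
-- nor a UNIQUE constraint.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```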

Binary log retention for incremental migration

Important

Enabling binary logging on a PolarDB for MySQL cluster incurs storage charges for binary log files.

Set the binary log retention period based on your migration type:

Migration type Minimum retention period
Incremental data migration only More than 24 hours
Full data migration + Incremental data migration At least 7 days
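You can inspect the current binary log settings on the source before you start. Variable names differ across MySQL and PolarDB versions, so treat this as a sketch:

```sql
-- Confirm that binary logging is enabled and inspect the retention settings.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE '%expire_logs%';  -- e.g. expire_logs_days or binlog_expire_logs_seconds
```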

If DTS cannot obtain the binary logs because the retention period is too short, the task fails. In extreme cases, data inconsistency or loss may occur. After full data migration completes, you can reduce the retention period, but keep it longer than 24 hours.

Important

Retention periods shorter than these minimums are outside the DTS Service Level Agreement (SLA).

Foreign key behavior

  • DTS migrates foreign keys during schema migration.

  • During full and incremental data migration, DTS disables foreign key constraint checks and cascade operations at the session level. Performing cascade or delete operations on the source during migration may cause data inconsistency.

Data type precision

DTS retrieves FLOAT and DOUBLE column values using ROUND(COLUMN,PRECISION). If no precision is specified, DTS uses 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these defaults meet your requirements before starting the migration.
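The effect of the retrieval precision can be previewed on the source. A sketch, assuming a hypothetical table t with a DOUBLE column d:

```sql
-- DTS reads FLOAT/DOUBLE values as ROUND(column, precision); with no explicit
-- precision it uses 38 digits for FLOAT and 308 for DOUBLE, which preserves
-- the full stored value. Rounding to fewer digits would alter the data:
SELECT d,
       ROUND(d, 308) AS migrated_value,
       ROUND(d, 2)   AS lower_precision_value
FROM t;
```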

Destination database creation

DTS automatically creates the destination database in the ApsaraDB RDS for MySQL instance. If the source database name does not comply with the naming conventions of ApsaraDB RDS for MySQL, manually create the database in the destination instance before configuring the task. See Manage databases.

SQL operations supported for incremental migration

Operation type Supported statements
DML INSERT, UPDATE, DELETE
DDL ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE
Important

RENAME TABLE operations may cause data inconsistency. If you select a table as the migration object and rename it during migration, DTS does not migrate that table's data. To avoid this, select the database (not individual tables) as the migration object, so that both the original and the renamed table names are covered by the selected objects.

Configure a migration task

Step 1: Go to the Data Migration Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click DTS.

  3. In the left-side navigation pane, choose Data Transmission Service (DTS) > Data Migration.

Note

You can also go directly to the Data Migration Tasks page of the new DTS console. Navigation options may vary based on your DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.

Step 2: Create a task

  1. From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.

    Note

    In the new DTS console, select the region in the upper-left corner.

  2. Click Create Task.

  3. (Optional) In the upper-right corner, click New Configuration Page to switch to the new configuration page.

    Note

    Skip this step if Back to Previous Version is shown instead. The new configuration page is recommended.

Step 3: Configure source and destination databases

Warning

After configuring the source and destination databases, read the Limits displayed at the top of the page before proceeding.

Configure the following parameters:

Task information

Parameter Description
Task Name DTS assigns a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique.

Source Database

Parameter Value
Select a DMS database instance Select an existing registered instance, or configure the source database parameters manually.
Database Type PolarDB for MySQL
Connection Type Alibaba Cloud Instance
Instance Region The region where the source PolarDB for MySQL cluster resides.
Replicate Data Across Alibaba Cloud Accounts No (this example uses the current account)
PolarDB Cluster ID The ID of the source PolarDB for MySQL cluster.
Database Account The account with read permissions on the objects to be migrated.
Database Password The password for the database account.
Encryption Configure based on your security requirements. See Configure SSL encryption.

Destination Database

Parameter Value
Select a DMS database instance Select an existing registered instance, or configure the destination database parameters manually.
Database Type MySQL
Connection Type Alibaba Cloud Instance
Instance Region The region where the destination ApsaraDB RDS for MySQL instance resides.
Replicate Data Across Alibaba Cloud Accounts No (this example uses the current account)
RDS Instance ID The ID of the destination ApsaraDB RDS for MySQL instance.
Database Account The account with read and write permissions on the objects to be migrated.
Database Password The password for the database account.
Encryption Select Non-encrypted or SSL-encrypted. If you select SSL-encrypted, enable SSL encryption for the RDS instance before starting the task. See Configure the SSL encryption feature.
Note

To register a database with DMS, click Create Template in the DMS console. See Register an Alibaba Cloud database instance or Register a database hosted on a third-party cloud service or a self-managed database. In the DTS console, register databases on the Database Connections page or the new configuration page. See Manage database connections.

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances and the security group rules of ECS-hosted databases. If the source or destination database is hosted on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance. For self-managed databases in data centers or on third-party cloud services, manually add the DTS server CIDR blocks to the database's IP address whitelist. See CIDR blocks of DTS servers.

Warning

Adding DTS server CIDR blocks to whitelists or security group rules introduces security risks. Before proceeding, take preventive measures such as strengthening account credentials, limiting exposed ports, authenticating API calls, and regularly auditing whitelist rules. Alternatively, connect the database to DTS using Express Connect, VPN Gateway, or Smart Access Gateway.

Step 5: Select objects and configure migration settings

On the Select Objects page, configure the following parameters:

Parameter Description
Migration Types Select Schema Migration and Full Data Migration for a one-time migration. Select all three types (including Incremental Data Migration) to keep the source running during migration.
Method to Migrate Triggers in Source Database Available only when both Schema Migration and Incremental Data Migration are selected. See Synchronize or migrate triggers from the source database.
Processing Mode of Conflicting Tables Precheck and Report Errors (default): fails the precheck if tables with identical names exist in both databases. Ignore Errors and Proceed: skips this check. Use with caution—data inconsistency may occur if schemas differ between source and destination.
Capitalization of Object Names in Destination Instance Default: DTS default policy. See Specify the capitalization of object names in the destination instance.
Source Objects Select objects and click the rightwards arrow icon to move them to the Selected Objects section. You can select columns, tables, or databases. If you select tables or columns, DTS does not migrate other objects such as views, triggers, and stored procedures.
Selected Objects Right-click an object to rename it or set filter conditions. Click Batch Edit to rename multiple objects at once. See Map object names. Note: renaming an object may cause dependent objects to fail migration.

Click Next: Advanced Settings and configure the following:

Parameter Description
Select the dedicated cluster used to schedule the task Default: shared cluster. Purchase a dedicated cluster for higher migration stability. See What is a DTS dedicated cluster.
Copy the temporary table of the Online DDL tool Controls how DTS handles temporary tables generated by online DDL tools (Data Management (DMS) or gh-ost) on the source. Yes: DTS migrates the data of temporary tables, which may increase migration latency. No, Adapt to DMS Online DDL: DTS migrates only the original DDL operations performed by using DMS, not the data of temporary tables generated by online DDL operations. No, Adapt to gh-ost: DTS migrates only the original DDL operations performed by using the gh-ost tool, not the data of temporary tables generated by online DDL operations.

Note

If you select No, Adapt to DMS Online DDL or No, Adapt to gh-ost, the tables in the destination database may be locked. pt-online-schema-change is not supported; using it causes the DTS task to fail.

Retry Time for Failed Connections How long DTS retries after a connection failure. Valid values: 10–1440 minutes. Default: 720 minutes. We recommend that you set the parameter to a value greater than 30. If different tasks share the same source or destination, the most recently configured value takes effect.
The wait time before a retry when other issues occur in the source and destination databases How long DTS retries after DDL or DML failures. Valid values: 1–1440 minutes. Default: 10 minutes. We recommend that you set the parameter to a value greater than 10. This value must be smaller than the Retry Time for Failed Connections value.
Enable Throttling for Full Data Migration Limits read/write load during full data migration by capping QPS (queries per second), RPS, and migration speed (MB/s). Available only when Full Data Migration is selected.
Enable Throttling for Incremental Data Migration Limits load during incremental migration by capping RPS and migration speed (MB/s). Available only when Incremental Data Migration is selected.
Environment Tag Optional tag to identify the DTS instance.
Whether to delete SQL operations on heartbeat tables of forward and reverse tasks Yes: DTS does not write to heartbeat tables; in this case, latency of the DTS instance may be displayed. No: DTS writes to heartbeat tables, which may affect physical backup and cloning of the source database.
Configure ETL Enable extract, transform, and load (ETL) to process data during migration. See Configure ETL in a data migration or data synchronization task.
Monitoring and Alerting Configure alerts for task failures or latency exceeding a threshold. See Configure monitoring and alerting when you create a DTS task.

Click Next Step: Verification Configurations to configure data verification. See Configure data verification.

Step 6: Save settings and run the precheck

Click Next: Save Task Settings and Precheck.

Note

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before starting the migration. The task starts only after the precheck passes.

  • If an item fails: click View Details, fix the issue, then click Precheck Again.

  • If an alert is generated: click Confirm Alert Details. In the dialog, click Ignore, confirm, then click Precheck Again. Ignoring an alert may cause data inconsistency.

Step 7: Purchase an instance

Wait until the success rate reaches 100%, then click Next: Purchase Instance.

On the Purchase Instance page, configure the following:

Parameter Description
Resource Group Settings The resource group for the data migration instance. Default: default resource group. See What is Resource Management?
Instance Class The migration speed depends on the instance class. See Specifications of data migration instances.

Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start.

The task appears in the task list. Monitor progress from there.

Step 8: Complete the migration

After the task status shows that full data migration is complete and incremental migration latency is near zero, the destination database is ready for traffic.

Warning

Before switching workloads to the destination database, stop or release the data migration task—or run REVOKE to revoke the write permissions from the DTS accounts on the destination database. If you switch traffic without stopping the task, a resumed task overwrites destination data with source data.

Complete the following steps to finish the migration:

  1. Verify data consistency between the source and destination databases.

  2. Stop or release the DTS migration task. Alternatively, run REVOKE to remove the write permissions granted to the DTS accounts on the destination database.

  3. Switch application traffic to the destination ApsaraDB RDS for MySQL instance.

  4. Validate that the application functions correctly against the destination database.
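Step 2 above can be performed with a REVOKE statement along these lines (the account name dts_dest and database name mydb are placeholders for the account and objects DTS uses on the destination):

```sql
-- Remove write access from the DTS account on the destination so that a
-- resumed task can no longer overwrite data after the traffic switch.
REVOKE INSERT, UPDATE, DELETE, CREATE, ALTER, DROP ON mydb.* FROM 'dts_dest'@'%';
FLUSH PRIVILEGES;
```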

Considerations

  • Schedule migrations during off-peak hours. Full data migration uses read and write resources of both databases and increases load on database servers.

  • Full data migration causes table fragmentation in the destination database due to concurrent INSERT operations. After full data migration, the destination tablespace is larger than the source tablespace.

  • DTS attempts to resume failed tasks for up to 7 days. Stop or release the task before switching workloads to prevent data overwrite on recovery.

  • DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log position.

What's next

  • Data verification: Validate data consistency between source and destination after migration.

  • Map object names: Rename objects in the destination database during migration.

  • Billing overview: Understand DTS pricing for incremental migration instances.