Use Data Transmission Service (DTS) to migrate data between PolarDB for MySQL clusters with minimal downtime. DTS supports three migration types—schema migration, full data migration, and incremental data migration—that you can combine based on whether your application needs to stay online during migration.
PolarDB for MySQL clusters cannot be upgraded directly to version 8.0. To upgrade, create a new version 8.0 cluster and migrate your data to it. Before migrating across major versions, create a pay-as-you-go cluster to test compatibility, then release it after testing.
Choose a migration strategy
Select a combination of migration types based on your requirements:
| Goal | Migration types to select | Downtime required |
|---|---|---|
| Migrate data with the application offline | Schema migration + Full data migration | Yes — stop writes to the source before starting |
| Migrate data with the application staying online | Schema migration + Full data migration + Incremental data migration | No — incremental migration keeps the destination in sync |
Migration types
Schema migration
DTS migrates the schemas of tables, views, triggers, stored procedures, and stored functions from the source cluster to the destination cluster.
DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and stored functions.
DTS does not migrate user information. Grant read and write permissions to the INVOKER to call views, stored procedures, or stored functions on the destination cluster.
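A minimal sketch of such a grant, using placeholder account and database names (adapt them to your destination cluster):

```sql
-- Hypothetical account and database names; adjust to your environment.
-- Grants the read and write privileges an INVOKER needs to call the
-- migrated views, stored procedures, and stored functions.
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON mydb.* TO 'app_user'@'%';
```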
Foreign keys are migrated during schema migration. During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascading update or delete operations on the source during migration, data inconsistency may occur.
Full data migration
DTS migrates all existing data from the source cluster to the destination cluster. Concurrent INSERT operations during this phase cause table fragmentation, so the destination tablespace will be larger than the source after migration completes.
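If the extra space matters, you can rebuild fragmented tables on the destination after the task completes. This is an optional cleanup step, not part of DTS; the table name below is a placeholder:

```sql
-- Optional post-migration cleanup on the destination (placeholder name).
-- OPTIMIZE TABLE rebuilds the table and reclaims fragmented space;
-- it may briefly lock the table, so run it during off-peak hours.
OPTIMIZE TABLE mydb.orders;
```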
Incremental data migration
After full data migration completes, DTS continuously applies changes from the source to the destination. This keeps both clusters in sync and lets your application stay online throughout the migration.
SQL operations supported in incremental migration
| Operation type | Supported SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE |
RENAME TABLE operations can cause data inconsistency. If you rename a table during migration and the task scope is that table (not its database), DTS stops migrating data for that table. To avoid this, select the database as the migration object instead of individual tables, and make sure both the pre-rename and post-rename database names are included in the selected objects.
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration + Full data migration | Free | Charged when migrating from Alibaba Cloud over the Internet. See Billing overview |
| Incremental data migration | Charged. See Billing overview | — |
Limitations
Source database limitations
| Limitation | Detail | Workaround |
|---|---|---|
| Outbound bandwidth | The source server must have enough outbound bandwidth. Low bandwidth reduces migration speed. | Upgrade bandwidth or enable throttling. |
| Primary key or unique key | Tables being migrated must have a PRIMARY KEY or UNIQUE constraint, with all fields unique. Without this, the destination may contain duplicate records. | Add a primary key or unique key to each table before migrating. See the query after this table. |
| Table count per task (with renaming) | If you select tables (not databases) as migration objects and rename tables or columns in the destination, a single task supports up to 1,000 tables. More than 1,000 tables causes a request error. | Split the migration across multiple tasks, or select the entire database as the migration object. |
| Binary logging (incremental migration) | Binary logging must be enabled and loose_polar_log_bin must be set to on. If not configured, the precheck fails and the task cannot start. Enabling binary logging incurs storage charges for log files. | See Enable binary logging and Modify parameters. |
| Binary log retention | For incremental-only tasks: retain binary logs for more than 24 hours. For full + incremental tasks: retain binary logs for at least 7 days. After full migration completes, you can reduce retention to more than 24 hours. Insufficient retention may cause task failure or data loss; DTS Service Level Agreement (SLA) guarantees do not apply if this requirement is not met. | Set the log retention period before starting the migration task. |
| DDL during schema + full migration | Do not run DDL operations that change database or table schemas during schema migration or full data migration. Such operations cause the task to fail. | Schedule schema changes outside the migration window. |
| Writes during full-only migration | Do not write data to the source database during a full-data-migration-only task. This can cause data inconsistency between source and destination. | Add incremental data migration to keep both clusters in sync, or stop application writes before starting. |
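As referenced in the Primary key or unique key row, the following query is one way to find tables that would violate that requirement. It is a sketch that assumes information_schema access; the schema name is a placeholder:

```sql
-- List base tables in a schema (placeholder name) that have neither a
-- PRIMARY KEY nor a UNIQUE constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;

-- Add a primary key to a table that lacks one (placeholder names).
ALTER TABLE mydb.orders ADD PRIMARY KEY (order_id);
```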
Other limitations
| Limitation | Detail |
|---|---|
| MySQL version | Use the same MySQL version for source and destination clusters to avoid compatibility issues. |
| Read-only nodes | Read-only nodes on the source cluster cannot be migrated. |
| Migration timing | Full data migration uses read and write resources on both clusters and may increase server load. Assess the performance impact before starting and run migrations during off-peak hours. |
| FLOAT and DOUBLE precision | DTS retrieves FLOAT and DOUBLE values using ROUND(COLUMN, PRECISION). If you do not specify a precision, DTS defaults to 38 digits for FLOAT and 308 digits for DOUBLE. Verify that these defaults meet your requirements (see the query after this table). |
| Failed task resumption | DTS automatically retries failed tasks for up to 7 days. Stop or release the DTS task before switching your application to the destination cluster. Alternatively, run REVOKE to remove DTS write permissions on the destination. If a failed task resumes after you switch over, the source data overwrites the destination data. |
| Online DDL tools | Do not use pt-online-schema-change for online DDL operations on the source. This causes the DTS task to fail. Use DMS or gh-ost instead. |
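To review which columns the FLOAT and DOUBLE precision row applies to, you can list them before migrating. A minimal sketch, assuming information_schema access and a placeholder schema name:

```sql
-- Find FLOAT and DOUBLE columns whose rounding behavior you may want to verify.
SELECT table_name, column_name, data_type, numeric_precision
FROM information_schema.columns
WHERE table_schema = 'mydb'
  AND data_type IN ('float', 'double');
```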
Required permissions
The permissions required depend on the migration types you select.
| Database | Full data migration only | Full data migration + Incremental data migration |
|---|---|---|
| Source PolarDB for MySQL cluster | Read permissions on the objects to be migrated | Read permissions on the objects to be migrated |
| Destination PolarDB for MySQL cluster | Read and write permissions on the destination database | Read and write permissions on the destination database |
Use a privileged account for the destination cluster. For instructions on creating a database account, see Create a database account.
For incremental data migration, the source database account also needs binary logging access. Make sure binary logging is enabled and loose_polar_log_bin is set to on before starting.
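A minimal sketch of the corresponding GRANT statements, using placeholder account and database names (actual account creation is done in the console; see Create a database account):

```sql
-- Source cluster: read permissions on the objects to migrate.
GRANT SELECT ON mydb.* TO 'dts_source'@'%';

-- Destination cluster: read and write permissions on the destination database.
-- Schema migration also needs DDL privileges, which is why a privileged
-- account is recommended.
GRANT ALL PRIVILEGES ON mydb.* TO 'dts_dest'@'%';
```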
Migrate data between PolarDB for MySQL clusters
Prerequisites
Before you begin, make sure that you have:
A source and a destination PolarDB for MySQL cluster. See Purchase a pay-as-you-go cluster and Purchase a subscription cluster
Enough free storage on the destination cluster to hold all data from the source cluster
Database accounts with the required permissions (see Required permissions)
If you plan to run incremental data migration: binary logging enabled on the source cluster and loose_polar_log_bin set to on. See Enable binary logging and Modify parameters. A quick check is shown after this list.
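One way to confirm binary logging from a standard MySQL client session against the source cluster. loose_polar_log_bin itself is set in the console; once it takes effect, the standard log_bin variable should report ON:

```sql
-- Should return ON once binary logging is active on the source cluster.
SHOW VARIABLES LIKE 'log_bin';
```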
Step 1: Go to the Data Migration Tasks page
Log on to the Data Management (DMS) console.
In the top navigation bar, click DTS.
In the left-side navigation pane, choose DTS (DTS) > Data Migration.
DMS console navigation varies by mode and layout. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Migration Tasks page in the new DTS console.
Step 2: Select a region
From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Step 3: Configure source and destination databases
Click Create Task. On the Create Task page, configure the following parameters.
After configuring the source and destination, read the Limitations displayed at the top of the page before proceeding. Skipping this step may cause the task to fail or lead to data inconsistency.
Task settings
| Parameter | Description |
|---|---|
| Task Name | A name for the task. DTS assigns a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |
Source database
| Parameter | Description |
|---|---|
| Select an existing DMS database instance | Select an existing instance to have DTS auto-fill its parameters, or leave blank and configure parameters manually. |
| Database Type | Select PolarDB for MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the source PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the source PolarDB for MySQL cluster. |
| Database Account | The account for the source cluster. See Required permissions. |
| Database Password | The password for the database account. |
| Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption. |
Destination database
| Parameter | Description |
|---|---|
| Select an existing DMS database instance | Select an existing instance to have DTS auto-fill its parameters, or leave blank and configure parameters manually. |
| Database Type | Select PolarDB for MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the destination PolarDB for MySQL cluster. |
| Database Account | The account for the destination cluster. Use a privileged account. See Required permissions. |
| Database Password | The password for the database account. |
| Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL or ApsaraDB for MongoDB) and to the security group rules of Elastic Compute Service (ECS) instances that host self-managed databases.
If a self-managed database is deployed across multiple ECS instances, manually add the DTS CIDR blocks to the security group rules of each instance.
For on-premises databases or databases from third-party cloud providers, manually add the DTS CIDR blocks to the database IP whitelist.
For the full list of CIDR blocks, see Add the CIDR blocks of DTS servers to the security settings of on-premises databases.
Adding the DTS server CIDR blocks to IP whitelists or security groups creates security exposure. Before proceeding, take preventive measures, including:
Strengthen account and password security.
Restrict exposed ports.
Authenticate API calls.
Audit whitelist and security group rules regularly.
Connect DTS to your database over Express Connect, VPN Gateway, or Smart Access Gateway instead of the public Internet.
Step 5: Select migration objects and types
Migration types
| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for an offline migration. Add Incremental Data Migration to keep the application online during migration. |
| Method to Migrate Triggers in Source Database | The method for migrating triggers. This parameter appears only when Schema Migration is selected. See Synchronize or migrate triggers from the source database. |
Conflict handling
| Option | Behavior |
|---|---|
| Precheck and Report Errors | Before migration starts, DTS checks whether the destination has tables with the same names as the source. If name conflicts exist, the precheck fails and the task does not start. Use the object name mapping feature to rename objects if needed. See Map object names. |
| Ignore Errors and Proceed | Skips the name-conflict precheck. If the source and destination have identical schemas, DTS skips rows with matching primary key values. If schemas differ, only specific columns are migrated or the task fails. Use with caution. |
Select objects
In the Source Objects section, select the objects to migrate, then click the > icon to move them to the Selected Objects section. You can select columns, tables, or schemas. If you select tables or columns, DTS does not migrate views, triggers, or stored procedures.
In Selected Objects, you can:
Rename a single object: right-click the object and rename it. See Map the name of a single object.
Rename multiple objects: click Batch Edit. See Map multiple object names at a time.
Filter rows by condition: right-click an object and specify a WHERE clause (see the example after this list). See Use SQL conditions to filter data.
Restrict which SQL operations are migrated: right-click an object and select the SQL operations to include.
Renaming an object may cause dependent objects, such as views or stored procedures, to fail during migration.
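For example, a filter that migrates only recent rows might look like the following. The column name is a placeholder, and per Use SQL conditions to filter data you typically enter only the bare condition, without the WHERE keyword:

```sql
-- Example filter condition for row-level filtering (placeholder column name).
-- Only rows matching this condition are migrated.
gmt_created >= '2024-01-01 00:00:00'
```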
Step 6: Configure advanced settings
Click Next: Advanced Settings, then configure the following.
Data verification
To verify data consistency between source and destination, configure the data verification feature. See Enable data verification.
Advanced settings
| Parameter | Description |
|---|---|
| Select the dedicated cluster used to schedule the task | By default, DTS uses the shared cluster. Purchase a dedicated cluster to isolate task resources. See What is a DTS dedicated cluster? |
| Set Alerts | Configure alerting for task failures or latency threshold breaches. Select Yes to specify alert thresholds and contacts. See Configure monitoring and alerting. |
| Select the engine type of the destination database | The storage engine for the destination cluster: InnoDB (default) or X-Engine (an online transaction processing (OLTP) engine). |
| Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database. | Controls whether DTS migrates temporary tables generated by online DDL tools. Important: pt-online-schema-change is not supported; using it causes the task to fail. Options: Yes (migrate temporary table data; may extend migration time); No, Adapt to DMS Online DDL (skip temporary tables and migrate only the original DDL from DMS; destination tables may be locked); No, Adapt to gh-ost (skip temporary tables and migrate only the original DDL from gh-ost; destination tables may be locked; supports custom regular expressions to filter shadow tables). |
| Retry Time for Failed Connections | How long DTS retries failed connections after the task starts. Range: 10–1,440 minutes. Default: 720. Set to at least 30 minutes. If multiple tasks share the same source or destination, the shortest retry time among them takes precedence. DTS charges for the instance during retry periods. |
| The wait time before a retry when other issues occur in the source and destination databases. | How long DTS retries failed DDL or DML operations. Range: 1–1,440 minutes. Default: 10. Set to at least 10 minutes. This value must be less than the Retry Time for Failed Connections value. |
| Enable Throttling for Full Data Migration | Limit read/write load on source and destination during full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Appears only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limit load during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Appears only when Incremental Data Migration is selected. |
| Environment Tag | A tag to identify the DTS instance. Select based on your requirements. |
| Configure ETL | Whether to apply extract, transform, and load (ETL) transformations. Select Yes to enter data processing statements. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Controls whether DTS writes to heartbeat tables in the source database. Yes: DTS does not write to heartbeat tables; migration latency may appear in the console. No: DTS writes to heartbeat tables; physical backup and cloning of the source database may be affected. |
Step 7: Save settings and run the precheck
Click Next: Save Task Settings and Precheck.
Optional: before you click the button, move the pointer over it and click Preview OpenAPI parameters to view the API parameters for this task configuration.
DTS runs a precheck before the task starts. If the precheck fails:
Click View Details next to the failed item, fix the reported issue, then click Precheck Again.
If a precheck item raises an alert that can be ignored: click Confirm Alert Details, then in the View Details dialog box click Ignore > OK > Precheck Again. Ignoring alerts may result in data inconsistency.
Step 8: Wait for the precheck to pass
Wait until the success rate reaches 100%, then click Next: Purchase Instance.
Step 9: Select an instance class
On the Purchase Instance page, configure the instance for the migration task.
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines migration speed. Select based on your data volume and time requirements. See Specifications of data migration instances. |
Step 10: Accept the service terms
Read and accept Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.
Step 11: Start the task
Click Buy and Start. Monitor progress in the task list.
Post-migration tasks
After the migration task completes:
Verify data consistency: Use the data verification feature to confirm data integrity between source and destination. See Enable data verification.
Stop or release the DTS task: Before or immediately after switching your application, stop or release the migration task. DTS automatically retries failed tasks for up to 7 days; if a resumed task runs after you switch over, it overwrites destination data with source data. Alternatively, run REVOKE to remove DTS write permissions from the destination as an additional safeguard (see the sketch after this list).
Switch your application: Update connection strings in your application to point to the destination cluster.
Clean up: If you created a pay-as-you-go cluster for version compatibility testing, release it now.
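A minimal sketch of the REVOKE safeguard mentioned above, using placeholder account and database names on the destination cluster:

```sql
-- Remove DTS write permissions on the destination so a resumed task
-- cannot overwrite data after you switch your application over.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_dest'@'%';
```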
Usage notes
DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log position.