Use Data Transmission Service (DTS) to migrate data between PolarDB for MySQL clusters. DTS supports schema migration, full data migration, and incremental data migration, letting you move data with minimal downtime.
PolarDB for MySQL does not support in-place upgrades to version 8.0. To upgrade, create a new version 8.0 cluster and migrate your data to it. Before migrating across versions, create a pay-as-you-go cluster to test compatibility, then release it after testing.
Prerequisites
Before you begin, ensure that you have:
A source and a destination PolarDB for MySQL cluster already created. See Purchase an Enterprise Edition cluster and Purchase a subscription cluster.
Available storage on the destination cluster that exceeds the total data size of the source cluster.
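To check the storage prerequisite, you can estimate the source cluster's total data size with a query along the following lines. This is an approximation based on information_schema statistics, not an exact on-disk figure:

```sql
-- Approximate total data + index size of user databases, in GB.
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
```

Compare the result against the available storage on the destination cluster.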
Choose a migration type
Select migration types based on your scenario:
| Scenario | Migration types to select | Notes |
|---|---|---|
| One-time data move with planned downtime | Schema migration + Full data migration | Stop writes to the source during migration to avoid inconsistency. |
| Live migration with minimal service interruption | Schema migration + Full data migration + Incremental data migration | DTS syncs ongoing changes after the full migration completes. |
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration + Full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Migration types
Schema migration
DTS migrates the schemas of selected objects from the source to the destination database. Supported object types: tables, views, triggers, stored procedures, and stored functions.
During schema migration, DTS changes the SECURITY attribute from DEFINER to INVOKER for views, stored procedures, and functions. DTS does not migrate user information. Grant read and write permissions to INVOKER before calling views, stored procedures, or functions on the destination database.
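Because DTS rewrites the SECURITY attribute to INVOKER and does not carry over accounts, the account that calls the migrated objects on the destination needs its own privileges. A minimal sketch, where app_user and mydb are placeholder names (not names from this topic):

```sql
-- Grant the invoking account read/write access to the tables that the
-- migrated views, procedures, and functions touch, plus EXECUTE for
-- stored routines. Scope the grants more narrowly in production.
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'app_user'@'%';
GRANT EXECUTE ON mydb.* TO 'app_user'@'%';
```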
Full data migration
DTS migrates historical data of the selected objects from the source to the destination database.
After a full migration completes, the used tablespace on the destination database will be larger than on the source. This is caused by fragmentation from concurrent INSERT operations during migration.
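If the extra space matters, one optional cleanup after migration completes is to rebuild fragmented tables on the destination. This rebuilds the table (for InnoDB, it maps to ALTER TABLE ... FORCE), so run it per table during off-peak hours; mydb.orders is a placeholder name:

```sql
-- Rebuilds the table and reclaims fragmented tablespace.
OPTIMIZE TABLE mydb.orders;
```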
Incremental data migration
After full data migration completes, DTS migrates ongoing changes from the source to the destination database. This keeps the source database available during migration and lets you cut over with minimal downtime.
SQL operations supported for incremental migration
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE |
RENAME TABLE operations may cause data inconsistency. If a selected table is renamed during migration, its data may not be migrated to the destination. To prevent this, select the entire database (not individual tables) as the migration object, and make sure both the pre- and post-rename database names are included in the selected objects.
Permissions required
| Database | Required permissions |
|---|---|
| Source PolarDB for MySQL cluster | Read permissions on the objects to be migrated |
| Destination PolarDB for MySQL cluster | Read/write permissions on the destination database |
For instructions on creating database accounts, see Create and manage a database account.
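As an illustrative sketch of accounts that satisfy the permissions table above, the following uses placeholder account names, password, and database name; adjust the grants to the objects you actually migrate:

```sql
-- Source cluster: read access to the objects to be migrated.
CREATE USER 'dts_src'@'%' IDENTIFIED BY 'Str0ngPassw0rd!';
GRANT SELECT ON mydb.* TO 'dts_src'@'%';

-- Destination cluster: read/write access to the destination database.
CREATE USER 'dts_dst'@'%' IDENTIFIED BY 'Str0ngPassw0rd!';
GRANT ALL PRIVILEGES ON mydb.* TO 'dts_dst'@'%';
```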
Limits
Foreign key behavior
During schema migration, DTS migrates foreign keys from the source to the destination database.
During full and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade update or delete operations on the source during migration, data inconsistency may occur.
Source database requirements
Bandwidth: The source database server must have enough outbound bandwidth. Insufficient bandwidth reduces migration speed.
Primary keys: Tables being migrated must have PRIMARY KEY or UNIQUE constraints with all unique fields. Tables without these constraints may result in duplicate records on the destination.
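As a pre-migration check, a query along these lines lists base tables in a schema that lack both a PRIMARY KEY and a UNIQUE constraint; the schema name mydb is a placeholder:

```sql
-- Find tables without a PRIMARY KEY or UNIQUE constraint (run on the source).
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name = t.table_name
 AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```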
Table count limit: If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, either split the work across multiple tasks or migrate the entire database.
Binary logging for incremental migration: Binary logging must be enabled and the loose_polar_log_bin parameter must be set to on. Otherwise, the precheck fails and the task cannot start. See Enable binary logging and Modify parameters. Enabling binary logging incurs storage charges.
Binary log retention: For incremental migration only, retain binary logs for more than 24 hours. For full + incremental migration, retain binary logs for at least 7 days; after full migration completes, you can reduce the retention period to more than 24 hours. If the retention period does not meet these requirements, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.
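To confirm that binary logging is active before you start, you can run checks like the following on the source cluster. Parameter names can vary by PolarDB and MySQL version, so treat this as a sketch:

```sql
-- Sanity checks on the source before incremental migration.
SHOW VARIABLES LIKE 'log_bin';   -- ON once binary logging is enabled
SHOW BINARY LOGS;                -- lists the binary log files currently retained
```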
Schema and DDL changes: Do not run DDL statements that modify database or table schemas during schema migration or full data migration. This causes the task to fail.
Writes during full-only migration: If you run only full data migration (without incremental), do not write to the source database during migration. Writes cause data inconsistency between source and destination.
Other limits
Cluster versions: Keep the source and destination clusters on the same version where possible. For cross-version migration, such as upgrading to version 8.0, test compatibility first as described at the beginning of this topic.
Read-only nodes: DTS does not migrate read-only nodes from the source cluster.
OSS external tables: DTS does not migrate Object Storage Service (OSS) external tables from the source cluster.
Two-way synchronization: You cannot migrate data from a cluster that has DTS two-way synchronization configured.
FLOAT and DOUBLE precision: DTS uses the ROUND(COLUMN,PRECISION) function to retrieve values from FLOAT and DOUBLE columns. Default precision: 38 digits for FLOAT, 308 digits for DOUBLE. Verify these settings meet your requirements before starting the migration.
Task auto-resume: DTS tries to resume failed tasks for up to 7 days. Before switching workloads to the destination cluster, stop or release the migration task, or run the revoke command to remove write permissions from the DTS accounts on the destination cluster. Otherwise, a resumed task may overwrite destination data with source data.
Failed DDL statements: If a DDL statement fails to execute on the destination, the task continues running. View failed DDL statements in the task logs. See View task logs.
DTS support intervention: If a DTS task fails, DTS technical support attempts to restore it within 8 hours. The task may be restarted and task-level parameters may be modified. Database parameters are not modified. For the parameters that may be modified, see the Modify instance parameters section of the Modify the parameters of a DTS instance topic.
Performance impact: Full data migration uses read and write resources on both source and destination databases, which increases load. Run migrations during off-peak hours.
Special cases
DTS executes the CREATE DATABASE IF NOT EXISTS 'test' statement on the source database periodically to advance the binary log file position.
Migrate data between PolarDB for MySQL clusters
Step 1: Go to the Data Migration Tasks page
Log on to the Data Management (DMS) console.
In the top navigation bar, hover over DTS and choose DTS (DTS) > Data Migration.
Note: Depending on the DMS console mode and layout, the steps may differ. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Migration page of the new DTS console.
From the drop-down list on the right side of Data Migration Tasks, select the region where your migration instance resides.
Note: In the new DTS console, select the region in the upper-left corner.
Step 2: Create a task
Click Create Task.
Step 3: Switch to the new configuration page (optional)
In the upper-right corner, click New Configuration Page if it is available.
Skip this step if the Back to Previous Version button is already displayed, which means you are already on the new configuration page. The new configuration page is recommended — some parameters differ between versions.
Step 4: Configure the source and destination databases
After configuring the source and destination databases, review the Limits displayed at the top of the page before proceeding. Skipping this step may cause task failures or data inconsistency.
Configure the parameters as described in the following table.
Task information
| Parameter | Description |
|---|---|
| Task Name | The name of the task. DTS generates a name automatically. Specify a descriptive name to identify the task — unique names are not required. |
Source Database
| Parameter | Description |
|---|---|
| Select a DMS database instance | Select an existing instance or configure the database manually. If you select an existing instance, DTS populates the parameters automatically. To register a database, see Register an Alibaba Cloud database instance or Register a database hosted on a third-party cloud service or a self-managed database. In the DTS console, manage connections on the Database Connections page. See Manage database connections. |
| Database Type | Select PolarDB for MySQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | The region where the source PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the source cluster. |
| Database Account | The database account for the source cluster. See the Permissions required section for the required permissions. |
| Database Password | The password for the database account. |
| Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption. |
Destination Database
| Parameter | Description |
|---|---|
| Select a DMS database instance | Select an existing instance or configure the database manually. If you select an existing instance, DTS populates the parameters automatically. To register a database, see Register an Alibaba Cloud database instance or Register a database hosted on a third-party cloud service or a self-managed database. In the DTS console, manage connections on the Database Connections page. See Manage database connections. |
| Database Type | Select PolarDB for MySQL. |
| Connection Type | Select Alibaba Cloud Instance. |
| Instance Region | The region where the destination PolarDB for MySQL cluster resides. |
| PolarDB Cluster ID | The ID of the destination cluster. |
| Database Account | The database account for the destination cluster. See the Permissions required section. Use a privileged account. |
| Database Password | The password for the database account. |
| Encryption | Whether to encrypt the connection. Configure based on your requirements. See Configure SSL encryption. |
Step 5: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the IP address whitelist of Alibaba Cloud database instances. For self-managed databases on Elastic Compute Service (ECS), DTS adds its CIDR blocks to the ECS security group. If the self-managed database spans multiple ECS instances, manually add the CIDR blocks to each instance's security group. For databases hosted in a data center or on a third-party cloud, manually add the DTS server CIDR blocks to the database whitelist. See CIDR blocks of DTS servers.
Adding DTS server CIDR blocks to a database whitelist or ECS security group introduces security risks. Before using DTS, understand and accept these risks, and take precautions such as: using strong credentials, limiting exposed ports, authenticating API calls, auditing whitelist and security group rules regularly, and connecting through Express Connect, VPN Gateway, or Smart Access Gateway.
Step 6: Configure migration objects
On the Configure Objects page, set the following parameters.
Migration types and objects
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types for your scenario. See the Choose a migration type section. |
| Method to Migrate Triggers in Source Database | The method for migrating triggers. Configure based on your requirements. Only available when Migration Types includes Schema Migration. See Synchronize or migrate triggers from the source database. |
| Processing Mode of Conflicting Tables | How DTS handles tables in the destination that share names with source tables: Precheck and Report Errors — the precheck fails if conflicting table names exist. To proceed, rename the conflicting tables using the object name mapping feature. See Map object names. Ignore Errors and Proceed — skips the precheck. If schemas match, full migration skips conflicting records; incremental migration overwrites them. If schemas differ, only specific columns are migrated or the task fails. Use with caution. |
| Source Objects | Select objects from the Source Objects list and click the rightwards arrow icon to move them to the Selected Objects section. |
| Selected Objects | Right-click an object to rename it (single object) or click Batch Edit to rename multiple objects. Object name mapping may cause dependent objects to fail migration. Right-click a table to add WHERE filter conditions (see Specify filter conditions) or to select specific SQL operations to migrate. |
Advanced settings
Click Next: Advanced Settings to configure the following parameters.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task to the shared cluster. Purchase a dedicated cluster for predictable performance. See What is a DTS dedicated cluster. |
| Select the engine type of the destination database | The storage engine for the destination database: InnoDB (default) or X-Engine (online transaction processing (OLTP) workloads). |
| Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database | Applies when you use DMS or gh-ost for online DDL operations on the source. Yes — migrates temporary table data (may introduce latency if the data is large). No, Adapt to DMS Online DDL — migrates only the original DDL; destination tables may be locked. No, Adapt to gh-ost — migrates only the original DDL using custom or default regex to filter shadow tables; destination tables may be locked. Do not use pt-online-schema-change — it causes the DTS task to fail. |
| Whether to Migrate Accounts | Whether to migrate account information from the source. If Yes, select the accounts and verify the permissions of accounts used in the migration task. See Migrate database accounts. |
| Retry Time for Failed Connections | How long DTS retries failed connections after a task starts. Range: 10–1,440 minutes. Default: 720 minutes. Set to a value greater than 30 minutes. If the connection is restored within this period, DTS resumes the task. Otherwise, the task fails. If multiple tasks share a source or destination database, the most recently set value takes precedence. Note that retry time counts toward billing. |
| Retry Time for Other Issues | How long DTS retries failed DDL or DML operations after a task starts. Range: 1–1,440 minutes. Default: 10 minutes. Set to a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Whether to throttle full data migration. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce load on the destination server. Only available when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Whether to throttle incremental data migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Only available when Incremental Data Migration is selected. |
| Environment Tag | An optional tag identifying the cluster environment. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes — DTS does not write heartbeat SQL operations to the source. Task latency may appear elevated. No — DTS writes heartbeat SQL operations, which may affect source database physical backup and cloning. |
| Configure ETL | Whether to enable the extract, transform, and load (ETL) feature. Yes — configure data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No — skip ETL. See What is ETL? |
| Monitoring and Alerting | Whether to configure alerts for the migration task. Yes — set an alert threshold and notification contacts. See Configure monitoring and alerting when you create a DTS task. No — no alerts configured. |
Data verification
Click Next Step: Data Verification to configure data verification. See Configure a data verification task.
Step 7: Run the precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this task, hover over the button and click Preview OpenAPI parameters before clicking through.
The task cannot start until it passes the precheck.
If a precheck item fails, click View Details next to the item, fix the issue, and click Precheck Again.
If a precheck item triggers an alert:
If the alert cannot be ignored, fix the issue and rerun the precheck.
If the alert can be ignored, click Confirm Alert Details, then click Ignore > OK > Precheck Again. Ignoring alerts may result in data inconsistency.
Step 8: Purchase a DTS instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following parameters:
| Parameter | Description |
|---|---|
| Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The instance class determines migration speed. Select based on your data volume and time requirements. See Instance classes of data migration instances. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
Track migration progress on the Data Migration page.