
Data Transmission Service: Synchronize data from a PolarDB-X 2.0 instance to a PolarDB for MySQL cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to synchronize data from a PolarDB-X 2.0 instance to a PolarDB for MySQL cluster. DTS supports schema synchronization, full data synchronization, and incremental data synchronization.

Before you begin

Make sure the following conditions are met before configuring the task:

  • The source PolarDB-X 2.0 instance is compatible with MySQL 5.7.

  • The destination PolarDB for MySQL cluster is created. For more information, see Purchase an Enterprise Edition cluster or Purchase a subscription cluster.

  • The available storage space on the destination cluster is larger than the total data size on the source instance.

  • Binary logging is enabled in the PolarDB-X 2.0 console, and binlog_row_image is set to full. For more information, see Parameter settings.

  • The source database account has the SELECT permission on the objects to be synchronized, and the REPLICATION CLIENT and REPLICATION SLAVE permissions. For more information, see Data synchronization tools for PolarDB-X.

  • The destination database account has read and write permissions on the destination database.
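The account and binlog prerequisites above can be checked and granted with standard MySQL statements. The following is a sketch only: the account name dts_sync and database name mydb are examples, and in PolarDB-X 2.0 you can also manage accounts in the console.

```sql
-- Source instance: verify that full row images are logged (binlog itself
-- is enabled in the PolarDB-X 2.0 console).
SHOW VARIABLES LIKE 'binlog_row_image';  -- expected value: FULL

-- Source instance: grant the permissions DTS requires.
GRANT SELECT ON mydb.* TO 'dts_sync'@'%';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_sync'@'%';

-- Destination cluster: grant read and write permissions on the
-- destination database.
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, INDEX
  ON mydb.* TO 'dts_sync'@'%';
```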

Billing

Synchronization type | Fee
Schema synchronization and full data synchronization | Free
Incremental data synchronization | Charged. For more information, see Billing overview.

Limitations

Source database

  • Tables to synchronize must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.

  • If you select tables as the objects to synchronize and need to rename tables or columns in the destination, a single task can synchronize up to 5,000 tables. To synchronize more than 5,000 tables, configure multiple tasks or synchronize at the database level.

  • Binary log retention requirements:

    • Incremental data synchronization only: retain binary logs for at least 24 hours.

    • Full data synchronization + incremental data synchronization: retain binary logs for at least seven days during full synchronization. After full synchronization completes, you can reduce the retention period to at least 24 hours. Otherwise, DTS may fail to obtain the binary logs and the task may fail; in exceptional circumstances, data inconsistency or loss may occur. If you do not follow these retention requirements, the service reliability and performance stated in the Service Level Agreement (SLA) of DTS cannot be guaranteed.

  • Table names containing uppercase letters support schema synchronization only.

  • TABLEGROUP objects and databases or schemas with the Locality attribute are not supported.

  • During schema synchronization and full data synchronization, do not execute DDL statements that alter database or table schemas. Doing so causes the task to fail.
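To find tables that violate the primary key requirement above before you configure the task, you can query information_schema on the source. A sketch, assuming the source database is named mydb:

```sql
-- List base tables that have neither a PRIMARY KEY nor a UNIQUE
-- constraint; such tables may produce duplicate records in the
-- destination after synchronization.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```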

Foreign key behavior

  • During schema synchronization, DTS synchronizes foreign keys from the source to the destination.

  • During full and incremental synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade update or delete operations on the source during synchronization, data inconsistency may occur.

Other limitations

  • Run synchronization during off-peak hours. Full data synchronization consumes read and write resources on both databases and increases load on the database servers.

  • After full data synchronization, the destination tablespace is larger than the source because concurrent INSERT operations during full synchronization create fragmentation.

  • Do not use pt-online-schema-change for DDL operations on synchronized objects. Doing so may cause synchronization to fail.

  • Write data to the destination only through DTS during synchronization. Using other tools to write data to the destination, combined with DMS online DDL operations, may cause data loss.

  • If a DDL statement fails to execute on the destination, the DTS task continues to run. Check failed DDL statements in task logs.

  • MySQL column names are case-insensitive. If the source has columns whose names differ only in capitalization, their data is written to the same destination column, which produces unexpected results.

  • If the data to synchronize contains four-byte characters (such as rare characters or emojis), the destination tables must use the utf8mb4 character set. If you use schema synchronization, set the character_set_server parameter of the destination cluster to utf8mb4.

  • After synchronization completes (the task status changes to Completed), execute ANALYZE TABLE <table_name> on the destination to verify that the data was written correctly. For example, a high-availability (HA) switchover in the source may leave data only in memory, causing data loss.

  • If a DTS task fails, DTS technical support will attempt to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be modified (database parameters are not modified). The parameters that may be modified include but are not limited to the parameters in the Modify instance parameters section of the Modify the parameters of a DTS instance topic.

  • DTS updates the dts_health_check.ha_health_check table in the source database on a schedule to advance the binary log file position.
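Two of the checks above can be performed directly in a MySQL client on the destination cluster. The table name mydb.orders is an example:

```sql
-- Confirm that the destination can store four-byte characters.
SHOW VARIABLES LIKE 'character_set_server';  -- should report utf8mb4

-- After the task status changes to Completed, refresh table statistics
-- and verify that the table data was fully written.
ANALYZE TABLE mydb.orders;
```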

Supported SQL operations

Type | Operations
DML | INSERT, UPDATE, DELETE
DDL | ALTER TABLE, CREATE FUNCTION, CREATE INDEX, CREATE TABLE, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE
Note

The RENAME TABLE operation may cause data inconsistency. If a table is selected as a synchronization object and you rename it during synchronization, changes to that table stop being synchronized to the destination. To avoid this, select entire databases as the synchronization objects: both the database that contains the table before the RENAME TABLE operation and the database that contains the renamed table.

Synchronization types

Choose a synchronization type based on your scenario before you create the task.

Type | What it does | When to use
Schema synchronization + full data synchronization | Copies table schemas and all existing data to the destination | Initial setup or one-time data copy
Schema synchronization + full data synchronization + incremental data synchronization | Copies schemas and existing data, then continuously applies changes from the source | Ongoing replication with live source traffic
Incremental data synchronization only | Applies ongoing changes only; no historical data is copied | The destination already has the base data

For most use cases, select all three types. DTS uses the full synchronization data as the baseline for incremental synchronization.

Create a data synchronization task

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

Note

Console operations may vary based on the mode and layout of DMS. For more information, see Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Synchronization Tasks page of the new DTS console.

Step 2: Select a region

On the right side of Data Synchronization Tasks, select the region where the synchronization instance resides.

Note

In the new DTS console, select the region in the top navigation bar.

Step 3: Configure source and destination databases

Click Create Task. In the Create Data Synchronization Task wizard, configure the following parameters.

Warning

Read the Limits displayed on the page after configuring the source and destination databases. Skipping this step may cause task failure or data inconsistency.

Task Name

Parameter | Description
Task Name | DTS automatically generates a task name. Enter a descriptive name that makes the task easy to identify. The name does not need to be unique.

Source Database

Parameter | Description
Select a DMS database instance | Select an existing DMS database instance to have DTS populate the connection parameters automatically, or leave this field empty to configure the parameters manually.
Database Type | Select PolarDB-X 2.0.
Connection Type | Select Alibaba Cloud Instance.
Instance Region | Select the region where the source PolarDB-X instance resides.
Instance ID | Select the ID of the source PolarDB-X instance.
Database Account | Enter the database account. The account must have the SELECT permission on the objects to be synchronized, and the REPLICATION CLIENT and REPLICATION SLAVE permissions. For more information, see Data synchronization tools for PolarDB-X.
Database Password | Enter the password for the database account.

Destination Database

Parameter | Description
Select a DMS database instance | Select an existing DMS database instance to have DTS populate the connection parameters automatically, or leave this field empty to configure the parameters manually.
Database Type | Select PolarDB for MySQL.
Connection Type | Select Alibaba Cloud Instance.
Instance Region | Select the region where the destination PolarDB for MySQL cluster resides.
PolarDB Cluster ID | Select the ID of the destination PolarDB for MySQL cluster.
Database Account | Enter the database account. The account requires read and write permissions on the destination database.
Database Password | Enter the password for the database account.

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances and to the security group rules of Elastic Compute Service (ECS) instances hosting self-managed databases. If the database is deployed on multiple ECS instances, you must manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance. For self-managed databases in data centers or on third-party cloud providers, manually add the DTS server CIDR blocks to the database whitelist. For more information, see Add the CIDR blocks of DTS servers.

Warning

Adding DTS server CIDR blocks to whitelists or security group rules introduces potential security risks. Before proceeding, take preventive measures such as strengthening password security, limiting exposed ports, authenticating API calls, auditing whitelist entries regularly, and using Express Connect, VPN Gateway, or Smart Access Gateway to connect the database to DTS.

Step 5: Select objects and configure settings

Configure the following parameters.

Synchronization types

Parameter | Description
Synchronization Types | Select Schema Synchronization, Full Data Synchronization, and Incremental Data Synchronization. By default, only Incremental Data Synchronization is selected. After the precheck passes, DTS synchronizes historical data from the source as the baseline for incremental synchronization.

Conflicting table handling

For the Processing Mode of Conflicting Tables parameter, select one of the following options:

  • Precheck and Report Errors: DTS checks whether destination tables share names with source tables. If matching names are found, the precheck fails and the task cannot start. To resolve the conflict, use the object name mapping feature to rename the destination tables. For more information, see Map object names.

  • Ignore Errors and Proceed: Skips the table name check. If source and destination tables share the same schema and a record has the same primary or unique key value, the existing record is retained during full synchronization and overwritten during incremental synchronization. If schemas differ, initialization may fail or only some columns may be synchronized. Use with caution.

Objects to synchronize

Select objects from the Source Objects section, then click the rightwards arrow icon to move them to the Selected Objects section. You can select columns, tables, or databases. If you select tables or columns, DTS does not synchronize views, triggers, or stored procedures.

In the Selected Objects section, you can rename the objects that are synchronized to the destination. For more information, see Map object names.

Step 6: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

Parameter | Description
Monitoring and Alerting | No: Disables alerting. Yes: Enables alerting. Configure the alert threshold and notification settings. For more information, see Configure monitoring and alerting.
Retry Time for Failed Connections | The time range during which DTS retries failed connections after the task starts. Valid values: 10–1440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes; otherwise, the task fails. If multiple tasks share the same source or destination, the shortest retry window applies. DTS charges continue during retries.
Configure ETL | No: Disables extract, transform, and load (ETL). Yes: Enables ETL and opens the code editor for data processing statements. For more information, see What is ETL? and Configure ETL in a data migration or data synchronization task.
Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes: Does not write heartbeat table SQL operations to the source. A latency indicator may appear on the DTS instance. No: Writes heartbeat table SQL operations to the source. This may affect physical backup and cloning of the source database.

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

Note

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the task starts. If the precheck fails:

  • Click View Details next to each failed item, diagnose the issue, resolve it, and rerun the precheck.

  • If an alert item can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.

Step 8: Purchase an instance

Wait for the Success Rate to reach 100%, then click Next: Purchase Instance.

On the buy page, configure the following parameters.

Parameter | Description
Billing Method | Subscription: Pay upfront for a fixed period. More cost-effective for long-term use. Pay-as-you-go: Billed hourly. Suitable for short-term use. Release the instance when no longer needed to stop charges.
Resource Group Settings | The resource group for the synchronization instance. Default: default resource group. For more information, see What is Resource Management?
Instance Class | The synchronization speed varies by instance class. Select a class based on your throughput requirements. For more information, see Instance classes of data synchronization instances.
Subscription Duration | Available only for the Subscription billing method. Options: 1–9 months, 1 year, 2 years, 3 years, or 5 years.

Step 9: Start the task

Read and select Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog box, click OK.

Track task progress in the task list.