
Data Transmission Service: One-way synchronization between PolarDB for MySQL clusters

Last Updated: Mar 30, 2026

When you run PolarDB for MySQL clusters across regions or accounts, keeping their data consistent requires a reliable replication mechanism. Data Transmission Service (DTS) provides one-way and two-way data synchronization between PolarDB for MySQL clusters. Typical scenarios include:

  • Disaster recovery: Keep a standby cluster in sync with your primary cluster and fail over quickly when needed.

  • Multi-region deployment: Replicate data across clusters in different regions to reduce read latency for geographically distributed users.

  • Zero-downtime migration: Synchronize data between clusters while your application is running, then cut over traffic without downtime.

Prerequisites

Before you begin:

  • Both source and destination PolarDB for MySQL clusters are created. See Custom purchase and Purchase a subscription cluster.

  • The destination cluster has more available storage space than the total data size of the source cluster.

Limitations

Source database

  • Tables must have a PRIMARY KEY or UNIQUE constraint whose columns contain unique values. Tables without such a constraint may produce duplicate records in the destination. For two-way synchronization, enable the Exactly-Once write feature to handle tables without primary keys or UNIQUE constraints. See Synchronize tables without primary keys or UNIQUE constraints.

  • If you synchronize tables (rather than an entire database) and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. For more than 1,000 tables, split the work into multiple tasks or synchronize the entire database instead.

  • Binary logging requirements:

    • Enable binary logging with loose_polar_log_bin set to ON. If this parameter is not set, the precheck fails and the DTS task cannot start. See Enable binary logging and Modify parameters. Note: Enabling binary logging incurs storage charges.

    • Retain binary logs for at least 3 days. A 7-day retention period is recommended. Shorter retention periods can cause data inconsistency or loss, and may affect DTS service reliability. To set the retention period, see the Modify the retention period section in "Enable binary logging".

  • Do not run DDL statements that change database or table schemas during schema synchronization or full data synchronization. Doing so causes the task to fail.

During schema synchronization, DTS also synchronizes foreign keys from the source to the destination. During full and incremental synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade update or delete operations on the source during this time, data inconsistency may occur.
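Before you configure the task, you can check the source cluster against these limitations from a SQL client. The following is a sketch only; the database name mydb is a hypothetical placeholder, and parameters such as loose_polar_log_bin and the binary log retention period are set in the PolarDB console rather than through SQL.

```sql
-- List base tables in the hypothetical database `mydb` that lack a
-- PRIMARY KEY or UNIQUE constraint and may produce duplicate records.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;

-- Confirm that binary logging is enabled (expected value: ON).
SHOW VARIABLES LIKE 'log_bin';
```

If the first query returns any rows, add a primary key or unique constraint to those tables, or plan for the Exactly-Once write feature in two-way synchronization.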

Other limitations

Limitation Details
Read-only nodes DTS does not synchronize read-only nodes of the source cluster.
OSS external tables DTS does not synchronize Object Storage Service (OSS) external tables.
Comment-defined parsers DTS does not synchronize data that uses a parser defined by comments.
4-byte characters Data containing rare characters or 4-byte UTF-8 characters requires the destination databases and tables to use the utf8mb4 character set. If schema synchronization is enabled, set the character_set_server parameter in the destination database to utf8mb4.
DDL tools We recommend that you do not use tools such as pt-online-schema-change to perform DDL operations on source tables during synchronization. Otherwise, the DTS task will fail. To perform online DDL operations, use Data Management (DMS)—but only if no other data source is writing to the destination during synchronization. Writing to the destination from multiple sources during synchronization can cause data loss.
Failed DDL in destination If a DDL statement fails in the destination, the synchronization task continues. View failed DDL statements in the task logs. See View task logs.
Account synchronization Synchronizing database accounts has its own prerequisites. See Migrate database accounts.
Task recovery If a DTS task fails, DTS support will attempt to restore it within 8 hours. The task may be restarted and task parameters (not database parameters) may be adjusted. For the parameters that may be modified, see Modify instance parameters.
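To verify the character set requirement above, you can inspect the destination cluster before starting the task. A sketch, with a hypothetical database name mydb; note that character_set_server itself is changed through the console parameter settings (see Modify parameters), not through SQL.

```sql
-- Check the server character set on the destination
-- (expected: utf8mb4 if the source contains 4-byte characters).
SHOW VARIABLES LIKE 'character_set_server';

-- Hypothetical example: pre-create a destination database with utf8mb4
-- so that synchronized tables inherit the correct character set.
CREATE DATABASE mydb CHARACTER SET utf8mb4;
```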

Performance impact

During full data synchronization, DTS reads and writes to both clusters simultaneously, which increases their load. To reduce the impact:

  • Run synchronization during off-peak hours.

  • Enable throttling in the advanced settings (configure QPS and RPS limits for full synchronization).

  • Expect the destination tablespace to be larger than the source after full synchronization completes—concurrent INSERT operations cause table fragmentation.

Two-way synchronization

  • Two-way synchronization is supported only between two PolarDB for MySQL clusters. Three or more clusters are not supported.

  • DDL operations can only be synchronized in the forward direction. This preserves data consistency.

  • DTS creates a dts database in the destination to prevent circular synchronization. Do not modify this database while the task is running.

  • A two-way synchronization instance includes a forward task and a reverse task. If an object appears in both tasks:

    • Only one task can synchronize both full data and incremental data for that object. The other task synchronizes only incremental data.

    • Data synchronized by one task is not fed back as source data for the other task.

  • DTS periodically executes the CREATE DATABASE IF NOT EXISTS `test` statement on the source database to advance the binary log position.

Billing

Synchronization type Cost
Schema synchronization and full data synchronization Free
Incremental data synchronization Charged. See Billing overview.

SQL operations that can be synchronized

Type Operations
DML INSERT, UPDATE, DELETE
DDL ALTER TABLE, ALTER VIEW, CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW, DROP INDEX, DROP TABLE, RENAME TABLE, TRUNCATE TABLE
Important

RENAME TABLE can cause data inconsistency. For example, if you synchronize a table by name and then rename it, DTS stops synchronizing that table. To avoid this, synchronize the entire database rather than individual tables, and make sure the databases the table belongs to—both before and after the rename—are included in the synchronization scope.
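As an illustration of this pitfall (database and table names are hypothetical):

```sql
-- Suppose only `mydb.orders` is selected as a synchronization object.
RENAME TABLE mydb.orders TO mydb.orders_v2;
-- After the rename, `mydb.orders_v2` is outside the selected scope,
-- so DTS stops synchronizing its data. Selecting the entire `mydb`
-- database instead keeps both the old and new names in scope.
```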

Set up data synchronization

The setup consists of eight steps: navigate to the task list, select the region, configure source and destination databases, test connectivity, select objects and configure synchronization settings, configure advanced settings, run the precheck, and purchase the instance.

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

Navigation may vary based on the DMS console layout. See Simple mode and Customize the layout and style of the DMS console. Alternatively, go directly to the Data Synchronization Tasks page.

Step 2: Select the region

On the Data Synchronization Tasks page, select the region where the synchronization instance will reside.

In the new DTS console, select the region from the top navigation bar.

Step 3: Configure source and destination databases

Click Create Task. On the wizard page, configure the following parameters.

Warning

After configuring the source and destination, read the Limits displayed on the page carefully. Skipping this step may cause task failures or data inconsistency.

Task settings

Parameter Description
Task Name A name for the DTS task. DTS generates a default name. Specify a descriptive name to make the task easy to identify. The name does not need to be unique.

Source database

Parameter Description
Select a DMS database instance Select an existing DMS database instance to auto-populate connection parameters, or leave blank and fill in the parameters manually.
Database Type Select PolarDB for MySQL.
Connection Type Select Alibaba Cloud Instance.
Instance Region The region where the source cluster resides.
Replicate Data Across Alibaba Cloud Accounts Whether to synchronize across Alibaba Cloud accounts. Select No for same-account synchronization.
PolarDB Cluster ID The ID of the source cluster. The source and destination can be the same cluster (to synchronize within a cluster) or different clusters.
Database Account A database account with read permissions on the objects to be synchronized.
Database Password The password for the database account.
Encryption Whether to encrypt the connection. See Configure SSL encryption.

Destination database

Parameter Description
Select a DMS database instance Select an existing DMS database instance to auto-populate connection parameters, or leave blank and fill in the parameters manually.
Database Type Select PolarDB for MySQL.
Connection Type Select Alibaba Cloud Instance.
Instance Region The region where the destination cluster resides.
PolarDB Cluster ID The ID of the destination cluster.
Database Account A database account with read and write permissions on the destination database. A privileged account is recommended.
Database Password The password for the database account.
Encryption Whether to encrypt the connection. See Configure SSL encryption.

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances and to the security group rules of Elastic Compute Service (ECS) instances. For self-managed databases hosted in a data center or on a third-party cloud, add the DTS CIDR blocks manually. See Add the CIDR blocks of DTS servers.

Warning

Adding DTS CIDR blocks to your whitelist or security group rules introduces potential security risks. Before proceeding, take preventive measures such as enforcing strong credentials, restricting exposed ports, validating API calls, and auditing your whitelist regularly. For enhanced security, connect DTS to your database through Express Connect, VPN Gateway, or Smart Access Gateway.

Step 5: Select objects and configure synchronization settings

Parameter Description
Synchronization Types By default, Incremental Data Synchronization is selected. You must also select Schema Synchronization and Full Data Synchronization. DTS synchronizes existing data as a baseline, then applies ongoing changes incrementally.
Method to Migrate Triggers in Source Database How to handle triggers during synchronization. Set this only if you are synchronizing triggers and have selected Schema Synchronization. See Synchronize or migrate triggers from the source database.
Synchronization Topology In this example, One-way Synchronization is selected. Alternatively, you can select Two-way Synchronization.
Processing Mode of Conflicting Tables How DTS handles tables in the destination that have the same name as source tables: Precheck and Report Errors stops the task if identical table names are found; Ignore Errors and Proceed skips the check and may cause data inconsistency—use with caution.
Source Objects Select objects from the Source Objects section and click the arrow icon to move them to the Selected Objects section. Select columns, tables, or databases. Selecting only tables or columns excludes views, triggers, and stored procedures.
Selected Objects Right-click an object to rename it in the destination (object name mapping) or to select specific SQL operations to synchronize. Click Batch Edit to rename multiple objects at once. Right-click a table to add a WHERE condition for row-level filtering. See Map object names and Specify filter conditions.
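For example, a row-level filter attached to a table in Selected Objects is entered as the body of a plain WHERE clause. The column, table, and cutoff value below are hypothetical:

```sql
-- Entered as the filter condition for the hypothetical table `mydb.orders`:
-- only rows created on or after January 1, 2025 are synchronized.
created_at >= '2025-01-01'
```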

Object selection guidelines

Select all related objects together to avoid failures caused by missing dependencies, such as views that reference tables or tables with foreign key relationships. If the source database will have tables renamed during synchronization (via RENAME TABLE), select the entire database rather than individual tables to ensure both the pre-rename and post-rename database names are in the synchronization scope.

Step 6: Configure advanced settings

Click Next: Advanced Settings.

Data verification

Configure a data verification task to detect inconsistencies between the source and destination after synchronization. See Configure a data verification task.

Advanced settings

Parameter Description
Dedicated Cluster for Task Scheduling By default, DTS uses a shared cluster. For improved stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
Set Alerts Configure alerting for task failures or latency exceeding a threshold. Select Yes to set the alert threshold and notification contacts. See Configure monitoring and alerting.
Select the engine type of the destination database The storage engine of the destination cluster: InnoDB (default) or X-Engine (an online transaction processing (OLTP) storage engine).
Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database How DTS handles temporary tables generated by online DDL tools (DMS or gh-ost): Yes — synchronizes temporary table data (may extend task duration for large changes); No, Adapt to DMS Online DDL — skips temporary tables and synchronizes only the original DDL (destination tables may be locked); No, Adapt to gh-ost — skips temporary tables and synchronizes only the original gh-ost DDL (destination tables may be locked; supports custom regex to filter shadow tables).
Retry Time for Failed Connections How long DTS retries after a connection failure. Valid range: 10–1,440 minutes. Default: 720 minutes. Set to more than 30 minutes. If DTS reconnects within this window, the task resumes. If multiple tasks share the same source or destination database, the shortest retry window takes precedence.
Retry Time for Other Issues How long DTS retries after a DDL or DML failure. Valid range: 1–1,440 minutes. Default: 10 minutes. Set to more than 10 minutes. Must be smaller than Retry Time for Failed Connections.
Enable Throttling for Full Data Synchronization Limits the read/write load during full synchronization. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected.
Enable Throttling for Incremental Data Synchronization Limits the load during incremental synchronization. Configure RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).
Environment Tag A tag to identify the DTS instance. Optional.
Configure ETL Whether to apply extract, transform, and load (ETL) transformations. Select Yes to write data processing statements in the code editor. See Configure ETL and What is ETL?.
Whether to delete SQL operations on heartbeat tables of forward and reverse tasks Whether DTS writes heartbeat SQL operations to the source database: Yes — disables writing (a synchronization latency indicator may show on the DTS instance); No — enables writing (may affect physical backup and cloning of the source database).

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS runs a precheck before the task can start. Each precheck item has one of three outcomes:

Outcome Meaning Action
Passed The item meets all requirements. No action needed.
Failed The item does not meet requirements and blocks the task. Click View Details next to the failed item, fix the issue, then click Precheck Again.
Alert The item does not fully meet requirements, but the task can proceed with potential impact. To ignore: click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.

Step 8: Purchase the synchronization instance

Wait for Success Rate to reach 100%, then click Next: Purchase Instance.

On the buy page, configure the following:

Parameter Description
Billing Method Subscription — pay upfront for a fixed term, more cost-effective for long-term use. Pay-as-you-go — billed hourly, suitable for short-term use. Release the instance when no longer needed to avoid ongoing charges.
Resource Group Settings The resource group for the instance. Default: default resource group. See What is Resource Management?.
Instance Class The synchronization throughput class. Choose based on your data volume and latency requirements. See Instance classes of data synchronization instances.
Subscription Duration The subscription term: 1–9 months, or 1, 2, 3, or 5 years. Available only for the Subscription billing method.

Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

The task appears in the task list. Monitor its progress there.