
Data Transmission Service: Synchronize data from an ApsaraDB RDS for MySQL instance to a PolarDB-X 2.0 instance

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) lets you synchronize data from an ApsaraDB RDS for MySQL instance to a PolarDB-X 2.0 instance in real time—covering schema, full data, and ongoing incremental changes in a single task.

Supported source types

This topic uses an ApsaraDB RDS for MySQL instance as the source example. The same procedure applies to the following MySQL source types:

  • ApsaraDB RDS for MySQL instance

  • Self-managed MySQL database hosted on Elastic Compute Service (ECS)

  • Self-managed MySQL database connected over Express Connect, VPN Gateway, or Smart Access Gateway

  • Self-managed MySQL database connected over Database Gateway

  • Self-managed MySQL database connected over Cloud Enterprise Network (CEN)

Prerequisites

Before you begin, ensure that you have:

  • An ApsaraDB RDS for MySQL instance. For more information, see Create an ApsaraDB RDS for MySQL instance

  • A PolarDB-X 2.0 instance with available storage space larger than the total data size of the source RDS MySQL instance

  • A source database account with the REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT permissions

  • A destination database account with read and write permissions
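
As a sketch, the source account privileges listed above can be granted as follows. The account name dts_sync, host pattern, and password are placeholders; adjust them to your environment:

```sql
-- Placeholder account; replace the name, host pattern, and password with your own.
CREATE USER 'dts_sync'@'%' IDENTIFIED BY '<your_password>';

-- REPLICATION CLIENT and REPLICATION SLAVE are global privileges, so they
-- must be granted ON *.*.
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_sync'@'%';

-- SELECT and SHOW VIEW may instead be scoped to the databases you synchronize.
GRANT SELECT, SHOW VIEW ON *.* TO 'dts_sync'@'%';
```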

Billing

  • Schema synchronization and full data synchronization: Free of charge

  • Incremental data synchronization: Charged. See Billing overview.

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way many-to-one synchronization

SQL operations that can be synchronized

  • DML: INSERT, UPDATE, DELETE

  • DDL: ALTER TABLE, ALTER VIEW; CREATE FUNCTION, CREATE INDEX, CREATE PROCEDURE, CREATE TABLE, CREATE VIEW; DROP INDEX, DROP TABLE; RENAME TABLE; TRUNCATE TABLE

Limitations

Source database requirements

  • Tables must have PRIMARY KEY or UNIQUE constraints with all fields unique. Without these constraints, the destination database may contain duplicate records.

  • When synchronizing at the table level with object renaming (such as renaming tables or columns), a single task supports up to 1,000 tables. Tasks exceeding this limit return a request error. To synchronize more than 1,000 tables, split the work across multiple tasks or synchronize the entire database instead.

  • Binary log requirements:

    • For ApsaraDB RDS for MySQL: Binary logging is enabled by default. Verify that binlog_row_image is set to full—DTS returns an error during precheck if this setting is missing. See Modify instance parameters. Retain binary logs for at least 3 days (7 days recommended).

    • For self-managed MySQL: Enable binary logging, set binlog_format to row, and set binlog_row_image to full. Retain binary logs for at least 7 days. For dual-primary clusters, also set log_slave_updates to ON so DTS can capture all binary logs. See Create an account for a self-managed MySQL database and configure binary logging.

    Important

    If binary logs are purged before DTS reads them, the synchronization task fails. In exceptional cases, data inconsistency or loss may occur. Insufficient binary log retention is not covered by the DTS Service Level Agreement (SLA).

  • Do not execute DDL statements that change database or table schemas during schema synchronization or full data synchronization. Doing so causes the task to fail.

  • Data changes that are not recorded in binary logs, such as data restored from a physical backup or data generated by cascade operations, are not synchronized to the destination. If needed, you can remove the affected tables from the synchronization objects and then add them back without interrupting your business. See Modify the objects to be synchronized.

  • If the source is MySQL 8.0.23 or later and the data includes invisible columns, DTS cannot read those columns and data loss occurs.

    • To make a column visible, run: ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;

    • Tables without explicit primary keys auto-generate invisible primary keys. Make these visible before synchronization. See Generated Invisible Primary Keys.
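
The binary log and invisible-column requirements above can be verified on the source instance with queries along these lines (a sketch; log_slave_updates matters only for self-managed dual-primary clusters):

```sql
-- Check that binary logging is configured as required.
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('log_bin', 'binlog_format', 'binlog_row_image', 'log_slave_updates');
-- Required for synchronization: log_bin = ON, binlog_format = ROW, binlog_row_image = FULL.

-- List invisible columns (MySQL 8.0.23 or later) that must be made visible first.
SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE extra LIKE '%INVISIBLE%'
  AND table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');
```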

Other limitations

  • DTS does not synchronize the following data types: BIT, VARBIT, GEOMETRY, ARRAY, UUID, TSQUERY, TSVECTOR, and TXID_SNAPSHOT.

  • Prefix indexes cannot be synchronized. If the source database contains prefix indexes, the task may fail.

  • Run the synchronization task during off-peak hours. Initial full data synchronization uses read and write resources on both the source and destination databases, which increases load.

  • After initial full data synchronization, the destination tablespace is larger than the source because concurrent INSERTs cause table fragmentation.

  • When synchronizing at the table level, do not use pt-online-schema-change for online DDL operations. Use Data Management (DMS) instead. See Perform lock-free DDL operations.

  • Writing data from other sources to the destination during synchronization causes data inconsistency.

  • If the EncDB feature is enabled on the source RDS MySQL instance, full data synchronization cannot be performed.

    Note RDS MySQL instances with Transparent Data Encryption (TDE) enabled support schema synchronization, full data synchronization, and incremental data synchronization.
  • During schema synchronization, DTS synchronizes foreign keys from the source to the destination. During full data synchronization and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source during synchronization, data inconsistency may occur.

  • If a DTS task fails, DTS technical support will restore it within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified.
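
To find the prefix indexes mentioned above before starting a task, a query such as the following can help (a sketch; in information_schema.statistics, a non-NULL SUB_PART value indicates a prefix index):

```sql
-- Indexes with a non-NULL SUB_PART are prefix indexes.
SELECT table_schema, table_name, index_name, column_name, sub_part
FROM information_schema.statistics
WHERE sub_part IS NOT NULL
  AND table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');
```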

Special cases

  • Self-managed MySQL sources: A primary/secondary switchover during an active synchronization task causes the task to fail. If no DML operations occur on the source for a long time, synchronization latency readings may be inaccurate—run a DML operation to update the latency. If you synchronize an entire database, create a heartbeat table that receives writes every second to maintain accurate latency tracking.

  • ApsaraDB RDS for MySQL sources: Read-only RDS MySQL 5.6 instances that do not record transaction logs cannot be used as the source.

  • For both source types, DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log file position.
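
For the heartbeat table described above, a minimal sketch looks like the following. The table and column names are placeholders, and the periodic write must be driven by your own scheduler (for example, a cron job or an application timer):

```sql
-- Placeholder heartbeat table; create it in a database included in the synchronization task.
CREATE TABLE IF NOT EXISTS dts_heartbeat (
  id INT PRIMARY KEY,
  update_time DATETIME NOT NULL
);

-- Run roughly once per second so DTS observes continuous DML on the source
-- and reports accurate synchronization latency.
REPLACE INTO dts_heartbeat (id, update_time) VALUES (1, NOW());
```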

Create a synchronization task

Step 1: Open the DTS console

Go to the Data Synchronization page of the DTS console.

Note Alternatively, log on to the DMS console. In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.

Step 2: Select the region

In the upper-left corner, select the region where the data synchronization instance will reside.

Step 3: Configure source and destination databases

Click Create Task. In the Create Data Synchronization Task wizard, configure the source and destination databases using the parameters below.

Warning

After configuring the source and destination databases, read the Limits displayed on the page before proceeding. Skipping this step may cause the task to fail or result in data inconsistency.

Source database

  • Task Name: A name for the DTS task. DTS generates a name automatically. Specify a descriptive name to make the task easy to identify. The name does not need to be unique.

  • Database Type: Select MySQL.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region where the source ApsaraDB RDS for MySQL instance resides.

  • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account synchronization.

  • RDS Instance ID: The ID of the source ApsaraDB RDS for MySQL instance.

  • Database Account: The database account of the source instance. The account must have the REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT permissions.

  • Database Password: The password for the database account.

  • Encryption: Select Non-encrypted or SSL-encrypted based on your requirements. To use SSL encryption, enable SSL on the RDS MySQL instance before configuring this task. See Use a cloud certificate to enable SSL encryption.

Destination database

  • Database Type: Select PolarDB-X 2.0.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region where the destination PolarDB-X 2.0 instance resides.

  • Instance ID: The ID of the destination PolarDB-X 2.0 instance.

  • Database Account: The database account of the destination instance. The account must have read and write permissions.

  • Database Password: The password for the database account.

Step 4: Test connectivity

Click Test Connectivity and Proceed.

DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances and to the security group rules of ECS-hosted databases. For self-managed databases in a data center or provided by a third-party cloud service provider, manually add the DTS server CIDR blocks to your database whitelist. See CIDR blocks of DTS servers.

Warning

Adding DTS CIDR blocks to whitelists or security groups introduces security risks. Before proceeding, take preventive measures such as strengthening credentials, limiting exposed ports, regularly auditing whitelist rules, and removing unauthorized CIDR blocks. Consider connecting via Express Connect, VPN Gateway, or Smart Access Gateway for higher security.

Step 5: Configure objects and advanced settings

Basic settings

  • Synchronization Types: Incremental Data Synchronization is selected by default. Also select Schema Synchronization and Full Data Synchronization to synchronize historical data before starting incremental synchronization.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors checks whether the destination has tables with the same names as the source; if identical names exist, an error is returned and the task cannot start (use object name mapping to rename destination tables if needed). Ignore Errors and Proceed skips the precheck for identical table names; during full data synchronization, records with matching primary key or unique key values are retained in the destination (not overwritten), and during incremental data synchronization, matching records in the destination are overwritten. If schemas differ, data initialization may fail or only some columns may be synchronized.

  • Source Objects: Select objects from the Source Objects section and click the right-arrow icon to add them to Selected Objects. You can select columns, tables, or databases. Selecting tables or columns excludes views, triggers, and stored procedures.

  • Selected Objects: To rename a single object, right-click it and use the object name mapping feature. To rename multiple objects at once, click Batch Edit. To filter specific SQL operations for a table or database, right-click the object and select the operations to synchronize. To filter rows, right-click an object and specify SQL filter conditions. Renaming an object may cause dependent objects to fail synchronization.

Advanced settings

  • Monitoring and Alerting: Select Yes to receive notifications when the task fails or synchronization latency exceeds the threshold. Configure alert thresholds and notification settings as needed. See Configure monitoring and alerting.

  • Capitalization of Object Names in Destination Instance: Controls the capitalization of database, table, and column names in the destination. Default is DTS default policy. See Specify the capitalization of object names.

  • Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database: In this scenario, DTS does not synchronize DDL operations. You must set this parameter to No, Adapt to DMS Online DDL or No, Adapt to gh-ost. No, Adapt to DMS Online DDL skips temporary table data and synchronizes only the original DDL; destination tables may be locked during this process. No, Adapt to gh-ost skips temporary table data and synchronizes only the original DDL, with optional regex-based filtering for gh-ost shadow tables; destination tables may be locked during this process. Yes synchronizes temporary table data generated by online DDL operations but may significantly extend task duration.

  • Retry Time for Failed Connections: The time range within which DTS retries failed connections. Valid values: 10–1,440 minutes. Default: 720 minutes. Set this to more than 30 minutes. If multiple tasks share the same source or destination database, the shortest retry time takes precedence. DTS charges for the instance during retry periods.

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

Note To preview the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

DTS performs a precheck before the task can start. If the precheck fails:

  • Click View Details next to each failed item, resolve the issue, and run the precheck again.

  • If an alert item can be ignored, click Confirm Alert Details, then Ignore, click OK, and click Precheck Again. Ignoring alerts may lead to data inconsistency.

Step 7: Purchase a synchronization instance

Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.

On the buy page, configure the following:

  • Billing Method: Subscription: Pay upfront for a fixed term; more cost-effective for long-term use. Subscription duration can be 1–9 months, 1 year, 2 years, 3 years, or 5 years. Pay-as-you-go: Billed hourly; suitable for short-term use. Release the instance when no longer needed to stop charges.

  • Resource Group Settings: The resource group for the synchronization instance. Default: default resource group. See What is Resource Management?

  • Instance Class: DTS offers multiple instance classes with different synchronization speeds. Select based on your data volume and latency requirements. See Instance classes of data synchronization instances.

Step 8: Start the task

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

The task appears in the task list. DTS synchronizes historical data from the source to the destination, then transitions to incremental synchronization automatically.