
Data Transmission Service: Synchronize data from a PolarDB-X 1.0 instance to a PolarDB for MySQL cluster

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to synchronize incremental data in real time from a PolarDB-X 1.0 instance to a PolarDB for MySQL cluster.

Prerequisites

Before you begin, make sure that:

Billing

  • Schema synchronization and full data synchronization: free of charge.

  • Incremental data synchronization: charged. For more information, see Billing overview.

Supported synchronization topologies

  • One-way one-to-one synchronization

  • One-way one-to-many synchronization

  • One-way cascade synchronization

  • One-way many-to-one synchronization

For details, see Synchronization topologies.

SQL operations that can be synchronized

  • DML: INSERT, UPDATE, and DELETE.

Required database account permissions

  • Source PolarDB-X 1.0 instance: read permissions on the objects to be synchronized. For more information, see Manage accounts.

  • Destination PolarDB for MySQL cluster: read and write permissions on the databases to which the objects are synchronized. For more information, see Create and manage a database account.
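
For illustration only, the destination permissions correspond to standard MySQL grants such as the following sketch. The account name, password, and database name are placeholders; in practice, create and authorize the account as described in Create and manage a database account.

```sql
-- Hypothetical example: create a synchronization account and grant it
-- read/write access to the destination database. All identifiers below
-- are placeholders.
CREATE USER 'dts_sync'@'%' IDENTIFIED BY 'your-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER
    ON dest_db.* TO 'dts_sync'@'%';
FLUSH PRIVILEGES;
```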

Limitations

Table structure requirements

  • Tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.

  • Tables that have only UNIQUE constraints do not support schema synchronization. Use tables with PRIMARY KEY constraints instead.

  • Tables with secondary indexes cannot be synchronized.

  • If you select tables as sync objects and want to rename tables or columns in the destination, a single task supports a maximum of 5,000 tables. For more than 5,000 tables, split the work across multiple tasks or sync entire databases.
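
One way to check the primary key requirement in advance is to query information_schema on the attached ApsaraDB RDS for MySQL instances. A sketch, with mydb standing in for your database name:

```sql
-- List base tables in a given schema that lack a PRIMARY KEY
-- (replace 'mydb' with your database name).
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema    = t.table_schema
      AND c.table_name      = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```

Any table returned by this query should be given a primary key (or excluded from the task) before you start synchronization.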

Binary log requirements for attached ApsaraDB RDS for MySQL instances

  • Binary logging: must be enabled. DTS reads incremental changes from the binary logs.

  • binlog_row_image: must be set to full. Any other value causes a precheck failure.

  • Binary log retention (incremental synchronization only): at least 24 hours.

  • Binary log retention (schema, full, and incremental synchronization): at least 7 days, because DTS needs older binary logs during the initial full synchronization phase. After full data synchronization completes, you can reduce the retention period to more than 24 hours.
Warning

If binary logs are purged before DTS reads them, the task fails and data may be inconsistent or lost. Do not shorten the retention period below these minimums. Meeting these requirements is necessary to maintain the reliability and performance stated in the DTS Service Level Agreement (SLA).
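
You can verify these settings on each attached ApsaraDB RDS for MySQL instance before you start the task. Note that the retention variable depends on the MySQL version:

```sql
SHOW VARIABLES LIKE 'log_bin';           -- should be ON
SHOW VARIABLES LIKE 'binlog_row_image';  -- should be FULL
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- retention, MySQL 8.0
SHOW VARIABLES LIKE 'expire_logs_days';            -- retention, MySQL 5.7
```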

Unsupported configurations

  • Vertical splitting is not supported. PolarDB-X 1.0 storage resources can be split only horizontally into databases and tables.

  • Read-only instances at the PolarDB-X 1.0 compute layer are not supported.

  • The storage type of the PolarDB-X 1.0 instance must be ApsaraDB RDS for MySQL. PolarDB for MySQL cannot be used as the storage type.

Operational restrictions during synchronization

Do not perform the following operations while a synchronization task is running:

  • Change the network type of the PolarDB-X 1.0 instance. If you must change it, update the network connection information in the DTS task after the change.

  • Scale the source instance or the attached ApsaraDB RDS for MySQL instances.

  • Change the distribution of physical databases for logical databases or logical tables configured in the attached ApsaraDB RDS for MySQL instances.

  • Migrate hot tables, change shard keys, or run online DDL operations on the source instance.

These actions cause synchronization task failures or data inconsistency.

Other limitations

  • Data is distributed across the attached ApsaraDB RDS for MySQL instances. DTS runs a subtask for each ApsaraDB RDS for MySQL instance. You can track the state of each subtask on the Task Topology page.

  • Initial full data synchronization uses read and write resources on both the source and destination databases, increasing load. Run synchronization during off-peak hours.

  • Concurrent INSERT operations during initial full data synchronization cause table fragmentation in the destination. After full synchronization, the destination tablespace will be larger than the source.

  • Do not use gh-ost or pt-online-schema-change to perform DDL operations on objects being synchronized. Otherwise, the synchronization task may fail.

  • Write data to the destination database through DTS only. If other tools write to the destination while Data Management (DMS) performs online DDL operations, data loss may occur in the destination.
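
The table fragmentation noted above can be reclaimed after full synchronization completes. A sketch, assuming a hypothetical table dest_db.orders and an off-peak maintenance window (OPTIMIZE TABLE rebuilds the table and briefly locks it):

```sql
-- Run per affected table in the destination, during off-peak hours.
OPTIMIZE TABLE dest_db.orders;
```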

Foreign key behavior

  • During schema synchronization, DTS synchronizes foreign keys from the source database to the destination.

  • During full and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source database during synchronization, data inconsistency may occur.
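
Conceptually, the session-level behavior described above is equivalent to wrapping writes in the following statements. This is only a sketch of the mechanism; DTS manages it internally, and you do not run these statements yourself:

```sql
SET SESSION foreign_key_checks = 0;  -- disable FK checks for this session only
-- ... replay full or incremental data ...
SET SESSION foreign_key_checks = 1;  -- restore the default
```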

Create a data synchronization task

The high-level steps are:

  1. Configure the source and destination databases.

  2. Select the objects to synchronize and configure advanced settings.

  3. Run a precheck.

  4. Purchase the data synchronization instance.

Step 1: Go to the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

    Navigation options may vary based on the console mode and layout. See Simple mode for details, or see Customize the layout and style of the DMS console to change the layout. Alternatively, go directly to the Data Synchronization Tasks page of the new DTS console.

Step 2: Configure the source and destination databases

  1. Select the region in which the data synchronization instance resides.

    In the new DTS console, select the region in the top navigation bar.
  2. Click Create Task. In the Create Data Synchronization Task wizard, configure the parameters described in the following tables.

    Warning

    After configuring the source and destination databases, read the Limits displayed on the page before proceeding. Ignoring these limits may cause task failures or data inconsistency.

    Source Database

    • Task Name: the name of the DTS task. DTS generates a name automatically. Specify a descriptive name to make the task easier to identify. The name does not need to be unique.

    • Database Type: select PolarDB-X 1.0.

    • Connection Type: select Alibaba Cloud Instance.

    • Instance Region: the region where the source PolarDB-X 1.0 instance resides.

    • Instance ID: the ID of the source PolarDB-X 1.0 instance.

    • Database Account: the database account of the source instance. For required permissions, see Required database account permissions.

    • Database Password: the password of the database account.

    Destination Database

    • Database Type: select PolarDB for MySQL.

    • Connection Type: select Alibaba Cloud Instance.

    • Instance Region: the region where the destination PolarDB for MySQL cluster resides.

    • PolarDB Cluster ID: the ID of the destination PolarDB for MySQL cluster.

    • Database Account: the database account of the destination cluster. For required permissions, see Required database account permissions.

    • Database Password: the password of the database account.
  3. Click Test Connectivity and Proceed. DTS automatically adds its server CIDR blocks to the whitelists of Alibaba Cloud database instances (such as ApsaraDB RDS for MySQL) and to the security group rules of Elastic Compute Service (ECS)-hosted databases. For self-managed databases in data centers or on third-party clouds, add the CIDR blocks manually. For more information, see the CIDR blocks of DTS servers section.

    Warning

    Adding DTS server CIDR blocks to whitelists or security group rules introduces security risks. Before proceeding, take the following precautions:

      • Use strong, unique usernames and passwords.

      • Limit exposed ports.

      • Authenticate API calls.

      • Regularly review whitelists and security group rules, and block unauthorized CIDR blocks.

      • Connect the database to DTS by using Express Connect, VPN Gateway, or Smart Access Gateway.

Step 3: Select objects and configure advanced settings

  1. Configure the synchronization settings:

    • Synchronization Types: Incremental Data Synchronization is selected by default. Also select Schema Synchronization and Full Data Synchronization. DTS first synchronizes historical data from the source to the destination as the baseline for incremental synchronization.

    • Processing Mode of Conflicting Tables: Precheck and Report Errors (default) checks whether the destination contains tables with the same names as tables in the source. If identical names exist, the precheck fails and the task cannot start. To resolve naming conflicts without deleting or renaming destination tables, use object name mapping. For more information, see Map object names. Ignore Errors and Proceed skips the identical-name check. Use this option with caution, because data inconsistency may occur: during full synchronization, if a destination record has the same primary key or unique key value as a source record, the destination record is kept; during incremental synchronization, the destination record is overwritten. If the schemas differ, initialization may fail or only some columns may be synchronized.

    • Capitalization of Object Names in Destination Instance: DTS default policy is selected by default. Change this setting to match the capitalization of object names in your source or destination database. For more information, see Specify the capitalization of object names in the destination instance.
  2. In the Source Objects section, select the objects to synchronize and click the rightwards arrow icon to move them to the Selected Objects section.

    Select tables rather than entire databases as sync objects. If you select an entire database, DTS does not synchronize CREATE TABLE or DROP TABLE changes from the source to the destination.
  3. (Optional) In the Selected Objects section, use object name mapping to rename objects in the destination database. For more information, see Map object names.

  4. Click Next: Advanced Settings and configure the following parameters:

    • Monitoring and Alerting: specifies whether to configure alerting. Valid values: No (does not enable alerting) and Yes (configures alerting). Select Yes to receive notifications when the task fails or when synchronization latency exceeds a threshold, and configure the alert threshold and notification contacts. For more information, see the Configure monitoring and alerting when you create a DTS task section.

    • Retry Time for Failed Connections: the time range during which DTS retries failed connections after the task starts. Valid values: 10 to 1440 minutes. Default value: 720 minutes. Set this to a value greater than 30 minutes. If DTS reconnects within this period, the task resumes. Otherwise, the task fails.

      Note: If multiple tasks share the same source or destination database, the shortest configured retry period applies to all of them. DTS continues to charge for the instance during retry attempts, so specify the retry time range based on your business requirements, and release the DTS instance promptly after the source and destination instances are released.

    • Configure ETL: specifies whether to enable the extract, transform, and load (ETL) feature. Select Yes to enable it and enter data processing statements in the code editor. For background on the feature, see What is ETL? For configuration steps, see Configure ETL in a data migration or data synchronization task.

Step 4: Run a precheck

  1. Click Next: Save Task Settings and Precheck. To view the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters before saving.

    The task cannot start until it passes the precheck.
  2. Review the precheck results:

    • If an item fails, click View Details to see the cause, fix the issue, and click Precheck Again.

    • If an alert is triggered for an item that can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.

  3. Wait for the Success Rate to reach 100%, then click Next: Purchase Instance.

Step 5: Purchase the data synchronization instance

  1. On the purchase page, configure the following parameters:

    • Billing Method: Subscription lets you pay upfront for a set duration and is more cost-effective for long-term use. Pay-as-you-go is billed hourly and suits short-term use; release the instance when it is no longer needed to stop charges.

    • Resource Group Settings: the resource group of the data synchronization instance. Default value: default resource group. For more information, see What is Resource Management?

    • Instance Class: the instance class determines the synchronization speed. Select a class based on your throughput requirements. For more information, see Instance classes of data synchronization instances.

    • Subscription Duration: available only for the Subscription billing method. Options: 1 to 9 months, 1 year, 2 years, 3 years, or 5 years.
  2. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.

  3. Click Buy and Start. In the dialog box that appears, click OK.

The task appears in the task list. You can monitor its progress there.