Use Data Transmission Service (DTS) to set up continuous, one-way data synchronization between two PolarDB-X 1.0 instances. DTS understands PolarDB-X 1.0's distributed architecture: it automatically creates a subtask for each underlying ApsaraDB RDS for MySQL instance and tracks their individual progress in Task Topology.
Two-way synchronization between PolarDB-X 1.0 instances is not supported.
Prerequisites
Before you begin, make sure you have:
Created both the source and destination PolarDB-X 1.0 instances
Created the target databases and tables in the destination instance, with schemas matching the objects to be synchronized — see Create a PolarDB-X 1.0 instance and Create a database
ApsaraDB RDS for MySQL as the storage type for both instances — PolarDB for MySQL is not supported
Enough available storage on the destination instance to hold all data from the source (the destination tablespace will be larger than the source due to fragmentation from concurrent INSERTs during full sync)
Binary log requirements for the ApsaraDB RDS for MySQL instances attached to PolarDB-X 1.0:
| Requirement | Full data sync only | Full + incremental sync |
|---|---|---|
| Binary logging enabled, binlog_row_image = full | Required — DTS fails the precheck otherwise | Required |
| Minimum binary log retention period | Not required | At least 7 days, for the entire duration of the initial full sync |
| Minimum retention period after full sync completes | Not required | 24 hours |
For incremental-only synchronization, retain binary logs for at least 24 hours.
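You can verify these settings on each attached ApsaraDB RDS for MySQL instance before configuring the task. A sketch of the relevant queries; the variable names below apply to standard MySQL 5.7/8.0 (retention is `expire_logs_days` on 5.7 and `binlog_expire_logs_seconds` on 8.0):

```sql
-- Confirm binary logging is on and uses full row images
SHOW VARIABLES LIKE 'log_bin';          -- expect: ON
SHOW VARIABLES LIKE 'binlog_format';    -- expect: ROW
SHOW VARIABLES LIKE 'binlog_row_image'; -- expect: FULL

-- Check the binary log retention period
SHOW VARIABLES LIKE 'expire_logs_days';            -- MySQL 5.7
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- MySQL 8.0
```

On ApsaraDB RDS for MySQL, these parameters are managed through the console's parameter settings rather than `SET GLOBAL`.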
Limitations
Source database constraints:
Tables must have a PRIMARY KEY or UNIQUE constraint, and the constrained fields must be unique; otherwise the destination may contain duplicate records
Tables with only UNIQUE constraints do not support schema synchronization; use tables with PRIMARY KEY constraints instead
Tables with secondary indexes cannot be synchronized
A single synchronization task supports up to 5,000 tables when you select individual tables as objects (rather than full databases); for more than 5,000 tables, configure multiple tasks or synchronize entire databases
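Before configuring the task, you can list source tables that lack a PRIMARY KEY or UNIQUE constraint. A sketch using `information_schema`, run against each attached RDS for MySQL instance; `my_database` is a hypothetical schema name:

```sql
-- Tables in `my_database` with neither a PRIMARY KEY nor a UNIQUE constraint
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'my_database'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```

Any table this returns should be given a primary key (or excluded from the task) before synchronization starts.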
Operational constraints during synchronization:
Do not scale the source instance (including its attached ApsaraDB RDS for MySQL instances), change physical database distribution for configured logical databases or tables, migrate hot tables, change shard keys, or run online data definition language (DDL) operations — any of these will cause the task to fail or produce inconsistent data
If you change the network type of the source PolarDB-X 1.0 instance, update the network connection settings of the synchronization task afterward
Do not use gh-ost or pt-online-schema-change for DDL operations on objects being synchronized
Write data to the destination database only through DTS — using other tools, including Data Management (DMS) online DDL, may cause data loss in the destination
Architecture constraints:
Read-only instances at the PolarDB-X 1.0 compute layer are not supported
Only horizontal splitting (by database or table) is supported; vertical splitting is not supported
Billing
| Synchronization type | Cost |
|---|---|
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged — see Billing overview |
Supported synchronization topologies
One-way one-to-one synchronization
One-way one-to-many synchronization
One-way cascade synchronization
One-way many-to-one synchronization
For the full list of topologies DTS supports, see Synchronization topologies.
SQL operations supported
| Operation type | Statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
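For reference, the incremental phase replicates row changes produced by statements such as the following (the `orders` table and its columns are illustrative):

```sql
INSERT INTO orders (id, status) VALUES (1001, 'created');
UPDATE orders SET status = 'paid' WHERE id = 1001;
DELETE FROM orders WHERE id = 1001;
```

DDL statements are not in the supported list, which is why schema changes during synchronization are restricted in the Limitations section above.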
Required permissions
| Database | Required permissions |
|---|---|
| Source PolarDB-X 1.0 instance | Read on the objects to be synchronized — see Manage accounts |
| Destination PolarDB-X 1.0 instance | Read and write on the destination objects |
Create a synchronization task
Schedule synchronization during off-peak hours to minimize impact on production workloads. DTS uses read and write resources from both source and destination databases during the initial full data synchronization phase.
Step 1: Open the Data Synchronization Tasks page
Log on to the Data Management (DMS) console.
In the top navigation bar, click Data + AI.
In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.
The menu structure may differ depending on your DMS console mode. See Simple mode and Customize the layout and style of the DMS console for details. You can also go directly to the Data Synchronization Tasks page.
Step 2: Select the region
On the Data Synchronization Tasks page, select the region where your synchronization instance resides.
In the new DTS console, select the region in the top navigation bar instead.
Step 3: Configure source and destination databases
Click Create Task. In the Create Data Synchronization Task wizard, configure the following parameters.
Read the Limits section displayed on the page before proceeding — misconfigured tasks may fail or produce inconsistent data.
Source Database
| Parameter | Value |
|---|---|
| Task Name | A descriptive name to identify this task. DTS generates a default name; uniqueness is not required. |
| Database Type | PolarDB-X 1.0 |
| Connection Type | Alibaba Cloud Instance |
| Instance Region | The region of the source PolarDB-X 1.0 instance |
| Instance ID | The ID of the source PolarDB-X 1.0 instance |
| Database Account | The account with read permissions on the source — see Required permissions |
| Database Password | The password for the database account |
Destination Database
| Parameter | Value |
|---|---|
| Database Type | PolarDB-X 1.0 |
| Connection Type | Alibaba Cloud Instance |
| Instance Region | The region of the destination PolarDB-X 1.0 instance |
| Instance ID | The ID of the destination PolarDB-X 1.0 instance |
| Database Account | The account with read and write permissions on the destination — see Required permissions |
| Database Password | The password for the database account |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
DTS automatically adds its server CIDR blocks to the whitelist of Alibaba Cloud database instances. For self-managed databases hosted on Elastic Compute Service (ECS), DTS adds the CIDR blocks to ECS security group rules — if the database spans multiple ECS instances, add the CIDR blocks manually to each one. For on-premises or third-party databases, add the CIDR blocks manually to the database whitelist. For the full list of DTS server CIDR blocks, see Add the CIDR blocks of DTS servers.
Adding DTS CIDR blocks to your whitelist or security group rules carries security risks. Before proceeding, take precautions such as using strong credentials, limiting exposed ports, authenticating API calls, reviewing whitelist rules regularly, and removing unauthorized CIDR blocks. For a more secure connection, use Express Connect, VPN Gateway, or Smart Access Gateway.
Step 5: Configure objects and synchronization settings
| Setting | Description |
|---|---|
| Synchronization Types | Incremental Data Synchronization is selected by default. Select Full Data Synchronization if you also want to sync historical data. Schema Synchronization cannot be selected independently. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if the destination contains tables with the same names as the source. Use object name mapping if you cannot delete or rename conflicting tables. Ignore Errors and Proceed: skips the name-conflict precheck. During full sync, conflicting records are retained in the destination unchanged; during incremental sync, they are overwritten. Proceed with caution — schema mismatches can cause partial sync failures. |
| Capitalization of object names in destination instance | Controls the case of database, table, and column names in the destination. Default is DTS default policy. See Specify the capitalization of object names. |
| Source Objects | Select tables or databases from the source list, then click the arrow icon to move them to Selected Objects. If you select an entire database as the object to be synchronized, DTS does not synchronize tables created in or deleted from the source database to the destination database. |
| Selected Objects | Right-click a single object to rename it or filter rows with a WHERE clause. Click Batch Edit in the upper-right corner to rename multiple objects at once. See Map object names and Specify filter conditions. |
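A filter condition on a selected table is the body of an SQL WHERE clause, entered without the `WHERE` keyword. For example, to synchronize only recent rows of a hypothetical `orders` table, you might enter a condition like:

```sql
-- Entered in the filter field for the table; the column name is illustrative
gmt_created >= '2024-01-01 00:00:00'
```

Only rows matching the condition are synchronized; rows that do not match are skipped in both the full and incremental phases.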
Step 6: Configure advanced settings
Click Next: Advanced Settings and configure the following parameters.
| Parameter | Description |
|---|---|
| Monitoring and Alerting | Select Yes to receive notifications when the task fails or synchronization latency exceeds a threshold. Configure the alert threshold and notification contacts. Select No to skip alerting. See Configure monitoring and alerting. |
| Retry time for failed connections | How long DTS retries before marking the task as failed. Range: 10–1440 minutes. Default: 720 minutes. We recommend setting this to more than 30 minutes. If multiple tasks share the same source or destination database, the shortest retry time among them takes effect. DTS instance charges continue to accrue during retries. |
| Configure ETL | Select Yes to transform data in transit using extract, transform, and load (ETL) rules. See Configure ETL and What is ETL? |
Step 7: Run the precheck
Click Next: Save Task Settings and Precheck. DTS validates your configuration before starting the task. To review the OpenAPI parameters for this configuration, hover over the button and click Preview OpenAPI parameters before clicking through.
If any precheck item fails, click View Details next to the failed item, address the issue, and click Precheck Again.
If an item shows as an alert rather than a hard failure:
If the alert cannot be ignored, fix the issue and rerun the precheck.
If the alert can be ignored, click Confirm Alert Details, then Ignore, then OK, and then Precheck Again. Ignoring alerts may result in data inconsistency.
Step 8: Purchase the synchronization instance
Wait until Success Rate reaches 100%, then click Next: Purchase Instance. Configure the instance as follows.
| Parameter | Description |
|---|---|
| Billing Method | Subscription: pay upfront; available for 1–9 months or 1, 2, 3, or 5 years. More cost-effective for long-term use. Pay-as-you-go: billed hourly; release the instance when you no longer need it to stop charges. |
| Resource group settings | The resource group for this instance. Default: default resource group. See What is Resource Management? |
| Instance Class | The synchronization throughput class. See Instance classes of data synchronization instances. |
| Subscription Duration | Available only for the subscription billing method. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.
The task appears in the task list. DTS runs a separate subtask for each ApsaraDB RDS for MySQL instance attached to the source PolarDB-X 1.0 instance. Monitor subtask status in Task Topology.
What's next
Synchronization topologies — explore supported topology patterns for more complex setups
Map object names — rename objects in the destination without changing the source
Configure monitoring and alerting — set up latency and failure alerts for the task
Billing overview — understand how incremental synchronization is billed