Use Data Transmission Service (DTS) to continuously synchronize data from a logical database in Data Management (DMS) to an AnalyticDB for MySQL V3.0 cluster. After synchronization starts, the destination cluster reflects changes from your source databases in near real time, supporting internal business intelligence (BI) systems, interactive query systems, and real-time reporting.
How it works
DTS syncs data in three phases:
Schema synchronization — DTS copies table structures from the source logical database to the destination cluster. Foreign keys are not copied.
Full data synchronization — DTS performs an initial bulk load of all existing data. This phase consumes read and write resources on both the source and destination databases and can increase their load.
Incremental data synchronization — After the initial load completes, DTS reads binary logs from the underlying PolarDB for MySQL clusters to capture ongoing changes (INSERT, UPDATE, DELETE) and applies them in near real time.
Because incremental sync relies on binary logs, the source databases must retain logs long enough for DTS to process them. If logs are purged before DTS reads them, the task fails and data may be lost.
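The three phases can be sketched as a toy pipeline. This is an illustration only, with hypothetical data structures; real DTS streams binary log events rather than in-memory dicts.

```python
def sync(schema, rows, binlog):
    # Phase 1: schema synchronization (foreign key definitions are dropped).
    dest_schema = {"columns": list(schema["columns"]), "foreign_keys": []}
    # Phase 2: full data synchronization (bulk-load all existing rows).
    dest_rows = dict(rows)
    # Phase 3: incremental synchronization (replay binlog events in order).
    for op, pk, value in binlog:
        if op == "DELETE":
            dest_rows.pop(pk, None)
        else:  # INSERT or UPDATE
            dest_rows[pk] = value
    return dest_schema, dest_rows

schema = {"columns": ["id", "name"], "foreign_keys": ["fk_orders"]}
rows = {1: "alice", 2: "bob"}
binlog = [("UPDATE", 2, "bobby"), ("INSERT", 3, "carol"), ("DELETE", 1, None)]
dest_schema, dest_rows = sync(schema, rows, binlog)
print(dest_rows)  # {2: 'bobby', 3: 'carol'}
```

Note that the destination converges to the source state only if every binlog event is still available when phase 3 replays it, which is why log retention matters.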
Prerequisites
Before you begin, make sure you have:
A logical database configured in DMS, built from database shards across multiple PolarDB for MySQL clusters. Without this, the source database type DMS LogicDB cannot be selected. See Logical database.
All PolarDB for MySQL clusters that host the DMS physical databases in the same region, sharing a single database account and password that has read permissions on the physical databases. If the clusters span regions or use different credentials, DTS cannot connect to the source. See Create and manage a database account.
A destination AnalyticDB for MySQL V3.0 cluster already created with available storage space larger than the total data size in the source logical database. If storage is insufficient, full sync may fail partway through. See Create a cluster.
Limitations
Hard limits — task will fail if violated
| Limitation | Detail |
|---|---|
| Regions | Only available in the China (Shanghai) and Singapore regions. |
| Binary logging | The loose_polar_log_bin parameter must be set to on. If not, the precheck fails and the task cannot start. |
| Binary log retention | For incremental-only synchronization, retain binary logs for at least 24 hours. For full + incremental synchronization, retain them for at least 7 days. If logs are purged before DTS reads them, the task fails and data may be lost or inconsistent. After the full data synchronization phase completes, you can reduce the retention period to at least 24 hours. |
| Primary key or unique key required | Each table to be synchronized must have a PRIMARY KEY or a UNIQUE constraint, and the constrained fields must be unique. Tables without such constraints can produce duplicate records in the destination. |
| Prefix indexes | Prefix indexes cannot be synchronized. Tables with prefix indexes may fail to sync. |
| DDL during sync | Do not execute DDL statements that change database or table schemas during schema synchronization or full data synchronization. Do not run ALTER TABLE table_name COMMENT='...' at any point during synchronization. These operations cause the task to fail. |
| Primary key in destination | Specify a custom primary key in the destination, or configure Primary Key Column in the Configurations for databases, tables, and columns step. Without a primary key, synchronization may fail. |
| AnalyticDB backup conflict | If the destination cluster runs a backup while the DTS task is active, the task fails. |
| Table limit when renaming objects | Renaming tables or columns via the object mapping feature is limited to 1,000 tables per task. Exceeding this limit returns an error. To sync more tables, split them across multiple tasks, or sync at the database level instead. |
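The binary log retention rule above is easy to get wrong when a task mixes phases. The following sketch encodes the documented minimums (24 hours for incremental-only, 7 days for full + incremental); the function itself is our illustration, not a DTS API.

```python
def retention_ok(retention_hours, full_sync_selected):
    """Return True if binlog retention meets the documented minimum."""
    # Full + incremental sync requires 7 days; incremental-only needs 24 hours.
    required = 7 * 24 if full_sync_selected else 24
    return retention_hours >= required

print(retention_ok(72, full_sync_selected=True))   # False: 7 days required
print(retention_ok(24, full_sync_selected=False))  # True
```

A retention value that passes for an incremental-only task can still fail a full + incremental task, so check the value against the phases you actually select.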
Performance warnings
| Situation | Impact |
|---|---|
| AnalyticDB disk usage reaches 80% | Write performance degrades and the DTS task may experience lag. |
| AnalyticDB disk usage reaches 90% | Data cannot be written; errors are returned. Estimate the required disk space before starting the task. |
| DDL tools (pt-online-schema-change) | Avoid using pt-online-schema-change or similar tools on source tables during sync. Use DMS online DDL operations instead. See Perform lock-free DDL operations. |
| Concurrent writes from other sources | Use DTS as the sole writer to the destination. Writing from other tools simultaneously can cause data loss, especially when DMS performs online DDL operations. |
| Initial full sync — tablespace fragmentation | Concurrent INSERT operations during the full sync phase fragment tables in the destination. After full sync completes, the destination tablespace is typically larger than the source. |
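The two disk-usage thresholds above translate into distinct behaviors. A minimal sketch of that mapping, assuming a simple used/total ratio (the function and return labels are ours):

```python
def write_status(used_bytes, total_bytes):
    """Map AnalyticDB disk usage to the documented write behavior."""
    usage = used_bytes / total_bytes
    if usage >= 0.90:
        return "writes rejected"   # at 90%: data cannot be written, errors returned
    if usage >= 0.80:
        return "degraded"          # at 80%: write performance drops, sync may lag
    return "ok"

print(write_status(85 * 2**30, 100 * 2**30))  # degraded
```

Because full sync typically leaves the destination tablespace larger than the source, size estimates should include headroom well below the 80% threshold.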
Other notes
DTS does not synchronize foreign keys. During full and incremental sync, DTS also disables foreign key constraint checks and cascade operations at the session level. If cascade UPDATE or DELETE operations run on the source during sync, data inconsistency may occur.
UPDATE statements written to AnalyticDB for MySQL V3.0 are automatically converted to REPLACE INTO. If the UPDATE targets the primary key column, DTS converts it to DELETE followed by INSERT.
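The conversion described above can be sketched as follows. The statement builder is a simplified illustration; real DTS generates statements from binlog row images and handles multi-row changes.

```python
def rewrite_update(table, pk_col, old_row, new_row):
    """Return the destination statements produced for one source UPDATE."""
    cols = ", ".join(new_row)
    vals = ", ".join(repr(v) for v in new_row.values())
    if new_row[pk_col] != old_row[pk_col]:
        # An UPDATE that changes the primary key becomes DELETE + INSERT.
        return [
            f"DELETE FROM {table} WHERE {pk_col} = {old_row[pk_col]!r}",
            f"INSERT INTO {table} ({cols}) VALUES ({vals})",
        ]
    # Any other UPDATE becomes REPLACE INTO (insert-or-overwrite by key).
    return [f"REPLACE INTO {table} ({cols}) VALUES ({vals})"]

stmts = rewrite_update("orders", "id",
                       old_row={"id": 1, "status": "new"},
                       new_row={"id": 1, "status": "paid"})
print(stmts[0])  # REPLACE INTO orders (id, status) VALUES (1, 'paid')
```

This is also why a reliable primary or unique key matters: REPLACE INTO can only overwrite the intended record if the key uniquely identifies it.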
If a task fails, DTS technical support attempts to restore it within 8 hours. During restoration, the task may be restarted and certain task parameters may be modified; database parameters are never changed. The parameters that may be modified include but are not limited to those described in the Modify the parameters of a DTS instance topic.
Billing
| Synchronization phase | Cost |
|---|---|
| Schema synchronization and full data synchronization | Free |
| Incremental data synchronization | Charged. See Billing overview. |
Supported synchronization topologies
One-way one-to-one synchronization
One-way one-to-many synchronization
One-way many-to-one synchronization
For the full list of supported topologies, see Synchronization topologies.
SQL operations that can be synchronized
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
DDL operations are not synchronized in this scenario.
Create a synchronization task
The following steps walk you through creating a DTS synchronization task from the DTS console or the Data Management (DMS) console.
Step 1: Go to the Data synchronization page
From the DTS console:
Log on to the DTS console.
In the left-side navigation pane, click Data Synchronization.
In the upper-left corner, select the region where the data synchronization instance will reside.
From the DMS console:
The navigation steps may vary depending on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.
From the drop-down list to the right of Data Synchronization Tasks, select the region.
Step 2: Create a task
Click Create Task to open the task configuration page.
Step 3: Configure the source and destination databases
After configuring the source and destination databases, review the Limits displayed on the page before proceeding. Skipping this step may cause the task to fail or result in data inconsistency.
Configure the following parameters:
| Section | Parameter | Value |
|---|---|---|
| N/A | Task Name | Enter a descriptive name. DTS generates a default name, but a meaningful name makes the task easier to identify. Names do not need to be unique. |
| Source Database | Database Type | Select DMS LogicDB. |
| | Access Method | Select Alibaba Cloud Instance. |
| | Instance Region | Select the region where the DMS logical database resides. |
| | Database Account | Enter the shared database account for the PolarDB for MySQL clusters hosting the DMS physical databases. The account must have read permissions on the physical databases. See Create and manage a database account. |
| | Database Password | Enter the database password. |
| Destination Database | Database Type | Select AnalyticDB for MySQL 3.0. |
| | Access Method | Select Alibaba Cloud Instance. |
| | Instance Region | Select the region where the destination AnalyticDB for MySQL V3.0 cluster resides. |
| | Instance ID | Select the destination AnalyticDB for MySQL V3.0 cluster. |
| | Database Account | Enter the database account for the destination cluster. The account must have read and write permissions on the destination database. |
| | Database Password | Enter the database password. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
Make sure that the CIDR blocks of DTS servers are added to the security settings of both the source and destination databases. See Add the CIDR blocks of DTS servers.
Step 5: Configure objects to synchronize
Configure synchronization types and objects:
| Parameter | Description |
|---|---|
| Synchronization Types | By default, Incremental Data Synchronization is selected. You must also select Schema Synchronization and Full Data Synchronization. After the precheck is complete, DTS synchronizes the historical data of the selected objects from the source database to the destination cluster. The historical data is the basis for subsequent incremental synchronization. When schema synchronization and full data synchronization are both selected, DTS also synchronizes tables created with CREATE TABLE. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): fails the precheck if tables with identical names exist in both source and destination. Use object name mapping to rename destination tables if needed. Ignore Errors and Proceed: skips this precheck. During full sync, records with matching primary or unique key values in the destination are kept as-is. During incremental sync, they are overwritten. Schema mismatches may cause initialization failures or partial column sync. Use this option with caution. |
| Source Objects | Select objects from Source Objects and move them to Selected Objects. Select columns, tables, or entire databases. Selecting tables or columns excludes views, triggers, and stored procedures. When syncing an entire database, tables with primary keys use those columns as distribution keys. Tables without primary keys get an auto-generated auto-increment primary key, which may cause data inconsistency between source and destination. |
| Selected Objects | To rename individual objects, right-click an object and use the mapping options. To rename multiple objects at once, click Batch Edit. To filter specific SQL operations or add WHERE conditions for a table, right-click the object. See Specify filter conditions. |
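The "Ignore Errors and Proceed" semantics in the table above differ between phases. A small sketch of that behavior, using a hypothetical key-to-row mapping in place of real tables:

```python
def apply_row(dest, key, row, phase):
    """Apply one source row under 'Ignore Errors and Proceed'."""
    if phase == "full" and key in dest:
        return              # full sync keeps the existing destination record
    dest[key] = row         # incremental sync overwrites by key

dest = {1: "existing"}
apply_row(dest, 1, "from-source", phase="full")         # kept as-is
apply_row(dest, 1, "from-source", phase="incremental")  # overwritten
print(dest)  # {1: 'from-source'}
```

Because pre-existing destination rows survive the full sync phase but are later overwritten by any incremental change to the same key, the destination can hold a mix of old and new data; this is why the option warrants caution.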
Configure advanced settings:
Click Next: Advanced Settings and configure the following:
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules the task on a shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | The duration DTS retries if it cannot connect to the source or destination after the task starts. Range: 10–1440 minutes. Default: 720. Set this to more than 30 minutes. If multiple tasks share the same source or destination with different retry times, the shortest value takes precedence. The DTS instance continues to incur charges during retry. |
| Retry Time for Other Issues | The duration DTS retries if DML or DDL operations fail during the task. Range: 1–1440 minutes. Default: 10. Set this to more than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Synchronization | Limits the read/write rate during full data synchronization to reduce load on source and destination. Configure QPS to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. |
| Enable Throttling for Incremental Data Synchronization | Limits the write rate during incremental sync. Configure RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s). |
| Environment Tag | An optional label for the DTS instance, for organizational purposes. |
| Configure ETL | Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. Select No to skip. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Select Yes to receive notifications when the task fails or synchronization latency exceeds a threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting. |
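The two retry-time parameters above carry three documented constraints: each has a valid range, and Retry Time for Other Issues must be less than Retry Time for Failed Connections. A sketch of that validation (the function is ours; the rules come from the table):

```python
def validate_retry(conn_retry_min, other_retry_min):
    """Check retry settings against the documented ranges and ordering."""
    errors = []
    if not 10 <= conn_retry_min <= 1440:
        errors.append("Retry Time for Failed Connections must be 10-1440 minutes")
    if not 1 <= other_retry_min <= 1440:
        errors.append("Retry Time for Other Issues must be 1-1440 minutes")
    if other_retry_min >= conn_retry_min:
        errors.append("Retry Time for Other Issues must be less than "
                      "Retry Time for Failed Connections")
    return errors

print(validate_retry(720, 10))  # [] -> the defaults pass
```

Remember that when several tasks share an endpoint, the shortest connection-retry value among them is the one that effectively applies.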
(Optional) Configure destination table structure:
Click Next: Configure Database and Table Fields to configure the Type, Primary Key Column, Distribution Key, Partition Key, Partitioning Rules, and Partition Lifecycle for destination tables.
This step is only available when Schema Synchronization is selected. Set Definition Status to All to see all tables. The Primary Key Column field supports composite primary keys — if you set a composite key, specify at least one column as the Distribution Key and Partition Key. See CREATE TABLE.
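One reading of the composite-key rule above is that the distribution key and the partition key must each include at least one primary key column. A hedged sketch of that check, with hypothetical names and list-based key definitions:

```python
def keys_valid(primary_key, distribution_key, partition_key):
    """True if both keys draw at least one column from the primary key."""
    pk = set(primary_key)
    return bool(pk & set(distribution_key)) and bool(pk & set(partition_key))

print(keys_valid(["id", "dt"], ["id"], ["dt"]))      # True
print(keys_valid(["id", "dt"], ["region"], ["dt"]))  # False
```

See CREATE TABLE for the authoritative constraints on distribution and partition keys in AnalyticDB for MySQL.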
Step 6: Run the precheck
Click Next: Save Task Settings and Precheck.
To preview the OpenAPI parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.
DTS runs a precheck before the task can start. If the precheck fails:
Click View Details next to the failed item, troubleshoot the issue, then click Precheck Again.
If an alert item can be safely ignored, click Confirm Alert Details, then click Ignore in the dialog. Click OK, then Precheck Again. Ignoring an alert may cause data inconsistency.
Step 7: Purchase the instance
Wait for the Success Rate to reach 100%, then click Next: Purchase Instance.
On the purchase page, configure the following:
| Parameter | Description |
|---|---|
| Billing Method | Subscription: pay upfront for a set period; more cost-effective for long-term use. Pay-as-you-go: billed hourly; suitable for short-term use. Release pay-as-you-go instances when no longer needed to avoid unnecessary charges. |
| Resource Group Settings | The resource group for the instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Determines synchronization speed. Select based on your throughput requirements. See Instance classes of data synchronization instances. |
| Subscription Duration | Available when Subscription is selected. Options: 1–9 months, or 1, 2, 3, or 5 years. |
Read and select Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, then click OK in the confirmation dialog.
The task appears in the task list. Monitor its progress there.
What's next
To monitor synchronization latency and task health, see Configure monitoring and alerting.
To modify task parameters after the task is running, see Modify the parameters of a DTS instance.
To rename synchronized objects, see Map object names.
To filter specific rows during synchronization, see Specify filter conditions.