Use Data Transmission Service (DTS) to keep an AnalyticDB for MySQL 3.0 cluster in sync with an ApsaraDB RDS for MySQL instance. This is the standard approach for feeding a real-time analytics layer — business intelligence (BI) dashboards, interactive queries, and operational reports — without touching the transactional database.
How it works
DTS runs in three phases:
Schema synchronization — DTS replicates table structures from RDS MySQL to the destination cluster.
Full data synchronization — DTS performs an initial bulk load of all selected data. While this runs, DTS continues reading the source binary log to track changes.
Incremental data synchronization — DTS applies ongoing changes (INSERT, UPDATE, DELETE) from the binary log to the destination cluster, keeping the two databases in sync.
Schema synchronization and full data synchronization are free. Incremental synchronization is billed on a pay-as-you-go or subscription basis. See Billing overview.
What's supported
Sync architectures
One-to-one (one source to one destination)
One-to-many (one source to multiple destinations)
Many-to-one (multiple sources to one destination, using the table merging feature)
SQL operations
| Type | Supported operations | Notes |
|---|---|---|
| DML | INSERT, UPDATE, DELETE | UPDATE is converted to REPLACE INTO in AnalyticDB for MySQL. If the primary key is updated, DTS converts it to DELETE + INSERT. |
| DDL | CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, ADD COLUMN, MODIFY COLUMN, DROP COLUMN | See the warning about RENAME TABLE below. |
Changing a field type in a source table during synchronization causes the task to stop with an error. See Recover from a field type change error.
A RENAME TABLE operation can cause data loss if the synchronization object is a single table. To avoid this, synchronize the entire database that contains the table so that both the pre-rename and post-rename table names are included in the synchronization scope.
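As an illustration of the DML conversion described above, consider a hypothetical `orders` table (the table name and columns are placeholders for this sketch). A source-side UPDATE that does not touch the primary key is applied to AnalyticDB for MySQL as a REPLACE INTO, while an UPDATE that changes the primary key becomes a DELETE followed by an INSERT:

```sql
-- Source (RDS MySQL): a non-key column changes
UPDATE orders SET status = 'shipped' WHERE order_id = 42;
-- Applied by DTS to AnalyticDB for MySQL as:
REPLACE INTO orders (order_id, status, amount) VALUES (42, 'shipped', 19.90);

-- Source: the primary key itself changes
UPDATE orders SET order_id = 43 WHERE order_id = 42;
-- Applied by DTS as a delete-plus-insert pair:
DELETE FROM orders WHERE order_id = 42;
INSERT INTO orders (order_id, status, amount) VALUES (43, 'shipped', 19.90);
```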
Prerequisites
Before you begin, make sure you have:
An AnalyticDB for MySQL 3.0 cluster. See Create a cluster.
A destination cluster with more available storage than the data volume used by the source RDS MySQL instance.
A database account on the source RDS MySQL instance with the REPLICATION CLIENT, REPLICATION SLAVE, and SELECT permissions on the objects to be synchronized.
A database account on the destination AnalyticDB for MySQL 3.0 cluster with read and write permissions.
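The source account permissions listed above can be granted with standard MySQL statements. A minimal sketch, assuming an account named `dts_sync` and a source database named `sales` (both names are placeholders):

```sql
CREATE USER 'dts_sync'@'%' IDENTIFIED BY 'your_password';
-- Replication privileges are global and cannot be scoped to one database
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_sync'@'%';
-- SELECT only on the objects to be synchronized
GRANT SELECT ON sales.* TO 'dts_sync'@'%';
FLUSH PRIVILEGES;
```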
Limitations
During schema synchronization, DTS does not replicate foreign keys. During full and incremental synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations run on the source while the task is active, data inconsistency may occur.
Source database limitations
| Limitation | Details |
|---|---|
| Primary key or unique constraint required | Tables must have a PRIMARY KEY or UNIQUE constraint. Without it, duplicate rows may appear in the destination. |
| Table limit for object name mapping | For table-level synchronization that requires object name mapping (such as renaming tables or columns), a single task supports up to 1,000 tables. If you exceed this limit, split the objects across multiple tasks or synchronize at the database level instead. |
| No DDL on primary keys or table comments during sync | Do not run ALTER TABLE table_name COMMENT='...' or any DDL that modifies primary keys while the task is running. |
| No schema DDL during schema sync or full sync | Do not change database or table schemas during schema synchronization or full data synchronization. During the full synchronization phase, DTS holds metadata locks on the source, which can block DDL. |
| Binary log changes only | DTS does not replicate changes that are not recorded in binary logs — for example, data restored from a physical backup or generated by cascade operations. To recover, remove and re-add the affected table from the synchronization objects. See Modify synchronization objects. |
| MySQL 8.0.23+ invisible columns | Invisible columns cannot be read by DTS and their data will be lost. Before starting the task, run: ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;. See Invisible Columns. |
Binary log requirements by source type:
For ApsaraDB RDS for MySQL:
Binary logging is enabled by default.
Set `binlog_row_image` to `full`. If this parameter is not set correctly, the precheck fails. See Set instance parameters.
Retain binary logs for at least 3 days (7 days recommended). See Automatically delete local logs.
For self-managed MySQL:
Enable binary logging, set
binlog_formattorow, and setbinlog_row_imagetofull.For a primary/primary cluster (where both nodes are the primary for each other), enable
log_slave_updates. See Create a database account for a self-managed MySQL database and configure binary logging.Retain binary logs for at least 7 days.
If binary log retention is shorter than the required period, the DTS task may fail and data inconsistency or data loss may occur. This is not covered by the DTS Service-Level Agreement (SLA).
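You can verify the binary log settings above with standard MySQL statements before configuring the task (a sketch; note that `log_bin` itself cannot be changed at runtime and must be enabled in the server configuration):

```sql
-- Verify the current settings
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'binlog_row_image';

-- binlog_format and binlog_row_image can be adjusted dynamically on a
-- self-managed instance (new sessions pick up the change):
SET GLOBAL binlog_format = 'ROW';
SET GLOBAL binlog_row_image = 'FULL';
```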
Other limitations
Prefix indexes are not supported. If the source has prefix indexes, synchronization may fail.
Online DDL operations that use temporary tables (for example, merging multiple tables) may cause data loss or task failure.
The following object types are not synchronized: INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, and foreign keys (FK).
Primary key or unique key conflicts:
If table schemas match and the destination has a conflicting record: during full synchronization, DTS retains the destination record and skips the source record; during incremental synchronization, DTS overwrites the destination record.
If table schemas differ, data initialization may fail, resulting in partial or complete synchronization failure.
The destination table must have a custom primary key, or you must configure a Primary Key Column in the Configurations for Databases, Tables, and Columns step.
If a node in the AnalyticDB for MySQL cluster exceeds 80% disk usage, the DTS task becomes abnormal and latency increases. Estimate the required space before starting, accounting for the fact that concurrent INSERTs during the full synchronization phase cause the destination tablespace to be larger than the source.
If the destination cluster is being backed up while the DTS task runs, the task fails.
Run synchronization during off-peak hours to minimize the load on both source and destination databases.
For table-level synchronization, do not use pt-online-schema-change for online DDL on synchronized objects. Use Data Management (DMS) instead.
Do not write data to the destination from sources other than DTS while the task is running. This causes data inconsistency.
If a DDL statement fails to apply to the destination, the task continues running. Check the task logs for failed DDL statements. See View task logs.
If the RDS MySQL instance has the always-confidential database (EncDB) feature enabled, full data synchronization is not supported. For RDS MySQL instances with Transparent Data Encryption (TDE) enabled, schema synchronization, full data synchronization, and incremental data synchronization are supported.
If the task fails, DTS support attempts recovery within 8 hours. Only DTS task parameters are modified during recovery — database parameters are not changed.
Special cases for self-managed MySQL
A primary/secondary switchover during synchronization causes the task to fail.
If no DML operations occur on the source for an extended period, the displayed synchronization latency may be inaccurate. Run a DML operation on the source to refresh the latency value. Alternatively, if you are synchronizing an entire database, create a heartbeat table — DTS updates it every second.
DTS periodically runs `` CREATE DATABASE IF NOT EXISTS `test` `` on the source to advance the binary log offset.
For Amazon Aurora MySQL or other cluster-mode MySQL instances, the IP address or domain name configured for the DTS task must always resolve to the read/write (RW) node. If it resolves to a read-only node, the task may not run correctly.
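The heartbeat-table workaround mentioned above can be sketched as follows (the table and column names are assumptions; schedule the upsert from an external job so the source always has recent DML for DTS to observe):

```sql
-- A minimal heartbeat table on the source database
CREATE TABLE IF NOT EXISTS dts_heartbeat (
    id          INT PRIMARY KEY,
    update_time DATETIME NOT NULL
);

-- Run periodically (for example, once per second) to generate DML
INSERT INTO dts_heartbeat (id, update_time) VALUES (1, NOW())
    ON DUPLICATE KEY UPDATE update_time = NOW();
```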
Special cases for ApsaraDB RDS for MySQL
Read-only instances of RDS for MySQL 5.6 are not supported as a source because they do not record transaction logs.
DTS periodically runs `` CREATE DATABASE IF NOT EXISTS `test` `` on the source to advance the binary log offset.
Create a synchronization task
Step 1: Open the task list
Go to the synchronization task list in one of two ways:
DTS console
Log on to the DTS console.
In the left navigation pane, click Data Synchronization.
In the upper-left corner, select the region where the synchronization instance will be located.
DMS console
The DMS console layout may vary. See Simple mode console for guidance.
Log on to the DMS console.
In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.
To the right of Data Synchronization Tasks, select the region.
Step 2: Configure source and destination databases
Click Create Task, then configure the source and destination.
After selecting source and destination instances, review the Limits shown at the top of the page before proceeding.
Task Name: DTS generates a name automatically. Specify a descriptive name for easy identification; the name does not need to be unique.
Source Database
| Parameter | Value |
|---|---|
| Select Existing Connection | Select a registered instance from the list, or leave blank to configure manually. In the DMS console, this is labeled Select a DMS database instance. |
| Database Type | MySQL |
| Connection Type | Cloud Instance |
| Instance Region | Region of the source RDS MySQL instance |
| Cross-account | No (this task is within the same Alibaba Cloud account) |
| RDS Instance ID | ID of the source RDS MySQL instance |
| Database Account | Account with REPLICATION CLIENT, REPLICATION SLAVE, and SELECT permissions |
| Database Password | Password for the database account |
| Connection Method | Non-encrypted or SSL-encrypted. If you select SSL-encrypted, enable SSL encryption on the RDS instance first. See Use a cloud certificate to quickly enable SSL link encryption. |
Destination Database
| Parameter | Value |
|---|---|
| Select Existing Connection | Select a registered instance from the list, or leave blank to configure manually. |
| Database Type | AnalyticDB MySQL 3.0 |
| Connection Type | Cloud Instance |
| Instance Region | Region of the destination AnalyticDB for MySQL 3.0 cluster |
| Instance ID | ID of the destination cluster |
| Database Account | Account with read and write permissions |
| Database Password | Password for the database account |
Click Test Connectivity and Proceed.
DTS needs network access to both databases. Add the DTS server CIDR blocks to the security settings of both databases. See Add the IP address whitelist of DTS servers. For self-managed databases, also click Test Connectivity in the CIDR Blocks of DTS Servers dialog.
Step 3: Select objects and configure synchronization settings
On the Configure Objects page, configure the following:
Synchronization types: DTS always enables Incremental Data Synchronization. Also select Schema Synchronization and Full Data Synchronization (both are selected by default). With full synchronization selected, DTS initializes the destination with a complete copy of the source data before starting incremental sync.
Processing mode of conflicting tables:
Precheck and Report Errors (default): If the destination has tables with the same name as the source objects, the precheck fails and the task does not start. To rename conflicting tables rather than deleting them, use object name mapping.
Ignore Errors and Proceed: Skips the name conflict check. Use with caution — this may cause data inconsistency.
DDL and DML operations to synchronize: Select the SQL operations to synchronize at the instance level. To configure at the database or table level, right-click the object in the Selected Objects list.
Merge tables: Select Yes if you are consolidating sharded tables (OLTP sharding) into a single table in AnalyticDB for MySQL (OLAP). DTS adds a __dts_data_source column of the TEXT type to the destination table to identify the source of each row (format: DTS instance ID:database name:schema name:table name). The table merging feature applies at the task level — to merge only specific tables, create a separate task for them. Select No for standard synchronization.
Do not perform DDL operations that change the database or table schema in the source while table merging is enabled.
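After table merging, each row in the merged destination table carries its origin in the `__dts_data_source` column, which you can use to tell the source shards apart. A query sketch, assuming a merged table named `orders_merged` (a placeholder name):

```sql
-- Row counts per source shard in the merged table
SELECT __dts_data_source, COUNT(*) AS row_count
FROM orders_merged
GROUP BY __dts_data_source;
```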
Capitalization of object names: Configure the case-sensitivity policy for object names in the destination. The default is DTS default policy. See Case policy for destination object names.
Source Objects and Selected Objects: In the Source Objects box, select the databases, tables, or columns to synchronize, then click the right-arrow button to move them to the Selected Objects box.
To rename a single object in the destination, right-click it in Selected Objects and use the object name mapping feature.
To rename multiple objects in bulk, click Batch Edit in the upper-right corner of the Selected Objects box.
To filter rows with a WHERE clause, right-click a table in Selected Objects and set the filter condition. See Set a filter condition.
If you synchronize an entire database: tables with a primary key use that key as the distribution key; tables without a primary key get an auto-generated one, which may cause data inconsistency between source and destination.
Click Next: Advanced Settings.
Step 4: Configure advanced settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses a shared cluster. For greater stability, purchase a dedicated cluster. See What is a DTS dedicated cluster?. |
| Copy the temporary table of the online DDL tool | If the source uses DMS or gh-ost for online DDL: select Yes to sync temporary tables (may increase latency for large tables); select No, Adapt to DMS Online DDL to sync only the original DDL (causes table locks on the destination); select No, Adapt to gh-ost for gh-ost-based changes. Note: pt-online-schema-change is not supported and will cause the task to fail. |
| Retry time for failed connections | How long DTS retries after a connection failure. Default: 720 minutes. Configurable range: 10–1,440 minutes. Minimum recommended: 30 minutes. If multiple DTS instances share the same source or destination, the shortest configured retry duration applies to all instances. |
| Retry time for other issues | How long DTS retries after non-connection errors (for example, DDL or DML execution failures). Default: 10 minutes. Configurable range: 1–1,440 minutes. Recommended: 10 minutes or more. Must be less than the retry time for failed connections. |
| Enable throttling for full data synchronization | Limit the read/write rate during the full synchronization phase to reduce database load. Set QPS to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) as needed. Available only when Full Data Synchronization is selected. |
| Enable throttling for incremental data synchronization | Limit the incremental sync rate by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s). |
| Environment tag | Optional. Tag the instance to identify its environment. |
| Configure ETL | Select Yes to enable extract, transform, and load (ETL) and enter processing statements. See Configure ETL in a data migration or synchronization task. Select No to skip. |
| Monitoring and alerting | Select Yes to receive alerts when the task fails or latency exceeds a threshold. See Configure monitoring and alerting. |
Step 5: Configure data verification (optional)
Click Data Verification to set up a data verification task. See Configure data verification.
Step 6: Configure destination table fields (optional)
Click Next: Configure Database and Table Fields to configure the Type, Primary Key Column, Distribution Key, and partition settings (Partition Key, Partitioning Rules, Partition Lifecycle) for the destination tables.
This step is available only if Schema Synchronization is selected. Set Definition Status to All to see all tables.
Use Primary Key Column to define a composite primary key (multiple columns). Then select one or more of those columns as the Distribution Key and Partition Key. See CREATE TABLE.
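The settings in this step correspond to clauses of the AnalyticDB for MySQL CREATE TABLE statement. A sketch of the resulting DDL, assuming a hypothetical `orders` table with a composite primary key on (`order_id`, `order_date`):

```sql
CREATE TABLE orders (
    order_id   BIGINT NOT NULL,
    order_date DATETIME NOT NULL,
    amount     DECIMAL(10, 2),
    PRIMARY KEY (order_id, order_date)                     -- Primary Key Column
)
DISTRIBUTED BY HASH(order_id)                              -- Distribution Key
PARTITION BY VALUE(DATE_FORMAT(order_date, '%Y%m%d'))      -- Partition Key and rule
LIFECYCLE 90;                                              -- Partition Lifecycle
```

Here the distribution key and partition key are both drawn from the primary key columns, as the step above requires.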
Step 7: Save the task and run the precheck
Click Next: Save Task Settings and Precheck.
To preview API parameters before saving, hover over the button and click Preview OpenAPI parameters.
DTS runs a precheck before starting the task. The task starts only after the precheck reaches 100% Success Rate.
If the precheck fails: click View Details next to the failed item, fix the issue, and click Precheck Again.
If the precheck returns warnings:
For warnings that must be resolved: click View Details, fix the issue, and rerun the precheck.
For ignorable warnings: click Confirm Alert Details > Ignore > OK, then click Precheck Again.
Ignoring precheck warnings may lead to data inconsistency. Proceed with caution.
Step 8: Purchase an instance
When the Success Rate reaches 100%, click Next: Purchase Instance.
| Parameter | Description |
|---|---|
| Billing method | Subscription: Pay upfront for a fixed term (1–9 months, or 1, 2, 3, or 5 years). Cost-effective for long-running tasks. Pay-as-you-go: Billed hourly. Suitable for short-term or test tasks. |
| Resource group | The resource group for this instance. Default: default resource group. See What is Resource Management?. |
| Instance class | Determines synchronization throughput. Select based on your data volume and latency requirements. See Data synchronization link specifications. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
The task appears on the data synchronization task list page. Monitor the task progress there.
Recover from a field type change error
If you change a column's data type in the source table during synchronization, the DTS task stops with an error. Follow these steps to recover:
1. In the DTS task error details, identify the table with the changed column (for example, `customer`).
2. In AnalyticDB for MySQL 3.0, create a new table (for example, `customer_new`) with the updated schema.
3. Use the INSERT INTO SELECT statement to copy the data from the `customer` table into `customer_new`. This ensures that the data in both tables is consistent.
4. Rename or delete the original `customer` table, then rename `customer_new` to `customer`.
5. In the DTS console, restart the data synchronization task.
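The create/copy/rename sequence described above can be sketched in SQL as follows (the column definitions and distribution key are placeholders; match them to your actual table):

```sql
-- Create the replacement table with the updated column type
CREATE TABLE customer_new (
    id   BIGINT NOT NULL,
    name VARCHAR(255),            -- column whose type changed at the source
    PRIMARY KEY (id)
) DISTRIBUTED BY HASH(id);

-- Copy the data so both tables are consistent
INSERT INTO customer_new SELECT * FROM customer;

-- Swap the tables
ALTER TABLE customer RENAME TO customer_old;   -- or: DROP TABLE customer;
ALTER TABLE customer_new RENAME TO customer;
```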
FAQ
During schema synchronization, the error `only 500 dimension table allowed, current dimensionTableCount: 500` appears. How do I fix it?

AnalyticDB for MySQL counts tables in the recycle bin toward the total table limit. If the combined count of active and deleted tables reaches the limit, this error appears.

Check whether deleted tables are accumulating in the recycle bin:

```sql
-- Count active tables
SELECT COUNT(*) FROM INFORMATION_SCHEMA.tables;
-- Count tables in the recycle bin
SELECT COUNT(*) FROM INFORMATION_SCHEMA.KEPLER_META_RECYCLE_BIN;
```

If the recycle bin contains deleted tables, purge them.

Important: Purging tables from the recycle bin is irreversible. Confirm that these tables are no longer needed.

```sql
-- Purge all tables from the recycle bin
PURGE RECYCLE_BIN ALL;
-- Purge a specific table
PURGE RECYCLE_BIN TABLE <table name in ADB_RECYCLE_BIN database>;
```

If the number of active tables alone exceeds the limit, reduce the number of synchronized tables or see the Limits documentation for AnalyticDB for MySQL.