
Data Transmission Service: Synchronize data from RDS MySQL to AnalyticDB for MySQL 3.0

Last Updated: Feb 04, 2026

Use Data Transmission Service (DTS) to synchronize data from RDS MySQL to AnalyticDB for MySQL 3.0. This helps you quickly build internal enterprise systems for business intelligence (BI), interactive queries, and real-time reporting.

Prerequisites

  • Create a destination AnalyticDB for MySQL 3.0 cluster. For more information, see Create a cluster.

  • The storage space of the destination AnalyticDB for MySQL 3.0 cluster must be larger than the storage space used by the source RDS MySQL instance.
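
    To check this prerequisite, you can estimate the storage that the source instance currently uses. The following is a standard MySQL query against INFORMATION_SCHEMA; run it with an account that can read system views.

    ```sql
    -- Approximate storage used by the source RDS MySQL instance, in GB.
    SELECT ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 2) AS used_gb
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');
    ```

    Compare the result with the available storage space of the destination cluster.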

Notes

Note
  • During schema synchronization, DTS does not synchronize foreign keys from the source database to the destination database.

  • During full and incremental synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.

Source database limits

  • The tables to be synchronized must have a PRIMARY KEY or UNIQUE constraint, and the constrained columns must contain unique values. Otherwise, duplicate data may appear in the destination database.
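
    You can locate tables that do not meet this requirement with a query such as the following sketch. It uses standard INFORMATION_SCHEMA views; your_database is a placeholder for the source database name.

    ```sql
    -- Find tables that have neither a PRIMARY KEY nor a UNIQUE constraint.
    SELECT t.TABLE_SCHEMA, t.TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES t
    LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS c
           ON  c.TABLE_SCHEMA = t.TABLE_SCHEMA
           AND c.TABLE_NAME   = t.TABLE_NAME
           AND c.CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE')
    WHERE t.TABLE_TYPE = 'BASE TABLE'
      AND t.TABLE_SCHEMA = 'your_database'   -- placeholder: source database name
      AND c.CONSTRAINT_NAME IS NULL;
    ```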

  • If you synchronize data at the table level and need to edit the objects, such as mapping table or column names, you can synchronize a maximum of 1,000 tables in a single task. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple synchronization tasks or configure a task to synchronize the entire database.

  • Binary logs:

    • Binary logging is enabled for RDS for MySQL instances by default. You must set the binlog_row_image parameter to full. Otherwise, an error is reported during the precheck and the data synchronization task cannot start. For more information about how to set instance parameters, see Set instance parameters.

      Important
      • If the source instance is a self-managed MySQL database, you must enable binary logging and set binlog_format to row and binlog_row_image to full.

      • If the source self-managed MySQL database is a primary/primary cluster where the two databases are the primary and secondary of each other, you must enable the log_slave_updates parameter. This ensures that DTS can obtain all binary logs. For more information, see Create a database account for a self-managed MySQL database and configure binary logging.

    • Retain binary logs for at least 3 days (7 days recommended) on ApsaraDB RDS for MySQL instances and at least 7 days on self-managed MySQL databases. Otherwise, the DTS task may fail because DTS cannot obtain the binary logs. In extreme cases, data inconsistency or data loss may occur. Issues that are caused by a binary log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

      Note

      For more information about how to set the Retention Period of binary logs for an ApsaraDB RDS for MySQL instance, see Automatically delete local logs.
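
      You can verify the binary log settings described above with standard MySQL statements. On MySQL 5.7 and earlier, check expire_logs_days instead of binlog_expire_logs_seconds.

      ```sql
      -- Check the binary log settings that DTS depends on.
      SHOW VARIABLES LIKE 'log_bin';                     -- expect ON
      SHOW VARIABLES LIKE 'binlog_format';               -- expect ROW (self-managed MySQL)
      SHOW VARIABLES LIKE 'binlog_row_image';            -- expect FULL
      SHOW VARIABLES LIKE 'log_slave_updates';           -- expect ON for primary/primary clusters
      SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- retention; 604800 = 7 days
      ```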

  • During synchronization, do not perform DDL operations that modify primary keys or add comments, such as ALTER TABLE table_name COMMENT='table comment';. Otherwise, the DDL operation fails during data synchronization.

  • During schema synchronization and full data synchronization, do not perform Data Definition Language (DDL) operations that change the schema of databases or tables. Otherwise, the data synchronization task fails.

    Note

    During the full synchronization phase, DTS queries the source database, which acquires metadata locks. This may block DDL operations on the source database.

  • During synchronization, DTS does not synchronize data changes that are not recorded in binary logs (such as data restored from a physical backup or data generated by cascade operations).

    Note

    If this occurs and your business permits, you can remove the database or table that contains the data from the synchronization objects and then add it back. For more information, see Modify synchronization objects.

  • If the source database is a MySQL database of version 8.0.23 or later and the data to be synchronized contains invisible columns, data loss may occur because the data in these columns cannot be obtained.

    Note

    Run ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; to make the invisible columns visible. For more information, see Invisible Columns.
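
    To find affected columns before the task starts, you can query INFORMATION_SCHEMA; on MySQL 8.0.23 and later, the EXTRA field of an invisible column contains INVISIBLE.

    ```sql
    -- List invisible columns in the databases to be synchronized.
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE EXTRA LIKE '%INVISIBLE%';
    ```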

Other limits

  • Synchronization of prefix indexes is not supported. If the source database has prefix indexes, the data synchronization may fail.

  • If you perform online DDL operations that use temporary tables on the source database, such as merging multiple tables, data loss may occur in the destination database or the synchronization task may fail.

  • Synchronization of INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, and FK objects is not supported.

  • If a primary key or unique key conflict occurs while the synchronization task is running:

    • If the table schemas are consistent and a record in the destination database has the same primary key or unique key value as a record in the source database:

      • During full data synchronization, DTS retains the destination record and skips the source record.

      • During incremental synchronization, DTS overwrites the destination record with the source record.

    • If the table schemas are inconsistent, data initialization may fail. This can result in only partial data synchronization or a complete synchronization failure. Use with caution.

  • The tables in the destination database must have a primary key, or you must configure the Primary Key Column in the Configurations for Databases, Tables, and Columns step. Otherwise, data synchronization may fail.

  • Due to the limits of AnalyticDB for MySQL, if the disk space usage of a node in the AnalyticDB for MySQL cluster exceeds 80%, the DTS task becomes abnormal and latency occurs. Estimate the required space based on the objects to be synchronized and make sure that the destination cluster has sufficient storage space.

  • If the destination AnalyticDB for MySQL 3.0 cluster is being backed up while the DTS task is running, the task fails.

  • Before you synchronize data, evaluate the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. Otherwise, initial full data synchronization consumes read and write resources on the source and destination databases, which may increase the database load.

  • Initial full data synchronization runs concurrent INSERT operations, which causes fragmentation in the destination tables. As a result, the tablespace of the destination instance is larger than that of the source instance after initial full data synchronization is complete.

  • If you synchronize one or more tables instead of an entire database, do not use tools such as pt-online-schema-change to perform online DDL operations on the synchronization objects in the source database. Otherwise, the synchronization fails.

    You can use Data Management (DMS) to perform online DDL operations. For more information, see Change schemas without locking tables.

  • For table-level data synchronization, if no data from sources other than DTS is written to the destination AnalyticDB for MySQL database, you can use Data Management (DMS) to perform online DDL operations. For more information, see Change schemas without locking tables.

  • During DTS synchronization, do not allow data writes to the destination database from sources other than DTS. Otherwise, data inconsistency between the source and destination databases occurs. For example, if you use DMS to perform online DDL operations while data is being written to the destination database from other sources, data loss may occur in the destination database.

  • If a DDL statement fails to be written to the destination database, the DTS task continues to run. You need to check the task logs for the failed DDL statement. For more information about how to view task logs, see View task logs.

  • If the RDS MySQL instance has the always-confidential database (EncDB) feature enabled, full data synchronization is not supported.

    Note

    For RDS for MySQL instances with Transparent Data Encryption (TDE) enabled, schema synchronization, full data synchronization, and incremental data synchronization are supported.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include, but are not limited to, those described in Modify instance parameters.

Special cases

  • When the source database is a self-managed MySQL database:

    • If a primary/secondary switchover occurs on the source database during synchronization, the sync task fails.

    • The latency of DTS is calculated by comparing the timestamp of the last synchronized data record in the destination database with the current timestamp. If no DML operations are performed on the source database for a long time, the displayed latency may be inaccurate. If the displayed latency is too high, you can perform a DML operation on the source database to update the latency information.

      Note

      If you choose to synchronize the entire database, you can also create a heartbeat table. The heartbeat table is updated or written to every second.
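
      A heartbeat table can be as simple as the following sketch; the table name dts_heartbeat is a placeholder. Update it periodically, for example from a scheduled job or a MySQL event, so that the reported latency stays accurate.

      ```sql
      -- Minimal heartbeat table (hypothetical name dts_heartbeat).
      CREATE TABLE IF NOT EXISTS dts_heartbeat (
        id TINYINT  NOT NULL PRIMARY KEY,
        ts DATETIME NOT NULL
      );
      -- Run this write every second.
      REPLACE INTO dts_heartbeat (id, ts) VALUES (1, NOW());
      ```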

    • DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset.

    • If the source database is an Amazon Aurora MySQL instance or another cluster-mode MySQL instance, make sure that the domain name or IP address configured for the task and its resolved result always point to the read/write (RW) node address. Otherwise, the sync task may not run as expected.

  • When the source database is an RDS for MySQL instance:

    • RDS for MySQL instances that do not record transaction logs, such as read-only instances of RDS for MySQL 5.6, are not supported as a source.

    • DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset.

Billing

  • Schema synchronization and full data synchronization: Free of charge.

  • Incremental data synchronization: Charged. For more information, see Billing overview.

Supported synchronization architectures

  • One-to-one one-way synchronization.

  • One-to-many one-way synchronization.

  • Many-to-one one-way synchronization.

Supported SQL operations for synchronization

  • DML: INSERT, UPDATE, DELETE

    Note

    When data is written to AnalyticDB for MySQL, UPDATE statements are automatically converted to REPLACE INTO statements. If the primary key is updated, they are converted to DELETE and INSERT statements.

  • DDL: CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE, ADD COLUMN, MODIFY COLUMN, DROP COLUMN
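
The UPDATE-to-REPLACE conversion described in the DML note above can be illustrated with a hypothetical customer(id, city) table, where id is the primary key:

```sql
-- Statement executed on the source RDS MySQL instance:
UPDATE customer SET city = 'Hangzhou' WHERE id = 1;
-- Equivalent statement that DTS writes to AnalyticDB for MySQL
-- (the primary key id is unchanged, so no DELETE+INSERT is needed):
REPLACE INTO customer (id, city) VALUES (1, 'Hangzhou');
```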

Important

The RENAME TABLE operation may cause data inconsistency. For example, if the synchronization object is a single table and you rename it on the source instance during synchronization, the data of that table is not synchronized to the destination database. To avoid this issue, you can configure the task to synchronize the entire database that contains the table. Make sure that the database to which the table belongs both before and after the RENAME TABLE operation is included in the synchronization objects.

Warning

If you change the field type of a source table during data synchronization, the task reports an error and stops. You can follow these steps to manually fix the issue.

  1. The synchronization task fails because a field type was changed in a source table, for example, customer, during synchronization to the destination AnalyticDB for MySQL database.

  2. In AnalyticDB for MySQL 3.0, create a new table, customer_new, with the same schema as the customer table.

  3. Use the INSERT INTO SELECT command to copy the data from the customer table and insert it into the new customer_new table. This ensures that the data in both tables is consistent.

  4. Rename or delete the failed table customer, and then change the name of the customer_new table to customer.

  5. In the DTS console, restart the data synchronization task.
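
  Steps 3 and 4 above can be sketched as follows. The table names come from the example in the text; create customer_new with the corrected schema first (step 2), and verify the RENAME syntax against your AnalyticDB for MySQL version. The name customer_old is a placeholder for the backup copy.

  ```sql
  -- Step 3: copy the existing data into the new table.
  INSERT INTO customer_new SELECT * FROM customer;
  -- Step 4: keep the failed table as a backup, then take over its name.
  RENAME TABLE customer TO customer_old;
  RENAME TABLE customer_new TO customer;
  ```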

Procedure

  1. Go to the data synchronization task list page in the destination region. You can do this in one of two ways.

    DTS console

    1. Log on to the DTS console.

    2. In the navigation pane on the left, click Data Synchronization.

    3. In the upper-left corner of the page, select the region where the synchronization instance is located.

    DMS console

    Note

    The actual steps may vary depending on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.

    3. To the right of Data Synchronization Tasks, select the region of the synchronization instance.

  2. Click Create Task to open the task configuration page.

  3. Configure the source and destination databases.

    Warning

    After you select the source and destination instances, review the Limits at the top of the page. Otherwise, the task may fail or data inconsistency may occur.


    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    Select Existing Connection

    • Select the database instance that is registered with DTS from the drop-down list. The database information below is automatically configured.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select MySQL.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the source RDS MySQL instance resides.

    Cross-account

    This scenario involves synchronization within the same Alibaba Cloud account. Select No.

    RDS Instance ID

    Select the ID of the source RDS MySQL instance.

    Database Account

    Enter the database account for the source RDS MySQL instance. The account must have the REPLICATION CLIENT, REPLICATION SLAVE, and SELECT permissions on the objects to be synchronized.
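
    If you need to create such an account, the following sketch uses standard MySQL statements; the account name dts_user, the password placeholder, and your_database are all hypothetical values to replace with your own.

    ```sql
    -- Create an account for DTS (hypothetical name and password).
    CREATE USER 'dts_user'@'%' IDENTIFIED BY 'your_password';
    -- Replication permissions are global.
    GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_user'@'%';
    -- SELECT permission on the objects to be synchronized.
    GRANT SELECT ON your_database.* TO 'dts_user'@'%';
    ```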

    Database Password

    Enter the password for the specified database account.

    Connection Method

    Select Non-encrypted or SSL-encrypted as needed. If you set this to SSL-encrypted, you must enable SSL encryption for the RDS for MySQL instance beforehand. For more information, see Use a cloud certificate to quickly enable SSL link encryption.

    Destination Database

    Select Existing Connection

    • Select the database instance that is registered with DTS from the drop-down list. The database information below is automatically configured.

      Note

      In the DMS console, this configuration item is Select a DMS database instance.

    • If you have not registered the database instance or do not need to use a registered instance, manually configure the database information below.

    Database Type

    Select AnalyticDB MySQL 3.0.

    Connection Type

    Select Cloud Instance.

    Instance Region

    Select the region where the destination AnalyticDB for MySQL 3.0 cluster resides.

    Instance ID

    Select the ID of the destination AnalyticDB for MySQL 3.0 cluster.

    Database Account

    Enter the database account for the destination AnalyticDB for MySQL 3.0 cluster. The account must have read and write permissions.

    Database Password

    Enter the password for the specified database account.

  4. After completing the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that you add the CIDR blocks of the DTS servers (either automatically or manually) to the security settings of both the source and destination databases to allow access. For more information, see Add the IP address whitelist of DTS servers.

    • If the source or destination is a self-managed database (i.e., the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

  5. Configure the task objects.

    1. On the Configure Objects page, specify the objects to synchronize.


      Synchronization Types

      Incremental Data Synchronization is always selected. You must also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS initializes the destination cluster with the full data of the selected source objects, which serves as the baseline for subsequent incremental synchronization.

      Note

      If you select Full Data Synchronization, the schema and data of tables that are created by CREATE TABLE statements can be synchronized to the destination database.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: Checks for tables with the same names in the destination database. If any tables with the same names are found, an error is reported during the precheck and the data synchronization task does not start. Otherwise, the precheck is successful.

        Note

        If you cannot delete or rename the table with the same name in the destination database, you can map it to a different name in the destination. For more information, see Database Table Column Name Mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same name in the destination database.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and put your business at risk. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key or unique key value as a record in the source database:

          • During full data synchronization, DTS retains the destination record and skips the source record.

          • During incremental synchronization, DTS overwrites the destination record with the source record.

        • If the table schemas are inconsistent, data initialization may fail. This can result in only partial data synchronization or a complete synchronization failure. Use with caution.

      DDL and DML Operations to Be Synchronized

      Select the DDL or DML operations to be synchronized at the instance level. For a list of supported operations, see Supported SQL operations.

      Note

      To select SQL operations to be synchronized at the database or table level, right-click a synchronization object in the Selected Objects list and select the desired SQL operations in the dialog box that appears.

      Merge Tables

      • Select Yes: In online transactional processing (TP) scenarios, sharding is often used to improve the response speed of business tables. In online analytical processing (OLAP) scenarios, a single table in the destination database can store large volumes of data, which simplifies single-table queries. In such scenarios, you can use the DTS table merging feature to synchronize multiple tables with the same schema (sharded tables) from the source database to a single table in the destination database. For details on this operation, see Enable table merging.

        Note
        • After you select multiple tables from the source database, you must use the object name mapping feature to change their names to the same table name in the destination database. For more information about the object name mapping feature, see Database Table Column Name Mapping.

        • DTS adds a __dts_data_source column of the TEXT type to the destination table to store the data source. The column value is written in the DTS instance ID:database name:schema name:table name format to distinguish the source of the table, for example, dts********:dtstestdata:testschema:customer1.

        • The table merging feature applies at the task level, not the table level. To merge only some tables, you must create a separate data synchronization task for them.

        Warning

        Do not perform DDL operations that change the database or table schema in the source database. Otherwise, data inconsistency or task failure may occur.

      • No: This is the default option.
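
        After tables are merged, the extra __dts_data_source column lets you trace each row back to its source shard. For a hypothetical merged table named customer:

        ```sql
        -- Count rows per source shard in the merged destination table.
        SELECT __dts_data_source, COUNT(*) AS rows_from_shard
        FROM customer
        GROUP BY __dts_data_source;
        ```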

      Capitalization of Object Names in Destination Instance

      Configure the case-sensitivity policy for database, table, and column names in the destination instance. By default, the DTS default policy is selected. You can also choose to use the default policy of the source or destination database. For more information, see Case policy for destination object names.

      Source Objects

      In the Source Objects box, select the objects, and then click the rightwards arrow to move them to the Selected Objects box.

      Note
      • You can select objects to synchronize at the granularity of database, table, or column. If you select tables or columns, other objects such as views, triggers, and stored procedures are not synchronized to the destination database.

      • If you select an entire database as the synchronization object, the default behavior is as follows:

        • If a table to be synchronized in the source database has a primary key (single-column or multi-column), that primary key column is used as the distribution key.

        • If a table to be synchronized in the source database does not have a primary key, an auto-increment primary key column is automatically generated. This may cause data inconsistency between the source and destination databases.

      Selected Objects

      • To rename a single object in the destination instance, right-click the object in the Selected Objects box. For more information, see Map a single object name.

      • To rename multiple objects in bulk, click Batch Edit in the upper-right corner of the Selected Objects box. For more information, see Map multiple object names in bulk.

      Note
      • To select SQL operations to be synchronized at the database or table level, right-click the synchronization object in the Selected Objects list and select the desired SQL operations in the dialog box that appears.

      • To filter data using a WHERE clause, right-click the table to be synchronized in the Selected Objects list and set the filter condition in the dialog box that appears. For information about how to set the filter condition, see Set a filter condition.

    2. Click Next: Advanced Settings.


      Dedicated Cluster for Task Scheduling

      By default, DTS uses a shared cluster for tasks, so you do not need to make a selection. For greater task stability, you can purchase a dedicated cluster to run the DTS synchronization task. For more information, see What is a DTS dedicated cluster?.

      Copy the temporary table of the Online DDL tool that is generated in the source table to the destination database.

      If the source database uses Data Management (DMS) or gh-ost for online DDL changes, choose whether to synchronize the temporary tables generated during these operations.

      Important

      DTS tasks do not currently support online DDL changes performed by tools like pt-online-schema-change. Using such tools will cause the DTS task to fail.

      • Yes: Synchronizes the temporary tables generated by online DDL changes.

        Note

        If the data of temporary tables generated by online DDL changes is too large, it may cause synchronization latency.

      • No, Adapt to DMS Online DDL: Does not synchronize temporary tables generated by online DDL changes. Instead, it synchronizes only the original DDL statements executed in Data Management (DMS).

        Note

        This approach will cause table locks on the destination database.

      • No, Adapt to gh-ost: Does not synchronize the temporary tables generated by online DDL changes. Instead, it synchronizes only the original DDL statements executed by gh-ost. You can use default or custom regular expressions for gh-ost shadow and trash tables.

        Note

        This approach will cause table locks on the destination database.

      Retry Time for Failed Connections

      If the connection to the source or destination database fails after the synchronization task starts, DTS reports an error and immediately begins to retry the connection. The default retry duration is 720 minutes. You can customize the retry time to a value from 10 to 1,440 minutes. We recommend a duration of 30 minutes or more. If the connection is restored within this period, the task resumes automatically. Otherwise, the task fails.

      Note
      • If multiple DTS instances (e.g., Instance A and B) share a source or destination, DTS uses the shortest configured retry duration (e.g., 30 minutes for A, 60 for B, so 30 minutes is used) for all instances.

      • DTS charges for task runtime during connection retries. Set a custom duration based on your business needs, or release the DTS instance promptly after you release the source/destination instances.

      Retry Time for Other Issues

      If a non-connection issue (e.g., a DDL or DML execution error) occurs, DTS reports an error and immediately retries the operation. The default retry duration is 10 minutes. You can also customize the retry time to a value from 1 to 1,440 minutes. We recommend a duration of 10 minutes or more. If the related operations succeed within the set retry time, the synchronization task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than that of Retry Time for Failed Connections.

      Enable Throttling for Full Data Synchronization

      During full data synchronization, DTS consumes read and write resources from the source and destination databases, which can increase their load. To mitigate pressure on the destination database, you can limit the migration rate by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s).


      Enable Throttling for Incremental Data Synchronization

      You can also limit the incremental synchronization rate to reduce pressure on the destination database by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).

      Environment Tag

      You can select an environment tag to identify the instance as needed. No selection is required for this example.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Choose whether to set up alerts. If the synchronization fails or the latency exceeds the specified threshold, DTS sends a notification to the alert contacts.

    3. Click Data Verification to configure a data verification task.

      To use the data verification feature, see Configure data verification.

    4. Optional: After you complete the configuration, click Next: Configure Database and Table Fields to set the Type, Primary Key Column, Distribution Key, and partition key information (Partition Key, Partitioning Rules, and Partition Lifecycle) for the tables to sync in the destination database.

      Note
      • This step is available only if you select the Schema Synchronization checkbox for Synchronization Types during task object configuration. You can set Definition Status to All to make modifications.

      • You can use the Primary Key Column to specify a composite primary key that consists of multiple columns. You must then select one or more columns from the Primary Key Column to serve as the Distribution Key and Partition Key. For more information, see CREATE TABLE.
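
      As a sketch of the fields configured in this step, the following uses AnalyticDB for MySQL 3.0 CREATE TABLE syntax; the customer table and its columns are hypothetical, so verify the clauses against the CREATE TABLE reference for your cluster version.

      ```sql
      CREATE TABLE customer (
        id         BIGINT  NOT NULL,
        order_date DATE    NOT NULL,
        city       VARCHAR,
        PRIMARY KEY (id, order_date)   -- composite primary key
      )
      DISTRIBUTED BY HASH (id)         -- distribution key chosen from the primary key columns
      PARTITION BY VALUE (DATE_FORMAT(order_date, '%Y%m')) LIFECYCLE 12;  -- partition key and lifecycle
      ```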

  6. Save the task and perform a precheck.

    • To view the parameters for configuring this instance via an API operation, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the tooltip.

    • If you have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before a synchronization task starts, DTS performs a precheck. You can start the task only if the precheck passes.

    • If the precheck fails, click View Details next to the failed item, fix the issue as prompted, and then rerun the precheck.

    • If the precheck generates warnings:

      • For a non-ignorable warning, click View Details next to the item, fix the issue as prompted, and run the precheck again.

      • For ignorable warnings, you can bypass them by clicking Confirm Alert Details, then Ignore, and then OK. Finally, click Precheck Again to skip the warning and run the precheck again. Ignoring precheck warnings may lead to data inconsistencies and other business risks. Proceed with caution.

  7. Purchase an instance.

    1. When the Success Rate reaches 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the billing method and link specifications for the data synchronization instance. For more information, see the following table.


      New Instance Class

      Billing Method

      • Subscription: You pay upfront for a specific duration. This is cost-effective for long-term, continuous tasks.

      • Pay-as-you-go: You are billed hourly for actual usage. This is ideal for short-term or test tasks, as you can release the instance at any time to save costs.

      Resource Group Settings

      The resource group to which the instance belongs. The default value is the default resource group. For more information, see What is Resource Management?.

      Instance Class

      DTS offers synchronization specifications at different performance levels that affect the synchronization rate. Select a specification based on your business requirements. For more information, see Data synchronization link specifications.

      Subscription Duration

      In subscription mode, select the duration and quantity of the instance. Monthly options range from 1 to 9 months. Yearly options include 1, 2, 3, or 5 years.

      Note

      This option appears only when the billing method is Subscription.

    3. Read and select the checkbox for Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start, and then click OK in the dialog box that appears.

      You can monitor the task progress on the data synchronization page.

FAQ

  • During schema synchronization or migration, the following error occurs: only 500 dimension table allowed, current dimensionTableCount: 500. How do I resolve it?

    • Cause: In AnalyticDB for MySQL, the total number of tables includes both tables currently in use and tables in the recycle bin. If this total exceeds the limit, an error appears indicating that the table count limit has been exceeded. For more information, see Limits.

    • Solution:

      1. In your DTS task, check whether the number of tables being synchronized or migrated exceeds the maximum value shown in the error message.

      2. If the number of tables does not exceed the maximum, run the following commands to check whether deleted tables exist in the recycle bin. For more information, see Table recycle bin.

        -- Query the number of tables currently in use by the instance.
        SELECT COUNT(*) FROM INFORMATION_SCHEMA.tables;
        -- Query the number of tables in the recycle bin.
        SELECT COUNT(*) FROM INFORMATION_SCHEMA.KEPLER_META_RECYCLE_BIN;
      3. If you confirm that deleted tables exist in the recycle bin, run the following commands to clean them up.

        Important

        After you purge tables from the recycle bin, they cannot be recovered. Confirm that these tables are no longer needed.

        -- Delete all tables from the table recycle bin.
        PURGE RECYCLE_BIN ALL;
        -- Delete a specific table from the table recycle bin.
        PURGE RECYCLE_BIN TABLE <table name in ADB_RECYCLE_BIN database>;