Data Transmission Service: Synchronize data from an RDS for MySQL instance to an ApsaraMQ for RocketMQ instance

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) lets you stream change data from an RDS for MySQL instance directly into an ApsaraMQ for RocketMQ topic. DTS captures INSERT, UPDATE, and DELETE operations through binary logging and delivers them as ordered messages—without requiring any changes to your application code.

Prerequisites

Before you begin, make sure you have:

  • A destination ApsaraMQ for RocketMQ instance (non-Serverless). To create one, see Manage instances (4.x) or Manage instances (5.x)

  • A topic in the destination instance to receive the synchronized data. To create a topic, see Manage topics (4.x) or Manage topics (5.x)

  • The correct message type set on the destination topic:

    • RocketMQ 4.x: Message Type must be Partitionally Ordered Message

    • RocketMQ 5.x: Message Type must be Ordered Message

  • The binlog_row_image parameter set to full on the source RDS for MySQL instance. Binary logging is enabled by default; verify this parameter before you start.

  • Read permissions on the objects to be synchronized granted to the database account you plan to use. If you created the account outside the RDS console, run the following commands to grant the required permissions:

GRANT REPLICATION CLIENT ON *.* TO '<db_account>'@'%';
GRANT REPLICATION SLAVE  ON *.* TO '<db_account>'@'%';
GRANT SHOW VIEW          ON *.* TO '<db_account>'@'%';
GRANT SELECT             ON *.* TO '<db_account>'@'%';
For supported source and destination database versions, see Synchronization solutions.
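The binlog requirements above can be sanity-checked before you create the task. The following is a minimal Python sketch: the variable values are supplied inline for illustration, but in practice you would populate them from the output of SHOW VARIABLES on the source instance.

```python
# Minimal sketch: validate source MySQL binlog settings against DTS requirements.
# In practice, populate `variables` from `SHOW VARIABLES LIKE 'binlog%'` and
# `SHOW VARIABLES LIKE 'log_bin'` on the source instance.

REQUIRED = {
    "log_bin": "ON",             # binary logging must be enabled
    "binlog_format": "ROW",      # DTS requires row-based logging
    "binlog_row_image": "FULL",  # full before/after row images
}

def check_binlog_settings(variables: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for name, expected in REQUIRED.items():
        actual = variables.get(name, "").upper()
        if actual != expected:
            problems.append(f"{name} is {actual or 'unset'}, expected {expected}")
    return problems

# Example: a source that still uses minimal row images fails the check.
sample = {"log_bin": "ON", "binlog_format": "ROW", "binlog_row_image": "minimal"}
print(check_binlog_settings(sample))
```

If the check reports a problem with binlog_row_image, fix the parameter before starting the task; otherwise the DTS precheck fails.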

Billing

  • Full data synchronization: Free

  • Incremental data synchronization: Charged. See Billing overview.

Supported SQL operations

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE; CREATE VIEW, ALTER VIEW, DROP VIEW; CREATE PROCEDURE, ALTER PROCEDURE, DROP PROCEDURE; CREATE FUNCTION, DROP FUNCTION; CREATE TRIGGER, DROP TRIGGER; CREATE INDEX, DROP INDEX

Limitations

Source database

  • Tables being synchronized must have a primary key or a UNIQUE constraint. Without one, duplicate records may appear in the destination.

  • At the table level with object name mapping, a single task supports up to 1,000 tables. To synchronize more tables, split them across multiple tasks or switch to database-level synchronization.

  • The binlog_row_image parameter must be set to full. DTS requires row-level binary log images to capture the complete before and after state of each changed row. If this parameter is not set correctly, DTS reports a precheck error and the task cannot start.

    • Binary logging is enabled by default for ApsaraDB RDS for MySQL instances.

    • For self-managed MySQL databases, you must manually enable binary logging and set the binlog_format parameter to row and the binlog_row_image parameter to full.

    • For self-managed MySQL databases in a primary/primary architecture (where both databases act as primary and secondary of each other), you must also enable the log_slave_updates parameter to ensure DTS can obtain all binary logs. For more information, see Create a database account for a self-managed MySQL database and configure binary logging.

  • Binary logs must be retained long enough for DTS to read them: If binary logs are purged before DTS reads them, the task fails and data loss or inconsistency may result. Issues caused by insufficient log retention are outside the DTS service-level agreement (SLA).

    • RDS for MySQL: at least 3 days (7 days recommended)

    • Self-managed MySQL: at least 7 days

  • Do not perform DDL operations on source tables during full data synchronization. DTS queries the source database during this phase, which creates metadata locks that can block DDL operations.

  • Data changes not recorded in binary logs—such as recovery from physical backups or cascade operations—are not captured and not delivered to the destination topic. If this occurs and your business permits, you can remove the database or table from the synchronization objects and then add it back. For more information, see Modify synchronization objects.

  • Do not use pt-online-schema-change or similar tools to perform online DDL operations on synchronization objects in the source database during synchronization. Doing so causes the task to fail.

  • MySQL 8.0.23 and later: if synchronized tables contain invisible columns, data in those columns may be lost because DTS cannot read them. To make a column visible, run:

    ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;

    Tables without an explicit primary key automatically get an invisible generated primary key—make that visible too. See Invisible columns and Generated invisible primary keys.
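To find affected tables in bulk, you can query information_schema. In MySQL 8.0.23 and later, invisible columns are marked with the keyword INVISIBLE in the EXTRA field of information_schema.COLUMNS; the sketch below assumes rows fetched from that view and emits the ALTER statement shown above for each match.

```python
# Sketch: find invisible columns that DTS cannot read (MySQL 8.0.23+).
# `rows` mimics the result of:
#   SELECT TABLE_NAME, COLUMN_NAME, EXTRA
#   FROM information_schema.COLUMNS
#   WHERE TABLE_SCHEMA = '<your_db>';
# MySQL marks invisible columns with INVISIBLE in the EXTRA field.

def find_invisible_columns(rows):
    """Return (table, column) pairs whose EXTRA marks them as INVISIBLE."""
    return [(t, c) for (t, c, extra) in rows if "INVISIBLE" in extra.upper()]

rows = [
    ("orders", "id", "auto_increment"),
    ("orders", "audit_token", "INVISIBLE"),
    ("users", "my_row_id", "INVISIBLE auto_increment"),  # generated invisible PK
]
for table, column in find_invisible_columns(rows):
    # Emit the fix described in the limitation above.
    print(f"ALTER TABLE {table} ALTER COLUMN {column} SET VISIBLE;")
```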

Destination instance

  • Serverless ApsaraMQ for RocketMQ instances cannot be used as a destination.

  • Synchronization writes only to specific topics—not to the entire instance.

  • The destination instance supports a maximum message body size of 4 MB. If a record exceeds 4 MB and filtering is disabled, the synchronization task fails.

  • For RocketMQ V4.x: if the destination instance is in a different region from the source, DTS accesses it through a public endpoint, which incurs data transfer costs. Enable public access on the destination instance before starting the task. Check the access status on the Basic Information tab of the instance's details page in the ApsaraMQ for RocketMQ console. For pricing, see Public bandwidth pricing.

  • Upgrading or downgrading the instance class of the destination instance changes how messages are delivered.

  • Do not write data to the destination topic from sources other than DTS during synchronization. Concurrent writes from other sources can cause data inconsistency.
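The interaction between the 4 MB body limit and the large-record filter (configured later as Whether to filter large size of records) can be sketched as follows. The 4 MB threshold comes from the limitation above; the message shape and serialization are illustrative.

```python
import json

MAX_BODY_BYTES = 4 * 1024 * 1024  # destination message body limit

def handle_record(message: dict, filter_large: bool):
    """Return 'deliver' or 'drop', or raise when filtering is disabled.

    `message` stands in for a serialized change record DTS would write.
    """
    body = json.dumps(message).encode("utf-8")
    if len(body) <= MAX_BODY_BYTES:
        return "deliver"
    if filter_large:
        return "drop"  # Whether to filter large size of records = Yes
    raise RuntimeError("record exceeds 4 MB and filtering is disabled: task fails")

small = {"type": "INSERT", "data": [{"id": 1}]}
big = {"type": "INSERT", "data": [{"blob": "x" * (5 * 1024 * 1024)}]}
print(handle_record(small, filter_large=False))
print(handle_record(big, filter_large=True))
```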

Encryption features

  • Always-Encrypted (EncDB) on the source: full data synchronization is not supported.

  • Transparent Data Encryption (TDE) on the source: both full and incremental data synchronization are supported.

Performance

During full data synchronization, DTS reads from the source database and writes to the destination instance, which increases load on both. Run the initial synchronization during off-peak hours—for example, when CPU utilization on both the source and destination is below 30%.

Concurrent inserts during full synchronization cause table fragmentation in the destination. After full synchronization completes, table storage in the destination is larger than in the source.

Recovery

If a DTS instance fails, DTS support attempts recovery within 8 hours. Recovery may involve restarting the instance or adjusting DTS instance parameters (not database parameters). For the list of adjustable parameters, see Modify instance parameters.

Create a synchronization task

Step 1: Open the data synchronization page

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Synchronization.

  3. In the upper-left corner, select the region where your synchronization task will reside.

DMS console

The exact steps vary depending on the mode and layout of the DMS console. See Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.

  3. From the drop-down list to the right of Data Synchronization Tasks, select the region where the synchronization instance will reside.

Step 2: Configure source and destination databases

Click Create Task, then configure the following parameters.

General

  • Task Name: A name for the task. DTS generates a default name. Use a descriptive name to make the task easy to identify later. Task names do not need to be unique.

Source Database

  • Select Existing Connection: Select a registered instance from the drop-down list to auto-fill the connection parameters. If no registered instance is available, configure the parameters below manually.

  • Database Type: Select MySQL.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region of the source RDS for MySQL instance.

  • Replicate Data Across Alibaba Cloud Accounts: Select No for same-account replication.

  • RDS Instance ID: The ID of the source RDS for MySQL instance.

  • Database Account: The database account with read permissions. See Prerequisites for required permissions.

  • Database Password: The password for the database account.

  • Encryption: Select Non-encrypted or SSL-encrypted. To use SSL encryption, enable SSL on the RDS for MySQL instance before configuring this task.

Destination Database

  • Select Existing Connection: Select a registered instance from the drop-down list to auto-fill the connection parameters. If no registered instance is available, configure the parameters below manually.

  • Database Type: Select RocketMQ.

  • Access Method: Select Alibaba Cloud Instance.

  • Instance Region: The region of the destination ApsaraMQ for RocketMQ instance.

  • RocketMQ Version: The version of the destination instance (4.x or 5.x).

  • Instance ID: The ID of the destination ApsaraMQ for RocketMQ instance.

  • Database Account: Required only for RocketMQ 5.x. Get the account from the Intelligent Identity Recognition tab under Resource Access Management in the ApsaraMQ for RocketMQ console.

  • Database Password: Required only for RocketMQ 5.x.

  • Topic: The topic to receive the synchronized data.

  • Topic That Stores DDL Information: (Optional) A separate topic to store DDL change events. If left blank, DDL information is written to the topic selected in Topic.

Click Test Connectivity and Proceed at the bottom of the page.

DTS server CIDR blocks must be allowed in the security settings of both the source and destination. For details, see Add DTS server IP addresses to a whitelist.
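If connectivity fails, one quick check is whether the DTS server CIDR blocks are actually covered by your whitelist. A small sketch using Python's standard ipaddress module; the CIDR blocks and IP below are hypothetical placeholders, so substitute the real DTS ranges for your region and your own whitelist entries.

```python
import ipaddress

def is_allowed(ip: str, whitelist: list) -> bool:
    """True if `ip` falls inside any whitelisted CIDR block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in whitelist)

# Hypothetical values: replace with the DTS CIDR blocks for your region
# (see "Add DTS server IP addresses to a whitelist") and your whitelist.
whitelist = ["10.151.12.0/24", "100.104.0.0/16"]
print(is_allowed("10.151.12.30", whitelist))
print(is_allowed("192.168.1.5", whitelist))
```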

Step 3: Configure synchronization objects

In the Configure Objects step, set the following options.

  • Synchronization Types: Incremental Data Synchronization is selected by default. Optionally add Full Data Synchronization to copy existing data before streaming incremental changes. Schema Synchronization is not supported when the destination is an ApsaraMQ for RocketMQ instance.

  • Processing Mode of Conflicting Tables: Precheck and Report Errors (default) fails the precheck if the destination has tables with the same name as source tables. Ignore Errors and Proceed skips this check. Warning: Selecting Ignore Errors and Proceed can cause data inconsistency.

  • Format of the data delivered to RocketMQ: The message format for data written to the destination topic. For format details, see Data storage formats in message queues.

  • Synchronize all fields: Available only when the format is Canal JSON. Yes includes the full pre-image of each updated row in the old field. No (default) includes only the pre-image of changed fields.

  • Rules of the ordered messages delivered to RocketMQ: How DTS orders messages written to the destination topic. See Message ordering rules for RocketMQ.

  • Name of DTS producer group: The producer group (ProducerGroup) that sends messages to the topic. Default: dts-producer-group.

  • Limits of RocketMQ messaging transactions per second (TPS): The maximum TPS for writing data to the destination topic. Must not exceed the TPS limit of the destination instance. See Instance type limits and Computing specifications. The actual TPS may fluctuate slightly around the configured value.

  • Whether to filter large size of records: Whether to drop message bodies larger than 4 MB. Important: If you select No and a record exceeds 4 MB, the task fails.

  • Capitalization of Object Names in Destination Instance: The capitalization policy for database, table, and column names in the destination. Default: DTS default policy. See Specify the capitalization of object names in the destination instance.

  • Source Objects: Select one or more databases or tables to synchronize and move them to the Selected Objects list.

  • Selected Objects: The objects to synchronize. Use the mapping feature to set the destination topic name, configure filter conditions, select SQL operations, and set the partition key. See Mapping information.
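The effect of Synchronize all fields is easiest to see in a sample message. The envelope below follows the common Canal JSON shape (type, database, table, data, old); field details are illustrative, so consult Data storage formats in message queues for the authoritative schema.

```python
import json

# Illustrative Canal JSON message for an UPDATE, as a consumer might receive it.
# With "Synchronize all fields" = No (default), `old` carries only the changed
# columns' pre-images; with Yes, it would carry the full pre-image row.
message = json.loads("""
{
  "type": "UPDATE",
  "database": "testdb",
  "table": "orders",
  "pkNames": ["id"],
  "data": [{"id": "1", "status": "shipped", "amount": "9.90"}],
  "old":  [{"status": "pending"}]
}
""")

after = message["data"][0]
before_changes = message["old"][0]
# Pair each changed column with its (before, after) values.
changed = {k: (before_changes[k], after[k]) for k in before_changes}
print(changed)
```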

Click Next: Advanced Settings.

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task on a shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.

  • Retry Time for Failed Connections: How long DTS retries when the source or destination is unreachable. Valid values: 10–1440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If reconnection succeeds within this period, the task resumes; otherwise, it fails. Note: If multiple tasks share a source or destination, the shortest retry time applies. DTS charges for the instance during retries.

  • Retry Time for Other Issues: How long DTS retries after DDL or DML operation failures. Valid values: 1–1440 minutes. Default: 10 minutes. Set this to at least 10 minutes. The value must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Synchronization: Throttle the full synchronization phase by setting the queries per second (QPS) to the source database, the records per second (RPS) of full synchronization, and the data transfer speed (MB/s). Available only when Full Data Synchronization is selected.

  • Enable Throttling for Incremental Data Synchronization: Throttle incremental synchronization by setting the RPS and transfer speed (MB/s) to reduce load on the destination.

  • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Controls whether DTS writes heartbeat updates to the source database. Yes disables heartbeat writes (may show synchronization latency in the console). No enables heartbeat writes (may affect physical backup and cloning of the source database).

  • Environment Tag: An optional tag to identify the environment (for example, production or staging).

  • Configure ETL: Whether to enable extract, transform, and load (ETL). Yes: enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. No: skip ETL configuration.

  • Monitoring and Alerting: Whether to configure alerts for task failures or high synchronization latency. Yes: set the alert threshold and notification contacts. See Configure monitoring and alerting when you create a DTS task. No: skip alerting.
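The two retry settings have interacting constraints (valid ranges, recommended minimums, and the rule that the second must stay below the first). A small validator sketch, with values in minutes:

```python
def validate_retry_settings(connection_retry: int, other_retry: int) -> list:
    """Return problems with the two retry settings described above."""
    problems = []
    if not 10 <= connection_retry <= 1440:
        problems.append("Retry Time for Failed Connections must be 10-1440 minutes")
    elif connection_retry < 30:
        problems.append("Retry Time for Failed Connections: use at least 30 minutes")
    if not 1 <= other_retry <= 1440:
        problems.append("Retry Time for Other Issues must be 1-1440 minutes")
    elif other_retry < 10:
        problems.append("Retry Time for Other Issues: use at least 10 minutes")
    if other_retry >= connection_retry:
        problems.append("Retry Time for Other Issues must be less than "
                        "Retry Time for Failed Connections")
    return problems

print(validate_retry_settings(720, 10))  # the defaults pass: []
print(validate_retry_settings(30, 60))   # second value too large
```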

Step 4: Run the precheck and purchase the instance

  1. Click Next: Save Task Settings and Precheck.

    To preview the API parameters that correspond to this configuration, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
  2. Wait for the precheck to complete.

    • If an item fails, click View Details to see the cause, fix the issue, then click Precheck Again.

    • If an alert appears on an item that can be safely ignored, click Confirm Alert Details, then Ignore, then OK, then Precheck Again.

  3. After the Success Rate reaches 100%, click Next: Purchase Instance.

  4. On the purchase page, configure the following parameters.

    • Billing Method: Subscription pays upfront for a fixed period and is more cost-effective for long-term use. Pay-as-you-go is billed hourly and suits short-term or evaluation use; release the instance when it is no longer needed to stop charges.

    • Resource Group Settings: The resource group for the synchronization instance. Default: default resource group. See What is Resource Management?

    • Instance Class: The synchronization throughput class. Select based on your expected data volume. See Instance classes of data synchronization instances.

    • Subscription Duration: Available for the subscription billing method only. Options: 1–9 months, 1 year, 2 years, 3 years, or 5 years.
  5. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  6. Click Buy and Start, then click OK in the confirmation dialog.

The task appears in the task list. You can monitor its progress there.

Mapping information

Use the mapping feature to route synchronized data to a specific topic, add filter conditions, select SQL operations, and configure the partition key.

  1. In the Selected Objects list, move the pointer over a table name.

  2. Click Edit next to the destination topic name.

  3. Configure the mapping in the dialog box that appears.

Table-level mapping takes precedence over database-level mapping when both are set.

Database-level mapping (Edit Schema)

  • Schema Name: The destination topic name. Defaults to the topic selected in the Destination Database section. Important: The topic must already exist in the destination instance; otherwise, synchronization fails. Changing this value routes data to the new topic.

  • Select DDL and DML Operations to Be Synchronized: The SQL operations to include in incremental synchronization.

Table-level mapping (Edit Table)

  • Table Name: The destination topic name for this table. Defaults to the topic selected in the Destination Database section. Important: The topic must already exist in the destination instance; otherwise, synchronization fails. Changing this value routes data for this table to the new topic.

  • Filter Conditions: SQL conditions to filter which rows are synchronized. See Set filter conditions.

  • Select DDL and DML Operations to Be Synchronized: The SQL operations to include in incremental synchronization for this table.

  • Partition Key: Available when Rules of the ordered messages delivered to RocketMQ is set to Deliver data based on hash values of a specified column. Specify one or more columns as the partition key. DTS hashes the specified columns and routes each row to the corresponding partition in the destination topic.
  4. Click OK.
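The partition-key behavior can be illustrated with a short sketch. DTS's actual hash function is internal, so the MD5-based routing below only demonstrates the property that matters: rows with the same partition-key values always land in the same partition, which preserves per-key ordering.

```python
import hashlib

def route_to_partition(row: dict, partition_key: list, num_partitions: int) -> int:
    """Hash the partition-key column values to pick a destination partition.

    Conceptual only; DTS's real hash is not documented here. Rows sharing
    the same key values always map to the same partition.
    """
    key = "|".join(str(row[col]) for col in partition_key)
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

rows = [
    {"order_id": 1001, "user_id": 7, "status": "created"},
    {"order_id": 1002, "user_id": 7, "status": "paid"},
    {"order_id": 1003, "user_id": 8, "status": "created"},
]
partitions = [route_to_partition(r, ["user_id"], 8) for r in rows]
print(partitions)  # the first two match: same user_id, same partition
```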

Usage notes

For self-managed MySQL sources

  • If a primary/secondary failover occurs during synchronization, the task fails.

  • Synchronization latency is calculated as the time difference between the current timestamp and the timestamp of the last record delivered to the destination. If no DML operations run on the source for an extended period, the displayed latency may be inaccurate. Run a DML operation on the source to refresh the latency display. Alternatively, configure a heartbeat table that DTS updates every second.

  • DTS periodically runs CREATE DATABASE IF NOT EXISTS `test` on the source to advance the binary log offset.

  • For Amazon Aurora MySQL or other cluster-mode MySQL sources, the domain name or IP address configured in the task must always resolve to the read/write (RW) node. If it does not, the task may not run correctly.
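The latency calculation described above can be sketched directly, which also shows why an idle source inflates the displayed value even when the task is fully caught up:

```python
import datetime as dt

def displayed_latency(now: dt.datetime, last_delivered_ts: dt.datetime) -> float:
    """Latency in seconds: the gap between the current time and the
    timestamp of the last record delivered to the destination."""
    return (now - last_delivered_ts).total_seconds()

now = dt.datetime(2026, 3, 28, 12, 0, 30)
# Busy source: the last change was delivered 2 seconds ago.
print(displayed_latency(now, dt.datetime(2026, 3, 28, 12, 0, 28)))  # 2.0
# Idle source: no DML for an hour, so the console shows 3600 seconds of
# "latency" even though nothing is pending -- the inaccuracy noted above.
print(displayed_latency(now, dt.datetime(2026, 3, 28, 11, 0, 30)))  # 3600.0
```

Running any DML on the source (or using a heartbeat table updated every second) keeps last_delivered_ts fresh and the display accurate.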

For RDS for MySQL sources

  • ApsaraDB RDS for MySQL instances that do not record transaction logs—such as read-only RDS for MySQL 5.6 instances—cannot be used as a source.

  • DTS periodically runs CREATE DATABASE IF NOT EXISTS `test` on the source to advance the binary log offset.

What's next