
Data Transmission Service: Synchronize data from an ApsaraDB for MongoDB instance to an AnalyticDB for MySQL 3.0 cluster

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) continuously synchronizes data from an ApsaraDB for MongoDB replica set instance to an AnalyticDB for MySQL 3.0 cluster, keeping your analytics database up to date with minimal latency.

MongoDB-to-relational concept mapping

If you are unfamiliar with MongoDB, the following mappings apply throughout this guide:

MongoDB concept | Relational equivalent | Notes
Collection | Table | The unit of data to synchronize
Document | Row | Each document maps to one row in the destination table
Field | Column | Fields must be mapped using bson_value() expressions
ObjectId | VARCHAR | The _id field of ObjectId type must map to a VARCHAR column
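As a purely illustrative sketch of this mapping, the following Python snippet treats a document as a plain dict and turns it into one flat row. No MongoDB driver or DTS API is involved, and the column name mongo_id is an assumption made here because the destination column must not be named _id (see the limits below).

```python
# Illustrative sketch only: one document maps to one row, each field
# maps to a column. Plain dicts stand in for BSON documents.

def document_to_row(document):
    """Turn a top-level document into a flat column -> value row."""
    row = {}
    for field, value in document.items():
        if field == "_id":
            # The ObjectId value is stored as a string, matching the rule
            # that _id maps to a VARCHAR column; the destination column
            # must not itself be named _id, hence the rename.
            row["mongo_id"] = str(value)
        else:
            row[field] = value
    return row

doc = {"_id": "62cd344c85c1ea6a2a9f****", "name": "neo", "age": 26}
print(document_to_row(doc))
# {'mongo_id': '62cd344c85c1ea6a2a9f****', 'name': 'neo', 'age': 26}
```

Nested fields need one column per subfield; see Map nested fields later in this topic.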

What DTS supports

Capability | Supported | Notes
Full data synchronization | Yes | Free of charge
Incremental data synchronization | Yes | Charged; see Billing overview
Collections | Yes
INSERT, UPDATE, DELETE operations | Yes | Incremental only; when synchronizing incremental data of a collection, only operations run with the $set command are synchronized
Schema synchronization | No | Create the destination table manually before starting
Indexes, views, procedures, functions, triggers, foreign keys | No
admin, config, or local databases | No
Transactions | No | Transactions are converted to single records

Prerequisites

Before you begin, make sure that:

  • For sharded cluster sources: Each shard node has a public endpoint, and all shard accounts share the same username and password. See Apply for an endpoint for a shard node or the ConfigServer node in a sharded cluster instance.

  • A destination AnalyticDB for MySQL 3.0 cluster exists with available storage larger than the source MongoDB instance's used storage. (Recommended: at least 10% more than the total source data size.) See Create a cluster.

  • A destination database and a table with a unique (non-composite) primary key column exist. See CREATE DATABASE and CREATE TABLE.

  • The multi-statement feature is enabled on the destination cluster: SET ADB_CONFIG ALLOW_MULTI_QUERIES=true;

The multi-statement feature requires AnalyticDB for MySQL V3.1.9.3 or later. See Update the minor version of a cluster.
Important

When creating the destination table:

  • Map the _id field of ObjectId type to a VARCHAR column.

  • Do not name any column _id or _value.

  • Make sure all destination column data types are compatible with the source MongoDB data types.
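The rules above lend themselves to a quick sanity check. The helper below is hypothetical (not part of DTS): it takes a column-name-to-SQL-type dict plus the name of the column that receives bson_value("_id"), and reports violations of the naming and VARCHAR rules. Full type compatibility still has to be checked against the data type mapping table.

```python
# Hypothetical helper: validate a destination table definition
# (column name -> SQL type) against the rules listed above.

FORBIDDEN_NAMES = {"_id", "_value"}

def check_destination_columns(columns, id_column):
    """Return a list of problems; an empty list means the definition
    satisfies the naming and _id-to-VARCHAR rules."""
    problems = []
    bad = FORBIDDEN_NAMES & set(columns)
    if bad:
        problems.append(f"columns must not be named: {sorted(bad)}")
    if not columns.get(id_column, "").upper().startswith("VARCHAR"):
        problems.append(f"{id_column} (mapped from _id) must be VARCHAR")
    return problems

# A definition that violates both rules: two problems reported.
print(check_destination_columns({"_id": "INT", "name": "VARCHAR(64)"}, "_id"))
# A compliant definition:
print(check_destination_columns({"mongo_id": "VARCHAR(64)", "name": "VARCHAR(64)"}, "mongo_id"))  # []
```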

Data type mapping

Verify that each AnalyticDB for MySQL column type is compatible with the source MongoDB data type before creating the destination table.

MongoDB BSON type | AnalyticDB for MySQL type | Notes
ObjectId | VARCHAR | The _id primary key field; ObjectId values are stored as strings
FLOAT | FLOAT | DTS uses ROUND(column, 38) by default; adjust column precision to match your requirements
DOUBLE | DOUBLE | DTS uses ROUND(column, 308) by default; adjust column precision to match your requirements
Nested object fields | Separate columns | Map each subfield using bson_value("parent", "child"); do not map the parent object directly

For the complete list of supported data type mappings, see the data type mapping section in Migrate data from an ApsaraDB for MongoDB instance to an ApsaraDB RDS for MySQL instance.

Permissions required

Database | Required permissions | Reference
Source ApsaraDB for MongoDB instance | Read permissions on the source, admin, and local databases | Account management
Destination AnalyticDB for MySQL 3.0 cluster | Read and write permissions on the destination database | Create a database account

Limits

Source database limits

  • Bandwidth: The server hosting the source database must have sufficient outbound bandwidth. Low bandwidth reduces synchronization speed.

  • Collection limit: If you configure name mapping for destination collections, a single task supports up to 1,000 collections. To synchronize more, create multiple tasks or configure the task to synchronize entire databases.

  • Sharded cluster — unique `_id`: The _id field in each collection to synchronize must be unique. Duplicate _id values cause data inconsistency.

  • Sharded cluster — mongos nodes: The number of mongos nodes cannot exceed 10. The source instance must not contain orphaned documents — otherwise, data inconsistency may occur and the task may fail. See the MongoDB documentation and the FAQ on deleting orphaned documents.

  • Sharded cluster — balancer latency: If the balancer of a sharded cluster source is balancing data, latency may occur in the instance.

  • Unsupported sources: Standalone ApsaraDB for MongoDB instances, Azure Cosmos DB for MongoDB clusters, and Amazon DocumentDB elastic clusters cannot be used as the source.

  • SRV endpoints: DTS cannot connect to a MongoDB database over an SRV connection string.

  • Oplog / change streams: The source database must have oplog enabled, retaining at least 7 days of log data. Alternatively, change streams must be enabled and cover the last 7 days.

Important

Use oplog to record source data changes whenever possible. Change streams require MongoDB 4.0 or later and do not support two-way synchronization. For non-elastic Amazon DocumentDB clusters, you must use change streams and set Migration Method to ChangeStream and Architecture to Sharded Cluster.

  • Restrictions during synchronization:

    • During full data synchronization, do not change collection schemas or modify data of the ARRAY type.

    • If running full data synchronization only (no incremental), do not write to the source database while the task runs.

    • For sharded cluster sources, do not run shardCollection, reshardCollection, unshardCollection, moveCollection, or movePrimary during the task.
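The 7-day log retention requirement above is simple timestamp arithmetic once you know the timestamps of the oldest and newest retained log entries. The sketch below assumes you have already obtained those two datetimes (for example, from the first and last oplog entries); how you fetch them is outside this sketch.

```python
from datetime import datetime, timedelta

def log_window_ok(oldest_entry, newest_entry, required_days=7):
    """True if the retained oplog or change stream window covers at
    least required_days of history."""
    return (newest_entry - oldest_entry) >= timedelta(days=required_days)

print(log_window_ok(datetime(2026, 3, 1), datetime(2026, 3, 10)))  # True: 9 days retained
print(log_window_ok(datetime(2026, 3, 8), datetime(2026, 3, 10)))  # False: only 2 days
```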

Other limits

  • The destination table's primary key must be assigned bson_value("_id") in the Selected Objects configuration.

  • Destination columns named _id or _value cause synchronization failure.

Warning

If disk usage on any AnalyticDB for MySQL 3.0 node exceeds 80%, the DTS task is delayed and returns errors. Estimate the required disk space before starting the task.

  • If the destination cluster is being backed up while a DTS task runs, the task fails.

  • Concurrent INSERT operations during full data synchronization cause collection fragmentation. Storage usage in the destination database will be larger than in the source.

  • FLOAT and DOUBLE precision: DTS uses ROUND(COLUMN, PRECISION). Default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Specify a precision in the column configuration if your business requires a different value.

  • Transactions are converted to single records. Transaction context is not preserved.

  • Incremental latency: DTS calculates latency based on the latest synchronized data timestamp versus the current source timestamp. If the source has no updates for an extended period, the latency reading may be inaccurate. Perform an update on the source to refresh the latency reading.

Warning

DTS attempts to resume failed tasks for up to 7 days. Before switching workloads to the destination database, stop or release any failed tasks — or run REVOKE to remove write permissions from DTS accounts — to prevent source data from overwriting destination data.

  • DTS technical support will attempt to restore a failed task within 8 hours. During restoration, the task may be restarted and task parameters (not database parameters) may be modified. See Modify instance parameters.

  • Synchronize data during off-peak hours to reduce the impact on source and destination database performance.
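The incremental latency reading described above can be thought of as the gap between the source's current clock and the timestamp of the newest change DTS has applied downstream. This toy illustration (not the actual DTS algorithm) shows why an idle source inflates the reading.

```python
from datetime import datetime

def reported_latency(source_now, last_synced_change):
    """Latency modeled as the gap between the source clock and the
    timestamp of the newest synchronized change."""
    return source_now - last_synced_change

now = datetime(2026, 3, 28, 12, 0, 30)
print(reported_latency(now, datetime(2026, 3, 28, 12, 0, 25)))  # 0:00:05

# If the source is idle, last_synced_change stops advancing, so the
# gap keeps growing even though nothing is actually behind - hence
# the advice to perform an update on the source to refresh the reading.
print(reported_latency(now, datetime(2026, 3, 28, 11, 0, 30)))  # 1:00:00
```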

Set up a synchronization task

Step 1: Open the Data Synchronization page

Use one of the following methods to navigate to the task configuration page.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Synchronization.

  3. In the upper-left corner, select the region where the synchronization task will run.

DMS console

Note

The available options may vary based on your DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.

  3. From the drop-down list next to Data Synchronization Tasks, select the region where the synchronization instance will run.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. Configure the task name and database connections using the parameters in the following table.

Parameter | Description
Task Name | Enter a descriptive name. DTS generates a name automatically, but a meaningful name makes the task easier to identify. Names do not need to be unique.

Source database

Parameter | Description
Select Existing Connection | Select a registered DTS database instance to auto-fill the connection parameters. If the instance is not registered, fill in the parameters manually. In the DMS console, select the instance from Select a DMS database instance.
Database Type | Select MongoDB.
Access Method | Select Alibaba Cloud Instance.
Instance Region | Select the region of the source ApsaraDB for MongoDB instance.
Replicate Data Across Alibaba Cloud Accounts | Select No if the source database is in the same Alibaba Cloud account.
Architecture | Select Replica Set for this example. If the source uses a sharded cluster, select Sharded Cluster and configure the Shard account and Shard password parameters.
Migration Method | Select the method for synchronizing incremental data: Oplog (recommended when oplog is enabled) or ChangeStream (when change streams are enabled). See Change Streams.
Important

If Sharded Cluster is selected for Architecture and ChangeStream is selected for Migration Method, the Shard account and Shard password parameters are not required. If Sharded Cluster is selected with Oplog, SSL-encrypted connections are not available.

Instance ID | Select the ID of the source ApsaraDB for MongoDB instance.
Authentication Database | Enter the name of the authentication database. If the account has not been modified, enter admin.
Database Account | Enter the database account. See Permissions required.
Database Password | Enter the account password.
Encryption | Select Non-encrypted, SSL-encrypted, or Mongo Atlas SSL based on your requirements. Available options depend on the Access Method and Architecture values. If the source uses a replica set architecture with a self-managed deployment and you select SSL-encrypted, you can upload a certification authority (CA) certificate to verify the connection.

Destination database

Parameter | Description
Select Existing Connection | Select a registered DTS database instance to auto-fill the connection parameters. If the instance is not registered, fill in the parameters manually.
Database Type | Select AnalyticDB for MySQL 3.0.
Access Method | Select Alibaba Cloud Instance.
Instance Region | Select the region of the destination AnalyticDB for MySQL 3.0 cluster.
Instance ID | Select the ID of the destination AnalyticDB for MySQL 3.0 cluster.
Database Account | Enter the database account. See Permissions required.
Database Password | Enter the account password.
  3. Click Test Connectivity and Proceed.

DTS server CIDR blocks must be added to the security settings of the source and destination databases before connectivity testing succeeds. See Add the CIDR blocks of DTS servers. For self-managed databases, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 3: Configure objects to synchronize

In the Configure Objects step, configure the following parameters.

Parameter | Description
Synchronization Types | Incremental Data Synchronization is selected by default. Optionally select Full Data Synchronization as well. Schema synchronization is not available; create the destination table manually beforehand.
DDL and DML operations to be synchronized | Select the DDL and DML operations to synchronize during incremental data synchronization. To configure operations per collection, right-click the collection in Selected Objects and select operations in the dialog box.
Merge Tables | Select Yes to merge multiple source tables with the same schema into a single destination table (useful in OLAP scenarios). After you select multiple tables from the source database, change their names to the destination table name by using the object name mapping feature. See Map object names. DTS adds a __dts_data_source column (TEXT) to identify each source, using the format <DTS instance ID>:<source DB>:<source schema>:<source table>, for example: dts********:dtstestdata:testschema:customer1. If you do not need to merge specific source tables, create a separate data synchronization task for those tables. Select No (default) to synchronize each source table separately.
Warning

Do not perform DDL operations that change source table schemas while merge is enabled, as this causes data inconsistency or task failure.

Processing mode of conflicting tables | Precheck and Report Errors (default): fails the precheck if the destination contains tables with the same name as source tables. Ignore Errors and Proceed: skips this check.
Warning

Selecting Ignore Errors and Proceed may cause data inconsistency. During full synchronization, conflicting rows in the destination are retained. During incremental synchronization, they are overwritten. If schemas differ, the task may fail or only some columns are synchronized.
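When Merge Tables is set to Yes, each merged row carries a __dts_data_source tag in the format quoted above. This small sketch (a plain string helper, not a DTS API) shows how the tag is composed and how to split it back into its parts.

```python
def dts_data_source(instance_id, source_db, source_schema, source_table):
    """Compose a __dts_data_source value in the documented format:
    <DTS instance ID>:<source DB>:<source schema>:<source table>
    """
    return f"{instance_id}:{source_db}:{source_schema}:{source_table}"

tag = dts_data_source("dts12345678", "dtstestdata", "testschema", "customer1")
print(tag)             # dts12345678:dtstestdata:testschema:customer1
print(tag.split(":"))  # recover which source table a merged row came from
```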

Select objects to synchronize

  1. In Source Objects, select one or more collections and click the rightwards arrow icon to move them to Selected Objects. Only collections can be selected.

  2. Edit the schema name:

    1. In Selected Objects, right-click the database that contains the collection.

    2. Set Schema Name to the name of the destination database in the AnalyticDB for MySQL 3.0 cluster.

    3. (Optional) In Select DDL and DML Operations to Be Synchronized, select the operations to synchronize.

    4. Click OK.

  3. Edit the table name:

    1. Right-click the collection in Selected Objects.

    2. Set Table Name to the name of the destination table in the AnalyticDB for MySQL 3.0 cluster.

    3. (Optional) Configure filter conditions for full data synchronization. See Set filter conditions.

    4. (Optional) In Select DDL and DML Operations to Be Synchronized, select the operations to synchronize.

  4. Map source fields to destination columns. DTS automatically generates bson_value() expressions for the collection fields. Review and adjust the Column Name, Type, Length, and Precision values for each column.

    • To delete a field that does not need to be synchronized, click the delete icon next to the row.

    • If the auto-generated expression does not match the required field (for example, a nested field), click the edit icon, and then click + Add Column:

      1. Configure Column Name, Type, Length, and Precision.

      2. Enter the bson_value() expression in Assign Value. See Map nested fields for the expression syntax.

      3. Repeat for each additional column.

    Important

    Assign bson_value("_id") to the primary key column of the destination table. Specify the full field hierarchy in each bson_value() expression — using only a parent field (for example, bson_value("person")) causes incremental subfield data to be lost.

  5. Click OK.

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

Parameter | Description
Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks on the shared cluster. Purchase a dedicated cluster to improve synchronization stability. See What is a DTS dedicated cluster.
Retry Time for Failed Connections | The time DTS retries a failed connection after the task starts. Valid values: 10–1440 minutes. Default: 720 minutes. Set the value to more than 30 minutes. If DTS reconnects within this window, the task resumes.
Note

If multiple tasks share a source or destination database, the shortest retry window takes effect. DTS charges continue during retry periods.

Retry Time for Other Issues | The time DTS retries failed DDL or DML operations. Valid values: 1–1440 minutes. Default: 10 minutes. Set the value to more than 10 minutes. This value must be less than Retry Time for Failed Connections.
Only one data type for primary key _id in a table of the data to be synchronized | Specify whether the _id primary key has a uniform data type across all documents in the collection. Yes: DTS skips the data type scan and synchronizes only one data type per collection. No: DTS scans _id data types and synchronizes all documents. This parameter appears only when Full Data Synchronization is selected. Set it accurately; an incorrect value may cause data loss.
Enable Throttling for Full Data Synchronization | Limit the load on the source and destination databases during full synchronization by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected.
Enable Throttling for Incremental Data Synchronization | Limit the load during incremental synchronization by setting RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).
Environment Tag | An optional tag to identify the DTS instance environment.
Configure ETL | Select Yes to enable the extract, transform, and load (ETL) feature and enter data processing statements. See Configure ETL in a data migration or data synchronization task. Select No to skip.
Monitoring and Alerting | Select Yes to receive notifications when the task fails or synchronization latency exceeds a threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting when you create a DTS task.

Step 5: Save settings and run the precheck

  • To preview the API parameters for this task configuration, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

  • Click Next: Save Task Settings and Precheck to save and start the precheck.

The task cannot start until it passes the precheck.

  • If the precheck fails, click View Details next to each failed item, resolve the issue, and rerun the precheck.

  • If an alert appears for a precheck item:

    • If the alert cannot be ignored, resolve the issue and rerun the precheck.

    • If the alert can be ignored, click Confirm Alert Details, click Ignore in the dialog box, click OK, and then click Precheck Again.

Step 6: Purchase and start the instance

  1. Wait until Success Rate reaches 100%, then click Next: Purchase Instance.

  2. On the buy page, configure the billing method and instance class.

Parameter | Description
Billing Method | Subscription: pay upfront for a fixed duration; more cost-effective for long-term use. Pay-as-you-go: billed hourly; suitable for short-term use. Release the instance when it is no longer needed to stop charges.
Resource Group Settings | The resource group for the synchronization instance. Default: default resource group. See What is Resource Management?.
Instance Class | Select an instance class based on your required synchronization speed. See Instance classes of data synchronization instances.
Subscription Duration | Available only for the subscription billing method. Valid values: 1–9 months, or 1, 2, 3, or 5 years.
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog box.

The task appears in the task list. Monitor its progress there.

Map nested fields

Use this reference when the auto-generated bson_value() expression does not match your source data structure.

All bson_value() expressions follow this pattern: specify the full field path from the top-level field down to the subfield, with each level as a separate argument.

Example source document

{
  "_id": "62cd344c85c1ea6a2a9f****",
  "person": {
    "name": "neo",
    "age": 26,
    "sex": "male"
  }
}

Destination table

Column name | Type | Notes
mongo_id | VARCHAR | Primary key
person_name | VARCHAR
person_age | DECIMAL

Column assignments

Column name | Type | Assignment expression
mongo_id | STRING | bson_value("_id")
person_name | STRING | bson_value("person","name")
person_age | DECIMAL | bson_value("person","age")
Important

Using only the parent field — for example, bson_value("person") instead of bson_value("person","name") — causes incremental data loss for all subfields (name, age, sex). Always specify the complete field path.
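The difference between a full field path and a parent-only mapping can be sketched in Python, with plain dicts standing in for BSON documents and extract_value acting as a hypothetical stand-in for bson_value():

```python
def extract_value(document, *path):
    """Stand-in for bson_value(...): walk the full field path,
    one argument per level of nesting."""
    value = document
    for key in path:
        value = value[key]
    return value

doc = {"_id": "62cd344c85c1ea6a2a9f****",
       "person": {"name": "neo", "age": 26, "sex": "male"}}

# Full path: each subfield yields a scalar for its own column.
print(extract_value(doc, "person", "name"))  # neo
print(extract_value(doc, "person", "age"))   # 26

# Parent only: this returns the whole embedded object, not a scalar.
# There is no per-subfield column, so incremental changes to name,
# age, or sex have no destination column to land in.
print(extract_value(doc, "person"))
```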

What's next