
Data Transmission Service: Migrate ApsaraDB for MongoDB (replica set architecture) to ApsaraDB for MongoDB (replica set or sharded cluster architecture)

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from an ApsaraDB for MongoDB replica set instance to another ApsaraDB for MongoDB instance—either a replica set or a sharded cluster.

Supported sources and destinations

| Source | Destination |
|---|---|
| ApsaraDB for MongoDB | ApsaraDB for MongoDB |
| Self-managed database hosted on ECS | Self-managed database hosted on ECS |
| Self-managed database connected over a leased line, VPN Gateway, or Smart Access Gateway | Self-managed database connected over a leased line, VPN Gateway, or Smart Access Gateway |
| Self-managed database with a public IP address | Self-managed database with a public IP address |

This topic uses ApsaraDB for MongoDB (replica set) as the source and ApsaraDB for MongoDB (replica set or sharded cluster) as the destination. The configuration steps are similar for other source types.

Migration types

DTS supports three migration types, which you can combine based on your downtime requirements:

| Migration type | What it migrates | Supported objects |
|---|---|---|
| Schema migration | Database and collection structures | DATABASE, COLLECTION, INDEX |
| Full data migration | All historical data | DATABASE, COLLECTION |
| Incremental data migration | Ongoing changes after full migration completes | Depends on the method: Oplog or Change Streams |

Choosing a migration approach:

  • Full migration only: Select Schema Migration and Full Data Migration. Stop writes to the source during migration to avoid data inconsistency.

  • Near-zero downtime: Select all three types. Incremental migration keeps the destination in sync while your application continues running on the source.

Incremental migration: Oplog vs. Change Streams

Incremental migration captures changes from the source using either Oplog (recommended) or Change Streams.

| | Oplog | Change Streams |
|---|---|---|
| Availability | Oplog must be enabled (default for ApsaraDB for MongoDB) | Change Streams must be enabled on the source |
| Latency | Lower (faster log retrieval) | Higher |
| Supported DDL | CREATE COLLECTION/INDEX, DROP DATABASE/COLLECTION/INDEX, RENAME COLLECTION | DROP DATABASE/COLLECTION, RENAME COLLECTION |
| Supported DML | INSERT, UPDATE ($set only), DELETE | INSERT, UPDATE ($set only), DELETE |
| MongoDB version | All supported versions | MongoDB 4.0 and later |
| When to use | Default choice | Required for Amazon DocumentDB (non-elastic cluster) |
Incremental migration does not capture databases created after the task starts.

Billing

| Migration type | Instance fee | Internet traffic fee |
|---|---|---|
| Schema migration + full data migration | Free | Charged only when Access Method is set to Public IP Address. For details, see Billing overview. |
| Incremental data migration | Charged. For details, see Billing overview. | Charged only when Access Method is set to Public IP Address. For details, see Billing overview. |

Prerequisites

If the destination is a sharded cluster, complete these steps before starting the migration:

  1. Create the databases and collections to be sharded.

  2. Configure data sharding and enable the Balancer.

  3. Perform pre-sharding.

Skipping these steps causes all migrated data to land on a single shard, which limits cluster performance. For guidance, see Configure data sharding to maximize shard performance and How to handle uneven data distribution in a MongoDB sharded cluster.
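The sharding preparation above can be sketched in mongosh. The database name `mydb`, the collection `orders`, and the hashed shard key are illustrative assumptions, not values DTS requires:

```javascript
// Run against the destination sharded cluster through its mongos address.
sh.enableSharding("mydb");                                     // step 1: allow sharding for the database
db.getSiblingDB("mydb").orders.createIndex({ _id: "hashed" }); // index to back the shard key
sh.shardCollection("mydb.orders", { _id: "hashed" });          // step 2: shard the collection
sh.startBalancer();                                            // step 2: make sure the Balancer is on
// Step 3 (pre-sharding): a hashed shard key pre-splits chunks across shards automatically;
// with a ranged shard key you would pre-split manually, for example with sh.splitAt().
```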

Database account permissions

Configure accounts with the following permissions before starting the task:

| Database | Schema and full data migration | Incremental data migration |
|---|---|---|
| Source ApsaraDB for MongoDB | Read on the databases to be migrated and the config database | Read on the databases to be migrated, admin, and local |
| Destination ApsaraDB for MongoDB | dbAdminAnyDatabase, readWrite on the destination database, and read on local | dbAdminAnyDatabase, readWrite on the destination database, and read on local |

For instructions on creating and authorizing accounts, see Manage MongoDB database users in DMS.

Impact on source and destination databases

Review how DTS affects your databases before starting the migration:

  • Source database load: Full data migration consumes read resources on the source. Run migrations during off-peak hours to minimize impact.

  • Destination storage: Full data migration involves concurrent INSERT operations, which can cause fragmentation in the destination collections. Because DTS writes data concurrently, the destination may use 5%–10% more disk space than the source.

  • Document count queries: After migration, use db.$table_name.aggregate([{ $count:"myCount"}]) to count documents in the destination. The standard count() may return inaccurate results.
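The accurate count described above can be issued in mongosh as follows; `mycollection` is a placeholder name:

```javascript
// Run against the destination after migration. Both forms perform an exact,
// scan-based count; count() can read stale metadata and report wrong totals.
db.mycollection.aggregate([{ $count: "myCount" }]); // returns a document such as { myCount: <n> }
db.mycollection.countDocuments({});                 // equivalent exact count
```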

Constraints

Operations that cause task failure

Avoid the following actions during migration—they will cause the task to fail or result in data loss:

  • Do not change database or collection schemas during schema migration or full data migration (including updates to array-type data).

  • Do not write to the source during full-only migration. To maintain data consistency while the source stays live, run all three migration types together.

  • For sharded cluster destinations: do not start the task without first purging orphaned documents. Documents with conflicting _id values cause task failure or data inconsistency.

  • For sharded cluster destinations: add a sharding key to the source data that matches the destination sharding key before the task starts. After the task starts, INSERT operations must include the sharding key; UPDATE operations cannot change the sharding key.
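As a sketch of the shard-key rule above, assume a hypothetical collection `orders` sharded on `customerId`:

```javascript
// While the migration task runs:
db.orders.insertOne({ customerId: 42, total: 99.9 });               // OK: shard key included
db.orders.updateOne({ customerId: 42 }, { $set: { total: 89.9 } }); // OK: shard key unchanged
// Not allowed: an UPDATE that changes the shard key value, e.g.
// db.orders.updateOne({ customerId: 42 }, { $set: { customerId: 7 } });
```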

Operations that may increase latency or cause inconsistency

These conditions do not stop the task but can degrade data quality:

  • Collections with TTL indexes: TTL index conflicts between source and destination can cause latency and data inconsistency. See FAQ for details.

  • Capped collections or unique indexes: These do not support concurrent replay during incremental migration—only single-threaded writes are used, which may increase latency.

  • Transaction data: Transactions are not retained. Source transactions become individual documents in the destination.

  • Primary key or unique key conflicts: When DTS writes to the destination and encounters a conflict, it skips the write and retains the existing destination data.

Source database requirements

| Requirement | Details |
|---|---|
| Bandwidth | The source server must have sufficient outbound bandwidth. Low bandwidth slows migration. |
| Primary keys | Collections must have primary keys or UNIQUE constraints with unique field values. Otherwise, duplicate data may appear in the destination. |
| Collection limit | Migrating at collection level supports a maximum of 1,000 collections per task. If you exceed this limit, an error is reported when you submit the task. Split into multiple tasks or migrate entire databases instead. |
| Document size | A single document cannot exceed 16 MB. |
| Special sources | Azure Cosmos DB for MongoDB and Amazon DocumentDB elastic clusters support only full data migration. |
| Incremental migration | The source oplog must be enabled and retained for at least 7 days. Alternatively, enable Change Streams and make sure DTS can subscribe to changes from the last 7 days. If DTS cannot access these changes, the task may fail, and data inconsistency or loss may occur. This scenario is not covered by the DTS SLA. |
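To check the 7-day oplog window requirement in mongosh on the source replica set, you can inspect and, if necessary, resize the oplog; the size below is an example value:

```javascript
rs.printReplicationInfo(); // prints oplog size and the time span it currently covers
// If the covered span ("log length start to end") is shorter than 7 days,
// enlarge the oplog, for example to 20 GB (size is given in MB):
db.adminCommand({ replSetResizeOplog: 1, size: 20480 });
```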

General constraints

  • Connecting via an SRV record is not supported.

  • Migrate from a lower MongoDB version to a higher version. Migrating to a lower version may cause compatibility issues.

  • The admin, config, and local databases cannot be migrated.

  • If the source is MongoDB earlier than 3.6 and the destination is MongoDB 3.6 or later, field order in migrated documents may differ from the source due to differences in query plan execution. Field values remain consistent. If your business logic relies on text match queries against nested structures, assess the potential impact.

  • If the source is MongoDB 5.0 or later and the destination is earlier than 5.0, migrating capped collections is not supported, because MongoDB 5.0 introduced support for explicit deletions and document size increases on update, which earlier versions do not support.

  • Migrating time-series collections (introduced in MongoDB 5.0) is not supported.

  • Make sure the destination has no documents with the same _id values as the source. Delete those documents from the destination before starting if any exist.

  • DTS attempts to resume failed tasks automatically for up to 7 days. Before switching your application to the destination, end or release the task—or revoke write permissions from the DTS account—to prevent the source from overwriting destination data after an automatic resume.

  • If the task fails, DTS technical support attempts recovery within 8 hours. During recovery, DTS may restart the task or adjust task parameters (not database parameters). For adjustable parameters, see Modify instance parameters.
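The _id conflict check from the list above can be sketched in mongosh against the destination; the database list is a placeholder you would replace with the databases selected for migration:

```javascript
const dbsToMigrate = ["mydb"]; // placeholder: databases selected for the task
dbsToMigrate.forEach(function (name) {
  const d = db.getSiblingDB(name);
  d.getCollectionNames().forEach(function (coll) {
    const n = d.getCollection(coll).countDocuments({});
    if (n > 0) {
      // Any pre-existing documents may share _id values with the source.
      print(name + "." + coll + ": " + n + " existing documents - check for _id conflicts");
    }
  });
});
```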

Replica set destination requirements

  • When Access Method is Express Connect, VPN Gateway, or Smart Access Gateway; Public IP Address; or Cloud Enterprise Network (CEN): set Domain Name or IP and Port Number to the primary node's address and port, or configure a high-availability connection address. See Create an instance with a high-availability MongoDB source or destination database.

  • When Access Method is Self-managed Database on ECS: enter the primary node's port for Port Number.

Special cases for self-managed MongoDB sources

  • A primary/secondary switchover during migration causes the task to fail.

  • DTS calculates latency by comparing the timestamp of the last migrated record with the current time. If the source has been idle for a long time, the latency reading may be inaccurate. Perform an update on the source to refresh the latency display.

When migrating an entire database, create a heartbeat table that is updated every second to maintain accurate latency readings.
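One way to drive such a heartbeat from mongosh, with the database and collection names as placeholders:

```javascript
// Run against the source primary for the duration of the migration.
// Upserting one document per second keeps fresh entries flowing into the oplog,
// so the DTS latency metric stays meaningful even when the workload is idle.
const hb = db.getSiblingDB("heartbeat_db").dts_heartbeat;
while (true) {
  hb.updateOne({ _id: "heartbeat" }, { $set: { ts: new Date() } }, { upsert: true });
  sleep(1000); // mongosh built-in; argument is in milliseconds
}
```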

Migrate an ApsaraDB for MongoDB replica set instance

Step 1: Open the migration task list

DTS console

  1. Log on to the DTS console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will be located.

DMS console

Note

Steps may vary based on your DMS console mode. For details, see Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance will be located.

Step 2: Create the task

Click Create Task.

Step 3: Configure source and destination databases

Warning

After selecting the source and destination instances, review the limits displayed at the top of the page. Skipping this step may result in task failure or data inconsistency.

General settings:

| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |

Source database:

| Parameter | Value |
|---|---|
| Select Existing Connection | Select a registered database instance from the list to auto-fill the fields below, or configure the fields manually. |
| Database Type | MongoDB |
| Connection Type | Cloud Instance |
| Instance Region | Region where the source instance resides |
| Replicate Data Across Alibaba Cloud Accounts | No (this example uses an instance under the current account) |
| Architecture Type | Replica Set Architecture |
| Migration Method | Select based on your setup: Oplog (recommended when oplog is enabled) or ChangeStream (required for Amazon DocumentDB non-elastic cluster). If Architecture Type is set to Sharded Cluster Architecture, shard account and password fields are not required. |
| Instance ID | ID of the source ApsaraDB for MongoDB instance |
| Authentication Database Name | Name of the database that the account belongs to. Default: admin |
| Database Account | Account with the required permissions (see Database account permissions) |
| Database Password | Password for the account |
| Encryption | Non-encrypted, SSL-encrypted, or Mongo Atlas SSL. Available options depend on Access Method and Architecture Type. Sharded cluster + Oplog does not support SSL-encrypted. For self-managed replica sets not using Alibaba Cloud Instance access, you can upload a CA certificate when SSL-encrypted is selected. |

Destination database:

| Parameter | Value |
|---|---|
| Select Existing Connection | Select a registered instance or configure manually |
| Database Type | MongoDB |
| Connection Type | Cloud Instance |
| Instance Region | Region where the destination instance resides |
| Replicate Data Across Alibaba Cloud Accounts | No |
| Architecture Type | Replica Set Architecture or Sharded Cluster Architecture |
| Instance ID | ID of the destination ApsaraDB for MongoDB instance |
| Authentication Database Name | Default: admin |
| Database Account | Account with the required permissions |
| Database Password | Password for the account |
| Encryption | Non-encrypted, SSL-encrypted, or Mongo Atlas SSL. Sharded cluster destinations do not support SSL-encrypted. For self-managed replica sets not using Alibaba Cloud Instance access, you can upload a CA certificate when SSL-encrypted is selected. |

Step 4: Test connectivity

Click Test Connectivity and Proceed at the bottom of the page.

Add the CIDR blocks of DTS servers to the security settings of both the source and destination databases. See Add the IP address whitelist of DTS servers.
If the source or destination is a self-managed database (Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 5: Configure objects

On the Configure Objects page, configure the following:

| Parameter | Description |
|---|---|
| Migration Types | Select the migration types that match your approach. For details, see Migration types. |
| Processing Mode of Conflicting Tables | Precheck and Block on Error: checks whether a collection with the same name exists in the destination. If one does, the precheck fails and the task does not start; use object name mapping if you need to rename the conflicting collection. Ignore Errors and Continue: skips the check. If a record with the same primary key exists in the destination, the destination record is kept and the source record is not migrated, which may cause data inconsistency. |
| Capitalization of object names in destination instance | Configure the case policy for database and collection names in the destination. Default: DTS Default Policy. See Case conversion policy for destination object names. |
| Source Objects | Click objects to migrate, then click the right-arrow icon to move them to Selected Objects. Select at DATABASE or COLLECTION granularity. |
| Selected Objects | Right-click an object to rename it, map it to a different destination object, or select incremental operations at the database or collection level. To remove an object, click it and then click the left-arrow icon. To set filter conditions (full data migration only), right-click the object. See Set filter conditions. |
If Schema Migration is not selected, make sure the destination already has the required databases and collections. If Incremental Data Migration is not selected, stop writes to the source during migration.

Click Next: Advanced Settings.

Step 6: Configure advanced settings

| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | DTS schedules tasks on a shared cluster by default. Purchase a dedicated cluster for more stable performance. |
| Retry Time for Failed Connections | Duration DTS retries after a connection failure. Default: 720 minutes. Range: 10–1440 minutes. Set to at least 30 minutes. If DTS reconnects within this window, the task resumes automatically. You are charged during the retry period. |
| Retry Time for Other Issues | Duration DTS retries after a non-connectivity error (such as a DDL or DML exception). Default: 10 minutes. Range: 1–1440 minutes. Set to at least 10 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit the QPS to the source, RPS, and migration speed (MB/s) to reduce load during full migration. Available only when Full Data Migration is selected. You can also adjust throttling after the task starts. |
| Only one data type for primary key _id in a table of the data to be synchronized | Indicates whether _id has a single data type per collection. Yes: DTS skips scanning _id types and migrates data for one type only. No: DTS scans all _id types and migrates all data. Select carefully: an incorrect selection may cause data loss. Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limit RPS and migration speed (MB/s) for incremental migration. Available only when Incremental Data Migration is selected. |
| Environment Tag | Attach an environment label to the instance for identification. |
| Configure ETL | Select Yes to configure the ETL feature and enter data processing statements. |
| Monitoring and Alerting | Select Yes to configure an alert threshold and alert notifications. DTS sends an alert if the task fails or latency exceeds the threshold. |

Click Next: Data Validation to configure data validation. For details, see Configure data validation.

Step 7: Save settings and run precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters.
DTS runs a precheck before starting the task. The task starts only after passing.
If the precheck fails, click View Details next to the failed item, fix the issue, and rerun the precheck.
If a warning appears: for items that cannot be ignored, fix them and rerun. For ignorable items, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignored warnings may lead to data inconsistency.

Step 8: Purchase and start the instance

  1. When Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select the instance class. For performance details, see Data migration link specifications.

     | Parameter | Description |
     |---|---|
     | Resource Group Settings | Select the resource group for the instance. See What is Resource Management? |
     | Instance Class | Select a specification based on your migration volume and performance requirements. |
  3. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Track migration progress on the Data Migration Tasks list page.

Tasks without incremental migration stop automatically after full migration completes. The status changes to Completed.
Tasks with incremental migration continue running. The status remains Running until you manually stop the task.

FAQ

Why does latency occur even when no data is being written to the source?

This is typically caused by TTL (Time To Live) indexes. When the TTL index on the source deletes an expired document, it generates a DELETE entry in the oplog. DTS synchronizes this DELETE to the destination. If the destination's TTL index already deleted the same document, DTS finds nothing to delete—the database engine returns an unexpected row count, triggering an exception that slows migration.

A related inconsistency can also occur: because TTL deletion is asynchronous, an expired document may still exist on the source at the moment the destination has already deleted it. This creates a gap between source and destination.

The MongoDB Oplog or ChangeStream records only the updated fields for an UPDATE operation. It does not record the full document before and after the update. Therefore, if an UPDATE operation cannot find the target data on the destination, DTS ignores the operation.

For example:

| Time | Source | Destination |
|---|---|---|
| 1 | Service inserts a document | |
| 2 | | DTS syncs the INSERT |
| 3 | Document has expired, but the TTL index has not yet deleted it | |
| 4 | Service updates the document (e.g., changes the TTL field) | |
| 5 | | TTL index deletes the document |
| 6 | | DTS syncs the UPDATE, but the document is not found—operation is ignored |

The document is now missing from the destination.

To resolve this, temporarily modify the TTL index expiration time on the destination during migration. For details, see Best practices for synchronizing/migrating collections with TTL indexes when MongoDB is the source.
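One way to suspend TTL expiry on the destination during migration is to raise `expireAfterSeconds` with the collMod command; the collection name, indexed field, and restored value below are examples:

```javascript
// Effectively disable TTL expiry on the destination while the migration runs.
db.runCommand({
  collMod: "mycollection",
  index: { keyPattern: { createdAt: 1 }, expireAfterSeconds: 2147483647 }
});
// After cutover, restore the original expiry, for example one hour:
// db.runCommand({ collMod: "mycollection",
//   index: { keyPattern: { createdAt: 1 }, expireAfterSeconds: 3600 } });
```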

What's next

After the migration completes and you verify data consistency:

  1. Stop writes to the source instance.

  2. Wait for the incremental migration task to catch up (latency approaches 0).

  3. Stop the DTS migration task, or revoke write permissions from the DTS account to prevent accidental overwrites.

  4. Switch your application to the destination instance.

  5. If the destination is a sharded cluster, make sure all application operations comply with the sharded collection requirements for that MongoDB database.