Use Data Transmission Service (DTS) to migrate data from an ApsaraDB for MongoDB replica set instance to an AnalyticDB for PostgreSQL instance. DTS supports full data migration and incremental data migration, so you can switch over with minimal downtime.
Supported features
| Feature | Supported | Notes |
|---|---|---|
| Full data migration | Yes | Migrates all historical data |
| Incremental data migration | Yes | Captures insert, update ($set only), and delete operations |
| Collection-level migration | Yes | Database-level migration is not supported |
| Sharded cluster as source | Yes | Mongos node count cannot exceed 10; _id must be unique across collections |
| ETL | Yes | Enter data processing statements in the task configuration |
| Throttling | Yes | Configurable for both full and incremental migration |
| Append-optimized (AO) tables as destination | No | |
| Transaction preservation | No | Transactions are converted to individual records |
| Time-series collections (MongoDB 5.0+) | No | |
| SRV record connections | No | |
| admin, config, and local database migration | No | |
Billing
| Migration type | Link configuration fee | Data transfer cost |
|---|---|---|
| Full data migration | Free | Free when using Alibaba Cloud Instance access. Charged when using Public IP Address access. See Billable items. |
| Incremental data migration | Charged | See Billing overview. |
Prerequisites
Before you begin, ensure that you have:
A destination AnalyticDB for PostgreSQL instance with storage space at least 10% larger than the storage space used by the source ApsaraDB for MongoDB instance. See Create an instance.
A database, a schema, and a table with a unique (non-composite) primary key created in the destination instance for the migrated data. See SQL syntax.
Important: When creating the destination table:
- Map MongoDB's `_id` field (ObjectId type) to a varchar column in AnalyticDB for PostgreSQL.
- Do not name any column `_id` or `_value`.
- Assign the primary key column the value `bson_value("_id")` when you configure field mappings.
- Append-optimized (AO) tables are not supported as destination tables.
For sharded cluster sources: endpoints for all shard nodes, with the same database account and password across all shards. See Apply for a shard endpoint.
Database accounts with the required permissions. See Required permissions.
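The destination-table rules above can be summarized in a sketch. The table and column names below are hypothetical examples, not names required by DTS; only the pattern comes from the notes above: a single varchar primary-key column that receives `bson_value("_id")`, no columns named `_id` or `_value`, and a default heap (non-AO) table.

```python
# Sketch of a destination DDL that satisfies the rules above. All object
# names are hypothetical; adjust them to your own schema design.
ddl = """
CREATE TABLE public.person_from_mongo (
    mongo_id    varchar PRIMARY KEY,  -- receives bson_value("_id")
    person_name varchar,
    person_age  decimal
);
"""
# No column is named _id or _value, the primary key is a single (non-composite)
# varchar column, and no AO storage options are used.
print(ddl.strip())
```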
Required permissions
| Database | Full migration | Incremental migration |
|---|---|---|
| Source ApsaraDB for MongoDB | Read on the databases to be migrated | Read on the databases to be migrated, the admin database, and the local database |
| Destination AnalyticDB for PostgreSQL | Read and write on the destination database | Read and write on the destination database |
To create and authorize source database accounts, see Manage permissions for MongoDB database accounts.
For the destination, use the initial account or an account with the RDS_SUPERUSER permission. See Create and manage users and Manage user permissions.
Limitations
Source database
Bandwidth: The source database server must have sufficient outbound bandwidth, or migration speed is affected.
Collection limit: A single migration task can migrate a maximum of 1,000 collections when name mapping is configured. If this limit is exceeded, split the objects across multiple tasks.
Full migration only: Standalone MongoDB instances, Azure Cosmos DB for MongoDB, and Amazon DocumentDB elastic clusters support full data migration only.
Incremental migration — oplog/change streams retention: The source database must have the oplog enabled and retained for at least 7 days, or have change streams enabled with at least 7 days of subscribable history. If this requirement is not met, the task may fail or data loss may occur. Such issues are not covered by the DTS Service-Level Agreement (SLA).
Use the oplog when possible — it provides lower latency for incremental tasks.
Change streams require MongoDB 4.0 or later.
For Amazon DocumentDB (non-elastic cluster), you must manually enable change streams, set Migration Method to ChangeStream, and set Architecture to Sharded Cluster.
Sharded cluster — unique `_id`: The `_id` field in all collections to be migrated must be unique. Otherwise, data inconsistency may occur.
Sharded cluster — Mongos node count: The number of Mongos nodes cannot exceed 10.
Sharded cluster — orphaned documents: Ensure there are no orphaned documents in the cluster. Otherwise, data inconsistency or task failure may occur. See Orphaned Document and How do I purge orphaned documents from a MongoDB sharded cluster?.
Self-managed sharded cluster — connection type: Only Public IP, Leased Line/VPN Gateway/Smart Gateway, and CEN are supported.
Self-managed sharded cluster — MongoDB 8.0+ with Oplog: The shard account used by the migration task must have the `directShardOperations` permission. Grant it by running:

```javascript
db.adminCommand({
  grantRolesToUser: "username",
  roles: [{ role: "directShardOperations", db: "admin" }]
})
```

Replace `username` with the actual shard account name.
SRV records: Connecting to MongoDB using an SRV record is not supported.
Sharded cluster with active balancer: If the balancer is rebalancing data during migration, instance latency may occur.
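As a sketch of the 7-day retention check described above, assuming you have already read the first and last oplog entry timestamps from the source (for example, via `db.getReplicationInfo()` in mongosh, whose `tFirst` and `tLast` fields report them):

```python
from datetime import datetime, timedelta

def oplog_window_ok(t_first: datetime, t_last: datetime,
                    required_days: int = 7) -> bool:
    """Return True when the retained oplog span covers the required window."""
    return (t_last - t_first) >= timedelta(days=required_days)

# A 10-day retained window satisfies the DTS requirement; a 3-day window does not.
print(oplog_window_ok(datetime(2024, 1, 1), datetime(2024, 1, 11)))  # True
print(oplog_window_ok(datetime(2024, 1, 8), datetime(2024, 1, 11)))  # False
```

Note that this only checks the currently retained span; you must also confirm that the oplog size and write volume keep the window at 7 days or more going forward.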
Operations during migration
During full data migration, do not change database or collection schemas, including updates to array-type data.
If performing full migration only (no incremental migration), do not write new data to the source instance during migration.
For sharded cluster sources, do not run commands that change data distribution, such as `shardCollection`, `reshardCollection`, `unshardCollection`, `moveCollection`, or `movePrimary`, while the migration instance is running.
Other limitations
Only collection-level migration is supported.
The `admin`, `config`, and `local` databases cannot be migrated.
Transaction information is not retained. Transactions from the source are converted to individual records in the destination.
FLOAT columns are migrated with a precision of 38 digits; DOUBLE columns with a precision of 308 digits. DTS reads these values using `ROUND(COLUMN, PRECISION)`. Verify that this precision meets your business requirements before migrating.
Full data migration uses concurrent INSERT operations, which can cause fragmentation. After migration, the disk space used by destination tables may exceed that of the source collections.
DTS attempts to resume failed tasks within 7 days. Before switching business traffic to the destination, end or release the task, or revoke the destination account's write permissions using the `REVOKE` command, to prevent the source from overwriting destination data after an automatic resume.
DTS calculates latency based on the timestamp of the last migrated record compared to the current time. If the source has not been updated for a long time, the latency figure may be inaccurate. To refresh it, perform an update on the source database.
Migrating time-series collections (introduced in MongoDB 5.0) is not supported.
If a task fails, DTS support staff will attempt to restore it within 8 hours. During restoration, the task may be restarted or its parameters adjusted. Only DTS task parameters are modified — database parameters are not changed. Parameters that may be adjusted are listed in Modify instance parameters.
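To refresh an inaccurate latency figure, any trivial write on the source suffices. A pymongo-style sketch, assuming a hypothetical `dts_heartbeat` collection reserved for this purpose (the collection name and database are illustrative, not part of DTS):

```python
from datetime import datetime, timezone

def heartbeat_spec() -> tuple:
    """Build the filter/update pair for collection.update_one(..., upsert=True)."""
    return (
        {"_id": "dts_heartbeat"},                        # fixed heartbeat document
        {"$set": {"ts": datetime.now(timezone.utc)}},    # bump a timestamp field
    )

flt, upd = heartbeat_spec()
# With a live pymongo client, the actual write would be:
# client["appdb"]["dts_heartbeat"].update_one(flt, upd, upsert=True)
print(flt["_id"])
```

Only use a heartbeat like this for tasks that include incremental migration; for full-migration-only tasks, no data should be written to the source while the task runs.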
Choose a migration method
DTS supports two incremental migration methods. Choose based on your source configuration:
| Condition | Recommended method |
|---|---|
| Oplog is enabled on the source (default for ApsaraDB for MongoDB) | Oplog — lower latency, recommended |
| Source is Amazon DocumentDB (non-elastic cluster) | ChangeStream — required |
| Source is a sharded cluster and you want to avoid providing shard credentials | ChangeStream — no shard account or password required |
| Change streams are enabled and MongoDB 4.0 or later | ChangeStream |
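The decision table above can be mirrored in a small helper. This is an illustrative sketch of the selection logic only, not DTS code:

```python
def pick_migration_method(source_is_documentdb: bool,
                          oplog_enabled: bool,
                          change_streams_available: bool) -> str:
    """Mirror the recommendation table above (illustrative only)."""
    if source_is_documentdb:
        return "ChangeStream"  # required for Amazon DocumentDB (non-elastic)
    if oplog_enabled:
        return "Oplog"         # lower latency, recommended
    if change_streams_available:
        return "ChangeStream"  # requires MongoDB 4.0 or later
    raise ValueError("enable the oplog or change streams on the source first")

print(pick_migration_method(False, True, True))  # Oplog
print(pick_migration_method(True, True, True))   # ChangeStream
```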
Migrate data
The migration process has seven steps: navigate to the task list, create a task, configure source and destination databases, configure migration objects, configure advanced settings, run a precheck, and purchase the instance.
Step 1: Go to the migration task list
Use either the DTS console or the DMS console.
From the DTS console
Log on to the Data Transmission Service (DTS) console.
In the navigation pane on the left, click Data Migration.
In the upper-left corner of the page, select the region where the migration instance is located.
From the DMS console
The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.
Log on to the Data Management (DMS) console.
In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.
To the right of Data Migration Tasks, select the region where the migration instance is located.
Step 2: Create a task
Click Create Task to open the task configuration page.
Step 3: Configure source and destination databases
| Category | Parameter | Description |
|---|---|---|
| — | Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique. |
| Source Database | Select Existing Connection | To reuse a previously registered database instance, select it from the drop-down list. The fields below are filled automatically. In the DMS console, this parameter is named Select a DMS database instance. Otherwise, fill in the fields manually. |
| Database Type | Select MongoDB. | |
| Access Method | Select Alibaba Cloud Instance. | |
| Instance Region | Select the region where the source ApsaraDB for MongoDB instance resides. | |
| Replicate Data Across Alibaba Cloud Accounts | Select No for instances under the same Alibaba Cloud account. | |
| Architecture | Select Replica Set for this example. For a Sharded Cluster source, also specify Shard account and Shard password. | |
| Migration Method | Select based on your source. See Choose a migration method. Oplog is recommended when the source oplog is enabled. | |
| Instance ID | Select the instance ID of the source ApsaraDB for MongoDB instance. | |
| Authentication Database | Enter the database that the source account belongs to. Default: admin. | |
| Database Account | Enter the source database account. | |
| Database Password | Enter the password for the source account. | |
| Encryption | Select Non-encrypted, SSL-encrypted, or Mongo Atlas SSL. Available options depend on Access Method and Architecture. SSL-encrypted is not supported for sharded cluster sources using the Oplog migration method. | |
| Destination Database | Select Existing Connection | Same as source: reuse a registered instance or fill in manually. |
| Database Type | Select AnalyticDB for PostgreSQL. | |
| Access Method | Select Alibaba Cloud Instance. | |
| Instance Region | Select the region where the destination AnalyticDB for PostgreSQL instance resides. | |
| Instance ID | Select the instance ID of the destination AnalyticDB for PostgreSQL instance. | |
| Database Name | Enter the name of the destination database. | |
| Database Account | Enter the destination database account. | |
| Database Password | Enter the password for the destination account. |
After completing the configuration, click Test Connectivity and Proceed.
Ensure that the IP address ranges of the DTS service are automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For self-managed databases (where Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog. See Add DTS server IP addresses to a whitelist.
Step 4: Configure migration objects
On the Configure Objects page, set the following:
| Parameter | Description |
|---|---|
| Migration Types | Select Full Data Migration for a one-time migration. Select both Full Data Migration and Incremental Data Migration for a near-zero-downtime migration. If Incremental Data Migration is not selected, do not write new data to the source during migration. |
| DDL and DML Operations to Be Synchronized | Select the DML operations to include in incremental migration at the instance level. To configure at the collection level, right-click a migration object in Selected Objects and select operations in the dialog. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: Fails the precheck if identically named collections exist in the destination. Rename conflicting collections if needed (see Object name mapping). Ignore Errors and Proceed: Skips the check. Warning This may cause data inconsistency — existing destination records with matching primary keys are kept and source records are not migrated. |
| Source Objects | Click objects to migrate, then click the right-arrow icon to move them to Selected Objects. Selection is at the collection level. |
In Selected Objects, configure object name mapping and field mappings:
Map database names to schemas:
Right-click the database in Selected Objects.

Change Database Name to the name of the target schema in AnalyticDB for PostgreSQL.

(Optional) In Select DDL and DML Operations to Be Synchronized, select operations for incremental migration.

Click OK.
Map collection names to tables:
Right-click the collection in Selected Objects.

Change Table Name to the name of the target table in AnalyticDB for PostgreSQL.

(Optional) Set filter conditions. See Set filter conditions.

(Optional) In Select DDL and DML Operations to Be Synchronized, select operations for incremental migration.

Configure field mappings:
DTS maps collection fields to destination columns automatically. Review the bson_value() expressions and configure Column Name, Type, Length, and Precision for each field.
Assign `bson_value("_id")` to the primary key column.
Specify the full path to the lowest-level subfield in each `bson_value()` expression. For example, use `bson_value("person","name")`, not `bson_value("person")`, for a nested field. Using a partial path causes incremental changes to child fields to be lost.
Fields with correct expressions
For fields where the existing bson_value() expression is correct:
Enter Column Name (the destination column name in AnalyticDB for PostgreSQL).
Select Type (ensure compatibility with the source MongoDB data type).
(Optional) Configure Length and Precision.
Repeat for all fields.
Fields with incorrect expressions
For fields where the expression is incorrect or missing (for example, nested fields):
In the Actions column, click the delete icon to remove the existing entry.
Click + Add Column.

Configure Column Name, Type, Length, and Precision.
In the Assignment field, enter the `bson_value()` expression. See Field mapping example.
Repeat for all fields.
Click OK.
Step 5: Configure advanced settings
Click Next: Advanced Settings and configure:
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks on a shared cluster. Purchase a dedicated cluster for more stable task execution. |
| Retry Time for Failed Connections | Default: 720 minutes. Range: 10–1440 minutes. Set to more than 30 minutes. DTS retries the connection automatically within this window. When multiple DTS instances share the same source or destination, the retry time is determined by the most recently created task. You are charged during the retry period — release the instance promptly if the source or destination is also released. |
| Retry Time for Other Issues | Default: 10 minutes. Range: 1–1440 minutes. Set to more than 10 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce database load. Available only when Full Data Migration is selected. Throttling can also be adjusted after the task starts — see Enable throttling for data migration. |
| Only one data type for primary key _id in a table of the data to be synchronized | Indicates whether the _id primary key uses a single data type across the collection. Yes: DTS skips scanning primary key data types and migrates data for one type per collection. No: DTS scans all primary key data types and migrates all data. Select carefully — incorrect selection may cause data loss. Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limit RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | (Optional) Tag the instance for environment identification. |
| Configure ETL | Enable or disable the extract, transform, and load (ETL) feature. If enabled, enter data processing statements in the code editor. See What is ETL? and Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Configure an alert threshold and notification contacts. If migration fails or latency exceeds the threshold, the system sends an alert. |
Step 6: Save settings and run a precheck
Click Next: Save Task Settings and Precheck.
To view OpenAPI parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.
DTS runs a precheck before starting the task. If the precheck fails:
Click View Details next to the failed item, fix the issue, and click Precheck Again.
For warnings on items that can be ignored: click Confirm Alert Details, Ignore, OK, and Precheck Again. Ignoring warnings may cause data inconsistency.
Step 7: Purchase the instance
When Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, configure the instance:
| Parameter | Description |
|---|---|
| Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management?. |
| Instance Class | Select a link specification based on your required migration speed. Higher specifications support faster migration. See Data migration link specifications. |

Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start. In the confirmation dialog, click OK.
The task appears on the Data Migration Tasks list page. Monitor its progress there.
For full-migration-only tasks, the task stops automatically after full migration completes. The Status changes to Completed.
For tasks that include incremental migration, the task continues running. The Status remains Running until you stop it manually.
Field mapping example
This example shows how MongoDB document fields map to AnalyticDB for PostgreSQL table columns using bson_value() expressions.
Source MongoDB document
```json
{
  "_id": "62cd344c85c1ea6a2a9f****",
  "person": {
    "name": "neo",
    "age": 26,
    "sex": "male"
  }
}
```

Destination AnalyticDB for PostgreSQL table schema
| Column name | Type | Note |
|---|---|---|
| mongo_id | varchar | Primary key |
| person_name | varchar | |
| person_age | decimal | |
Data in the destination table after migration
The following shows what the destination table looks like after migration, given the source document above:
| mongo_id | person_name | person_age |
|---|---|---|
| 62cd344c85c1ea6a2a9f**** | neo | 26 |
Note that sex is not migrated in this example because no column was configured for it.
Field mapping configuration
Specify the full path to each subfield in the bson_value() expression. For example, bson_value("person","name") maps the name subfield of person. Using bson_value("person") alone cannot propagate incremental changes to child fields such as name, age, and sex.
| Column name | Type | Assignment |
|---|---|---|
| mongo_id | STRING | bson_value("_id") |
| person_name | STRING | bson_value("person","name") |
| person_age | DECIMAL | bson_value("person","age") |
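As a self-contained illustration of what these mappings do (plain Python, not DTS code), the assignments in the table above flatten the sample document into one destination row. The `bson_value` helper here simply mimics walking the full path to a leaf field:

```python
# The sample source document from this section.
doc = {
    "_id": "62cd344c85c1ea6a2a9f****",
    "person": {"name": "neo", "age": 26, "sex": "male"},
}

def bson_value(document, *path):
    """Walk the full path to a leaf field, like bson_value("person","name")."""
    value = document
    for key in path:
        value = value[key]
    return value

# One column per configured mapping; "sex" has no mapping, so it is dropped.
row = {
    "mongo_id": bson_value(doc, "_id"),
    "person_name": bson_value(doc, "person", "name"),
    "person_age": bson_value(doc, "person", "age"),
}
print(row)  # {'mongo_id': '62cd344c85c1ea6a2a9f****', 'person_name': 'neo', 'person_age': 26}
```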