Data Transmission Service: Migrate data from ApsaraDB for MongoDB to AnalyticDB for PostgreSQL

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from an ApsaraDB for MongoDB replica set instance to an AnalyticDB for PostgreSQL instance. DTS supports full data migration and incremental data migration, so you can switch over with minimal downtime.

Supported features

Feature | Supported | Notes
Full data migration | Yes | Migrates all historical data
Incremental data migration | Yes | Captures insert, update ($set only), and delete operations
Collection-level migration | Yes | Database-level migration is not supported
Sharded cluster as source | Yes | Mongos node count cannot exceed 10; _id must be unique across collections
ETL | Yes | Enter data processing statements in the task configuration
Throttling | Yes | Configurable for both full and incremental migration
Append-optimized (AO) tables as destination | No |
Transaction preservation | No | Transactions are converted to individual records
Time-series collections (MongoDB 5.0+) | No |
SRV record connections | No |
admin, config, local database migration | No |

Billing

Migration type | Link configuration fee | Data transfer cost
Full data migration | Free | Free when using Alibaba Cloud Instance access. Charged when using Public IP Address access. See Billable items.
Incremental data migration | Charged | See Billing overview.

Prerequisites

Before you begin, ensure that you have:

  • A destination AnalyticDB for PostgreSQL instance with storage space at least 10% larger than the storage space used by the source ApsaraDB for MongoDB instance. See Create an instance.

  • A database, a schema, and a table with a unique (non-composite) primary key created in the destination instance for the migrated data. See SQL syntax.

    Important

    When creating the destination table:

    • Map MongoDB's _id field (ObjectId type) to a varchar column in AnalyticDB for PostgreSQL.
    • Do not name any column _id or _value.
    • Assign the primary key column the value bson_value("_id") when you configure field mappings.
    • Append-optimized (AO) tables are not supported as destination tables.

  • For sharded cluster sources: endpoints for all shard nodes, with the same database account and password across all shards. See Apply for a shard endpoint.

  • Database accounts with the required permissions. See Required permissions.
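The destination-table rules in the note above can be sanity-checked before you configure the task. The following is a minimal Python sketch; validate_destination_table and the shapes of its inputs are illustrative helpers for this document, not part of any DTS or AnalyticDB for PostgreSQL API:

```python
# Illustrative pre-flight check for the destination table rules above.
# validate_destination_table is a hypothetical helper, not a DTS API.

RESERVED_COLUMN_NAMES = {"_id", "_value"}

def validate_destination_table(columns, primary_key):
    """Return a list of problems with a planned destination table.

    columns: dict mapping column name to type name, e.g. {"mongo_id": "varchar"}
    primary_key: list of column names forming the primary key
    """
    problems = []
    # The primary key must be a single (non-composite) column.
    if len(primary_key) != 1:
        problems.append("primary key must be a single (non-composite) column")
    # No destination column may be named _id or _value.
    for name in columns:
        if name in RESERVED_COLUMN_NAMES:
            problems.append("column name %r is reserved" % name)
    # The column that receives bson_value("_id") should be varchar.
    pk = primary_key[0] if primary_key else None
    if pk is not None and columns.get(pk) != "varchar":
        problems.append("primary key column (mapped from _id) should be varchar")
    return problems

ok = validate_destination_table(
    {"mongo_id": "varchar", "person_name": "varchar"}, ["mongo_id"]
)
bad = validate_destination_table({"_id": "varchar"}, ["_id", "person_name"])
```

Running such a check on your planned schema before Step 4 can head off precheck failures caused by reserved column names or a composite primary key.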

Required permissions

Database | Full migration | Incremental migration
Source ApsaraDB for MongoDB | Read on the databases to be migrated | Read on the databases to be migrated, the admin database, and the local database
Destination AnalyticDB for PostgreSQL | Read and write on the destination database | Read and write on the destination database

Limitations

Source database

  • Bandwidth: The source database server must have sufficient outbound bandwidth, or migration speed is affected.

  • Collection limit: A single migration task can migrate a maximum of 1,000 collections when name mapping is configured. If this limit is exceeded, split the objects across multiple tasks.

  • Full migration only: Standalone MongoDB instances, Azure Cosmos DB for MongoDB, and Amazon DocumentDB elastic clusters support full data migration only.

  • Incremental migration — oplog/change streams retention: The source database must have the oplog enabled and retained for at least 7 days, or have change streams enabled with at least 7 days of subscribable history. If this requirement is not met, the task may fail or data loss may occur. Such issues are not covered by the DTS Service-Level Agreement (SLA).

    • Use the oplog when possible — it provides lower latency for incremental tasks.

    • Change streams require MongoDB 4.0 or later.

    • For Amazon DocumentDB (non-elastic cluster), you must manually enable change streams, set Migration Method to ChangeStream, and set Architecture to Sharded Cluster.

  • Sharded cluster — unique `_id`: The _id field in all collections to be migrated must be unique. Otherwise, data inconsistency may occur.

  • Sharded cluster — Mongos node count: The number of Mongos nodes cannot exceed 10.

  • Sharded cluster — orphaned documents: Ensure there are no orphaned documents in the cluster. Otherwise, data inconsistency or task failure may occur. See Orphaned Document and How do I purge orphaned documents from a MongoDB sharded cluster?.

  • Self-managed sharded cluster — connection type: Only Public IP, Leased Line/VPN Gateway/Smart Gateway, and CEN are supported.

  • Self-managed sharded cluster — MongoDB 8.0+ with Oplog: The shard account used by the migration task must have the directShardOperations permission. Grant it by running:

    db.adminCommand({ grantRolesToUser: "username", roles: [{ role: "directShardOperations", db: "admin" }] })

    Replace username with the actual shard account name.

  • SRV records: Connecting to MongoDB using an SRV record is not supported.

  • Sharded cluster with active balancer: If the balancer is rebalancing data during migration, instance latency may occur.
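One of the checks above, _id uniqueness across all shards, can be rehearsed offline. The sketch below runs on simulated per-shard document lists; in practice you would sample documents from each shard endpoint:

```python
# Sketch: detect _id values that collide across shards before migrating a
# sharded cluster. The shard document lists here are simulated sample data.
from collections import Counter

def find_duplicate_ids(shards):
    """shards: an iterable of document lists, one list per shard."""
    counts = Counter(doc["_id"] for shard in shards for doc in shard)
    return sorted(_id for _id, n in counts.items() if n > 1)

shard_a = [{"_id": "a1"}, {"_id": "a2"}]
shard_b = [{"_id": "b1"}, {"_id": "a2"}]  # "a2" collides with shard_a

duplicates = find_duplicate_ids([shard_a, shard_b])  # → ["a2"]
```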

Operations during migration

  • During full data migration, do not change database or collection schemas, including updates to array-type data.

  • If performing full migration only (no incremental migration), do not write new data to the source instance during migration.

  • For sharded cluster sources, do not run commands that change data distribution — such as shardCollection, reshardCollection, unshardCollection, moveCollection, or movePrimary — while the migration instance is running.

Other limitations

  • Only collection-level migration is supported.

  • The admin, config, and local databases cannot be migrated.

  • Transaction information is not retained. Transactions from the source are converted to individual records in the destination.

  • FLOAT columns are migrated with a precision of 38 digits; DOUBLE columns with a precision of 308 digits. DTS reads these values using ROUND(COLUMN,PRECISION). Verify that this precision meets your business requirements before migrating.

  • Full data migration uses concurrent INSERT operations, which can cause fragmentation. After migration, the disk space used by destination tables may exceed that of the source collections.

  • DTS attempts to resume failed tasks within 7 days. Before switching business traffic to the destination, end or release the task — or revoke the destination account's write permissions using the revoke command — to prevent the source from overwriting destination data after an automatic resume.

  • DTS calculates latency based on the timestamp of the last migrated record compared to the current time. If the source has not been updated for a long time, the latency figure may be inaccurate. To refresh it, perform an update on the source database.

  • Migrating time-series collections (introduced in MongoDB 5.0) is not supported.

  • If a task fails, DTS support staff will attempt to restore it within 8 hours. During restoration, the task may be restarted or its parameters adjusted. Only DTS task parameters are modified — database parameters are not changed. Parameters that may be adjusted are listed in Modify instance parameters.

Choose a migration method

DTS supports two incremental migration methods. Choose based on your source configuration:

Condition | Recommended method
Oplog is enabled on the source (default for ApsaraDB for MongoDB) | Oplog — lower latency, recommended
Source is Amazon DocumentDB (non-elastic cluster) | ChangeStream — required
Source is a sharded cluster and you want to avoid providing shard credentials | ChangeStream — no shard account or password required
Change streams are enabled and MongoDB 4.0 or later | ChangeStream
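The selection table above amounts to a short decision rule. The sketch below mirrors it; the condition keys (oplog_enabled and so on) are hypothetical names for this illustration, not DTS configuration fields:

```python
# Decision sketch mirroring the method-selection table above.
# Condition keys are illustrative, not DTS parameters.

def choose_method(source):
    if source.get("is_documentdb_non_elastic"):
        return "ChangeStream"  # required for Amazon DocumentDB (non-elastic)
    if source.get("sharded") and source.get("avoid_shard_credentials"):
        return "ChangeStream"  # no shard account or password required
    if source.get("oplog_enabled"):
        return "Oplog"         # lower latency, recommended
    if source.get("change_streams_enabled") and source.get("version", (0, 0)) >= (4, 0):
        return "ChangeStream"  # change streams require MongoDB 4.0 or later
    raise ValueError("enable the oplog or change streams on the source first")

method = choose_method({"oplog_enabled": True})  # default ApsaraDB for MongoDB case
```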

Migrate data

The migration process has seven steps: navigate to the task list, create a task, configure source and destination databases, configure migration objects, configure advanced settings, run a precheck, and purchase the instance.

Step 1: Go to the migration task list

Use either the DTS console or the DMS console.

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the navigation pane on the left, click Data Migration.

  3. In the upper-left corner of the page, select the region where the migration instance is located.

From the DMS console

Note

The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance is located.

Step 2: Create a task

Click Create Task to open the task configuration page.

Step 3: Configure source and destination databases

Category | Parameter | Description
N/A | Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. The name does not need to be unique.
Source Database | Select Existing Connection | To reuse a previously registered database instance, select it from the drop-down list; the fields below are filled automatically. Otherwise, fill in the fields manually. In the DMS console, this parameter is named Select a DMS database instance.
Source Database | Database Type | Select MongoDB.
Source Database | Access Method | Select Alibaba Cloud Instance.
Source Database | Instance Region | Select the region where the source ApsaraDB for MongoDB instance resides.
Source Database | Replicate Data Across Alibaba Cloud Accounts | Select No for instances under the same Alibaba Cloud account.
Source Database | Architecture | Select Replica Set for this example. For a Sharded Cluster source, also specify Shard account and Shard password.
Source Database | Migration Method | Select based on your source. See Choose a migration method. Oplog is recommended when the source oplog is enabled.
Source Database | Instance ID | Select the instance ID of the source ApsaraDB for MongoDB instance.
Source Database | Authentication Database | Enter the database that the source account belongs to. Default: admin.
Source Database | Database Account | Enter the source database account.
Source Database | Database Password | Enter the password for the source account.
Source Database | Encryption | Select Non-encrypted, SSL-encrypted, or Mongo Atlas SSL. Available options depend on Access Method and Architecture. SSL-encrypted is not supported for sharded cluster sources using the Oplog migration method.
Destination Database | Select Existing Connection | Same as source: reuse a registered instance or fill in manually.
Destination Database | Database Type | Select AnalyticDB for PostgreSQL.
Destination Database | Access Method | Select Alibaba Cloud Instance.
Destination Database | Instance Region | Select the region where the destination AnalyticDB for PostgreSQL instance resides.
Destination Database | Instance ID | Select the instance ID of the destination AnalyticDB for PostgreSQL instance.
Destination Database | Database Name | Enter the name of the destination database.
Destination Database | Database Account | Enter the destination database account.
Destination Database | Database Password | Enter the password for the destination account.

After completing the configuration, click Test Connectivity and Proceed.

Ensure that the IP address ranges of the DTS service are automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For self-managed databases (where Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog. See Add DTS server IP addresses to a whitelist.

Step 4: Configure migration objects

On the Configure Objects page, set the following:

Parameter | Description
Migration Types | Select Full Data Migration for a one-time migration. Select both Full Data Migration and Incremental Data Migration for a near-zero-downtime migration. If Incremental Data Migration is not selected, do not write new data to the source during migration.
DDL and DML Operations to Be Synchronized | Select the DML operations to include in incremental migration at the instance level. To configure at the collection level, right-click a migration object in Selected Objects and select operations in the dialog.
Processing Mode of Conflicting Tables | Precheck and Report Errors: Fails the precheck if identically named collections exist in the destination. Rename conflicting collections if needed (see Object name mapping). Ignore Errors and Proceed: Skips the check. Warning: this may cause data inconsistency, because existing destination records with matching primary keys are kept and source records are not migrated.
Source Objects | Click objects to migrate, then click the right-arrow icon to move them to Selected Objects. Selection is at the collection level.

In Selected Objects, configure object name mapping and field mappings:

Map database names to schemas:

  1. Right-click the database in Selected Objects.

  2. Change Database Name to the name of the target schema in AnalyticDB for PostgreSQL.

  3. (Optional) In Select DDL and DML Operations to Be Synchronized, select operations for incremental migration.

  4. Click OK.

Map collection names to tables:

  1. Right-click the collection in Selected Objects.

  2. Change Table Name to the name of the target table in AnalyticDB for PostgreSQL.

  3. (Optional) Set filter conditions. See Set filter conditions.

  4. (Optional) In Select DDL and DML Operations to Be Synchronized, select operations for incremental migration.

Configure field mappings:

DTS maps collection fields to destination columns automatically. Review the bson_value() expressions and configure Column Name, Type, Length, and Precision for each field.

Important
  • Assign bson_value("_id") to the primary key column.

  • Specify the full path to the lowest-level subfield in each bson_value() expression. For example, use bson_value("person","name") — not bson_value("person") — for a nested field. Using a partial path causes incremental changes to child fields to be lost.

Fields with correct expressions

For fields where the existing bson_value() expression is correct:

  1. Enter Column Name (the destination column name in AnalyticDB for PostgreSQL).

  2. Select Type (ensure compatibility with the source MongoDB data type).

  3. (Optional) Configure Length and Precision.

  4. Repeat for all fields.

Fields with incorrect expressions

For fields where the expression is incorrect or missing (for example, nested fields):

  1. In the Actions column, click the delete icon to remove the existing entry.

  2. Click + Add Column.

  3. Configure Column Name, Type, Length, and Precision.

  4. In the Assignment field, enter the bson_value() expression. See Field mapping example.

  5. Repeat for all fields.

Click OK.

Step 5: Configure advanced settings

Click Next: Advanced Settings and configure:

Parameter | Description
Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks on a shared cluster. Purchase a dedicated cluster for more stable task execution.
Retry Time for Failed Connections | Default: 720 minutes. Range: 10–1440 minutes. Set to more than 30 minutes. DTS retries the connection automatically within this window. When multiple DTS instances share the same source or destination, the retry time is determined by the most recently created task. You are charged during the retry period; release the instance promptly if the source or destination is also released.
Retry Time for Other Issues | Default: 10 minutes. Range: 1–1440 minutes. Set to more than 10 minutes. Must be less than Retry Time for Failed Connections.
Enable Throttling for Full Data Migration | Limit Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce database load. Available only when Full Data Migration is selected. Throttling can also be adjusted after the task starts; see Enable throttling for data migration.
Only one data type for primary key _id in a table of the data to be synchronized | Indicates whether the _id primary key uses a single data type across the collection. Yes: DTS skips scanning primary key data types and migrates data for one type per collection. No: DTS scans all primary key data types and migrates all data. Select carefully; an incorrect selection may cause data loss. Available only when Full Data Migration is selected.
Enable Throttling for Incremental Data Migration | Limit RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.
Environment Tag | (Optional) Tag the instance for environment identification.
Configure ETL | Enable or disable the extract, transform, and load (ETL) feature. If enabled, enter data processing statements in the code editor. See What is ETL? and Configure ETL in a data migration or data synchronization task.
Monitoring and Alerting | Configure an alert threshold and notification contacts. If migration fails or latency exceeds the threshold, the system sends an alert.
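The "_id data type" option above asks whether every _id in a collection shares one type. The sketch below shows the kind of sample check that question implies; the documents are illustrative, and a real check would sample the actual collection:

```python
# Sketch of the question behind the "_id data type" option: does every _id
# in the collection use a single type? Sample documents are illustrative.

def id_types(docs):
    """Return the set of type names used by _id across a document sample."""
    return {type(doc["_id"]).__name__ for doc in docs}

uniform = [{"_id": "62cd0001"}, {"_id": "62cd0002"}]  # all string _id values
mixed = [{"_id": "62cd0001"}, {"_id": 1001}]          # string and int mixed

# A single type suggests Yes is safe; multiple types call for No,
# since Yes would migrate data for only one of the types.
uniform_types = id_types(uniform)
mixed_types = id_types(mixed)
```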

Step 6: Save settings and run a precheck

Click Next: Save Task Settings and Precheck.

To view OpenAPI parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.

DTS runs a precheck before starting the task. If the precheck fails:

  • Click View Details next to the failed item, fix the issue, and click Precheck Again.

  • For warnings on items that can be ignored: click Confirm Alert Details, Ignore, OK, and Precheck Again. Ignoring warnings may cause data inconsistency.

Step 7: Purchase the instance

  1. When Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, configure the instance:

    Parameter | Description
    Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management?.
    Instance Class | Select a link specification based on your required migration speed. Higher specifications support faster migration. See Data migration link specifications.
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start. In the confirmation dialog, click OK.

The task appears on the Data Migration Tasks list page. Monitor its progress there.

For full-migration-only tasks, the task stops automatically after full migration completes. The Status changes to Completed.
For tasks that include incremental migration, the task continues running. The Status remains Running until you stop it manually.

Field mapping example

This example shows how MongoDB document fields map to AnalyticDB for PostgreSQL table columns using bson_value() expressions.

Source MongoDB document

{
  "_id": "62cd344c85c1ea6a2a9f****",
  "person": {
    "name": "neo",
    "age": 26,
    "sex": "male"
  }
}

Destination AnalyticDB for PostgreSQL table schema

Column name | Type | Note
mongo_id | varchar | Primary key
person_name | varchar |
person_age | decimal |

Data in the destination table after migration

The following shows what the destination table looks like after migration, given the source document above:

mongo_id | person_name | person_age
62cd344c85c1ea6a2a9f**** | neo | 26

Note that sex is not migrated in this example because no column was configured for it.

Field mapping configuration

Important

Specify the full path to each subfield in the bson_value() expression. For example, bson_value("person","name") maps the name subfield of person. Using bson_value("person") alone cannot propagate incremental changes to child fields such as name, age, and sex.

Column name | Type | Assignment
mongo_id | STRING | bson_value("_id")
person_name | STRING | bson_value("person","name")
person_age | DECIMAL | bson_value("person","age")
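The mapping above can be reproduced with a small sketch. extract() is an illustrative stand-in for DTS's bson_value() expression, assuming each configured path names the lowest-level subfield:

```python
# Flatten the sample MongoDB document into one destination row using the
# same full field paths as the bson_value() expressions above.
# extract() is an illustrative stand-in, not the DTS implementation.

def extract(doc, *path):
    """Follow a full field path into a nested document."""
    value = doc
    for key in path:
        value = value[key]
    return value

source_doc = {
    "_id": "62cd344c85c1ea6a2a9f****",
    "person": {"name": "neo", "age": 26, "sex": "male"},
}

# Destination column -> full field path, as in the mapping table above.
mapping = {
    "mongo_id": ("_id",),
    "person_name": ("person", "name"),
    "person_age": ("person", "age"),
}

row = {column: extract(source_doc, *path) for column, path in mapping.items()}
# "sex" never appears in the row because no column was configured for it.
```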

What's next