
Data Transmission Service:Migrate data from an RDS for MariaDB instance to an RDS for PostgreSQL instance

Last Updated: Mar 28, 2026

Data Transmission Service (DTS) supports schema migration, full data migration, and incremental data migration from an RDS for MariaDB instance to an RDS for PostgreSQL instance. Use all three types together to migrate data with minimal downtime.

Prerequisites

Before you begin, make sure that you have:

  • A source RDS for MariaDB instance and a destination RDS for PostgreSQL instance.

  • A database on the destination instance to receive the migrated objects, with enough available storage space for the data to be migrated.

Migration types

Choose a migration type based on your continuity requirements:

| Migration type | What it does | Recommended for |
| --- | --- | --- |
| Schema migration | Migrates the schema definitions of the migration objects from the source to the destination | Always include as the first step |
| Full data migration | Migrates all existing data from the source to the destination | One-time or offline migrations |
| Incremental data migration | Captures and applies ongoing changes after full data migration completes | Zero-downtime migrations that require service continuity |

For zero-downtime migrations, select all three types together. Incremental migration runs continuously and does not stop automatically until you release the task.

Supported SQL operations for incremental migration:

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |

Required permissions

Grant the following permissions to the database accounts that DTS uses to access the source and destination databases.

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| RDS for MariaDB (source) | SELECT | SELECT | REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, and SELECT |
| RDS for PostgreSQL (destination) | CREATE and USAGE on the migration objects | Owner of the schema | Owner of the schema |

For information about how to create database accounts and grant the required permissions, see the account management documentation for RDS for MariaDB and RDS for PostgreSQL.
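The permissions in the table above can be granted with statements like the following sketch. The account name `dts_user`, the database `mydb`, and the schema `public` are placeholders; adjust them to your environment.

```sql
-- Source (RDS for MariaDB): privileges for schema, full, and incremental migration.
-- SELECT and SHOW VIEW can be scoped to the database; REPLICATION CLIENT and
-- REPLICATION SLAVE are global privileges and must be granted ON *.*.
GRANT SELECT, SHOW VIEW ON mydb.* TO 'dts_user'@'%';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dts_user'@'%';

-- Destination (RDS for PostgreSQL): CREATE and USAGE for schema migration,
-- and schema ownership for full and incremental migration.
GRANT CREATE, USAGE ON SCHEMA public TO dts_user;
ALTER SCHEMA public OWNER TO dts_user;
```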

Billing

| Migration type | Configuration fees | Data transfer fees |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged if data is transferred out of Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged if data is transferred out of Alibaba Cloud over the Internet. See Billing overview. |

Constraints

Review the following constraints before starting your migration.

Source database requirements

| Constraint | Details |
| --- | --- |
| Bandwidth | The server that hosts the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Primary keys or UNIQUE constraints | The tables to be migrated must have primary keys or UNIQUE constraints with unique field values. Otherwise, duplicate data may appear in the destination database. |
| Table limit | If you select individual tables and edit them (for example, by mapping column names), a single migration task supports up to 1,000 tables. If you exceed this limit, split the tables across multiple tasks or migrate the entire database instead. |
| Invisible columns | DTS cannot read data in invisible columns. If the tables contain invisible columns, data loss may occur. |
| DDL operations during migration | Do not run DDL operations that change database or table schemas during schema migration or full migration. During full migration, DTS queries the source database, which creates metadata locks that may block DDL operations on the source database. |
| Writing to the source during full-only migration | If you run only full data migration (without incremental migration), do not write new data to the source database during migration. Otherwise, data becomes inconsistent between the source and destination databases. |
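One way to find tables that would violate the primary key or UNIQUE constraint requirement is to query `information_schema` on the source instance before you configure the task. The database name `mydb` below is a placeholder.

```sql
-- List base tables in mydb that have neither a PRIMARY KEY nor a UNIQUE constraint.
SELECT t.table_name
FROM information_schema.tables t
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );
```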

Binary logging requirements for incremental migration

If you include incremental migration, the source database must meet the following binary logging (binlog) requirements:

| Parameter | Required value | Notes |
| --- | --- | --- |
| Binary logging | Enabled | Required for incremental migration. |
| binlog_format | ROW | An error is reported during the precheck if this parameter is not set correctly. |
| binlog_row_image | FULL | An error is reported during the precheck if this parameter is not set correctly. |
| Local binlog retention | Incremental migration only: more than 24 hours. Full and incremental migration: at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours. | If DTS cannot obtain the binary logs because of insufficient retention, the task may fail, or data may be lost or inconsistent. Issues caused by insufficient binlog retention are not covered by the DTS SLA. |
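You can verify the binlog settings listed above by running the following statements on the source RDS for MariaDB instance:

```sql
-- Expected values: ON, ROW, and FULL, respectively.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'binlog_row_image';
```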

Other constraints

  • Foreign keys: During schema migration, DTS migrates foreign keys to the destination database. During full and incremental migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Cascade update and delete operations on the source database during migration may cause data inconsistency.

  • FLOAT and DOUBLE precision: DTS reads FLOAT and DOUBLE column values using ROUND(COLUMN, PRECISION). The default precision is 38 for FLOAT and 308 for DOUBLE. Confirm that these precision values meet your requirements before starting migration.

  • Online DDL tools: Do not use tools such as pt-online-schema-change to run online DDL operations on the migration objects in the source database. This causes migration failure.

  • Migration timing: Migrate data during off-peak hours to minimize the impact on source and destination database performance. Full data migration consumes read and write resources on both databases.

  • Table fragmentation after full migration: Concurrent INSERT operations during full migration cause table fragmentation in the destination database. After full migration completes, the storage space used by tables in the destination database may be larger than in the source database.

  • Automatic task resumption: DTS attempts to resume a failed task for up to seven days. Before switching workloads to the destination instance, stop or release the migration task. Alternatively, run the REVOKE command to revoke write permissions from the DTS database account on the destination instance to prevent automatic resumption from overwriting destination data.

  • `session_replication_role` parameter (PostgreSQL): For migrations involving tables with foreign keys, triggers, or event triggers in the source database: if the destination database account has superuser permissions, DTS automatically sets session_replication_role to replica at the session level during migration. If the destination account does not have superuser permissions, set session_replication_role to replica manually before starting migration. After the migration task is released, you can reset this parameter to origin. Cascade update or delete operations on the source database while session_replication_role is set to replica may cause data inconsistency.

  • Instance recovery: If a DTS instance fails, the DTS helpdesk attempts to recover it within 8 hours. Recovery operations may include restarting the instance or adjusting DTS instance parameters (database parameters are not modified). For parameters that may be modified, see Modify instance parameters.
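The manual steps described in the constraints above correspond to statements like the following, run on the destination RDS for PostgreSQL instance. This is a sketch: `dts_user` and the schema `public` are placeholders for your own DTS account and schema.

```sql
-- Before migration, if the DTS account does not have superuser permissions:
-- disable trigger firing and foreign key enforcement for replicated sessions.
SET session_replication_role = 'replica';

-- After the migration task is released, restore the default behavior.
SET session_replication_role = 'origin';

-- Before switching workloads, optionally revoke write permissions from the
-- DTS account so that an automatically resumed task cannot overwrite data.
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_user;
```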

Create a migration task

Step 1: Go to the Data Migration page

Use one of the following methods to open the Data Migration page.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner of the page, select the region where the migration instance resides.

DMS console

The steps below may vary based on the mode and layout of the DMS console. See Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance resides.

Step 2: Create a task

  1. Click Create Task.

  2. (Optional) Click New Configuration Page in the upper-right corner if it appears. If Back to Previous Version is displayed instead, skip this step.

    Specific parameters may differ between the new and previous configuration page versions. Use the new version.

Step 3: Configure source and destination databases

Configure the following parameters on the task configuration page.

General settings:

| Parameter | Description |
| --- | --- |
| Task Name | DTS generates a name automatically. Enter a descriptive name for easy identification. The name does not need to be unique. |

Source database:

| Parameter | Value |
| --- | --- |
| Database Type | MariaDB |
| Connection Type | Cloud Instance |
| Instance Region | The region where the source RDS for MariaDB instance resides |
| Replicate Data Across Alibaba Cloud Accounts | No (for same-account migration) |
| Instance ID | The ID of the source RDS for MariaDB instance |
| Database Account | An account with the required permissions (see Required permissions) |
| Database Password | The password of the database account |
| Connection Method | Non-encrypted Connection (default for RDS for MariaDB) |

If the instance is registered with DTS, select it from the Select Existing Connection drop-down list, and DTS automatically fills in the database parameters. In the DMS console, select the instance from Select a DMS database instance.

Destination database:

| Parameter | Value |
| --- | --- |
| Database Type | PostgreSQL |
| Connection Type | Cloud Instance |
| Instance Region | The region where the destination RDS for PostgreSQL instance resides |
| Instance ID | The ID of the destination RDS for PostgreSQL instance |
| Database Name | The name of the database in the destination instance that will store the migrated objects |
| Database Account | An account with the required permissions (see Required permissions) |
| Database Password | The password of the database account |
| Connection Method | Configure based on your requirements. Select SSL-encrypted to use an SSL-encrypted connection, then upload the required certificates. For details on configuring SSL for RDS for PostgreSQL, see SSL encryption. |

Step 4: Test connectivity

Click Test Connectivity and Proceed.

Make sure DTS server IP addresses are added to the security settings (whitelists) of the source and destination databases. See Add DTS server IP addresses to a whitelist.

Step 5: Configure migration objects

On the Configure Objects page, set the following options:

| Configuration | Description |
| --- | --- |
| Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. Add Incremental Data Migration to keep the destination in sync with the source and minimize downtime. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): checks for identically named tables before migration starts. If duplicates are found, the precheck fails and the task does not start; use object name mapping to rename conflicting tables in the destination. Ignore Errors and Proceed: skips the precheck. During full migration, DTS skips conflicting records in the destination. During incremental migration, DTS overwrites them. Use with caution because this may cause data inconsistency. |
| Source Objects | Select one or more objects (databases, tables, or columns) and click the rightwards arrow icon to add them to the Selected Objects section. |
| Selected Objects | Right-click an object to rename it or set a WHERE clause to filter rows. Click Batch Edit in the upper-right corner to rename multiple objects at once. See Map object names and Set filter conditions. |

If you do not select Schema Migration, create the tables in the destination database manually and enable object name mapping in Selected Objects before starting the task. If you use object name mapping, dependent objects may fail to migrate.

Step 6: Configure advanced settings

Click Next: Advanced Settings and configure the following options:

| Configuration | Description |
| --- | --- |
| Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries failed connections after the task starts. Range: 10 to 1,440 minutes. Default: 720 minutes. Set a value greater than 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, the task fails. You are charged for the DTS instance while DTS retries. |
| Retry Time for Other Issues | How long DTS retries failed DDL or DML operations. Range: 1 to 1,440 minutes. Default: 10 minutes. Set a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits the read and write rate during full migration to reduce database load. Configure QPS (queries per second) to the source database, RPS (rows per second) of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits the migration rate during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | (Optional) Assign an environment tag to the instance. |
| Configure ETL | Enable the extract, transform, and load (ETL) feature to process data during migration. Select Yes and enter data processing statements. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Set up alerts for task failures or for latency that exceeds a threshold. Select Yes to configure the alert threshold and notification settings. See Configure monitoring and alerting. |

Step 7: Run the precheck

Click Next: Save Task Settings and Precheck.

DTS runs a precheck before the migration task starts. The task can only start after the precheck passes.

If the precheck fails:

  • Click View Details next to the failed item, resolve the issue, then click Precheck Again.

  • If an alert item can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.

To view the API parameters for this task configuration, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

Step 8: Purchase and start the instance

  1. Wait for Success Rate to reach 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following:

    | Parameter | Description |
    | --- | --- |
    | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
    | Instance Class | The instance class, which determines migration speed. See Instance classes of data migration instances. |
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Verify the migration

After the task starts, go to the Data Migration page to monitor progress.

  • Full or schema migration only: The task stops automatically when complete. The Status column shows Completed.

  • Migration with incremental data migration: The task runs continuously and does not stop automatically. The Status column shows Running. Stop or release the task when you are ready to switch workloads to the destination instance.
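After full migration completes, a quick sanity check is to compare row counts between the source and destination. The statements below are illustrative: the database `mydb`, the schema `public`, and the table `orders` are placeholders, and the PostgreSQL `n_live_tup` figure is an estimate maintained by the statistics collector, not an exact count.

```sql
-- On the source (MariaDB): approximate row counts per table.
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'mydb';

-- On the destination (PostgreSQL): estimated live rows per table.
SELECT relname, n_live_tup
FROM pg_stat_user_tables
WHERE schemaname = 'public';

-- For an exact comparison of a specific table, run on both databases:
SELECT COUNT(*) FROM orders;
```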