Data Transmission Service (DTS) supports schema migration, full data migration, and incremental data migration from an RDS for MariaDB instance to an RDS for PostgreSQL instance. Use all three types together to migrate data with minimal downtime.
Prerequisites
Before you begin, make sure that you have:
A destination RDS for PostgreSQL instance with storage space larger than the storage used by the source RDS for MariaDB instance. See Create an RDS for PostgreSQL instance.
A database created in the destination RDS for PostgreSQL instance to receive the migrated data. See Create accounts and databases.
Migration types
Choose a migration type based on your continuity requirements:
| Migration type | What it does | Recommended for |
|---|---|---|
| Schema migration | Migrates schema definitions of the migration objects from source to destination | Always include as a first step |
| Full data migration | Migrates all existing data from source to destination | One-time or offline migrations |
| Incremental data migration | Captures and applies ongoing changes after full migration completes | Zero-downtime migrations requiring service continuity |
For zero-downtime migrations, select all three types together. Incremental migration runs continuously and does not stop automatically until you release the task.
Supported SQL operations for incremental migration:
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
Required permissions
Grant the following permissions to the database accounts that DTS uses to access the source and destination databases.
| Database | Schema migration | Full migration | Incremental migration |
|---|---|---|---|
| RDS for MariaDB (source) | SELECT | SELECT | REPLICATION CLIENT, REPLICATION SLAVE, SHOW VIEW, SELECT |
| RDS for PostgreSQL (destination) | CREATE and USAGE on the migration objects | Owner of the schema | Owner of the schema |
To create accounts and manage permissions:
For the RDS for MariaDB instance, see Create an account and Modify or reset account permissions.
For the RDS for PostgreSQL instance, see Create an account and Authorize a service account.
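As a sketch of what the permissions in the table above can look like in SQL, the statements below use `dts_user` and the schema name `public` as placeholder names, not values from this guide:

```sql
-- On the source RDS for MariaDB instance (dts_user is a placeholder account).
-- SELECT covers schema and full migration; the replication privileges and
-- SHOW VIEW are needed only when incremental migration is selected.
GRANT SELECT, SHOW VIEW, REPLICATION CLIENT, REPLICATION SLAVE
    ON *.* TO 'dts_user'@'%';

-- On the destination RDS for PostgreSQL instance.
-- CREATE and USAGE on the target schema suffice for schema migration;
-- full and incremental migration require the account to own the schema.
GRANT CREATE, USAGE ON SCHEMA public TO dts_user;
ALTER SCHEMA public OWNER TO dts_user;
```

On RDS instances, you can also create the accounts and assign permissions from the console instead of running these statements directly.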
Billing
| Migration type | Configuration fees | Data transfer fees |
|---|---|---|
| Schema migration and full data migration | Free | Charged if data is transferred out of Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Constraints
Review the following constraints before starting your migration.
Source database requirements
| Constraint | Details |
|---|---|
| Bandwidth | The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Primary keys or UNIQUE constraints | Tables to be migrated must have primary keys or UNIQUE constraints with unique field values. Without them, duplicate data may appear in the destination database. |
| Table limit | If you select individual tables and edit them (for example, by mapping column names), a single migration task supports up to 1,000 tables. If you exceed this limit, split the tables across multiple tasks or migrate the entire database instead. |
| Invisible columns | Data in invisible columns cannot be read by DTS. If the tables contain invisible columns, data loss may occur. |
| DDL operations during migration | Do not run DDL operations that change database or table schemas during schema migration or full migration. During full migration, DTS queries the source database, which creates metadata locks that may block DDL operations on the source database. |
| Writing to source during full-only migration | If you run only full data migration (without incremental migration), do not write new data to the source database during migration. Otherwise, data inconsistency occurs between the source and destination databases. |
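To find tables that would violate the primary key requirement above, you can run a query like the following on the source MariaDB instance (`your_db` is a placeholder for the database you plan to migrate):

```sql
-- Lists base tables in your_db (placeholder name) that have neither a
-- PRIMARY KEY nor a UNIQUE constraint; such tables may produce duplicate
-- data in the destination database.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'your_db'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```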
Binary logging requirements for incremental migration
If you include incremental migration, the source database must meet the following binary logging (binlog) requirements:
| Parameter | Required value | Notes |
|---|---|---|
| Binary logging | Enabled | Required for incremental migration |
| binlog_format | ROW | An error is reported during precheck if this is not set correctly |
| binlog_row_image | FULL | An error is reported during precheck if this is not set correctly |
| Local binlog retention | If you select incremental migration only: more than 24 hours. If you select full + incremental migration: at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours. | If DTS cannot obtain the binary logs due to insufficient retention, the task may fail or data may be lost or inconsistent. Issues caused by insufficient binlog retention are not covered by the DTS SLA. |
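You can confirm the binlog settings above by querying the source instance; on RDS for MariaDB, the values themselves are usually changed through the console parameter settings rather than with SQL:

```sql
-- Verify the binlog parameters against the precheck requirements.
SHOW VARIABLES LIKE 'log_bin';          -- expect ON (binary logging enabled)
SHOW VARIABLES LIKE 'binlog_format';    -- expect ROW
SHOW VARIABLES LIKE 'binlog_row_image'; -- expect FULL
```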
Other constraints
Foreign keys: During schema migration, DTS migrates foreign keys to the destination database. During full and incremental migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Cascade update and delete operations on the source database during migration may cause data inconsistency.
FLOAT and DOUBLE precision: DTS reads FLOAT and DOUBLE column values using `ROUND(COLUMN, PRECISION)`. The default precision is 38 for FLOAT and 308 for DOUBLE. Confirm that these precision values meet your requirements before starting migration.
Online DDL tools: Do not use tools such as `pt-online-schema-change` to run online DDL operations on the migration objects in the source database. This causes migration failure.
Migration timing: Migrate data during off-peak hours to minimize the impact on source and destination database performance. Full data migration consumes read and write resources on both databases.
Table fragmentation after full migration: Concurrent INSERT operations during full migration cause table fragmentation in the destination database. After full migration completes, the storage space used by tables in the destination database may be larger than in the source database.
Automatic task resumption: DTS attempts to resume a failed task for up to seven days. Before switching workloads to the destination instance, stop or release the migration task. Alternatively, run the `REVOKE` command to revoke write permissions from the DTS database account on the destination instance to prevent automatic resumption from overwriting destination data.
`session_replication_role` parameter (PostgreSQL): For migrations involving tables with foreign keys, triggers, or event triggers in the source database: if the destination database account has superuser permissions, DTS automatically sets `session_replication_role` to `replica` at the session level during migration. If the destination account does not have superuser permissions, set `session_replication_role` to `replica` manually before starting migration. After the migration task is released, you can reset this parameter to `origin`. Cascade update or delete operations on the source database while `session_replication_role` is set to `replica` may cause data inconsistency.
Instance recovery: If a DTS instance fails, the DTS helpdesk attempts to recover it within 8 hours. Recovery operations may include restarting the instance or adjusting DTS instance parameters (database parameters are not modified). For parameters that may be modified, see Modify instance parameters.
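One way to apply the `session_replication_role` setting manually is sketched below; whether these statements are permitted depends on your PostgreSQL version and the privileges of your RDS account, and `dts_user` is a placeholder name:

```sql
-- Before starting migration, have new sessions of the DTS account use
-- replica mode (disables triggers and foreign key enforcement per session).
ALTER ROLE dts_user SET session_replication_role = 'replica';

-- After the migration task is released, restore the default behavior.
ALTER ROLE dts_user SET session_replication_role = 'origin';
```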
Create a migration task
Step 1: Go to the Data Migration page
Use one of the following methods to open the Data Migration page.
DTS console
Log on to the DTS console.
In the left-side navigation pane, click Data Migration.
In the upper-left corner of the page, select the region where the migration instance resides.
DMS console
The steps below may vary based on the mode and layout of the DMS console. See Simple mode and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top navigation bar, move the pointer over Data + AI > DTS (DTS) > Data Migration.
From the drop-down list to the right of Data Migration Tasks, select the region where the migration instance resides.
Step 2: Create a task
Click Create Task.
(Optional) Click New Configuration Page in the upper-right corner if it appears. If Back to Previous Version is displayed instead, skip this step.
Specific parameters may differ between the new and previous configuration page versions. Use the new version.
Step 3: Configure source and destination databases
Configure the following parameters on the task configuration page.
General settings:
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Enter a descriptive name for easy identification. The name does not need to be unique. |
Source database:
| Parameter | Value |
|---|---|
| Database Type | MariaDB |
| Connection Type | Cloud Instance |
| Instance Region | Region where the source RDS for MariaDB instance resides |
| Replicate Data Across Alibaba Cloud Accounts | No (for same-account migration) |
| Instance ID | ID of the source RDS for MariaDB instance |
| Database Account | Account with the required permissions (see Required permissions) |
| Database Password | Password for the database account |
| Connection Method | Non-encrypted Connection (default for RDS for MariaDB) |
If the instance is registered with DTS, select it from the Select Existing Connection drop-down list. DTS auto-fills the database parameters. In the DMS console, select the instance from Select a DMS database instance.
Destination database:
| Parameter | Value |
|---|---|
| Database Type | PostgreSQL |
| Connection Type | Cloud Instance |
| Instance Region | Region where the destination RDS for PostgreSQL instance resides |
| Instance ID | ID of the destination RDS for PostgreSQL instance |
| Database Name | Name of the database in the destination instance that will store the migrated objects |
| Database Account | Account with the required permissions (see Required permissions) |
| Database Password | Password for the database account |
| Connection Method | Configure based on your requirements. Select SSL-encrypted to use an SSL-encrypted connection, then upload the required certificates. For details on configuring SSL for RDS for PostgreSQL, see SSL encryption. |
Step 4: Test connectivity
Click Test Connectivity and Proceed.
Make sure DTS server IP addresses are added to the security settings (whitelists) of the source and destination databases. See Add DTS server IP addresses to a whitelist.
Step 5: Configure migration objects
On the Configure Objects page, set the following options:
| Configuration | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. Add Incremental Data Migration to keep the destination in sync with the source and minimize downtime. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): checks for identically named tables before migration starts. If duplicates are found, the precheck fails and the task does not start. Use object name mapping to rename conflicting tables in the destination. Ignore Errors and Proceed: skips the precheck. During full migration, DTS skips conflicting records in the destination. During incremental migration, DTS overwrites them. Use with caution — this may cause data inconsistency. |
| Source Objects | Select one or more objects (databases, tables, or columns) and move them to the Selected Objects section. |
| Selected Objects | Right-click an object to rename it or set a WHERE clause to filter rows. Click Batch Edit in the upper-right corner to rename multiple objects at once. See Map object names and Set filter conditions. |
If you do not select Schema Migration, create the tables in the destination database manually and enable object name mapping in Selected Objects before starting. If you use object name mapping, dependent objects may fail to migrate.
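If you skip Schema Migration, each destination table must exist before the task starts. A minimal example, with `orders` and its columns as placeholder names:

```sql
-- Created manually in the destination PostgreSQL database when Schema
-- Migration is not selected. A primary key helps avoid duplicate rows.
CREATE TABLE orders (
    id         bigint PRIMARY KEY,
    amount     numeric(10, 2),
    created_at timestamp
);
```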
Step 6: Configure advanced settings
Click Next: Advanced Settings and configure the following options:
| Configuration | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS uses the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | How long DTS retries failed connections after the task starts. Range: 10–1,440 minutes. Default: 720 minutes. Set a value greater than 30 minutes. If DTS reconnects within this period, the task resumes; otherwise, the task fails. When DTS retries, you are charged for the DTS instance. |
| Retry Time for Other Issues | How long DTS retries failed DDL or DML operations. Range: 1–1,440 minutes. Default: 10 minutes. Set a value greater than 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits the read/write rate during full migration to reduce database load. Configure QPS (queries per second) to the source database, RPS (rows per second) of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits the migration rate during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | (Optional) Assign an environment tag to the instance. |
| Configure ETL | Enable the extract, transform, and load (ETL) feature to process data during migration. Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Set up alerts for task failures or latency exceeding a threshold. Select Yes to configure the alert threshold and notification settings. See Configure monitoring and alerting. |
Step 7: Run the precheck
Click Next: Save Task Settings and Precheck.
DTS runs a precheck before the migration task starts. The task can only start after the precheck passes.
If the precheck fails:
Click View Details next to the failed item, resolve the issue, then click Precheck Again.
If an alert item can be ignored, click Confirm Alert Details > Ignore > OK, then click Precheck Again. Ignoring alerts may cause data inconsistency.
To view the API parameters for this task configuration, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
Step 8: Purchase and start the instance
Wait for Success Rate to reach 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the following:
| Parameter | Description |
|---|---|
| Resource Group | Resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Instance class that determines migration speed. See Instance classes of data migration instances. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.
Click Buy and Start, then click OK in the confirmation dialog.
Verify the migration
After the task starts, go to the Data Migration page to monitor progress.
Full or schema migration only: The task stops automatically when complete. The Status column shows Completed.
Migration with incremental data migration: The task runs continuously and does not stop automatically. The Status column shows Running. Stop or release the task when you are ready to switch workloads to the destination instance.
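A simple spot check before cutover is to compare row counts for a migrated table on both sides; `orders` is a placeholder table name, and the counts match only once incremental migration has caught up and writes to the source have stopped:

```sql
-- On the source MariaDB database:
SELECT COUNT(*) FROM orders;

-- On the destination PostgreSQL database:
SELECT COUNT(*) FROM orders;
```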