
Data Transmission Service:Migrate data from RDS for MySQL to AnalyticDB for PostgreSQL

Last Updated: Mar 30, 2026

Use Data Transmission Service (DTS) to migrate data from ApsaraDB RDS for MySQL to AnalyticDB for PostgreSQL. DTS supports schema migration, full data migration, and incremental data migration, so you can keep the source database running while the migration is in progress.

Supported source databases

DTS supports migrating data from the following MySQL sources to an AnalyticDB for PostgreSQL instance:

  • ApsaraDB RDS for MySQL instance

  • Self-managed MySQL databases:

    • Database with a public IP address

    • Database hosted on an Elastic Compute Service (ECS) instance

    • Database connected over Database Gateway

    • Database connected over Cloud Enterprise Network (CEN)

    • Database connected over Express Connect, VPN Gateway, or Smart Access Gateway

This topic uses ApsaraDB RDS for MySQL as the example source. The same procedure applies to all supported MySQL source types listed above.

Migration types

DTS supports the following migration types for this scenario.

| Migration type | What it does |
| --- | --- |
| Schema migration | Migrates table schemas from the source to the destination. Because the source and destination are heterogeneous databases, schemas may differ after migration. Review data type mappings between heterogeneous databases before you start. |
| Full data migration | Migrates all existing data from the source to the destination. |
| Incremental data migration | After full migration completes, continuously replicates changes from the source so your applications can keep running during migration. |

Choose your migration approach:

  • One-time cutover with downtime: Select Schema migration and Full data migration.

  • Zero-downtime migration (recommended): Select Schema migration, Full data migration, and Incremental data migration.

Supported SQL operations for incremental migration

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE, ALTER TABLE, TRUNCATE TABLE, DROP TABLE |

When writing to AnalyticDB for PostgreSQL, DTS automatically converts UPDATE statements to REPLACE INTO statements. UPDATE operations on primary keys are converted to DELETE and INSERT statements.
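As an illustration of the primary-key conversion, consider a hypothetical orders table (the table, columns, and values below are examples, not part of the product behavior):

```sql
-- Statement executed on the source: moves a row to a new primary key value.
UPDATE orders SET order_id = 42 WHERE order_id = 7;

-- What DTS effectively applies to the destination instead:
DELETE FROM orders WHERE order_id = 7;
INSERT INTO orders (order_id, customer_id, amount)
VALUES (42, 1001, 99.50);
```

Non-key UPDATE statements do not go through this split; they are written as REPLACE INTO statements as described above.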
Warning

If you change a field's data type in the source table during migration, DTS returns an error and stops the task. To recover:

  1. Identify the table with the changed data type (for example, the customer table).

  2. Create a new table with the updated schema in the destination (for example, customer_new).

  3. Copy data from customer into customer_new using INSERT INTO SELECT.

  4. Rename or delete the original customer table, then rename customer_new to customer.

  5. Restart the migration task in the DTS console.
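The recovery steps can be sketched in SQL on the destination. The customer table and its column list here are illustrative only:

```sql
-- 2. Create a new table with the updated schema (hypothetical columns).
CREATE TABLE customer_new (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255),
    note TEXT          -- the column whose data type changed on the source
);

-- 3. Copy existing data into the new table.
INSERT INTO customer_new
SELECT id, name, note FROM customer;

-- 4. Swap the tables, keeping the old one until you have verified the copy.
ALTER TABLE customer RENAME TO customer_old;
ALTER TABLE customer_new RENAME TO customer;
DROP TABLE customer_old;
```

After the swap, restart the migration task in the DTS console as described in step 5.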

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged if Access Method is set to Public IP Address. For details, see Billing overview. |
| Incremental data migration | Charged. For details, see Billing overview. | Charged if Access Method is set to Public IP Address. For details, see Billing overview. |

Limitations

Review these limitations before starting a migration task.

Source database requirements

Binary logging (required for incremental migration)

Configure the following binary log parameters on the source database before running an incremental migration task.

| Parameter | Required value | Notes |
| --- | --- | --- |
| Binary logging | Enabled | See Modify instance parameters to enable binary logging for RDS MySQL. |
| binlog_row_image | full | Required for all MySQL sources. The precheck fails if this parameter is not set to full. |
| binlog_format | row | Required for self-managed MySQL only. |
| log_slave_updates | ON | Required only for self-managed MySQL in a dual-primary cluster. |
| Binary log retention | More than 24 hours (incremental-only tasks); at least 7 days (full + incremental tasks) | After full migration completes, you can reduce the retention to more than 24 hours. Insufficient retention may cause the task to fail or result in data loss. |
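You can verify these settings on the source before configuring the task. These are standard MySQL statements; the expected values follow the table above:

```sql
-- Check the binary log configuration on the source.
SHOW VARIABLES LIKE 'log_bin';            -- must report ON
SHOW VARIABLES LIKE 'binlog_format';      -- must report ROW (self-managed MySQL)
SHOW VARIABLES LIKE 'binlog_row_image';   -- must report FULL
SHOW VARIABLES LIKE 'log_slave_updates';  -- dual-primary self-managed clusters only

-- On self-managed MySQL, adjust the dynamic variables if needed
-- (affects new sessions; persist them in my.cnf as well).
SET GLOBAL binlog_format    = 'ROW';
SET GLOBAL binlog_row_image = 'FULL';
```

On ApsaraDB RDS for MySQL, change these parameters in the console instead of with SET GLOBAL; see Modify instance parameters.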

For self-managed MySQL setup, see Create an account and configure binary logging.

For managing binary log files on RDS MySQL, see Manage binary log files.

Table constraints

  • Tables must have PRIMARY KEY or UNIQUE constraints, with all fields unique. Otherwise, duplicate records may appear in the destination.

  • When migrating specific tables (not the entire database), a single task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database instead.
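A quick way to find tables that violate the first constraint is a standard information_schema query; replace mydb with your database name:

```sql
-- List tables in `mydb` that have neither a PRIMARY KEY nor a UNIQUE constraint.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```

Add primary keys to any tables this query returns before starting the migration.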

Restrictions on operations during migration

  • Do not execute DDL statements that modify primary keys or add comments (for example, ALTER TABLE table_name COMMENT='Table comment';).

  • Do not execute DDL statements that change database or table schemas during schema migration and full data migration.

  • If running full data migration only, do not write data to the source database. To avoid this constraint, run schema migration, full data migration, and incremental data migration together.

Other source database limitations

  • MySQL 8.0.23 or later — invisible columns: If the source tables contain invisible columns, those columns cannot be migrated and data loss occurs. To make a column visible, run: ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE;. Tables without primary keys automatically generate invisible primary keys — make those visible too. See Invisible Columns and Generated Invisible Primary Keys.

  • DATETIME value `0000-00-00 00:00:00`: DTS converts this to null in the destination. To avoid this, temporarily change the value to 0001-01-01 00:00:00 or leave the destination column blank before migrating.

  • Binary log change operations: Data generated by binary log change operations — such as data restored from a physical backup or cascade operations — is not migrated. If this data is missing from the destination, run a full migration again when your business allows it.

  • EncDB feature: For ApsaraDB RDS for MySQL instances with the EncDB feature enabled, full data migration is not supported.

  • Transparent Data Encryption (TDE): For ApsaraDB RDS for MySQL instances with TDE enabled, schema migration, full data migration, and incremental data migration are all supported.
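The zero-value DATETIME workaround above can be sketched as follows. The orders table and created_at column are hypothetical, and comparing against the zero value assumes the session sql_mode permits zero dates:

```sql
-- Count rows whose DATETIME holds the zero value that DTS converts to null.
SELECT COUNT(*) FROM orders
WHERE created_at = '0000-00-00 00:00:00';

-- Temporarily replace the zero value with a real minimum value before migrating.
UPDATE orders
   SET created_at = '0001-01-01 00:00:00'
 WHERE created_at = '0000-00-00 00:00:00';
```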

Destination database requirements

  • Only tables can be selected as migration objects. Views, triggers, and stored procedures are not migrated.

  • The following data types cannot be migrated: VARBIT, GEOMETRY, ARRAY, UUID, TSQUERY, TSVECTOR, TXID_SNAPSHOT, and POINT.

  • Prefix indexes cannot be migrated. If the source database contains prefix indexes, the task may fail.

  • If a source table has a primary key, the primary key column in the destination must match the source. If a source table has no primary key, the primary key column and distribution key in the destination must be the same.

  • A unique key (containing the primary key column) in the destination must include all columns of its distribution key.

  • The destination table cannot be an append-optimized (AO) table.

  • If column mapping is used or the source and destination schemas differ, columns missing in the destination will not be migrated and data loss occurs.
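A destination table that satisfies the primary key and distribution key rules above might look like this. The schema is hypothetical; see CREATE TABLE in the AnalyticDB for PostgreSQL documentation for the full syntax:

```sql
-- Heap (non-append-optimized) table whose distribution key
-- matches the primary key column.
CREATE TABLE customer (
    customer_id BIGINT NOT NULL,
    name        VARCHAR(255),
    created_at  TIMESTAMP,
    PRIMARY KEY (customer_id)
)
DISTRIBUTED BY (customer_id);
```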

Performance and operational considerations

  • Evaluate the impact of migration on source and destination database performance before starting. Run migration tasks during off-peak hours when possible.

  • During initial full data migration, concurrent INSERT operations cause fragmentation in destination tables. After full migration completes, the destination tablespace may be larger than the source.

  • If migrating specific tables (not the entire database), do not use tools such as pt-online-schema-change for online DDL operations on objects being migrated. Use Data Management (DMS) instead. See Perform lock-free DDL operations.

  • Do not write data from other sources to the destination during migration. Writing from multiple sources simultaneously may cause data loss.

Special cases for self-managed MySQL

  • A primary/secondary switchover on the source database while the migration task is running causes the task to fail.

  • Migration latency is calculated based on the latest migrated data timestamp and the current source timestamp. If no DML operations run on the source for an extended period, the latency reading may be inaccurate. To refresh the latency, execute a DML operation on the source. If the entire database is selected as the migration object, create a heartbeat table that receives data updates every second.

  • DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log file position.
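The heartbeat table mentioned above can be as simple as the following sketch. The table and column names are illustrative, and you schedule the UPDATE yourself, for example with a cron job or a MySQL event:

```sql
-- One-row heartbeat table in the database being migrated.
CREATE TABLE dts_heartbeat (
    id        INT PRIMARY KEY,
    beat_time DATETIME
);
INSERT INTO dts_heartbeat VALUES (1, NOW());

-- Run this DML periodically (for example, every second)
-- so the migration latency reading stays accurate.
UPDATE dts_heartbeat SET beat_time = NOW() WHERE id = 1;
```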

Special cases for ApsaraDB RDS for MySQL

  • In incremental data migration, ApsaraDB RDS for MySQL instances that do not record transaction logs — such as read-only RDS MySQL V5.6 instances — cannot be used as the source.

  • DTS periodically executes CREATE DATABASE IF NOT EXISTS `test` on the source database to advance the binary log file position.

  • During schema migration, DTS migrates foreign keys from the source database to the destination database. During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade update and delete operations on the source during migration may cause data inconsistency.

Required account permissions

The following table lists the minimum permissions required for each migration type.

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| ApsaraDB RDS for MySQL | SELECT | SELECT | SELECT, REPLICATION SLAVE, REPLICATION CLIENT |
| AnalyticDB for PostgreSQL | Read and write | Read and write | Read and write |

REPLICATION SLAVE and REPLICATION CLIENT are required for incremental data migration only. Full-data-only migration tasks do not require these privileges. DTS grants these permissions to the database account automatically.
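If you prefer to create a dedicated migration account on the source yourself, the grants look like this. The dts_user name and password are placeholders, and as noted above, DTS can grant the replication privileges automatically:

```sql
-- Source (MySQL): read access plus the replication privileges
-- needed for incremental migration. Replication privileges are global (*.*).
CREATE USER 'dts_user'@'%' IDENTIFIED BY 'your_password';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dts_user'@'%';
FLUSH PRIVILEGES;
```

On the destination AnalyticDB for PostgreSQL instance, use the initial account or an account with read and write permissions on the target database.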

To create and authorize database accounts, see the account management documentation for the source and destination databases.

Create a migration task

Before you begin, make sure that:

  • An AnalyticDB for PostgreSQL instance is created. For details, see Create an instance.

  • The available storage space of the destination instance is larger than the total data size in the source instance.

Steps overview:

  1. Go to the Data Migration page in the DTS or DMS console.

  2. Create a task and configure the source and destination databases.

  3. Select migration types and objects.

  4. Configure advanced settings.

  5. Run a precheck and purchase a migration instance.

Step 1: Go to the Data Migration page

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where your migration instance will reside.

DMS console

Note

The actual steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console for details.

  1. Log on to the DMS console.

  2. In the top navigation bar, go to Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list next to Data Migration Tasks, select the region.

Step 2: Configure source and destination databases

Click Create Task and configure the following parameters.

Task settings:

| Parameter | Description |
| --- | --- |
| Task Name | DTS generates a name automatically. Specify a descriptive name to identify the task. The name does not need to be unique. |

Source Database

| Parameter | Description |
| --- | --- |
| Database Type | Select MySQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region where the source ApsaraDB RDS for MySQL instance resides. |
| Cross-account | Select No for same-account migration. |
| RDS Instance ID | Select the source ApsaraDB RDS for MySQL instance. |
| Database Account | Enter the database account. See Required account permissions. |
| Database Password | Enter the account password. |
| Connection Method | Select Non-encrypted or SSL-encrypted based on your requirements. To use SSL encryption, enable SSL on the RDS instance first. See Use a cloud certificate to enable SSL encryption. |

Destination Database

| Parameter | Description |
| --- | --- |
| Database Type | Select AnalyticDB for PostgreSQL. |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region where the destination AnalyticDB for PostgreSQL instance resides. |
| Instance ID | Select the destination AnalyticDB for PostgreSQL instance. |
| Database Name | Enter the name of the destination database in the AnalyticDB for PostgreSQL instance. |
| Database Account | Enter the initial account of the instance, or an account with the RDS_SUPERUSER permission. See Manage users and permissions. |
| Database Password | Enter the account password. |

Click Test Connectivity and Proceed.

DTS server CIDR blocks must be added to the security settings of the source and destination databases. DTS adds these automatically for Alibaba Cloud instances. For self-managed databases, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box. See Add the CIDR blocks of DTS servers.

Step 3: Select migration types and objects

On the Configure Objects page, configure the following parameters.

| Parameter | Description |
| --- | --- |
| Migration Types | Select the migration types for your scenario. See Choose your migration approach for guidance. Note: If you skip Schema Migration, create the destination database and tables manually and enable object name mapping in Selected Objects before proceeding. If you skip Incremental Data Migration, do not write data to the source during migration. |
| Select DDL and DML to Sync at the Instance Level | Select the SQL operations to include in incremental migration. To customize operations per table, right-click an object in Selected Objects and select the operations. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors (default): the precheck fails if the destination contains tables with the same names as source tables. Use object name mapping to rename conflicting tables. Ignore Errors and Proceed: skips the precheck for name conflicts. During full migration, conflicting records in the destination are kept; during incremental migration, conflicting records are overwritten. Use with caution: schema differences between source and destination may cause partial migration or task failure. |
| Storage Engine Type | The storage engine for destination tables. Default: Beam. Available only when the destination AnalyticDB for PostgreSQL instance minor version is v7.0.6.6 or later and Schema Migration is selected. |
| Source Objects | Select objects from the list and click the arrow icon to add them to Selected Objects. You can select columns, tables, or schemas. Selecting tables or columns excludes views, triggers, and stored procedures. |
| Selected Objects | Right-click an object to rename it, customize SQL operations, or specify WHERE filter conditions. Click Batch Edit to rename multiple objects at once. See Map object names and Specify filter conditions. Note: Renaming an object using object name mapping may cause dependent objects to fail migration. |

Click Next: Advanced Settings.

Step 4: Configure advanced settings

| Parameter | Description |
| --- | --- |
| Dedicated Cluster for Task Scheduling | By default, the task runs on the shared cluster. For higher stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Retry Time for Failed Connections | The time range during which DTS retries failed connections. Valid range: 10–1,440 minutes. Default: 720 minutes. Set this to at least 30 minutes. If DTS reconnects within this window, the task resumes automatically; otherwise, the task fails. Note: If multiple tasks share the same source or destination database, the most recently set retry time takes precedence. DTS charges for the instance during retry periods. |
| Retry Time for Other Issues | The time range during which DTS retries failed DDL or DML operations. Valid range: 1–1,440 minutes. Default: 10 minutes. Set this to at least 10 minutes. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits read/write load on the source and destination during full migration by configuring QPS (queries per second) to the source, RPS (records per second) for full migration, and data migration speed (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits load during incremental migration by configuring RPS and data migration speed (MB/s). Available only when Incremental Data Migration is selected. |
| Enclose Object Names in Quotation Marks | If you select Yes, DTS encloses schema, table, and column names in quotation marks when: the source database is case-sensitive with mixed-case names; a table name starts with a non-letter character or contains unsupported special characters; or the names are reserved keywords in the destination. |
| Whether to delete SQL operations on heartbeat tables of forward and reverse tasks | Yes: DTS does not write heartbeat table operations to the source database. A latency value may be displayed. No: DTS writes heartbeat table operations to the source database. This may affect physical backups and cloning of the source database. |
| Environment Tag | Tags the DTS instance for identification. Optional. |
| Configure ETL | Enables the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements. See What is ETL? and Configure ETL in a data migration or synchronization task. |
| Monitoring and Alerting | Select Yes to receive notifications when the task fails or migration latency exceeds the threshold. Configure the alert threshold and notification settings. See Configure monitoring and alerting. |

Click Next Step: Data Verification to configure a data verification task. See Configure a data verification task.

(Optional) Configure primary key columns and distribution keys for the destination AnalyticDB for PostgreSQL tables. See CREATE TABLE.

This option is available only when Schema Migration is selected.

Step 5: Run the precheck and purchase a migration instance

  1. Click Next: Save Task Settings and Precheck.

    To view the API parameters for this task configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
  2. Wait for the precheck to complete.

    • If a check item fails, click View Details next to the failed item, fix the issue, then click Precheck Again.

    • If a check item generates an alert that you want to ignore, click Confirm Alert Details, then Ignore, then OK, then Precheck Again. Ignoring alerts may result in data inconsistency.

  3. After Success Rate reaches 100%, click Next: Purchase Instance.

  4. On the Purchase Instance page, configure the instance parameters.

    | Parameter | Description |
    | --- | --- |
    | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
    | Instance Class | Select an instance class based on the required migration speed. See Instance classes of data migration instances. |
  5. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. In the confirmation dialog, click OK.

Monitor the migration task

After the task starts, go to the Data Migration page to monitor its progress.

  • Full data migration only: The task stops automatically when migration completes. The status changes to Completed.

  • Full + incremental data migration: The incremental migration phase runs continuously and does not stop automatically. The status shows Running.

If a DTS task fails, DTS technical support will attempt to restore it within 8 hours. During restoration, the task may be restarted and task parameters may be modified. Database parameters are not modified.

What's next

After migration is complete and you have verified data consistency between the source and destination databases, update your application's connection settings to point to the AnalyticDB for PostgreSQL instance.