
Data Transmission Service: Migrate data from PolarDB for MySQL to AnalyticDB for PostgreSQL

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB for MySQL cluster to an AnalyticDB for PostgreSQL instance. DTS supports schema migration, full data migration, and incremental data migration, so you can cut over without downtime.

Prerequisites

Before you begin, make sure that you have:

  • Created a destination AnalyticDB for PostgreSQL instance. See Create an instance.

  • Confirmed that the destination instance has more available storage space than the source cluster currently uses. Full data migration uses concurrent INSERT operations, so destination tables may consume more space than the source due to fragmentation.
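
To estimate how much space the source cluster currently uses, you can query `information_schema.TABLES` on the PolarDB for MySQL cluster. This is a rough sketch; InnoDB statistics are approximate, so treat the result as an estimate rather than an exact figure.

```sql
-- Approximate data + index size per schema on the source cluster.
-- Values come from InnoDB statistics and are estimates.
SELECT
  table_schema,
  ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.TABLES
WHERE table_schema NOT IN
  ('information_schema', 'mysql', 'performance_schema', 'sys')
GROUP BY table_schema;
```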

Migration types

DTS supports three migration types for this source-destination pair. Use all three together for zero-downtime migration.

| Migration type | What DTS does |
| --- | --- |
| Schema migration | Copies table schema definitions from the source cluster to the destination instance |
| Full data migration | Copies all existing data at the time the task starts; runs concurrent INSERT operations in parallel for speed |
| Incremental data migration | Captures and applies ongoing data changes after full migration completes, keeping the destination in sync until you cut over |

Zero-downtime migration: Select all three types. Full migration runs first, then incremental migration continues until you are ready to switch your application to the destination.

Full migration only: Select schema migration and full data migration. Stop writes to the source cluster before the task starts to avoid data inconsistency.

Supported SQL operations for incremental migration

| Operation type | SQL operations |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE, ALTER TABLE, TRUNCATE TABLE, DROP TABLE |

When DTS writes UPDATE statements to AnalyticDB for PostgreSQL, it converts them to REPLACE INTO statements. For UPDATE statements that modify primary key values, DTS converts them to DELETE plus INSERT statements.
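
As a rough illustration of this conversion (the table and column names here are hypothetical), an UPDATE on a non-key column and an UPDATE on the primary key would be rewritten along these lines:

```sql
-- Source (PolarDB for MySQL): UPDATE on a non-key column.
UPDATE orders SET status = 'shipped' WHERE order_id = 42;
-- Applied to AnalyticDB for PostgreSQL as an upsert:
REPLACE INTO orders (order_id, status) VALUES (42, 'shipped');

-- Source: UPDATE that changes the primary key value.
UPDATE orders SET order_id = 43 WHERE order_id = 42;
-- Applied as a delete followed by an insert:
DELETE FROM orders WHERE order_id = 42;
INSERT INTO orders (order_id, status) VALUES (43, 'shipped');
```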

Database account permissions

Set up the following accounts before creating the migration task.

| Database | Required permissions | Reference |
| --- | --- | --- |
| PolarDB for MySQL cluster | Read permissions on the migration objects | Create and manage a database account |
| AnalyticDB for PostgreSQL | Read and write permissions on the destination database | Create and manage a user |

For AnalyticDB for PostgreSQL, you can use an account with RDS_SUPERUSER permissions as an alternative. See User permission management.

Incremental migration (additional requirements)

For incremental migration, the source PolarDB for MySQL cluster requires additional configuration:

| Requirement | Recommended value | Details |
| --- | --- | --- |
| Binary logging | Enabled | Enable binary logging and set loose_polar_log_bin to on. See Enable binary logging. Enabling binary logging incurs storage charges for binary log files. |
| Binary log retention period | 7 days (minimum: 3 days) | If DTS cannot retrieve the binary logs, the task fails and data inconsistency or loss may occur. See Modify the retention period. |
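
Before starting the task, you can sanity-check the binary logging state on the source cluster. This is a quick verification sketch; the parameters themselves are changed in the PolarDB console, and `loose_polar_log_bin` surfaces at runtime through the standard `log_bin` variable.

```sql
-- Confirm that binary logging is active on the source cluster.
SHOW VARIABLES LIKE 'log_bin';   -- expect: ON
-- List the binary log files that are currently retained.
SHOW BINARY LOGS;
```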

Limitations

Source database

| Limitation | Details and workaround |
| --- | --- |
| Bandwidth | The server hosting the source cluster must have sufficient outbound bandwidth; otherwise, migration speed is affected. |
| Primary keys or UNIQUE constraints | Tables to be migrated must have primary keys or UNIQUE constraints, and column values must be unique. Without these, duplicate data may appear in the destination. |
| Table count (table-level migration) | A single task can migrate a maximum of 1,000 tables when you use column mapping or other object edits at the table level. For more than 1,000 tables, split them across multiple tasks or configure a database-level migration task. |
| DDL operations during migration | Do not perform DDL operations on the source cluster during schema migration or full migration. DDL changes cause the task to fail. DTS queries the source database during full migration, which can create metadata locks and block DDL operations. |
| Writes during full-only migration | If you run full migration only (no incremental), stop writes to the source cluster before migration starts. New writes cause data inconsistency between source and destination. |
| Date value 0000-00-00 00:00:00 | Source data with this date value may cause the task to fail. DTS converts it to null when writing to the destination. Workaround: change the source data to 0001-01-01 00:00:00, or allow null values in the corresponding destination column. |
| Read-only nodes | DTS does not migrate data from read-only nodes of the source PolarDB for MySQL cluster. |
| OSS external tables | DTS does not migrate Object Storage Service (OSS) external tables from the source cluster. |
| Online DDL with temporary tables | If the source database uses online DDL operations that generate temporary tables (for example, multi-table merges), data loss may occur in the destination or the task may fail. DTS does not support pt-online-schema-change for online DDL. Use DMS or gh-ost instead. |
| Primary/secondary switchover | DTS does not support primary/secondary switchover of the source database instance during full migration. If a switchover occurs, reconfigure the migration task. |
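
To find source tables that lack a primary key or UNIQUE constraint before you configure the task, a query along these lines can help. The schema name `mydb` is a placeholder for your own database.

```sql
-- Tables in the given schema that have neither a PRIMARY KEY
-- nor a UNIQUE constraint; replace 'mydb' with your schema name.
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON  c.table_schema = t.table_schema
  AND c.table_name   = t.table_name
  AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL;
```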

Destination database

| Limitation | Details and workaround |
| --- | --- |
| Migration granularity | Only table-level migration is supported. Destination tables cannot be append-optimized (AO) tables. |
| Unsupported objects | Migration of partitions, views, stored procedures, functions, triggers, foreign keys, and indexes is not supported. |
| Column mapping and structural differences | If you use column mapping for a partial table migration, or the source and destination table structures differ, data in columns that exist only in the source is lost. |
| Primary key and distribution key alignment | For tables with a primary key: the primary key column in the destination must match the source. For tables without a primary key: the primary key column and the distribution key in the destination must be the same. A unique key (including the primary key column) in the destination table must contain all columns of its distribution key. |
| FLOAT or DOUBLE precision | DTS reads values using ROUND(COLUMN, PRECISION). If precision is not defined, DTS uses 38 for FLOAT and 308 for DOUBLE. Confirm that this meets your requirements. |
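
If you create destination tables manually (for example, when you skip schema migration), the distribution key must satisfy the alignment rules above. A hypothetical AnalyticDB for PostgreSQL table definition that does so might look like this:

```sql
-- Hypothetical destination table: the distribution key (order_id)
-- is contained in the primary key, satisfying the alignment rule.
CREATE TABLE orders (
    order_id   bigint NOT NULL,
    status     text,
    updated_at timestamp,
    PRIMARY KEY (order_id)
) DISTRIBUTED BY (order_id);
```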

Task and instance behavior

| Limitation | Details and workaround |
| --- | --- |
| Automatic task resume | DTS tries to resume tasks that failed within the last 7 days. Before you switch traffic to the destination, stop or release the task, or run REVOKE to remove write permissions from the DTS database account to prevent automatic resume from overwriting destination data. |
| Instance recovery | If a DTS instance fails, the DTS helpdesk attempts recovery within 8 hours. During recovery, DTS may restart the instance or adjust instance parameters. Database parameters are not modified. See Modify instance parameters for parameters that may change. |
| Full migration load | During full migration, DTS reads from the source and writes to the destination concurrently, increasing load on both servers. Run full migration during off-peak hours. |
| Heartbeat table operations | DTS periodically runs `CREATE DATABASE IF NOT EXISTS `test`` on the source database to advance the binary log offset. |
| Foreign key constraints | DTS migrates foreign keys from the source to the destination during schema migration. During full and incremental migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If cascade updates or deletes occur on the source during migration, data inconsistency may result. |

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged when Access method is set to Public IP address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when Access method is set to Public IP address. See Billing overview. |

Create a migration task

The task configuration follows seven steps: open the Data Migration page, configure the source and destination, test connectivity, select migration objects, configure advanced settings, set primary and distribution key columns, and run the precheck.

Step 1: Open the Data Migration page

Use either the DTS console or the DMS console.

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where your migration instance resides.

DMS console

Note

Steps may vary based on the DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, choose Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list to the right of Data Migration Tasks, select the region.

Step 2: Configure source and destination databases

Click Create Task, then configure the following parameters.

| Parameter | Description |
| --- | --- |
| Task Name | DTS auto-generates a name. Specify a descriptive name to make the task easy to identify. Names do not need to be unique. |

Source database settings:

| Parameter | Value |
| --- | --- |
| Select existing connection | If the source instance is registered with DTS, select it from the list. DTS populates the remaining fields. Otherwise, configure the fields below. In the DMS console, select from Select a DMS database instance. |
| Database Type | PolarDB for MySQL |
| Access method | Cloud instance |
| Instance Region | Region of the source PolarDB for MySQL cluster |
| Cross-account | No (for same-account migration) |
| PolarDB instance ID | ID of the source PolarDB for MySQL cluster |
| Database Account | Account with the required permissions. See Database account permissions. |
| Database Password | Password for the database account |
| Encryption | Optional. Configure SSL encryption based on your security requirements. See Configure SSL encryption. |

Destination database settings:

| Parameter | Value |
| --- | --- |
| Select existing connection | If the destination instance is registered with DTS, select it from the list. DTS populates the remaining fields. Otherwise, configure the fields below. |
| Database Type | AnalyticDB PostgreSQL |
| Access method | Cloud instance |
| Instance Region | Region of the destination AnalyticDB for PostgreSQL instance |
| Instance ID | ID of the destination AnalyticDB for PostgreSQL instance |
| Database name | Name of the destination database that receives the migrated objects |
| Database Account | Account with read and write permissions on the destination database |
| Database Password | Password for the database account |

Step 3: Test connectivity and proceed

Click Test connectivity and proceed at the bottom of the page.

Make sure DTS server CIDR blocks are added to the security settings of the source and destination databases. See Add DTS server IP addresses to a whitelist.

Step 4: Configure migration objects

On the Configure objects page, set the following:

| Configuration | Description |
| --- | --- |
| Migration Types | Select Schema migration and Full data migration for one-time migration. Add Incremental data migration for zero-downtime migration. If you skip schema migration, create the tables in the destination database manually and enable object name mapping in Selected objects. |
| Processing Mode for Existing Destination Tables | Precheck and Report Error (default): fails the precheck if same-name tables exist in the destination. Clear Destination Table Data: clears existing data from destination tables before migration. Ignore Error and Continue: skips the check. Use with caution: if the table structures are consistent and the destination contains a record with the same primary key as a source record, DTS retains the destination record during full migration but overwrites it with the source record during incremental migration. If the table structures are inconsistent, data initialization may fail, only partial data may be migrated, or migration may fail. |
| Select DDL and DML for Instance-Level Synchronization | SQL operations to sync during incremental migration, configured at the instance level. To configure at the database or table level, right-click an object in Selected objects. |
| Case Policy for Destination Object Names | Controls case policy for database, table, and column names. Default is DTS Default Policy. See Capitalization of destination object names. |
| Storage engine type | Storage engine for destination tables. Default: Beam. Available only if the destination AnalyticDB for PostgreSQL instance is v7.0.6.6 or later and schema migration is selected. |
| Source Objects | Select tables, columns, or schemas, then click the rightwards arrow icon to add them to Selected objects. If you select tables or columns, DTS does not migrate views, triggers, or stored procedures. |
| Selected objects | Right-click an object to rename it or set a WHERE clause filter. Click Batch edit to rename multiple objects at once. See Map object names and Set filter conditions. Renaming an object may cause dependent objects to fail migration. |
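
For example, to migrate only recent rows from a hypothetical `orders` table, you could right-click the table under Selected objects and set a filter condition such as the following (the column name is a placeholder; see Set filter conditions for the exact dialog syntax):

```sql
-- Hypothetical filter: migrate only orders created in 2024 or later.
-- Enter the condition in the filter dialog for the selected table.
gmt_create >= '2024-01-01 00:00:00'
```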

Step 5: Configure advanced settings

Click Next: Advanced settings and configure the following:

| Configuration | Description |
| --- | --- |
| Dedicated cluster for task scheduling | DTS uses the shared cluster by default. For higher task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster. |
| Copy temporary tables from online DDL tools | Controls whether DTS migrates temporary tables created by online DDL operations. Yes: migrates temporary table data (may increase latency). No, adapt to DMS online DDL: skips temporary tables and migrates only the original DDL from DMS (destination tables may be locked). No, adapt to gh-ost: skips temporary tables and migrates only the original DDL from gh-ost; use default or custom regular expressions to filter ghost tables. |
| Retry time for failed connections | Time range (minutes) for DTS to retry failed connections. Range: 10–1,440. Default: 720. Set to more than 30 to give DTS enough time to recover from transient failures. If reconnection succeeds within this window, the task resumes; otherwise it fails. Note: if you specify different retry time ranges for multiple tasks that share the same source or destination database, the value specified later takes precedence. You are charged for the DTS instance while it retries, so specify the retry time range based on your business requirements and release the DTS instance promptly after the source database and destination instance are released. |
| Retry time for other issues | Time range (minutes) for DTS to retry failed DDL or DML operations. Range: 1–1,440. Default: 10. Set to more than 10. Must be less than Retry time for failed connections. |
| Enable throttling for full data migration | Limits DTS read/write throughput during full migration to reduce load on source and destination servers. Configure Source Database, RPS of full data migration, and Data migration speed (MB/s). Available only when full data migration is selected. |
| Enable throttling for incremental data migration | Limits throughput during incremental migration. Configure RPS of incremental data migration and Data migration speed for incremental migration (MB/s). Available only when incremental data migration is selected. |
| Delete SQL operations on heartbeat tables | Yes: DTS does not write heartbeat table operations to the source (a latency figure may display for the instance). No: DTS writes heartbeat operations to the source (may affect physical backup and cloning). |
| Environment tag | Optional. Tag the instance to identify its environment (for example, production, staging). |
| Configure ETL | Yes: configure extract, transform, and load (ETL) rules using data processing statements. See Configure ETL in a data migration or data synchronization task. No: skip ETL configuration. |
| Monitoring and alerting | Yes: configure alert thresholds and notification contacts. DTS sends alerts if the task fails or migration latency exceeds the threshold. See Configure monitoring and alerting. No: disable alerting. |

Step 6: Configure primary key and distribution key columns

Click Next: Configure database and table fields to set primary key and distribution key columns for destination tables.

This page appears only when schema migration is selected. See Table management and Table distribution for guidance on choosing these columns.

Step 7: Run the precheck and purchase the instance

  1. Click Next: Save task settings and precheck.

    To review the API parameters for this task configuration before saving, hover over Next: Save task settings and precheck and click Preview OpenAPI parameters.
  2. Wait for the precheck to complete. If any item fails, click View details to see the cause, fix the issue, and click Precheck again. If an item shows an alert rather than a failure, click Confirm alert details, then Ignore if the risk is acceptable for your use case. Data inconsistency may occur if you ignore alert items.

  3. After Success rate reaches 100%, click Next: Purchase instance.

  4. On the Purchase instance page, configure the instance class:

    | Parameter | Description |
    | --- | --- |
    | Resource group | Resource group for the migration instance. Default: default resource group. See What is Resource Management? |
    | Instance class | Determines migration speed. See Instance classes of data migration instances. |

  5. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and start. Click OK in the confirmation message.

Monitor task progress

After the task starts, view progress on the Data Migration page.

  • Tasks that do not include incremental migration stop automatically when full migration completes. The status shows Completed.

  • Tasks that include incremental migration run continuously and never complete automatically. The status shows Running. Stop the task manually when you are ready to cut over.

What's next

After the migration task reaches a stable running state:

  1. Verify data consistency between the source and destination databases.

  2. When ready to cut over, stop writes to the source, wait for incremental migration to sync the remaining changes, then switch your application to the destination.

  3. Stop or release the DTS migration task to prevent it from automatically resuming and overwriting destination data. Alternatively, run REVOKE to remove write permissions from the DTS database account.
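
As a hedged sketch of the REVOKE alternative (the account name `dts_user` and schema `public` are placeholders for your own destination setup), removing write privileges on the AnalyticDB for PostgreSQL side might look like this:

```sql
-- Remove write privileges from the hypothetical DTS account
-- 'dts_user' on all tables in schema 'public' of the
-- destination database, so a resumed task cannot write.
REVOKE INSERT, UPDATE, DELETE, TRUNCATE
ON ALL TABLES IN SCHEMA public
FROM dts_user;
```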