
Data Transmission Service: Migrate data from a PolarDB for PostgreSQL (Compatible with Oracle) cluster to an AnalyticDB for MySQL 3.0 cluster

Last Updated: Jan 17, 2026

You can use Data Transmission Service (DTS) to migrate data from a PolarDB for PostgreSQL (Compatible with Oracle) cluster to an AnalyticDB for MySQL 3.0 cluster.

Prerequisites

  • A destination AnalyticDB for MySQL V3.0 cluster is created. For more information, see Create a cluster.

  • In the source PolarDB for PostgreSQL (Compatible with Oracle) cluster, the wal_level parameter is set to logical. This adds the information required for logical replication to the write-ahead log (WAL). For more information, see Set cluster parameters.
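    You can verify the setting with a quick query against the source cluster, for example in psql or any other SQL client (a minimal sketch):

    ```sql
    -- Check the current WAL level; the expected value is 'logical'.
    SHOW wal_level;
    ```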

Precautions

Note
  • During schema migration, DTS does not migrate foreign keys from the source database to the destination database.

  • During full data migration and incremental data migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.

Source database limits

  • Bandwidth requirement: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • The tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the key or constraint must have unique values. Otherwise, duplicate data may appear in the destination database. A sample query for finding tables that lack such a constraint is shown below.
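    A catalog query sketch, assuming the objects to be migrated are in a hypothetical schema named myschema:

    ```sql
    -- List regular tables in 'myschema' that have neither a primary key nor a UNIQUE constraint.
    SELECT n.nspname AS schema, c.relname AS table_name
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r'
      AND n.nspname = 'myschema'
      AND NOT EXISTS (
        SELECT 1 FROM pg_constraint con
        WHERE con.conrelid = c.oid AND con.contype IN ('p', 'u')
      );
    ```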

  • If you migrate data at the table level and need to edit objects, such as mapping table and column names, a single data migration task supports a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks, or configure a task to migrate the entire database.

  • For incremental migration, write-ahead logging (WAL) must be enabled:

    • The wal_level parameter of the source database must be set to logical.

    • For an incremental migration task, DTS requires that the WAL logs of the source database are retained for more than 24 hours. For a task that includes both full migration and incremental migration, DTS requires that the WAL logs of the source database are retained for at least 7 days. You can set the log retention period back to more than 24 hours after the full migration is complete. Otherwise, the DTS task may fail because DTS cannot obtain the WAL logs, and in extreme cases, data inconsistency or data loss may occur. Issues caused by a WAL log retention period shorter than the required period are not covered by the DTS Service-Level Agreement (SLA).

  • Source database operation limits:

    • During schema migration and full data migration, do not perform DDL operations to change the schema of the database or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination databases. To ensure real-time data consistency, select schema migration, full data migration, and incremental data migration.

    • To ensure the migration task runs properly and to prevent logical subscription interruptions caused by a primary/secondary switchover, your PolarDB for PostgreSQL (Compatible with Oracle) instance must support and have Logical Replication Slot Failover enabled. For more information, see Enable Logical Replication Slot Failover.

      Note

      If the source PolarDB for PostgreSQL (Compatible with Oracle) cluster does not support the logical replication slot failover feature (for example, when the cluster's Database Engine is Oracle Syntax Compatible 2.0), the migration instance may fail and cannot be recovered when the source database triggers an HA failover.

    • Due to the limits of logical replication in the source database, if a single piece of incremental data to be migrated exceeds 256 MB during the migration, the DTS instance may fail and cannot be recovered. You must reconfigure the DTS instance.

  • If the source database has long-running transactions and the instance includes an incremental migration task, the WAL logs generated before the long-running transactions are committed cannot be purged. The logs accumulate and may cause insufficient disk space in the source database. A sample query for spotting long-running transactions is shown below.
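    The following sketch uses the standard pg_stat_activity view to spot long-running transactions on the source database:

    ```sql
    -- Show the ten oldest open transactions, oldest first.
    SELECT pid, usename, state, now() - xact_start AS xact_age, left(query, 80) AS query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start
    LIMIT 10;
    ```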

Other limits

  • A single data migration task can migrate only one database. To migrate multiple databases, configure a data migration task for each database.

  • The destination database must have a custom primary key, or you must configure the Primary Key Column in the Configurations for Databases, Tables, and Columns step. Otherwise, the data migration may fail.

  • DTS does not support the migration of TimescaleDB extension tables or tables with cross-schema inheritance.

  • If the DTS instance performs an incremental data migration task, you must run the ALTER TABLE schema.table REPLICA IDENTITY FULL; command on the tables to be migrated in the source database before you write data to them. This ensures data consistency in the following two scenarios. While this command is being executed, we recommend that you avoid operations that lock the tables. Otherwise, the tables may remain locked. If you skip the relevant check in the precheck, DTS automatically runs this command during instance initialization. A sample is shown after the note below.

    • When the instance runs for the first time.

    • When the migration object granularity is Schema, and a new table is created in the schema to be migrated or a table to be migrated is rebuilt using the RENAME command.

    Note
    • In the command, replace schema and table with the schema name and table name of the data to be migrated.

    • We recommend that you perform this operation during off-peak hours.
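    A minimal sketch with hypothetical names (a table named orders in a schema named myschema):

    ```sql
    -- Make the WAL record complete old row images for this table.
    ALTER TABLE myschema.orders REPLICA IDENTITY FULL;

    -- Optional check: 'f' in relreplident means FULL.
    SELECT relname, relreplident
    FROM pg_class
    WHERE oid = 'myschema.orders'::regclass;
    ```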

  • DTS creates the following temporary tables in the source database to obtain DDL statements for incremental data, the schemas of incremental tables, and heartbeat information. Do not delete these temporary tables during the migration. Otherwise, the DTS task becomes abnormal. The temporary tables are automatically deleted after the DTS instance is released.

    public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.
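    To confirm which of these tables currently exist, a catalog query such as the following (an illustrative sketch) lists them:

    ```sql
    -- List DTS temporary tables in the public schema.
    SELECT schemaname, tablename
    FROM pg_tables
    WHERE schemaname = 'public'
      AND (tablename LIKE 'dts\_%' OR tablename = 'aliyun_dts_instance');
    ```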

  • To ensure the accuracy of the displayed migration latency for incremental data, DTS adds a heartbeat table named dts_postgres_heartbeat to the source database.

  • During incremental data migration, DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data. Using this replication slot, DTS can obtain incremental logs from the source database within the last 15 minutes. When the data migration fails or the migration instance is released, DTS attempts to automatically clear this replication slot.

    Note
    • If you change the password of the source database account used by the task or delete the DTS IP address from the whitelist of the source database during data migration, the replication slot cannot be automatically cleared. In this case, you must manually clear the replication slot in the source database to prevent it from accumulating and occupying disk space, which can make the source database unavailable.

    • If a failover occurs in the source database, you must log on to the secondary database to manually clear the slot.
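    A sketch of how to inspect and, if necessary, manually clear a leftover slot (the slot name in the DROP call is hypothetical):

    ```sql
    -- List DTS replication slots and how much WAL each one retains.
    SELECT slot_name, active,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
    FROM pg_replication_slots
    WHERE slot_name LIKE 'dts\_sync\_%';

    -- Drop a leftover slot only after you confirm the DTS instance no longer needs it.
    SELECT pg_drop_replication_slot('dts_sync_example_slot');
    ```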

  • Due to the limits of AnalyticDB for MySQL 3.0, if the disk space usage of a node in the AnalyticDB for MySQL 3.0 cluster exceeds 80%, the DTS task becomes abnormal and latency occurs. Estimate the required space based on the objects to be migrated and make sure that the destination cluster has sufficient storage space.

  • If the destination AnalyticDB for MySQL 3.0 cluster is being backed up while the DTS task is running, the task fails.

  • Before you migrate data, evaluate the performance of the source and destination databases. We also recommend that you migrate data during off-peak hours. Otherwise, DTS consumes read and write resources on the source and destination databases during full data migration, which may increase the database load.

  • Confirm whether the migration precision that DTS uses for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns using ROUND(COLUMN,PRECISION). If you do not specify the precision, DTS migrates FLOAT columns with a precision of 38 and DOUBLE columns with a precision of 308.
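    An illustrative sketch of this read pattern against a hypothetical table t with a FLOAT column price (in PostgreSQL, ROUND with a scale argument expects a numeric value, hence the cast):

    ```sql
    -- Roughly how DTS reads a FLOAT column when no precision is specified (precision 38).
    SELECT ROUND(price::numeric, 38) AS price_as_read FROM t;
    ```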

  • DTS attempts to resume failed migration tasks within seven days. Before you switch your business to the destination instance, make sure to end or release the task, or execute a REVOKE statement to revoke the write permissions of the account that DTS uses to access the destination instance (a sample is shown below). This prevents the source data from overwriting the data in the destination instance after the task is automatically resumed.
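    A hedged example on the destination cluster (AnalyticDB for MySQL uses MySQL-compatible account management; the database name adb_demo and the account dts_account are hypothetical):

    ```sql
    -- Remove write access for the DTS account on the destination database.
    REVOKE INSERT, UPDATE, DELETE ON adb_demo.* FROM 'dts_account';
    ```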

  • If a DDL statement fails to be written to the destination database, the DTS task continues to run. You can view the failed DDL statement in the task logs. For more information, see Query task logs.

  • If the task fails, DTS technical support will attempt to recover it within 8 hours. During the recovery process, operations such as restarting the task or adjusting its parameters may be performed.

    Note

    When parameters are adjusted, only DTS task parameters are modified. Database parameters remain unchanged. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

  • When migrating partitioned tables, include both the child partitions and the parent table as migration objects. Otherwise, data inconsistency may occur for the partitioned table.

    Note

    The parent table of a partitioned table in PolarDB for PostgreSQL (Compatible with Oracle) does not directly store data. All data is stored in the child partitions. The migration task must include both the parent table and all of its child partitions. Otherwise, data from the child partitions may be missed, which leads to data inconsistency between the source and destination. A sample query for listing child partitions is shown below.
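    A catalog query sketch to list the child partitions of a parent table on the source database (the parent table name myschema.orders is hypothetical):

    ```sql
    -- Each row is one child partition that must also be included as a migration object.
    SELECT inhrelid::regclass AS child_partition
    FROM pg_inherits
    WHERE inhparent = 'myschema.orders'::regclass;
    ```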

Billing

  • Schema migration and full data migration: The instance configuration fee is free of charge.

  • Incremental data migration: The instance configuration fee is charged. For more information, see Billing overview.

  • Internet traffic fee: For both migration types, when the Access Method parameter of the destination database is set to Public IP Address, you are charged for Internet traffic. For more information, see Billing overview.

Migration types

  • Schema migration: DTS migrates the schema definitions of the migration objects to the destination database. Currently, DTS supports schema migration for tables.

  • Full data migration: DTS migrates all historical data of the migration objects from the source database to the destination database.

    Note

    Before schema migration and full data migration are complete, do not perform DDL operations on the migration objects. Otherwise, the migration may fail.

  • Incremental data migration: After the full data migration, DTS polls and captures write-ahead logs (WAL) from the source database and migrates the incremental data to the destination database. Incremental data migration lets you migrate data smoothly without stopping your applications.

SQL operations that can be incrementally migrated

  • DML: INSERT, UPDATE, and DELETE

    Note

    When data is written to the destination AnalyticDB for MySQL cluster, UPDATE statements are automatically converted to REPLACE INTO statements. If an UPDATE statement modifies the primary key, it is converted to DELETE and INSERT statements. A hypothetical illustration follows.
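    An illustration of the conversion, assuming a hypothetical table t whose primary key is id:

    ```sql
    -- Statement executed on the source database:
    UPDATE t SET val = 'new' WHERE id = 1;

    -- What DTS writes to AnalyticDB for MySQL when the primary key is unchanged:
    REPLACE INTO t (id, val) VALUES (1, 'new');

    -- If the UPDATE changes the primary key (for example, id 1 -> 2),
    -- DTS writes a DELETE followed by an INSERT instead:
    DELETE FROM t WHERE id = 1;
    INSERT INTO t (id, val) VALUES (2, 'new');
    ```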

Permissions required for database accounts

  • PolarDB for PostgreSQL (Compatible with Oracle) cluster: A privileged account is required. For more information about account creation and authorization, see Create a database account.

  • AnalyticDB for MySQL V3.0 cluster: The account must have read and write permissions on the destination database that contains the migration objects. For more information about account creation and authorization, see Create a database account.

Steps

  1. Go to the migration task list page of the destination region. You can use one of the following methods.

    From the DTS console

    1. Log on to the Data Transmission Service (DTS) console.

    2. In the navigation pane on the left, click Data Migration.

    3. In the upper-left corner of the page, select the region where the migration instance is located.

    From the DMS console

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

    1. Log on to the Data Management (DMS) console.

    2. In the top navigation bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

    3. To the right of Data Migration Tasks, select the region where the migration instance is located.

  2. Click Create Task to go to the task configuration page.

  3. Configure the source and destination databases.

    Warning

    After you select the source and destination instances, we recommend that you carefully read the limits displayed at the top of the page. Otherwise, the task may fail or data inconsistency may occur.

    Task Name

    DTS automatically generates a task name. We recommend that you specify a descriptive name for easy identification. The name does not need to be unique.

    Source Database

    • Database Type: Select PolarDB (Compatible with Oracle).

    • Connection Type: Select Cloud Instance.

    • Instance Region: Select the region where the source PolarDB for PostgreSQL (Compatible with Oracle) cluster resides.

    • Instance ID: Select the instance ID of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    • Database Name: Enter the name of the database to which the migration objects belong in the source PolarDB for PostgreSQL (Compatible with Oracle) cluster.

    • Database Account: Enter the database account of the source PolarDB for PostgreSQL (Compatible with Oracle) cluster. For information about the required permissions, see Permissions required for database accounts.

    • Database Password: Enter the password for the database account.

    Destination Database

    • Database Type: Select AnalyticDB MySQL 3.0.

    • Connection Type: Select Cloud Instance.

    • Instance Region: Select the region where the destination AnalyticDB for MySQL 3.0 cluster resides.

    • Instance ID: Select the ID of the destination AnalyticDB for MySQL 3.0 cluster.

    • Database Account: Enter the database account of the destination AnalyticDB for MySQL 3.0 cluster. For information about the required permissions, see Permissions required for database accounts.

    • Database Password: Enter the password for the database account.

  4. After you complete the configuration, click Test Connectivity and Proceed at the bottom of the page.

    Note
    • Ensure that the IP address segment of the DTS service is automatically or manually added to the security settings of the source and destination databases to allow access from DTS servers. For more information, see Add DTS server IP addresses to a whitelist.

    • If the source or destination database is a self-managed database (the Access Method is not Alibaba Cloud Instance), you must also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box that appears.

  5. Configure the task objects.

    1. On the Configure Objects page, configure the objects to be migrated.

      Migration Types

      Select migration types based on your requirements and the types supported by each engine.

      • If you only need to perform a full migration, select both Schema Migration and Full Data Migration.

      • To perform a migration with no downtime, select Schema Migration, Full Data Migration, and Incremental Data Migration.

      Note
      • If you do not select Schema Migration, ensure that a database and tables to receive the data exist in the destination database. Also, use the object name mapping feature in the Selected Objects box as needed.

      • If you do not select Incremental Data Migration, do not write new data to the source instance during data migration to ensure data consistency.

      Processing Mode of Conflicting Tables

      • Precheck and Report Errors: Checks whether tables with the same names exist in the destination database. If no tables with the same names exist, the precheck item is passed. If tables with the same names exist, an error is reported during the precheck phase, and the data migration task does not start.

        Note

        If a table in the destination database has the same name but cannot be easily deleted or renamed, you can change the name of the table in the destination database. For more information, see Object name mapping.

      • Ignore Errors and Proceed: Skips the check for tables with the same names.

        Warning

        Selecting Ignore Errors and Proceed may cause data inconsistency and business risks. For example:

        • If the table schemas are consistent and a record in the destination database has the same primary key value as a record in the source database:

          • During full migration, DTS keeps the record in the destination cluster. The record from the source database is not migrated to the destination database.

          • During incremental migration, DTS does not keep the record in the destination cluster. The record from the source database overwrites the record in the destination database.

        • If the table schemas are inconsistent, only some columns of data may be migrated, or the migration may fail. Proceed with caution.

      DDL and DML Operations to Be Synchronized

      Select the DDL or DML operations to be migrated at the instance level. For information about supported operations, see SQL operations that can be incrementally migrated.

      Note

      To select SQL operations for incremental migration at the table level, right-click the migration object in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate.

      Merge Tables

      • Yes: DTS adds the __dts_data_source column to each table to record the data source. For more information, see Enable multi-table merge.

      • No: This is the default option.

      Note

      The table merging feature is task-level. This means that you cannot perform table merging at the table level. To merge some tables but not others, you can create two data migration tasks.

      Warning

      Do not perform DDL operations to change the schema of the source database or tables. Otherwise, data inconsistency or task failure may occur.

      Capitalization of Object Names in Destination Instance

      You can configure the case sensitivity policy for the English names of migrated objects, such as databases, tables, and columns, in the destination instance. By default, DTS default policy is selected. You can also choose to keep it consistent with the default policy of the source or destination database. For more information, see Case sensitivity of object names in the destination database.

      Source Objects

      In the Source Objects box, click the objects that you want to migrate, and then click the rightwards arrow icon to move them to the Selected Objects box.

      Important
      • If you select Incremental Data Migration for Migration Types, you can select only tables as the objects to be migrated.

      • If you do not select Incremental Data Migration for Migration Types, you can select databases, tables, and columns.

      • If the migration object is an entire database, the default behavior is as follows:

        • If the table to be migrated in the source database has a primary key, such as a single-column primary key or a composite primary key, the primary key columns are used as the distribution keys.

        • If the table to be migrated in the source database does not have a primary key, an auto-increment primary key column is automatically generated in the destination table. This may cause data inconsistency between the source and destination databases.

      Selected Objects

      • To rename an object that you want to migrate to the destination instance, right-click the object in the Selected Objects section. For more information, see Individual table column mapping.

      • To rename multiple objects at a time, click Batch Edit in the upper-right corner of the Selected Objects section. For more information, see Map multiple object names at a time.

      Note
      • If you use the object name mapping feature, the migration of other objects that depend on the renamed object may fail.

      • To set a WHERE clause to filter data, right-click a table to be migrated in the Selected Objects section. In the dialog box that appears, set the filter condition. For more information, see Set filter conditions.

      • To select SQL operations for migration at the database or table level, right-click the object to be migrated in the Selected Objects section. In the dialog box that appears, select the SQL operations that you want to migrate.

    2. Click Next: Advanced Settings to configure advanced parameters.

      Dedicated Cluster for Task Scheduling

      By default, DTS schedules tasks on a shared cluster. You do not need to select one. If you want more stable tasks, you can purchase a dedicated cluster to run DTS migration tasks.

      Retry Time for Failed Connections

      After the migration task starts, if the connection to the source or destination database fails, DTS reports an error and immediately starts continuous retry attempts. The default retry duration is 720 minutes. You can also customize the retry time within a range of 10 to 1440 minutes. We recommend that you set it to more than 30 minutes. If DTS reconnects to the source and destination databases within the set time, the migration task automatically resumes. Otherwise, the task fails.

      Note
      • For multiple DTS instances that share the same source or destination, the network retry time is determined by the setting of the last created task.

      • Because you are charged for the task during the connection retry period, we recommend that you customize the retry time based on your business needs, or release the DTS instance as soon as possible after the source and destination database instances are released.

      Retry Time for Other Issues

      After the migration task starts, if other non-connectivity issues occur in the source or destination database (such as a DDL or DML execution exception), DTS reports an error and immediately starts continuous retry attempts. The default retry duration is 10 minutes. You can also customize the retry time within a range of 1 to 1440 minutes. We recommend that you set it to more than 10 minutes. If the related operations succeed within the set retry time, the migration task automatically resumes. Otherwise, the task fails.

      Important

      The value of Retry Time for Other Issues must be less than the value of Retry Time for Failed Connections.

      Enable Throttling for Full Data Migration

      During the full migration phase, DTS consumes some read and write resources of the source and destination databases, which may increase the database load. You can choose whether to throttle the full data migration task based on your needs. To reduce the pressure on the databases, you can set the Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) parameters.

      Note
      • This configuration item is available only if you select Full Data Migration for Migration Types.

      • You can also adjust the full migration speed after the migration instance is running.

      Enable Throttling for Incremental Data Migration

      As needed, you can also choose whether to set speed limits for the incremental migration task. You can set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s) to reduce the pressure on the destination database.

      Note
      • This configuration item is available only if you select Incremental Data Migration for Migration Types.

      • You can also adjust the incremental migration speed after the migration instance is running.

      Environment Tag

      You can select an environment tag to identify the instance. This is not required for this example.

      Configure ETL

      Choose whether to enable the extract, transform, and load (ETL) feature. For more information, see What is ETL?

      Monitoring and Alerting

      Select whether to set alerts and receive alert notifications based on your business needs.

      • No: Does not set an alert.

      • Yes: Sets an alert. You must also set the alert threshold and alert notifications. The system sends an alert notification if the migration fails or the latency exceeds the threshold.

    3. Click Next: Data Validation to configure a data validation task.

      For more information about the data validation feature, see Configure data validation.

    4. Optional: After you complete the previous configurations, click Next: Configure Database and Table Fields. Then, configure the Type, Primary Key Column, Distribution Key, and partition key parameters for the tables to be migrated to the destination database. The partition key parameters include Partition Key, Partitioning Rules, and Partition Lifecycle.

      Note
      • This step is available only if you select Schema Migration for Migration Types. To make modifications, select All for Definition Status.

      • In the Primary Key Column field, you can select multiple columns to form a composite primary key. In this case, you must select one or more of the primary key columns to serve as the Distribution Key and the Partition Key. For more information, see CREATE TABLE. A hypothetical example of the resulting table definition follows this note.
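      A sketch of what the resulting AnalyticDB for MySQL 3.0 table definition might look like (the table and column names are illustrative; verify the exact syntax in CREATE TABLE):

      ```sql
      CREATE TABLE orders (
        order_id   BIGINT NOT NULL,
        order_date DATETIME NOT NULL,
        amount     DECIMAL(18, 2),
        PRIMARY KEY (order_id, order_date)                      -- composite primary key
      )
      DISTRIBUTED BY HASH(order_id)                             -- distribution key chosen from the primary key
      PARTITION BY VALUE(DATE_FORMAT(order_date, '%Y%m%d'))     -- partition key
      LIFECYCLE 30;                                             -- partition lifecycle
      ```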

  6. Save the task and run a precheck.

    • To view the parameters for configuring this instance when you call the API operation, move the pointer over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters in the bubble.

    • If you do not need to view or have finished viewing the API parameters, click Next: Save Task Settings and Precheck at the bottom of the page.

    Note
    • Before the migration task starts, a precheck is performed. The task starts only after it passes the precheck.

    • If the precheck fails, click View Details next to the failed check item, fix the issue based on the prompt, and then run the precheck again.

    • If a warning is reported during the precheck:

      • For check items that cannot be ignored, click View Details next to the failed item, fix the issue based on the prompt, and then run the precheck again.

      • For check items that can be ignored and do not need to be fixed, you can click Confirm Alert Details, Ignore, OK, and Precheck Again in sequence to skip the alert item and run the precheck again. If you ignore an alert item, issues such as data inconsistency may occur and pose risks to your business.

  7. Purchase the instance.

    1. When the Success Rate is 100%, click Next: Purchase Instance.

    2. On the Purchase page, select the link specification for the data migration instance. For more information, see the following table.

      In the New Instance Class section, configure the following parameters:

      • Resource Group Settings: Select the resource group to which the instance belongs. The default value is default resource group. For more information, see What is Resource Management?

      • Instance Class: DTS provides migration specifications with different performance levels. The link specification affects the migration speed. Select a specification based on your business scenario. For more information, see Data migration link specifications.

    3. After the configuration is complete, read and select Data Transmission Service (Pay-as-you-go) Service Terms.

    4. Click Buy and Start, and in the OK dialog box that appears, click OK.

      You can view the progress of the migration instance on the Data Migration Tasks list page.

      Note
      • If the migration instance does not include an incremental migration task, it stops automatically. After the instance stops, its Status is Completed.

      • If the migration instance includes an incremental migration task, it does not stop automatically, and the incremental migration task continues to run. While the incremental migration task is running normally, the Status of the instance is Running.