
Data Transmission Service: Migrate data from PolarDB-X 2.0 to AnalyticDB for PostgreSQL

Last Updated: Mar 30, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB-X 2.0 instance to an AnalyticDB for PostgreSQL instance. DTS supports schema migration, full data migration, and incremental data migration, so you can run the migration without interrupting your application services.

Prerequisites

Before you begin, make sure that you have:

  • A PolarDB-X 2.0 instance. For more information, see Create an instance.

  • An AnalyticDB for PostgreSQL instance with more available storage space than the source PolarDB-X 2.0 instance. For more information, see Create an instance.

Migration types

DTS supports three migration types for this scenario:

  • Schema migration: Migrates table schemas from the source to the destination database, including foreign keys if selected.

  • Full data migration: Migrates historical data from the source to the destination database. Only tables created with the CREATE TABLE statement are supported.

  • Incremental data migration: After full data migration completes, continuously replicates new changes from the source to the destination, so you can migrate without interrupting your application services.

To migrate without downtime, select all three types.

Permissions required

Grant the following permissions to the database accounts used by DTS:

  • PolarDB-X 2.0: the SELECT permission for schema migration and full data migration; the REPLICATION SLAVE and REPLICATION CLIENT permissions, plus SELECT on the objects to be migrated, for incremental data migration.

  • AnalyticDB for PostgreSQL: read and write permissions for all three migration types.

For more information about granting permissions on PolarDB-X 2.0, see Account permissions required for data synchronization.
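PolarDB-X 2.0 is MySQL-compatible, so the source-side privileges above can typically be expressed as standard MySQL GRANT statements. The following sketch assembles them; the account name, host pattern, and object list are placeholders, not values from this guide:

```python
# Sketch: build the GRANT statements for a DTS source account on a
# MySQL-compatible source. Account, host, and object names below are
# illustrative placeholders, not values prescribed by this document.
def source_grants(account: str, host: str, objects: list) -> list:
    stmts = [
        # REPLICATION SLAVE / REPLICATION CLIENT are global privileges
        # needed for incremental data migration (binary log access).
        f"GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO '{account}'@'{host}';",
    ]
    # SELECT on each object to be migrated covers schema, full, and
    # incremental migration.
    for obj in objects:
        stmts.append(f"GRANT SELECT ON {obj} TO '{account}'@'{host}';")
    return stmts

for stmt in source_grants("dts_user", "%", ["mydb.*"]):
    print(stmt)
```

Run the generated statements with an administrator account, then verify them with SHOW GRANTS before configuring the DTS task.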

Billing

  • Schema migration and full data migration: the task configuration fee is free. Data transfer is also free unless the Access Method of the destination database is Public IP Address. For more information, see Billing overview.

  • Incremental data migration: the task configuration fee is charged. For more information, see Billing overview.

SQL operations supported for incremental migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: ADD COLUMN

Behavior note: When data is written to the destination AnalyticDB for PostgreSQL instance:

  • UPDATE statements are automatically converted to REPLACE INTO statements.

  • If the primary key is updated, DTS converts the UPDATE to a DELETE followed by an INSERT.
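The two conversion rules above can be modeled as follows. This is an illustrative sketch of the documented behavior only, not DTS internals; the table and column names are placeholders:

```python
# Model the documented write conversion on the destination:
#   - an UPDATE that keeps the primary key becomes a REPLACE INTO
#   - an UPDATE that changes the primary key becomes DELETE + INSERT
# Illustrative only; this is not DTS's actual implementation.
def convert_update(table: str, pk: str, before: dict, after: dict) -> list:
    cols = ", ".join(after)
    vals = ", ".join(repr(v) for v in after.values())
    if before[pk] != after[pk]:
        # Primary key changed: remove the old row, insert the new one.
        return [
            f"DELETE FROM {table} WHERE {pk} = {before[pk]!r};",
            f"INSERT INTO {table} ({cols}) VALUES ({vals});",
        ]
    # Primary key unchanged: upsert the new row image.
    return [f"REPLACE INTO {table} ({cols}) VALUES ({vals});"]

# Primary key unchanged: one REPLACE INTO statement
print(convert_update("t", "id", {"id": 1, "v": "a"}, {"id": 1, "v": "b"}))
# Primary key changed: DELETE followed by INSERT
print(convert_update("t", "id", {"id": 1, "v": "a"}, {"id": 2, "v": "a"}))
```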

Limitations

Source database

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and the constrained columns must contain unique values. Otherwise, the destination database may contain duplicate records.

  • Read-only instances of Enterprise Edition PolarDB-X 2.0 are not supported as a source.

  • If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports up to 1,000 tables. For more than 1,000 tables, split the migration across multiple tasks, or migrate the entire database instead of individual tables.

  • TABLEGROUP and databases or schemas with a Locality attribute are not supported.

  • Tables whose names are reserved words (for example, select) cannot be migrated.

Incremental migration

If you include incremental data migration, the source database must meet these requirements:

  • Binary logging is enabled and the binlog_row_image parameter is set to full. If either condition is not met, the precheck returns an error and the task cannot start.

  • For incremental-only migration: binary logs are retained for more than 24 hours.

  • For full + incremental migration: binary logs are retained for at least 7 days. After full migration completes, you can reduce retention to more than 24 hours.
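The binlog requirements above can be summarized as a small check. This sketch uses MySQL-style variable names (PolarDB-X 2.0 is MySQL-compatible) and is illustrative only, not the actual DTS precheck:

```python
# Sketch of the binlog-related precheck conditions described above.
# Variable names follow MySQL conventions; illustrative only.
def binlog_precheck(variables: dict, retention_hours: float,
                    full_plus_incremental: bool) -> list:
    errors = []
    if variables.get("log_bin", "OFF").upper() != "ON":
        errors.append("binary logging is disabled")
    if variables.get("binlog_row_image", "").lower() != "full":
        errors.append("binlog_row_image must be set to full")
    if full_plus_incremental:
        # Full + incremental: retain binary logs for at least 7 days.
        if retention_hours < 7 * 24:
            errors.append("retain binary logs for at least 7 days")
    elif retention_hours <= 24:
        # Incremental-only: retain binary logs for more than 24 hours.
        errors.append("retain binary logs for more than 24 hours")
    return errors  # an empty list means the conditions are met

print(binlog_precheck({"log_bin": "ON", "binlog_row_image": "full"}, 168, True))
```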

Warning

If binary log retention requirements are not met, DTS may fail to read the binary logs, which can cause task failure or, in exceptional cases, data inconsistency or loss. The DTS service level agreement (SLA) does not apply in this situation.

Operations during migration

  • During schema migration and full data migration, do not run DDL operations that change database or table schemas. Otherwise, the migration task fails.

  • DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you run cascade update or delete operations on the source during migration, data inconsistency may occur.

  • If you change the network type of the PolarDB-X 2.0 instance during migration, update the network connection settings in the DTS task as well.

  • For full data migration only (without incremental): do not write to the source database during migration. To ensure data consistency, select schema migration, full data migration, and incremental data migration together.

Destination database

  • Only table-level migration is supported. Append-optimized (AO) tables are not supported as destination tables.

  • If you use column mapping for partial table migration, or if source and destination table schemas differ, data in columns that exist in the source but not in the destination is lost.

  • If the source table has a primary key, the primary key column in the destination must match.

  • If the source table has no primary key, the primary key column and distribution key in the destination must be the same.

  • A unique key (that includes the primary key column) in the destination must contain all columns of its distribution key.
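The three key rules above reduce to simple set comparisons. The following sketch checks them for one table; it is an illustration of the stated rules, not an actual AnalyticDB for PostgreSQL validation:

```python
# Check the destination-table key rules listed above using set logic.
# Illustrative only; column names in the example are placeholders.
def check_destination_keys(src_pk: set, dst_pk: set, dist_key: set,
                           unique_keys: list) -> list:
    problems = []
    if src_pk and src_pk != dst_pk:
        problems.append("destination primary key must match the source primary key")
    if not src_pk and dst_pk != dist_key:
        problems.append("without a source primary key, the destination primary "
                        "key and distribution key must be the same")
    # Any unique key that includes the primary key column must contain
    # all columns of the distribution key.
    for uk in unique_keys:
        if dst_pk <= uk and not dist_key <= uk:
            problems.append(f"unique key {sorted(uk)} must contain all "
                            "distribution key columns")
    return problems  # an empty list means the table passes

print(check_destination_keys({"id"}, {"id"}, {"id"}, [{"id", "email"}]))
```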

Operational notes

  • Evaluate source and destination database performance before migration. During full data migration, DTS reads from the source and writes to the destination concurrently, which increases load on both databases. Run the migration during off-peak hours to reduce the impact.

  • DTS keeps failed tasks eligible for automatic resumption for up to 7 days. Before switching workloads to the destination, stop or release the failed task. Alternatively, execute a REVOKE statement to revoke write permissions from the DTS account on the destination to prevent data from being overwritten if the task resumes automatically.

  • If a DTS instance fails, DTS will attempt to recover it within 8 hours. During recovery, DTS may restart the instance or adjust instance parameters. Database parameters are not modified. Parameters that may be adjusted include those described in Modify instance parameters.

  • DTS periodically updates the dts_health_check.ha_health_check table in the source database to advance the binary log offset.

Data type mappings

For data type mappings between PolarDB-X 2.0 and AnalyticDB for PostgreSQL, see Data type mappings for initial schema synchronization.

Create a migration task

Step 1: Go to the Data Migration page

Use one of the following methods:

DTS console

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance resides.

Data Management (DMS) console

The actual steps may vary based on the DMS console mode and layout. For more information, see Simple mode and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top navigation bar, choose Data + AI > DTS (DTS) > Data Migration.

  3. From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

Step 2: Configure the task

  1. Click Create Task.

  2. Read the instructions in the Limits section at the top of the page before proceeding.

  3. Configure the source and destination databases:

    Task Name: DTS auto-generates a task name. Specify a descriptive name for easier identification. The name does not need to be unique.

    Source Database:

      • Select Existing Connection: Select a registered database instance from the list, or configure the connection manually if the instance is not registered. For more information, see Manage database connections. Note: in the DMS console, you can select the database instance from the Select a DMS database instance drop-down list.

      • Database Type: Select PolarDB-X 2.0.

      • Access Method: Select Alibaba Cloud Instance.

      • Instance Region: Select the region where the source PolarDB-X 2.0 instance resides.

      • Cross-Account: Select No to migrate data within the same Alibaba Cloud account.

      • Instance ID: Select the ID of the source PolarDB-X 2.0 instance.

      • Database Account: Enter the database account. For required permissions, see Permissions required.

      • Database Password: Enter the password for the database account.

    Destination Database:

      • Select Existing Connection: Select a registered database instance from the list, or configure the connection manually if the instance is not registered. For more information, see Manage database connections. Note: in the DMS console, you can select the database instance from the Select a DMS database instance drop-down list.

      • Database Type: Select AnalyticDB for PostgreSQL.

      • Access Method: Select Alibaba Cloud Instance.

      • Instance Region: Select the region where the destination AnalyticDB for PostgreSQL instance resides.

      • Instance ID: Select the ID of the destination AnalyticDB for PostgreSQL instance.

      • Database Name: Enter the name of the destination database that will receive the migrated objects.

      • Database Account: Enter the database account for the destination instance.

      • Database Password: Enter the password for the database account.
  4. Click Test Connectivity and Proceed.

    Make sure that DTS server CIDR blocks are added to the security settings of the source and destination databases. For more information, see Add DTS server IP addresses to a whitelist.

Step 3: Select objects and configure migration settings

  1. On the Configure Objects page, configure the following settings:

    • Migration Types: Select the migration types based on your requirements:
      - Schema Migration and Full Migration only: for a one-time migration with no ongoing replication. Do not write to the source during migration.
      - Schema Migration, Full Migration, and Incremental Migration: to ensure service continuity during migration (recommended).

    • Processing Mode of Conflicting Tables:
      - Precheck and Report Errors: checks for tables in the destination with the same name as source tables. The precheck fails if conflicts are found. To resolve naming conflicts without deleting destination tables, use object name mapping. For more information, see Map object names.
      - Ignore Errors and Proceed: skips the naming conflict check. Use with caution because this may cause data inconsistency: during full migration, conflicting records in the destination are retained; during incremental migration, conflicting records are overwritten. If schemas differ between source and destination, the task may fail or only partial columns are migrated.

    • Storage Engine Type: The storage engine for destination tables. Default: Beam. This parameter is available only when the destination AnalyticDB for PostgreSQL instance minor version is v7.0.6.6 or later and Schema Migration is selected.

    • Capitalization of object names in destination instance: Configures the capitalization policy for database, table, and column names in the destination. By default, the DTS policy is applied. For more information, see Specify the capitalization of object names in the destination instance.

    • Source Objects: Select objects from the Source Objects section and click the arrow icon to add them to Selected Objects. Only tables are supported. Views, triggers, and stored procedures are not migrated.

    • Selected Objects: To rename a single object, right-click it. To rename multiple objects at once, click Batch Edit in the upper-right corner. For more information, see Map object names. Note: renaming an object may cause other objects that depend on it to fail migration. To filter rows using SQL conditions, right-click a table in Selected Objects and configure the filter. For more information, see Filter data using SQL conditions. To select which DML operations to migrate for a specific table, right-click the table and configure the operations.

  2. Click Next: Advanced Settings and configure the following optional parameters:

    • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task to the shared cluster. To improve task stability, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster.

    • Retry Time for Failed Connections: The retry window for reconnecting to the source or destination after a connection failure. Valid values: 10–1,440 minutes. Default: 720. Set this to at least 30 minutes. DTS resumes the task if it reconnects within this window; otherwise, the task fails. Note: while DTS retries, you are charged for the DTS instance.

    • Retry Time for Other Issues: The retry window for failed DDL or DML operations. Valid values: 1–1,440 minutes. Default: 10. Set this to at least 10 minutes. The value must be smaller than Retry Time for Failed Connections.

    • Enable Throttling for Full Data Migration: Limits read/write throughput during full migration to reduce database load. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.

    • Enable Throttling for Incremental Data Migration: Limits throughput during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.

    • Whether to delete SQL operations on heartbeat tables of forward and reverse tasks: Controls whether DTS writes heartbeat SQL operations to the source database while the instance runs. Yes: DTS does not write heartbeat operations, and the DTS instance may show latency. No: DTS writes heartbeat operations, which may affect features such as physical backup and cloning of the source database.

    • Environment Tag: Optional. Select a tag to identify the instance by environment (for example, production or test).

    • Configure ETL: Enables the extract, transform, and load (ETL) feature for in-flight data transformation. For more information, see What is ETL? and Configure ETL in a data migration or data synchronization task.

    • Monitoring and Alerting: Configures alerts for task failures or migration latency that exceeds a threshold. If enabled, configure the alert threshold and notification settings. For more information, see Configure monitoring and alerting when you create a DTS task.
  3. (Optional) Click Next: Configure Database and Table Fields to set the primary key columns and distribution keys for destination tables.

    If you selected Schema Migration, define the Type, Primary Key Column, and Distribution Key for each destination table. For a composite primary key, select multiple columns. At least one column from the Primary Key Column list must also be the Distribution Key. For more information, see CREATE TABLE.
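The choices in this step map onto the CREATE TABLE statement that the destination uses; AnalyticDB for PostgreSQL is Greenplum-compatible, so the distribution key is expressed with a DISTRIBUTED BY clause. A sketch that assembles such a statement under that assumption; the table and column names are placeholders:

```python
# Assemble a destination CREATE TABLE statement with a composite primary
# key and a distribution key, as configured in this step. Table and
# column names are placeholders; DISTRIBUTED BY follows Greenplum-style
# syntax, which AnalyticDB for PostgreSQL is assumed to accept.
def create_table_sql(table: str, columns: dict, pk: list, dist_key: list) -> str:
    col_defs = ", ".join(f"{name} {ctype}" for name, ctype in columns.items())
    # Per the rule above: at least one primary key column must also be
    # part of the distribution key.
    assert set(dist_key) & set(pk), "distribution key must overlap the primary key"
    return (f"CREATE TABLE {table} ({col_defs}, "
            f"PRIMARY KEY ({', '.join(pk)})) "
            f"DISTRIBUTED BY ({', '.join(dist_key)});")

print(create_table_sql(
    "orders",
    {"order_id": "bigint", "region": "int", "amount": "numeric"},
    ["order_id", "region"],   # composite primary key
    ["order_id"],             # distribution key drawn from the primary key
))
```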

Step 4: Run the precheck

  1. Click Next: Save Task Settings and Precheck.

    To preview the API parameters for this task, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters before proceeding.
  2. Review and resolve any precheck results:

    • Failed: The task is blocked. Click View Details next to the failed item, troubleshoot the issue, then click Precheck Again.

    • Alert (must resolve): Click View Details, troubleshoot the issue, then click Precheck Again.

    • Alert (can ignore): Click Confirm Alert Details. In the dialog box, click Ignore, click OK, then click Precheck Again. Ignoring an alert may cause data inconsistency or expose your workload to risk.

Step 5: Purchase the instance and start the task

  1. Wait for Success Rate to reach 100%, then click Next: Purchase Instance.

  2. On the Purchase Instance page, configure the following parameters:

    In the New Instance Class section:

    • Resource Group: The resource group to which the migration instance belongs. Default: the default resource group. For more information, see What is Resource Management?

    • Instance Class: Select an instance class based on your required migration speed. For more information, see Instance classes of data migration instances.
  3. Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

  4. Click Buy and Start, then click OK in the confirmation message.

Monitor the migration task

After the task starts, monitor its progress on the Data Migration page:

  • Schema migration and full data migration only: the task stops automatically when complete. The status shows Completed.

  • With incremental data migration: the task does not stop automatically. The status shows Running.

What's next