
Data Transmission Service: Migrate PolarDB for PostgreSQL to RDS for PostgreSQL

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a PolarDB for PostgreSQL cluster to an RDS for PostgreSQL instance with minimal downtime.

DTS supports three migration types that you can combine based on your goal:

| Migration type | What it does | When to use |
| --- | --- | --- |
| Schema migration | Copies schema definitions (tables, keys, constraints) to the destination | Always include when the destination schema does not exist yet |
| Full data migration | Copies all existing data at a point in time | Required for any migration |
| Incremental data migration | Continuously replicates changes after the full migration | Add this for near-zero-downtime migration |

For a full migration with minimal downtime, select all three types. For a one-time migration where you can tolerate a maintenance window, select schema migration and full data migration only.

Prerequisites

Before you begin, make sure you have:

  • A source PolarDB for PostgreSQL cluster.

  • A destination RDS for PostgreSQL instance with enough storage for the data to be migrated.

  • Database accounts for the source and destination databases with the permissions listed in Database account permissions.

Important notes

During schema migration, DTS migrates foreign keys from the source database to the destination database.
During full and incremental migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascading updates or deletes occur in the source database while the task is running, data inconsistency may occur.

Limitations

Source database requirements

  • Tables to be migrated must have a primary key or a non-null unique index.

  • Long-running transactions cause write-ahead log (WAL) accumulation in the source database. If disk space runs low during an incremental migration, commit or terminate long-running transactions.

  • The PolarDB for PostgreSQL cluster must support and enable Logical Replication Slot Failover to survive primary/secondary switchovers during migration. If the cluster's database engine is PostgreSQL 14 (which does not support this feature), a high availability (HA) switchover during migration causes the migration instance to fail unrecoverably.

  • Do not perform DDL operations that change the database or table structure during schema migration and full data migration. Otherwise, the migration task fails.

  • If you do not select incremental data migration, do not write new data to the source instance during full data migration. To keep data consistent in real time, select schema migration, full data migration, and incremental data migration together.

  • A single incremental change that produces more than 256 MB of data causes the migration instance to fail unrecoverably. Reconfigure the migration instance if this occurs.
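Two of the checks above can be run ahead of time from a psql session on the source database. The following is a hedged sketch; the 10-minute threshold is illustrative, and the index check does not verify the NOT NULL part of the unique-index requirement:

```sql
-- Find user tables that have neither a primary key nor a unique index
-- (DTS requires a primary key or a non-null unique index on every migrated table).
SELECT c.relnamespace::regnamespace AS schema_name, c.relname AS table_name
FROM pg_class c
WHERE c.relkind = 'r'
  AND c.relnamespace::regnamespace::text NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.oid AND (i.indisprimary OR i.indisunique)
  );

-- Find transactions open longer than 10 minutes, which can cause WAL accumulation.
SELECT pid, now() - xact_start AS duration, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '10 minutes';
```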

Other limitations

  • Each migration task migrates one database. Configure a separate task for each additional database.

  • DTS does not support migrating TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

  • Schemas created by installing plugins cannot be migrated.

  • If a table contains a SERIAL field and you include Schema Migration, also select Sequence or migrate the entire schema. Otherwise the migration instance may fail.

  • Full data migration uses concurrent INSERT operations, which causes table fragmentation. The table space in the destination database will be larger than in the source after full migration completes.

  • DTS validates data content but does not validate metadata such as sequences. Validate sequences separately before switching over; see Update the sequence value of the destination database.

  • During incremental migration, DTS creates a replication slot prefixed with dts_sync_ in the source database and uses it to read incremental logs generated within the last 15 minutes. The slot is cleaned up automatically when the migration fails or the instance is released. However, if the source database password changed or the DTS IP address was removed from the whitelist, automatic cleanup fails; delete the slot manually to prevent WAL from accumulating and filling the disk.

  • If a failover occurs on the source database, log on to the secondary database to manually clean up the replication slot.

  • DTS creates the following temporary tables in the source database to support incremental migration: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance. Do not delete these tables during migration; they are cleaned up automatically when the DTS instance is released.

  • For tables with foreign keys, triggers, or event triggers: if the destination database account is a privileged account or has superuser permissions, DTS temporarily sets session_replication_role to replica at the session level. If the account does not have these permissions, set session_replication_role to replica in the destination database manually. Reset it to origin after the DTS migration task is released.

  • When migrating partitioned tables, include both the parent table and all its child tables as migration objects. In PostgreSQL, all data is stored in child tables — omitting any child table causes data inconsistency.

  • DTS attempts to automatically recover failed tasks within seven days. Before you switch your business to the destination instance, end or release the task, or use the REVOKE command to revoke the write permissions of the account that DTS uses to access the destination instance. Otherwise, a recovered task may overwrite destination data with source data.

  • If a task fails, DTS support staff will attempt to restore it within eight hours and may restart the task or adjust its parameters. Only DTS task parameters are modified, not database parameters. See Modify instance parameters for parameters that may be adjusted.
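The replication slots mentioned above can be inspected and, after a task has failed or been released, removed manually. A hedged sketch from a psql session on the source database follows; the slot name is illustrative:

```sql
-- List replication slots created by DTS (slot names start with dts_sync_).
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop a leftover slot so WAL no longer accumulates on disk.
-- Only do this after the DTS task has failed or been released.
SELECT pg_drop_replication_slot('dts_sync_example_slot');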

Incremental migration: REPLICA IDENTITY requirement

If the migration instance includes incremental data migration, run the following command on each table to be migrated in the source database before writing data to those tables:

ALTER TABLE schema.table REPLICA IDENTITY FULL;

Replace schema and table with the actual schema and table names. Run this during off-peak hours and avoid table lock operations while the command executes to prevent deadlocks.

Run this command in the following situations:

  • When the instance runs for the first time

  • When the migration object is a schema and a new table is created, or an existing table is rebuilt using the RENAME command

If you skip the related precheck item, DTS runs this command automatically during instance initialization.
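If many tables are involved, the command can be applied in a loop instead of table by table. The following is a minimal sketch, assuming the illustrative schema name my_schema; run it during off-peak hours:

```sql
-- Apply REPLICA IDENTITY FULL to every ordinary table in one schema.
DO $$
DECLARE
  t record;
BEGIN
  FOR t IN
    SELECT schemaname, tablename
    FROM pg_tables
    WHERE schemaname = 'my_schema'
  LOOP
    EXECUTE format('ALTER TABLE %I.%I REPLICA IDENTITY FULL',
                   t.schemaname, t.tablename);
  END LOOP;
END $$;
```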

DDL migration

DDL migration is supported only for tasks created after October 1, 2020.

| Supported DDL statements | Notes |
| --- | --- |
| CREATE TABLE, DROP TABLE | |
| ALTER TABLE (including RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT) | |
| TRUNCATE TABLE | PostgreSQL 11 and later only |
| CREATE INDEX ON TABLE | |

DDL migration requires a privileged account in the source database. The following DDL statements are not supported:

  • Statements with CASCADE or RESTRICT clauses

  • Statements executed in sessions where SET session_replication_role = replica is applied

  • Statements executed by calling functions

  • Transactions that contain both DML and DDL statements (the DDL statement is not migrated)

  • DDL statements for objects not selected for migration, within the same transaction


Billing

| Migration type | Instance fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged when the destination database uses Public IP Address as the access method. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when the destination database uses Public IP Address as the access method. See Billing overview. |

Supported migration objects

DTS migrates schemas and tables, including primary keys, unique keys, foreign keys, built-in data types, and default constraints.

Supported objects may vary by destination database type. Check the console for the current list.

Database account permissions

| Database | Schema migration | Full migration | Incremental migration | How to create |
| --- | --- | --- | --- | --- |
| PolarDB for PostgreSQL | Privileged account | Privileged account | Privileged account | Create a Database Account |
| RDS for PostgreSQL | CREATE and USAGE on migration objects | Owner on the schema | Owner on the schema | Create an Account |
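As a hedged sketch of the destination-side grants listed above (the database mydb, schema my_schema, and account dts_user are illustrative names):

```sql
-- Destination RDS for PostgreSQL: the account needs CREATE and USAGE for
-- schema migration, and ownership of the schema for full and incremental
-- migration.
CREATE USER dts_user WITH PASSWORD 'your_password';
GRANT CONNECT, CREATE ON DATABASE mydb TO dts_user;
GRANT USAGE, CREATE ON SCHEMA my_schema TO dts_user;
ALTER SCHEMA my_schema OWNER TO dts_user;
```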

Create a migration task

Step 1: Open the migration task page

Use one of the following methods to reach the migration task list for the destination region.

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will be located.

From the DMS console

Note

The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. Configure the source database:

    | Field | Value |
    | --- | --- |
    | Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification; it does not need to be unique. |
    | Database Type | PolarDB for PostgreSQL |
    | Connection Type | Cloud Instance |
    | Instance Region | Region where the source PolarDB for PostgreSQL cluster resides |
    | Across Alibaba Cloud Accounts | Not Cross-Account (for same-account migrations) |
    | Instance ID | ID of the source PolarDB for PostgreSQL cluster |
    | Database Name | Name of the source PolarDB for PostgreSQL database |
    | Database Account | Account for the source database |
    | Database Password | Password for the account |
  3. Configure the destination database:

    | Field | Value |
    | --- | --- |
    | Database Type | PostgreSQL |
    | Connection Type | Cloud Instance |
    | Instance Region | Region where the destination RDS for PostgreSQL instance resides |
    | Instance ID | ID of the destination RDS for PostgreSQL instance |
    | Database Name | Name of the database in the destination instance that will receive the migrated objects |
    | Database Account | Account for the destination database |
    | Database Password | Password for the account |
    | Encryption | Select Non-encrypted or SSL-encrypted. For SSL encryption, upload the CA Certificate and, if using client certificates, the Client Certificate, Private Key of Client Certificate, and Private Key Password of Client Certificate. |
  4. Click Test Connectivity and Proceed at the bottom of the page.

    Add the DTS server CIDR blocks to the security settings (IP whitelist) of both the source and destination databases before testing. See Add the IP address whitelist of DTS servers. If either database's Access Method is not Alibaba Cloud Instance, also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 3: Select migration objects

On the Configure Objects page:

  1. Select the migration types:

    If you skip Schema Migration, the destination database must already have the target schema and tables. If you skip Incremental Data Migration, do not write new data to the source instance during migration.
    | Goal | Migration types to select |
    | --- | --- |
    | Full migration only (with maintenance window) | Schema Migration + Full Data Migration |
    | Near-zero-downtime migration | Schema Migration + Full Data Migration + Incremental Data Migration |
  2. Set Processing Mode of Conflicting Tables:

    • Precheck and Report Errors: DTS checks for tables with the same name in the destination before starting. If a conflict exists, the task does not start. To resolve naming conflicts without deleting data, use Object name mapping.

    • Ignore Errors and Proceed: Skips the conflict check. During full migration, conflicting records in the destination are kept. During incremental migration, source records overwrite destination records. Use with caution — inconsistent schemas may cause partial or failed migration.

  3. (Optional) Configure Case Policy for Destination Object Names to control case sensitivity for migrated object names. The default is DTS default policy. See Case sensitivity of object names in the destination database.

  4. In the Source Objects box, click the schemas or tables to migrate, then click the right-arrow icon to move them to the Selected Objects box.

    • Selecting tables as migration objects excludes views, triggers, and stored procedures.
    • If a table contains a SERIAL field and Schema Migration is selected, also include Sequence or migrate the entire schema.
  5. (Optional) To rename objects in the destination, use Object name mapping on the items in the Selected Objects box.

    Object name mapping may cause dependent objects to fail migration.

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following options:

| Setting | Description |
| --- | --- |
| Dedicated Cluster for Task Scheduling | DTS uses a shared cluster by default. Purchase a dedicated cluster for more stable task scheduling. |
| Retry Time for Failed Connections | Duration DTS retries after a connection failure. Default: 720 minutes. Range: 10–1440 minutes. Set to at least 30 minutes. The task resumes automatically if reconnected within the retry window; otherwise it fails. |
| Retry Time for Other Issues | Duration DTS retries after non-connectivity errors (DDL or DML exceptions). Default: 10 minutes. Range: 1–1440 minutes. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit QPS to the source, RPS of the migration, or migration speed (MB/s) to reduce database load during full migration. Adjustable after the instance starts. |
| Enable Throttling for Incremental Data Migration | Limit RPS or migration speed (MB/s) for incremental migration. Adjustable after the instance starts. |
| Configure ETL | Enable extract, transform, and load (ETL) to apply data transformations during migration. See Configure ETL in a data migration or data synchronization task. |
| Monitoring and Alerting | Configure alert thresholds and notifications. DTS sends alerts when a migration fails or latency exceeds the threshold. |
| Environment Tag | Tag the instance for environment identification (optional). |

Step 5: Configure data validation (optional)

Click Next: Data Validation to set up a data validation task. See Configure data validation.

Step 6: Save the task and run a precheck

Click Next: Save Task Settings and Precheck at the bottom of the page.

To view API parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.

DTS runs a precheck before the migration starts. If the precheck fails:

  • For failed items: click View Details, fix the issue, then click Precheck Again.

  • For warning items that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again. Ignored warnings may cause data inconsistency.

Step 7: Purchase and start the instance

  1. When Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select the instance class:

    | Field | Description |
    | --- | --- |
    | Resource Group Settings | Resource group for the instance. Default: default resource group. See What is Resource Management? |
    | Instance Class | Controls migration speed. See Data migration link specifications. |
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog box.

The migration task appears on the Data Migration Tasks list page. If the task includes only schema and full migration, it stops automatically when complete and the Status changes to Completed. If the task includes incremental migration, it continues running with Status showing Running.

Step 8: Switch over to the destination database

Before switching your applications to the destination instance:

  1. Wait until all pending changes from the source database have been applied to the destination. If incremental migration is running, monitor the task latency until it reaches zero.

  2. Update sequence values in the destination database. Sequences do not automatically start from the maximum value in the source database after a switchover. See Update the sequence value of the destination database.

  3. End or release the DTS migration task, or revoke the write permissions of the DTS account using the revoke command. This prevents DTS from overwriting destination data if the task is automatically recovered.

    If you manually set session_replication_role to replica in the destination database before migration, reset it to origin now.
  4. Reconnect your applications to the destination RDS for PostgreSQL instance.
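Steps 2 and 3 can be sketched in SQL on the destination database. The sequence, schema, and account names below are illustrative, and the value passed to setval should be the current value of the corresponding source sequence:

```sql
-- Step 2: advance the sequence so new inserts do not collide with migrated rows.
SELECT setval('my_schema.my_table_id_seq', 100000);

-- Step 3: revoke write permissions from the account DTS uses, so a
-- recovered task cannot overwrite destination data after switchover.
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA my_schema FROM dts_user;
```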

What's next