
Data Transmission Service:Migrate data from a self-managed PostgreSQL database to a PolarDB for PostgreSQL cluster

Last Updated: Mar 30, 2026

Data Transmission Service (DTS) migrates data from a self-managed PostgreSQL database to a PolarDB for PostgreSQL cluster with minimal downtime. This guide walks you through configuring and running a migration task.

Choose a migration type

Select the migration type combination that fits your situation:

Schema migration + Full data migration
  • What it does: Migrates schemas and historical data. The task completes and stops automatically.
  • Downtime: Required — stop writes to the source before migration.
  • Cost: Free.
  • Recommended when: The application can tolerate a maintenance window and simplicity is a priority.

Schema migration + Full data migration + Incremental data migration
  • What it does: Migrates historical data, then continuously replicates ongoing changes until you cut over.
  • Downtime: Not required — services keep running during migration.
  • Cost: Free for the schema and full phases; charged for the incremental phase.
  • Recommended when: Business continuity is critical and the application cannot tolerate downtime.

If you run full migration only: Stop writes to the source database before starting. Data written to the source during migration will not appear in the destination.

Prerequisites

Before you begin, check the supported source and destination database versions in Overview of data migration scenarios.

Required permissions

  • Self-managed PostgreSQL (source): USAGE permission on pg_catalog for schema migration; SELECT permission on the objects to migrate for full data migration; superuser permissions for incremental data migration.

  • PolarDB for PostgreSQL (destination): owner permissions on the target schema for all migration types.

Why superuser is required for incremental migration: DTS uses PostgreSQL logical replication functions to capture ongoing changes from the source. These functions are accessible only to superusers. Without superuser permissions on the source database, the incremental migration task cannot start.

For instructions on how to create accounts and grant permissions, see the account management documentation for the source and destination databases.
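
As an illustration only, a source-side migration account might be set up as follows. The account name, password, and schema are placeholders, not values from this guide:

```sql
-- Hypothetical source-side account; name and password are placeholders.
-- SUPERUSER is needed only if the task includes incremental migration.
CREATE USER dts_migrator WITH PASSWORD 'Your_Password1' SUPERUSER;

-- Schema migration: USAGE on pg_catalog.
GRANT USAGE ON SCHEMA pg_catalog TO dts_migrator;

-- Full data migration: SELECT on the objects to migrate.
GRANT SELECT ON ALL TABLES IN SCHEMA public TO dts_migrator;
```

On the destination, grant the account ownership of the target schema instead.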

Supported objects

Tables and schemas

TABLE and SCHEMA, including PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, DATATYPE (built-in data type), and DEFAULT CONSTRAINT.

Other database objects

VIEW, PROCEDURE (PostgreSQL V11 or later), FUNCTION, RULE, SEQUENCE, EXTENSION, TRIGGER, AGGREGATE, INDEX, OPERATOR, and DOMAIN.

Supported SQL operations for incremental migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE, DROP TABLE, ALTER TABLE (including RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT), TRUNCATE TABLE (PostgreSQL V11 or later), CREATE INDEX ON TABLE

DDL migration notes:

  • DDL migration is available only for tasks created after October 1, 2020.

  • Tasks created before May 12, 2023 require a trigger and function in the source database to capture DDL. See Use triggers and functions to implement incremental DDL migration for PostgreSQL databases.

  • The source database account must be a privileged account for DDL migration.

  • DTS does not migrate: CASCADE or RESTRICT modifiers, DDL from sessions with SET session_replication_role = replica, DDL executed via function calls, DDL mixed with DML in the same submission, DDL statements outside the migration scope, or CREATE SEQUENCE.

  • Incremental migration does not support BIT type data.

Billing

  • Schema migration + Full data migration: the instance configuration fee is free. Internet traffic is charged when the destination Access Method is Public IP Address. See Billing overview.

  • Incremental data migration: the instance configuration fee is charged. Internet traffic is charged when the destination Access Method is Public IP Address. See Billing overview.

Usage notes

Foreign keys and constraints

  • DTS migrates foreign keys from the source database during schema migration.

  • During full and incremental migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade updates and deletes on the source during this period may cause data inconsistency.

Source database requirements

  • The source server must have sufficient outbound bandwidth, or migration speed will be affected.

  • Tables must have a primary key or UNIQUE constraint with unique field values. Without these, duplicate data may appear in the destination.

    If the destination table was not created by DTS (that is, Schema Migration was not selected), the table must have the same primary key or a non-null UNIQUE constraint as the source table. Otherwise, duplicate data may appear.
  • The source database name cannot contain hyphens (-). For example, dts-testdata is not a valid name.

  • Table-level migration with object name mapping supports a maximum of 1,000 tables per task. If you exceed this limit, split the tables across multiple tasks or migrate the entire database.
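
To spot tables that lack a primary key or UNIQUE constraint before starting, a catalog query along these lines can help. This is a sketch: it checks declared constraints in schema public only, and does not verify that unique columns are non-null:

```sql
-- Tables in schema public with neither a PRIMARY KEY nor a UNIQUE constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON  c.table_schema = t.table_schema
       AND c.table_name   = t.table_name
       AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema = 'public'
  AND c.constraint_name IS NULL;
```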

Write-ahead log (WAL) requirements for incremental migration

  • Set wal_level to logical.

  • WAL retention: at least 24 hours for incremental-only tasks; at least 7 days for tasks that include both full and incremental migration. After full migration completes, reduce the retention to more than 24 hours. If the retention period is too short, DTS cannot retrieve the required WAL files, which may cause task failure, data inconsistency, or data loss. Issues caused by insufficient retention are not covered by the DTS Service Level Agreement (SLA).

  • If long-running transactions exist in the source database during incremental migration, WAL files generated before those transactions commit may accumulate and exhaust disk space on the source server.
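
The WAL prerequisites above can be checked on the source with standard catalog queries (PostgreSQL 10 or later; a sketch):

```sql
-- Must return "logical" for incremental migration.
SHOW wal_level;

-- WAL retained per replication slot; growth here can exhaust disk space.
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;

-- Long-running transactions that can pin WAL files.
SELECT pid, now() - xact_start AS xact_age, state
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
```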

Operational limits

  • A primary/secondary switchover on the self-managed PostgreSQL database causes the migration task to fail.

  • Do not perform DDL operations that change the database or table schema during full data migration. The task will fail.

  • If a single row of data to be migrated exceeds 256 MB after an incremental change, the migration instance may fail and cannot be recovered. In that case, reconfigure the migration instance.

  • A major version upgrade of the source database while a migration instance is running causes the instance to fail and cannot be recovered.

Google Cloud Platform source databases

If the source is Google Cloud Platform Cloud SQL for PostgreSQL, the Database Account must have cloudsqlsuperuser permissions, and the account must have Owner permissions on all objects to migrate. Grant Owner permissions with:

GRANT <owner_of_the_object_to_be_migrated> TO <source_database_account>;

An account with cloudsqlsuperuser permissions cannot manage data owned by another cloudsqlsuperuser account.

REPLICA IDENTITY requirement

For incremental migration, run the following command on each table to be migrated, before data is written to the table, in these two situations:

  • When the migration instance runs for the first time.

  • When the migration object granularity is Schema and a new table is created or rebuilt with RENAME.

ALTER TABLE schema.table REPLICA IDENTITY FULL;

Replace schema and table with the actual schema name and table name. Run this command during off-peak hours to avoid table locks. If you skip the relevant precheck item, DTS runs this command automatically when the instance initializes.
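
To find tables whose REPLICA IDENTITY is not yet FULL, a catalog query such as the following can be used (a sketch; in pg_class, relreplident = 'f' means FULL):

```sql
-- Tables (including partitioned tables) without REPLICA IDENTITY FULL.
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'p')
  AND c.relreplident <> 'f'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```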

SERIAL data type

If a table contains a SERIAL column, the source database automatically creates a Sequence for that column. If Schema Migration is selected, also select Sequence or migrate the entire schema. Otherwise, the migration instance may fail.
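
To see which sequences back SERIAL columns, and therefore should be selected along with Schema Migration, the ownership dependency can be queried (a sketch):

```sql
-- Sequences owned by table columns, i.e. the sequences behind SERIAL columns.
SELECT seq.relname AS sequence_name,
       tab.relname AS table_name,
       att.attname AS column_name
FROM pg_class seq
JOIN pg_depend dep ON dep.objid = seq.oid AND dep.deptype = 'a'
JOIN pg_class tab ON tab.oid = dep.refobjid
JOIN pg_attribute att ON att.attrelid = tab.oid AND att.attnum = dep.refobjsubid
WHERE seq.relkind = 'S';
```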

session_replication_role parameter

For full or incremental migration with foreign keys, triggers, or event triggers:

  • If the destination account is a privileged account, DTS sets session_replication_role to replica at the session level during migration.

  • If the account does not have this permission, manually set session_replication_role to replica on the destination database.

While this parameter is set to replica, cascade updates or deletes on the source may cause data inconsistency. After the DTS migration task is released, reset session_replication_role to origin.
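
One possible way to apply the setting when a session-level change is not available is at the database level. The database name below is a placeholder, and sufficient privileges on the destination are assumed:

```sql
-- Hypothetical database name; applies to new sessions on this database.
ALTER DATABASE mydb SET session_replication_role = 'replica';

-- After the DTS migration task is released, restore the default:
ALTER DATABASE mydb SET session_replication_role = 'origin';
```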

Temporary tables and replication slot

DTS creates the following temporary tables in the source database to capture DDL statements, incremental table schemas, and heartbeat data. Do not delete these tables during migration:

public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance.

These tables are deleted automatically when the DTS instance is released.

DTS also creates a replication slot prefixed with dts_sync_ in the source database. This slot retains incremental logs for the last 15 minutes. If the migration task is released or fails, DTS clears the replication slot automatically. If a primary/secondary switchover occurs on an RDS for PostgreSQL instance, manually clear the replication slot on the secondary database.
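
If a slot must be inspected or cleared manually, for example after a switchover, the standard replication-slot functions apply. The slot name below is a placeholder:

```sql
-- List DTS replication slots on the source.
SELECT slot_name, active
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Drop an orphaned slot only after the migration task is released or has failed.
SELECT pg_drop_replication_slot('dts_sync_example');
```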

Other notes

  • One migration task migrates data from one database only. To migrate multiple databases, create a separate task for each.

  • DTS does not support migrating TimescaleDB extension tables or tables with cross-schema inheritance relationships.

  • Run migration during off-peak hours. Full data migration uses read and write resources on both source and destination databases, increasing database server load.

  • Full data migration uses concurrent INSERT operations, which cause table fragmentation in the destination. After full migration, destination table storage will be larger than source table storage.

  • For FLOAT or DOUBLE columns, DTS reads values using ROUND(COLUMN, PRECISION). Default precision is 38 digits for FLOAT and 308 digits for DOUBLE. Confirm this meets your requirements before starting.

  • DTS attempts to resume failed tasks for up to 7 days. Before switching business to the destination instance, stop or release the task. Alternatively, use the REVOKE command to remove write permissions from the DTS account on the destination to prevent source data from overwriting destination data if the task resumes.

  • DTS validates data content but not metadata such as Sequences. Validate metadata yourself.

  • After switching business to the destination, newly written sequence values will not continue from the maximum value in the source. Update sequence values in the destination before the switchover. See Update the sequence value in the destination database.

  • If a migration instance fails, DTS support attempts recovery within 8 hours. During recovery, DTS may restart the instance or adjust instance parameters (database parameters are not modified). See Modify instance parameters.

  • For partitioned tables, include both the parent table and all child partitions as migration objects. In PostgreSQL, the parent table stores no data directly — all data is in child partitions. Omitting either causes data inconsistency.
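
The parent/child relationships can be listed from pg_inherits to confirm that every partition is included as a migration object (a sketch; this also lists ordinary inheritance children):

```sql
-- Parent tables and their child partitions.
SELECT inhparent::regclass AS parent, inhrelid::regclass AS child
FROM pg_inherits
ORDER BY 1, 2;
```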

Prepare the source database

For Amazon RDS for PostgreSQL sources, see the Before you begin section of the "Migrate incremental data from an Amazon RDS for PostgreSQL instance to an ApsaraDB RDS for PostgreSQL instance" topic. For Amazon Aurora PostgreSQL sources, see Preparation 1: Edit the inbound rule of the Amazon Aurora PostgreSQL instance.

PostgreSQL 10.1 or later

  1. Log on to the server running the self-managed PostgreSQL database.

  2. Edit postgresql.conf. Set wal_level to logical, and set max_wal_senders and max_replication_slots to values greater than the sum of: the number of currently used replication slots plus the number of DTS instances using this database as a source. For example, if you currently have 2 replication slots in use and plan to run 3 DTS instances against this database, set both parameters to a value greater than 5 (for example, 10).

    Restart the PostgreSQL database after modifying postgresql.conf for the changes to take effect.
    # - Settings -
    
    wal_level = logical           # minimal, replica, or logical
                                  # (change requires restart)
    
    ......
    
    # - Sending Server(s) -
    
    max_wal_senders = 10          # max number of walsender processes
                                  # (change requires restart)
    #wal_keep_segments = 0        # in logfile segments, 16MB each; 0 disables
    #wal_sender_timeout = 60s     # in milliseconds; 0 disables
    
    max_replication_slots = 10    # max number of replication slots
                                  # (change requires restart)
  3. Add DTS server CIDR blocks to pg_hba.conf. Add only the CIDR blocks for the region where the destination database resides. See Add DTS server IP addresses to a whitelist.

    After modifying pg_hba.conf, run SELECT pg_reload_conf(); or restart the database for changes to take effect. If you have already set the IP address in pg_hba.conf to 0.0.0.0/0, skip this step. For details on pg_hba.conf, see The pg_hba.conf File.

  4. Create the target database and schema in the destination cluster to match the source objects you plan to migrate.
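
For example, on the destination cluster (database and schema names are placeholders):

```sql
-- Create the destination database, then create the matching schema inside it.
CREATE DATABASE mydb;
-- Connect to mydb before running the next statement:
CREATE SCHEMA myschema;
```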

PostgreSQL 9.4.8 to 10.0

  1. Download and install PostgreSQL from source.

    1. Download the PostgreSQL source code for your version from the PostgreSQL official website.

    2. Run the following commands to configure, compile, and install:

    Important

    - The OS version of PostgreSQL must match the GNU Compiler Collection (GCC) version.
    - If sudo ./configure fails with the error "readline library not found. Use --without-readline to disable readline support.", run sudo ./configure --without-readline instead.
    - If you use another installation method, compile the ali_decoding plug-in in a test environment with the same OS and GCC version.

    sudo ./configure
    sudo make
    sudo make install
  2. Download, compile, and install the ali_decoding plugin provided by DTS.

    1. Place the ali_decoding source files in the contrib/ali_decoding directory of the PostgreSQL source tree, with the following Makefile:

    # contrib/ali_decoding/Makefile
    MODULE_big = ali_decoding
    MODULES = ali_decoding
    OBJS    = ali_decoding.o

    DATA = ali_decoding--0.0.1.sql ali_decoding--unpackaged--0.0.1.sql

    EXTENSION = ali_decoding

    NAME = ali_decoding

    ifdef USE_PGXS
    PG_CONFIG = pg_config
    PGXS := $(shell $(PG_CONFIG) --pgxs)
    include $(PGXS)
    else
    subdir = contrib/ali_decoding
    top_builddir = ../..
    include $(top_builddir)/src/Makefile.global
    include $(top_srcdir)/contrib/contrib-global.mk
    endif

    2. In the contrib/ali_decoding directory, run the following commands to compile and install the plug-in, then copy the output files to the required directories:

    sudo make
    sudo make install

  3. Create the target database and schema in the destination cluster to match the source objects you plan to migrate.

Create a migration task

Step 1: Go to the Data Migration page

  1. Log on to the DTS console.

  2. In the left-side navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance will reside.

Alternatively, use the DMS console. Navigation may vary by mode and layout; see Simple mode and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top navigation bar, choose Data + AI > DTS (DTS) > Data Migration.

  3. In the drop-down list next to Data Migration Tasks, select the region where the migration instance will reside.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. Configure the source and destination databases.

    Warning

    After selecting source and destination instances, review the Limits displayed at the top of the page to avoid task configuration issues.

    Source database (self-managed PostgreSQL on ECS)

    • Task Name: DTS generates a name automatically. Specify a descriptive name to identify the task. Task names do not need to be unique.
    • Select Existing Connection: Select a registered instance from the drop-down list, or configure the connection manually.
    • Database Type: Select PostgreSQL.
    • Access Method: Select the deployment location of the source database. This example uses Self-managed Database on ECS. For other access methods, see Preparations.
    • Instance Region: The region where the source PostgreSQL database resides.
    • ECS Instance ID: The ID of the ECS instance hosting the source database.
    • Port Number: The service port of the source database. Default: 5432.
    • Database Name: The name of the database that contains the objects to migrate.
    • Database Account: The source database account. See Required permissions.
    • Database Password: The password for the database account.
    • Encryption: Select Non-encrypted or SSL-encrypted. For SSL, upload a CA Certificate (required), and optionally a Client Certificate, Private Key of Client Certificate, and Private Key Password of Client Certificate.

    Destination database (PolarDB for PostgreSQL)

    • Select Existing Connection: Select a registered instance from the drop-down list, or configure the connection manually.
    • Database Type: Select PolarDB for PostgreSQL.
    • Access Method: Select Alibaba Cloud Instance.
    • Instance Region: The region where the destination PolarDB for PostgreSQL cluster resides.
    • Instance ID: The ID of the destination PolarDB for PostgreSQL cluster.
    • Database Name: The name of the destination database that receives the migrated objects.
    • Database Account: The destination database account. See Required permissions.
    • Database Password: The password for the database account.
  3. Click Test Connectivity and Proceed.

    • DTS server CIDR blocks must be added to the security settings of the source and destination databases. See Add DTS server IP addresses to a whitelist.

    • If the source or destination database is self-managed and Access Method is not Alibaba Cloud Instance, click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 3: Select migration objects and configure advanced settings

  1. On the Configure Objects page, configure migration settings.

    • If Schema Migration is not selected, create the target database and table in the destination before starting, and enable object name mapping in Selected Objects.

    • If a table contains a SERIAL column and Schema Migration is selected, also select Sequence or migrate the entire schema.

    • Migration Types: Select the migration types for your scenario. For details, see Choose a migration type.
    • Processing Mode of Conflicting Tables: Precheck and Report Errors fails the precheck if the source and destination have tables with identical names; use object name mapping to resolve conflicts before migration. Ignore Errors and Proceed skips the conflict check; during full migration DTS keeps existing destination records, and during incremental migration DTS overwrites destination records. If schemas differ, only matching columns are migrated or the task fails.
    • Source Objects: Select schemas or tables to migrate and click the arrow icon to add them to Selected Objects. If you select tables, DTS does not migrate views, triggers, or stored procedures.
    • Selected Objects: Right-click an object to rename it (see Map the name of a single object), set WHERE filter conditions, or select the SQL operations to migrate. Click Batch Edit to rename multiple objects at once (see Map multiple object names at a time). Renaming an object may cause dependent objects to fail migration.
  2. Click Next: Advanced Settings and configure advanced options.

    • Dedicated Cluster for Task Scheduling: By default, DTS schedules tasks to a shared cluster. For improved stability, purchase a dedicated cluster. See What is a DTS dedicated cluster.
    • Retry Time for Failed Connections: How long DTS retries after a connection failure. Range: 10–1,440 minutes. Default: 720. Set it to more than 30 minutes. If reconnection succeeds within this window, DTS resumes the task; otherwise, the task stops. If multiple tasks share the same source or destination, the most recently set value applies. DTS charges continue during the retry period.
    • Retry Time for Other Issues: How long DTS retries after DDL or DML failures. Range: 1–1,440 minutes. Default: 10. Set it to more than 10 minutes. The value must be less than Retry Time for Failed Connections.
    • Enable Throttling for Full Data Migration: Limits the full migration speed to reduce load on the source and destination. Configure QPS (Queries Per Second) to the source database, RPS (Rows Per Second) of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.
    • Enable Throttling for Incremental Data Migration: Limits the incremental migration speed. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.
    • Environment Tag: Optional. Tag the instance with an environment label.
    • Configure ETL: Enable extract, transform, and load (ETL) to apply data processing rules during migration. Select Yes to enter statements in the code editor. See Configure ETL in a data migration or data synchronization task.
    • Monitoring and Alerting: Select Yes to receive alerts when the task fails or migration latency exceeds a threshold. Configure alert thresholds and notification contacts. See Configure monitoring and alerting when you create a DTS task.
  3. Click Next Step: Data Verification to configure a data verification task. See Configure a data verification task.

Step 4: Run precheck and purchase the instance

  1. Click Next: Save Task Settings and Precheck.

    • To preview API parameters before saving, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • DTS runs a precheck before the migration task starts. The task cannot start until the precheck passes.

    • For failed precheck items, click View Details to see the cause, troubleshoot, and then run the precheck again.

    • For precheck alerts: if an alert cannot be ignored, fix the issue and rerun the precheck. If an alert can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring alerts may lead to data inconsistency.
  2. When Success Rate reaches 100%, click Next: Purchase Instance.

  3. On the Purchase Instance page, configure the instance.

    • Resource Group: The resource group for the migration instance. Default: default resource group. See What is Resource Management?
    • Instance Class: The instance class determines the migration speed. Select a class based on your workload. See Instance classes of data migration instances.
  4. Select the Data Transmission Service (Pay-as-you-go) Service Terms check box.

  5. Click Buy and Start, then click OK in the confirmation dialog.

Step 5: Monitor migration progress

Go to the Data Migration page to monitor the task.

  • Full migration only: the task stops automatically when complete. The status shows Completed.

  • Full + incremental migration: the task runs continuously. The status shows Running. Incremental migration does not stop until you manually release it.

Verify data and cut over

Before switching business traffic to the destination cluster, verify data consistency and update sequences.

When to cut over (incremental migration)

Cut over when the source database has no new writes and the migration latency reaches 0. Verify the following before cutting over:

  • No new incoming changes: monitor the incremental migration latency in the DTS console. Ready when the latency reaches 0.

  • Sequences updated: see Update the sequence value in the destination database. Ready when the sequence values in the destination are updated.

Cutover steps

  1. Stop writes to the source database.

  2. Wait for the incremental migration latency to reach 0.

  3. Update sequence values in the destination database. See Update the sequence value in the destination database.

  4. Stop or release the DTS migration task to prevent it from resuming and overwriting destination data.

  5. Switch your application connection string to the destination cluster.

After the business switchover, newly written sequence values will not continue from the maximum value in the source database. Update sequence values in the destination before switching.
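
A destination sequence can be realigned with setval; the table, column, and sequence names below are placeholders:

```sql
-- Set the destination sequence to the current maximum key in the table.
SELECT setval('public.orders_id_seq',
              (SELECT COALESCE(max(id), 1) FROM public.orders));
```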

What's next