Data Transmission Service: Migrate data between PolarDB for PostgreSQL clusters

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data between PolarDB for PostgreSQL clusters with minimal or no downtime.

Prerequisites

Before you begin, make sure that the source and destination PolarDB for PostgreSQL clusters are created and that database accounts with the required permissions exist. See Database account permissions.

Billing

  • Schema migration and full data migration: the link configuration fee is free of charge, and data transfer is free of charge for this example.

  • Incremental data migration: the link configuration fee is paid. See Billing overview.

Supported migration objects

SCHEMA, TABLE — including PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, DATATYPE (built-in data types), and DEFAULT CONSTRAINT.

The features supported for this database type as a source vary depending on the destination database type. The console displays the applicable features.

Supported SQL operations for incremental migration

  • DML: INSERT, UPDATE, DELETE

  • DDL: CREATE TABLE, DROP TABLE, ALTER TABLE (RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT), TRUNCATE TABLE (PostgreSQL 11 and later only), CREATE INDEX ON TABLE
Important
  • DDL migration is only supported for tasks created after October 1, 2020.

  • For tasks created before May 12, 2023, create triggers and functions in the source database to capture DDL changes before configuring the task. See Use triggers and functions for PostgreSQL DDL incremental migration.

  • DDL migration requires a privileged account in the source database.

  • Incremental migration does not support the BIT data type.

  • DDL statements with CASCADE or RESTRICT clauses are not supported.

  • DDL statements executed in sessions where SET session_replication_role = replica is set are not supported.

  • DDL statements executed by calling functions are not supported.

  • If a single transaction contains both DML and DDL statements, the DDL statement is not migrated.

  • If a single transaction contains DDL statements for objects not selected for migration, those DDL statements are not migrated.

Database account permissions

  • Source PolarDB for PostgreSQL: a privileged account is required. See Create a database account.

  • Destination PolarDB for PostgreSQL: a privileged account or superuser is required. See Create a database account.

Limitations

Review the following limitations before you start the migration task.

Source database:

  • Tables to be migrated must have a primary key or a non-null unique index.

  • Long-running transactions during incremental migration cause write-ahead log (WAL) accumulation, which may exhaust disk space on the source database.

  • The source cluster must support and enable Logical Replication Slot Failover. If the cluster does not support it (for example, the Database Engine is PostgreSQL 14), a high-availability (HA) switchover in the source database may cause the migration instance to fail unrecoverably.

  • During schema migration and full migration, do not perform DDL operations that change the database or table structure — the migration task fails if you do.

  • During full-only migration, do not write new data to the source instance, or data inconsistency occurs. To maintain real-time consistency, run schema migration, full data migration, and incremental data migration together.

  • If a single incremental change exceeds 256 MB, the migration instance fails and cannot be recovered. You must reconfigure the migration instance.
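
To find tables that would violate the primary-key requirement before you configure the task, you can run a catalog query such as the following in the source database. This is a sketch: it flags tables with no primary key and no unique index at all, and does not check whether the unique-index columns are non-null.

```sql
-- List regular tables in the target schema that have neither a primary key
-- nor any unique index; adjust 'public' to the schema you plan to migrate.
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.oid
      AND (i.indisprimary OR i.indisunique)
  );
```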

Migration scope:

  • A single migration task can migrate only one database. Configure a separate task for each database you need to migrate.

  • DTS does not support: TimescaleDB extension tables, tables with cross-schema inheritance, or tables with unique indexes based on expressions.

  • Schemas created by installing plugins cannot be migrated and are not visible in the console when configuring a task.

Incremental migration requirements:

  • Run ALTER TABLE schema.table REPLICA IDENTITY FULL; on all tables to be migrated in the source database before writing data to them. Do not perform table lock operations while running this command — it can cause deadlocks. DTS runs this command automatically during instance initialization if you skip the precheck. Run this command in the following situations:

    • When the instance runs for the first time.

    • When the migration object is a schema and a new table is created in the schema, or an existing table is rebuilt using RENAME.

    Replace schema and table with the actual names. Run this command during off-peak hours.
  • If a table contains a SERIAL field and you select Schema Migration for Migration Types, also select Sequence or migrate the entire schema. Otherwise the migration instance may fail.

  • DTS creates a replication slot with the prefix dts_sync_ in the source database to replicate data, retaining incremental logs from the last 15 minutes. DTS attempts to clean up this slot automatically when the migration fails or the instance is released.

    If you change the source database account password or remove DTS IP addresses from the source database whitelist during migration, the replication slot cannot be cleaned up automatically. Manually clean up the slot to prevent it from accumulating and consuming disk space, which can make the source database unavailable. If a failover occurs on the source database, log on to the secondary database to clean up the slot manually.
  • For tasks with foreign keys, triggers, or event triggers, DTS sets session_replication_role to replica at the session level if the destination database account is a privileged account or has superuser permissions. If the destination account does not have these permissions, set session_replication_role to replica manually in the destination database. After the migration task is released, change the parameter back to origin.
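
The commands called out in this list can be sketched in SQL as follows; the schema, table, and slot names are placeholders:

```sql
-- Source database: set full replica identity on each table to be migrated
-- (run during off-peak hours, as noted above).
ALTER TABLE myschema.mytable REPLICA IDENTITY FULL;

-- Source database: inspect DTS replication slots and the WAL they retain.
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Source database: manually drop a leftover slot (fails while the slot is
-- still active; the slot name here is a placeholder).
SELECT pg_drop_replication_slot('dts_sync_example');

-- Destination database: if the DTS account lacks privileged or superuser
-- permissions, disable constraint checks for the session manually, and set
-- the parameter back to 'origin' after the task is released.
SET session_replication_role = replica;
```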

Destination database:

  • Full data migration runs concurrent INSERT operations, which causes table fragmentation. The table space in the destination database will be larger than in the source database after migration.

  • DTS validates data content but does not validate metadata such as sequences. Validate metadata yourself.

  • After cutover to the destination database, sequences do not start from the maximum sequence value of the source database. Update the sequence value in the destination database before cutover. See Update the sequence value of the destination database.
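
For example, the pre-cutover sequence update can be done with setval; the table public.orders and its SERIAL column id are hypothetical names:

```sql
-- Destination database: align the sequence backing a SERIAL column with
-- the current maximum value of that column before cutover.
SELECT setval(
  pg_get_serial_sequence('public.orders', 'id'),
  (SELECT COALESCE(MAX(id), 1) FROM public.orders)
);
```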

Partitioned tables:

  • Include both the parent table and all child tables as migration objects. In PostgreSQL, parent tables do not store data directly — all data lives in child tables. If child tables are not included, data inconsistency occurs.

Task recovery:

  • DTS attempts to auto-recover failed tasks within seven days. Before you switch your business to the destination instance, end or release the task, or use the revoke command to revoke the write permissions of the DTS account on the destination instance. This prevents source data from overwriting destination data after an auto-recovery.

  • If a task fails, DTS support staff attempt to restore it within eight hours. They may restart the task or adjust DTS task parameters (not database parameters). See Modify instance parameters for parameters that may be adjusted.
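
Revoking the DTS account's write permissions before cutover can be sketched as follows; the account name dts_user and the schema public are placeholders:

```sql
-- Destination database: remove DML privileges from the DTS account so an
-- auto-recovered task cannot overwrite data after cutover.
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_user;
```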

Temporary tables:

  • DTS creates the following temporary tables in the source database to capture DDL statements, incremental table structures, and heartbeat information. Do not delete them during migration — the DTS task will fail. DTS deletes them automatically after the instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, public.aliyun_dts_instance
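
To check which of these helper tables currently exist in the source database, you can query the pg_tables view:

```sql
-- Source database: list the DTS helper tables in the public schema.
SELECT schemaname, tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND (tablename LIKE 'dts_%' OR tablename = 'aliyun_dts_instance');
```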

Schema migration behavior:

  • DTS migrates foreign keys from the source database to the destination database during schema migration.

  • During full and incremental migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. If cascade update or delete operations occur in the source database while the task is running, data inconsistency may occur.

Create a migration task

Step 1: Navigate to data migration

Go to the data migration task list using one of the following methods.

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance is located.

From the DMS console

Note

The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance is located.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. Configure the source and destination database connections.

    Configure the following parameters:

    • Task Name: DTS generates a task name automatically. Specify a descriptive name for easy identification. The name does not need to be unique.

    • Source Database:
      • Select Existing Connection: Select a database instance already added to the system. The connection details are filled in automatically. If no registered instance exists, or you do not want to use one, fill in the connection details manually.
      • Database Type: Select PolarDB for PostgreSQL.
      • Access Method: Select Alibaba Cloud Instance.
      • Instance Region: Select the region where the source cluster resides.
      • Replicate Data Across Alibaba Cloud Accounts: Select No for migration within the same Alibaba Cloud account.
      • Instance ID: Select the ID of the source PolarDB for PostgreSQL cluster.
      • Database Name: Enter the name of the source database containing the objects to migrate.
      • Database Account: Enter the database account. See Database account permissions for required permissions.
      • Database Password: Enter the password for the database account.

    • Destination Database:
      • Select Existing Connection: Select a registered database instance, or fill in connection details manually.
      • Database Type: Select PolarDB for PostgreSQL.
      • Access Method: Select Alibaba Cloud Instance.
      • Instance Region: Select the region where the destination cluster resides.
      • Instance ID: Select the ID of the destination PolarDB for PostgreSQL cluster.
      • Database Name: Enter the name of the destination database that will receive the data.
      • Database Account: Enter the database account for the destination cluster.
      • Database Password: Enter the password for the destination database account.
  3. Click Test Connectivity and Proceed.

    Make sure DTS server IP address ranges are added to the security settings of both source and destination databases. See Add DTS server IP addresses to a whitelist.

Step 3: Configure migration objects

On the Configure Objects page, configure what to migrate.

  • Migration Types: Select the migration types based on your scenario. Schema Migration only copies the schema. Schema Migration + Full Data Migration copies the schema and all existing data, suitable for a one-time migration with planned downtime. Schema Migration + Full Data Migration + Incremental Data Migration copies the schema, all existing data, and ongoing changes; use this for no-downtime migration.

  • Processing Mode of Conflicting Tables: With Precheck and Report Errors, DTS checks for tables with the same names in the destination database. If conflicts exist, the precheck fails and the task does not start. To resolve conflicts without deleting or renaming destination tables, use object name mapping. With Ignore Errors and Proceed, DTS skips the conflict check. During full migration, conflicting records in the destination are kept. During incremental migration, source records overwrite destination records. If table schemas are inconsistent, only some columns may migrate, or migration may fail.

  • Capitalization of Object Names in Destination Instance: Set the case sensitivity policy for migrated object names (databases, tables, columns). The default is DTS default policy. See Case sensitivity of object names.

  • Source Objects: Click objects in the Source Objects box and click the right arrow to move them to the Selected Objects box. You can select at the schema or table level. Selecting tables does not migrate views, triggers, or stored procedures. If a table has a SERIAL field and you select Schema Migration, also select Sequence or migrate the entire schema.

  • Selected Objects: To rename a migrated object, right-click it in Selected Objects. See Object name mapping. To rename multiple objects at once, click Batch Edit. See Map multiple object names. To filter rows, right-click a table and set a WHERE clause. See Set a filter condition.

Note: If you do not select Schema Migration, you must ensure that a database and tables to receive the data already exist in the destination database. You can also use the object name mapping feature in the Selected Objects box as needed. Using object name mapping may cause dependent objects to fail to migrate.

Step 4: Configure advanced settings

Click Next: Advanced Settings and configure the following parameters.

  • Dedicated Cluster for Task Scheduling: DTS schedules tasks on a shared cluster by default. For more stable tasks, purchase a dedicated cluster.

  • Retry Time for Failed Connections: If the source or destination database connection fails after the task starts, DTS retries immediately. Default: 720 minutes. Range: 10–1,440 minutes. We recommend setting this to more than 30 minutes. If DTS reconnects within the retry period, the task resumes automatically.

  • Retry Time for Other Issues: If a non-connectivity issue (such as a DDL or DML execution error) occurs, DTS retries immediately. Default: 10 minutes. Range: 1–1,440 minutes. We recommend setting this to more than 10 minutes. Must be less than Retry Time for Failed Connections.

  • Enable Throttling for Full Data Migration: Limit the read/write load on source and destination databases during full migration. Set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. You can also adjust the full migration speed after the task starts.

  • Enable Throttling for Incremental Data Migration: Limit the load during incremental migration. Set RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. You can also adjust the incremental migration speed after the task starts.

  • Environment Tag: Optionally tag the instance to identify its environment.

  • Configure ETL: Enable the extract, transform, and load (ETL) feature to transform data during migration. See What is ETL? and Configure ETL.

  • Monitoring and Alerting: Configure alerts to receive notifications when migration fails or latency exceeds a threshold. See Configure monitoring and alerting.

Step 5: Configure data validation (optional)

Click Next: Data Validation to configure a data validation task. See Configure data validation.

Step 6: Run the precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters.

DTS runs a precheck before starting the migration task. If the precheck fails:

  • For failed items: click View Details, fix the issue, and run the precheck again.

  • For warnings that cannot be ignored: click View Details, fix the issue, and run the precheck again.

  • For warnings that can be ignored: click Confirm Alert Details > Ignore > OK > Precheck Again to skip the item. Ignoring warnings may cause data inconsistency.

Step 7: Purchase and start the instance

  1. When the Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, configure the instance class.

    • Resource Group Settings: Select the resource group for the instance. Default: default resource group. See What is Resource Management?

    • Instance Class: Select a specification based on your performance requirements. The link specification affects migration speed. See Data migration link specifications.
  3. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Verify the migration

Monitor task progress on the Data Migration Tasks page:

  • Tasks without incremental migration stop automatically after full migration completes. The Status changes to Completed.

  • Tasks with incremental migration continue running. The Status shows Running while incremental migration is active.

What's next

Before cutting over to the destination database:

  • Update the sequence value in the destination database. After cutover, sequences do not resume from the maximum value of the source database. See Update the sequence value of the destination database.

  • End or release the DTS task, or revoke the write permissions of the DTS account on the destination instance using the revoke command. This prevents auto-recovered tasks from overwriting destination data.