
Data Transmission Service: Migrate data between PolarDB for PostgreSQL (Compatible with Oracle) clusters

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data between PolarDB for PostgreSQL (Compatible with Oracle) clusters. DTS supports schema migration, full data migration, and incremental data migration, so you can migrate without taking your applications offline.

Prerequisites

Before you begin, make sure you have:

  • A source and a destination PolarDB for PostgreSQL (Compatible with Oracle) cluster.

  • Database accounts for both clusters that meet the requirements in Database account permissions.

Choose a migration strategy

Select your migration type before configuring the task:

| Goal | Migration types to select | Consideration |
| --- | --- | --- |
| Migrate existing data with a maintenance window | Schema migration + Full data migration | Do not write new data to the source during migration. |
| Migrate with no downtime | Schema migration + Full data migration + Incremental data migration | DTS applies changes to the destination in real time. Cut over when ready. |

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged when Access Method is Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when Access Method is Public IP Address. See Billing overview. |

Migration types

| Migration type | Description |
| --- | --- |
| Schema migration | Migrates schema definitions to the destination database. Supported objects: tables, views, synonyms, triggers, stored procedures, stored functions, packages, and custom types. Note: Triggers are not supported. Delete triggers in the source database before migration to avoid data inconsistency. See How do I configure a synchronization or migration job when the source database contains triggers? |
| Full data migration | Migrates all existing data from source objects to the destination database. Do not perform DDL operations on migration objects until schema migration and full data migration are complete. |
| Incremental data migration | After full data migration, polls and captures write-ahead logs (WAL) from the source to continuously apply changes to the destination, keeping your applications running during migration. |

Supported objects

Schema and table objects

Includes: PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, built-in data types, and DEFAULT CONSTRAINT.
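As an illustration, a table that uses only the constraint types listed above migrates cleanly during schema migration. This is a hypothetical sketch; the table and column names are invented:

```sql
-- Hypothetical table: every feature used here is in the supported list above.
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,                    -- PRIMARY KEY
    order_no    VARCHAR2(32) UNIQUE,                    -- UNIQUE KEY
    customer_id INTEGER REFERENCES customers (id),      -- FOREIGN KEY
    status      VARCHAR2(16) DEFAULT 'NEW',             -- DEFAULT CONSTRAINT
    created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP     -- built-in data types
);
```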

Other objects

Support for VIEW, PROCEDURE (PostgreSQL 11 and later), and other objects varies by destination database type. The options shown in the console are definitive.

Supported incremental SQL operations

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | Available for tasks created after October 1, 2020. Requires a privileged source account. |

Supported DDL statements (privileged source account required):

  • CREATE TABLE, DROP TABLE

  • ALTER TABLE: RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT

  • TRUNCATE TABLE

  • CREATE INDEX ON TABLE

Important
  • Additional DDL options such as CASCADE or RESTRICT are not migrated.

  • DDL in sessions using SET session_replication_role = replica is not supported.

  • DDL executed by calling functions is not supported.

  • If a single commit contains both DML and DDL statements, the DDL is not migrated.

  • DDL for objects outside the migration scope is not migrated.

  • For data migration tasks created before May 12, 2023, you must create triggers and functions in the source database to capture DDL information before configuring the migration task. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL.

  • Data of the bit type is not supported in incremental data migration.


Database account permissions

| Database | Required permission |
| --- | --- |
| Source PolarDB for PostgreSQL (Compatible with Oracle) cluster | Privileged account |
| Destination PolarDB for PostgreSQL (Compatible with Oracle) cluster | Database Owner (specified when the database is created) |

To create an account and grant permissions, see Create an account.
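If you manage destination accounts in SQL rather than through the console, the owner requirement might be satisfied with a sketch like the following. The account name, password, and database name are hypothetical; the console workflow in Create an account is the documented path:

```sql
-- Hypothetical sketch: create a destination account and make it the database owner.
CREATE USER dts_migrator WITH PASSWORD 'your_password';
ALTER DATABASE migratedb OWNER TO dts_migrator;
```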

Limitations

Source database limits

| Limit | Behavior and consequence |
| --- | --- |
| Bandwidth | The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Primary key or unique constraint | Tables must have a primary key or UNIQUE constraint with unique fields. If this requirement is not met, duplicate data may appear in the destination. |
| Table-level migration with column mapping | A single task supports a maximum of 1,000 tables. If you exceed this limit, split the tables into multiple tasks, or migrate the entire database in one task. |
| WAL retention (incremental migration) | WAL must be enabled. For incremental-only tasks, retain WAL for more than 24 hours. For tasks combining full and incremental migration, retain WAL for at least 7 days. You can set the longer retention period after full migration completes. If WAL is not retained long enough, the task may fail or data loss may occur; this scenario is not covered by the DTS Service-Level Agreement (SLA). |
| Logical Replication Slot Failover | The source cluster must support and enable Logical Replication Slot Failover. If the Database Engine is Oracle syntax compatible 2.0, a failover in the source may cause the migration instance to fail unrecoverably. |
| Incremental change size limit | If a single incremental change exceeds 256 MB, the migration instance may fail unrecoverably and must be reconfigured. |
| Long-running transactions | Long-running transactions in the source prevent WAL from being cleared before the transaction commits. This can exhaust disk space in the source database. |
| DDL operations during migration | Do not perform DDL operations during schema migration or full data migration. If incremental migration is not selected, do not write new data to the source. |
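You can check some of these requirements up front with standard PostgreSQL catalog queries. This sketch lists regular tables that lack a primary key or unique index, and shows the current WAL setting; the schema filter is an assumption you should adjust for your cluster:

```sql
-- List regular tables that have neither a primary key nor a unique index.
SELECT n.nspname AS schema, c.relname AS table
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
      SELECT 1 FROM pg_index i
      WHERE i.indrelid = c.oid
        AND (i.indisprimary OR i.indisunique)
  );

-- Incremental migration reads changes through logical decoding from WAL.
SHOW wal_level;   -- expected: logical
```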

General limits

| Limit | Details |
| --- | --- |
| One database per task | A single task migrates one database. Configure separate tasks for each additional database. |
| Unsupported object types | TimescaleDB extension tables, tables with cross-schema inheritance, and tables with unique indexes based on expressions are not supported. |
| Extension schemas | Schemas created by installing extensions cannot be migrated and are not retrievable in the console. |
| SERIAL fields | If a table contains a SERIAL field and Schema Migration is selected, also select Sequence or migrate the entire schema. Otherwise, the migration instance may fail. |
| REPLICA IDENTITY for incremental migration | Before writing data to tables in an incremental migration task, run the following command on each table in the source database: ALTER TABLE schema.table REPLICA IDENTITY FULL;. Run it during off-peak hours and avoid table lock operations to prevent deadlocks. If you skip the related precheck item, DTS runs this command automatically during initialization. Re-run it in these cases: the instance runs for the first time, a new table is created in a schema migration object, or an existing table is rebuilt using RENAME. |
| DTS temporary tables | DTS creates the following temporary tables in the source database. Do not delete them during migration; they are deleted automatically when the DTS instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance. |
| Heartbeat table | DTS adds dts_postgres_heartbeat to the source database to track incremental migration latency. |
| Replication slot cleanup | DTS creates a replication slot prefixed dts_sync_ in the source database and uses it to obtain incremental logs from the last 15 minutes. DTS attempts to clean up the slot when the migration fails or the instance is released. If the source account password changes or the DTS IP address is removed from the whitelist, the automatic cleanup fails; clean up the slot manually to prevent disk space accumulation. After a failover, log on to the secondary database to clean up the slot manually. |
| Full migration performance | Full data migration uses concurrent INSERT operations, so the destination table storage space may be larger than in the source after migration. Run migrations during off-peak hours to reduce load on source and destination databases. |
| FLOAT and DOUBLE precision | DTS reads FLOAT and DOUBLE values using ROUND(COLUMN,PRECISION). If precision is not explicitly defined, DTS uses 38 digits for FLOAT and 308 digits for DOUBLE. Verify that this meets your requirements before migrating. |
| DTS auto-recovery | DTS attempts to recover failed tasks for up to seven days. Before switching traffic to the destination, end or release the task, or revoke DTS write permissions using the revoke command; otherwise, DTS may overwrite destination data if the task auto-recovers. |
| Sequence validation | DTS validates data content but not metadata such as sequences. Validate sequence metadata manually. |
| Sequence values after switchover | After switching to the destination, new sequences do not start from the maximum source sequence value. Update the sequence value in the destination before the switchover. See Update the sequence value of the destination database. |
| session_replication_role | For tasks involving foreign keys, triggers, or event triggers, DTS sets session_replication_role to replica at the session level if the destination account is a privileged account or superuser. If the destination account does not have these permissions, set session_replication_role to replica manually in the destination. Cascade update or delete operations during this period may cause data inconsistency. After the DTS task is released, set session_replication_role back to origin. |
| DTS support staff restoration | If a task fails, DTS support staff attempt to restore it within eight hours. Only DTS task parameters are modified, not database parameters. |
| Partitioned tables | Include both the parent table and all child partitions as migration objects. The parent table does not store data directly; all data is in child partitions. Omitting child partitions causes data inconsistency between source and destination. |
| Foreign keys and cascade operations | During schema migration, DTS migrates foreign keys to the destination. During full and incremental migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Cascade update or delete operations in the source during this period may cause data inconsistency. |
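Two of the limits above involve SQL that you run yourself in the source database. This sketch generates the REPLICA IDENTITY statements for the tables in one schema, then finds and drops a leftover DTS replication slot; the schema name 'app' and the slot name are hypothetical placeholders:

```sql
-- Generate REPLICA IDENTITY FULL statements for all tables in a schema
-- ('app' is a hypothetical schema name); run the generated output off-peak.
SELECT format('ALTER TABLE %I.%I REPLICA IDENTITY FULL;', schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'app';

-- Find leftover DTS replication slots after a failed or released task ...
SELECT slot_name, active
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- ... and drop one manually (the slot name is hypothetical).
SELECT pg_drop_replication_slot('dts_sync_example');
```

Dropping the slot stops WAL from accumulating on disk once DTS can no longer clean it up automatically.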

Create a migration task

Step 1: Go to the migration task list

From the DTS console:

  1. Log on to the DTS console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance is located.

From the DMS console:

Steps may vary based on your DMS console mode. See Simple mode console and Customize the layout and style of the DMS console.
  1. Log on to the DMS console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. On the task configuration page, configure the source and destination databases.

Warning

Read the limits shown at the top of the page before selecting source and destination instances. Ignoring them may cause task failure or data inconsistency.

Task settings:

| Parameter | Description |
| --- | --- |
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. Names do not need to be unique. |

Source database settings:

| Parameter | Description |
| --- | --- |
| Select Existing Connection | Select a previously registered database instance from the drop-down list to populate the fields automatically. If no registered instance is available, configure the fields manually. In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select PolarDB (Compatible with Oracle). |
| Connection Type | Select Public IP. |
| Instance Region | Select the region of the source cluster. |
| Select Multi-source Data | Single IP:Port: enter the DNS or IP Address and Port of the primary node. Multiple IP:Port: enter multiple node addresses in the Multi-source IP:Port field. DTS automatically switches to the new primary node after a primary/secondary switchover, which is suitable for disaster recovery scenarios. This example uses Single IP:Port. |
| DNS or IP Address | Enter the endpoint of the primary node. Use the Ping command to resolve the IP address from the direct connection endpoint. |
| Port | Enter the service port. Default: 1521. The port must be accessible over the Internet in this example. |
| Multi-source IP:Port | Enter multiple node IP addresses and ports, separated by commas (,). |
| Oracle Type | Non-RAC Instance: also enter the SID. RAC or PDB Instance: also enter the ServiceName. This example uses Non-RAC Instance. |
| Database Name | Enter the name of the database containing the migration objects in the source cluster. |
| Database Account | Enter the database account. For permission requirements, see Database account permissions. |
| Database Password | Enter the password for the database account. |

Destination database settings:

| Parameter | Description |
| --- | --- |
| Select Existing Connection | Select a previously registered database instance from the drop-down list to populate the fields automatically. If no registered instance is available, configure the fields manually. In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select PolarDB (Compatible with Oracle). |
| Connection Type | Select Public IP. |
| Instance Region | Select the region of the destination cluster. |
| Select Multi-source Data | Single IP:Port: enter the DNS or IP Address and Port of the primary node. Multiple IP:Port: enter multiple node addresses in the Multi-source IP:Port field. DTS automatically switches to the new primary node after a primary/secondary switchover. This example uses Single IP:Port. |
| DNS or IP Address | Enter the endpoint of the primary node. Use the Ping command to resolve the IP address from the direct connection endpoint. |
| Port | Enter the service port. Default: 1521. |
| Multi-source IP:Port | Enter multiple node IP addresses and ports, separated by commas (,). |
| Database Name | Enter the name of the database containing the migration objects in the destination cluster. |
| Database Account | Enter the database account. For permission requirements, see Database account permissions. |
| Database Password | Enter the password for the database account. |
  3. Click Test Connectivity and Proceed.

Add the CIDR blocks of DTS servers to the security settings of both the source and destination databases. See Add the IP address whitelist of DTS servers.
If the source or destination is a self-managed database (Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.

Step 3: Configure migration objects

On the Configure Objects page, set the following:

| Configuration | Description |
| --- | --- |
| Migration Types | Select Schema Migration and Full Data Migration for a full migration. Add Incremental Data Migration for zero-downtime migration. If Schema Migration is not selected, make sure the destination already has the required databases and tables. If Incremental Data Migration is not selected, do not write new data to the source during migration. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: checks for duplicate table names in the destination before starting; an error is reported if duplicates exist. If a table in the destination cannot be deleted or renamed, use the object name mapping feature to rename it. Ignore Errors and Proceed: skips the check. During full migration, conflicting rows in the destination are kept; during incremental migration, source rows overwrite destination rows. If schemas are inconsistent, partial data migration or failure may result. |
| Source Objects | Click the objects to migrate in the Source Objects box, then click the right arrow to move them to Selected Objects. Objects can be selected at the schema or table level. If you select tables, other objects (views, triggers, stored procedures) are not migrated. If a table contains a SERIAL field and Schema Migration is selected, also select Sequence or migrate the entire schema. |
| Selected Objects | Right-click an object to rename it. See Individual table column mapping. Click Batch Edit to rename multiple objects at once. See Map multiple object names at a time. Renaming objects may cause dependent objects to fail migration. Right-click a table to set a WHERE clause filter. See Set filter conditions. Right-click a migration object to select specific SQL operations to migrate. |

Click Next: Advanced Settings.

Step 4: Configure advanced settings

| Configuration | Description |
| --- | --- |
| Dedicated Cluster for Task Scheduling | By default, tasks run on a shared cluster. To use a dedicated cluster for more stable performance, purchase one separately. |
| Retry Time for Failed Connections | Default: 720 minutes (range: 10–1440 minutes). Set to at least 30 minutes. DTS resumes the task automatically if reconnection succeeds within this period. For multiple DTS instances sharing the same source or destination, the retry time is determined by the most recently created task. Charges apply during the retry period. |
| Retry Time for Other Issues | Default: 10 minutes (range: 1–1440 minutes). Applies to non-connectivity issues such as DDL or DML exceptions. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits full migration throughput by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. Throttling settings can also be adjusted after the instance starts running. |
| Enable Throttling for Incremental Data Migration | Limits incremental migration throughput by setting RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | Optional. Select a tag to identify the instance. |
| Configure ETL | Select Yes to enable extract, transform, and load (ETL) and enter data processing statements. See Configure ETL in a data migration or data synchronization task and What is ETL? |
| Monitoring and Alerting | Select Yes to configure an alert threshold and notification settings. DTS sends alerts when a migration fails or latency exceeds the threshold. |

Click Next: Data Validation to configure a data validation task. See Configure data validation.

Step 5: Save settings and run a precheck

Click Next: Save Task Settings and Precheck.

To preview the API parameters for this configuration, hover over the Next: Save Task Settings and Precheck button and click Preview OpenAPI parameters.

DTS runs a precheck before starting the task. The task starts only after the precheck passes.

  • If the precheck fails, click View Details next to the failed item, fix the issue, then run the precheck again.

  • If a warning appears for a non-ignorable item, click View Details, fix the issue, then run the precheck again.

  • If a warning appears for an ignorable item, click Confirm Alert Details > Ignore > OK > Precheck Again to skip it. Ignoring warnings may cause data inconsistency.

Step 6: Purchase the instance

  1. When Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, configure the instance:

| Parameter | Description |
| --- | --- |
| Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Select a link specification based on your migration requirements. Higher specifications support faster migration speeds. See Data migration link specifications. |
  3. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Monitor migration progress

After the task starts, view its progress on the Data Migration Tasks list page.

  • Full migration only: The task stops automatically after full migration completes. Status changes to Completed.

  • Incremental migration included: The task continues running. Status remains Running until you manually stop it.

What to do before cutting over

Before switching your applications to the destination database:

  • End or release the DTS task, or revoke the write permissions of the DTS account using the revoke command. This prevents DTS from overwriting destination data if the task auto-recovers within seven days.

  • Update the sequence value in the destination database. See Update the sequence value of the destination database.

  • Validate sequence metadata manually, as DTS does not validate metadata such as sequences.

  • If session_replication_role was set to replica in the destination during migration, set it back to origin after the DTS task is released.
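The checklist above can be sketched in SQL. The account, schema, table, and sequence names below are hypothetical; run the statements against the destination database:

```sql
-- 1. Revoke DTS write permissions so an auto-recovered task cannot overwrite data
--    (dts_account and the app schema are hypothetical names).
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app FROM dts_account;

-- 2. Advance a destination sequence past the current maximum value of its column
--    (sequence and table names are hypothetical).
SELECT setval('app.orders_order_id_seq', (SELECT max(order_id) FROM app.orders));

-- 3. Restore normal trigger and foreign key behavior if it was changed during migration.
SET session_replication_role = 'origin';
```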