Use Data Transmission Service (DTS) to migrate data between PolarDB for PostgreSQL (Compatible with Oracle) clusters. DTS supports schema migration, full data migration, and incremental data migration, so you can migrate without taking your applications offline.
Prerequisites
Before you begin, make sure you have:
Created both source and destination PolarDB for PostgreSQL (Compatible with Oracle) clusters. See Create a PolarDB for PostgreSQL (Compatible with Oracle) cluster.
Set wal_level to logical in the source cluster to enable logical replication. See Set cluster parameters.
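As a quick sanity check, you can confirm the prerequisite directly from a SQL session on the source cluster. This is a minimal sketch; on PolarDB the parameter itself is changed through the console (see Set cluster parameters), not with ALTER SYSTEM.

```sql
-- Run on the source cluster to verify that logical replication is enabled.
SHOW wal_level;   -- should return: logical
```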
Choose a migration strategy
Select your migration type before configuring the task:
| Goal | Migration types to select | Considerations |
|---|---|---|
| Migrate existing data with a maintenance window | Schema migration + Full data migration | Do not write new data to the source during migration. |
| Migrate with no downtime | Schema migration + Full data migration + Incremental data migration | DTS applies changes to the destination in real time. Cut over when ready. |
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged when Access Method is Public IP Address. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Migration types
| Migration type | Description |
|---|---|
| Schema migration | Migrates schema definitions to the destination database. Supported objects: tables, views, synonyms, triggers, stored procedures, stored functions, packages, and custom types. Note: DTS does not support migrating triggers. Delete triggers from the source database before migration to avoid data inconsistency. See How do I configure a synchronization or migration job when the source database contains triggers? |
| Full data migration | Migrates all existing data from source objects to the destination database. Do not perform DDL operations on migration objects until schema migration and full data migration are complete. |
| Incremental data migration | After full data migration, polls and captures redo logs from the source to continuously apply changes to the destination—keeping your applications running during migration. |
Supported objects
Schema and table objects
Includes: PRIMARY KEY, UNIQUE KEY, FOREIGN KEY, built-in data types, and DEFAULT CONSTRAINT.
Other objects
Support for VIEW, PROCEDURE (PostgreSQL 11 and later), and other objects varies by destination database type. The options shown in the console are definitive.
Supported incremental SQL operations
| Operation type | SQL statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | Available for tasks created after October 1, 2020. Requires a privileged source account. |
Supported DDL statements (privileged source account required):
CREATE TABLE, DROP TABLE
ALTER TABLE: RENAME TABLE, ADD COLUMN, ADD COLUMN DEFAULT, ALTER COLUMN TYPE, DROP COLUMN, ADD CONSTRAINT, ADD CONSTRAINT CHECK, ALTER COLUMN DROP DEFAULT
TRUNCATE TABLE
CREATE INDEX ON TABLE
Additional DDL options such as CASCADE or RESTRICT are not migrated.
DDL executed in sessions that run SET session_replication_role = replica is not supported.
DDL executed by calling functions is not supported.
If a single commit contains both DML and DDL statements, the DDL is not migrated.
DDL for objects outside the migration scope is not migrated.
For data migration tasks created before May 12, 2023, you must create triggers and functions in the source database to capture DDL information before configuring the migration task. For more information, see Use triggers and functions to implement incremental DDL migration for PostgreSQL.
Data of the bit type is not supported in incremental data migration.
Database account permissions
| Database | Required permission |
|---|---|
| Source PolarDB for PostgreSQL (Compatible with Oracle) cluster | Privileged account |
| Destination PolarDB for PostgreSQL (Compatible with Oracle) cluster | Database Owner — specified when the database is created |
To create an account and grant permissions, see Create an account.
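For the destination, the required Database Owner permission can also be verified or assigned from a SQL session. This is a minimal sketch assuming a hypothetical account dts_user and database migrate_db that already exist; the console workflow described in Create an account is the supported path.

```sql
-- Make the account the owner of the destination database
-- (hypothetical names: dts_user, migrate_db).
ALTER DATABASE migrate_db OWNER TO dts_user;

-- Verify the owner assignment.
SELECT datname, pg_get_userbyid(datdba) AS owner
FROM pg_database
WHERE datname = 'migrate_db';
```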
Limitations
Source database limits
| Limit | Behavior and consequence |
|---|---|
| Bandwidth | The source database server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| Primary key or unique constraint | Tables must have a primary key or UNIQUE constraint with unique fields. If this requirement is not met, duplicate data may appear in the destination. |
| Table-level migration with column mapping | A single task supports a maximum of 1,000 tables. If you exceed this limit, split the tables into multiple tasks, or migrate the entire database in one task. |
| WAL retention (incremental migration) | WAL must be enabled. For incremental-only tasks, retain WAL for more than 24 hours. For tasks combining full and incremental migration, retain WAL for at least 7 days. You can set the longer retention period after full migration completes. If WAL is not retained long enough, the task may fail or data loss may occur—this scenario is not covered by the DTS Service-Level Agreement (SLA). |
| Logical Replication Slot Failover | The source cluster must support and enable Logical Replication Slot Failover. If the Database Engine is Oracle syntax compatible 2.0, a failover in the source may cause the migration instance to fail unrecoverably. |
| Incremental change size limit | If a single incremental change exceeds 256 MB, the migration instance may fail unrecoverably and must be reconfigured. |
| Long-running transactions | Long-running transactions in the source prevent WAL from being cleared before the transaction commits. This can exhaust disk space in the source database. |
| DDL operations during migration | Do not perform DDL operations during schema migration or full data migration. If incremental migration is not selected, do not write new data to the source. |
General limits
| Limit | Details |
|---|---|
| One database per task | A single task migrates one database. Configure separate tasks for each additional database. |
| Unsupported object types | TimescaleDB extension tables, tables with cross-schema inheritance, and tables with unique indexes based on expressions are not supported. |
| Extension schemas | Schemas created by installing extensions cannot be migrated and are not retrievable in the console. |
| SERIAL fields | If a table contains a SERIAL field and Schema Migration is selected, also select Sequence or migrate the entire schema. Otherwise, the migration instance may fail. |
| REPLICA IDENTITY for incremental migration | Before writing data to tables in an incremental migration task, run the following command on each table in the source database: ALTER TABLE schema.table REPLICA IDENTITY FULL;. Run it during off-peak hours and avoid table lock operations to prevent deadlocks. If you skip the related precheck item, DTS runs this command automatically during initialization. Re-run it in these cases: the instance runs for the first time; a new table is created in a schema migration object; or an existing table is rebuilt using RENAME. |
| DTS temporary tables | DTS creates the following temporary tables in the source database. Do not delete them during migration—they are deleted automatically when the DTS instance is released: public.dts_pg_class, public.dts_pg_attribute, public.dts_pg_type, public.dts_pg_enum, public.dts_postgres_heartbeat, public.dts_ddl_command, public.dts_args_session, and public.aliyun_dts_instance. |
| Heartbeat table | DTS adds dts_postgres_heartbeat to the source database to track incremental migration latency. |
| Replication slot cleanup | DTS creates a replication slot prefixed dts_sync_ in the source database and uses it to obtain incremental logs from the last 15 minutes. DTS attempts to clean up the slot when the migration fails or the instance is released. If the source account password changes or the DTS IP address is removed from the whitelist, the automatic cleanup fails—clean up the slot manually to prevent disk space accumulation. After a failover, log on to the secondary database to clean up the slot manually. |
| Full migration performance | Full data migration uses concurrent INSERT operations, so the destination table storage space may be larger than in the source after migration. Run migrations during off-peak hours to reduce load on source and destination databases. |
| FLOAT and DOUBLE precision | DTS reads FLOAT and DOUBLE values using ROUND(COLUMN,PRECISION). If precision is not explicitly defined, DTS uses 38 digits for FLOAT and 308 digits for DOUBLE. Verify that this meets your requirements before migrating. |
| DTS auto-recovery | DTS attempts to recover failed tasks for up to seven days. Before switching traffic to the destination, end or release the task, or revoke DTS write permissions using the revoke command—otherwise, DTS may overwrite destination data if the task auto-recovers. |
| Sequence validation | DTS validates data content but not metadata such as sequences. Validate sequence metadata manually. |
| Sequence values after switchover | After switching to the destination, new sequences do not start from the maximum source sequence value. Update the sequence value in the destination before the switchover. See Update the sequence value of the destination database. |
| session_replication_role | For tasks involving foreign keys, triggers, or event triggers, DTS sets session_replication_role to replica at the session level if the destination account is a privileged account or superuser. If the destination account does not have these permissions, set session_replication_role to replica manually in the destination. Cascade update or delete operations during this period may cause data inconsistency. After the DTS task is released, set session_replication_role back to origin. |
| DTS support staff restoration | If a task fails, DTS support staff attempt to restore it within eight hours. Only DTS task parameters are modified—not database parameters. |
| Partitioned tables | Include both the parent table and all child partitions as migration objects. The parent table does not store data directly—all data is in child partitions. Omitting child partitions causes data inconsistency between source and destination. |
| Foreign keys and cascade operations | During schema migration, DTS migrates foreign keys to the destination. During full and incremental migration, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Cascade update or delete operations in the source during this period may cause data inconsistency. |
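Two of the limits above involve commands you run yourself in the source database: setting REPLICA IDENTITY FULL on every migrated table, and cleaning up a leftover dts_sync_ replication slot. The sketch below, which assumes a hypothetical schema name my_schema, generates the ALTER TABLE statements rather than running them, so you can review the list and execute it during off-peak hours.

```sql
-- Generate ALTER TABLE ... REPLICA IDENTITY FULL statements for every
-- table in a schema (hypothetical schema name: my_schema).
SELECT format('ALTER TABLE %I.%I REPLICA IDENTITY FULL;', schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'my_schema';

-- Inspect replication slots created by DTS (prefixed dts_sync_).
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'dts_sync_%';

-- Manual cleanup of a leftover slot after the task is released.
-- Replace the placeholder name with the actual slot_name from the
-- query above before running:
-- SELECT pg_drop_replication_slot('dts_sync_xxx');
```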
Create a migration task
Step 1: Go to the migration task list
From the DTS console:
Log on to the DTS console.
In the left navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance is located.
From the DMS console:
Steps may vary based on your DMS console mode. See Simple mode console and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.
To the right of Data Migration Tasks, select the region.
Step 2: Configure source and destination databases
Click Create Task.
On the task configuration page, configure the source and destination databases.
Read the limits shown at the top of the page before selecting source and destination instances. Ignoring them may cause task failure or data inconsistency.
Task settings:
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification. Names do not need to be unique. |
Source database settings:
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a previously registered database instance from the drop-down list to populate the fields automatically. If no registered instance is available, configure the fields manually. In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select PolarDB (Compatible with Oracle). |
| Connection Type | Select Public IP. |
| Instance Region | Select the region of the source cluster. |
| Select Multi-source Data | Single IP:Port — enter the DNS or IP Address and Port of the primary node. Multiple IP:Port — enter multiple node addresses in the Multi-source IP:Port field. DTS automatically switches to the new primary node after a primary/secondary switchover, which is suitable for disaster recovery scenarios. This example uses Single IP:Port. |
| DNS or IP Address | Enter the endpoint of the primary node. Use the Ping command to resolve the IP address from the direct connection endpoint. |
| Port | Enter the service port. Default: 1521. The port must be accessible over the Internet in this example. |
| Multi-source IP:Port | Enter multiple node IP addresses and ports, separated by commas (,). |
| Oracle Type | Non-RAC Instance — also enter the SID. RAC or PDB Instance — also enter the ServiceName. This example uses Non-RAC Instance. |
| Database Name | Enter the name of the database containing the migration objects in the source cluster. |
| Database Account | Enter the database account. For permission requirements, see Database account permissions. |
| Database Password | Enter the password for the database account. |
Destination database settings:
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a previously registered database instance from the drop-down list to populate the fields automatically. If no registered instance is available, configure the fields manually. In the DMS console, this parameter is named Select a DMS database instance. |
| Database Type | Select PolarDB (Compatible with Oracle). |
| Connection Type | Select Public IP. |
| Instance Region | Select the region of the destination cluster. |
| Select Multi-source Data | Single IP:Port — enter the DNS or IP Address and Port of the primary node. Multiple IP:Port — enter multiple node addresses in the Multi-source IP:Port field. DTS automatically switches to the new primary node after a primary/secondary switchover. This example uses Single IP:Port. |
| DNS or IP Address | Enter the endpoint of the primary node. Use the Ping command to resolve the IP address from the direct connection endpoint. |
| Port | Enter the service port. Default: 1521. |
| Multi-source IP:Port | Enter multiple node IP addresses and ports, separated by commas (,). |
| Database Name | Enter the name of the database containing the migration objects in the destination cluster. |
| Database Account | Enter the database account. For permission requirements, see Database account permissions. |
| Database Password | Enter the password for the database account. |
Click Test Connectivity and Proceed.
Add the CIDR blocks of DTS servers to the security settings of both the source and destination databases. See Add the IP address whitelist of DTS servers.
If the source or destination is a self-managed database (Access Method is not Alibaba Cloud Instance), also click Test Connectivity in the CIDR Blocks of DTS Servers dialog box.
Step 3: Configure migration objects
On the Configure Objects page, set the following:
| Configuration | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a full migration. Add Incremental Data Migration for zero-downtime migration. If Schema Migration is not selected, make sure the destination already has the required databases and tables. If Incremental Data Migration is not selected, do not write new data to the source during migration. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors — checks for duplicate table names in the destination before starting. An error is reported if duplicates exist. If a table in the destination cannot be deleted or renamed, use the object name mapping feature to rename it. Ignore Errors and Proceed — skips the check. During full migration, conflicting rows in the destination are kept. During incremental migration, source rows overwrite destination rows. If schemas are inconsistent, partial data migration or failure may result. |
| Source Objects | Click the objects to migrate in the Source Objects box, then click the right arrow to move them to Selected Objects. Objects can be selected at the schema or table level. If you select tables, other objects (views, triggers, stored procedures) are not migrated. If a table contains a SERIAL field and Schema Migration is selected, also select Sequence or migrate the entire schema. |
| Selected Objects | Right-click an object to rename it. See Individual table column mapping. Click Batch Edit to rename multiple objects at once. See Map multiple object names at a time. Renaming objects may cause dependent objects to fail migration. Right-click a table to set a WHERE clause filter. See Set filter conditions. Right-click a migration object to select specific SQL operations to migrate. |
Click Next: Advanced Settings.
Step 4: Configure advanced settings
| Configuration | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, tasks run on a shared cluster. To use a dedicated cluster for more stable performance, purchase one separately. |
| Retry Time for Failed Connections | Default: 720 minutes (range: 10–1440 minutes). Set to at least 30 minutes. DTS resumes the task automatically if reconnection succeeds within this period. For multiple DTS instances sharing the same source or destination, the retry time is determined by the most recently created task. Charges apply during the retry period. |
| Retry Time for Other Issues | Default: 10 minutes (range: 1–1440 minutes). Applies to non-connectivity issues such as DDL or DML exceptions. This value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limits full migration throughput by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. Throttling settings can also be adjusted after the instance starts running. |
| Enable Throttling for Incremental Data Migration | Limits incremental migration throughput by setting RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | Optional. Select a tag to identify the instance. |
| Configure ETL | Select Yes to enable extract, transform, and load (ETL) and enter data processing statements. See Configure ETL in a data migration or data synchronization task and What is ETL? |
| Monitoring and Alerting | Select Yes to configure an alert threshold and notification settings. DTS sends alerts when a migration fails or latency exceeds the threshold. |
Click Next: Data Validation to configure a data validation task. See Configure data validation.
Step 5: Save settings and run a precheck
Click Next: Save Task Settings and Precheck.
To preview the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck before starting the task. The task starts only after passing.
If the precheck fails, click View Details next to the failed item, fix the issue, then run the precheck again.
If a warning appears for a non-ignorable item, click View Details, fix the issue, then run the precheck again.
If a warning appears for an ignorable item, click Confirm Alert Details > Ignore > OK > Precheck Again to skip it. Ignoring warnings may cause data inconsistency.
Step 6: Purchase the instance
When Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, configure the instance:
| Parameter | Description |
|---|---|
| Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Select a link specification based on your migration requirements. Higher specifications support faster migration speeds. See Data migration link specifications. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms.
Click Buy and Start, then click OK in the confirmation dialog.
Monitor migration progress
After the task starts, view its progress on the Data Migration Tasks list page.
Full migration only: The task stops automatically after full migration completes. Status changes to Completed.
Incremental migration included: The task continues running. Status remains Running until you manually stop it.
What to do before cutting over
Before switching your applications to the destination database:
End or release the DTS task, or revoke the write permissions of the DTS account using the revoke command. This prevents DTS from overwriting destination data if the task auto-recovers within seven days.
Update the sequence value in the destination database. See Update the sequence value of the destination database.
Validate sequence metadata manually, as DTS does not validate metadata such as sequences.
If session_replication_role was set to replica in the destination during migration, set it back to origin after the DTS task is released.
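The cutover steps above can be sketched in SQL. The names and the target value are placeholders for illustration (hypothetical sequence my_schema.order_id_seq); use the source sequence's actual current value, and see Update the sequence value of the destination database for the supported procedure.

```sql
-- On the source: read the current value of a sequence
-- (hypothetical name: my_schema.order_id_seq).
SELECT last_value FROM my_schema.order_id_seq;

-- On the destination: advance the sequence past the value read above
-- (1000000 is a placeholder).
SELECT setval('my_schema.order_id_seq', 1000000);

-- If session_replication_role was set to replica manually during
-- migration, reset it after the DTS task is released:
-- SET session_replication_role = origin;
```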