
Data Transmission Service: Migrate self-managed Oracle to PolarDB for PostgreSQL (Compatible with Oracle)

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a self-managed Oracle database to a PolarDB for PostgreSQL (Compatible with Oracle) cluster. DTS supports schema migration, full data migration, and incremental data migration—enabling migration with minimal downtime.

Migration types

Choose the migration types based on your requirements. Your choice affects prerequisites and billing.

| Migration type | What DTS migrates | Downtime |
| --- | --- | --- |
| Schema migration | Schema definitions for tables, views, synonyms, stored procedures, stored functions, packages, and user-defined types. Triggers are not supported; remove them from the source before migrating. | Required |
| Full data migration | All existing data for the selected objects. | Required |
| Incremental data migration | Ongoing changes captured from redo logs, applied to the destination after full migration completes. | Minimal |

For migration with minimal downtime, select all three types: Schema Migration, Full Data Migration, and Incremental Data Migration.

Incremental data migration requires ARCHIVELOG mode and supplemental logging on the source Oracle database. See Prerequisites for details.
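
As a sketch, the following SQL*Plus session, run as SYSDBA, checks and enables these settings. Verify the exact requirements against your Oracle version before applying them.

```sql
-- Check the current log mode and supplemental logging status (run as SYSDBA)
SELECT log_mode, supplemental_log_data_min, supplemental_log_data_pk
FROM   v$database;

-- Enable ARCHIVELOG mode (requires restarting the database)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Enable database-level supplemental logging, including primary key
-- and unique key columns
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
```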

Billing

| Migration type | Link configuration fee | Public network traffic fee |
| --- | --- | --- |
| Schema migration + full data migration | No charge | Charged when migrating data over the public network. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when migrating data over the public network. See Billing overview. |

Limitations

DTS migrates foreign keys from the source database to the destination. Cascading updates or deletes on the source during migration may cause data inconsistency.

Source database limitations

| Category | Limitation |
| --- | --- |
| Bandwidth | The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| RAC over leased line | Configure one of the virtual IP addresses (VIPs) in the connection information, not a Single Client Access Name (SCAN) IP. Node switching for Oracle Real Application Clusters (RAC) is not supported in this configuration. |
| VARCHAR2 empty strings | Oracle treats empty VARCHAR2 strings as NULL. If the destination field has a NOT NULL constraint, the migration fails. |
| Fine-Grained Audit (FGA) | If an FGA policy is enabled on a table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, causing the migration to fail. Disable the FGA policy on those tables, or exclude them from the migration. |
| Primary key or unique constraint | Tables to be migrated must have a primary key or a unique constraint on unique fields. For tables without these, use the Oracle ROWID as a hidden primary key. See Configure task objects. |
| Table name length (Oracle 12c and later) | Table names must not exceed 30 bytes. |
| Table-level migration with name mapping | A single task supports a maximum of 1,000 tables when object-level editing (such as name mapping) is applied. For more tables, split them into multiple tasks, or configure a task to migrate the entire database. |
| Redo log and archive log retention | For incremental-only tasks, retain logs for more than 24 hours. For full + incremental tasks, retain logs for at least 7 days (you can reduce this to 24 hours after full migration completes). Shorter retention periods may cause task failure or data loss and are not covered by the DTS service-level agreement (SLA). |
| DDL during schema or full migration | Do not perform DDL operations that change database or table schemas while schema migration or full data migration is running. |
| Writes during full-only migration | Do not write new data to the source while running full migration without incremental migration. To maintain consistency, select all three migration types. |
| Large text fields | Updating large text fields separately is not supported and causes the task to fail. |
| Oracle Data Pump during incremental migration | Importing data into the source by using Oracle Data Pump during incremental migration is not supported and may cause data loss. |
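
To find tables that would violate the primary key or unique constraint requirement before configuring the task, a query along the following lines can help. The schema name 'MYSCHEMA' is a placeholder.

```sql
-- List tables in a given schema that have neither a primary key
-- nor an enabled unique constraint ('MYSCHEMA' is a placeholder)
SELECT t.table_name
FROM   all_tables t
WHERE  t.owner = 'MYSCHEMA'
AND    NOT EXISTS (
         SELECT 1
         FROM   all_constraints c
         WHERE  c.owner           = t.owner
         AND    c.table_name      = t.table_name
         AND    c.constraint_type IN ('P', 'U')
         AND    c.status          = 'ENABLED');
```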

Destination and general limitations

| Category | Limitation |
| --- | --- |
| Task auto-resume | DTS attempts to resume failed tasks for up to seven days. Before switching workloads to the destination, stop or release the task, or revoke write permissions from the DTS account, to prevent resumed tasks from overwriting destination data. |
| Table fragmentation | Full data migration uses concurrent INSERT operations, which causes table fragmentation. Destination storage usage will be larger than that of the source. |
| Foreign tables | Migration of foreign tables is not supported. |
| ROWID-mapped indexes | The destination cluster generates unique indexes (such as pg_oid_1498503_index) that correspond to Oracle ROWID values. The destination will have more indexes than the source. |
| String terminator | The destination cluster does not support the string terminator '\0'. DTS does not write this character to the destination, which causes data inconsistency. |
| CHECK constraints | CHECK constraints from Oracle are converted to NOT NULL constraints in the destination cluster. |
| Character sets | The character sets of the source and destination must be compatible. Incompatible character sets cause data inconsistency or task failure. |
| Schema migration required | Use DTS schema migration to avoid failures caused by data type incompatibility. |
| Time zones | The source and destination must use the same time zone. |
| User-defined types | User-defined types can be migrated. Oracle built-in type objects are not migrated, because the destination cluster already supports Oracle's built-in objects. |
| Metadata validation | DTS validates data content but does not validate metadata such as sequences. Validate metadata manually. |
| Sequences after switchover | After switching workloads to the destination, sequences do not start from the maximum value in the source. Update the sequence value in the destination before the switchover. See Update the sequence value of the destination database. |
| Foreign keys and triggers | For migrations that include tables with foreign keys, triggers, or event triggers: if the destination account is a privileged account or superuser, DTS sets session_replication_role to replica at the session level. Otherwise, set this parameter manually before starting the migration. Reset it to origin after the DTS task is released. |
| Failed task restoration | DTS support staff attempt to restore failed tasks within eight hours. During restoration, they may restart the task or adjust DTS task parameters (not database parameters). |
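
For the foreign key and trigger limitation above, one way to set the parameter for a non-privileged destination account is at the role level, executed by an administrator, so that new DTS sessions pick it up. The account name below is a placeholder.

```sql
-- Before starting the migration: make sessions of the DTS account apply
-- changes without firing triggers or enforcing foreign keys
ALTER ROLE dts_account SET session_replication_role = 'replica';

-- After the DTS task is released: restore the default behavior
ALTER ROLE dts_account RESET session_replication_role;
```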

SQL operations supported for incremental migration

| Operation type | Supported statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE (excluding CREATE TABLE AS SELECT, and tables with partitions, subpartitions, or function-based definitions) |
| DDL | ALTER TABLE: ADD COLUMN, ADD INDEX, DROP COLUMN, DROP INDEX, MODIFY COLUMN, RENAME COLUMN |
| DDL | DROP TABLE, RENAME TABLE, TRUNCATE TABLE, CREATE INDEX |
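
For illustration, the following contrasts DDL that falls inside the supported set with DDL that is excluded. Table and column names are hypothetical.

```sql
-- Replicated during incremental migration: plain CREATE TABLE,
-- ALTER TABLE, and CREATE INDEX
CREATE TABLE orders (order_id NUMBER PRIMARY KEY, note VARCHAR2(100));
ALTER TABLE orders ADD (created_at DATE);
CREATE INDEX idx_orders_created ON orders (created_at);

-- Not replicated: CREATE TABLE AS SELECT and partitioned tables
CREATE TABLE orders_copy AS SELECT * FROM orders;
CREATE TABLE orders_part (order_id NUMBER)
  PARTITION BY RANGE (order_id) (PARTITION p1 VALUES LESS THAN (1000));
```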

Data type mapping

For the full mapping between Oracle and PolarDB for PostgreSQL (Compatible with Oracle) data types, see Data type mappings between heterogeneous databases.

Prerequisites

Before you begin, make sure that you have:

Account permissions

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| Self-managed Oracle database | Schema owner | Schema owner | Fine-grained permissions (see below) |
| PolarDB for PostgreSQL (Compatible with Oracle) cluster | Schema owner | Schema owner | Schema owner |

To create accounts and grant permissions, see the documentation for your source and destination databases.

Important

For incremental data migration, also enable archive logging and supplemental logging. See Database configuration.
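
As a hedged sketch, creating a source migration account might look like the following. The account name and individual grants are illustrative; grant the exact fine-grained privileges that the DTS documentation lists for your Oracle version.

```sql
-- Source Oracle: create a migration account (names are placeholders)
CREATE USER dts_migrator IDENTIFIED BY "YourStrongPassword1";
GRANT CREATE SESSION TO dts_migrator;
GRANT SELECT ANY TABLE TO dts_migrator;        -- schema and full data reads
-- Redo log access used by incremental migration:
GRANT SELECT ON v_$log TO dts_migrator;
GRANT SELECT ON v_$logfile TO dts_migrator;
GRANT SELECT ON v_$archived_log TO dts_migrator;
GRANT SELECT ON v_$database TO dts_migrator;
GRANT LOGMINING TO dts_migrator;               -- Oracle 12c and later
```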

Create a migration task

Step 1: Open the migration task list

Use either the DTS console or the Data Management (DMS) console.

From the DTS console

  1. Log on to the Data Transmission Service (DTS) console.

  2. In the left navigation pane, click Data Migration.

  3. In the upper-left corner, select the region where the migration instance is located.

From the DMS console

The exact steps may vary based on your DMS console mode. See Simple mode console and Customize the layout and style of the DMS console.
  1. Log on to the Data Management (DMS) console.

  2. In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.

  3. To the right of Data Migration Tasks, select the region where the migration instance is located.

Step 2: Configure source and destination databases

  1. Click Create Task.

  2. (Optional) In the upper-right corner, click New Configuration Page to switch to the new configuration interface.

    If the button already shows Back to Previous Version, you are already on the new page. Use the new configuration page; it provides more options.
  3. Configure the source and destination databases.

    Warning

    Review the limits displayed at the top of the page after selecting source and destination instances. Ignoring these limits can cause task failure or data inconsistency.

Source database parameters:

| Parameter | Description |
| --- | --- |
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification; it does not need to be unique. |
| Select Existing Connection | Select a registered database instance from the list to auto-fill the connection details. In the DMS console, this is labeled Select a DMS database instance. If you have not registered the instance, configure the fields below manually. |
| Database Type | Select Oracle. |
| Access Method | Select the method that matches your source database deployment. This example uses Self-managed Database on ECS. For other access methods, complete the corresponding preparations first. See Overview of preparations. |
| Instance Region | Select the region where the source Oracle database is located. |
| ECS Instance ID | Select the ECS instance that hosts the source Oracle database. |
| Port Number | Enter the service port of the source Oracle database. Default: 1521. |
| Oracle Type | Non-RAC Instance: also specify the SID. RAC or PDB Instance: also specify the Service Name. This example uses RAC or PDB Instance. |
| Database Account | Enter the account for the source Oracle database. See Account permissions. |
| Database Password | Enter the password for the database account. |

Destination database parameters:

| Parameter | Description |
| --- | --- |
| Select Existing Connection | Select a registered database instance from the list to auto-fill the connection details. In the DMS console, this is labeled Select a DMS database instance. If you have not registered the instance, configure the fields below manually. |
| Database Type | Select PolarDB (Compatible with Oracle). |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region where the destination PolarDB cluster is located. |
| Instance ID | Select the destination PolarDB cluster. |
| Database Name | Enter the name of the database in the destination cluster where the migrated objects will be stored. |
| Database Account | Enter the account for the destination cluster. See Account permissions. |
| Database Password | Enter the password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted. For SSL encryption, upload a CA certificate. |
  4. Click Test Connectivity and Proceed. In the CIDR Blocks of DTS Servers dialog box, click Test Connectivity.

    Add the DTS server IP address ranges to the security settings of both the source and destination databases to allow DTS access. See Add DTS server IP addresses to a whitelist.

Step 3: Configure task objects

  1. On the Configure Objects page, set the following parameters.

| Parameter | Description |
| --- | --- |
| Migration Types | Select the migration types based on your needs: Schema Migration and Full Data Migration for a full migration; add Incremental Data Migration for minimal-downtime migration. If you skip Schema Migration, make sure the destination already has the required databases and tables. If you skip Incremental Data Migration, do not write new data to the source during migration. |
| Add a hidden primary key for tables without a primary key or unique key | Select Yes for tables that have neither a primary key nor a unique key. DTS uses the Oracle ROWID as a hidden primary key in the destination to prevent data duplication. This option is available only when Schema Migration is selected along with at least one of Full Data Migration or Incremental Data Migration. |
| Processing Mode for Existing Destination Tables | Precheck and Report Errors: checks for existing tables with the same names. The task does not start if conflicts are found. To resolve conflicts without deleting tables, use object name mapping to rename them. Ignore Errors and Proceed: skips the check. During full migration, DTS keeps existing destination records with conflicting primary keys; during incremental migration, source records overwrite destination records. Proceed with caution, because this can cause data inconsistency. |
| Source Objects | In the Source Objects box, click the objects to migrate, and then click the right arrow to add them to the Selected Objects box. |
| Selected Objects | To rename a single object, right-click it in Selected Objects. See Individual table column mapping. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows with WHERE clauses, right-click a table and specify the conditions. See Set filter conditions. To configure which SQL operations to replicate at the database or table level, right-click the object and select the operations. Note: renaming an object may cause dependent objects to fail migration. |
  2. Click Next: Advanced Settings.

| Parameter | Description |
| --- | --- |
| Dedicated Cluster for Task Scheduling | DTS schedules tasks on a shared cluster by default. To use a dedicated cluster, purchase one first. See What is a DTS dedicated cluster? |
| Retry Time for Failed Connections | Default: 720 minutes. Range: 10 to 1440 minutes. Set to at least 30 minutes. DTS retries the connection within this duration and resumes the task if the retry succeeds. Note: you are billed during the retry period. Multiple tasks that share the same source or destination use the retry time from the most recently created task. |
| Retry Time for Other Issues | Default: 10 minutes. Range: 1 to 1440 minutes. Set to at least 10 minutes. This covers non-connectivity issues such as DDL or DML exceptions. The value must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit the read/write load on the source and destination during full migration by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. You can also adjust the speed after the task starts. |
| Enable Throttling for Incremental Data Migration | Limit the load during incremental migration by setting RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. You can also adjust the speed after the task starts. |
| Environment Tag | (Optional) Tag the instance to identify its environment. |
| Actual Write Code | Select the character encoding used to write data to the destination. |
| Configure ETL | Choose whether to enable the extract, transform, and load (ETL) feature. Select Yes and enter processing statements to transform data in transit. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL. See What is ETL? |
| Monitoring and Alerting | Select Yes to configure alert thresholds and notification recipients. DTS sends an alert if the migration fails or latency exceeds the threshold. |
  3. Click Next: Data Validation to configure a data validation task. See Configure data validation.

Step 4: Run precheck and purchase

  1. Click Next: Save Task Settings and Precheck. DTS runs a precheck before starting the task. Address any issues as follows:

    • If the precheck fails, click View Details next to the failed item, fix the issue, and run the precheck again.

    • If a warning appears for an item that cannot be ignored, click View Details, fix the issue, and run the precheck again.

    • If a warning appears for an item that can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring warnings may cause data inconsistency.

    To view the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters.
  2. When Success Rate reaches 100%, click Next: Purchase Instance.

  3. On the Purchase page, select a link specification for the migration instance.

    | Parameter | Description |
    | --- | --- |
    | Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management? |
    | Instance Class | Select a specification based on your workload. Higher specifications provide faster migration speeds. See Data migration link specifications. |
  4. Read and accept Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.

The task appears on the Data Migration Tasks page. Tasks that include incremental migration continue running with a Running status. Tasks that include only full migration stop automatically when complete and show a Completed status.

What's next

If you enabled the hidden primary key option, delete the hidden primary key after switching workloads to the destination but before stopping or releasing the DTS instance:

  1. On the Data Migration Tasks page, click the task name.

  2. On the Basic Information tab, go to the Full Data Migration or Incremental Write module.

  3. In the Actions column, click Delete Invisible Primary Key.

Important

After deleting the hidden primary key, you are responsible for data consistency.

Additional post-migration tasks:

  • Update sequence values in the destination database before switching workloads. See Update the sequence value of the destination database.

  • Validate data consistency between source and destination.

  • Update application connection strings to point to the destination cluster.

  • Stop or release the DTS instance after the switchover to avoid unintended resumption.
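
The sequence update mentioned above can be sketched as follows: query the current value on the source, then restart the destination sequence above it. The schema, sequence names, and value are placeholders, and the exact ALTER SEQUENCE syntax may vary by cluster version.

```sql
-- On the source Oracle database: find the sequence's current value
SELECT last_number
FROM   all_sequences
WHERE  sequence_owner = 'MYSCHEMA' AND sequence_name = 'ORDERS_SEQ';

-- On the destination cluster: restart the sequence above that value
ALTER SEQUENCE orders_seq RESTART WITH 100001;
```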