Use Data Transmission Service (DTS) to migrate data from a self-managed Oracle database to a PolarDB for PostgreSQL (Compatible with Oracle) cluster. DTS supports schema migration, full data migration, and incremental data migration—enabling migration with minimal downtime.
Migration types
Choose the migration types based on your requirements. Your choice affects prerequisites and billing.
| Migration type | What DTS migrates | Downtime |
|---|---|---|
| Schema migration | Schema definitions for tables, views, synonyms, stored procedures, stored functions, packages, and user-defined types. Triggers are not supported—remove them from the source before migrating. | Required |
| Full data migration | All existing data for the selected objects. | Required |
| Incremental data migration | Ongoing changes captured from redo logs, applied to the destination after full migration completes. | Minimal |
For migration with minimal downtime, select all three types: Schema Migration, Full Data Migration, and Incremental Data Migration.
Incremental data migration requires ARCHIVELOG mode and supplemental logging on the source Oracle database. See Prerequisites for details.
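The checks above can be sketched in SQL. This is a minimal example run as a user with DBA privileges on the source Oracle database; switching a database into ARCHIVELOG mode requires a restart and is not shown here.

```sql
-- Check the current log mode (should return ARCHIVELOG)
SELECT log_mode FROM v$database;

-- Enable supplemental logging for primary-key and unique-key columns
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

-- Verify: both columns should return YES
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;
```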
Billing
| Migration type | Link configuration fee | Public network traffic fee |
|---|---|---|
| Schema migration + full data migration | No charge | Charged when migrating data over the public network. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged when migrating data over the public network. See Billing overview. |
Limitations
DTS migrates foreign keys from the source database to the destination. Cascading updates or deletes on the source during migration may cause data inconsistency.
Source database limitations
| Category | Limitation |
|---|---|
| Bandwidth | The server hosting the source database must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed. |
| RAC over leased line | Configure one of the virtual IP addresses (VIPs) in the connection information—not a Single Client Access Name (SCAN) IP. Node switching for Oracle Real Application Clusters (RAC) is not supported in this configuration. |
| VARCHAR2 empty strings | Oracle treats empty VARCHAR2 strings as null. If the destination field has a NOT NULL constraint, the migration fails. |
| Fine-Grained Audit (FGA) | If an FGA policy is enabled on a table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, causing the migration to fail. Disable the FGA policy on those tables, or exclude them from migration. |
| Primary key or unique constraint | Tables to be migrated must have a primary key or unique constraint with unique fields. For tables without these, use the Oracle ROWID as a hidden primary key—see Configure task objects. |
| Table name length (Oracle 12c and later) | Table names must not exceed 30 bytes. |
| Table-level migration with name mapping | A single task supports a maximum of 1,000 tables when object-level editing (such as name mapping) is applied. For more tables, split them into multiple tasks, or configure a task to migrate the entire database. |
| Redo log and archive log retention | For incremental-only tasks: retain logs for more than 24 hours. For full + incremental tasks: retain logs for at least 7 days (you can reduce this to 24 hours after full migration completes). Shorter retention periods may cause task failure or data loss, and are not covered by the DTS service-level agreement (SLA). |
| DDL during schema or full migration | Do not perform DDL operations that change database or table schemas while schema migration or full data migration is running. |
| Writes during full-only migration | Do not write new data to the source while running full migration without incremental migration. To maintain consistency, select all three migration types. |
| Large text fields | Updating large text fields separately is not supported and causes the task to fail. |
| Oracle Data Pump during incremental migration | Importing data into the source using Oracle Data Pump during incremental migration is not supported and may cause data loss. |
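To find tables affected by the primary key or unique constraint limitation before you configure the task, a query along these lines can help. `MYSCHEMA` is a placeholder for the schema you plan to migrate.

```sql
-- List tables in MYSCHEMA that have neither a primary key
-- nor a unique constraint
SELECT t.table_name
FROM   dba_tables t
WHERE  t.owner = 'MYSCHEMA'
AND    NOT EXISTS (
         SELECT 1
         FROM   dba_constraints c
         WHERE  c.owner = t.owner
         AND    c.table_name = t.table_name
         AND    c.constraint_type IN ('P', 'U')
       );
```

For any tables this query returns, enable the hidden primary key option described in Configure task objects.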
Destination and general limitations
| Category | Limitation |
|---|---|
| Task auto-resume | DTS attempts to resume failed tasks for up to seven days. Before switching workloads to the destination, stop or release the task—or revoke write permissions from the DTS account—to prevent resumed tasks from overwriting destination data. |
| Table fragmentation | Full data migration uses concurrent INSERT operations, causing table fragmentation. Destination storage space will be larger than the source. |
| Foreign tables | Migration of foreign tables is not supported. |
| ROWID-mapped indexes | The destination cluster generates unique indexes (such as pg_oid_1498503_index) to correspond to Oracle ROWID values. The destination will have more indexes than the source. |
| String terminator | The destination cluster does not support the string terminator '\0'. DTS does not write this character to the destination, which may cause data inconsistency. |
| CHECK constraints | CHECK constraints from Oracle are converted to NOT NULL constraints in the destination cluster. |
| Character sets | The character sets of the source and destination must be compatible. Incompatible character sets cause data inconsistency or task failure. |
| Schema migration required | Use DTS schema migration to avoid data type incompatibility failures. |
| Time zones | The source and destination must use the same time zone. |
| User-defined types | User-defined types can be migrated. Oracle built-in type objects are not migrated—the destination cluster already supports Oracle's built-in objects. |
| Metadata validation | DTS validates data content but does not validate metadata such as sequences. Validate metadata manually. |
| Sequences after switchover | After switching workloads to the destination, sequences do not start from the maximum value in the source. Update the sequence value in the destination before the switchover. See Update the sequence value of the destination database. |
| Foreign keys and triggers | For migrations that include tables with foreign keys, triggers, or event triggers: if the destination account is a privileged account or superuser, DTS sets session_replication_role to replica at the session level. Otherwise, set this parameter manually before starting the migration. Reset it to origin after the DTS task is released. |
| Failed task restoration | DTS support staff attempt to restore failed tasks within eight hours. During restoration, they may restart the task or adjust DTS task parameters (not database parameters). |
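If the destination account is not privileged, the session_replication_role setting from the table above has to be applied manually. A minimal sketch on the destination cluster (session-level; depending on how DTS connects, you may need to apply it at the database level instead):

```sql
-- Before starting the migration: suppress trigger and foreign-key
-- firing during replication
SET session_replication_role = replica;

-- After the DTS task is released: restore the default behavior
SET session_replication_role = origin;
```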
SQL operations supported for incremental migration
| Operation type | Supported statements |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE (excluding CREATE TABLE AS SELECT, and tables with partitions, subpartitions, or function-based definitions) |
| DDL | ALTER TABLE: ADD COLUMN, ADD INDEX, DROP COLUMN, DROP INDEX, MODIFY COLUMN, RENAME COLUMN |
| DDL | DROP TABLE, RENAME TABLE, TRUNCATE TABLE, CREATE INDEX |
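As an illustration of the CREATE TABLE exclusions above (statements are hypothetical examples, not from the source document):

```sql
-- Replicated during incremental migration:
CREATE TABLE orders (id NUMBER PRIMARY KEY, note VARCHAR2(100));
ALTER TABLE orders ADD (amount NUMBER);

-- NOT replicated:
CREATE TABLE orders_copy AS SELECT * FROM orders;                    -- CREATE TABLE AS SELECT
CREATE TABLE sales (id NUMBER) PARTITION BY HASH (id) PARTITIONS 4;  -- partitioned table
```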
Data type mapping
For the full mapping between Oracle and PolarDB for PostgreSQL (Compatible with Oracle) data types, see Data type mappings between heterogeneous databases.
Prerequisites
Before you begin, make sure that you have:
A source self-managed Oracle database and a destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. To create a cluster, see Create a PolarDB for PostgreSQL (Compatible with Oracle) cluster. For supported source and destination versions, see Overview of migration solutions
(For incremental data migration) ARCHIVELOG mode enabled on the source Oracle database, with an appropriate retention period for archived log files and the files accessible. See ARCHIVELOG
(For incremental data migration) Supplemental logging enabled on the source Oracle database, with supplemental_log_data_pk and supplemental_log_data_ui set to YES. See Supplemental logging
Database accounts with the required permissions on both the source and destination databases. See Account permissions below
(Recommended) An Advanced Database & Application Migration (ADAM) evaluation of the source database completed before migration, to identify compatibility issues early. See Limitations and preparations for Oracle databases and Overview of database evaluation
Account permissions
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Self-managed Oracle database | Schema owner | Schema owner | Fine-grained permissions (see below) |
| PolarDB for PostgreSQL (Compatible with Oracle) cluster | Schema owner | Schema owner | Schema owner |
To create accounts and grant permissions:
Source Oracle database: See Database account preparations, CREATE USER, and GRANT
Destination PolarDB cluster: See Create database accounts
For incremental data migration, also enable archive logging and supplemental logging. See Database configuration.
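A minimal sketch of creating a migration account on the source Oracle database. The account name, password, and grants here are placeholders; the exact fine-grained privileges required for incremental data migration are listed in Database account preparations.

```sql
-- Create a dedicated migration account (name and password are examples)
CREATE USER dts_user IDENTIFIED BY "YourStrongPassword1";

-- Basic connection and read access; replace with the fine-grained
-- grants from "Database account preparations" for incremental migration
GRANT CONNECT, RESOURCE TO dts_user;
GRANT SELECT ANY TABLE TO dts_user;
```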
Create a migration task
Step 1: Open the migration task list
Use either the DTS console or the Data Management (DMS) console.
From the DTS console
Log on to the DTS console.
In the left navigation pane, click Data Migration.
In the upper-left corner, select the region where the migration instance is located.
From the DMS console
The exact steps may vary based on your DMS console mode. See Simple mode console and Customize the layout and style of the DMS console.
Log on to the DMS console.
In the top menu bar, choose Data + AI > Data Transmission (DTS) > Data Migration.
To the right of Data Migration Tasks, select the region where the migration instance is located.
Step 2: Configure source and destination databases
Click Create Task.
(Optional) In the upper-right corner, click New Configuration Page to switch to the new configuration interface.
If the button already shows Back to Previous Version, you are already on the new page. Use the new configuration page—it provides more options.
Configure the source and destination databases.
Warning: Review the limits displayed at the top of the page after selecting the source and destination instances. Ignoring these limits can cause task failure or data inconsistency.
Source database parameters:
| Parameter | Description |
|---|---|
| Task Name | DTS generates a name automatically. Specify a descriptive name for easy identification—it does not need to be unique. |
| Select Existing Connection | Select a registered database instance from the list to auto-fill the connection details. In the DMS console, this is labeled Select a DMS database instance. If you haven't registered the instance, configure the fields below manually. |
| Database Type | Select Oracle. |
| Access Method | Select the method that matches your source database deployment. This example uses Self-managed Database on ECS. For other access methods, complete the corresponding preparations first. See Overview of preparations. |
| Instance Region | Select the region where the source Oracle database is located. |
| ECS Instance ID | Select the ECS instance that hosts the source Oracle database. |
| Port Number | Enter the service port of the source Oracle database. Default: 1521. |
| Oracle Type | Non-RAC Instance: also specify the SID. RAC or PDB Instance: also specify the Service Name. This example uses RAC or PDB Instance. |
| Database Account | Enter the account for the source Oracle database. See Account permissions. |
| Database Password | Enter the password for the database account. |
Destination database parameters:
| Parameter | Description |
|---|---|
| Select Existing Connection | Select a registered database instance from the list to auto-fill the connection details. In the DMS console, this is labeled Select a DMS database instance. If you haven't registered the instance, configure the fields below manually. |
| Database Type | Select PolarDB (Compatible with Oracle). |
| Access Method | Select Alibaba Cloud Instance. |
| Instance Region | Select the region where the destination PolarDB cluster is located. |
| Instance ID | Select the destination PolarDB cluster. |
| Database Name | Enter the name of the database in the destination cluster where migrated objects will be stored. |
| Database Account | Enter the account for the destination cluster. See Account permissions. |
| Database Password | Enter the password for the database account. |
| Encryption | Select Non-encrypted or SSL-encrypted. For SSL encryption, upload a CA certificate. |
Click Test Connectivity and Proceed. In the CIDR Blocks of DTS Servers dialog box, click Test Connectivity.
Add the DTS server IP address ranges to the security settings of both the source and destination databases to allow DTS access. See Add DTS server IP addresses to a whitelist.
Step 3: Configure task objects
On the Configure Objects page, set the following parameters.
| Parameter | Description |
|---|---|
| Migration Types | Select the migration types based on your needs: Schema Migration and Full Data Migration for a full migration; add Incremental Data Migration for minimal-downtime migration. If you skip Schema Migration, make sure the destination already has the required databases and tables. If you skip Incremental Data Migration, do not write new data to the source during migration. |
| Add a hidden primary key for a table without a primary key/a table without a unique key | Select Yes for tables that have neither a primary key nor a unique key. DTS uses the Oracle ROWID as a hidden primary key in the destination to prevent data duplication. This option is available only when Schema Migration is selected along with at least one of Full Data Migration or Incremental Data Migration. |
| Processing Mode for Existing Destination Tables | Precheck and Report Errors: checks for existing tables with the same names. The task does not start if conflicts are found. To resolve conflicts without deleting tables, use object name mapping to rename them. Ignore Errors and Proceed: skips the check. During full migration, DTS keeps existing destination records with conflicting primary keys; during incremental migration, source records overwrite destination records. Proceed with caution—this can cause data inconsistency. |
| Source Objects | Click objects to migrate in the Source Objects box, then click the right arrow to add them to the Selected Objects box. |
| Selected Objects | To rename a single object, right-click it in Selected Objects. See Individual table column mapping. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows with WHERE clauses, right-click a table and specify the conditions. See Set filter conditions. To configure which SQL operations to replicate at the database or table level, right-click the object and select the operations. Note: renaming an object may cause dependent objects to fail migration. |
Click Next: Advanced Settings.
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | DTS schedules tasks on a shared cluster by default. To use a dedicated cluster, purchase one first. See What is a DTS dedicated cluster? |
| Retry Time for Failed Connections | Default: 720 minutes. Range: 10–1440 minutes. Set to at least 30 minutes. DTS retries the connection within this duration and resumes the task if successful. Note: you are billed during the retry period. Multiple tasks sharing the same source or destination use the retry time from the most recently created task. |
| Retry Time for Other Issues | Default: 10 minutes. Range: 1–1440 minutes. Set to at least 10 minutes. This covers non-connectivity issues such as DDL or DML exceptions. Must be less than Retry Time for Failed Connections. |
| Enable Throttling for Full Data Migration | Limit read/write load on the source and destination during full migration by setting Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. You can also adjust the speed after the task starts. |
| Enable Throttling for Incremental Data Migration | Limit load during incremental migration by setting RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. You can also adjust the speed after the task starts. |
| Environment Tag | (Optional) Tag the instance to identify its environment. |
| Actual Write Code | Select the character encoding for writing data to the destination. |
| Configure ETL | Choose whether to enable the extract, transform, and load (ETL) feature. Select Yes and enter processing statements to transform data in transit. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL. See What is ETL? |
| Monitoring and Alerting | Select Yes to configure the alert threshold and notification recipients. DTS sends an alert if the migration fails or latency exceeds the threshold. |
Click Next: Data Validation to configure a data validation task. See Configure data validation.
Step 4: Run precheck and purchase
Click Next: Save Task Settings and Precheck. DTS runs a precheck before starting the task. Address any issues as follows:
If the precheck fails, click View Details next to the failed item, fix the issue, and run the precheck again.
If a warning appears for an item that cannot be ignored, click View Details, fix the issue, and run the precheck again.
If a warning appears for an item that can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring warnings may cause data inconsistency.
To view the API parameters for this configuration, hover over the button and click Preview OpenAPI parameters.
When Success Rate reaches 100%, click Next: Purchase Instance.
On the Purchase page, select a link specification for the migration instance.
| Parameter | Description |
|---|---|
| Resource Group Settings | Select the resource group for the instance. Default: default resource group. See What is Resource Management? |
| Instance Class | Select a specification based on your workload. Higher specifications provide faster migration speeds. See Data migration link specifications. |
Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start > OK.
The task appears on the Data Migration Tasks page. Tasks that include incremental migration continue running with a Running status. Tasks that include only full migration stop automatically when complete and show a Completed status.
What's next
If you enabled the hidden primary key option, delete the hidden primary key before stopping or releasing the DTS instance—but after switching workloads to the destination:
On the Data Migration Tasks page, click the task name.
On the Basic Information tab, go to the Full Data Migration or Incremental Write module.
In the Actions column, click Delete Invisible Primary Key.
After deleting the hidden primary key, you are responsible for data consistency.
Additional post-migration tasks:
Update sequence values in the destination database before switching workloads. See Update the sequence value of the destination database.
Validate data consistency between source and destination.
Update application connection strings to point to the destination cluster.
Stop or release the DTS instance after the switchover to avoid unintended resumption.
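Updating the sequence values (the first item above) can be sketched as follows. The sequence name and the value 1000 are placeholders; read the actual current value from the source for each sequence.

```sql
-- On the source Oracle database: read the current value of a sequence
SELECT last_number FROM dba_sequences WHERE sequence_name = 'ORDERS_SEQ';

-- On the destination cluster: advance the matching sequence past that
-- value (replace 1000 with the value read from the source)
SELECT setval('orders_seq', 1000);
```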