Use Data Transmission Service (DTS) to migrate data from a self-managed Oracle database to a PolarDB for PostgreSQL (Compatible with Oracle) cluster. DTS supports schema migration, full data migration, and incremental data migration, so you can migrate without interrupting your application.
Prerequisites
Before you begin, make sure that:
The source Oracle database and the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster are both created. To create a PolarDB for PostgreSQL (Compatible with Oracle) cluster, see Create a cluster. For supported database versions, see Overview of data migration scenarios.
The source Oracle database is running in ARCHIVELOG mode, archived log files are accessible, and a retention period is configured. See Managing archived redo log files.
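You can verify the logging mode before you configure the task. A minimal sketch using standard Oracle statements (run as a user with SYSDBA privileges and adapt to your environment):

```sql
-- Check the current logging mode; the result should be ARCHIVELOG.
SELECT LOG_MODE FROM V$DATABASE;

-- If the result is NOARCHIVELOG, enable it (requires restarting the instance):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```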
Supplemental logging is enabled on the source Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. See Supplemental logging.
You are familiar with DTS capabilities and limits for Oracle migrations. Use Advanced Database & Application Migration (ADAM) to evaluate the source database before migration. See Prepare an Oracle database and Overview.
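The supplemental logging settings above can be checked and enabled with standard Oracle statements; a sketch (run as a privileged user):

```sql
-- Enable primary key and unique key supplemental logging:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;

-- Verify: both columns should return YES.
SELECT SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI FROM V$DATABASE;
```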
Billing
| Migration type | Instance configuration fee | Internet traffic fee |
|---|---|---|
| Schema migration and full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | — |
Migration types
| Migration type | Description |
|---|---|
| Schema migration | DTS migrates schemas of the selected objects to the destination cluster. Supported object types: table, view, synonym, trigger, stored procedure, stored function, package, and user-defined type. Note DTS does not support schema migration for triggers in this scenario. Delete the triggers on the source database before migration to prevent data inconsistency. See Configure a data synchronization or migration task for a source database that contains a trigger. |
| Full data migration | DTS migrates all existing data from the source Oracle database to the destination cluster. Note Do not perform DDL operations on objects being migrated during schema migration or full data migration. |
| Incremental data migration | After full data migration completes, DTS polls and retrieves redo log files from the source Oracle database and migrates incremental changes to the destination cluster. This keeps your applications running during the migration. |
SQL operations supported for incremental migration
| Operation type | SQL statement |
|---|---|
| DML | INSERT, UPDATE, DELETE |
| DDL | CREATE TABLE (partitioning clauses, subpartitioning clauses, and functions are not supported; CREATE TABLE AS SELECT is not supported), ALTER TABLE (ADD COLUMN, ADD INDEX, DROP COLUMN, DROP INDEX, MODIFY COLUMN, RENAME COLUMN), DROP TABLE, RENAME TABLE, TRUNCATE TABLE, CREATE INDEX |
Data type mappings
Limitations
During schema migration, DTS migrates foreign keys to the destination database. During full data migration and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Cascade update and delete operations performed on the source database during migration may cause data inconsistency.
Source database limits
Bandwidth
The server hosting the source database must have enough outbound bandwidth, or migration speed will decrease.
Oracle RAC over Express Connect
If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), specify a virtual IP address (VIP) for the database when configuring the source. Do not use a Single Client Access Name (SCAN) IP address. After you specify the VIP, node failover is not supported for the Oracle RAC database.
VARCHAR2 empty strings
If a source VARCHAR2 field contains an empty string (evaluated as null in Oracle) and the corresponding destination field has a NOT NULL constraint, the migration task fails.
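The failure stems from Oracle treating an empty string as NULL, which PostgreSQL does not. A minimal illustration:

```sql
-- Oracle: the empty string is NULL, so this returns one row.
SELECT 'empty string is NULL' AS result FROM dual WHERE '' IS NULL;

-- PostgreSQL treats '' and NULL as distinct values, so the NULL that
-- DTS writes for an Oracle empty string violates a NOT NULL constraint
-- on the destination column.
```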
Objects to be migrated
Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all constrained fields must be unique; otherwise, the destination database may contain duplicate records. Alternatively, use ROWID values as the primary key for tables without a primary key or UNIQUE constraint.
For Oracle 12c or later: table names cannot exceed 30 bytes.
If you select individual tables and need to rename tables or columns in the destination, a single migration task supports up to 1,000 tables. For more than 1,000 tables, configure multiple tasks or migrate the entire database.
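To find tables affected by the 30-byte name limit before you configure the task, a query along these lines can help (a sketch; LENGTHB measures length in bytes):

```sql
SELECT TABLE_NAME, LENGTHB(TABLE_NAME) AS NAME_BYTES
FROM USER_TABLES
WHERE LENGTHB(TABLE_NAME) > 30;
```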
Incremental migration requirements
Redo logging and archive logging must be enabled.
Log retention periods: If DTS cannot obtain the required logs, the task fails. In extreme cases, data loss or inconsistency may occur.
Incremental migration only: store redo logs and archived redo logs for more than 24 hours.
Full data migration + incremental data migration: store logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.
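To check how far back archived redo logs are currently available on the source, a query like the following can be used (a sketch; the actual retention window is governed by your RMAN deletion policy):

```sql
SELECT MIN(FIRST_TIME) AS OLDEST_LOG, MAX(NEXT_TIME) AS NEWEST_LOG
FROM V$ARCHIVED_LOG
WHERE DELETED = 'NO';
```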
Operations to avoid on the source database
During schema migration and full data migration: do not perform DDL operations. The migration task will fail.
During full data migration without incremental data migration: do not write to the source database. Data inconsistency may occur. To ensure data consistency, select schema migration, full data migration, and incremental data migration together.
During any migration: do not update LONGTEXT fields. The migration task will fail.
Other limits
Performance impact
Evaluate the impact of migration on the performance of both databases. Migrate during off-peak hours. During full data migration, DTS uses read and write resources of both databases, which increases server load.
Tablespace size
During full data migration, concurrent INSERT operations cause table fragmentation in the destination cluster. After full migration, the destination tablespace will be larger than the source database.
Failed task resumption
DTS attempts to resume failed tasks for up to 7 days. Before switching workloads to the destination cluster, stop or release any failed tasks. Alternatively, run the REVOKE statement to revoke write permissions from the DTS accounts. Otherwise, resumed tasks will overwrite destination data with source data.
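On the destination cluster, revoking write privileges might look like the following (a sketch; `dts_user` and `myschema` are placeholder names for your DTS account and schema):

```sql
-- Revoke write privileges from the DTS account on all tables in the schema:
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA myschema FROM dts_user;
```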
Indexes
The destination cluster generates unique indexes (for example, pg_oid_1498503_index) to correspond to ROWID in the source Oracle database. The destination cluster will have more indexes than the source database.
String terminator
The destination cluster does not support the string terminator ('\0'). DTS will not write the terminator to the destination, causing data inconsistency between the source and destination.
CHECK constraints
CHECK constraints from the source Oracle database are converted to NOT NULL constraints in the destination cluster.
Character sets and time zones
The character sets and time zones of the source and destination databases must be compatible. Incompatible settings may cause data inconsistency or task failure.
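To compare the relevant settings, you can query each side; a sketch (Oracle statements on the source, PostgreSQL-style commands on the destination):

```sql
-- Source (Oracle): database character set and time zone.
SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER = 'NLS_CHARACTERSET';
SELECT DBTIMEZONE FROM DUAL;

-- Destination (PolarDB for PostgreSQL, Compatible with Oracle):
SHOW server_encoding;
SHOW timezone;
```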
Schema migration
Use the schema migration feature of DTS to avoid task failures caused by incompatible data types.
Custom types and built-in objects
Custom type objects can be migrated. Built-in Oracle objects (auto-generated by Oracle) cannot be migrated. The destination cluster natively supports Oracle built-in objects — no manual migration is needed.
Sequences
DTS does not check the validity of metadata such as sequences. You must manually check the validity of metadata. After switching workloads to the destination, newly written sequences do not continue from the maximum value in the source. Query the maximum sequence values from the source before switching, then set those values as the initial values in the destination. See What's next for the query procedure.
Foreign keys, triggers, and the `session_replication_role` parameter
If tables being migrated contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account or an account that has the permissions of the superuser role, DTS sets session_replication_role to replica at the session level during full or incremental migration. If the account does not have these permissions, set session_replication_role to replica manually in the destination database.
If a cascade update or delete operation is performed on the source database while session_replication_role is set to replica, data inconsistency may occur. After the migration task is released, change session_replication_role back to origin.
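If the destination account lacks the required permissions, the parameter can be set and later restored manually at the session level; for example:

```sql
-- Before full or incremental migration: disable triggers and
-- foreign-key checks for the session.
SET session_replication_role = replica;

-- After the migration task is released: restore the default behavior.
SET session_replication_role = origin;
```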
Set up the Oracle database account
Log on to the source Oracle database, create an account for DTS, and grant the required permissions.
Skip this step if you have already created an account with the permissions listed below.
| Database | Schema migration | Full data migration | Incremental data migration |
|---|---|---|---|
| Self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| PolarDB for PostgreSQL (Compatible with Oracle) cluster | Schema owner permissions | — | — |
For instructions on creating accounts and granting permissions, see:
Self-managed Oracle database: Prepare a database account, CREATE USER, and GRANT
PolarDB for PostgreSQL (Compatible with Oracle) cluster: Create database accounts
To migrate incremental data, enable archive logging and supplemental logging on the Oracle database before creating the migration task. See Configure an Oracle database.
Create a migration task
Go to the Data Migration Tasks page.
Log on to the Data Management (DMS) console.
In the top navigation bar, move the pointer over DTS.
Choose DTS (DTS) > Data Migration.
The navigation may vary depending on your DMS console mode and layout. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Migration page of the new DTS console.
From the drop-down list on the right side of Data Migration Tasks, select the region where your migration instance resides.
In the new DTS console, select the region in the upper-left corner.
Click Create Task. In the Create Data Migration Task wizard, configure the source and destination databases.
Warning: After configuring the source and destination databases, read the Limits displayed at the top of the page before proceeding. Skipping this step may cause the task to fail or result in data inconsistency.
| Section | Parameter | Description |
|---|---|---|
| N/A | Task Name | A name for the task. DTS generates a name automatically. Specify an informative name to identify the task. The name does not need to be unique. |
| Source Database | Database Type | Select Oracle. |
| Source Database | Access Method | The method used to connect to the source database. This example uses Public IP Address. For other access methods, set up the required environment first. See Preparation overview. |
| Source Database | Instance Region | The region where the source Oracle database resides. |
| Source Database | Hostname or IP address | The endpoint of the source Oracle database. |
| Source Database | Port Number | The service port of the source Oracle database. Default: 1521. The port must be accessible over the Internet for this example. |
| Source Database | Oracle Type | The architecture of the source Oracle database. Select Non-RAC Instance to configure the SID parameter, or RAC or PDB Instance to configure the Service Name parameter. This example uses Non-RAC Instance. |
| Source Database | Database Account | The account of the source Oracle database. See Set up the Oracle database account for required permissions. |
| Source Database | Database Password | The password for the database account. |
| Destination Database | Database Type | Select PolarDB (Compatible with Oracle). |
| Destination Database | Access Method | Select Public IP Address. |
| Destination Database | Instance Region | The region where the destination cluster is deployed. |
| Destination Database | Data Sources | Select Single Data Source to configure the primary node endpoint and port. Select Multiple Data Sources to specify IP addresses and port numbers for multiple nodes in the IP:Port of Data Sources field. With Multiple Data Sources, DTS can automatically switch to the new primary node during a primary/secondary switchover, which is suitable for disaster recovery scenarios. This example uses Single Data Source. |
| Destination Database | Domain Name or IP Address | The endpoint of the primary node in the destination cluster. Run ping on your computer to get the IP address. |
| Destination Database | Port Number | The service port of the destination database. Default: 1521. |
| Destination Database | IP:Port of Data Sources | The IP addresses and port numbers of multiple nodes, including the primary node. Separate multiple entries with commas (,). |
| Destination Database | Database Name | The name of the database in the destination cluster. |
| Destination Database | Database Account | The account for the destination cluster. See Set up the Oracle database account for required permissions. |
| Destination Database | Database Password | The password for the database account. |

If your source database has an IP address whitelist configured, add the CIDR blocks of DTS servers to the whitelist. Then click Test Connectivity.
Warning: Adding DTS server CIDR blocks to a database whitelist or ECS security group may introduce security risks. Before proceeding, take preventive measures such as: strengthening account credentials, restricting exposed ports, validating API calls, and regularly auditing whitelist rules. Consider using Express Connect, VPN Gateway, or Smart Access Gateway to connect the database to DTS instead of using public IP addresses.
Configure the objects to migrate and advanced settings.
| Parameter | Description |
|---|---|
| Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. To migrate without interrupting your services, also select Incremental Data Migration. Note: If you do not select Incremental Data Migration, avoid writing to the source database during migration to maintain data consistency. |
| Add a hidden primary key for a table without a primary key/a table without a unique key | Set to Yes if any tables being migrated lack a primary key or unique key. DTS uses the Oracle ROWID column as the hidden primary key in the destination, preventing duplicate data. This parameter is available only when Schema Migration, Full Data Migration, and Incremental Data Migration are all selected. |
| Processing Mode of Conflicting Tables | Precheck and Report Errors: DTS checks whether the destination database contains tables with the same names as the source. If duplicates exist, the precheck fails and the task cannot start. Use the object name mapping feature to rename conflicting tables. See Map object names. Ignore Errors and Proceed: Skips the precheck for duplicate table names. If the source and destination have identical schemas and a record shares a primary key with an existing destination record: during full migration, DTS skips the record; during incremental migration, DTS overwrites the existing record. If the schemas differ, only specific columns are migrated or the task fails. Use with caution. |
| Source Objects | Select one or more objects and move them to the Selected Objects section. |
| Selected Objects | To rename a single object, right-click it. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. Note: Renaming an object may cause other objects that depend on it to fail migration. To filter rows, right-click a table and specify WHERE conditions. See Specify filter conditions. To select specific SQL operations for incremental migration, right-click an object and select the operations. |
Click Next: Advanced Settings and configure the following settings. For Data Verification Settings, see Configure a data verification task.
Advanced Settings
| Parameter | Description |
|---|---|
| Dedicated Cluster for Task Scheduling | By default, DTS schedules tasks to a shared cluster. To use a dedicated cluster, purchase one with the required specifications. See What is a DTS dedicated cluster? |
| Retry Time for Failed Connections | How long DTS retries after a connection failure. Valid values: 10–1440 minutes. Default: 720 minutes. Set this to more than 30 minutes. If DTS reconnects within this period, it resumes the task; otherwise, the task fails. Note: If multiple tasks share the same source or destination database, the most recently configured retry time takes precedence. DTS charges for the instance during retries. |
| Retry Time for Other Issues | How long DTS retries after DDL or DML operation failures. Valid values: 1–1440 minutes. Default: 10 minutes. Set this to more than 10 minutes. Important: This value must be less than the Retry Time for Failed Connections value. |
| Enable Throttling for Full Data Migration | Limits read/write resource usage during full migration to reduce server load. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
| Enable Throttling for Incremental Data Migration | Limits resource usage during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
| Environment Tag | An optional tag to identify the DTS instance. |
| Actual Write Code | The encoding format for writing data to the destination database. |
| Configure ETL | Enables the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to skip ETL configuration. See What is ETL? |
| Monitoring and Alerting | Configures alerts for task failures or latency exceeding a threshold. Select Yes to configure alert thresholds and notification settings. See Configure monitoring and alerting when you create a DTS task. |

Click Next: Save Task Settings and Precheck. To view the API parameters for this task configuration, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.
DTS runs a precheck before starting the task. The task can only start after passing the precheck. If the precheck fails, click View Details next to each failed item, troubleshoot the issue, then click Precheck Again. If an alert item can be ignored, click Confirm Alert Details > Ignore > OK > Precheck Again. Ignoring alert items may cause data inconsistency.
Wait until Success Rate reaches 100%, then click Next: Purchase Instance.
On the Purchase Instance page, configure the instance class.
| Section | Parameter | Description |
|---|---|---|
| New Instance Class | Resource Group | The resource group for the migration instance. Default: default resource group. See What is Resource Management? |
| New Instance Class | Instance Class | The instance class determines migration speed. Select a class based on your requirements. See Instance classes of data migration instances. |

Read and select the Data Transmission Service (Pay-as-you-go) Service Terms check box.
Click Buy and Start. In the confirmation dialog, click OK. View migration progress on the Data Migration page.
What's next
Delete the hidden primary key (if applicable)
If you set Add a hidden primary key for a table without a primary key/a table without a unique key to Yes, delete the hidden primary key after switching your workloads to the destination cluster and before terminating the DTS instance. On the Basic Information tab of the Full Data Migration or Incremental Write module, click Delete Invisible Primary Key in the Actions column.
After deleting the hidden primary key, verify that the data in the source and destination databases is identical.
Reset sequence initial values
After switching workloads to the destination, newly written sequences do not continue from the maximum value in the source database. Before switching, query the maximum sequence values from the source database, then set those values as the initial values in the destination.
To query maximum sequence values from the source Oracle database, run:
```sql
DECLARE
  CURSOR cur_data IS
    SELECT SEQUENCE_NAME, LAST_NUMBER FROM user_sequences;
  v_column1_value varchar(1000);
  v_column2_value number(20);
BEGIN
  dbms_output.enable(NULL);
  OPEN cur_data;
  LOOP
    FETCH cur_data INTO v_column1_value, v_column2_value;
    EXIT WHEN cur_data%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE('select setval(''' || lower(v_column1_value) || ''', ' || v_column2_value || ');');
  END LOOP;
  CLOSE cur_data;
END;
/
```

The output contains setval statements for all sequences in the source database. Run only the statements for the sequences relevant to your workloads in the destination database.