
Data Transmission Service: Migrate data from a self-managed Oracle database to a self-managed PostgreSQL database

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a self-managed Oracle database to a self-managed PostgreSQL database with minimal downtime. DTS supports schema migration, full data migration, and incremental data migration — run all three together to keep your Oracle database online while the migration is in progress.

Prerequisites

Before you begin, make sure that:

  • The Oracle database version is 9i, 10g, 11g, 12c, 18c, or 19c

  • The Oracle database is running in ARCHIVELOG mode, archived log files are accessible, and a suitable retention period is set — see Managing Archived Redo Log Files

  • Supplemental logging is enabled: SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI are both set to Yes — see Supplemental Logging

  • Each table to be migrated has a primary key or a UNIQUE NOT NULL index

  • The destination PostgreSQL database is created and accessible

  • You have reviewed DTS capabilities and limits for Oracle sources — use Advanced Database & Application Migration (ADAM) to evaluate your database before migration. See Prepare an Oracle database and Overview
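
The log-related prerequisites can be verified against Oracle's `V$DATABASE` view. The sketch below is a hypothetical helper that only encodes the pass/fail logic; how you fetch the values (SQL*Plus, python-oracledb, or another client) is up to you:

```python
# Sketch: verify the logging prerequisites for a DTS Oracle source.
# The inputs come from Oracle's V$DATABASE view, e.g.:
#   SELECT LOG_MODE, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI
#   FROM V$DATABASE;
# This helper checks only the returned values; it does not connect to Oracle.

def logging_prereqs_ok(log_mode: str, supp_log_pk: str, supp_log_ui: str) -> list:
    """Return a list of problems; an empty list means the prerequisites pass."""
    problems = []
    if log_mode.upper() != "ARCHIVELOG":
        problems.append(f"database is in {log_mode} mode; ARCHIVELOG is required")
    if supp_log_pk.upper() != "YES":
        problems.append("SUPPLEMENTAL_LOG_DATA_PK must be Yes")
    if supp_log_ui.upper() != "YES":
        problems.append("SUPPLEMENTAL_LOG_DATA_UI must be Yes")
    return problems

print(logging_prereqs_ok("ARCHIVELOG", "YES", "YES"))    # []
print(logging_prereqs_ok("NOARCHIVELOG", "NO", "YES"))   # two problems
```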

Billing

Migration type | Task configuration fee | Internet traffic fee
Schema migration and full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview.
Incremental data migration | Charged. See Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview.

Migration types

Type | What DTS does
Schema migration | Migrates schemas of the selected objects to the destination PostgreSQL database. Supported object types: table, view, synonym, trigger, stored procedure, stored function, package, and custom type. Built-in objects generated automatically by Oracle are not migrated.

Note: DTS is incompatible with triggers in this scenario. Delete triggers from the source database before migration to avoid data inconsistency. See Configure a data synchronization task for a source database that contains a trigger.

Full data migration | Migrates all existing data from the Oracle database to the destination PostgreSQL database. Do not run DDL operations on migrated objects during this phase.
Incremental data migration | After full data migration completes, DTS polls redo log files from the Oracle database and replicates ongoing changes to PostgreSQL. This lets your applications keep running on Oracle during the migration.

Choose your migration types

  • Schema migration + full data migration only: use this for a one-time migration where you can take the source database offline during cutover.

  • Schema migration + full data migration + incremental data migration: use this to migrate with near-zero downtime. Your Oracle database stays online while DTS replicates ongoing changes to PostgreSQL.

If you do not include incremental data migration, stop writes to the source database during migration to avoid data inconsistency.

SQL operations supported for incremental migration

Operation type | Statements
DML | INSERT, UPDATE, DELETE
DDL | CREATE TABLE (excluding statements with partitioning clauses, subpartitioning clauses, or functions; CREATE TABLE AS SELECT is not supported), ALTER TABLE (ADD COLUMN, ADD INDEX, DROP COLUMN, DROP INDEX, MODIFY COLUMN, RENAME COLUMN), DROP TABLE, RENAME TABLE, TRUNCATE TABLE, CREATE INDEX
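
If your application issues DDL while incremental migration runs, it can help to screen statements against the categories above. The sketch below is a rough prefix match for illustration only, not a SQL parser, and does not cover the partitioning-clause exclusions:

```python
# Sketch: a rough pre-check for whether a DDL statement falls into the
# categories DTS replicates incrementally (per the table above).
# Simple prefix matching for illustration; not a real SQL parser.

SUPPORTED_DDL_PREFIXES = (
    "CREATE TABLE", "ALTER TABLE", "DROP TABLE",
    "RENAME TABLE", "TRUNCATE TABLE", "CREATE INDEX",
)

def ddl_maybe_supported(statement: str) -> bool:
    s = " ".join(statement.upper().split())  # normalize case and whitespace
    if s.startswith("CREATE TABLE") and " AS SELECT" in s:
        return False  # CREATE TABLE AS SELECT is not replicated
    return s.startswith(SUPPORTED_DDL_PREFIXES)

print(ddl_maybe_supported("create table t (id number primary key)"))  # True
print(ddl_maybe_supported("CREATE TABLE t2 AS SELECT * FROM t"))      # False
print(ddl_maybe_supported("CREATE SEQUENCE seq1"))                    # False
```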

Limitations

Source database limits:

  • The server hosting the source database must have enough outbound bandwidth; insufficient bandwidth reduces migration speed.

  • Tables to be migrated must have a PRIMARY KEY or UNIQUE constraint, and the constrained fields must be unique. Otherwise, the destination database may contain duplicate records.

  • If the Oracle version is 12c or later, table names cannot exceed 30 bytes.

  • If you select tables as migration objects and need to rename tables or columns in the destination: a single task supports up to 1,000 tables. For more than 1,000 tables, run multiple tasks in batches, or migrate the entire database in one task.

  • If a VARCHAR2 field in Oracle contains an empty string (treated as NULL in Oracle) and the corresponding destination column has a NOT NULL constraint, the migration task fails.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a VIP for the database when you configure the source database.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After specifying the VIP, node failover is not supported for the Oracle RAC database.
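
The 30-byte name limit for Oracle 12c and later can be screened before migration. A minimal sketch, assuming names are measured in UTF-8 bytes (use your database character set if it differs):

```python
# Sketch: flag table names that exceed the 30-byte limit DTS imposes for
# Oracle 12c and later sources. Lengths are measured in bytes, not characters;
# UTF-8 is assumed here for illustration.

def overlong_table_names(names, limit=30, encoding="utf-8"):
    return [n for n in names if len(n.encode(encoding)) > limit]

tables = ["ORDERS", "CUSTOMER_ADDRESS_HISTORY_ARCHIVE_2024", "订单明细"]
print(overlong_table_names(tables))  # ['CUSTOMER_ADDRESS_HISTORY_ARCHIVE_2024']
```

Note that multibyte characters count as several bytes, so a name well under 30 characters can still exceed the limit.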

Incremental migration requirements:

  • Redo logging and archive logging must be enabled.

  • Incremental migration only: retain redo logs and archive logs for more than 24 hours.

  • Full data migration + incremental data migration: retain redo logs and archive logs for at least seven days. After full data migration completes, you can reduce retention to more than 24 hours.

    Important

    If DTS cannot obtain redo logs or archive logs due to insufficient retention, the task fails. In extreme cases, data inconsistency or loss may occur. Insufficient retention is outside the DTS Service Level Agreement (SLA).
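
The retention rules above reduce to simple arithmetic. A minimal sketch, with retention expressed in hours:

```python
# Sketch: check log retention against the DTS guidance above.
# While full + incremental migration runs, retain logs for at least 7 days;
# for incremental-only tasks (or after full migration completes), retain
# logs for more than 24 hours.

FULL_PLUS_INCREMENTAL_HOURS = 7 * 24   # at least seven days
INCREMENTAL_ONLY_HOURS = 24            # strictly more than 24 hours

def retention_sufficient(retention_hours: float, full_migration_running: bool) -> bool:
    if full_migration_running:
        return retention_hours >= FULL_PLUS_INCREMENTAL_HOURS
    return retention_hours > INCREMENTAL_ONLY_HOURS

print(retention_sufficient(72, full_migration_running=True))    # False: 3 days < 7 days
print(retention_sufficient(48, full_migration_running=False))   # True
```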

Operations to avoid on the source database during migration:

  • Do not run DDL operations (schema changes) during schema migration or full data migration — the task fails.

  • Do not write to the source database during full-data-migration-only tasks — data inconsistency may result.

  • Do not update LONGTEXT columns during any migration phase — the task fails.

Other limits:

  • Migrate data during off-peak hours. Full data migration uses read and write resources on both databases, which increases server load.

  • After full data migration, the used tablespace of the destination database is larger than that of the source database because concurrent INSERT operations cause table fragmentation.

  • DTS retries failed migration tasks for up to seven days. Before switching workloads to the destination, stop or release any failed tasks, or revoke DTS write permissions on the destination database using REVOKE. Otherwise, a resumed task may overwrite data in the destination.

  • The character sets of the source and destination databases must be compatible — incompatible character sets cause data inconsistency or task failure.

  • Use DTS schema migration to avoid task failures from incompatible data types.

  • The time zones of the source and destination databases must match.

  • The null terminator '\0' cannot be written to PostgreSQL. If migrated data contains '\0', DTS drops it, which may cause data inconsistency.

  • During schema migration, CHECK constraints in Oracle are converted to NOT NULL constraints in PostgreSQL.

  • Custom type objects are migrated; built-in objects generated automatically by Oracle are not.

  • During schema migration, DTS migrates foreign keys from source to destination.

  • During full data migration and incremental data migration, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. Cascade or delete operations on the source during this phase may cause data inconsistency.
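
Two of the limits above concern individual values: Oracle treats an empty VARCHAR2 string as NULL, which fails against a NOT NULL destination column, and PostgreSQL cannot store the null terminator '\0', which DTS silently drops. A minimal pre-flight sketch (column names and row shape are illustrative):

```python
# Sketch: scan rows for two value-level pitfalls noted above.
#  1. Oracle stores an empty VARCHAR2 as NULL, which fails against a
#     NOT NULL column in PostgreSQL.
#  2. PostgreSQL cannot store '\0'; DTS drops it, risking inconsistency.
# Rows are modeled as dicts as if read from the source; names are illustrative.

def find_value_pitfalls(rows, not_null_columns):
    """Return (column, issue) pairs for every problematic value found."""
    issues = []
    for row in rows:
        for col, value in row.items():
            if value in ("", None) and col in not_null_columns:
                issues.append((col, "empty/NULL value hits a NOT NULL destination column"))
            if isinstance(value, str) and "\x00" in value:
                issues.append((col, "contains '\\0', which DTS drops on write"))
    return issues

rows = [{"name": "", "note": "abc\x00def"}]
print(find_value_pitfalls(rows, not_null_columns={"name"}))  # two issues
```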

Grant required permissions

Log in to the source Oracle database, create an account for DTS, and grant the required permissions.

Skip this step if you already have an account with the permissions listed below.
Database | Schema migration | Full data migration | Incremental data migration
Self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions
Self-managed PostgreSQL database | Schema owner permissions | Schema owner permissions | Schema owner permissions

For instructions on creating accounts and granting permissions, see the account management documentation for your database engine.

Create a migration task

  1. Go to the Data Migration Tasks page.

    1. Log in to the Data Management (DMS) console.

    2. In the top navigation bar, click DTS.

    3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

    Steps may vary based on the DMS console layout. See Simple mode and Customize the layout and style of the DMS console. You can also go to the Data Migration Tasks page of the new DTS console directly.
  2. From the drop-down list next to Data Migration Tasks, select the region where the data migration instance resides.

    In the new DTS console, select the region in the upper-left corner.
  3. Click Create Task. In the wizard, configure the source and destination databases.

    Warning

    After configuring the source and destination databases, read the limits displayed at the top of the page before proceeding.

    Section | Parameter | Description
    N/A | Task Name | DTS assigns a name automatically. Specify a descriptive name to identify the task easily. Task names do not need to be unique.
    Source Database | Database Type | Select Oracle.
    Source Database | Connection Type | Select Self-managed Database on ECS for this example. For other access methods, set up the required environment first. See Preparation overview.
    Source Database | Instance Region | The region where the source Oracle database resides.
    Source Database | ECS Instance ID | The ID of the Elastic Compute Service (ECS) instance hosting the source Oracle database.
    Source Database | Port Number | The service port of the source Oracle database. Default: 1521.
    Source Database | Oracle Type | Select the architecture of the source Oracle database. Non-RAC Instance requires the SID parameter. RAC or PDB Instance requires the Service Name parameter.
    Source Database | Database Account | The DTS account for the source Oracle database. See Grant required permissions for required permissions.
    Source Database | Database Password | The password for the database account.
    Destination Database | Database Type | Select PostgreSQL.
    Destination Database | Connection Type | Select Self-managed Database on ECS for this example.
    Destination Database | Instance Region | The region where the destination PostgreSQL database resides.
    Destination Database | ECS Instance ID | The ID of the ECS instance hosting the destination PostgreSQL database.
    Destination Database | Port Number | The service port of the destination PostgreSQL database. Default: 5432.
    Destination Database | Database Name | The name of the destination PostgreSQL database.
    Destination Database | Database Account | The DTS account for the destination PostgreSQL database. See Grant required permissions for required permissions.
    Destination Database | Database Password | The password for the database account.
  4. If your self-managed database has an IP address whitelist, add the DTS server CIDR blocks to it. Then click Test Connectivity and Proceed.

    Warning

    Adding DTS server CIDR blocks to a database whitelist or ECS security group rules creates security risks. Before proceeding, take preventive measures: use strong credentials, restrict exposed ports, authenticate API calls, review whitelist rules regularly, and block unauthorized CIDR blocks. Alternatively, connect the database to DTS using Express Connect, VPN Gateway, or Smart Access Gateway to avoid exposing the database to the public internet.

  5. Configure the migration objects and advanced settings.

    Parameter | Description
    Migration Types | Select Schema Migration and Full Data Migration for a one-time migration. Select Schema Migration, Full Data Migration, and Incremental Data Migration to keep the source database online during migration.
    Processing Mode of Conflicting Tables | Precheck and Report Errors: the precheck fails if the destination database contains tables with the same names as source tables. If identical table names exist and cannot be renamed, use object name mapping to rename the tables that are migrated to the destination database. See Map object names. Ignore Errors and Proceed: skips the precheck for identical table names.

    Warning: With Ignore Errors and Proceed, records whose primary keys match existing destination records are not migrated, and schema differences may cause partial migration or task failure.

    Source Objects | Select objects from Source Objects and click the arrow icon to move them to Selected Objects.
    Selected Objects | To rename a single object in the destination, right-click it in Selected Objects. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time.

    Note: Renaming an object may cause dependent objects to fail migration. To filter rows by condition, right-click a table and specify a WHERE clause. See Set filter conditions. To control which SQL operations are replicated incrementally for a specific object, right-click it and select the operations.

  6. Click Next: Advanced Settings and configure the following parameters.

    Parameter | Description
    Select the dedicated cluster used to schedule the task | DTS schedules to a shared cluster by default. Purchase a dedicated cluster for isolated resources. See What is a DTS dedicated cluster?
    Retry Time for Failed Connections | How long DTS retries if the source or destination database becomes unreachable after the task starts. Valid values: 10–1440 minutes. Default: 720. We recommend that you set the parameter to a value greater than 30. If DTS reconnects within the retry window, the task resumes; otherwise the task fails.

    Note: If multiple tasks share the same database, the most recently set retry time applies. DTS charges for the instance during retries; set the retry window based on your requirements and release the instance promptly when it is no longer needed.

    The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries if DDL or DML operations fail after the task starts. Valid values: 1–1440 minutes. Default: 10. We recommend that you set the parameter to a value greater than 10.

    Important: This value must be less than Retry Time for Failed Connections.

    Enable Throttling for Full Data Migration | Limit the read/write load DTS places on your databases during full data migration. Configure QPS to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected.
    Enable Throttling for Incremental Data Migration | Limit the load during incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected.
    Environment Tag | Tag the DTS instance by environment (for example, production or staging). Optional.
    Actual Write Code | The encoding format used to write data to the destination database.
    Configure ETL | Enable the extract, transform, and load (ETL) feature to transform data in flight. See What is ETL? and Configure ETL in a data migration or data synchronization task.
    Monitoring and Alerting | Configure alerts so designated contacts are notified when the task fails or migration latency exceeds your threshold. See Configure monitoring and alerting for a new DTS task.
  7. Click Next: Save Task Settings and Precheck. DTS runs a precheck before the task can start. If the precheck fails:

    • Click View Details next to each failed item, fix the issue, then click Precheck Again.

    • If an item shows an alert that you can safely ignore, click Confirm Alert Details > Ignore > OK, then click Precheck Again.

    To review the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview API Call.
  8. Wait until the precheck success rate reaches 100%, then click Next: Purchase Instance.

  9. On the Purchase Instance page, configure the instance class.

    Section | Parameter | Description
    New Instance Class | Resource Group | The resource group for the data migration instance. Default: default resource group. See What is Resource Management?
    New Instance Class | Instance Class | The instance class determines migration throughput. See Specifications of data migration instances.
  10. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms, then click Buy and Start. The migration task starts. Monitor its progress on the Task Management page.