
Data Transmission Service: Migrate data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance

Last Updated: Mar 28, 2026

Use Data Transmission Service (DTS) to migrate data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance. After migration, you can build real-time data warehouses on the destination instance.

Prerequisites

Before you begin, make sure that you have:

  • A self-managed Oracle database and a destination AnalyticDB for PostgreSQL instance. To create the destination instance, see Create an instance.

  • The Oracle database running in ARCHIVELOG mode with accessible archive logs and a suitable retention period. See Managing Archived Redo Log Files.

  • Supplemental logging enabled on the Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. See Supplemental Logging.

  • Familiarity with DTS capabilities and limits for Oracle migrations. Use Advanced Database & Application Migration (ADAM) to evaluate your database before migrating. See Prepare an Oracle database and Overview.
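
The ARCHIVELOG and supplemental logging prerequisites above can be verified with a few queries. The following is a minimal sketch; run it as a privileged user on the source Oracle database:

```sql
-- Check that the database runs in ARCHIVELOG mode (expected: ARCHIVELOG).
SELECT LOG_MODE FROM V$DATABASE;

-- Check supplemental logging status (both columns should return YES).
SELECT SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI FROM V$DATABASE;

-- Enable database-level supplemental logging if it is not already on.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;
```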

Migration types

| Migration type | Description |
| --- | --- |
| Schema migration | Migrates schemas for the following object types: table, index, constraint, function, sequence, and view. Because Oracle and AnalyticDB for PostgreSQL are heterogeneous databases, schema consistency after migration is not guaranteed. Evaluate data type conversion impact before migrating. See Data type mappings between heterogeneous databases. DTS does not support triggers. Delete source triggers before migration to prevent data inconsistency. For partitioned tables, DTS discards partition definitions; define partitions in the destination database manually. |
| Full data migration | Migrates all existing data from the source database to the destination instance. |
| Incremental data migration | Polls redo log files from the Oracle database and migrates incremental data in real time, allowing migration without interrupting source application services. |

Choosing a migration strategy:

| Strategy | When to use |
| --- | --- |
| Schema migration + full data migration | One-time migration where downtime is acceptable. Stop writes to the source database during migration to ensure data consistency. |
| Schema migration + full data migration + incremental data migration | Keep services running during migration, then cut over with minimal downtime. Recommended for production databases. |

SQL operations supported for incremental migration

| Operation type | SQL statements |
| --- | --- |
| DML | INSERT, UPDATE, DELETE |
| DDL | ADD COLUMN |
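
For example, of the following statements run against a hypothetical orders table on the source, all four are captured by incremental migration, while other DDL such as DROP COLUMN is not:

```sql
-- DML: captured by incremental migration.
INSERT INTO orders (id, amount) VALUES (1, 99.5);
UPDATE orders SET amount = 100 WHERE id = 1;
DELETE FROM orders WHERE id = 1;

-- DDL: only ADD COLUMN is supported.
ALTER TABLE orders ADD (status VARCHAR2(20));
```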

Billing

| Migration type | Instance configuration fee | Internet traffic fee |
| --- | --- | --- |
| Schema migration and full data migration | Free | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |
| Incremental data migration | Charged. See Billing overview. | Charged only when data is migrated from Alibaba Cloud over the Internet. See Billing overview. |

Limitations

During schema migration, DTS migrates foreign keys from the source database to the destination database. During full and incremental data migration, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. Cascade delete operations on the source during migration may cause data inconsistency.

Source database limitations:

  • The server must have sufficient outbound bandwidth. Insufficient bandwidth reduces migration speed.

  • For Oracle RAC databases connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN): use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a VARCHAR2 field contains an empty string (treated as NULL in Oracle) and the corresponding destination column has a NOT NULL constraint, the migration task fails.

  • Tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate records.

  • For Oracle 12c and later: table names cannot exceed 30 bytes.

  • If you select tables as migration objects and need to rename tables or columns in the destination, a single task supports a maximum of 1,000 tables. For more than 1,000 tables, create multiple tasks or migrate the entire database.

  • For incremental migration, redo logs and archive logs must be retained as follows. Insufficient log retention may cause the task to fail, or result in data inconsistency or loss, and DTS Service Level Agreements (SLAs) do not apply in that case.

    • Incremental migration only: retain logs for more than 24 hours.

    • Full migration + incremental migration: retain logs for at least 7 days. After full migration completes, you can reduce the retention period to more than 24 hours.

  • During schema migration and full data migration: do not perform DDL operations on the source database.

  • During full migration only: do not write data to the source database during migration.

  • During migration: do not update LONGTEXT fields.
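
To check the PRIMARY KEY or UNIQUE constraint requirement above before you configure the task, a query along the following lines lists tables that lack both constraint types; the schema name MYAPP is a placeholder:

```sql
-- List tables in the schema that have neither a PRIMARY KEY ('P')
-- nor a UNIQUE ('U') constraint; such tables may produce duplicate
-- records in the destination.
SELECT t.TABLE_NAME
FROM   ALL_TABLES t
WHERE  t.OWNER = 'MYAPP'
AND    NOT EXISTS (
         SELECT 1
         FROM   ALL_CONSTRAINTS c
         WHERE  c.OWNER = t.OWNER
         AND    c.TABLE_NAME = t.TABLE_NAME
         AND    c.CONSTRAINT_TYPE IN ('P', 'U')
       );
```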

Other limitations:

  • Only tables can be selected as migration objects. Append-optimized (AO) tables are not supported in the destination.

  • If column mapping is used for non-full-table migration or the source and destination schemas differ, source columns not present in the destination are lost.

  • AnalyticDB for PostgreSQL does not support the string terminator '\0'. DTS omits the terminator when writing, which may cause data inconsistency.

  • Run migrations during off-peak hours. Full data migration uses read and write resources on both the source and destination, which increases database load.

  • During full data migration, DTS performs concurrent INSERT operations, which causes fragmentation in the destination tables. As a result, the destination tablespace is larger than that of the source database after migration completes.

  • DTS attempts to resume failed tasks for up to 7 days. Before switching workloads to the destination, stop or release failed tasks, or run REVOKE to remove write permissions from the DTS accounts. Otherwise, resumed tasks may overwrite destination data.

Data type mappings

For data type mappings between Oracle and AnalyticDB for PostgreSQL, see Data type mappings between heterogeneous databases.

Set up the Oracle database account

Create an account on the Oracle database and grant the required permissions based on the migration types you plan to use.

Skip this step if you already have an account with the required permissions.

Required permissions by migration type:

| Database | Schema migration | Full data migration | Incremental data migration |
| --- | --- | --- | --- |
| Self-managed Oracle database | Schema owner permissions | Schema owner permissions | Fine-grained permissions |
| AnalyticDB for PostgreSQL instance | Write permissions on the destination database (applies to all migration types) | | |

To create an Oracle account, see CREATE USER and GRANT.

For the AnalyticDB for PostgreSQL destination, the initial account has the required write permissions. Alternatively, use an account with the RDS_SUPERUSER permission. See Create a database account and Manage users and permissions.
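
As a sketch, an Oracle account for schema and full data migration might be created as follows. The account name, password, and schema object are placeholders, and incremental migration requires the additional fine-grained grants described in the referenced topics:

```sql
-- Create a migration account (name and password are placeholders).
CREATE USER dts_migrator IDENTIFIED BY "YourStrongPassword1";

-- Allow the account to connect.
GRANT CREATE SESSION TO dts_migrator;

-- Grant read access to the schema objects to be migrated,
-- for example a table in a hypothetical MYAPP schema.
GRANT SELECT ON MYAPP.ORDERS TO dts_migrator;
```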

Create a migration task

  1. Go to the Data Migration Tasks page.

    1. Log on to the Data Management (DMS) console.

    2. In the top navigation bar, click DTS.

    3. In the left-side navigation pane, choose DTS (DTS) > Data Migration.

    Console layout may vary. See Simple mode and Customize the layout and style of the DMS console. You can also go directly to the Data Migration Tasks page of the new DTS console.
  2. From the drop-down list next to Data Migration Tasks, select the region where the migration instance resides.

    In the new DTS console, select the region in the upper-left corner.
  3. Click Create Task. In the Create Data Migration Task wizard, configure the source and destination databases.

    Warning

    Read the limits displayed at the top of the page before proceeding. Skipping this step may cause the task to fail or result in data inconsistency.

    Source database parameters:

    | Parameter | Description |
    | --- | --- |
    | Task Name | Enter a descriptive name to identify the task. Task names do not need to be unique. |
    | Database Type | Select Oracle. |
    | Connection Type | Select the access method for the source database. This example uses Public IP Address. For other access methods, set up the required environment first. See Preparation overview. |
    | Instance Region | Select the region where the source Oracle database resides. |
    | Hostname or IP address | Enter the endpoint for the source Oracle database. |
    | Port Number | Enter the service port. Default: 1521. The port must be accessible over the Internet in this example. |
    | Oracle Type | Select Non-RAC Instance (requires SID) or RAC or PDB Instance (requires Service Name). This example uses Non-RAC Instance. |
    | Database Account | Enter the Oracle account. See Set up the Oracle database account for required permissions. |
    | Database Password | Enter the account password. |

    Destination database parameters:

    | Parameter | Description |
    | --- | --- |
    | Database Type | Select AnalyticDB for PostgreSQL. |
    | Connection Type | Select Alibaba Cloud Instance. |
    | Instance Region | Select the region where the destination instance resides. |
    | Instance ID | Select the destination AnalyticDB for PostgreSQL instance. |
    | Database Name | Enter the name of the destination database. |
    | Database Account | Enter the destination database account. See Set up the Oracle database account for required permissions. |
    | Database Password | Enter the account password. |
  4. If the source database has an IP address whitelist configured, add the CIDR blocks of DTS servers to the whitelist. Then click Test Connectivity and Proceed.

    Warning

    Adding DTS server CIDR blocks to your database whitelist or ECS security group rules creates potential security exposure. Take preventive measures before proceeding: use strong credentials, limit exposed ports, authenticate API calls, and regularly review whitelist rules. For more secure connectivity, use Express Connect, VPN Gateway, or Smart Access Gateway instead of public IP access.

  5. Configure migration objects and settings.

    Renaming an object may cause dependent objects to fail migration.
    | Parameter | Description |
    | --- | --- |
    | Migration Types | Select the migration types based on your needs. For a one-time migration, select Schema Migration and Full Data Migration. To keep services running during migration, also select Incremental Data Migration. If you do not select Incremental Data Migration, do not write data to the source database during migration. |
    | Processing Mode of Conflicting Tables | Precheck and Report Errors: checks for identical table names between source and destination before starting. The task fails precheck if conflicts exist. Use object name mapping to resolve conflicts. See Map object names. Ignore Errors and Proceed: skips precheck for identical table names. If schemas are identical, DTS skips records with matching primary keys. If schemas differ, only specific columns migrate or the task fails. Use with caution. |
    | Source Objects | Select objects to migrate and click the arrow icon to add them to Selected Objects. |
    | Selected Objects | To rename a single object in the destination, right-click it. See Map the name of a single object. To rename multiple objects at once, click Batch Edit. See Map multiple object names at a time. To filter rows by condition, right-click a table and specify WHERE clause conditions. See Set filter conditions. To select specific SQL operations for incremental migration on a table or database, right-click the object and choose the operations. |
  6. Click Next: Advanced Settings and configure the following parameters. To use data verification, see Configure data verification.

    Advanced settings:

    | Parameter | Description |
    | --- | --- |
    | Select the dedicated cluster used to schedule the task | By default, DTS schedules to a shared cluster. To use a dedicated cluster, purchase one first. See What is a DTS dedicated cluster? |
    | Retry Time for Failed Connections | How long DTS retries when it cannot connect to the source or destination database. Valid range: 10–1,440 minutes. Default: 720 minutes. Set this to more than 30 minutes so that transient network issues do not fail the task. If DTS reconnects within this period, the task resumes automatically. Note: When multiple tasks share the same source or destination database, the most recently set retry time applies to all. DTS instance charges continue during retry. |
    | The wait time before a retry when other issues occur in the source and destination databases | How long DTS retries when DDL or DML operations fail. Valid range: 1–1,440 minutes. Default: 10 minutes. Set this to more than 10 minutes. This value must be less than the Retry Time for Failed Connections value. |
    | Enable Throttling for Full Data Migration | Throttles full migration to reduce load on the source and destination. Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Migration is selected. |
    | Enable Throttling for Incremental Data Migration | Throttles incremental migration. Configure RPS of Incremental Data Migration and Data migration speed for incremental migration (MB/s). Available only when Incremental Data Migration is selected. |
    | Environment Tag | An optional tag to identify the DTS instance by environment (for example, production or test). |
    | Actual Write Code | The encoding format for writing data to the destination database. |
    | Configure ETL | Enables the extract, transform, and load (ETL) feature. Select Yes to enter data processing statements. See Configure ETL in a data migration or data synchronization task. Select No to skip. See What is ETL? |
    | Monitoring and Alerting | Select Yes to receive alerts when the task fails or migration latency exceeds a threshold. Specify the alert threshold and contacts. See Configure monitoring and alerting for a new DTS task. Select No to skip. |
  7. Specify the primary key columns and distribution key columns for the tables to be migrated to the AnalyticDB for PostgreSQL instance.

  8. Click Next: Save Task Settings and Precheck.

    To preview the API call parameters, hover over Next: Save Task Settings and Precheck and click Preview API Call.
  9. Wait for the precheck to complete and the success rate to reach 100%, then click Next: Purchase Instance.

    • If an item fails, click View Details next to the failed item, fix the issue, and then run the precheck again.

    • If an alert appears for an item, click Confirm Alert Details. In the View Details dialog, click Ignore, confirm, and then click Precheck Again. Ignoring an alert may cause data inconsistency.

  10. Read and accept the Data Transmission Service (Pay-as-you-go) Service Terms by selecting the check box.

  11. Click Buy and Start. Monitor task progress on the Task Management page.
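
In step 7 above, the primary key and distribution key columns you specify correspond to the PRIMARY KEY and DISTRIBUTED BY clauses in AnalyticDB for PostgreSQL. A hand-written equivalent for a hypothetical orders table might look like this:

```sql
-- Rows are distributed across compute nodes by the hash of customer_id;
-- a high-cardinality, frequently joined column helps avoid data skew.
-- The primary key must include the distribution key column.
CREATE TABLE orders (
    id          BIGINT NOT NULL,
    customer_id BIGINT NOT NULL,
    amount      NUMERIC(12, 2),
    PRIMARY KEY (id, customer_id)
)
DISTRIBUTED BY (customer_id);
```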

What's next

After migration completes and before you switch workloads to the destination instance:

  • Stop or release any failed DTS tasks, or run REVOKE to remove DTS write permissions on the destination database. DTS may resume failed tasks for up to 7 days and overwrite destination data.

  • Validate data consistency between the source and destination. See Configure data verification.

  • For supported data type mappings between Oracle and AnalyticDB for PostgreSQL, see Data type mappings between heterogeneous databases.
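
The REVOKE step above can be sketched as follows on the destination instance; dts_migrator is a hypothetical account name:

```sql
-- Remove write permissions from the DTS account so that a resumed
-- task cannot overwrite destination data after cutover.
REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_migrator;
```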