
Data Transmission Service:Synchronize a self-managed Oracle database to PolarDB for PostgreSQL (Oracle compatible)

Last Updated: Apr 01, 2026

Use Data Transmission Service (DTS) to continuously synchronize data from a self-managed Oracle database to a PolarDB for PostgreSQL (Compatible with Oracle) cluster. DTS supports schema synchronization, full data synchronization, and incremental data synchronization, keeping the destination cluster in sync while your Oracle source continues to serve production traffic.

Prerequisites

Before you begin, ensure that you have:

  • A self-managed Oracle database that serves as the source.

  • A PolarDB for PostgreSQL (Compatible with Oracle) cluster created as the destination, with available storage larger than the data volume of the source.

Billing

Synchronization type Pricing
Schema synchronization and full data synchronization Free of charge
Incremental data synchronization Charged. See Billing overview

Supported SQL operations

Operation type SQL statements
DML (Data Manipulation Language) INSERT, UPDATE, and DELETE
DDL (Data Definition Language) CREATE TABLE (excluding CREATE TABLE AS SELECT and tables with partitions or subpartitions); ALTER TABLE (ADD COLUMN, ADD INDEX, DROP COLUMN, DROP INDEX, MODIFY COLUMN, RENAME COLUMN); DROP TABLE; RENAME TABLE; TRUNCATE TABLE; CREATE INDEX

Database account permissions

Database Required permissions Reference
Self-managed Oracle Fine-grained permissions Prepare a database account, CREATE USER, and GRANT
PolarDB for PostgreSQL (Compatible with Oracle) Schema owner permissions (privileged account recommended) Create a database account and Database management
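The fine-grained Oracle grants can be sketched as follows. This is illustrative only: the account name dts_sync and the password are placeholders, and the authoritative privilege list is in the linked Prepare a database account topic.

```sql
-- Illustrative sketch: create a dedicated account for DTS on the source
-- Oracle database. Account name and password are placeholders.
CREATE USER dts_sync IDENTIFIED BY "YourStrongPassword1";

GRANT CREATE SESSION TO dts_sync;           -- connect to the database
GRANT SELECT ANY TABLE TO dts_sync;         -- read data for full synchronization
GRANT SELECT ANY TRANSACTION TO dts_sync;   -- read transaction metadata
GRANT EXECUTE ON DBMS_LOGMNR TO dts_sync;   -- mine redo logs for incremental data
GRANT LOGMINING TO dts_sync;                -- required on Oracle 12c and later
```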
Important

Enable ARCHIVELOG mode and supplemental logging to capture incremental data changes. See Database configuration.
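The Oracle-side configuration above can be sketched as follows, assuming SYSDBA access. Enabling ARCHIVELOG mode requires restarting the database into MOUNT state, so schedule it during a maintenance window.

```sql
-- Enable ARCHIVELOG mode (requires a restart into MOUNT state).
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Enable supplemental logging so redo records carry enough column data
-- for DTS to reconstruct row changes.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Verify the result.
SELECT log_mode, supplemental_log_data_min, supplemental_log_data_pk
  FROM v$database;
```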

Limitations

Source database limitations

Limitation Details
Primary key or unique constraint required Tables must have a primary key or unique constraint with unique field values. Tables without this produce duplicate data in the destination.
Table name length (Oracle 12c and later) Table names must not exceed 30 bytes.
Maximum tables per task (table-level sync with mapping) 1,000 tables. To exceed this limit, split tables across multiple tasks or synchronize the entire database as a single object.
Oracle RAC connected via Express Connect If the source database is an Oracle RAC cluster connected through an Express Connect circuit, you cannot use the SCAN IP. Configure one of the cluster's VIPs in the connection settings instead. RAC node failover is not supported with this configuration.
Redo logs and archive logs Redo logs and archive logs must be enabled.
Log retention For incremental-only tasks: retain logs for at least 24 hours. For full + incremental tasks: retain logs for at least 7 days; after full synchronization completes, you can reduce this to 24 hours. If DTS cannot access these logs, the task may fail or cause data loss — this scenario is not covered by the DTS Service Level Agreement (SLA).
Primary/secondary switchover If a primary/secondary switchover occurs on the source database during synchronization, the task fails.
VARCHAR2 empty string with NOT NULL constraint Oracle treats an empty VARCHAR2 string as null. If the destination column has a NOT NULL constraint, the task fails.
Fine-Grained Audit (FGA) policy If FGA is enabled on a table, DTS cannot detect the ORA_ROWSCN pseudocolumn and the task fails. Disable the FGA policy on the table or exclude the table from synchronization.
Updating only large text fields Not supported. The task fails if an UPDATE modifies only large text fields.
DDL during schema or full synchronization Do not run DDL operations that change schema during schema synchronization or full synchronization. DTS queries cause metadata locks on the source that can also block DDL operations there.
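To check the primary key or unique constraint requirement before configuring the task, a query along these lines can help. This is a sketch only; YOUR_SCHEMA is a placeholder for the schema you plan to synchronize, and it assumes the querying account can read the DBA_* dictionary views.

```sql
-- Tables in a given schema that have neither a primary key (constraint
-- type 'P') nor a unique constraint (type 'U'). Replace YOUR_SCHEMA.
SELECT t.table_name
  FROM dba_tables t
 WHERE t.owner = 'YOUR_SCHEMA'
   AND NOT EXISTS (
         SELECT 1
           FROM dba_constraints c
          WHERE c.owner = t.owner
            AND c.table_name = t.table_name
            AND c.constraint_type IN ('P', 'U'));
```

Any table this query returns will produce duplicate data in the destination unless you add a constraint or exclude the table.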

Other limitations

Limitation Details
Oracle Data Pump during incremental sync Do not use Oracle Data Pump to write to the source database during incremental synchronization — data loss may occur.
External tables Cannot be synchronized.
Synchronization timing Run synchronization during off-peak hours. Initial full data synchronization consumes read and write resources on both the source and destination databases.
Tablespace size after full synchronization Concurrent INSERT operations during full synchronization cause table fragmentation in the destination. The destination tablespace size will be larger than the source after full synchronization completes.
Synchronization latency calculation DTS calculates latency based on the timestamp of the latest synchronized record and the current source timestamp. If no DML operations run on the source for an extended period, the reported latency may be inaccurate. To keep latency accurate, create a heartbeat table when synchronizing an entire database — DTS updates it every second.
ROWID-based unique indexes The destination cluster generates unique indexes (for example, pg_oid_1498503_index) to correspond to ROWID values in the source. The destination will have more indexes than the source.
String terminator '\0' The destination cluster does not support the string terminator '\0'. DTS drops this character if present in source data, causing data inconsistency.
CHECK constraints CHECK constraints from the source are converted to NOT NULL constraints in the destination.
Character set compatibility The character sets of the source and destination must be compatible. Incompatibility causes data inconsistency or task failure.
Data type compatibility Use the schema synchronization feature of DTS to prevent failures from incompatible data types.
Time zone The source and destination must use the same time zone.
Custom-defined types Synchronized to the destination. Oracle's automatically generated built-in type objects are not synchronized — the destination already supports them.
External writes to destination during sync Do not write data from sources other than DTS to the destination during synchronization — data inconsistency may occur.
Sequences DTS validates data content but not metadata such as sequences. Validate sequences manually. After you switch your business to the destination, sequences do not increment from the maximum source value. Update sequence values in the destination before switching. See Update the sequence value in the destination database.
session_replication_role for foreign keys and triggers For a full or incremental synchronization task, if the tables to synchronize contain foreign keys, triggers, or event triggers: DTS sets session_replication_role to replica at the session level when the destination account has privileged or superuser permissions. If the account lacks these permissions, set session_replication_role to replica manually. Cascade update or delete operations on the source while this parameter is replica may cause data inconsistency. After the DTS task is released, change session_replication_role back to origin.
Task failure recovery If a task fails, DTS support staff attempt to restore it within 8 hours. They may restart the task or adjust task parameters (not database parameters).
Foreign keys During schema synchronization, DTS synchronizes foreign keys from the source to the destination. During full and incremental synchronization, DTS temporarily disables constraint checks and foreign key cascade operations at the session level. Data inconsistency may occur if cascade update or delete operations run on the source while the task is active.
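The session_replication_role change described above can be sketched as follows on the destination. Note that session_replication_role is a superuser-level parameter in PostgreSQL-family databases, so run this from an administrator session; the role name dts_writer is a placeholder for the account DTS uses to write to the destination.

```sql
-- Sketch only: apply the setting at the role level so every session of
-- the DTS account inherits it. dts_writer is a placeholder role name.
ALTER ROLE dts_writer SET session_replication_role = 'replica';

-- After the DTS task is released, restore the default behavior.
ALTER ROLE dts_writer SET session_replication_role = 'origin';
```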

Create a synchronization task

Step 1: Open the data synchronization task list

Open the task list using either the DTS console or the DMS console.

DTS console

  1. Log on to the DTS console.

  2. In the left navigation pane, click Data Synchronization.

  3. In the upper-left corner, select the region where the synchronization instance will be located.

DMS console

Note

The steps may vary depending on the mode and layout of the DMS console. See Simple mode console and Customize the layout and style of the DMS console.

  1. Log on to the DMS console.

  2. In the top menu bar, choose Data + AI > DTS (DTS) > Data Synchronization.

  3. To the right of Data Synchronization Tasks, select the region of the synchronization instance.

Step 2: Configure the task

  1. Click Create Task.

  2. (Optional) In the upper-right corner, click New Configuration Page to switch to the new configuration UI.

    If the button reads Back to Previous Version, you are already on the new configuration page. The new version is recommended.
  3. Configure the source and destination databases using the following parameters.

    Task settings

    Parameter Description
    Task Name DTS generates a name automatically. Specify a descriptive name for easy identification — the name does not need to be unique.

    Source database

    Parameter Description
    Select Existing Connection Select a registered database instance from the list, or leave blank and configure the fields below manually. In the DMS console, this field is labeled Select a DMS database instance.
    Database Type Select Oracle.
    Access Method Select the method that matches your source database deployment. This topic uses Self-managed Database on ECS as an example. For other access methods and any additional steps they require, see Overview of preparations.
    Instance Region Select the region where the source Oracle database resides.
    ECS Instance ID Select the ID of the ECS instance running the source Oracle database.
    Port Number Enter the service port of the source Oracle database. The default is 1521.
    Oracle Type Select Non-RAC Instance and provide the SID, or select RAC or PDB Instance and provide the Service Name. This topic uses RAC or PDB Instance with a Service Name.
    Database Account Enter the database account. For permission requirements, see Database account permissions.
    Database Password Enter the password for the database account.
    Encryption Select Non-encrypted or SSL-encrypted. For SSL encryption, upload the CA Certificate and enter the CA Key. SSL encryption is available only when Access Method is Express Connect, VPN Gateway, or Smart Access Gateway and Oracle Type is RAC or PDB Instance.

    Destination database

    Parameter Description
    Select Existing Connection Select a registered database instance from the list, or leave blank and configure the fields below manually. In the DMS console, this field is labeled Select a DMS database instance.
    Database Type Select PolarDB (Compatible with Oracle).
    Access Method Select Alibaba Cloud Instance.
    Instance Region Select the region where the destination cluster is located.
    Instance ID Select the ID of the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster.
    Database Name Enter the name of the database in the destination cluster that will receive the synchronized objects.
    Database Account Enter the database account for the destination cluster. For permission requirements, see Database account permissions.
    Database Password Enter the password for the database account.
    Encryption Select Non-encrypted or SSL-encrypted. For SSL encryption, upload the CA Certificate.
  4. Click Test Connectivity and Proceed at the bottom of the page. In the CIDR Blocks of DTS Servers dialog box, click Test Connectivity.

    DTS server IP address blocks must be added to the security settings of the source and destination databases before the connectivity test. DTS can add them automatically, or you can add them manually. See Add the IP address whitelist of DTS servers.

Step 3: Select objects to synchronize

  1. On the Configure Objects page, set the following parameters.

    Parameter Description
    Synchronization Types DTS always selects Incremental Data Synchronization. By default, Schema Synchronization and Full Data Synchronization are also selected. DTS uses the full data of the selected objects to initialize the destination cluster before starting incremental synchronization.
    Processing Mode of Conflicting Tables Precheck and Report Errors: Checks for tables with the same name in the destination. If any are found, the precheck reports an error and the task does not start. To handle naming conflicts without deleting or renaming the destination table, map the object to a different name. See Database Table Column Name Mapping. Ignore Errors and Proceed: Skips the name conflict check.
    Warning

    This option may cause data inconsistency. During full synchronization, DTS skips source records that conflict with destination records. During incremental synchronization, DTS overwrites destination records with source records. If schemas are inconsistent, data initialization may fail partially or entirely.

    Capitalization of Object Names in Destination Instance Sets the case policy for database, table, and column names in the destination. The default is DTS default policy. See Case policy for destination object names.
    Source Objects Click objects in the Source Objects box, then click the right-arrow icon to move them to the Selected Objects box. You can select objects at the database, table, or column level.
    Selected Objects If you are not using a privileged account, or if the source schema name does not match the naming conventions of PolarDB for PostgreSQL (Compatible with Oracle), right-click the schema in the Selected Objects pane, open the Edit Schema dialog box, and update the Schema Name to match the target database. See Map table and column names. To select specific SQL operations for a database or table, right-click the object and choose operations from the dialog. To apply row filters, right-click a table and configure filter conditions. See Set filter conditions. Renaming an object may break synchronization for other objects that depend on it.
  2. Click Next: Advanced Settings and configure the following parameters.

    Parameter Description
    Dedicated Cluster for Task Scheduling DTS uses a shared cluster by default. For greater task stability, purchase a dedicated cluster. See What is a DTS dedicated cluster?
    Retry Time for Failed Connections How long DTS retries after a connection failure. Default: 720 minutes. Valid range: 10–1,440 minutes. Set to 30 minutes or more. If the connection is restored within this period, the task resumes automatically.
    Note

    When multiple DTS instances share a source or destination, DTS applies the shortest configured retry duration across all instances. DTS charges for runtime during connection retries.

    Retry Time for Other Issues How long DTS retries after non-connection errors (for example, DDL or DML execution errors). Default: 10 minutes. Valid range: 1–1,440 minutes. Set to 10 minutes or more. This value must be less than Retry Time for Failed Connections.
    Enable Throttling for Full Data Synchronization Limits full synchronization read/write rates to reduce load on the destination. Set Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s). Available only when Full Data Synchronization is selected. You can also adjust the rate while the task is running.
    Enable Throttling for Incremental Data Synchronization Limits incremental synchronization rates. Set RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s).
    Environment Tag Attach an environment tag to the instance for identification.
    Actual Write Code Select the encoding type for data written to the destination.
Configure ETL Choose whether to enable extract, transform, and load (ETL). Select Yes to enter data processing statements in the code editor. See Configure ETL in a data migration or data synchronization task. Select No to disable ETL. For ETL background information, see What is ETL?
    Monitoring and Alerting Configure alerts for task failures or latency exceeding a threshold. Select Yes to set an alert threshold and notification contacts. See Configure monitoring and alerting during task configuration.
  3. Click Data Verification to configure a data verification task. See Configure data verification.

Step 4: Save the task and run the precheck

  • To preview the API parameters for this configuration, hover over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

  • Click Next: Save Task Settings and Precheck to save the configuration and start the precheck.

DTS runs a precheck before starting any synchronization task. The task starts only if the precheck passes.
If the precheck fails, click View Details next to the failed item, fix the issue, and rerun the precheck.
If the precheck generates non-ignorable warnings, click View Details, fix the issue, and rerun.
If the precheck generates ignorable warnings, click Confirm Alert Details > Ignore > OK, then click Precheck Again to proceed. Ignoring warnings may cause data inconsistency.

Step 5: Purchase the instance

  1. When the Success Rate reaches 100%, click Next: Purchase Instance.

  2. On the Purchase page, select the billing method and instance specifications.

    Parameter Description
    Billing Method Subscription: Pay upfront for a fixed duration. Cost-effective for long-term, continuous tasks. Pay-as-you-go: Billed hourly for actual usage. Suitable for short-term or test tasks — release the instance at any time to stop charges.
    Resource Group Settings The resource group for the instance. Default: default resource group. See What is Resource Management?
    Instance Class Determines synchronization performance. Select based on your data volume and throughput requirements. See Data synchronization link specifications.
    Subscription Duration Available only for Subscription billing. Monthly options: 1–9 months. Yearly options: 1, 2, 3, or 5 years.
  3. Select the Data Transmission Service (Pay-as-you-go) Service Terms checkbox.

  4. Click Buy and Start, then click OK in the confirmation dialog.

Monitor the task on the data synchronization page.

What's next

After the synchronization task starts and the destination cluster is fully initialized, validate data in the destination before switching business traffic:

  • Update sequence values in the destination before switching. See Update the sequence value in the destination database.

  • Verify that session_replication_role is set back to origin in the destination after the DTS task is released.

  • Check that character sets and time zones match between the source and destination to avoid post-switch data issues.
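The sequence update in the first bullet can be sketched as follows. Names and values are placeholders; query the source for each sequence's current value first, then advance the matching destination sequence past it.

```sql
-- On the source Oracle database: find the current value of each sequence
-- in the schema you synchronized (YOUR_SCHEMA is a placeholder).
SELECT sequence_name, last_number
  FROM dba_sequences
 WHERE sequence_owner = 'YOUR_SCHEMA';

-- On the destination cluster: advance the matching sequence beyond the
-- source value so newly inserted rows do not collide with synchronized
-- data. Sequence name and value are placeholders.
SELECT setval('your_schema.order_id_seq', 100000);
```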