
Data Transmission Service: Synchronize data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance

Last Updated: Feb 28, 2026

Data Transmission Service (DTS) synchronizes data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance. DTS supports schema synchronization, full data synchronization, and incremental data synchronization.

Prerequisites

Before you begin, ensure that you have:

  • A self-managed Oracle database running engine version 9i, 10g, 11g, 12c, 18c, or 19c

  • ARCHIVELOG mode enabled on the Oracle database, with accessible archived log files and an appropriate retention period configured. For more information, see Managing Archived Redo Log Files

  • Supplemental logging enabled on the Oracle database, with SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI set to Yes. For more information, see Supplemental Logging

  • An AnalyticDB for PostgreSQL instance created. For more information, see Create an instance

  • Familiarity with the capabilities and limits of DTS for Oracle data synchronization. Use Advanced Database & Application Migration (ADAM) for database evaluation to help ensure a smooth synchronization. For more information, see Prepare an Oracle database and Overview
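To confirm the logging prerequisites before you configure the task, you can query the V$DATABASE view, which exposes the LOG_MODE, SUPPLEMENTAL_LOG_DATA_PK, and SUPPLEMENTAL_LOG_DATA_UI columns. The sketch below shows the query and a small helper that interprets the returned row; how you execute the query (for example, with SQL*Plus or python-oracledb) is up to you, and the helper itself is illustrative, not part of DTS.

```python
# SQL to run against the source Oracle database as a privileged user.
CHECK_SQL = (
    "SELECT log_mode, supplemental_log_data_pk, supplemental_log_data_ui "
    "FROM v$database"
)

def logging_prerequisites_met(log_mode: str, supp_pk: str, supp_ui: str) -> bool:
    """Interpret one row returned by CHECK_SQL.

    DTS requires ARCHIVELOG mode plus primary key (PK) and unique
    index (UI) supplemental logging.
    """
    return (
        log_mode.upper() == "ARCHIVELOG"
        and supp_pk.upper() == "YES"
        and supp_ui.upper() == "YES"
    )

print(logging_prerequisites_met("ARCHIVELOG", "YES", "YES"))    # True
print(logging_prerequisites_met("NOARCHIVELOG", "YES", "YES"))  # False
```

If the helper returns False, enable ARCHIVELOG mode and supplemental logging as described in the references above before you create the synchronization task.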

Billing

  • Schema synchronization and full data synchronization: Free of charge.
  • Incremental data synchronization: Charged. For more information, see Billing overview.

Database account permissions

  • Self-managed Oracle database: Fine-grained permissions are required. For more information, see Prepare a database account, CREATE USER, and GRANT.
  • AnalyticDB for PostgreSQL instance: Write permissions on the destination database are required. The initial account of the AnalyticDB for PostgreSQL instance has the required permissions. For more information, see Create a database account. An account with the RDS_SUPERUSER permission also works. For more information, see Manage users and permissions.
Important

To synchronize incremental data from an Oracle database, enable archive logging and supplemental logging to obtain incremental data. For more information, see the Configure an Oracle database section of the "Prepare an Oracle database" topic.

Create the synchronization task

Step 1: Open the Data Synchronization Tasks page

  1. Log on to the Data Management (DMS) console.

  2. In the top navigation bar, click Data + AI.

  3. In the left-side navigation pane, choose DTS (DTS) > Data Synchronization.

Note

Operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode and Customize the layout and style of the DMS console. You can also go to the Data Synchronization Tasks page of the new DTS console.

Step 2: Configure source and destination databases

  1. On the right side of Data Synchronization Tasks, select the region in which the data synchronization instance resides.

    Note

    If you use the new DTS console, select the region in the top navigation bar.

  2. Click Create Task. In the Create Data Synchronization Task wizard, configure the source and destination databases.

    Source database parameters

    • Task Name: The name of the DTS task. DTS generates a task name automatically. Specify a descriptive name that makes it easy to identify the task. A unique task name is not required.
    • Database Type: Select Oracle.
    • Connection Type: The access method of the source database. In this example, Self-managed Database on ECS is selected. If your source database is a self-managed database, deploy the network environment first. For more information, see Preparation overview.
    • Instance Region: The region in which the self-managed Oracle database resides.
    • ECS Instance ID: The ID of the Elastic Compute Service (ECS) instance that hosts the self-managed Oracle database.
    • Port Number: The service port number of the self-managed Oracle database. Default value: 1521.
    • Oracle Type: The architecture of the self-managed Oracle database. Select Non-RAC Instance and configure the SID parameter, or select RAC or PDB Instance and configure the Service Name parameter. In this example, Non-RAC Instance is selected.
    • Database Account: The account of the self-managed Oracle database. For more information about the required permissions, see the Database account permissions section.
    • Database Password: The password for the database account.

    Destination database parameters

    • Database Type: Select AnalyticDB for PostgreSQL.
    • Connection Type: Select Alibaba Cloud Instance.
    • Instance Region: The region in which the AnalyticDB for PostgreSQL instance resides.
    • Instance ID: The ID of the AnalyticDB for PostgreSQL instance.
    • Database Name: The name of the database in the AnalyticDB for PostgreSQL instance to which data is synchronized.
    • Database Account: The database account of the AnalyticDB for PostgreSQL instance. For more information about the required permissions, see the Database account permissions section.
    • Database Password: The password for the database account.
  3. Click Test Connectivity and Proceed.

    • If the source or destination database is an Alibaba Cloud instance, DTS automatically adds the CIDR blocks of DTS servers to the whitelist of the instance.

    • If the source or destination database is a self-managed database hosted on an ECS instance, DTS automatically adds the CIDR blocks of DTS servers to the security group rules of the ECS instance. Make sure that the ECS instance can access the database. If the database is deployed on multiple ECS instances, manually add the CIDR blocks of DTS servers to the security group rules of each ECS instance.

    • If the source or destination database is a self-managed database deployed in a data center or provided by a third-party cloud service provider, manually add the CIDR blocks of DTS servers to the whitelist of the database.

    For more information, see CIDR blocks of DTS servers.

    Warning

    Adding CIDR blocks of DTS servers to the database whitelist or ECS security group rules may introduce security risks. Before using DTS, take preventive measures such as strengthening credentials, limiting exposed ports, authenticating API calls, regularly reviewing whitelist and security group rules, and connecting the database to DTS through Express Connect, VPN Gateway, or Smart Access Gateway.

Step 3: Configure synchronization objects and settings

Configure the objects to be synchronized and advanced settings.

  • Synchronization Types: By default, Incremental Data Synchronization is selected. Also select Schema Synchronization and Full Data Synchronization. After the precheck, DTS synchronizes the historical data of the selected objects from the source database to the destination database as the basis for subsequent incremental synchronization.

  • Processing Mode for Existing Destination Tables:
    • Precheck and Report Errors: Checks whether the destination database contains tables with the same names as tables in the source database. If identical table names exist, an error is returned and the task cannot start. To resolve this, rename tables using the object name mapping feature. For more information, see Map object names.
    • Ignore Errors and Proceed: Skips the precheck for identical table names. During full data synchronization, DTS does not synchronize data records with the same primary key or unique key values and retains existing records in the destination database. During incremental data synchronization, DTS overwrites existing records. If schemas differ, data initialization may fail partially or completely.

  • Table Merging:
    • Yes: Merges multiple source tables with the same schema into a single destination table. DTS adds a __dts_data_source column (TEXT type) to the destination table in the format data synchronization instance ID:source database name:source schema name:source table name (for example, dts********:dtstestdata:testschema:customer1). After selecting multiple source tables, rename them to the destination table name using the object name mapping feature. For more information, see Map object names and Enable the multi-table merging feature. Do not perform DDL operations on source database or table schemas when merging is enabled.
    • No (default): Each source table is synchronized to a separate destination table.

  • Select DDL and DML for Instance-Level Synchronization: Select the DDL and DML operations to synchronize. For more information, see the SQL operations that can be synchronized section. To select SQL operations for a specific database or table, right-click an object in the Selected Objects section and select the operations.

  • Source Objects: Select objects from the Source Objects section and click the arrow icon to add them to the Selected Objects section. Only tables can be selected.

  • Selected Objects: To rename a single object, right-click it in the Selected Objects section. For more information, see Map the name of a single object. To rename multiple objects at a time, click Batch Edit in the upper-right corner. For more information, see Map multiple object names at a time. To set filter conditions for data synchronization, right-click a table in the Selected Objects section and specify conditions. For more information, see Set filter conditions. If you rename an object using the object name mapping feature, dependent objects may fail to synchronize.
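When table merging is enabled, rows in the merged destination table can be traced back to their source through the __dts_data_source column. The illustrative helpers below (not part of DTS) build and parse values in the documented instance ID:database:schema:table format; the sample names are hypothetical.

```python
def build_dts_data_source(instance_id: str, database: str,
                          schema: str, table: str) -> str:
    # Mirrors the documented format: instance ID:database:schema:table.
    return f"{instance_id}:{database}:{schema}:{table}"

def parse_dts_data_source(value: str) -> dict:
    # Split on at most three colons so table names keep any remainder.
    instance_id, database, schema, table = value.split(":", 3)
    return {"instance_id": instance_id, "database": database,
            "schema": schema, "table": table}

tag = build_dts_data_source("dts12345678", "dtstestdata",
                            "testschema", "customer1")
print(tag)                                  # dts12345678:dtstestdata:testschema:customer1
print(parse_dts_data_source(tag)["table"])  # customer1
```

A query such as SELECT * FROM merged_table WHERE __dts_data_source = '<tag>' can then isolate the rows that came from one source table.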

Step 4: Configure advanced settings

Click Next: Advanced Settings to configure advanced settings.

Data verification

For more information about the data verification feature, see Configure data verification.

Advanced settings

  • Dedicated Cluster for Task Scheduling: By default, DTS schedules the task to a shared cluster. To improve stability, purchase a dedicated cluster. For more information, see What is a DTS dedicated cluster.

  • Enclose Object Names in Quotation Marks: If set to Yes, DTS encloses object names in single (') or double (") quotation marks during schema synchronization and incremental data synchronization when any of the following conditions is met:
    • The source database is case-sensitive but the database name contains mixed-case letters.
    • A source table name does not start with a letter and contains characters other than letters, digits, underscores (_), number signs (#), and dollar signs ($).
    • The schema, table, or column names are reserved keywords or contain invalid characters in the destination database.
    After the task is complete, enclose object names in quotation marks when you query them.

  • Retry Time for Failed Connections: The retry time range if the source or destination database connection fails after the task starts. Valid values: 10 to 1440 minutes. Default: 720 minutes. Set this value to at least 30 minutes. If DTS reconnects within the specified time range, the task resumes. Otherwise, the task fails. If multiple tasks share the same source or destination database, the shortest retry time takes precedence. DTS charges for the instance during retries.

  • Retry Time for Other Issues: The retry time range if DDL or DML operations fail after the task starts. Valid values: 1 to 1440 minutes. Default: 10 minutes. Set this value to at least 10 minutes. This value must be smaller than the Retry Time for Failed Connections value.

  • Enable Throttling for Full Data Migration: Configure Queries per second (QPS) to the source database, RPS of Full Data Migration, and Data migration speed for full migration (MB/s) to reduce the load on the destination database server. This parameter appears only if Full Data Synchronization is selected for Synchronization Types.

  • Enable Throttling for Incremental Data Synchronization: Configure RPS of Incremental Data Synchronization and Data synchronization speed for incremental synchronization (MB/s) to reduce the load on the destination database server.

  • Environment Tag: An environment tag that identifies the DTS instance. Select a tag based on your business requirements.

  • Actual Write Code: The encoding format for writing data to the destination database.

  • Configure ETL: Set to Yes to enable extract, transform, and load (ETL). Enter data processing statements in the code editor. For more information, see What is ETL? and Configure ETL in a data migration or data synchronization task. Set to No to skip ETL configuration.

  • Monitoring and Alerting: Set to Yes to receive notifications when the task fails or synchronization latency exceeds a threshold. Configure the alert threshold and notification settings. For more information, see Configure monitoring and alerting when you create a DTS task.
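The quoting conditions for Enclose Object Names in Quotation Marks can be approximated in a short predicate. The sketch below is a hypothetical approximation of those rules, not the internal DTS logic, and the reserved-word set is purely illustrative.

```python
import re

# Illustrative subset; a real destination database has a much longer list.
RESERVED = {"select", "table", "user", "order"}

def needs_quoting(name: str, case_sensitive_source: bool = False) -> bool:
    """Rough approximation of the documented quoting conditions."""
    if case_sensitive_source and name != name.lower() and name != name.upper():
        return True  # mixed-case name in a case-sensitive source database
    if not name[:1].isalpha() or not re.fullmatch(r"[A-Za-z0-9_#$]+", name):
        return True  # does not start with a letter, or contains other characters
    return name.lower() in RESERVED  # reserved keyword in the destination

print(needs_quoting("customer1"))                        # False
print(needs_quoting("order"))                            # True (reserved keyword)
print(needs_quoting("MyTable", case_sensitive_source=True))  # True (mixed case)
```

A name flagged by such a check would appear quoted in the destination, so queries must reference it as, for example, "MyTable" rather than mytable.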

Step 5: Configure table fields

Click Next: Configure Database and Table Fields. Set the primary key columns and distribution columns of the tables to synchronize to the destination AnalyticDB for PostgreSQL instance.
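The primary key and distribution column choices in this step determine the DDL that the destination tables end up with. As a rough sketch, assuming a hypothetical customer table, the selections map onto AnalyticDB for PostgreSQL's PRIMARY KEY and DISTRIBUTED BY clauses like this:

```python
def build_create_table(table, columns, primary_key, distribution_key):
    """Assemble an AnalyticDB for PostgreSQL CREATE TABLE statement.

    columns: list of (name, type) pairs; primary_key and
    distribution_key: lists of column names chosen in this step.
    """
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    pk = ", ".join(primary_key)
    dist = ", ".join(distribution_key)
    return (
        f"CREATE TABLE {table} ({cols}, PRIMARY KEY ({pk})) "
        f"DISTRIBUTED BY ({dist})"
    )

ddl = build_create_table(
    "customer",
    [("id", "bigint"), ("name", "text")],
    primary_key=["id"],
    distribution_key=["id"],
)
print(ddl)
# CREATE TABLE customer (id bigint, name text, PRIMARY KEY (id)) DISTRIBUTED BY (id)
```

Choosing a distribution column with high cardinality (such as the primary key) spreads rows evenly across segments, which matters for query performance in the destination instance.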

Step 6: Run the precheck and purchase the instance

  1. Save the task settings and run the precheck.

    • To view the API parameters for this DTS task, move the pointer over Next: Save Task Settings and Precheck and click Preview OpenAPI parameters.

    • Click Next: Save Task Settings and Precheck.

    Note

    DTS runs a precheck before starting the task. If the precheck fails, click View Details next to each failed item, troubleshoot the issues, and run the precheck again. If an alert item cannot be ignored, troubleshoot the issue and run the precheck again. If an alert item can be ignored, click Confirm Alert Details, then click Ignore and OK in the dialog box, and click Precheck Again. Ignoring alert items may cause data inconsistency.

  2. Wait until the Success Rate reaches 100%, then click Next: Purchase Instance.

  3. On the purchase page, configure the following parameters:

    • Billing Method:
      • Subscription: Pay upfront for long-term use at a lower cost.
      • Pay-as-you-go: Billed hourly. Suitable for short-term use. Release the instance when it is no longer needed.
    • Resource Group Settings: The resource group to which the instance belongs. Default: default resource group. For more information, see What is Resource Management?
    • Instance Class: DTS provides instance classes with different synchronization speeds. For more information, see Instance classes of data synchronization instances.
    • Subscription Duration: The subscription duration and number of instances. Available options: one to nine months, one year, two years, three years, or five years. This parameter appears only for the Subscription billing method.
  4. Read and select Data Transmission Service (Pay-as-you-go) Service Terms.

  5. Click Buy and Start. In the dialog box, click OK.

The task appears in the task list where you can monitor its progress.

Limits

Note

During schema synchronization, DTS synchronizes foreign keys from the source database to the destination database. During full data synchronization and incremental data synchronization, DTS temporarily disables constraint checks and cascade operations on foreign keys at the session level. If you perform cascade update or delete operations on the source database during data synchronization, data inconsistency may occur.

Source database limits

  • Table constraints: Tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
  • Table name length: For Oracle 12c or later, table names cannot exceed 30 bytes.
  • Table editing limit: If you select tables as the objects to be synchronized and edit tables in the destination database (such as renaming tables or columns), a single task supports up to 1,000 tables. If you run a task to synchronize more than 1,000 tables, a request error occurs. To synchronize more than 1,000 tables, configure multiple tasks to synchronize tables in batches or synchronize the entire database.
  • Oracle RAC over Express Connect: Specify a virtual IP address (VIP) for the database when configuring the synchronization task.
  • Oracle RAC connection: Use a VIP rather than a Single Client Access Name (SCAN) IP address. After you specify the VIP, node failover of the Oracle Real Application Clusters (RAC) database is not supported.
  • Redo and archived log retention: Redo logging and archive logging must be enabled. For incremental-only synchronization, store redo logs and archived logs for more than 24 hours. For full and incremental synchronization, store logs for at least seven days. After full data synchronization completes, reduce the retention period to more than 24 hours. If DTS cannot obtain the redo logs and archived logs, the task may fail, or data inconsistency or data loss may occur. Failure to meet retention requirements voids the DTS service level agreement (SLA) for reliability and performance.
  • Primary/secondary switchover: A primary/secondary switchover on the source database during synchronization causes the task to fail.
  • VARCHAR2 empty strings: The task fails if the source database contains an empty string of the VARCHAR2 type and the corresponding column in the destination database has a NOT NULL constraint. Oracle databases treat empty VARCHAR2 strings as null values.
  • LONGTEXT fields: Do not update LONGTEXT fields during data synchronization. Otherwise, the task fails.
  • DDL during initial sync: Do not execute DDL statements to change database or table schemas during schema synchronization and full data synchronization. Otherwise, the task fails.
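The table name length limit is measured in bytes, so multibyte characters count more than once. A minimal sketch of the pre-flight check for the 30-byte limit described above:

```python
def table_name_ok(name: str, encoding: str = "utf-8") -> bool:
    """Check the 30-byte limit on table names for Oracle 12c or later.

    The limit applies to the encoded byte length, not the character count.
    """
    return len(name.encode(encoding)) <= 30

print(table_name_ok("customer_orders_2024"))  # True (20 bytes)
print(table_name_ok("x" * 31))                # False (31 bytes)
```

Running such a check over the list of tables you plan to select avoids discovering the violation only when the task fails.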

Synchronization and destination limits

  • Schema synchronization scope: DTS supports initial schema synchronization for the following object types: table, index, constraint, function, sequence, and view.
  • Schema consistency (Warning): DTS does not guarantee schema consistency between the source and destination databases after initial schema synchronization. Evaluate the impact of data type conversion on your business before proceeding. For more information, see Data type mappings for schema synchronization.
  • Triggers: Triggers cannot be synchronized. We recommend that you delete the triggers of the source database to prevent data inconsistency caused by triggers. For more information, see Configure a data synchronization or migration task for a source database that contains a trigger.
  • Partitioned tables: DTS discards partition definitions. Define partitions in the destination database manually.
  • Synchronizable objects: Only tables can be selected as the objects to be synchronized. The tables cannot be append-optimized (AO) tables.
  • Column mapping: If column mapping is used for non-full table synchronization, or if the source and destination table schemas are inconsistent, data is lost for columns not present in the destination table.
  • String terminator: AnalyticDB for PostgreSQL does not support the string terminator \0. If the data to be synchronized contains this terminator, DTS does not write it to the destination database, causing data inconsistency.
  • Oracle Data Pump: Do not use Oracle Data Pump to write data to the source database during incremental data synchronization. Otherwise, data loss may occur.
  • Performance impact: Full data synchronization uses read and write resources of both databases, which may increase server load. Synchronize data during off-peak hours.
  • Table fragmentation: During initial full data synchronization, concurrent INSERT operations cause fragmentation in the destination tables. After full data synchronization, the tablespace of the destination database is larger than that of the source database.
  • Synchronization latency: DTS calculates latency based on the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for an extended period, the latency may be inaccurate. To update the latency, perform a DML operation on the source database. For entire-database synchronization, create a heartbeat table that receives updates every second.
  • Destination writes: During data synchronization, use only DTS to write data to the destination database to prevent data inconsistency. Using other tools such as Data Management (DMS) to perform online DDL operations may cause data loss.
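Because the \0 terminator cannot be stored in AnalyticDB for PostgreSQL, string values that may contain NUL characters should be cleaned in the source data before synchronization. A minimal illustrative cleanup helper (not a DTS feature):

```python
def strip_nul(value: str) -> str:
    """Remove NUL characters that AnalyticDB for PostgreSQL cannot store."""
    return value.replace("\x00", "")

print(strip_nul("abc\x00def"))  # abcdef
print(strip_nul("clean"))       # clean
```

Cleaning the source data up front keeps the source and destination rows byte-for-byte comparable, so data verification does not flag spurious differences.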

Supported synchronization topologies

DTS supports the following one-way synchronization topologies for this scenario:

  • One-to-one

  • One-to-many

  • Many-to-one

  • Cascade

For more information, see Synchronization topologies.

SQL operations that can be synchronized

  • DML: INSERT, UPDATE, DELETE
  • DDL: ADD COLUMN