Data Transmission Service: Usage notes and limits for synchronizing data from an Oracle database

Last Updated: Nov 21, 2025

This topic describes the usage notes and limits that apply when you synchronize data from a self-managed Oracle database. To ensure that your data synchronization task runs as expected, read these notes and limits before you configure the task.

Use cases

The usage notes and limits vary based on the destination database. Find the synchronization scenario that matches your task in the following sections to view its usage notes and limits.

Note
  • During schema synchronization, DTS synchronizes foreign keys from the source database to the destination database.

  • During full data synchronization and incremental data synchronization, DTS temporarily disables foreign key constraint checks and cascade operations at the session level. If you perform cascade update or delete operations on the source database during data synchronization, data inconsistency may occur.
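
How DTS suspends these checks internally is not documented here, but as an illustration: on a PostgreSQL-compatible destination such as AnalyticDB for PostgreSQL, foreign key enforcement can be suspended for a single session with a setting like the following. This is a hedged sketch, not necessarily the exact statement that DTS executes.

    -- Suspend trigger-based constraint enforcement (including foreign key
    -- checks) for the current session only. Other sessions are unaffected.
    SET session_replication_role = 'replica';

    -- ... bulk writes happen here ...

    -- Restore normal enforcement for this session.
    SET session_replication_role = 'origin';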

Synchronize data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance

Limits on the source database

  • Requirements for the objects to be synchronized:

    • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be synchronized cannot exceed 30 bytes in length.

    • If you select tables as the objects to be synchronized and you want to edit the tables in the destination database, such as renaming tables or columns, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • If the source database is an Oracle Real Application Clusters (RAC) database connected over Express Connect, you must specify a virtual IP address (VIP) for the database when you configure the data synchronization task.

  • If the self-managed Oracle database is an Oracle RAC database, you must specify a VIP rather than a Single Client Access Name (SCAN) IP address when you configure the data synchronization task. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • The redo logging and archive logging features must be enabled. The example queries after this list show one way to verify this and the other source-side requirements.

    Note

    If you perform only incremental data synchronization, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the redo logs and archive logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the redo logs and archive logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After the full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of redo logs and archive logs in accordance with the preceding requirements. Otherwise, the service reliability and performance stated in the Service Level Agreement (SLA) of DTS may not be guaranteed.

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • If the data to be synchronized contains an empty string of the VARCHAR2 type, which Oracle treats as NULL, and the corresponding column in the destination database has a NOT NULL constraint, the data synchronization task fails.

  • If a fine-grained auditing (FGA) policy is enabled on a table to be synchronized, DTS cannot detect the ORA_ROWSCN pseudocolumn, and the synchronization task fails.

    Note

    You can disable the FGA policy for the table to be synchronized, or exclude the table from synchronization.

  • During data synchronization, do not update LONGTEXT fields. Otherwise, the data synchronization task fails.

  • During schema synchronization and initial full data synchronization, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data synchronization task fails.
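
Many of the preceding requirements can be verified on the source database before you configure the task. The following queries are a sketch of such a pre-check; SYNC_USER is a placeholder for the schema to be synchronized, and you need privileges on the referenced data dictionary views.

    -- 1. Confirm that the database runs in ARCHIVELOG mode.
    SELECT log_mode FROM v$database;  -- expect: ARCHIVELOG

    -- 2. Check how far back archived redo logs are still available
    --    (at least 24 hours, or 7 days for full plus incremental tasks).
    SELECT MIN(first_time) FROM v$archived_log WHERE deleted = 'NO';

    -- 3. Find tables that lack a PRIMARY KEY or UNIQUE constraint.
    SELECT t.table_name
    FROM   all_tables t
    WHERE  t.owner = 'SYNC_USER'
    AND NOT EXISTS (
      SELECT 1
      FROM   all_constraints c
      WHERE  c.owner = t.owner
      AND    c.table_name = t.table_name
      AND    c.constraint_type IN ('P', 'U'));

    -- 4. List fine-grained auditing (FGA) policies on the schema's tables.
    SELECT object_schema, object_name, policy_name, enabled
    FROM   dba_audit_policies
    WHERE  object_schema = 'SYNC_USER';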

Other limits

  • DTS supports initial schema synchronization for the following types of objects: table, constraint, function, sequence, and view.

    Warning

    DTS does not ensure that the schemas of the source and destination databases are consistent after initial schema synchronization. We recommend that you evaluate the impact of data type conversion on your business. Otherwise, the data synchronization task may fail, or data inconsistency may occur. For more information, see Data type mappings for schema synchronization.

  • External tables cannot be synchronized.

  • Packages, package bodies, materialized views, synonyms, types, type bodies, procedures, and indexes cannot be synchronized.

  • Triggers cannot be synchronized. We recommend that you delete the triggers of the source database to prevent data inconsistency caused by triggers. For more information about how to synchronize triggers, see Configure a data synchronization or migration task for a source database that contains a trigger.

  • For partitioned tables, DTS discards the partition definitions. You must define partitions in the destination database; the sketch after this list shows one possible definition.

  • You can select only tables as the objects to be synchronized. The tables cannot be append-optimized (AO) tables.

  • If you use column mapping to synchronize only specific columns of a table, or if the schemas of the source and destination tables differ, the data in the source columns that do not exist in the destination table is lost.

  • The destination AnalyticDB for PostgreSQL instance does not support the string terminator '\0'. If the data to be synchronized contains the terminator, DTS does not write the terminator to the destination database. This causes data inconsistency between the source and destination databases.

  • During incremental data synchronization, you cannot use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. After the full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the latency of the synchronization task is excessively high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table that is updated or receives data every second.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. For example, if you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use Data Management (DMS) to perform online DDL operations.

  • If an instance fails, the DTS support team tries to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

    Note

    When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
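
Because DTS discards partition definitions, you define them yourself in the destination database. The following is a minimal sketch in the Greenplum-style syntax that AnalyticDB for PostgreSQL supports; the table, columns, and partition ranges are hypothetical.

    -- Hypothetical destination table with manually defined range partitions.
    CREATE TABLE orders (
        order_id   bigint,
        order_date date,
        amount     numeric(12, 2),
        PRIMARY KEY (order_id, order_date)
    )
    DISTRIBUTED BY (order_id)
    PARTITION BY RANGE (order_date)
    (
        START (date '2025-01-01') INCLUSIVE
        END   (date '2026-01-01') EXCLUSIVE
        EVERY (INTERVAL '1 month')
    );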

Synchronize data from a self-managed Oracle database to a Message Queue for Apache Kafka instance or a self-managed Kafka cluster

Limits on the source database

  • Requirements for the objects to be synchronized:

    • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be synchronized cannot exceed 30 bytes in length.

    • If you select tables as the objects to be synchronized and you want to edit the tables in the destination database, such as renaming tables or columns, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a virtual IP address (VIP) for the database when you configure the data synchronization task.

  • If the self-managed Oracle database is an Oracle RAC database, you must specify a VIP rather than a Single Client Access Name (SCAN) IP address when you configure the data synchronization task. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • The redo logging and archive logging features must be enabled.

    Note

    If you perform only incremental data synchronization, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the redo logs and archive logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the redo logs and archive logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After the full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of redo logs and archive logs in accordance with the preceding requirements. Otherwise, the service reliability and performance stated in the Service Level Agreement (SLA) of DTS may not be guaranteed.

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • If the data to be synchronized contains an empty string of the VARCHAR2 type, which Oracle treats as NULL, and the corresponding column in the destination database has a NOT NULL constraint, the data synchronization task fails. The short demo after this list illustrates this behavior.

  • If a fine-grained auditing (FGA) policy is enabled on a table to be synchronized, DTS cannot detect the ORA_ROWSCN pseudocolumn, and the synchronization task fails.

    Note

    You can disable the FGA policy for the table to be synchronized, or exclude the table from synchronization.

  • During data synchronization, do not update LONGTEXT fields. Otherwise, the data synchronization task fails.

  • During schema synchronization and initial full data synchronization, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data synchronization task fails.
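
The empty-string behavior described above is standard Oracle semantics and easy to demonstrate; the table name below is hypothetical.

    -- Oracle stores a zero-length VARCHAR2 value as NULL.
    CREATE TABLE demo_empty (val VARCHAR2(10));
    INSERT INTO demo_empty VALUES ('');

    SELECT COUNT(*) FROM demo_empty WHERE val IS NULL;  -- returns 1

    -- A destination column with a NOT NULL constraint therefore rejects
    -- such rows, and the synchronization task fails.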

Other limits

  • If a table to be synchronized is renamed and the new table name is not included in the objects to be synchronized, DTS does not synchronize the data in the renamed table to the destination Kafka cluster. To synchronize this data, you must perform the Modify Synchronized Objects operation. For more information, see Add an object to a data synchronization task.

  • External tables cannot be synchronized.

  • Indexes, partitions, views, procedures, functions, triggers, foreign keys, table comments, and column comments cannot be synchronized.

  • During incremental data synchronization, you cannot use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. After the full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the latency of the synchronization task is excessively high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table that is updated or receives data every second. The sketch after this list shows one way to set this up.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. For example, if you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.

  • During data synchronization, if the destination Kafka cluster is scaled up or down, you must restart the DTS instance.

  • If an instance fails, the DTS support team tries to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

    Note

    When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
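
If you synchronize an entire database and want the latency metric to stay accurate, a heartbeat table such as the following can be created on the source database. All names are hypothetical, and DBMS_SCHEDULER is only one of several ways to drive the per-second update.

    -- Hypothetical heartbeat table on the source Oracle database.
    CREATE TABLE dts_heartbeat (
        id NUMBER PRIMARY KEY,
        ts TIMESTAMP
    );
    INSERT INTO dts_heartbeat VALUES (1, SYSTIMESTAMP);
    COMMIT;

    -- Update the row once per second with a scheduler job.
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'DTS_HEARTBEAT_JOB',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN UPDATE dts_heartbeat SET ts = SYSTIMESTAMP WHERE id = 1; COMMIT; END;',
        repeat_interval => 'FREQ=SECONDLY;INTERVAL=1',
        enabled         => TRUE);
    END;
    /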

Synchronize data from a self-managed Oracle database to a DataHub project

Limits on the source database

  • Requirements for the objects to be synchronized:

    • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be synchronized cannot exceed 30 bytes in length.

    • If you select tables as the objects to be synchronized and you want to edit the tables in the destination database, such as renaming tables or columns, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a virtual IP address (VIP) for the database when you configure the data synchronization task.

  • If the self-managed Oracle database is an Oracle RAC database, you must specify a VIP rather than a Single Client Access Name (SCAN) IP address when you configure the data synchronization task. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • The redo logging and archive logging features must be enabled.

    Note

    If you perform only incremental data synchronization, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the redo logs and archive logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the redo logs and archive logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After the full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of redo logs and archive logs in accordance with the preceding requirements. Otherwise, the service reliability and performance stated in the Service Level Agreement (SLA) of DTS may not be guaranteed.

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • If the data to be synchronized contains an empty string of the VARCHAR2 type, which Oracle treats as NULL, and the corresponding column in the destination database has a NOT NULL constraint, the data synchronization task fails.

  • If a fine-grained auditing (FGA) policy is enabled on a table to be synchronized, DTS cannot detect the ORA_ROWSCN pseudocolumn, and the synchronization task fails.

    Note

    You can disable the FGA policy for the table to be synchronized, or exclude the table from synchronization. The example after this list shows how to disable a policy.

  • During data synchronization, do not update LONGTEXT fields. Otherwise, the data synchronization task fails.

  • During schema synchronization, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data synchronization task fails.
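
To disable an FGA policy as described above, the DBMS_FGA package can be used. The schema, table, and policy names below are placeholders.

    -- Disable (but keep) a fine-grained auditing policy so that DTS can
    -- detect the ORA_ROWSCN pseudocolumn.
    BEGIN
      DBMS_FGA.DISABLE_POLICY(
        object_schema => 'SYNC_USER',
        object_name   => 'ORDERS',
        policy_name   => 'ORDERS_AUDIT_POLICY');
    END;
    /
    -- To remove the policy entirely, call DBMS_FGA.DROP_POLICY with the
    -- same arguments.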

Other limits

  • DTS supports initial schema synchronization only for tables.

    Warning

    In this scenario, DTS does not support schema migration for triggers. We recommend that you delete the triggers of the source database to prevent data inconsistency caused by the triggers. For more information, see Configure a data synchronization or migration task for a source database that contains a trigger.

  • A single string in the destination DataHub project cannot exceed 2 MB in length.

  • External tables cannot be synchronized.

  • During incremental data synchronization, you cannot use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the latency of the synchronization task is excessively high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table that is updated or receives data every second.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. For example, if you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.

  • If an instance fails, the DTS support team tries to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

    Note

    When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.

Synchronize data from a self-managed Oracle database to a PolarDB-X 2.0 instance

Limits on the source database

  • Requirements for the objects to be synchronized:

    • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be synchronized cannot exceed 30 bytes in length.

    • If you select tables as the objects to be synchronized and you want to edit the tables in the destination database, such as renaming tables or columns, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a virtual IP address (VIP) for the database when you configure the data synchronization task.

  • If the self-managed Oracle database is an Oracle RAC database, you must specify a VIP rather than a Single Client Access Name (SCAN) IP address when you configure the data synchronization task. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • The redo logging and archive logging features must be enabled.

    Note

    If you perform only incremental data synchronization, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the redo logs and archive logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the redo logs and archive logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After the full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of redo logs and archive logs in accordance with the preceding requirements. Otherwise, the service reliability and performance stated in the Service Level Agreement (SLA) of DTS may not be guaranteed. The RMAN sketch after this list shows one way to enforce such a retention period.

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • If the data to be synchronized contains an empty string of the VARCHAR2 type, which Oracle treats as NULL, and the corresponding column in the destination database has a NOT NULL constraint, the data synchronization task fails.

  • If a fine-grained auditing (FGA) policy is enabled on a table to be synchronized, DTS cannot detect the ORA_ROWSCN pseudocolumn, and the synchronization task fails.

    Note

    You can disable the FGA policy for the table to be synchronized, or exclude the table from synchronization.

  • During data synchronization, do not update LONGTEXT fields. Otherwise, the data synchronization task fails.

  • During schema synchronization and initial full data synchronization, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data synchronization task fails.
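
One way to satisfy the log retention requirement, assuming you manage archived logs with RMAN, is to have your cleanup scripts delete only logs older than the required window. The following RMAN command is a sketch, not a universal recommendation.

    -- In RMAN: delete only archived logs older than 7 days, so that both
    -- full and incremental synchronization can find the logs they need.
    DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 7';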

Other limits

  • During incremental data synchronization, you cannot use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • External tables cannot be synchronized.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the latency of the synchronization task is excessively high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table that is updated or receives data every second.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. For example, if you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.

  • If an instance fails, the DTS support team tries to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.

    Note

    When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.