Data Transmission Service: Usage notes and limits for migrating data from an Oracle database

Last Updated: Nov 22, 2024

This topic describes the usage notes and limits that apply when you migrate data from a self-managed Oracle database. To ensure that your data migration task runs as expected, you must read the usage notes and limits before you configure the task.

Scenarios of migrating data from an Oracle database

The following sections describe the scenarios of migrating data from an Oracle database. The usage notes and limits may vary by scenario. Go to the section for a specific scenario to view the corresponding usage notes and limits.

Migrate data from a self-managed Oracle database to a PolarDB for PostgreSQL (Compatible with Oracle) cluster

The following usage notes and limits apply when you migrate data from a self-managed Oracle database to a PolarDB for PostgreSQL (Compatible with Oracle) cluster.

Limits on the source database

  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • If the source database is an Oracle Real Application Cluster (RAC) database connected over Express Connect, you must specify a virtual IP address (VIP) for the database when you configure the source database.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a field in the source Oracle database contains an empty string of the VARCHAR2 type, which is evaluated as null in the Oracle database, and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Requirements for the objects to be migrated:

    • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records. A query that finds tables without these constraints is shown after this list.

      Note

      You can also use the ROWID values as the primary key for an Oracle table without a primary key or UNIQUE constraint.

    • If the version of your Oracle database is 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.

    • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches, or configure a task to migrate the entire database.

  • To perform incremental migration, you must make sure that the following requirements are met:

    • The redo logging and archive logging must be enabled.

    • If you perform only incremental data migration, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the redo logs and archive logs of the source database must be stored for at least seven days. After full data migration is complete, you can set the retention period to more than 24 hours. Otherwise, Data Transmission Service (DTS) may fail to obtain the redo logs and archive logs and the task may fail. In extreme cases, data inconsistency or loss may occur. Make sure that you set the retention period of redo logs and archive logs based on the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.
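
  You can verify several of the preceding source-side requirements before you configure the task. The following queries are a sketch of such pre-checks against the Oracle data dictionary; the schema name MYSCHEMA is a placeholder that you must replace:

    -- 1. Find tables that have no PRIMARY KEY or UNIQUE constraint.
    SELECT t.table_name
    FROM all_tables t
    WHERE t.owner = 'MYSCHEMA'
      AND NOT EXISTS (
        SELECT 1
        FROM all_constraints c
        WHERE c.owner = t.owner
          AND c.table_name = t.table_name
          AND c.constraint_type IN ('P', 'U')
      );

    -- 2. Confirm that the database runs in ARCHIVELOG mode.
    SELECT log_mode FROM v$database;

    -- 3. Oracle evaluates an empty VARCHAR2 string as NULL, which can
    --    violate NOT NULL constraints in the destination database.
    SELECT CASE WHEN '' IS NULL THEN 'empty string is NULL' END AS result FROM dual;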

Other limits

  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination cluster. After full data migration is complete, the size of used tablespace of the destination cluster is larger than that of the source database.

  • DTS attempts to resume the data migration task that failed within the last seven days. Before you switch workloads to the destination cluster, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the data migration task is resumed.
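
    For a PolarDB for PostgreSQL (Compatible with Oracle) destination, the REVOKE statement can look like the following sketch; the dts_user account name and the public schema are assumptions:

    -- Revoke write privileges from the account used by DTS (hypothetical name).
    REVOKE INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public FROM dts_user;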

  • External tables cannot be migrated.

  • The destination PolarDB for PostgreSQL (Compatible with Oracle) cluster generates unique indexes, such as pg_oid_1498503_index, that correspond to the ROWID values of the source self-managed Oracle database. Therefore, the number of indexes in the destination cluster is greater than the number of indexes in the source database.

  • The destination PolarDB for PostgreSQL (Compatible with Oracle) cluster does not support the string terminator ('\0'). If the data to be migrated contains the terminator, DTS does not write the terminator to the destination database. This causes data inconsistency between the source and destination databases.

  • When CHECK constraints created in the source Oracle database are migrated to the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster, the CHECK constraints are converted to NOT NULL constraints.

  • Make sure that the character sets of the source and destination databases are compatible. Otherwise, data inconsistency may occur or the migration task may fail.

  • We recommend that you use the schema migration feature of DTS. Otherwise, the migration task may fail due to incompatible data types.

  • The time zones of the source and destination databases must be the same.

  • During incremental data migration, do not use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • Objects of custom types can be migrated to the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. However, objects that are automatically generated by the source Oracle database (known as built-in objects) cannot be migrated.

    Note

    PolarDB for PostgreSQL (Compatible with Oracle) clusters provide the built-in objects of Oracle databases, so you do not need to migrate them manually.

  • DTS does not check the validity of metadata such as sequences. You must manually check the validity of metadata.

  • After your workloads are switched to the destination database, newly written sequences do not increment from the maximum value of the sequences in the source database. Therefore, before you switch your workloads to the destination database, query the maximum value of the sequences in the source database, and then specify the queried maximum value as the starting value of the sequences in the destination database. You can execute the following statements to query the maximum value of the sequences in the source database:

    -- Iterate over all sequences in the current database and print a
    -- setval statement for each one based on its current last_value.
    do language plpgsql $$
    declare
      nsp name;
      rel name;
      val int8;
    begin
      -- relkind = 'S' selects sequence objects from the system catalog.
      for nsp, rel in
        select nspname, relname
        from pg_class t2, pg_namespace t3
        where t2.relnamespace = t3.oid and t2.relkind = 'S'
      loop
        execute format($_$select last_value from %I.%I$_$, nsp, rel) into val;
        -- Print the setval statement instead of executing it, so that you
        -- can review the statements before you run them in the destination.
        raise notice '%',
          format($_$select setval('%I.%I'::regclass, %s);$_$, nsp, rel, val + 1);
      end loop;
    end;
    $$;
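
    Each notice prints a setval statement that you can review and execute in the destination database. An illustrative example of such a statement (the schema and sequence names are hypothetical):

    select setval('public.orders_id_seq'::regclass, 10001);

    If you want to read the current sequence values directly from the source Oracle database instead, a data dictionary query such as the following can serve as a sketch; MYSCHEMA is a placeholder:

    -- last_number reflects the value recorded in the data dictionary; with
    -- sequence caching, it may be slightly ahead of the last value used.
    SELECT sequence_name, last_number
    FROM all_sequences
    WHERE sequence_owner = 'MYSCHEMA';
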
  • If you run a full or incremental data migration task, the tables to be migrated contain foreign keys, triggers, or event triggers, and the account of the destination database is a privileged account or an account that has the permissions of the superuser role, DTS temporarily sets the session_replication_role parameter to replica at the session level during full or incremental data migration. If the account of the destination database does not have these permissions, you must manually set the session_replication_role parameter to replica in the destination database. While the session_replication_role parameter is set to replica, cascade update or delete operations performed in the source database may cause data inconsistency. After the data migration task is released, you can change the value of the session_replication_role parameter back to origin.
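
    A minimal session-level sketch of setting the parameter, assuming the account has sufficient permissions in the destination database:

    -- Disable foreign key enforcement and ordinary triggers for this session.
    set session_replication_role = replica;
    -- After the DTS task is released, restore the default behavior.
    set session_replication_role = origin;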

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Migrate data from a self-managed Oracle database to a MySQL database

The following usage notes and limits apply when you migrate data to a MySQL database, such as a self-managed MySQL database or an ApsaraDB RDS for MySQL instance.

Limits on the source database

  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a VIP for the database when you configure the source database.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a field in the source Oracle database contains an empty string of the VARCHAR2 type, which is evaluated as null in the Oracle database, and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Requirements for the objects to be migrated:

    • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.

    • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches, or configure a task to migrate the entire database.

  • To perform incremental migration, you must make sure that the following requirements are met:

    • The redo logging and archive logging must be enabled.

    • If you perform only incremental data migration, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the redo logs and archive logs of the source database must be stored for at least seven days. After full data migration is complete, you can set the retention period to more than 24 hours. Otherwise, Data Transmission Service (DTS) may fail to obtain the redo logs and archive logs and the task may fail. In extreme cases, data inconsistency or loss may occur. Make sure that you set the retention period of redo logs and archive logs based on the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.

Other limits

  • During incremental data migration, do not use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • External tables cannot be migrated.

  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination cluster. After full data migration is complete, the size of used tablespace of the destination cluster is larger than that of the source database.

  • DTS attempts to resume the data migration task that failed within the last seven days. Before you switch workloads to the destination cluster, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the data migration task is resumed.
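
    For a MySQL destination, the statement can look like the following sketch; the dts_user account and the dbname database are hypothetical:

    -- Revoke write privileges from the account used by DTS (hypothetical names).
    REVOKE INSERT, UPDATE, DELETE ON dbname.* FROM 'dts_user'@'%';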

  • If DDL statements fail to be executed in the destination database, the DTS task continues to run. You can view the DDL statements that fail to be executed in task logs. For more information about how to view task logs, see View task logs.

  • Make sure that the character sets of the source and destination databases are compatible. Otherwise, data inconsistency may occur or the migration task may fail.

  • We recommend that you use the schema migration feature of DTS. Otherwise, the migration task may fail due to incompatible data types.

  • The time zones of the source and destination databases must be the same.

  • If you write column names that differ only in capitalization to the same table in the destination MySQL database, the data migration result may not meet your expectations because the column names in MySQL databases are not case-sensitive.

  • After data migration is complete, we recommend that you execute the ANALYZE TABLE <table name> statement to check whether data has been written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to the memory. As a result, data loss occurs.
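
    For example, for a hypothetical table named orders:

    ANALYZE TABLE orders;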

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

If the destination database runs on an ApsaraDB RDS for MySQL instance, take note of the following limits:

  • Table names in the ApsaraDB RDS for MySQL instance are case-insensitive. If a table name in the source Oracle database contains uppercase letters, ApsaraDB RDS for MySQL converts all uppercase letters to lowercase letters before a table is created.

    If the source Oracle database contains identical table names that differ only in capitalization, these table names are identified as duplicates. As a result, the "The object already exists" message may be displayed during schema migration. To prevent name conflicts in the destination database, you can use the object name mapping feature to capitalize the table names when you configure the objects to be migrated. For more information, see Object name mapping.

  • DTS automatically creates a destination database in the ApsaraDB RDS for MySQL instance. However, if the name of the source database is invalid, you must manually create a database in the ApsaraDB RDS for MySQL instance before you configure the data migration task. For more information, see Manage databases.

Migrate data from a self-managed Oracle database to a PolarDB for MySQL cluster

The following usage notes and limits apply when you migrate data from a self-managed Oracle database to a PolarDB for MySQL cluster.

Limits on the source database

  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a VIP for the database when you configure the source database.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a field in the source Oracle database contains an empty string of the VARCHAR2 type, which is evaluated as null in the Oracle database, and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Requirements for the objects to be migrated:

    • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.

    • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches, or configure a task to migrate the entire database.

  • To perform incremental migration, you must make sure that the following requirements are met:

    • The redo logging and archive logging must be enabled.

    • If you perform only incremental data migration, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the redo logs and archive logs of the source database must be stored for at least seven days. After full data migration is complete, you can set the retention period to more than 24 hours. Otherwise, Data Transmission Service (DTS) may fail to obtain the redo logs and archive logs and the task may fail. In extreme cases, data inconsistency or loss may occur. Make sure that you set the retention period of redo logs and archive logs based on the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.

Other limits

  • During incremental data migration, do not use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • External tables cannot be migrated.

  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination cluster. After full data migration is complete, the size of used tablespace of the destination cluster is larger than that of the source database.

  • DTS attempts to resume the data migration task that failed within the last seven days. Before you switch workloads to the destination cluster, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the data migration task is resumed.

  • If DDL statements fail to be executed in the destination database, the DTS task continues to run. You can view the DDL statements that fail to be executed in task logs. For more information about how to view task logs, see View task logs.

  • Make sure that the character sets of the source and destination databases are compatible. Otherwise, data inconsistency may occur or the migration task may fail.

  • We recommend that you use the schema migration feature of DTS. Otherwise, the migration task may fail due to incompatible data types.

  • The time zones of the source and destination databases must be the same.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

If the destination database runs on a PolarDB for MySQL cluster, take note of the following limits:

  • Table names in the PolarDB for MySQL cluster are case-insensitive. If a table name in the source Oracle database contains uppercase letters, PolarDB for MySQL converts all uppercase letters to lowercase letters before a table is created.

    If the source Oracle database contains identical table names that differ only in capitalization, these table names are identified as duplicates. As a result, the "The object already exists" message may be displayed during schema migration. To prevent name conflicts in the destination database, you can use the object name mapping feature to capitalize the table names when you configure the objects to be migrated. For more information, see Object name mapping.

  • DTS automatically creates a database in the PolarDB for MySQL cluster. However, if the name of the source database does not meet the requirements of PolarDB for MySQL, you must create a database in the PolarDB for MySQL cluster before you configure the data migration task. For more information, see Database management operations.

Migrate data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance

The following usage notes and limits apply when you migrate data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance.

Limits on the source database

  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a VIP for the database when you configure the source database.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a field in the source Oracle database contains an empty string of the VARCHAR2 type, which is evaluated as null in the Oracle database, and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Requirements for the objects to be migrated:

    • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.

    • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches, or configure a task to migrate the entire database.

  • To perform incremental migration, you must make sure that the following requirements are met:

    • The redo logging and archive logging must be enabled.

    • If you perform only incremental data migration, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the redo logs and archive logs of the source database must be stored for at least seven days. After full data migration is complete, you can set the retention period to more than 24 hours. Otherwise, Data Transmission Service (DTS) may fail to obtain the redo logs and archive logs and the task may fail. In extreme cases, data inconsistency or loss may occur. Make sure that you set the retention period of redo logs and archive logs based on the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.

Other limits

  • Only tables can be selected as the objects to be migrated. Append-optimized (AO) tables are not supported as destination tables.
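
    If you create the destination tables manually, you can use plain heap tables, in which append-optimized storage is not enabled. The following definition is purely illustrative:

    -- A heap table in AnalyticDB for PostgreSQL; the appendonly storage
    -- option is not enabled, so this is not an AO table.
    CREATE TABLE orders (
      id   bigint,
      note varchar(255),
      PRIMARY KEY (id)
    ) DISTRIBUTED BY (id);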

  • If you use column mapping to migrate only some columns of a table, or if the schemas of the source and destination tables are inconsistent, the data in the source columns that are not contained in the destination database is lost.

  • The destination AnalyticDB for PostgreSQL instance does not support the string terminator ('\0'). If the data to be migrated contains the terminator, DTS does not write the terminator to the destination database. This causes data inconsistency between the source and destination databases.

  • External tables cannot be migrated.

  • During incremental data migration, do not use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination cluster. After full data migration is complete, the size of used tablespace of the destination cluster is larger than that of the source database.

  • DTS attempts to resume the data migration task that failed within the last seven days. Before you switch workloads to the destination cluster, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the data migration task is resumed.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Migrate data from a self-managed Oracle database to an ApsaraMQ for Kafka instance or a self-managed Kafka cluster

The following usage notes and limits apply when you migrate data from a self-managed Oracle database to an ApsaraMQ for Kafka instance or a self-managed Kafka cluster.

Limits on the source database

  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a VIP for the database when you configure the source database.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a field in the source Oracle database contains an empty string of the VARCHAR2 type, which is evaluated as null in the Oracle database, and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Requirements for the objects to be migrated:

    • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.

    • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches, or configure a task to migrate the entire database.

  • To perform incremental migration, you must make sure that the following requirements are met:

    • The redo logging and archive logging must be enabled.

    • If you perform only incremental data migration, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the redo logs and archive logs of the source database must be stored for at least seven days. After full data migration is complete, you can set the retention period to more than 24 hours. Otherwise, Data Transmission Service (DTS) may fail to obtain the redo logs and archive logs and the task may fail. In extreme cases, data inconsistency or loss may occur. Make sure that you set the retention period of redo logs and archive logs based on the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.

Other limits

  • During incremental data migration, do not use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • External tables cannot be migrated.

  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination cluster. After full data migration is complete, the size of used tablespace of the destination cluster is larger than that of the source database.

  • DTS attempts to resume the data migration task that failed within the last seven days. Before you switch workloads to the destination cluster, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the data migration task is resumed.

  • If the destination Kafka cluster is upgraded or downgraded during data migration, you must restart the cluster.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Migrate data between self-managed Oracle databases

The following usage notes and limits apply when you migrate data between self-managed Oracle databases.

Limits on the source database

  • Bandwidth requirements: The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.

  • If the source database is an Oracle RAC database connected over Express Connect, you must specify a VIP for the database when you configure the source database.

  • If the source database is an Oracle RAC database connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database. After you specify the VIP, node failover is not supported for the Oracle RAC database.

  • If a field in the source Oracle database contains an empty string of the VARCHAR2 type, which is evaluated as null in the Oracle database, and the corresponding field in the destination database has a NOT NULL constraint, the migration task fails.

  • Requirements for the objects to be migrated:

    • The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

    • If the version of your Oracle database is 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.

    • If you select tables as the objects to be migrated and you need to modify the tables in the destination database, such as renaming tables or columns, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to migrate the tables in batches, or configure a task to migrate the entire database.

  • To perform incremental migration, you must make sure that the following requirements are met:

    • The redo logging and archive logging must be enabled.

    • If you perform only incremental data migration, the redo logs and archive logs of the source database must be stored for more than 24 hours. If you perform both full data migration and incremental data migration, the redo logs and archive logs of the source database must be stored for at least seven days. After full data migration is complete, you can set the retention period to more than 24 hours. Otherwise, Data Transmission Service (DTS) may fail to obtain the redo logs and archive logs and the task may fail. In extreme cases, data inconsistency or loss may occur. Make sure that you set the retention period of redo logs and archive logs based on the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS does not guarantee service reliability or performance.

  • Limits on operations to be performed on the source database:

    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency may occur between the source and destination databases. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.

    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.

Other limits

  • During incremental data migration, do not use Oracle Data Pump to write data to the source database. Otherwise, data loss may occur.

  • External tables cannot be migrated.

  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.

  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination cluster. After full data migration is complete, the size of used tablespace of the destination cluster is larger than that of the source database.

  • DTS attempts to resume the data migration task that failed within the last seven days. Before you switch workloads to the destination cluster, you must stop or release the failed tasks. You can also execute the REVOKE statement to revoke the write permissions from the accounts that are used by DTS to access the destination database. Otherwise, the data in the source database overwrites the data in the destination database after the data migration task is resumed.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.