
Data Transmission Service: Precautions and limitations for migrating data from an Oracle source database

Last Updated: Aug 18, 2025

If the source database is a self-managed Oracle database, review the precautions and limitations in this topic before you configure a data migration task to ensure that the task runs as expected.

Overview of migration scenarios from an Oracle source

Review the precautions and limitations for your migration task based on the following scenarios:

Migrate data from a self-managed Oracle database to PolarDB for PostgreSQL (Compatible with Oracle)

The precautions and limitations are as follows:


Source database limitations

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • If the source database is connected over a leased line, you must configure one of the virtual IP addresses (VIPs) in the connection information. This allows the data migration task to connect to the Oracle Real Application Clusters (RAC) database over the leased line.

  • If the self-managed Oracle database uses a RAC architecture and is connected over a leased line, VPN Gateway, Smart Access Gateway, Database Gateway (DG), or Cloud Enterprise Network (CEN), or from an ECS instance, you cannot configure a Single Client Access Name (SCAN) IP address. You can configure only one of the VIPs in the connection information. If you use this method, node switching for RAC is not supported.

  • If the data to be migrated contains empty strings of the `varchar2` type, which Oracle treats as null, and the corresponding destination database field has a NOT NULL constraint, the migration task fails.

  • If the FGA (Fine-Grained Audit) policy is enabled on the table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, which will cause the migration job to fail.

    Note

    You can disable the FGA policy for the tables to be migrated, or choose not to migrate data from these tables.

  • Requirements for migration objects:

    • The tables to be migrated must have a primary key or a unique constraint, and the columns in that key or constraint must contain unique values. Otherwise, duplicate data may appear in the destination database.

      Note

      You can also use the Oracle ROWID as the primary key for tables that do not have a primary key or unique constraint.

    • If your self-managed Oracle database is version 12c or later, the names of the tables to be migrated must not exceed 30 bytes in length.

    • If you migrate objects at the table level and need to edit them, such as mapping table or column names, a single data migration task supports a maximum of 1,000 tables. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple batches and configure a separate task for each batch, or configure a task to migrate the entire database.

  • For incremental migration, Redo Logs and Archive Logs must meet the following requirements:

    • Both must be enabled in the source database.

    • For an incremental data migration task, DTS requires that Redo Logs and Archive Logs in the source database are retained for more than 24 hours. For a task that includes both full and incremental data migration, DTS requires that Redo Logs and Archive Logs are retained for at least 7 days. After the full data migration is complete, you can change the retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the logs. In extreme cases, this may cause data inconsistency or loss. Issues caused by a log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).

  • Limitations on source database operations:

    • During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, the source and destination data will be inconsistent. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.

    • Updating large text fields separately is not supported and will cause the task to fail.
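The log and audit prerequisites above can be checked on the source Oracle database before you configure the task. The following is a minimal sketch, not an official DTS procedure; run it as a user with access to the `V$` and `DBA_` views:

```sql
-- Confirm the database runs in ARCHIVELOG mode, which is required for
-- incremental migration. Expected value: ARCHIVELOG.
SELECT LOG_MODE FROM V$DATABASE;

-- Check how far back archived logs are still available. The oldest
-- FIRST_TIME should be more than 24 hours in the past (at least 7 days
-- for a task that includes both full and incremental migration).
SELECT MIN(FIRST_TIME) AS oldest_archived_log
FROM V$ARCHIVED_LOG
WHERE DELETED = 'NO';

-- List objects with a Fine-Grained Audit (FGA) policy, which prevents
-- DTS from reading the ORA_ROWSCN pseudocolumn.
SELECT OBJECT_SCHEMA, OBJECT_NAME, POLICY_NAME
FROM DBA_AUDIT_POLICIES;

-- To disable a policy before migration (schema, table, and policy
-- names below are hypothetical):
-- EXEC DBMS_FGA.DROP_POLICY('APP', 'ORDERS', 'ORDERS_FGA');
```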

Other limitations

  • Before migrating data, evaluate the performance of the source and destination databases. Perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources from the source and destination databases, which may increase the database load.

  • Because full data migration performs concurrent INSERT operations, it causes table fragmentation in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must stop or release the task. Alternatively, use the revoke command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • Migration of foreign tables is not supported.

  • The destination PolarDB for PostgreSQL (Compatible with Oracle) cluster generates unique indexes, such as pg_oid_1498503_index, to correspond to the ROWID of the source Oracle database. Therefore, the destination cluster has more indexes than the source Oracle database.

  • The destination PolarDB for PostgreSQL (Compatible with Oracle) cluster does not support writing the string terminator ('\0'). If the data to be migrated contains this terminator, DTS does not write it to the destination database. This causes data inconsistency.

  • When a CHECK constraint from the source Oracle database is migrated to the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster, it is converted into a NOT NULL constraint.

  • Ensure that the character sets of the source and destination databases are compatible. Otherwise, data inconsistency or task failure may occur.

  • Use the schema migration feature of DTS. Otherwise, the task may fail due to incompatible data types.

  • The time zones of the source and destination databases must be the same.

  • During incremental migration, using Oracle Data Pump to import data into the source database is not supported. This may cause data loss.

  • User-defined types can be migrated to the destination PolarDB for PostgreSQL (Compatible with Oracle) cluster. Type objects automatically generated by Oracle, which are built-in objects, are not migrated.

    Note

    PolarDB for PostgreSQL (Compatible with Oracle) clusters already support Oracle's built-in objects, so you do not need to migrate them.

  • DTS validates data content but does not support validation for metadata such as sequences. You must validate this metadata yourself.

  • After your workloads are switched to the destination database, new sequences do not increment from the maximum value of the sequences in the source database. Therefore, before you switch your workloads to the destination database, you must update the starting value of sequences in the destination database.

  • If a full or incremental data migration task migrates tables that contain foreign keys, triggers, or event triggers, and the destination database account is a privileged account or has the permissions of the superuser role, DTS temporarily sets the session_replication_role parameter to replica at the session level during the migration. If the destination account does not have these permissions, you must manually set session_replication_role to replica in the destination database. While session_replication_role is set to replica, cascade update or delete operations in the source database may cause data inconsistency. After the data migration task is released, you can change session_replication_role back to origin.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
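As noted above, sequences in the destination database do not continue from the maximum values reached in the source. Before you switch workloads, you can realign them manually. A minimal sketch, with `order_seq` as a hypothetical sequence name:

```sql
-- On the source Oracle database: find the sequence's current high-water
-- mark.
SELECT LAST_NUMBER FROM USER_SEQUENCES
WHERE SEQUENCE_NAME = 'ORDER_SEQ';

-- On the destination PolarDB for PostgreSQL (Compatible with Oracle)
-- cluster: restart the sequence above that value. Here 100001 stands in
-- for the value returned by the query above, plus a safety margin.
ALTER SEQUENCE order_seq RESTART WITH 100001;
```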

Migrate data from a self-managed Oracle database to MySQL

If the destination database is a MySQL database, such as an ApsaraDB RDS for MySQL instance or a self-managed MySQL database, the precautions and limitations are as follows:


Source database limitations

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • If the source database is connected over a leased line, you must configure one of the virtual IP addresses (VIPs) in the connection information. This allows the data migration task to connect to the Oracle Real Application Clusters (RAC) database over the leased line.

  • If the self-managed Oracle database uses a RAC architecture and is connected over a leased line, VPN Gateway, Smart Access Gateway, Database Gateway (DG), or Cloud Enterprise Network (CEN), or from an ECS instance, you cannot configure a Single Client Access Name (SCAN) IP address. You can configure only one of the VIPs in the connection information. If you use this method, node switching for RAC is not supported.

  • If the data to be migrated contains empty strings of the `varchar2` type, which Oracle treats as null, and the corresponding destination database field has a NOT NULL constraint, the migration task fails.

  • If the FGA (Fine-Grained Audit) policy is enabled on the table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, which will cause the migration job to fail.

    Note

    You can disable the FGA policy for the tables to be migrated, or choose not to migrate data from these tables.

  • Requirements for migration objects:

    • The tables to be migrated must have a primary key or a unique constraint, and the columns in that key or constraint must contain unique values. Otherwise, duplicate data may appear in the destination database.

    • If your self-managed Oracle database is version 12c or later, the names of the tables to be migrated must not exceed 30 bytes in length.

    • If you migrate objects at the table level and need to edit them, such as mapping table or column names, a single data migration task supports a maximum of 1,000 tables. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple batches and configure a separate task for each batch, or configure a task to migrate the entire database.

  • For incremental migration, Redo Logs and Archive Logs must meet the following requirements:

    • Both must be enabled in the source database.

    • For an incremental data migration task, DTS requires that Redo Logs and Archive Logs in the source database are retained for more than 24 hours. For a task that includes both full and incremental data migration, DTS requires that Redo Logs and Archive Logs are retained for at least 7 days. After the full data migration is complete, you can change the retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the logs. In extreme cases, this may cause data inconsistency or loss. Issues caused by a log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).

  • Limitations on source database operations:

    • During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, the source and destination data will be inconsistent. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.

    • Updating large text fields separately is not supported and will cause the task to fail.

Other limitations

  • During incremental migration, using Oracle Data Pump to import data into the source database is not supported. This may cause data loss.

  • Migration of foreign tables is not supported.

  • Migration of PACKAGE, PACKAGE_BODY, MATERIALIZED_VIEW, SYNONYM, TYPE, TYPE_BODY, FUNCTION, PROCEDURE, SEQUENCE, VIEW, TABLE_COMMENT, COLUMN_COMMENT, and TRIGGER is not supported.

  • If the data to be migrated contains information such as rare characters or emojis that take up four bytes, the destination databases and tables that receive the data must use the utf8mb4 character set.

    Note

    If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4.

  • Before migrating data, evaluate the performance of the source and destination databases. Perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources from the source and destination databases, which may increase the database load.

  • Because full data migration performs concurrent INSERT operations, it causes table fragmentation in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must stop or release the task. Alternatively, use the revoke command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • If a DDL statement fails to be written to the destination database, the DTS task continues to run. You must check the task logs for the failed DDL statement. For more information about how to view task logs, see Query task logs.

  • Ensure that the character sets of the source and destination databases are compatible. Otherwise, data inconsistency or task failure may occur.

  • Use the schema migration feature of DTS. Otherwise, the task may fail due to incompatible data types.

  • The time zones of the source and destination databases must be the same.

  • If you write column names that differ only in capitalization to the same table in the destination MySQL database, the data migration result may not meet your expectations because the column names in MySQL databases are not case-sensitive.

  • After data migration is complete, that is, after the Status of the instance changes to Completed, we recommend that you run the analyze table <table name> command to check whether the data has been written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may have been written only to memory, which can cause data loss.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
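The verification step above can be run from any MySQL client once the task status is Completed. A minimal sketch, with `orders` as a hypothetical table name:

```sql
-- Per the recommendation above, run ANALYZE TABLE against each migrated
-- table; it succeeds only against persisted table data and refreshes
-- the table's statistics.
ANALYZE TABLE orders;

-- Optionally spot-check the row count against the source table.
SELECT COUNT(*) FROM orders;
```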

Special cases

When the destination database is ApsaraDB RDS for MySQL

  • ApsaraDB RDS for MySQL instances are case-insensitive to English table names. If you use uppercase English letters to create a table, ApsaraDB RDS for MySQL converts the table name to lowercase before creating the table.

    If the source Oracle database contains tables whose names are identical but differ in case, this may cause object name conflicts and a message indicating that the object already exists during schema migration. If this occurs, use the object name mapping feature provided by DTS to rename the conflicting objects when you configure migration objects. Convert the table names to uppercase. For more information, see Map tables and columns.

  • DTS automatically creates databases in the ApsaraDB RDS for MySQL instance. If the name of a database to be migrated does not comply with the naming conventions of ApsaraDB RDS for MySQL, you must create the database in the ApsaraDB RDS for MySQL instance before you configure the migration task. For more information, see Manage databases.
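If you need to create the destination database manually as described above, you can also set its character set at the same time, which satisfies the utf8mb4 requirement for four-byte data. A minimal sketch, with `appdb` as a hypothetical database name:

```sql
-- Create the destination database with the utf8mb4 character set so
-- that four-byte characters (emoji, rare characters) can be stored.
CREATE DATABASE appdb
  CHARACTER SET utf8mb4
  COLLATE utf8mb4_general_ci;

-- Verify the server-level default character set, as recommended in the
-- note about schema migration above.
SHOW VARIABLES LIKE 'character_set_server';
```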

Migrate data from a self-managed Oracle database to PolarDB for MySQL

If the destination cluster is a PolarDB for MySQL cluster, the precautions and limitations are as follows:


Source database limitations

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • If the source database is connected over a leased line, you must configure one of the virtual IP addresses (VIPs) in the connection information. This allows the data migration task to connect to the Oracle Real Application Clusters (RAC) database over the leased line.

  • If the self-managed Oracle database uses a RAC architecture and is connected over a leased line, VPN Gateway, Smart Access Gateway, Database Gateway (DG), or Cloud Enterprise Network (CEN), or from an ECS instance, you cannot configure a Single Client Access Name (SCAN) IP address. You can configure only one of the VIPs in the connection information. If you use this method, node switching for RAC is not supported.

  • If the data to be migrated contains empty strings of the `varchar2` type, which Oracle treats as null, and the corresponding destination database field has a NOT NULL constraint, the migration task fails.

  • If the FGA (Fine-Grained Audit) policy is enabled on the table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, which will cause the migration job to fail.

    Note

    You can disable the FGA policy for the tables to be migrated, or choose not to migrate data from these tables.

  • Requirements for migration objects:

    • The tables to be migrated must have a primary key or a unique constraint, and the columns in that key or constraint must contain unique values. Otherwise, duplicate data may appear in the destination database.

    • If your self-managed Oracle database is version 12c or later, the names of the tables to be migrated must not exceed 30 bytes in length.

    • If you migrate objects at the table level and need to edit them, such as mapping table or column names, a single data migration task supports a maximum of 1,000 tables. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple batches and configure a separate task for each batch, or configure a task to migrate the entire database.

  • For incremental migration, Redo Logs and Archive Logs must meet the following requirements:

    • Both must be enabled in the source database.

    • For an incremental data migration task, DTS requires that Redo Logs and Archive Logs in the source database are retained for more than 24 hours. For a task that includes both full and incremental data migration, DTS requires that Redo Logs and Archive Logs are retained for at least 7 days. After the full data migration is complete, you can change the retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the logs. In extreme cases, this may cause data inconsistency or loss. Issues caused by a log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).

  • Limitations on source database operations:

    • During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, the source and destination data will be inconsistent. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.

    • Updating large text fields separately is not supported and will cause the task to fail.

Other limitations

  • During incremental migration, using Oracle Data Pump to import data into the source database is not supported. This may cause data loss.

  • Migration of foreign tables is not supported.

  • Migration of PACKAGE, PACKAGE_BODY, MATERIALIZED_VIEW, SYNONYM, TYPE, TYPE_BODY, FUNCTION, PROCEDURE, SEQUENCE, VIEW, TABLE_COMMENT, COLUMN_COMMENT, and TRIGGER is not supported.

  • If the data to be migrated contains information such as rare characters or emojis that take up four bytes, the destination databases and tables that receive the data must use the utf8mb4 character set.

    Note

    If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4.

  • Before migrating data, evaluate the performance of the source and destination databases. Perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources from the source and destination databases, which may increase the database load.

  • Because full data migration performs concurrent INSERT operations, it causes table fragmentation in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must stop or release the task. Alternatively, use the revoke command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • If a DDL statement fails to be written to the destination database, the DTS task continues to run. You must check the task logs for the failed DDL statement. For more information about how to view task logs, see Query task logs.

  • Ensure that the character sets of the source and destination databases are compatible. Otherwise, data inconsistency or task failure may occur.

  • Use the schema migration feature of DTS. Otherwise, the task may fail due to incompatible data types.

  • The time zones of the source and destination databases must be the same.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
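Before switching your business to the destination cluster, the write permissions of the DTS account can be revoked as described above, so that an automatically resumed task cannot overwrite destination data. A minimal sketch, with `dts_user` and `appdb` as hypothetical names:

```sql
-- Revoke write privileges from the account that DTS uses to access the
-- destination PolarDB for MySQL cluster.
REVOKE INSERT, UPDATE, DELETE ON appdb.* FROM 'dts_user'@'%';

-- Confirm the remaining grants for the account.
SHOW GRANTS FOR 'dts_user'@'%';
```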

Special cases

When the destination database is PolarDB for MySQL:

  • PolarDB for MySQL clusters are case-insensitive to English table names. If you use uppercase English letters to create a table, PolarDB for MySQL converts the table name to lowercase before creating the table.

    If the source Oracle database contains tables whose names are identical but differ in case, this may cause object name conflicts and a message indicating that the object already exists during schema migration. If this occurs, use the object name mapping feature provided by DTS to rename the conflicting objects when you configure migration objects. Convert the table names to uppercase. For more information, see Map tables and columns.

  • DTS automatically creates databases in PolarDB for MySQL. If the name of a database to be migrated does not comply with the naming conventions of PolarDB for MySQL, you must create the database in PolarDB for MySQL before you configure the migration task. For more information, see Manage databases.

Migrate data from a self-managed Oracle database to AnalyticDB for PostgreSQL

If the destination instance is an AnalyticDB for PostgreSQL instance, the precautions and limitations are as follows:


Source database limitations

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • If the source database is connected over a leased line, you must configure one of the virtual IP addresses (VIPs) in the connection information. This allows the data migration task to connect to the Oracle Real Application Clusters (RAC) database over the leased line.

  • If the self-managed Oracle database uses a RAC architecture and is connected over a leased line, VPN Gateway, Smart Access Gateway, Database Gateway (DG), or Cloud Enterprise Network (CEN), or from an ECS instance, you cannot configure a Single Client Access Name (SCAN) IP address. You can configure only one of the VIPs in the connection information. If you use this method, node switching for RAC is not supported.

  • If the data to be migrated contains empty strings of the `varchar2` type, which Oracle treats as null, and the corresponding destination database field has a NOT NULL constraint, the migration task fails.

  • If the FGA (Fine-Grained Audit) policy is enabled on the table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, which will cause the migration job to fail.

    Note

    You can disable the FGA policy for the tables to be migrated, or choose not to migrate data from these tables.

  • Requirements for migration objects:

    • The tables to be migrated must have a primary key or a unique constraint, and the columns in that key or constraint must contain unique values. Otherwise, duplicate data may appear in the destination database.

    • If your self-managed Oracle database is version 12c or later, the names of the tables to be migrated must not exceed 30 bytes in length.

    • If you migrate objects at the table level and need to edit them, such as mapping table or column names, a single data migration task supports a maximum of 1,000 tables. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple batches and configure a separate task for each batch, or configure a task to migrate the entire database.

  • For incremental migration, Redo Logs and Archive Logs must meet the following requirements:

    • Both must be enabled in the source database.

    • For an incremental data migration task, DTS requires that Redo Logs and Archive Logs in the source database are retained for more than 24 hours. For a task that includes both full and incremental data migration, DTS requires that Redo Logs and Archive Logs are retained for at least 7 days. After the full data migration is complete, you can change the retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the logs. In extreme cases, this may cause data inconsistency or loss. Issues caused by a log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).

  • Limitations on source database operations:

    • During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, the source and destination data will be inconsistent. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.

    • Updating large text fields separately is not supported and will cause the task to fail.

Other limitations

  • Only table-level migration is supported. Append-optimized (AO) tables are not supported as destination tables.

  • If you use column mapping for a partial table migration or if the source and destination table schemas are inconsistent, data in the columns that exist in the source but not in the destination will be lost.

  • The destination AnalyticDB for PostgreSQL instance does not support writing the string terminator ('\0'). If the data to be migrated contains this terminator, DTS does not write it to the destination database. This causes data inconsistency.

  • Migration of foreign tables is not supported.

  • Migration of PACKAGE, PACKAGE_BODY, MATERIALIZED_VIEW, SYNONYM, TYPE, TYPE_BODY, PROCEDURE, and INDEX is not supported.

  • During incremental migration, using Oracle Data Pump to import data into the source database is not supported. This may cause data loss.

  • If a table to be migrated contains a primary key, the primary key column in the destination table must be the same as that of the source table. If a table to be migrated does not contain a primary key, the primary key column and the distribution key in the destination table must be the same.

  • A unique key in the destination table, including the primary key, must contain all columns of its distribution key.

  • Before migrating data, evaluate the performance of the source and destination databases. Perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources from the source and destination databases, which may increase the database load.

  • Because full data migration performs concurrent INSERT operations, it causes table fragmentation in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must stop or release the task. Alternatively, use the revoke command to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
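The primary key and distribution key requirements above can be satisfied when you create the destination table. A minimal sketch, with the table and column names as hypothetical examples:

```sql
-- In AnalyticDB for PostgreSQL, every unique key (including the primary
-- key) must contain all columns of the distribution key. Distributing
-- the table by its primary key column satisfies both rules above. Note
-- that append-optimized (AO) tables are not supported as destination
-- tables.
CREATE TABLE orders (
    order_id  bigint NOT NULL,
    customer  varchar(64),
    amount    numeric(12, 2),
    PRIMARY KEY (order_id)
) DISTRIBUTED BY (order_id);
```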

Migrate data from a self-managed Oracle database to Message Queue for Apache Kafka or a self-managed Kafka cluster


Source database limitations

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • If the source database is connected over a leased line, you must configure one of the virtual IP addresses (VIPs) in the connection information. This allows the data migration task to connect to the Oracle Real Application Clusters (RAC) database over the leased line.

  • If the self-managed Oracle database uses a RAC architecture and is connected over a leased line, VPN Gateway, Smart Access Gateway, Database Gateway (DG), or Cloud Enterprise Network (CEN), or from an ECS instance, you cannot configure a Single Client Access Name (SCAN) IP address. You can configure only one of the VIPs in the connection information. If you use this method, node switching for RAC is not supported.

  • If the data to be migrated contains empty strings of the `varchar2` type, which Oracle treats as null, and the corresponding destination database field has a NOT NULL constraint, the migration task fails.

  • If the FGA (Fine-Grained Audit) policy is enabled on the table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, which will cause the migration job to fail.

    Note

    You can disable the FGA policy for the tables to be migrated, or choose not to migrate data from these tables.

  • Requirements for migration objects:

    • The tables to be migrated must have a primary key or a unique constraint, and the constrained columns must contain unique values. Otherwise, duplicate data may appear in the destination database.

    • If your self-managed Oracle database is version 12c or later, the names of the tables to be migrated must not exceed 30 bytes in length.

    • If you migrate objects at the table level and need to edit them, such as mapping table or column names, a single data migration task supports a maximum of 1,000 tables. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple batches and configure a separate task for each batch, or configure a task to migrate the entire database.

  • Requirements for redo logs and archive logs (incremental migration):

    • Redo logging and archive logging must be enabled in the source database.

    • For an incremental data migration task, DTS requires that Redo Logs and Archive Logs in the source database are retained for more than 24 hours. For a task that includes both full and incremental data migration, DTS requires that Redo Logs and Archive Logs are retained for at least 7 days. After the full data migration is complete, you can change the retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the logs. In extreme cases, this may cause data inconsistency or loss. Issues caused by a log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).

  • Limitations on source database operations:

    • During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, the source and destination data will be inconsistent. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.

    • UPDATE operations that modify only large object (LOB) fields are not supported and will cause the task to fail.
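
To verify the redo log and archive log requirement above, you can check the log mode of the source Oracle database in SQL*Plus as a user with SYSDBA privileges. This is a minimal sketch; enabling ARCHIVELOG mode requires restarting the database, so plan a maintenance window:

```sql
-- Check whether the source database runs in ARCHIVELOG mode
SELECT log_mode FROM v$database;

-- If the result is NOARCHIVELOG, enable archive logging
-- (requires a clean restart into MOUNT state):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```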

Other limitations

  • During incremental migration, do not use Oracle Data Pump to import data into the source database. Otherwise, data loss may occur.

  • Migration of foreign tables is not supported.

  • Migration of INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, FK, TABLE_COMMENT, and COLUMN_COMMENT is not supported.

  • Before migrating data, evaluate the performance of the source and destination databases. Perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources from the source and destination databases, which may increase the database load.

  • Because full data migration performs concurrent INSERT operations, it causes table fragmentation in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must stop or release the task. Alternatively, execute the REVOKE statement to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • During migration, if the destination Kafka cluster is scaled out or scaled in, you must restart the DTS instance.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Migrate data between self-managed Oracle databases

If the destination database is also an Oracle database, the precautions and limitations are as follows:


Source database limitations

  • Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.

  • If the source database is connected over a leased line, you must configure one of the virtual IP addresses (VIPs) in the connection information so that the Oracle Real Application Clusters (RAC) instance can connect to the data migration task over the leased line.

  • If the self-managed Oracle database uses a RAC architecture and is connected over a leased line, VPN Gateway, Smart Access Gateway, Database Gateway (DG), or Cloud Enterprise Network (CEN), or from an ECS instance, you cannot configure a Single Client Access Name (SCAN) IP address. You can configure only one of the VIPs in the connection information. If you use this method, node switching for RAC is not supported.

  • If the data to be migrated contains empty strings of the `varchar2` type, which Oracle treats as null, and the corresponding destination database field has a NOT NULL constraint, the migration task fails.

  • If the FGA (Fine-Grained Audit) policy is enabled on the table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, which will cause the migration job to fail.

    Note

    You can disable the FGA policy for the tables to be migrated, or choose not to migrate data from these tables.

  • Requirements for migration objects:

    • The tables to be migrated must have a primary key or a unique constraint, and the constrained columns must contain unique values. Otherwise, duplicate data may appear in the destination database.

    • If your self-managed Oracle database is version 12c or later, the names of the tables to be migrated must not exceed 30 bytes in length.

    • If you migrate objects at the table level and need to edit them, such as mapping table or column names, a single data migration task supports a maximum of 1,000 tables. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple batches and configure a separate task for each batch, or configure a task to migrate the entire database.

  • Requirements for redo logs and archive logs (incremental migration):

    • Redo logging and archive logging must be enabled in the source database.

    • For an incremental data migration task, DTS requires that Redo Logs and Archive Logs in the source database are retained for more than 24 hours. For a task that includes both full and incremental data migration, DTS requires that Redo Logs and Archive Logs are retained for at least 7 days. After the full data migration is complete, you can change the retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the logs. In extreme cases, this may cause data inconsistency or loss. Issues caused by a log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).

  • Limitations on source database operations:

    • During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.

    • If you perform only full data migration, do not write new data to the source instance. Otherwise, the source and destination data will be inconsistent. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.

    • UPDATE operations that modify only large object (LOB) fields are not supported and will cause the task to fail.
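
To verify the primary key requirement above before you configure the task, you can query the Oracle data dictionary for tables that lack both a primary key and a unique constraint. The schema name `MYSCHEMA` is a placeholder:

```sql
-- List tables in MYSCHEMA (a placeholder) that have neither a
-- primary key (constraint_type 'P') nor a unique constraint ('U')
SELECT t.table_name
FROM   all_tables t
WHERE  t.owner = 'MYSCHEMA'
AND    NOT EXISTS (
         SELECT 1
         FROM   all_constraints c
         WHERE  c.owner = t.owner
         AND    c.table_name = t.table_name
         AND    c.constraint_type IN ('P', 'U')
       );
```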

Other limitations

  • During incremental migration, do not use Oracle Data Pump to import data into the source database. Otherwise, data loss may occur.

  • Migration of foreign tables is not supported.

  • Schema migration does not migrate Oracle schema (user) objects themselves.

  • Before migrating data, evaluate the performance of the source and destination databases. Perform data migration during off-peak hours. During full data migration, DTS consumes some read and write resources from the source and destination databases, which may increase the database load.

  • Because full data migration performs concurrent INSERT operations, it causes table fragmentation in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance.

  • DTS attempts to resume failed migration tasks within seven days. Therefore, before you switch your business to the destination instance, you must stop or release the task. Alternatively, execute the REVOKE statement to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically resumed.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the DTS task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
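
For the FGA limitation listed under the source database limitations above, you can check which of the tables to be migrated have fine-grained audit policies and, if acceptable, drop them with the DBMS_FGA package. The schema, table, and policy names in the call are placeholders:

```sql
-- List fine-grained audit policies in the source database
SELECT object_schema, object_name, policy_name
FROM   dba_audit_policies;

-- Drop a policy before migration (names are placeholders)
BEGIN
  DBMS_FGA.DROP_POLICY(
    object_schema => 'MYSCHEMA',
    object_name   => 'MYTABLE',
    policy_name   => 'MYPOLICY');
END;
/
```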