This topic describes the precautions and limits when you migrate data from an Oracle database. To ensure that your data migration task runs as expected, read the precautions and limits before you configure the task.

Scenarios of migrating data from an Oracle database

Take note of the precautions and limits in the following data migration scenarios:

Migrate data from a self-managed Oracle database to a PolarDB for Oracle cluster

The following table describes the precautions and limits.
Limits on the source database
  • The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
  • If the source database is connected over Express Connect, you must specify a virtual IP address (VIP) for the database when you configure the source database information.
  • If the source database is an Oracle RAC database hosted on Elastic Compute Service (ECS) or connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database information. After you specify the VIP, node failover is not supported for the Oracle RAC database.
  • Requirements for the objects to migrate:
    • The tables to migrate must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
    • If the version number of your Oracle database is 12c or later, the names of the tables to migrate cannot exceed 30 bytes in length.
    • If you select tables as the objects to migrate and you want to edit the tables (such as renaming tables or columns) in the destination database, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to migrate the tables, or configure a task to migrate the entire database.
  • If you want to migrate incremental data, you must make sure that the following requirements are met:
    • Redo logging and archive logging must be enabled. A SQL sketch that checks these settings is provided after this table.
    • For an incremental data migration task, the redo logs and archive logs of the source database must be retained for more than 24 hours. For a full data and incremental data migration task, the redo logs and archive logs must be retained for at least seven days; after full data migration is completed, you can shorten the retention period, provided that it remains longer than 24 hours. If the logs are not retained for the required period, Data Transmission Service (DTS) may fail to obtain them and the task may fail, and in exceptional circumstances data inconsistency or loss may occur. If you do not follow the preceding retention requirements, the Service Level Agreement (SLA) of DTS does not ensure service reliability and performance.

  • Limits on operations:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.
Other limits
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is completed, the tablespace of the destination database is larger than that of the source database.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads over to the destination instance, stop or release the data migration task. You can also execute the REVOKE statement to revoke write permissions from the accounts used by DTS to access the destination instance. Otherwise, the data in the source database overwrites the data in the destination instance after the task is resumed.
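
The following SQL sketch shows one way to check whether the source Oracle database meets the logging requirements described above. This is a minimal example and assumes that you can query the V$ views (for example, as a user with SYSDBA privileges); adjust it for your environment.

  -- Check whether the database runs in ARCHIVELOG mode. The query should return ARCHIVELOG.
  SELECT log_mode FROM v$database;

  -- Check the oldest archived log that is still available, to verify the retention requirement.
  SELECT MIN(first_time) AS oldest_archived_log
  FROM v$archived_log
  WHERE deleted = 'NO';

  -- If the database runs in NOARCHIVELOG mode, enabling archive logging requires
  -- a restart into the MOUNT state (run as SYSDBA):
  --   SHUTDOWN IMMEDIATE;
  --   STARTUP MOUNT;
  --   ALTER DATABASE ARCHIVELOG;
  --   ALTER DATABASE OPEN;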

Migrate data from a self-managed Oracle database to a MySQL database

The following table describes the precautions and limits when you migrate data to MySQL databases, such as self-managed MySQL databases and ApsaraDB RDS for MySQL instances.
Limits on the source database
  • The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
  • If the source database is connected over Express Connect, you must specify a VIP for the database when you configure the source database information.
  • If the source database is an Oracle RAC database hosted on Elastic Compute Service (ECS) or connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database information. After you specify the VIP, node failover is not supported for the Oracle RAC database.
  • Requirements for the objects to migrate:
    • The tables to migrate must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
    • If the version number of your Oracle database is 12c or later, the names of the tables to migrate cannot exceed 30 bytes in length.
    • If you select tables as the objects to migrate and you want to edit the tables (such as renaming tables or columns) in the destination database, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to migrate the tables, or configure a task to migrate the entire database.
  • If you want to migrate incremental data, you must make sure that the following requirements are met:
    • Redo logging and archive logging must be enabled.
    • For an incremental data migration task, the redo logs and archive logs of the source database must be retained for more than 24 hours. For a full data and incremental data migration task, the redo logs and archive logs must be retained for at least seven days; after full data migration is completed, you can shorten the retention period, provided that it remains longer than 24 hours. If the logs are not retained for the required period, Data Transmission Service (DTS) may fail to obtain them and the task may fail, and in exceptional circumstances data inconsistency or loss may occur. If you do not follow the preceding retention requirements, the Service Level Agreement (SLA) of DTS does not ensure service reliability and performance.

  • Limits on operations:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.
Other limits
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is completed, the tablespace of the destination database is larger than that of the source database.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads over to the destination instance, stop or release the data migration task. You can also execute the REVOKE statement to revoke write permissions from the accounts used by DTS to access the destination instance. Otherwise, the data in the source database overwrites the data in the destination instance after the task is resumed.
Special cases
  If the destination database runs on an ApsaraDB RDS for MySQL instance, take note of the following limits:
  • Table names in the ApsaraDB RDS for MySQL instance are case-insensitive. If a table name in the source Oracle database contains uppercase letters, ApsaraDB RDS for MySQL converts all uppercase letters to lowercase letters before a table is created.

    If the source Oracle database contains table names that differ only in capitalization, these table names are identified as duplicates. As a result, the "The object already exists" message may be displayed during schema migration. To prevent name conflicts in the destination database, you can use the object name mapping feature to capitalize the table names. For more information, see Object name mapping. A query that detects such name collisions in the source schema is sketched after this table.

  • DTS automatically creates a destination database in the ApsaraDB RDS for MySQL instance. However, if the name of the source database is invalid, you must create a database in the ApsaraDB RDS for MySQL instance before you configure the data migration task. For more information, see Create a database on an ApsaraDB RDS for MySQL instance.
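
If you are not sure whether the source schema contains table names that differ only in capitalization, a query similar to the following sketch can help you find them before schema migration. The schema name YOUR_SCHEMA is a placeholder; replace it with the owner of the tables to migrate.

  -- Find table names that collapse into the same name when capitalization is ignored.
  SELECT UPPER(table_name) AS normalized_name, COUNT(*) AS name_count
  FROM all_tables
  WHERE owner = 'YOUR_SCHEMA'
  GROUP BY UPPER(table_name)
  HAVING COUNT(*) > 1;

Any rows returned indicate names that the destination would treat as the same lowercase table name; rename them with the object name mapping feature before you start the task.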

Migrate data from a self-managed Oracle database to a PolarDB for MySQL cluster

The following table describes the precautions and limits when you migrate data from a self-managed Oracle database to a PolarDB for MySQL cluster.
Limits on the source database
  • The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
  • If the source database is connected over Express Connect, you must specify a VIP for the database when you configure the source database information.
  • If the source database is an Oracle RAC database hosted on Elastic Compute Service (ECS) or connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database information. After you specify the VIP, node failover is not supported for the Oracle RAC database.
  • Requirements for the objects to migrate:
    • The tables to migrate must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
    • If the version number of your Oracle database is 12c or later, the names of the tables to migrate cannot exceed 30 bytes in length.
    • If you select tables as the objects to migrate and you want to edit the tables (such as renaming tables or columns) in the destination database, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to migrate the tables, or configure a task to migrate the entire database.
  • If you want to migrate incremental data, you must make sure that the following requirements are met:
    • Redo logging and archive logging must be enabled.
    • For an incremental data migration task, the redo logs and archive logs of the source database must be retained for more than 24 hours. For a full data and incremental data migration task, the redo logs and archive logs must be retained for at least seven days; after full data migration is completed, you can shorten the retention period, provided that it remains longer than 24 hours. If the logs are not retained for the required period, Data Transmission Service (DTS) may fail to obtain them and the task may fail, and in exceptional circumstances data inconsistency or loss may occur. If you do not follow the preceding retention requirements, the Service Level Agreement (SLA) of DTS does not ensure service reliability and performance.

  • Limits on operations:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.
Other limits
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is completed, the tablespace of the destination database is larger than that of the source database.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads over to the destination instance, stop or release the data migration task. You can also execute the REVOKE statement to revoke write permissions from the accounts used by DTS to access the destination instance, as shown in the sketch after this table. Otherwise, the data in the source database overwrites the data in the destination instance after the task is resumed.
Special cases
  If the destination database runs on a PolarDB for MySQL cluster, take note of the following limits:
  • Table names in the PolarDB for MySQL cluster are case-insensitive. If a table name in the source Oracle database contains uppercase letters, PolarDB for MySQL converts all uppercase letters to lowercase letters before a table is created.

    If the source Oracle database contains table names that differ only in capitalization, these table names are identified as duplicates. During schema migration, the following message is returned: "The object already exists". To prevent name conflicts in the destination database, you can use the object name mapping feature to capitalize the table names. For more information, see Object name mapping.

  • DTS automatically creates a destination database in the PolarDB for MySQL cluster. However, if the name of the source database is invalid, you must create a database in the PolarDB for MySQL cluster before you configure the data migration task. For more information, see Database Management.
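
Before you switch workloads over to the PolarDB for MySQL cluster, you can revoke the write permissions of the account used by DTS, as mentioned above. The following is a minimal sketch in MySQL syntax; dtstestdata and dts_user are placeholder names for the destination database and the DTS account.

  -- Revoke write permissions from the account used by DTS on the destination database.
  REVOKE INSERT, UPDATE, DELETE ON dtstestdata.* FROM 'dts_user'@'%';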

Migrate data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance

The following table describes the precautions and limits when you migrate data from a self-managed Oracle database to an AnalyticDB for PostgreSQL instance.
Limits on the source database
  • The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
  • If the source database is connected over Express Connect, you must specify a VIP for the database when you configure the source database information.
  • If the source database is an Oracle RAC database hosted on Elastic Compute Service (ECS) or connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database information. After you specify the VIP, node failover is not supported for the Oracle RAC database.
  • Requirements for the objects to migrate:
    • The tables to migrate must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records. A query that finds tables without such constraints is sketched after this table.
    • If the version number of your Oracle database is 12c or later, the names of the tables to migrate cannot exceed 30 bytes in length.
    • If you select tables as the objects to migrate and you want to edit the tables (such as renaming tables or columns) in the destination database, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to migrate the tables, or configure a task to migrate the entire database.
  • If you want to migrate incremental data, you must make sure that the following requirements are met:
    • Redo logging and archive logging must be enabled.
    • For an incremental data migration task, the redo logs and archive logs of the source database must be retained for more than 24 hours. For a full data and incremental data migration task, the redo logs and archive logs must be retained for at least seven days; after full data migration is completed, you can shorten the retention period, provided that it remains longer than 24 hours. If the logs are not retained for the required period, Data Transmission Service (DTS) may fail to obtain them and the task may fail, and in exceptional circumstances data inconsistency or loss may occur. If you do not follow the preceding retention requirements, the Service Level Agreement (SLA) of DTS does not ensure service reliability and performance.

  • Limits on operations:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.
Other limits
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is completed, the tablespace of the destination database is larger than that of the source database.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads over to the destination instance, stop or release the data migration task. You can also execute the REVOKE statement to revoke write permissions from the accounts used by DTS to access the destination instance. Otherwise, the data in the source database overwrites the data in the destination instance after the task is resumed.
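
To check whether the tables to migrate meet the PRIMARY KEY or UNIQUE constraint requirement, you can run a query along the following lines on the source Oracle database. YOUR_SCHEMA is a placeholder for the owner of the tables to migrate.

  -- List tables that have neither a PRIMARY KEY nor a UNIQUE constraint.
  SELECT t.table_name
  FROM all_tables t
  WHERE t.owner = 'YOUR_SCHEMA'
    AND NOT EXISTS (
      SELECT 1
      FROM all_constraints c
      WHERE c.owner = t.owner
        AND c.table_name = t.table_name
        AND c.constraint_type IN ('P', 'U')
    );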

Migrate data from a self-managed Oracle database to a Message Queue for Apache Kafka instance or a self-managed Kafka cluster

The following table describes the precautions and limits when you migrate data from a self-managed Oracle database to a Message Queue for Apache Kafka instance or a self-managed Kafka cluster.
Limits on the source database
  • The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
  • If the source database is connected over Express Connect, you must specify a VIP for the database when you configure the source database information.
  • If the source database is an Oracle RAC database hosted on Elastic Compute Service (ECS) or connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database information. After you specify the VIP, node failover is not supported for the Oracle RAC database.
  • Requirements for the objects to migrate:
    • The tables to migrate must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
    • If the version number of your Oracle database is 12c or later, the names of the tables to migrate cannot exceed 30 bytes in length. A query that finds table names longer than 30 bytes is sketched after this table.
    • If you select tables as the objects to migrate and you want to edit the tables (such as renaming tables or columns) in the destination database, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to migrate the tables, or configure a task to migrate the entire database.
  • If you want to migrate incremental data, you must make sure that the following requirements are met:
    • Redo logging and archive logging must be enabled.
    • For an incremental data migration task, the redo logs and archive logs of the source database must be retained for more than 24 hours. For a full data and incremental data migration task, the redo logs and archive logs must be retained for at least seven days; after full data migration is completed, you can shorten the retention period, provided that it remains longer than 24 hours. If the logs are not retained for the required period, Data Transmission Service (DTS) may fail to obtain them and the task may fail, and in exceptional circumstances data inconsistency or loss may occur. If you do not follow the preceding retention requirements, the Service Level Agreement (SLA) of DTS does not ensure service reliability and performance.

  • Limits on operations:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.
Other limits
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is completed, the tablespace of the destination database is larger than that of the source database.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads over to the destination instance, stop or release the data migration task. You can also execute the REVOKE statement to revoke write permissions from the accounts used by DTS to access the destination instance. Otherwise, the data in the source database overwrites the data in the destination instance after the task is resumed.
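
If the source database runs Oracle 12c or later, you can use a query similar to the following sketch to find table names that exceed the 30-byte limit. YOUR_SCHEMA is a placeholder for the owner of the tables to migrate.

  -- List table names that are longer than 30 bytes.
  SELECT table_name, LENGTHB(table_name) AS name_length_in_bytes
  FROM all_tables
  WHERE owner = 'YOUR_SCHEMA'
    AND LENGTHB(table_name) > 30;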

Migrate data between self-managed Oracle databases

The following table describes the precautions and limits when you migrate data between self-managed Oracle databases.
Limits on the source database
  • The server to which the source database belongs must have sufficient outbound bandwidth. Otherwise, the data migration speed decreases.
  • If the source database is connected over Express Connect, you must specify a VIP for the database when you configure the source database information.
  • If the source database is an Oracle RAC database hosted on Elastic Compute Service (ECS) or connected over Express Connect, VPN Gateway, Smart Access Gateway, Database Gateway, or Cloud Enterprise Network (CEN), you can use a single VIP rather than a Single Client Access Name (SCAN) IP address when you configure the source database information. After you specify the VIP, node failover is not supported for the Oracle RAC database.
  • Requirements for the objects to migrate:
    • The tables to migrate must have PRIMARY KEY or UNIQUE constraints and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
    • If the version number of your Oracle database is 12c or later, the names of the tables to migrate cannot exceed 30 bytes in length.
    • If you select tables as the objects to migrate and you want to edit the tables (such as renaming tables or columns) in the destination database, up to 1,000 tables can be migrated in a single data migration task. If you run a task to migrate more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables and configure multiple tasks to migrate the tables, or configure a task to migrate the entire database. A query that counts the tables in a schema is sketched after this table.
  • If you want to migrate incremental data, you must make sure that the following requirements are met:
    • Redo logging and archive logging must be enabled.
    • For an incremental data migration task, the redo logs and archive logs of the source database must be retained for more than 24 hours. For a full data and incremental data migration task, the redo logs and archive logs must be retained for at least seven days; after full data migration is completed, you can shorten the retention period, provided that it remains longer than 24 hours. If the logs are not retained for the required period, Data Transmission Service (DTS) may fail to obtain them and the task may fail, and in exceptional circumstances data inconsistency or loss may occur. If you do not follow the preceding retention requirements, the Service Level Agreement (SLA) of DTS does not ensure service reliability and performance.

  • Limits on operations:
    • During schema migration and full data migration, do not perform DDL operations to change the schemas of databases or tables. Otherwise, the data migration task fails.
    • If you perform only full data migration, do not write data to the source database during data migration. Otherwise, data inconsistency between the source and destination databases occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
    • During data migration, do not update LONGTEXT fields. Otherwise, the data migration task fails.
Other limits
  • Before you migrate data, evaluate the impact of data migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses read and write resources of the source and destination databases. This may increase the loads of the database servers.
  • During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is completed, the tablespace of the destination database is larger than that of the source database.
  • DTS attempts to resume data migration tasks that failed within the last seven days. Before you switch workloads over to the destination instance, stop or release the data migration task. You can also execute the REVOKE statement to revoke write permissions from the accounts used by DTS to access the destination instance. Otherwise, the data in the source database overwrites the data in the destination instance after the task is resumed.
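
To estimate whether the objects that you plan to migrate exceed the 1,000-table limit for a single task with table editing, you can count the tables per schema on the source database, as in the following sketch. YOUR_SCHEMA is a placeholder for the owner of the tables to migrate.

  -- Count the tables in the schema to migrate.
  SELECT owner, COUNT(*) AS table_count
  FROM all_tables
  WHERE owner = 'YOUR_SCHEMA'
  GROUP BY owner;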