| Type | Description |
| --- | --- |
Source database limitations | - Bandwidth requirements: the server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.
- If the self-managed Oracle database uses a Real Application Clusters (RAC) architecture and is connected over a leased line, VPN Gateway, Smart Access Gateway, Database Gateway (DG), or Cloud Enterprise Network (CEN), or from an ECS instance, you cannot configure a Single Client Access Name (SCAN) IP address. You must configure one of the virtual IP addresses (VIPs) in the connection information instead, so that the RAC nodes can connect to the data migration task. With this method, node switching for RAC is not supported.
- If the data to be migrated contains empty strings of the `varchar2` type, which Oracle treats as NULL, and the corresponding destination column has a NOT NULL constraint, the migration task fails.
- If a Fine-Grained Auditing (FGA) policy is enabled on a table to be migrated, DTS cannot recognize the ORA_ROWSCN pseudocolumn, and the migration task fails.
Note You can disable the FGA policy for the tables to be migrated, or choose not to migrate data from these tables.
- Requirements for migration objects: the tables to be migrated must have a primary key or a unique constraint, and the constrained fields must contain unique values. Otherwise, duplicate data may appear in the destination database.
- If your self-managed Oracle database runs version 12c or later, the names of the tables to be migrated cannot exceed 30 bytes in length.
- If you migrate objects at the table level and need to edit them, such as mapping table or column names, a single data migration task supports at most 1,000 tables. If this limit is exceeded, an error is reported after you submit the task. In this case, split the tables into multiple batches and configure a separate task for each batch, or configure a task that migrates the entire database.
- Redo logs and archive logs: these must be enabled for incremental migration. For an incremental data migration task, DTS requires that redo logs and archive logs in the source database be retained for more than 24 hours. For a task that includes both full and incremental data migration, DTS requires that they be retained for at least 7 days. After the full data migration is complete, you can reduce the retention period to more than 24 hours. If the retention period is shorter than required, the DTS task may fail because it cannot obtain the logs, and in extreme cases data may be inconsistent or lost. Issues caused by a log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
- Limitations on source database operations: during schema migration and full data migration, do not perform DDL operations that change the schema of databases or tables. Otherwise, the data migration task fails.
- If you perform only full data migration, do not write new data to the source instance during migration. Otherwise, data becomes inconsistent between the source and destination. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.
- Separate UPDATE operations on large text fields are not supported and cause the migration task to fail.
|
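As an illustration of the empty-string limitation above, the following minimal Python sketch (the helper names are hypothetical, not part of DTS) mimics Oracle's coercion of empty `varchar2` strings to NULL and flags the rows that would then violate a NOT NULL constraint on the destination:

```python
def oracle_value(v):
    # Oracle stores an empty VARCHAR2 string as NULL.
    return None if v == "" else v

def violates_not_null(rows, not_null_cols):
    """Return (row_index, column) pairs that would break a NOT NULL
    constraint in the destination after Oracle's empty-string coercion."""
    problems = []
    for i, row in enumerate(rows):
        for col in not_null_cols:
            if oracle_value(row.get(col)) is None:
                problems.append((i, col))
    return problems

rows = [{"name": "alice"}, {"name": ""}]   # second row holds an empty varchar2
print(violates_not_null(rows, ["name"]))   # -> [(1, 'name')]
```

A pre-flight check like this, run against a sample of the source data, can surface such rows before the migration task fails partway through.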
Other limitations | - During incremental migration, using Oracle Data Pump to import data into the source database is not supported and may cause data loss.
- Migration of foreign tables is not supported.
- Migration of the following objects is not supported: PACKAGE, PACKAGE_BODY, MATERIALIZED_VIEW, SYNONYM, TYPE, TYPE_BODY, FUNCTION, PROCEDURE, SEQUENCE, VIEW, TABLE_COMMENT, COLUMN_COMMENT, and TRIGGER.
- If the data to be migrated contains four-byte characters, such as rare characters or emoji, the destination databases and tables must use the utf8mb4 character set.
Note If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4.
- Before you migrate data, evaluate the performance of the source and destination databases and perform the migration during off-peak hours. During full data migration, DTS consumes read and write resources of both databases, which may increase their load.
- Full data migration performs concurrent INSERT operations, which causes table fragmentation in the destination database. As a result, tables in the destination database occupy more storage space than in the source instance.
- DTS attempts to resume failed migration tasks within seven days. Before you switch your business to the destination instance, stop or release the task, or use the REVOKE statement to revoke the write permissions of the account that DTS uses to access the destination instance. This prevents an automatically resumed task from overwriting data in the destination instance with source data.
- If a DDL statement fails to be written to the destination database, the DTS task continues to run. You must check the task logs for the failed DDL statement. For more information about how to view task logs, see Query task logs.
- Make sure that the character sets of the source and destination databases are compatible. Otherwise, data inconsistency or task failure may occur.
- Use the schema migration feature of DTS. Otherwise, the task may fail due to incompatible data types.
- The source and destination databases must use the same time zone.
- Column names in MySQL databases are not case-sensitive. If you write column names that differ only in capitalization to the same table in the destination MySQL database, the migration result may not meet your expectations.
- After the data migration is complete, that is, after the status of the instance changes to Completed, we recommend that you run the ANALYZE TABLE <table name> statement to check whether data has been written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to memory, which can result in data loss.
- If a DTS task fails to run, DTS technical support attempts to restore the task within 8 hours. During the restoration, the task may be restarted and its parameters may be modified.
Note Only the parameters of the DTS task may be modified; database parameters are not. The parameters that may be modified include, but are not limited to, those described in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
|
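The four-byte character limitation above can be checked ahead of time. The following is a minimal Python sketch (the `needs_utf8mb4` helper is hypothetical, not a DTS tool) that detects characters requiring 4 bytes in UTF-8, which MySQL's legacy utf8 (utf8mb3) character set cannot store:

```python
def needs_utf8mb4(text):
    """True if any character needs 4 bytes in UTF-8 (e.g. emoji or
    supplementary-plane characters), requiring utf8mb4 in MySQL."""
    return any(len(ch.encode("utf-8")) == 4 for ch in text)

print(needs_utf8mb4("plain ascii"))  # -> False
print(needs_utf8mb4("hello 😀"))     # -> True
```

Running a check like this over sampled source data tells you whether the destination databases and tables must be created with the utf8mb4 character set.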
Special cases | When the destination database is ApsaraDB RDS for MySQL:
- ApsaraDB RDS for MySQL instances are case-insensitive to table names. If you use uppercase letters to create a table, ApsaraDB RDS for MySQL converts the table name to lowercase before creating the table. If the source Oracle database contains tables whose names differ only in case, this may cause object name conflicts during schema migration, with a message indicating that the object already exists. If this occurs, use the object name mapping feature provided by DTS to rename the conflicting objects when you configure the migration objects, for example so that their names no longer differ only in case. For more information, see Map tables and columns.
- DTS automatically creates databases in the ApsaraDB RDS for MySQL instance. If the name of a database to be migrated does not comply with the naming conventions of ApsaraDB RDS for MySQL, create the database in the ApsaraDB RDS for MySQL instance before you configure the migration task. For more information, see Manage databases.
|
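The case-conflict behavior described above can also be detected before you configure migration objects. The following is a minimal Python sketch (a hypothetical pre-flight helper, not part of DTS) that groups source table names which would collide after ApsaraDB RDS for MySQL lowercases them:

```python
from collections import defaultdict

def case_collisions(table_names):
    """Group source table names that become identical after lowercasing,
    i.e. names that would conflict in a case-insensitive destination."""
    groups = defaultdict(list)
    for name in table_names:
        groups[name.lower()].append(name)
    # Keep only the groups with more than one original spelling.
    return {k: v for k, v in groups.items() if len(v) > 1}

print(case_collisions(["Orders", "ORDERS", "customers"]))
# -> {'orders': ['Orders', 'ORDERS']}
```

Any group returned here is a set of tables you would need to rename through the DTS object name mapping feature before schema migration.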