Type | Description |
Limits on source database | Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected.
The tables to be migrated must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.
If you select tables as the objects to migrate and you need to edit the tables in the destination database (for example, rename tables or columns), you can migrate up to 1,000 tables in a single data migration task. If a task contains more than 1,000 tables, a request error occurs. In this case, we recommend that you split the tables across multiple migration tasks, or configure a task to migrate the entire database.
If you want to migrate incremental data, make sure that the following requirements are met: the binary logging feature is enabled, the binlog_format parameter is set to row, and the binlog_row_image parameter is set to full. Otherwise, error messages are returned during the precheck and the data migration task cannot be started.
Important: If the source database is a self-managed MySQL database in a dual-primary cluster (the two nodes serve as primary nodes for each other), you must enable the log_slave_updates parameter to ensure that DTS can obtain all binary logs.
The local binlogs of an ApsaraDB RDS for MySQL instance must be retained for at least 3 days (7 days recommended), and the local binlogs of a self-managed MySQL database must be retained for at least 7 days. Otherwise, DTS may fail because it cannot obtain the binlogs, and in extreme cases this may cause data inconsistency or data loss. Problems caused by setting the binlog retention period shorter than the time required by DTS are not covered by the DTS service-level agreement (SLA).
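The binlog requirements above can be verified on the source database before you configure the task. A minimal sketch, assuming a MySQL 5.7 or 8.0 source (binlog_expire_logs_seconds exists only in MySQL 8.0; earlier versions use expire_logs_days):

```sql
-- Verify that binary logging is enabled and configured as DTS requires.
SHOW VARIABLES LIKE 'log_bin';            -- expect: ON
SHOW VARIABLES LIKE 'binlog_format';      -- expect: ROW
SHOW VARIABLES LIKE 'binlog_row_image';   -- expect: FULL
SHOW VARIABLES LIKE 'log_slave_updates';  -- expect: ON for a dual-primary cluster

-- Check how long binlogs are retained.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';  -- 604800 = 7 days (MySQL 8.0)
SHOW VARIABLES LIKE 'expire_logs_days';            -- pre-8.0 equivalent, in days

-- On a self-managed MySQL 8.0 source, extend retention to 7 days if needed:
SET GLOBAL binlog_expire_logs_seconds = 604800;
```

For an ApsaraDB RDS for MySQL source, the retention period is changed in the console rather than with SET GLOBAL.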
Limits on operations to perform on the source database: During schema migration and full data migration, do not perform DDL operations that change the schemas of databases or tables. Otherwise, the data migration task fails.
If you perform only full data migration, do not write data to the source database during the migration. Otherwise, data inconsistency occurs. To ensure data consistency, we recommend that you select schema migration, full data migration, and incremental data migration as the migration types.
During the migration, data changes made without binary logging, such as data changes caused by physical backup recovery or cascading operations, are not migrated to the destination database.
Note: If such a situation occurs, you can perform a full data migration again when your business allows.
If the source database runs MySQL 8.0.23 or later and the data to be migrated contains invisible columns, data may be lost because DTS cannot obtain the data in the invisible columns.
Note: You can run the ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; statement to make an invisible column visible. For more information, see Invisible Columns. Tables without a primary key automatically generate an invisible primary key, which you must also set to visible. For more information, see Generated Invisible Primary Keys.
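You can locate invisible columns before migration by querying INFORMATION_SCHEMA. A sketch, assuming MySQL 8.0.23 or later; the names t1 and c1 are placeholders:

```sql
-- List invisible columns in the databases to be migrated.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE EXTRA LIKE '%INVISIBLE%'
  AND TABLE_SCHEMA NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');

-- Make an invisible column visible so that DTS can read it.
ALTER TABLE t1 ALTER COLUMN c1 SET VISIBLE;

-- Generated invisible primary keys (GIPK) are named my_row_id by default:
ALTER TABLE t1 ALTER COLUMN my_row_id SET VISIBLE;
```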
|
Others | To ensure compatibility, we recommend that you use the same engine version for the source and destination MySQL databases.
DTS does not support the migration of parsers that are defined by using the comment syntax.
If the destination database runs MySQL 8.0.23 or later and the tables that receive data contain invisible columns, the DTS instance may fail to run or data may be lost because DTS cannot find the destination columns to write data to.
Note: You can run the ALTER TABLE <table_name> ALTER COLUMN <column_name> SET VISIBLE; statement to make an invisible column visible. For more information, see Invisible Columns. Tables without a primary key automatically generate an invisible primary key, which you must also set to visible. For more information, see Generated Invisible Primary Keys.
If you do not use the schema migration feature provided by DTS, you must ensure field compatibility yourself. Otherwise, the instance may fail or data may be lost. For example, if a source column is of the text type and the destination column that receives the data is of the varchar(255) type, data truncation may occur when the source table contains large values. If the data to be migrated contains content that requires four bytes of storage space, such as emojis, the databases and tables in the destination instance must use the utf8mb4 character set.
Note: If you use DTS to migrate the database and table schemas, set the instance-level character_set_server parameter of the destination database to utf8mb4.
Before you migrate data, evaluate the impact of the migration on the performance of the source and destination databases. We recommend that you migrate data during off-peak hours. During full data migration, DTS uses the read and write resources of the source and destination databases, which may increase the load on the database servers.
During full data migration, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data migration is complete, the tablespace of the destination database is larger than that of the source database.
Make sure that the precision settings for columns of the FLOAT or DOUBLE type meet your business requirements. DTS reads the values of these columns by using ROUND(COLUMN, PRECISION). If you do not specify a precision, DTS sets the precision of FLOAT columns to 38 digits and the precision of DOUBLE columns to 308 digits.
DTS attempts to resume data migration tasks that failed within the last seven days. Therefore, before you switch your workloads to the destination instance, you must end or release the task, or use the REVOKE statement to revoke the write permissions of the DTS account that is used to access the destination instance. Otherwise, the data in the source database will overwrite the data in the destination instance after the task is automatically resumed.
If DDL statements fail to be executed in the destination database, the DTS task continues to run. You can view the DDL statements that failed to be executed in the task logs. For information about how to view task logs, see Query task logs.
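The field-compatibility and character-set precautions above can be checked with queries such as the following. A sketch; the names t1 and c1 are placeholders for a source table and a text column that will land in a varchar(255) destination column:

```sql
-- Longest value in the source column: it must fit the destination varchar(255).
SELECT MAX(CHAR_LENGTH(c1)) AS max_chars FROM t1;

-- Rows that contain characters outside the 3-byte utf8mb3 range (such as emojis);
-- these rows require utf8mb4 on the destination. Converting to utf8mb3 replaces
-- unmappable characters, so the byte length changes for affected rows.
SELECT COUNT(*) AS four_byte_rows
FROM t1
WHERE LENGTH(c1) <> LENGTH(CONVERT(c1 USING utf8mb3));

-- On the destination, verify the instance-level character set:
SHOW VARIABLES LIKE 'character_set_server';  -- expect: utf8mb4
```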
If you write fields whose names differ only in capitalization to the same table in the destination MySQL database, the migration result may not meet your expectations because field names in MySQL are case-insensitive.
After the data migration is complete (the status of the instance is Completed), we recommend that you run the analyze table <table name> command to confirm that all data has been written to the destination table. For example, after the high-availability (HA) switchover mechanism is triggered in the destination MySQL database, data may be written only to memory, which can result in data loss.
If the always-confidential (EncDB) feature is enabled for the ApsaraDB RDS for MySQL instance, full data migration is not supported.
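The post-migration verification step above can look like the following. A sketch; mydb and orders are placeholder names:

```sql
-- Update the destination table's statistics, per the recommendation above.
ANALYZE TABLE mydb.orders;

-- Compare the row count with the source table to confirm that all data arrived.
SELECT COUNT(*) FROM mydb.orders;
```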
Note: ApsaraDB RDS for MySQL instances with transparent data encryption (TDE) enabled support schema migration, full data migration, and incremental data migration.
If you need to migrate database accounts, you must meet the prerequisites and understand the related precautions. For more information, see Migrate database accounts.
If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.
Note Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
|
Special cases | When the source database is a self-managed MySQL database: If a primary/secondary switchover is performed on the source database, the data migration task fails.
DTS calculates migration latency based on the timestamp of the latest migrated data in the destination database and the current timestamp in the source database. If no data manipulation language (DML) operation is performed on the source database for a long time, the reported latency may be inaccurate. If the latency of the migration task is excessively high, you can perform a DML operation on the source database to update the latency.
Note If you select an entire database as the object to migrate, you can create a heartbeat table. The heartbeat table is updated or receives data every second. DTS periodically executes the CREATE DATABASE IF NOT EXISTS `test` command in the source database to advance the Binlog offset.
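A heartbeat table such as the one described above can be maintained with a scheduled event. A sketch, assuming the MySQL event scheduler is available; dts_heartbeat and update_dts_heartbeat are placeholder names:

```sql
-- A one-row table that is touched every second so that DTS always sees fresh DML.
CREATE TABLE dts_heartbeat (
  id INT PRIMARY KEY,
  ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
INSERT INTO dts_heartbeat (id) VALUES (1);

SET GLOBAL event_scheduler = ON;  -- events run only when the scheduler is on

CREATE EVENT update_dts_heartbeat
ON SCHEDULE EVERY 1 SECOND
DO UPDATE dts_heartbeat SET ts = CURRENT_TIMESTAMP WHERE id = 1;
```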
When the source database is an ApsaraDB RDS for MySQL instance: If you need to migrate incremental data, ApsaraDB RDS for MySQL instances that do not record transaction logs, such as read-only instances of ApsaraDB RDS for MySQL 5.6, cannot be used as the source database. DTS periodically executes the CREATE DATABASE IF NOT EXISTS `test` command in the source database to advance the Binlog offset.
When the destination database is an ApsaraDB RDS for MySQL instance: DTS automatically creates a destination database in the ApsaraDB RDS for MySQL instance. However, if the name of the source database is invalid, you must create a database in the ApsaraDB RDS for MySQL instance before you configure the data migration task. For more information, see Manage databases.
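If the source database name is invalid on the destination, create a valid database manually before you configure the migration task. A sketch; valid_db_name is a placeholder:

```sql
-- On the destination ApsaraDB RDS for MySQL instance:
CREATE DATABASE `valid_db_name`
  DEFAULT CHARACTER SET utf8mb4
  DEFAULT COLLATE utf8mb4_general_ci;
```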
|