Overview of migration scenarios for a PolarDB for MySQL source
Review the notes and limits for your migration task based on the following migration scenarios:
Migration between PolarDB for MySQL instances
The following table describes the notes and limits.
Type | Description |
Source database limits | Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected. The tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the key or constraint must be unique. Otherwise, duplicate data may appear in the destination database. If you migrate data at the table level and need to edit the tables, such as mapping column names, a single data migration task can migrate a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database. If you perform incremental migration: You must enable binary logging and set the loose_polar_log_bin parameter to on. Otherwise, the precheck reports an error and the data migration task cannot start. For more information about how to enable binary logging and modify parameters, see Enable binary logging and Modify parameters.
Note Enabling binary logging for a PolarDB for MySQL cluster consumes storage space and incurs storage fees. The binary logs of the PolarDB for MySQL cluster must be retained for at least 3 days. We recommend a retention period of 7 days. Otherwise, DTS may fail to obtain the binary logs, which can cause the task to fail. In extreme cases, this can lead to data inconsistency or data loss. Issues caused by a binary log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
Note For more information about how to set the retention period for binary logs of a PolarDB for MySQL cluster, see Modify the retention period.
Operational limits for the source database: During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.
Note During full data migration, DTS queries the source database. This creates a metadata lock, which may block DDL operations on the source database. If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.
|
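The binary logging prerequisites above can be verified from a SQL client before you configure incremental migration. A minimal sketch, assuming a standard PolarDB for MySQL cluster (variable names may differ by cluster version):

```sql
-- Check that binary logging is active on the source cluster.
SHOW VARIABLES LIKE 'log_bin';        -- expected to be ON after loose_polar_log_bin is enabled
-- Confirm that binary log files exist and note the earliest retained file.
SHOW BINARY LOGS;
-- DTS typically expects row-based logging.
SHOW VARIABLES LIKE 'binlog_format';
```

If log_bin is OFF, enable the loose_polar_log_bin parameter as described above and restart the cluster if required.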
Other limits | We recommend that the source and destination PolarDB for MySQL instances run the same MySQL version to ensure compatibility. DTS does not migrate data that depends on a parser defined by using comments. DTS does not support the migration of read-only nodes of the source PolarDB for MySQL instance. DTS does not support the migration of OSS external tables from the source PolarDB for MySQL instance. DTS does not support the migration of INDEX and PARTITION. DTS does not support primary/standby switchover scenarios for the database instance during full data migration. In such a scenario, reconfigure the migration task promptly. If you perform online DDL operations in temporary table mode on the source database, such as merging multiple tables, data loss may occur in the destination database or the migration instance may fail. If a primary key or unique key conflict occurs while the migration instance is running: If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur: During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained. During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.
If the data to be migrated contains information such as rare characters or emojis that take up four bytes, the destination databases and tables that receive the data must use the utf8mb4 character set.
Note If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4. Before you perform data migration, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on both the source and destination databases during full data migration, which may increase the database load. Because full data migration involves concurrent INSERT operations, table fragmentation occurs in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance after full migration is complete. Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN,PRECISION). If you do not explicitly define the precision, DTS uses a default precision of 38 for FLOAT and 308 for DOUBLE. DTS attempts to recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task. Alternatively, revoke the write permissions of the database account that DTS uses to access the destination instance by using the REVOKE statement. This prevents the source data from overwriting the data in the destination instance if the task is automatically recovered. If a DDL statement fails to be written to the destination database, the DTS task continues to run. You need to check the task logs for the failed DDL statement. For more information about how to view task logs, see Query task logs. If you want to migrate accounts from the source database to the destination database, you need to learn the prerequisites and precautions. For more information, see Migrate database accounts. If an instance fails, DTS support attempts to recover the instance within 8 hours.
During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.
Note When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
|
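Before you switch your business to the destination instance, the note above recommends revoking the write permissions of the DTS account so that an automatically recovered task cannot overwrite destination data. A minimal sketch, where the account name dts_user and the database mydb are placeholders:

```sql
-- Replace 'dts_user'@'%' and mydb with the actual DTS account and database.
REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON mydb.* FROM 'dts_user'@'%';
-- Verify the remaining privileges of the account.
SHOW GRANTS FOR 'dts_user'@'%';
```

Alternatively, end or release the DTS task before the cutover.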
Other notes | DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset. |
Migration from PolarDB for MySQL to RDS for MySQL or self-managed MySQL
The following table describes the notes and limits.
Type | Description |
Source database limits | Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected. The tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the key or constraint must be unique. Otherwise, duplicate data may appear in the destination database. If you migrate data at the table level and need to edit the tables, such as mapping column names, a single data migration task can migrate a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database. If you perform incremental migration: You must enable binary logging and set the loose_polar_log_bin parameter to on. Otherwise, the precheck reports an error and the data migration task cannot start. For more information about how to enable binary logging and modify parameters, see Enable binary logging and Modify parameters.
Note Enabling binary logging for a PolarDB for MySQL cluster consumes storage space and incurs storage fees. The binary logs of the PolarDB for MySQL cluster must be retained for at least 3 days. We recommend a retention period of 7 days. Otherwise, DTS may fail to obtain the binary logs, which can cause the task to fail. In extreme cases, this can lead to data inconsistency or data loss. Issues caused by a binary log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
Note For more information about how to set the retention period for binary logs of a PolarDB for MySQL cluster, see Modify the retention period.
Operational limits for the source database: During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.
Note During full data migration, DTS queries the source database. This creates a metadata lock, which may block DDL operations on the source database. If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.
|
Notes | DTS does not support the migration of read-only nodes of the source PolarDB for MySQL instance. DTS does not support the migration of OSS external tables from the source PolarDB for MySQL instance. DTS does not support the migration of INDEX and PARTITION. DTS does not migrate data that depends on a parser defined by using comments. DTS does not support primary/standby switchover scenarios for the database instance during full data migration. In such a scenario, reconfigure the migration task promptly. If you perform online DDL operations in temporary table mode on the source database, such as merging multiple tables, data loss may occur in the destination database or the migration instance may fail. If a primary key or unique key conflict occurs while the migration instance is running: If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur: During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained. During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.
If the data to be migrated contains information such as rare characters or emojis that take up four bytes, the destination databases and tables that receive the data must use the utf8mb4 character set.
Note If you use the schema migration feature of DTS, set the character_set_server parameter of the destination instance to utf8mb4. Before you perform data migration, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on both the source and destination databases during full data migration, which may increase the database load. Because full data migration involves concurrent INSERT operations, table fragmentation occurs in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance after full migration is complete. Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN,PRECISION). If you do not explicitly define the precision, DTS uses a default precision of 38 for FLOAT and 308 for DOUBLE. DTS attempts to recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task. Alternatively, revoke the write permissions of the database account that DTS uses to access the destination instance by using the REVOKE statement. This prevents the source data from overwriting the data in the destination instance if the task is automatically recovered. DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset. If a DDL statement fails to be written to the destination database, the DTS task continues to run. You need to check the task logs for the failed DDL statement. For more information about how to view task logs, see Query task logs.
If you write column names that differ only in capitalization to the same table in the destination MySQL database, the data migration result may not meet your expectations because column names in MySQL databases are not case-sensitive. After data migration is complete, that is, when the Status of the instance changes to Completed, we recommend that you run the analyze table <table name> command to check whether data is written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to the memory. As a result, data loss occurs. If an instance fails, DTS support attempts to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.
Note When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
|
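The post-migration check described above can be run on the destination MySQL database once the instance Status changes to Completed. A minimal sketch with a hypothetical table name:

```sql
-- Updates table statistics; per the note above, use it to check whether
-- data is actually written to the destination table.
ANALYZE TABLE mydb.orders;
-- Basic sanity check: compare this count with the same query on the source.
SELECT COUNT(*) FROM mydb.orders;
```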
Special cases | If the destination database is an RDS for MySQL instance, DTS automatically creates a database in the instance. If the name of the database to be migrated does not comply with the naming conventions of RDS for MySQL, you must create the database in the RDS for MySQL instance before you configure the migration task. For more information, see Manage databases. |
Migration from PolarDB for MySQL to PolarDB-X 2.0
The following table describes the notes and limits.
Type | Description |
Source database limits | Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected. The tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the key or constraint must be unique. Otherwise, duplicate data may appear in the destination database. If you migrate data at the table level and need to edit the tables, such as mapping column names, a single data migration task can migrate a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database. If you perform incremental migration: You must enable binary logging and set the loose_polar_log_bin parameter to on. Otherwise, the precheck reports an error and the data migration task cannot start. For more information about how to enable binary logging and modify parameters, see Enable binary logging and Modify parameters.
Note Enabling binary logging for a PolarDB for MySQL cluster consumes storage space and incurs storage fees. The binary logs of the PolarDB for MySQL cluster must be retained for at least 3 days. We recommend a retention period of 7 days. Otherwise, DTS may fail to obtain the binary logs, which can cause the task to fail. In extreme cases, this can lead to data inconsistency or data loss. Issues caused by a binary log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
Note For more information about how to set the retention period for binary logs of a PolarDB for MySQL cluster, see Modify the retention period.
Operational limits for the source database: During full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.
Note During full data migration, DTS queries the source database. This creates a metadata lock, which may block DDL operations on the source database. If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination. To maintain real-time data consistency, select full data migration and incremental data migration. Incremental migration does not support DDL operations. If you perform a DDL operation on the source database, the data migration task fails. To perform a DDL operation, we recommend that you first manually run it on the destination database and then run the same DDL operation on the source database.
|
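Because incremental migration to PolarDB-X 2.0 does not replicate DDL statements, the note above recommends applying schema changes manually, destination first. A sketch with a hypothetical table and column:

```sql
-- Step 1: run the DDL statement on the destination PolarDB-X 2.0 database.
ALTER TABLE orders ADD COLUMN note VARCHAR(255) NULL;

-- Step 2: run the identical statement on the source PolarDB for MySQL database.
-- Incremental rows that reference the new column can then be applied without error.
ALTER TABLE orders ADD COLUMN note VARCHAR(255) NULL;
```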
Notes | If you perform online DDL operations in temporary table mode on the source database, such as merging multiple tables, data loss may occur in the destination database or the migration instance may fail. If a primary key or unique key conflict occurs while the migration instance is running: If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur: During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained. During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.
DTS does not support the migration of read-only nodes of the source PolarDB for MySQL instance. DTS does not support the migration of OSS external tables from the source PolarDB for MySQL instance. DTS does not support primary/standby switchover scenarios for the database instance during full data migration. In such a scenario, reconfigure the migration task promptly. Before you perform data migration, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on both the source and destination databases during full data migration, which may increase the database load. Because full data migration involves concurrent INSERT operations, table fragmentation occurs in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance after full migration is complete. Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN,PRECISION). If you do not explicitly define the precision, DTS uses a default precision of 38 for FLOAT and 308 for DOUBLE. DTS attempts to recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task. Alternatively, revoke the write permissions of the database account that DTS uses to access the destination instance by using the REVOKE statement. This prevents the source data from overwriting the data in the destination instance if the task is automatically recovered. DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset. If an instance fails, DTS support attempts to recover the instance within 8 hours.
During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.
Note When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
|
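The FLOAT and DOUBLE precision behavior described above can be previewed on the source database. A sketch assuming a hypothetical table measurements with columns f_col (FLOAT) and d_col (DOUBLE):

```sql
-- Approximates how DTS reads floating-point columns when no precision is defined.
SELECT f_col, ROUND(f_col, 38)  AS read_as_float,
       d_col, ROUND(d_col, 308) AS read_as_double
FROM measurements
LIMIT 10;
```

If the rounded values differ from what your business expects, define an explicit precision on the columns before you start the migration.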
Migration from PolarDB for MySQL to AnalyticDB for MySQL
The following table describes the notes and limits.
Type | Description |
Source database limits | Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected. The tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the key or constraint must be unique. Otherwise, duplicate data may appear in the destination database. If you migrate data at the table level and need to edit the tables, such as mapping column names, a single data migration task can migrate a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database. If you perform incremental migration: You must enable binary logging and set the loose_polar_log_bin parameter to on. Otherwise, the precheck reports an error and the data migration task cannot start. For more information about how to enable binary logging and modify parameters, see Enable binary logging and Modify parameters.
Note Enabling binary logging for a PolarDB for MySQL cluster consumes storage space and incurs storage fees. The binary logs of the PolarDB for MySQL cluster must be retained for at least 3 days. We recommend a retention period of 7 days. Otherwise, DTS may fail to obtain the binary logs, which can cause the task to fail. In extreme cases, this can lead to data inconsistency or data loss. Issues caused by a binary log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
Note For more information about how to set the retention period for binary logs of a PolarDB for MySQL cluster, see Modify the retention period.
Operational limits for the source database: During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.
Note During full data migration, DTS queries the source database. This creates a metadata lock, which may block DDL operations on the source database. During migration, do not perform DDL operations that modify the primary key or add comments to a table, such as ALTER TABLE table_name COMMENT='table_comment';. Otherwise, the data migration task fails. If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.
|
Notes | DTS does not support the migration of prefix indexes. If the source database has prefix indexes, the data migration may fail. If you perform online DDL operations in temporary table mode on the source database, such as merging multiple tables, data loss may occur in the destination database or the migration instance may fail. If a primary key or unique key conflict occurs while the migration instance is running: If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur: During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained. During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.
You must specify a custom primary key in the destination database or configure Primary Key Column in the Configurations for Databases, Tables, and Columns step. Otherwise, data may fail to be migrated. DTS does not support the migration of read-only nodes of the source PolarDB for MySQL instance. DTS does not support the migration of OSS external tables from the source PolarDB for MySQL instance. DTS does not support the migration of INDEX, PARTITION, VIEW, PROCEDURE, FUNCTION, TRIGGER, and FK. DTS does not support primary/standby switchover scenarios for the database instance during full data migration. In such a scenario, reconfigure the migration task promptly. Due to the limits of AnalyticDB for MySQL, if the disk space usage of a node in the AnalyticDB for MySQL cluster exceeds 80%, the DTS task becomes abnormal and latency occurs. Estimate the required space for the objects to be migrated in advance to ensure that the destination cluster has sufficient storage space. If the destination AnalyticDB for MySQL 3.0 cluster is being backed up when the DTS task is running, the task fails. Before you perform data migration, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on both the source and destination databases during full data migration, which may increase the database load. Because full data migration involves concurrent INSERT operations, table fragmentation occurs in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance after full migration is complete. Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN,PRECISION).
If you do not explicitly define the precision, DTS uses a default precision of 38 for FLOAT and 308 for DOUBLE. DTS attempts to recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task. Alternatively, revoke the write permissions of the database account that DTS uses to access the destination instance by using the REVOKE statement. This prevents the source data from overwriting the data in the destination instance if the task is automatically recovered. DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset. If a DDL statement fails to be written to the destination database, the DTS task continues to run. You need to check the task logs for the failed DDL statement. For more information about how to view task logs, see Query task logs. If an instance fails, DTS support attempts to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.
Note When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
|
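To estimate whether the destination AnalyticDB for MySQL cluster has enough storage headroom for the 80% disk usage limit described above, you can size the objects on the source side first. A sketch using information_schema, with mydb as a placeholder database name:

```sql
-- Approximate on-disk size of each table to be migrated, in MB.
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = 'mydb'
ORDER BY size_mb DESC;
```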
Migration from PolarDB for MySQL to self-managed Oracle
The following table describes the notes and limits.
Type | Description |
Source database limits | Bandwidth requirements: The server that hosts the source database must have sufficient outbound bandwidth. Otherwise, the data migration speed is affected. The tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the key or constraint must be unique. Otherwise, duplicate data may appear in the destination database. If you migrate data at the table level and need to edit the tables, such as mapping column names, a single data migration task can migrate a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database. If you perform incremental migration: You must enable binary logging and set the loose_polar_log_bin parameter to on. Otherwise, the precheck reports an error and the data migration task cannot start. For more information about how to enable binary logging and modify parameters, see Enable binary logging and Modify parameters.
Note Enabling binary logging for a PolarDB for MySQL cluster consumes storage space and incurs storage fees. The binary logs of the PolarDB for MySQL cluster must be retained for at least 3 days. We recommend a retention period of 7 days. Otherwise, DTS may fail to obtain the binary logs, which can cause the task to fail. In extreme cases, this can lead to data inconsistency or data loss. Issues caused by a binary log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
Note For more information about how to set the retention period for binary logs of a PolarDB for MySQL cluster, see Modify the retention period.
Operational limits for the source database: During schema migration and full data migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.
Note During full data migration, DTS queries the source database. This creates a metadata lock, which may block DDL operations on the source database. If you perform only full data migration, do not write new data to the source instance. Otherwise, data inconsistency occurs between the source and destination. To maintain real-time data consistency, select schema migration, full data migration, and incremental data migration.
|
Notes | DTS does not support the migration of read-only nodes of the source PolarDB for MySQL instance. DTS does not support the migration of OSS external tables from the source PolarDB for MySQL instance. DTS does not support primary/standby switchover scenarios for the database instance during full data migration. In such a scenario, reconfigure the migration task promptly. If you perform online DDL operations in temporary table mode on the source database, such as merging multiple tables, data loss may occur in the destination database or the migration instance may fail. If a primary key or unique key conflict occurs while the migration instance is running: If the source and destination databases have the same schema, and a data record has the same primary key as an existing data record in the destination database, the following scenarios may occur: During full data migration, DTS does not migrate the data record to the destination database. The existing data record in the destination database is retained. During incremental data migration, DTS migrates the data record to the destination database. The existing data record in the destination database is overwritten.
If the source and destination databases have different schemas, only specific columns are migrated or the data migration task fails. Proceed with caution.
Before you perform data migration, evaluate the performance of the source and destination databases. We also recommend that you perform data migration during off-peak hours. Otherwise, DTS consumes read and write resources on both the source and destination databases during full data migration, which may increase the database load. Because full data migration involves concurrent INSERT operations, table fragmentation occurs in the destination database. As a result, the table storage space in the destination database is larger than that in the source instance after full migration is complete. Confirm whether the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using ROUND(COLUMN,PRECISION). If you do not explicitly define the precision, DTS uses a default precision of 38 for FLOAT and 308 for DOUBLE. DTS attempts to recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, you must end or release the task. Alternatively, revoke the write permissions of the database account that DTS uses to access the destination instance by using the REVOKE statement. This prevents the source data from overwriting the data in the destination instance if the task is automatically recovered. DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` command on the source database to advance the binary log offset. If an instance fails, DTS support attempts to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.
Note When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
|
Special cases | If the self-managed Oracle database uses a Real Application Clusters (RAC) architecture and needs to be connected to an Alibaba Cloud VPC, you must connect both the SCAN IP address and the virtual IP address (VIP) of each node to the VPC and configure routing. This ensures that the DTS task runs successfully. For more information, see Overview of scenarios for connecting an on-premises data center to Alibaba Cloud and Connect an on-premises data center to DTS through a VPN Gateway.
Important When you configure the source Oracle database information in the DTS console, enter only the SCAN IP address of the Oracle RAC in the Database Address or IP Address field. |
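The requirement that every migrated table have a primary key or UNIQUE constraint can be verified before you configure the task. The following query is a minimal sketch that lists tables lacking either constraint; the schema name `mydb` is a placeholder for your own database:

```sql
-- List base tables in the schema `mydb` (placeholder) that have neither
-- a PRIMARY KEY nor a UNIQUE constraint. Such tables can produce
-- duplicate records in the destination database after migration.
SELECT t.table_name
FROM information_schema.tables AS t
WHERE t.table_schema = 'mydb'
  AND t.table_type = 'BASE TABLE'
  AND NOT EXISTS (
    SELECT 1
    FROM information_schema.table_constraints AS c
    WHERE c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
  );
```

Any table returned by this query should be given a primary key or unique index, or excluded from the migration objects.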
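As noted in the table above, revoking write permissions from the DTS account on the destination instance prevents an automatically recovered task from overwriting data after you switch your business over. A minimal sketch, assuming a hypothetical account named `dts_writer` and a destination database named `mydb`:

```sql
-- `dts_writer` and `mydb` are placeholders; substitute the account that
-- DTS uses to access the destination instance and the actual database.
REVOKE INSERT, UPDATE, DELETE ON mydb.* FROM 'dts_writer'@'%';
FLUSH PRIVILEGES;
```

Run this only after you have ended or released the migration task, or after cutover is complete.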
Migration from PolarDB for MySQL to DataHub
The following table describes the notes and limits.
Type | Description |
Source database limits | The tables to be migrated must have a primary key or a UNIQUE constraint, and the fields in the key or constraint must be unique. Otherwise, duplicate data may appear in the destination database. If you migrate data at the table level and need to edit the tables, such as mapping column names, a single data migration task can migrate a maximum of 1,000 tables. If you exceed this limit, an error is reported after you submit the task. In this case, split the tables into multiple migration tasks or configure a task to migrate the entire database. If you perform incremental migration: You must enable binary logging and set the loose_polar_log_bin parameter to on. Otherwise, the precheck reports an error and the data migration task cannot start. For more information about how to enable binary logging and modify parameters, see Enable binary logging and Modify parameters.
Note Enabling binary logging for a PolarDB for MySQL cluster consumes storage space and incurs storage fees. The binary logs of the PolarDB for MySQL cluster must be retained for at least 3 days. We recommend a retention period of 7 days. Otherwise, DTS may fail to obtain the binary logs, which can cause the task to fail. In extreme cases, this can lead to data inconsistency or data loss. Issues caused by a binary log retention period shorter than the DTS requirement are not covered by the DTS Service-Level Agreement (SLA).
Note For more information about how to set the retention period for binary logs of a PolarDB for MySQL cluster, see Modify the retention period.
Operational limits for the source database: During schema migration, do not perform DDL operations that change the database or table schema. Otherwise, the data migration task fails.
|
Other limits | Initial full data synchronization is not supported. This means that DTS does not migrate the existing data of the migration objects from the source PolarDB for MySQL cluster to the destination DataHub instance. Only table-level data migration is supported. DTS does not support migrating data from the read-only nodes of the source PolarDB for MySQL instance. DTS does not support migrating OSS external tables from the source PolarDB for MySQL instance. DTS does not support a primary/standby switchover on the database instance during full data migration. If a switchover occurs, reconfigure the migration task promptly. Confirm that the migration precision for columns of the FLOAT or DOUBLE data type meets your business requirements. DTS reads the values of these columns by using the ROUND(COLUMN, PRECISION) function. If you do not explicitly specify the precision, DTS uses a default precision of 38 for FLOAT and 308 for DOUBLE. DTS attempts to recover failed tasks within seven days. Therefore, before you switch your business to the destination instance, end or release the task. Alternatively, use the REVOKE statement to revoke the write permissions of the database account that DTS uses to access the destination instance. This prevents the source data from overwriting the data in the destination instance if the task is automatically recovered. If an instance fails, DTS technical support attempts to recover the instance within 8 hours. During the recovery process, operations such as restarting the instance and adjusting parameters may be performed.
Note When parameters are adjusted, only the parameters of the DTS instance are modified. The parameters of the database are not modified. The parameters that may be modified include but are not limited to those described in Modify instance parameters.
|
Other notes | DTS periodically runs the CREATE DATABASE IF NOT EXISTS `test` statement on the source database to advance the binary log offset. |
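Before you configure incremental migration, you can verify on the source cluster that binary logging is in effect. The following is a minimal sketch; on PolarDB for MySQL, the loose_polar_log_bin parameter itself is set in the console as described above, and the retention variable name assumes a MySQL 8.0-compatible cluster:

```sql
-- Check whether binary logging is enabled on the source.
-- log_bin should report ON after the cluster parameter takes effect.
SHOW VARIABLES LIKE 'log_bin';

-- Check the binary log retention in seconds (MySQL 8.0 variable name);
-- the recommended retention period of 7 days corresponds to 604800.
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
```

If log_bin reports OFF, enable binary logging and set loose_polar_log_bin to on as described in the source database limits; otherwise the precheck fails and the task cannot start.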