Category | Description |
Limits on the source database | - The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records. (A sample query for finding tables that do not meet this requirement is shown after this row.)
- If you select tables as the objects to be synchronized and you need to edit the tables in the destination database, such as renaming tables or columns, you can synchronize up to 1,000 tables in a single data synchronization task. If you run a task to synchronize more than 1,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables in batches, or configure a task to synchronize the entire database.
- If you need to synchronize incremental data, the binary logging feature must be enabled and the loose_polar_log_bin parameter must be set to on. Otherwise, error messages are returned during precheck and the data synchronization task cannot be started. For more information about how to enable the binary logging feature and set the loose_polar_log_bin parameter, see Enable binary logging and Modify parameters.
Note - If you enable the binary logging feature for a PolarDB for MySQL cluster, you are charged for the storage space that is occupied by binary logs.
For an incremental data synchronization task, the binary logs of the source database must be retained for at least 24 hours. For a full data and incremental data synchronization task, the binary logs of the source database must be retained for at least seven days. Otherwise, DTS may fail to obtain the binary logs and the task may fail, and in exceptional circumstances, data inconsistency or loss may occur. After full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of binary logs based on the preceding requirements. Otherwise, the SLA of DTS does not guarantee service reliability or performance. (A sample check of the binary logging configuration is shown after this row.)
- During data synchronization, do not perform DDL operations that modify the primary key or add comments, because such operations do not take effect. For example, do not execute the ALTER TABLE table_name COMMENT='Table comments'; statement.
|
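To verify the preceding requirements before you configure the task, you can run a few read-only checks against the source cluster from a standard MySQL client. The following sketch is a minimal example under assumptions that are not part of this document: mydb is a placeholder schema name, and the loose_polar_log_bin parameter itself is changed in the PolarDB console rather than through SQL; the statements below only read information_schema views and server variables.

```sql
-- Hypothetical pre-check against the source PolarDB for MySQL cluster.
-- Replace 'mydb' with the schema that contains the tables to be synchronized.

-- 1. Tables that have neither a PRIMARY KEY nor a UNIQUE constraint.
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'mydb'
  AND t.table_type   = 'BASE TABLE'
  AND c.constraint_name IS NULL;

-- 2. Binary logging status and retention. Variable names differ by version:
--    binlog_expire_logs_seconds (MySQL 8.0, where 604800 = 7 days) or
--    expire_logs_days (earlier versions). Retention for a PolarDB cluster is
--    configured in the console; these statements only read the current values.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_expire_logs_seconds';
SHOW VARIABLES LIKE 'expire_logs_days';
```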
Other limits | - Requirements for the objects to be synchronized:
- Only tables can be selected as objects to be synchronized.
- DTS does not synchronize the following types of data: BIT, VARBIT, GEOMETRY, ARRAY, UUID, TSQUERY, TSVECTOR, and TXID_SNAPSHOT.
- Prefix indexes cannot be synchronized. If the source database contains prefix indexes, data may fail to be synchronized. (A sample query for finding prefix indexes is shown after this row.)
- Data cannot be synchronized from the read-only nodes of the source PolarDB for MySQL cluster.
- Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.
- During initial full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. After full data synchronization is complete, the tablespace of the destination database is larger than that of the source database.
- We recommend that you do not use tools such as pt-online-schema-change to perform DDL operations on source tables during data synchronization. Otherwise, data synchronization may fail.
- If you use only DTS to write data to the destination database, you can use Data Management (DMS) to perform online DDL operations on source tables during data synchronization. For more information, see Perform lock-free operations.
- During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. If you use tools other than DTS to write data to the destination database, data loss may occur in the destination database when you use DMS to perform online DDL operations.
|
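To check for the prefix index limit described above, the following query is a minimal sketch; mydb is a placeholder schema name. It relies on the SUB_PART column of information_schema.STATISTICS, which is non-NULL only when an index covers a prefix of a column rather than the full column.

```sql
-- Hypothetical check: list prefix (partial-length) indexes in the schema
-- to be synchronized. Replace 'mydb' with your schema name.
SELECT table_schema, table_name, index_name, column_name, sub_part
FROM information_schema.statistics
WHERE table_schema = 'mydb'
  AND sub_part IS NOT NULL;
```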
Usage notes | DTS executes the CREATE DATABASE IF NOT EXISTS `test` statement in the source database as scheduled to move forward the binary log file position. |