Limits on the source database
- The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all
fields must be unique. Otherwise, the destination database may contain duplicate data.
- If you select tables as the objects to be synchronized and you need to edit the tables
(such as renaming tables or columns), up to 1,000 tables can be synchronized in a single
data synchronization task. If you run a task to synchronize more than 1,000 tables,
a request error occurs. In this case, we recommend that you split the tables to be
synchronized, configure multiple tasks to synchronize the tables, or call DTS API
operations to configure tasks.
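The splitting recommended above can be sketched as a simple batching step before tasks are created. This helper is an illustration only, not part of DTS; how each batch is turned into a task (console or DTS API) is up to you:

```python
# Split a large list of tables into batches of at most 1,000, the
# per-task limit described above, so that each batch can be configured
# as its own data synchronization task.

def split_into_tasks(tables, limit=1000):
    """Yield lists of at most `limit` tables, one list per task."""
    for start in range(0, len(tables), limit):
        yield tables[start:start + limit]
```

For example, 2,500 tables would yield three batches (1,000 + 1,000 + 500), i.e. three synchronization tasks.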
- The following requirements for binary logs must be met:
- The binary logging feature must be enabled. The value of the binlog_format parameter
must be set to row. The value of the binlog_row_image parameter must be set to full.
Otherwise, error messages are returned during precheck and the data synchronization
task cannot be started.
- Binary logs must be retained for at least 7 days during full data synchronization.
After full data synchronization is complete, you can clear the binary logs that were
generated in the source database while the DTS task was running.
Note To ensure data security, DTS stores only 50 GB of binary logs or the binary logs for
the last 24 hours. If the limit is exceeded, DTS automatically clears the cached logs.
Warning If you clear the binary logs of the source database during full data synchronization,
the data synchronization task may fail. For example, if full data synchronization takes
more than 24 hours because of a large data volume in the source database or abnormal
writes to the destination database, and the binary logs of the source database are
cleared during that time, DTS cannot obtain the binary logs generated more than 24 hours
earlier. Therefore, the data synchronization task may fail.
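The binlog requirements above can be prechecked before you configure a task. The sketch below validates a dictionary of MySQL system variables (for example, fetched with SHOW VARIABLES); the variable names are standard MySQL ones, but note that MySQL 5.7 and earlier use expire_logs_days instead of binlog_expire_logs_seconds:

```python
# Sketch: validate the binlog prerequisites described above against a
# dict of MySQL system variables. How you fetch the variables (e.g.
# SHOW VARIABLES via your client library) is up to you.

def check_binlog_prerequisites(variables: dict) -> list:
    """Return a list of problems; an empty list means the precheck should pass."""
    problems = []
    if variables.get("log_bin", "OFF").upper() != "ON":
        problems.append("binary logging is disabled (log_bin != ON)")
    if variables.get("binlog_format", "").lower() != "row":
        problems.append("binlog_format must be set to row")
    if variables.get("binlog_row_image", "").lower() != "full":
        problems.append("binlog_row_image must be set to full")
    # Binary logs must be retained for at least 7 days during full sync
    # (MySQL 8.0 variable; 5.7 uses expire_logs_days instead).
    retention = int(variables.get("binlog_expire_logs_seconds", 0))
    if retention < 7 * 24 * 3600:
        problems.append("binlog retention is shorter than 7 days")
    return problems
```

A compliant source returns an empty list; each returned string names one setting to fix before starting the task.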
- Schema synchronization is not supported in this scenario. Before you configure a data
synchronization task, you must create databases and tables in the destination instance.
- Requirements for the objects to be synchronized:
- You cannot synchronize the following types of data: BIT, VARBIT, GEOMETRY, ARRAY,
UUID, TSQUERY, TSVECTOR, and TXID_SNAPSHOT.
- Prefix indexes cannot be synchronized. If the source database contains prefix indexes,
data may fail to be synchronized.
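Before configuring a task, you can screen table definitions for the unsupported data types listed above. This is a sketch; the column metadata is passed in as (name, type) pairs, which you might obtain from information_schema.columns:

```python
# Screen a table definition for the data types that DTS cannot
# synchronize, as listed above.

UNSUPPORTED_TYPES = {
    "bit", "varbit", "geometry", "array", "uuid",
    "tsquery", "tsvector", "txid_snapshot",
}

def unsupported_columns(columns):
    """Return the (name, type) pairs whose data type cannot be synchronized."""
    return [(name, dtype) for name, dtype in columns
            if dtype.lower() in UNSUPPORTED_TYPES]
```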
- Before you synchronize data, evaluate the impact of data synchronization on the performance
of the source and destination databases. We recommend that you synchronize data during
off-peak hours. During full data synchronization, DTS uses the read and write resources
of the source and destination databases, which may increase the loads of the database
servers.
- During full data synchronization, concurrent INSERT operations cause fragmentation
in the tables of the destination database. After full data synchronization is complete,
the tablespace of the destination database is larger than that of the source database.
- We recommend that you do not use gh-ost or pt-online-schema-change to perform DDL
operations on source tables during data synchronization. Otherwise, data synchronization
may fail.
- If you use only DTS to write data to the destination database, you can use Data Management
(DMS) to perform online DDL operations on source tables during data synchronization.
For more information, see Change schemas without locking tables.
Warning If you use tools other than DTS to write data to the destination database, we recommend
that you do not use DMS to perform online DDL operations. Otherwise, data loss may
occur in the destination database.
If the source database is a self-managed MySQL database, take note of the following limits:
- If you perform a primary/secondary switchover on the source database when the data
synchronization task is running, the task fails.
- DTS calculates synchronization latency based on the timestamp of the latest synchronized
data in the destination database and the current timestamp in the source database.
If no DML operation is performed on the source database for a long time, the synchronization
latency may be inaccurate. If the latency of the synchronization task is too high,
you can perform a DML operation on the source database to update the latency.
Note If you select an entire database as the object to be synchronized, you can create
a heartbeat table. The heartbeat table is updated or receives data every second.
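The heartbeat pattern from the note can be sketched as a small writer that issues one statement per second. The table name dts_heartbeat and the SQL below are assumptions for illustration; the executor and sleep function are injectable so the loop can be driven by any client library:

```python
import time

# Sketch of a heartbeat writer: update a dedicated table every second so
# that the synchronization latency reported by DTS stays meaningful even
# when the business tables are idle. Table name and SQL are assumptions.

HEARTBEAT_SQL = (
    "INSERT INTO dts_heartbeat (id, ts) VALUES (1, NOW()) "
    "ON DUPLICATE KEY UPDATE ts = NOW()"
)

def run_heartbeat(execute, beats, sleep=time.sleep):
    """Issue the heartbeat statement `beats` times, once per second.

    `execute` is any callable that runs a SQL statement
    (for example, cursor.execute); `sleep` is injectable for testing.
    """
    for _ in range(beats):
        execute(HEARTBEAT_SQL)
        sleep(1)
```

In production you would run this loop (or an equivalent scheduled job) for the lifetime of the synchronization task, passing a real database cursor as the executor.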