Limits on the source database
- The tables to synchronize must have PRIMARY KEY or UNIQUE constraints, and the values
of the constrained fields must be unique. Otherwise, the destination database may contain duplicate data records.
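One way to guard against this limit is a pre-flight check that flags tables lacking a PRIMARY KEY or UNIQUE constraint before they are added to a task. The sketch below is illustrative only: the metadata shape (table name mapped to a list of constraint types) is an assumption, and in practice you would populate it from `INFORMATION_SCHEMA.TABLE_CONSTRAINTS` on the source database.

```python
# Hypothetical pre-flight check: flag tables that have neither a PRIMARY KEY
# nor a UNIQUE constraint. The metadata dict is assumed to be built from a
# query against INFORMATION_SCHEMA.TABLE_CONSTRAINTS; this is not a DTS API.

def tables_without_unique_constraint(constraints: dict) -> list:
    """Return tables that lack both a PRIMARY KEY and a UNIQUE constraint."""
    return [
        table
        for table, kinds in constraints.items()
        if not any(k in ("PRIMARY KEY", "UNIQUE") for k in kinds)
    ]

if __name__ == "__main__":
    meta = {
        "dbo.orders": ["PRIMARY KEY"],
        "dbo.audit_log": [],                 # no constraint: risk of duplicates
        "dbo.customers": ["UNIQUE", "CHECK"],
    }
    print(tables_without_unique_constraint(meta))  # ['dbo.audit_log']
```

Any table returned by this check should be given a suitable constraint, or excluded from the task, before synchronization starts.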
- If you select tables as the objects to synchronize and you want to edit tables (such
as renaming tables or columns), up to 1,000 tables can be synchronized in a single
data synchronization task. If you run a task to synchronize more than 1,000 tables,
a request error occurs. In this case, we recommend that you split the tables, configure
multiple tasks to synchronize the tables, or configure a task to synchronize the entire database.
- A single data synchronization task can synchronize up to 10 databases. If you want
to synchronize more than 10 databases, we recommend that you split the tables to synchronize
or configure multiple tasks to synchronize the databases. Otherwise, the performance
and stability of your data synchronization task may be compromised.
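The splitting recommended above can be sketched as a simple batching step. The limits below (1,000 tables per task when table editing is enabled, 10 databases per task) come from this document; the helper itself is illustrative and not part of any DTS API.

```python
# Minimal sketch of splitting a large object list into task-sized batches
# that respect the documented limits. Illustrative only, not a DTS API.

MAX_TABLES_PER_TASK = 1000     # limit when table editing is enabled
MAX_DATABASES_PER_TASK = 10    # limit on databases per task

def split_into_tasks(items: list, limit: int) -> list:
    """Chunk a list of objects into batches of at most `limit` items."""
    return [items[i:i + limit] for i in range(0, len(items), limit)]

tables = [f"dbo.t{i}" for i in range(2500)]
batches = split_into_tasks(tables, MAX_TABLES_PER_TASK)
print([len(b) for b in batches])  # [1000, 1000, 500]
```

Each batch would then be configured as its own synchronization task.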
- The following requirements for data logs must be met:
- The data logging feature must be enabled. The backup mode must be set to Full, and
full logical backup must be performed.
- If you perform only incremental data synchronization, the data logs of the source
database must be stored for more than 24 hours. If you perform both full data synchronization
and incremental data synchronization, the data logs of the source database must be
stored for at least seven days. After the full data synchronization is complete, you
can set the retention period to more than 24 hours. Otherwise, DTS may fail to obtain
the data logs and the task may fail. In exceptional circumstances, data inconsistency
or loss may occur. Make sure that you set the retention period of data logs in accordance
with the preceding requirements. Otherwise, the Service Level Agreement (SLA) of DTS
does not ensure service reliability and performance.
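The retention requirements above can be expressed as a small check: more than 24 hours for incremental-only tasks, at least seven days when full data synchronization is also performed. The function below is a hypothetical monitoring sketch, not part of DTS; only the thresholds come from this document.

```python
# Illustrative check of the data-log retention requirements: more than
# 24 hours for incremental-only tasks, at least seven days when full data
# synchronization is also performed. The function name is hypothetical.

def retention_is_sufficient(retention_hours: float, full_sync: bool) -> bool:
    """Return True if the configured log retention meets the DTS requirement."""
    if full_sync:
        return retention_hours >= 7 * 24   # at least seven days
    return retention_hours > 24            # more than 24 hours

print(retention_is_sufficient(48, full_sync=False))  # True
print(retention_is_sufficient(72, full_sync=True))   # False: under seven days
```

After full data synchronization completes, the retention period can be lowered again, as long as it stays above 24 hours.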
- Requirements for the objects to synchronize:
- DTS supports initial schema synchronization for the following types of objects: schema,
table, view, function, and procedure.
ApsaraDB RDS for SQL Server and AnalyticDB for PostgreSQL are heterogeneous databases.
The data types that they support do not have one-to-one correspondence. We recommend
that you evaluate the impact of data type conversion on your business. For more information,
see Data type mappings for schema synchronization.
- DTS does not synchronize the schemas of assemblies, service brokers, full-text indexes,
full-text catalogs, distributed schemas, distributed functions, CLR stored procedures,
CLR scalar-valued functions, CLR table-valued functions, internal tables, systems,
or aggregate functions.
- DTS does not synchronize data of the following types: TIMESTAMP, CURSOR, ROWVERSION,
HIERARCHYID, SQL_VARIANT, SPATIAL GEOMETRY, SPATIAL GEOGRAPHY, and TABLE.
- DTS does not synchronize tables that contain computed columns.
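Columns of the unsupported types listed above can be detected ahead of time with a filter like the following sketch. The column metadata shape is an assumption; in practice it would come from `sys.columns` or `INFORMATION_SCHEMA.COLUMNS`, where the spatial types the document calls SPATIAL GEOMETRY and SPATIAL GEOGRAPHY appear as `geometry` and `geography`.

```python
# Sketch of filtering out columns whose types DTS does not synchronize,
# based on the type list in this section. Illustrative only; the metadata
# shape (list of (column_name, declared_type) pairs) is an assumption.

UNSUPPORTED_TYPES = {
    "TIMESTAMP", "CURSOR", "ROWVERSION", "HIERARCHYID",
    "SQL_VARIANT", "GEOMETRY", "GEOGRAPHY", "TABLE",
}

def unsupported_columns(columns: list) -> list:
    """Return names of columns whose declared type is not synchronized."""
    return [name for name, sql_type in columns if sql_type.upper() in UNSUPPORTED_TYPES]

cols = [("id", "int"), ("rv", "rowversion"), ("shape", "geometry")]
print(unsupported_columns(cols))  # ['rv', 'shape']
```

A similar pre-flight pass could flag tables with computed columns, since those tables are skipped as well.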
- If you set SQL Server Incremental Synchronization Mode to Incremental Synchronization Based on Logs of Source Database in the Configure Objects and Advanced Settings step, the tables to synchronize must have clustered indexes that contain primary
key columns. In addition, the tables to synchronize cannot be compressed tables and
cannot contain computed columns. These limits do not apply in the mixed log-based parsing incremental synchronization mode.
- In the Incremental Synchronization Based on Logs of Source Database mode, DTS adds
a heartbeat table named dts_log_heart_beat to the source database to ensure that the latency of data synchronization is accurate.
In the mixed log-based parsing incremental synchronization mode, DTS creates a trigger
named dts_cdc_sync_ddl, a heartbeat table named dts_sync_progress, and a DDL storage table named dts_cdc_ddl_history in the source database, and enables Change Data Capture (CDC) for the database and some tables.
- Due to the limits of AnalyticDB for MySQL, if the disk space usage of the nodes in an AnalyticDB for MySQL cluster reaches 80%, the performance of data writing to the destination database
is compromised and the DTS task is delayed. If the usage reaches 90%, data cannot
be written to the destination database and error messages are returned. We recommend
that you estimate the required disk space based on the objects that you want to synchronize.
You must ensure that the destination cluster has sufficient storage space.
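The disk-usage thresholds above map naturally onto a simple monitoring check: at 80% node disk usage, write performance degrades; at 90%, writes fail. Only the thresholds come from this document; the function below is a hypothetical monitoring sketch, not part of any DTS or AnalyticDB API.

```python
# Illustrative mapping of the documented disk-usage thresholds to a status
# check: >= 80% degrades write performance, >= 90% blocks writes entirely.
# Hypothetical helper for monitoring; not a DTS or AnalyticDB API.

def disk_usage_status(usage_percent: float) -> str:
    """Classify node disk usage against the documented DTS thresholds."""
    if usage_percent >= 90:
        return "writes-blocked"   # data cannot be written; errors are returned
    if usage_percent >= 80:
        return "degraded"         # write performance is compromised; task delayed
    return "ok"

for pct in (50, 85, 95):
    print(pct, disk_usage_status(pct))  # 50 ok / 85 degraded / 95 writes-blocked
```

Sizing the destination cluster so that usage stays well under 80% avoids both states.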
- Before you synchronize data, evaluate the impact of data synchronization on the performance
of the source and destination databases. We recommend that you synchronize data during
off-peak hours. During full data synchronization, DTS uses read and write resources
of the source and destination databases. This may increase the loads of the database servers.
- During full data synchronization, concurrent INSERT operations cause fragmentation
in the tables of the destination database. Therefore, after the full data synchronization
is complete, the tablespace of the destination database is larger than that of the source database.
- We recommend that you do not use gh-ost or pt-online-schema-change to perform DDL
operations on source tables during data synchronization. Otherwise, data synchronization may fail.
- During data synchronization, we recommend that you use only DTS to write data to the
destination database. This prevents data inconsistency between the source and destination
databases. If you use tools other than DTS to write data to the destination database,
data loss may occur in the destination database when you use DMS to perform online DDL operations on the source tables.