Data Transmission Service: Precautions and limits for synchronizing data from a Db2 for LUW database

Last Updated: Jun 04, 2025

This topic describes the precautions and limits on synchronizing data from a Db2 for LUW database. To ensure that your data synchronization task runs as expected, read the precautions and limits before you configure the task.

Synchronize data from a Db2 for LUW database to a PolarDB-X V2.0 instance

Note

By default, Data Transmission Service (DTS) disables FOREIGN KEY constraints for the destination database in a data synchronization task. Therefore, specific operations such as the cascade and delete operations on the source database are not synchronized to the destination database.

Limits on the source database

  • Bandwidth requirements: The server on which the source database is deployed must have sufficient outbound bandwidth. Otherwise, the data synchronization speed decreases.

  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

  • If you select tables as the objects to be synchronized and you want to modify the tables in the destination database, such as renaming tables or columns, you can synchronize up to 5,000 tables in a single data synchronization task. If you run a task to synchronize more than 5,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • The data logging feature must be enabled. Otherwise, error messages are returned during precheck, and the data synchronization task cannot be started.

    Note

    If you perform only incremental data synchronization, the data logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the data logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the data logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of data logs based on the preceding requirements. Otherwise, the service reliability or performance stated in the Service Level Agreement (SLA) of DTS cannot be guaranteed.

  • The change data capture (CDC) feature must be enabled for the tables to be synchronized (see the example after this list for how to check the data logging and CDC settings).

  • During schema synchronization and full data synchronization, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data synchronization task fails.
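
The data logging and CDC prerequisites above can be checked with SQL statements such as the following minimal sketch. The schema and table names (DTSUSER, ORDERS) are placeholders for illustration; enabling archive logging itself requires database configuration changes that are not shown here.

    -- Check whether archive logging is configured (LOGARCHMETH1/LOGARCHMETH2).
    SELECT NAME, VALUE
      FROM SYSIBMADM.DBCFG
     WHERE LOWER(NAME) LIKE 'logarch%';

    -- Check whether change data capture is enabled for the tables to be
    -- synchronized (DATACAPTURE = 'Y' means CDC is on for the table).
    SELECT TABSCHEMA, TABNAME, DATACAPTURE
      FROM SYSCAT.TABLES
     WHERE TABSCHEMA = 'DTSUSER';

    -- Enable change data capture for a table that still has DATACAPTURE = 'N'.
    ALTER TABLE DTSUSER.ORDERS DATA CAPTURE CHANGES;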

Other limits

  • DTS synchronizes incremental data from a Db2 for LUW database to the destination database based on the CDC replication technology of Db2 for LUW. However, the CDC replication technology has its own limits. For more information, see General data restrictions for SQL Replication.

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During initial full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. Therefore, after initial full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. After data synchronization is complete, you can use Data Management (DMS) to execute DDL statements online. For more information, see Perform lock-free DDL operations.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

Because the source Db2 for LUW database is a self-managed database, take note of the following items when you synchronize data from it:

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data record in the destination database and the current timestamp in the source database. If no data manipulation language (DML) operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the synchronization latency is too high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table. The heartbeat table is updated or receives data every second, as shown in the sketch below.
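
A minimal sketch of such a heartbeat table in the source Db2 for LUW database follows. The schema, table name, and scheduling mechanism (an external job that runs once per second) are assumptions for illustration only.

    -- Hypothetical heartbeat table in the source database.
    CREATE TABLE DTSUSER.DTS_HEARTBEAT (
        ID          INTEGER   NOT NULL PRIMARY KEY,
        UPDATE_TIME TIMESTAMP NOT NULL
    );

    INSERT INTO DTSUSER.DTS_HEARTBEAT (ID, UPDATE_TIME)
    VALUES (1, CURRENT TIMESTAMP);

    -- Run this statement once per second from an external scheduler so that the
    -- source database always has a recent DML change for latency calculation.
    UPDATE DTSUSER.DTS_HEARTBEAT
       SET UPDATE_TIME = CURRENT TIMESTAMP
     WHERE ID = 1;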

Synchronize data from a Db2 for LUW database to a PolarDB for MySQL cluster

Note

By default, DTS disables FOREIGN KEY constraints for the destination database in a data synchronization task. Therefore, specific operations such as the cascade and delete operations on the source database are not synchronized to the destination database.

Limits on the source database

  • Bandwidth requirements: The server on which the source database is deployed must have sufficient outbound bandwidth. Otherwise, the data synchronization speed decreases.

  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

  • If you select tables as the objects to be synchronized and you want to modify the tables in the destination database, such as renaming tables or columns, you can synchronize up to 5,000 tables in a single data synchronization task. If you run a task to synchronize more than 5,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • The data logging feature must be enabled. Otherwise, error messages are returned during precheck, and the data synchronization task cannot be started.

    Note

    If you perform only incremental data synchronization, the data logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the data logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the data logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of data logs based on the preceding requirements. Otherwise, the service reliability or performance stated in the SLA of DTS cannot be guaranteed.

  • During schema synchronization and full data synchronization, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data synchronization task fails.

Other limits

  • DTS synchronizes incremental data from a Db2 for LUW database to the destination database based on the CDC replication technology of Db2 for LUW. However, the CDC replication technology has its own limits. For more information, see General data restrictions for SQL Replication.

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During initial full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. Therefore, after initial full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • If the data to be synchronized contains information that requires four bytes to store, such as rare characters or emojis, the destination databases and tables that receive the data must use the utf8mb4 character set.

    Note

    If you use the schema synchronization feature of DTS, set the character_set_server parameter of the destination database to utf8mb4.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. After data synchronization is complete, you can use DMS to execute DDL statements online. For more information, see Perform lock-free DDL operations.

  • If DDL statements fail to be executed in the destination database, the data synchronization task continues to run. You can view the DDL statements that fail to be executed in the task logs. For more information about how to view the task logs, see View task logs.

  • Column names in MySQL databases are not case-sensitive. Therefore, if multiple columns in the source database have names that differ only in capitalization, data in those columns is written to the same column in the destination MySQL database during synchronization. This can cause unexpected synchronization results.

  • After data synchronization is complete, that is, the status of the instance changes to Completed, we recommend that you run the ANALYZE TABLE <table name> statement (see the example after this list) to check whether data is written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to the memory. As a result, data loss occurs.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
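
The character set and post-synchronization checks described above can be performed with MySQL statements such as the following sketch. The database and table names (dtsdemo, orders) are placeholders.

    -- Check the server character set of the destination instance; it should be
    -- utf8mb4 if the synchronized data contains four-byte characters such as emojis.
    SHOW VARIABLES LIKE 'character_set_server';

    -- If you create the destination database yourself, specify utf8mb4 explicitly.
    CREATE DATABASE dtsdemo CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

    -- After the task status changes to Completed, analyze the destination table
    -- to check whether data is written to it.
    ANALYZE TABLE dtsdemo.orders;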

Special cases

Because the source Db2 for LUW database is a self-managed database, take note of the following items when you synchronize data from it:

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data record in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the synchronization latency is too high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table. The heartbeat table is updated or receives data every second.

Synchronize data from a Db2 for LUW database to an ApsaraDB RDS for MySQL instance

Note

By default, DTS disables FOREIGN KEY constraints for the destination database in a data synchronization task. Therefore, the cascade operation of the source database is not synchronized to the destination database.

Limits on the source database

  • Bandwidth requirements: The server on which the source database is deployed must have sufficient outbound bandwidth. Otherwise, the data synchronization speed decreases.

  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

  • If you select tables as the objects to be synchronized and you want to modify the tables in the destination database, such as renaming tables or columns, you can synchronize up to 5,000 tables in a single data synchronization task. If you run a task to synchronize more than 5,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • The data logging feature must be enabled. Otherwise, error messages are returned during precheck, and the data synchronization task cannot be started.

    Note

    If you perform only incremental data synchronization, the data logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the data logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the data logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of data logs based on the preceding requirements. Otherwise, the service reliability or performance stated in the SLA of DTS cannot be guaranteed.

  • During schema synchronization and full data synchronization, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data synchronization task fails.

Other limits

  • DTS synchronizes incremental data from a Db2 for LUW database to the destination database based on the CDC replication technology of Db2 for LUW. However, the CDC replication technology has its own limits. For more information, see General data restrictions for SQL Replication.

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During initial full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. Therefore, after initial full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • If the data to be synchronized contains information that requires four bytes to store, such as rare characters or emojis, the destination databases and tables that receive the data must use the utf8mb4 character set.

    Note

    If you use the schema synchronization feature of DTS, set the character_set_server parameter of the destination database to utf8mb4.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. After data synchronization is complete, you can use DMS to execute DDL statements online. For more information, see Perform lock-free DDL operations.

  • If DDL statements fail to be executed in the destination database, the data synchronization task continues to run. You can view the DDL statements that fail to be executed in the task logs. For more information about how to view the task logs, see View task logs.

  • Column names in MySQL databases are not case-sensitive. Therefore, if multiple columns in the source database have names that differ only in capitalization, data in those columns is written to the same column in the destination MySQL database during synchronization. This can cause unexpected synchronization results.

  • After data synchronization is complete, that is, the status of the instance changes to Completed, we recommend that you run the ANALYZE TABLE <table name> statement to check whether data is written to the destination table. For example, if a high-availability (HA) switchover is triggered in the destination MySQL database, data may be written only to the memory. As a result, data loss occurs.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

Because the source Db2 for LUW database is a self-managed database, take note of the following items when you synchronize data from it:

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data record in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the synchronization latency is too high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table. The heartbeat table is updated or receives data every second.

Synchronize data from a Db2 for LUW database to an AnalyticDB for PostgreSQL instance

Note

By default, DTS disables FOREIGN KEY constraints for the destination database in a data synchronization task. Therefore, specific operations such as the cascade and delete operations on the source database are not synchronized to the destination database.

Limits on the source database

  • Bandwidth requirements: The server on which the source database is deployed must have sufficient outbound bandwidth. Otherwise, the data synchronization speed decreases.

  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

  • If you select tables as the objects to be synchronized and you want to modify the tables in the destination database, such as renaming tables or columns, you can synchronize up to 5,000 tables in a single data synchronization task. If you run a task to synchronize more than 5,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • The data logging feature must be enabled. Otherwise, error messages are returned during precheck, and the data synchronization task cannot be started.

    Note

    If you perform only incremental data synchronization, the data logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the data logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the data logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of data logs based on the preceding requirements. Otherwise, the service reliability or performance stated in the SLA of DTS cannot be guaranteed.

  • During schema synchronization and full data synchronization, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data synchronization task fails.

Other limits

  • DTS synchronizes incremental data from a Db2 for LUW database to the destination database based on the CDC replication technology of Db2 for LUW. However, the CDC replication technology has its own limits. For more information, see General data restrictions for SQL Replication.

  • If the source table to be synchronized has a primary key, the primary key column of the destination table is the same as that of the source table. If the source table to be synchronized does not have a primary key, the primary key column of the destination table is the same as the distribution key of the destination table.

  • A unique key (including the primary key) of the destination table must contain all columns of the distribution key (see the example after this list).

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During initial full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. Therefore, after initial full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases. After data synchronization is complete, you can use DMS to execute DDL statements online. For more information, see Perform lock-free DDL operations.

  • During schema synchronization and incremental data synchronization, the foreign keys of the source database are not synchronized to the destination database.

  • You can select only tables as the objects to be synchronized. The tables cannot be append-optimized (AO) tables.

  • If you use column mapping to synchronize only specific columns of a table, or if the schemas of the source and destination tables are inconsistent, the data in the source columns that do not exist in the destination database is lost.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.
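
The following sketch illustrates the distribution key requirement for a destination table in AnalyticDB for PostgreSQL. The table and column names are placeholders; the key point is that the primary key (a unique key) contains every column of the distribution key.

    -- The primary key covers the distribution key column order_id, which satisfies
    -- the requirement that a unique key contain all columns of the distribution key.
    CREATE TABLE public.orders (
        order_id    BIGINT NOT NULL,
        customer_id BIGINT,
        amount      NUMERIC(12,2),
        PRIMARY KEY (order_id)
    ) DISTRIBUTED BY (order_id);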

Special cases

Because the source Db2 for LUW database is a self-managed database, take note of the following items when you synchronize data from it:

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data record in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the synchronization latency is too high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table. The heartbeat table is updated or receives data every second.

Synchronize data from a Db2 for LUW database to an ApsaraMQ for Kafka instance

Note

By default, DTS disables FOREIGN KEY constraints for the destination database in a data synchronization task. Therefore, specific operations such as the cascade and delete operations on the source database are not synchronized to the destination database.

Limits on the source database

  • Bandwidth requirements: The server on which the source database is deployed must have sufficient outbound bandwidth. Otherwise, the data synchronization speed decreases.

  • The tables to be synchronized must have PRIMARY KEY or UNIQUE constraints, and all fields must be unique. Otherwise, the destination database may contain duplicate data records.

  • If you select tables as the objects to be synchronized and you want to modify the tables in the destination database, such as renaming tables or columns, you can synchronize up to 5,000 tables in a single data synchronization task. If you run a task to synchronize more than 5,000 tables, a request error occurs. In this case, we recommend that you configure multiple tasks to synchronize the tables in batches or configure a task to synchronize the entire database.

  • The data logging feature must be enabled. Otherwise, error messages are returned during precheck, and the data synchronization task cannot be started.

    Note

    If you perform only incremental data synchronization, the data logs of the source database must be stored for more than 24 hours. If you perform both full data synchronization and incremental data synchronization, the data logs of the source database must be stored for at least seven days. Otherwise, DTS may fail to obtain the data logs and the task may fail. In exceptional circumstances, data inconsistency or loss may occur. After full data synchronization is complete, you can set the retention period to more than 24 hours. Make sure that you set the retention period of data logs based on the preceding requirements. Otherwise, the service reliability or performance stated in the SLA of DTS cannot be guaranteed.

  • During schema synchronization and full data synchronization, do not execute DDL statements to change the schemas of databases or tables. Otherwise, the data synchronization task fails.

Other limits

  • The synchronization of indexes, partitions, views, stored procedures, functions, triggers, and foreign keys is not supported.

  • DTS synchronizes incremental data from a Db2 for LUW database to the destination database based on the CDC replication technology of Db2 for LUW. However, the CDC replication technology has its own limits. For more information, see General data restrictions for SQL Replication.

  • Before you synchronize data, evaluate the impact of data synchronization on the performance of the source and destination databases. We recommend that you synchronize data during off-peak hours. During initial full data synchronization, DTS uses the read and write resources of the source and destination databases. This may increase the loads on the database servers.

  • During initial full data synchronization, concurrent INSERT operations cause fragmentation in the tables of the destination database. Therefore, after initial full data synchronization is complete, the size of the used tablespace of the destination database is larger than that of the source database.

  • During data synchronization, we recommend that you use only DTS to write data to the destination database. This prevents data inconsistency between the source and destination databases.

  • If a DTS task fails to run, DTS technical support will try to restore the task within 8 hours. During the restoration, the task may be restarted, and the parameters of the task may be modified.

    Note

    Only the parameters of the task may be modified. The parameters of databases are not modified. The parameters that may be modified include but are not limited to the parameters in the "Modify instance parameters" section of the Modify the parameters of a DTS instance topic.

Special cases

  • If you perform a primary/secondary switchover on the source database when the data synchronization task is running, the task fails.

  • DTS calculates synchronization latency based on the timestamp of the latest synchronized data record in the destination database and the current timestamp in the source database. If no DML operation is performed on the source database for a long time, the synchronization latency may be inaccurate. If the synchronization latency is too high, you can perform a DML operation on the source database to update the latency.

    Note

    If you select an entire database as the object to be synchronized, you can create a heartbeat table. The heartbeat table is updated or receives data every second.

  • During data synchronization, if the destination ApsaraMQ for Kafka instance is scaled, you must restart the instance.