Data Transmission Service: FAQ

Last Updated: Dec 27, 2025

If you receive an error message when using Data Transmission Service (DTS), see Common errors to find a solution. If you do not receive a specific error message, find a solution based on the problem category.

Problem categories

FAQs are classified into the following types:

Billing issues

How is DTS billed?

DTS supports subscription and pay-as-you-go billing methods. For more information about billing methods, see Billing overview.

How do I view DTS bills?

For more information about how to view DTS bills, see View bills.

Am I still charged after an instance is paused?

  • You are not charged when a data migration instance is paused.

  • You are still charged for a data synchronization instance during the period in which the instance is paused, regardless of whether the source or destination database can be connected. This is because the paused instance still consumes resources such as CPU and memory. After the instance is paused, DTS stops writing data to the destination database but continues trying to read logs from the source database. This helps resume the data synchronization instance immediately after the instance is restarted.

Why is the price of data synchronization higher than that of data migration?

Data synchronization has more advanced features. For example, you can modify the synchronization objects online and configure two-way data synchronization between MySQL databases. Data synchronization also ensures lower network latency because it transmits data over an internal network.

What is the impact of an overdue payment?

For more information about the impact of an overdue payment, see Expiration and overdue payments.

How do I release a subscription task before it expires?

First, switch the billing method of the subscription task to pay-as-you-go, and then unsubscribe from it. For more information about how to switch the billing method, see Switch between billing methods.

Can I switch a subscription task to pay-as-you-go?

Yes. For more information, see Switch between billing methods.

Can I switch a pay-as-you-go task to subscription?

Yes, for data synchronization or change tracking tasks. For more information, see Switch between billing methods.

Note

Data migration tasks support only the pay-as-you-go billing method.

Why did my DTS task suddenly start incurring charges?

This may be because the free trial period for the instance has expired. DTS offers a promotional free period for tasks where the destination database is an Alibaba Cloud-developed database engine. Charges begin after the free period ends.

Why am I still being charged for a task that has been released?

Bills for pay-as-you-go DTS tasks are pushed daily. Because you used DTS on the day you released the task, you will still be charged for that day's usage.

How does pay-as-you-go billing work?

You are charged for a pay-as-you-go DTS instance only when an incremental task is running. This includes the period when an Incremental Synchronization task is paused, but excludes the period when an Incremental Migration task is paused. For more information, see Billing methods.

Does DTS charge for traffic?

Some DTS tasks incur data transfer costs and data traffic fees, regardless of the regions of the source and destination databases. Migration tasks incur data transfer costs if the Access Method for the destination database is Public IP Address. Full verification tasks that use the Full field validation by row sampling mode incur data traffic fees based on the volume of verified data. For more information, see Billing Items.

Performance and specification issues

What are the differences between instance types?

For more information about the differences between instance types, see Data migration link specifications and Data synchronization link specifications.

Can I upgrade an instance type?

Yes. For more information, see Upgrade the link specification of an instance.

Can I downgrade an instance type?

Currently, this operation is supported only for sync instances. For more information, see Downgrade instance link specifications.

Can I downgrade the instance type of a DTS task?

You can downgrade the link specification for eligible DTS tasks. For more information, see Downgrade the link specification of an instance.

Can I downgrade a DTS task to an instance type lower than medium?

No.

How long does it take to synchronize or migrate data?

The time required for a DTS task cannot be estimated. The transmission performance of DTS is affected by multiple factors, such as the internal workload of DTS, the load on the source and destination database instances, the amount of data to be transferred, whether the DTS instance has incremental tasks, and the network. If you have high performance requirements, select an instance type with a higher performance limit. For more information about specifications, see Data migration link specifications and Data synchronization link specifications.

How do I view the performance information of a data migration or data synchronization task?

For information about how to view performance, see View the status and performance of an incremental migration link or View the status and performance of a synchronization link.

Why can't I find a specific DTS instance in the console?

The following are possible reasons:

  • The selected resource group is incorrect. We recommend that you select All Resources.

  • The region for the instance is incorrect. Check whether the selected region is the one where the destination instance is located.

  • An incorrect task type is selected. Verify that the current task list page corresponds to the task type of the target instance. For example, a sync instance appears only in the Data Synchronization Tasks list.

  • The instance was released due to expiration or an overdue payment. After a DTS instance expires or has an overdue payment, the data transmission task stops. If the payment is not successfully made within seven days, the system releases and deletes the instance. For more information, see Expiration and overdue payments.

Precheck issues

Why is there an alert for the Redis eviction policy check?

If the data eviction policy (maxmemory-policy) of the destination instance is set to a value other than noeviction, data inconsistency may occur between the source and destination instances. For more information about data eviction policies, see Introduction to Redis Data Eviction Policies.

What should I do if a binary log-related precheck item fails during incremental data migration?

Check whether the binary logging of the source database is normal. For more information, see Source database binary log check.

Database connection issues

What should I do if the connection to the source database fails?

Check whether the source database information and settings are correct. For more information, see Source database connectivity check.

What should I do if the connection to the destination database fails?

Check whether the destination database information and settings are correct. For more information, see Destination database connectivity check.

How can I migrate and synchronize data if the source or destination instance is in a region not supported by DTS?

  • For a data migration task, you can apply for a public endpoint for a database instance, such as an RDS for MySQL instance, and connect to it using a Public IP Address. Then, add the CIDR blocks of DTS servers that reside in a region supported by DTS to the whitelist of the database instance. For more information, see Add DTS server IP addresses to a whitelist.

  • For a data synchronization task, DTS does not support these regions because data synchronization does not support connecting to database instances using a Public IP Address.

Data synchronization issues

Which database instances does DTS support for synchronization?

DTS supports data synchronization between various data sources, such as relational database management systems (RDBMSs), NoSQL databases, and online analytical processing (OLAP) databases. For supported database instances, see Synchronization solutions.

What is the difference between data migration and data synchronization?

The following comparison describes the differences between data migration and data synchronization.

Note

Self-managed database: A database instance for which the Access Method is not Alibaba Cloud Instance when you configure a DTS instance. A self-managed database can be a database instance from a third-party cloud, a database deployed on-premises, or a database deployed on an ECS instance.

Scenarios

  • Data migration: Mainly used for cloud migration, such as migrating local databases, self-managed databases on ECS, or third-party cloud databases to Alibaba Cloud databases.

  • Data synchronization: Mainly used for real-time data synchronization between two data sources. It is suitable for scenarios such as active geo-redundancy, data disaster recovery, cross-border data synchronization, query and report offloading, cloud BI, and real-time data warehouses.

Supported databases

  • Data migration: For more information, see Migration solutions.

    Note

    For some databases not supported by data synchronization, you can use data migration to achieve data synchronization. Examples include single-node MongoDB databases and OceanBase (MySQL mode) databases.

  • Data synchronization: For more information, see Synchronization solutions.

Supported database deployment locations (connection types)

  • Data migration:

    • Alibaba Cloud instance

    • Self-managed database with a public IP address

    • Self-managed database connected through Database Gateway

    • Self-managed database connected through Cloud Enterprise Network (CEN)

    • Self-managed database on an ECS instance

    • Self-managed database connected through a leased line, VPN Gateway, or Smart Access Gateway

  • Data synchronization:

    • Alibaba Cloud instance

    • Self-managed database connected through Database Gateway

    • Self-managed database connected through CEN

    • Self-managed database on an ECS instance

    • Self-managed database connected through a leased line, VPN Gateway, or Smart Access Gateway

    Note

    Data synchronization is based on internal network transmission, which ensures lower network latency.

Feature differences

  • Data migration:

    • Supports three levels of object name mapping: database, table, and column.

    • Supports filtering data to be migrated.

    • Supports selecting the type of SQL operations to migrate, such as migrating only INSERT operations.

    • Supports reading from VPCs under other Alibaba Cloud accounts. This feature allows migrating self-managed databases in VPCs across Alibaba Cloud accounts.

  • Data synchronization:

    • Supports three levels of object name mapping: database, table, and column.

    • Supports filtering data to be synchronized.

    • Supports modifying synchronization objects online.

    • Supports two-way synchronization between databases such as MySQL.

    • Supports selecting the type of SQL operations to synchronize, such as synchronizing only INSERT operations.

Billing method

  • Data migration: Only pay-as-you-go is supported.

  • Data synchronization: Supports pay-as-you-go and subscription.

Charges

  • Data migration: Fees are incurred only for migration instances that include incremental migration tasks.

  • Data synchronization: Fees are always incurred, because synchronization instances include incremental synchronization tasks by default.

Billing rule

  • Data migration: Billing occurs only when incremental data migration is running. It does not include periods when incremental data migration is paused. Schema migration and full data migration are not billed.

  • Data synchronization:

    • For pay-as-you-go, billing occurs only when incremental data synchronization is running. This includes periods when incremental data synchronization is paused. Schema synchronization and full data synchronization are not billed.

    • For subscription, a one-time fee is charged based on the selected configuration and purchase quantity at the time of purchase.

How does data synchronization work?

For information about how data synchronization works, see Service architecture and principles.

How is synchronization latency calculated?

Synchronization latency is the difference between the timestamp of the latest data synchronized to the destination database and the current timestamp of the source database. The unit is milliseconds.

Note

Under normal conditions, the latency is within 1,000 ms.
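The definition above can be expressed as a one-line calculation. The following Java sketch is only illustrative; the timestamp values are hypothetical, and both are assumed to be Unix timestamps in milliseconds.

public class SyncLatency {
    // Latency = current timestamp of the source database minus the timestamp
    // of the latest data synchronized to the destination database (both in ms).
    static long latencyMs(long sourceNowMs, long latestSyncedMs) {
        return sourceNowMs - latestSyncedMs;
    }

    public static void main(String[] args) {
        long sourceNowMs = 1_700_000_000_800L;    // hypothetical source clock reading
        long latestSyncedMs = 1_700_000_000_000L; // hypothetical timestamp of the latest synced record
        System.out.println(latencyMs(sourceNowMs, latestSyncedMs) + " ms"); // prints "800 ms", within the normal range
    }
}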

Can I modify the synchronization objects in a data synchronization task?

Yes. For information about how to modify synchronization objects, see Add a synchronization object and Remove a synchronization object.

Can I add a new table to a data synchronization task?

Yes. For information about how to add a new table, see Add a synchronization object.

How do I modify synchronization objects such as tables and fields in a running synchronization task?

After the full synchronization phase of a synchronization task is complete and it enters the incremental data synchronization phase, you can modify the synchronization objects. For information about how to modify synchronization objects, see Add a synchronization object and Remove a synchronization object.

If I pause a synchronization task and then restart it after some time, will it cause data inconsistency?

If the source database changes while the synchronization task is paused, data inconsistency between the source and destination databases may occur. After the synchronization task is restarted and the incremental data is synchronized to the destination database, the data in the destination database will be consistent with the source database.

If I delete data from the source database of an incremental synchronization task, will the synchronized data in the destination database be deleted?

If you do not select DELETE as one of the DML operations to be synchronized in an incremental synchronization task, data in the destination database is not deleted. Otherwise, the synchronized data in the destination database is also deleted.

During synchronization between Redis instances, will the data in the destination Redis instance be overwritten?

During the precheck phase, DTS checks whether the destination instance is empty and reports an error if it is not. If data with the same key exists in the destination instance, that data is overwritten during synchronization.

Does a synchronization task support filtering some fields or data?

Yes. You can use the mapping feature to filter columns that do not need to be synchronized. You can also specify a SQL WHERE condition to filter the data to be synchronized. For more information, see Synchronize or migrate partial columns and Filter task data using SQL conditions.

Can a synchronization task be converted into a migration task?

No, different types of tasks cannot be converted into each other.

Can I synchronize only data without synchronizing the structure?

Yes, you can synchronize only data without synchronizing schemas. To do so, do not select Schema Synchronization when you configure a sync task.

What are the possible reasons for data inconsistency between the source and destination of a data synchronization instance?

The possible reasons for data inconsistency are as follows:

  1. The destination data was not cleared when the task was configured, and the destination had historical data.

  2. Only the incremental synchronization module was selected when the task was configured, and the full synchronization module was not selected.

  3. Only the full synchronization module was selected when the task was configured, and the incremental synchronization module was not selected. The source data changed after the task ended.

  4. Data was written to the destination by sources other than the DTS task.

  5. There is a delay in incremental writes, and not all incremental data has been written to the destination yet.

Can I change the name of the source database in the destination database for a data synchronization task?

Yes. For information about how to change the name of the source database in the destination database, see Set the name of a synchronization object in the destination instance.

Is real-time synchronization of DML or DDL operations supported?

Yes. For data synchronization between relational databases, the supported DML operations are INSERT, UPDATE, and DELETE. The supported DDL operations are CREATE, DROP, ALTER, RENAME, and TRUNCATE.

Note

The supported DML or DDL operations vary by scenario. In Synchronization solutions, select a link that fits your business scenario and check the supported DML or DDL operations in the specific link configuration document.

Can a read-only instance be used as the source instance for a synchronization task?

A synchronization task includes incremental data synchronization by default. Therefore, there are two situations:

  • If the instance is a read-only instance that records transaction logs (such as RDS MySQL 5.7 or 8.0), it can be used as a source instance.

  • If the instance is a read-only instance that does not record transaction logs (such as RDS MySQL 5.6), it cannot be used as a source instance.

Does DTS support data synchronization for sharded databases and tables?

Yes. For example, you can synchronize sharded databases and tables from MySQL and PolarDB for MySQL to AnalyticDB for MySQL to merge multiple tables.

Why is the data volume in the destination instance smaller than in the source instance after a sync task?

If data is filtered during synchronization, or if the source instance has significant table fragmentation, the data volume in the destination instance may be smaller than in the source instance after the synchronization task ends.

Do cross-account data synchronization tasks support two-way synchronization?

You can configure a two-way synchronization task across Alibaba Cloud accounts only between ApsaraDB RDS for MySQL instances, between PolarDB for MySQL clusters, between Tair (Enterprise Edition) instances, between ApsaraDB for MongoDB replica set instances, or between ApsaraDB for MongoDB sharded cluster instances.

Does reverse synchronization in a two-way synchronization instance support DDL synchronization?

No. Only forward synchronization tasks (from source to destination) support DDL synchronization. Reverse synchronization tasks (from destination to source) do not support DDL synchronization and automatically filter DDL operations.

Note

To synchronize DDL operations for the current reverse synchronization task, you can, if your business permits, reverse the direction of the two-way synchronization instance.

Do I need to configure the reverse synchronization task manually?

Yes. Wait for the forward synchronization task to complete initial synchronization (until its status is Running). Then, locate the reverse synchronization task and click Configure Task to configure it.

Note

Configure the reverse synchronization task after the forward synchronization task has zero latency (0 ms).

Does DTS support cross-border two-way synchronization tasks?

No.

Why are records not added to the other database when they are added to one database in a two-way synchronization task?

This may happen if you have not configured a reverse task.

Why does the incremental synchronization progress never reach 100%?

DTS incremental synchronization continuously synchronizes changes from the source to the destination in real time and does not end on its own. This means there is no 100% completion state. If you no longer need real-time synchronization, end the task in the DTS console.

Why can't an incremental synchronization task synchronize data?

If you configure only an incremental synchronization task for a DTS instance, DTS synchronizes only the incremental data generated after the task starts. Data that exists before the task starts is not synchronized to the destination database. We recommend that you select Incremental Data Synchronization, Schema Synchronization, and Full Data Synchronization when you configure the task. This ensures data consistency.

Does synchronizing full data from an RDS database affect the performance of the source RDS?

It will affect the query performance of the source database. There are three ways to reduce the impact of a DTS task on the source database:

  1. Upgrade the specifications of the source database instance.

  2. Pause the DTS task first, and restart it after the load on the source database decreases.

  3. Reduce the rate of the DTS task. For information about how to adjust the rate, see Adjust the full migration rate.

Why is latency not displayed for a synchronization instance with a PolarDB-X 1.0 source?

An instance with a PolarDB-X 1.0 source database is a distributed task, and the metrics that DTS monitors exist only in subtasks. Therefore, no latency information is displayed for the instance. You can click the instance ID to view the latency information on the Task Management page's Subtask Details tab.

Why does a multi-table merge task report the error DTS-071001?

This may be because online DDL operations were performed on the source database while the multi-table merge task was running. This modified the source database's table structure, but the corresponding changes were not manually made in the destination database.

What should I do if adding a whitelist fails when configuring a task in the old console?

Use the new console to configure the task.

What should I do if a data synchronization task fails because a DDL operation was performed on the source database?

Based on the DDL operation performed on the source database, manually execute the DDL on the destination and restart the task. During data synchronization, do not use tools like pt-online-schema-change to perform online DDL operations on the synchronization objects of the source database. Otherwise, the synchronization will fail. If no data other than from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations. Alternatively, you can modify the synchronization objects to remove the tables affected by the DDL. For the removal operation, see Remove a synchronization object.

What should I do if a data synchronization task fails because a DDL operation was performed on the destination database?

If a database or table in the destination database is deleted during DTS incremental synchronization, causing the task to become abnormal, you can use one of the following two solutions to restore the task:

  • Method 1: Reconfigure the task. Do not select the database or table that caused the task to fail as a synchronization object.

  • Method 2: Modify the synchronization objects to remove the database or table that caused the task to fail. For specific operations, see Remove a synchronization object.

Can a synchronization task be restored after it is released? Can reconfiguring the task ensure data consistency?

A sync task cannot be recovered after it is released. When you reconfigure the task, if you do not select Full Synchronization, new data added after the task is released and before the new task starts cannot be synchronized to the destination database. In this case, data consistency cannot be ensured. If your business requires high data accuracy, you can delete the data from the destination database and then reconfigure the sync task. In the Task Step, select both Schema Synchronization and Full Synchronization. By default, Incremental Synchronization is selected.

What should I do if a DTS full synchronization task shows no progress for a long time?

If a table to be synchronized has no primary key, full synchronization is very slow. Add a primary key to the table in the source database before you start the synchronization.

When synchronizing data between tables with the same name, is it possible to transfer data from the source table only if it does not exist in the destination table?

Yes. When you configure a task, you can set the Processing Mode Of Conflicting Tables parameter to Ignore Errors And Proceed. If the table schemas are consistent, during a full synchronization, a record from the source database is not synchronized to the destination database if a record with the same primary key value already exists in the destination.
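For tables with consistent schemas, this mode amounts to "write a source record only when its primary key is absent from the destination." The following Java sketch models that semantic with an in-memory map; the records and key values are hypothetical.

import java.util.HashMap;
import java.util.Map;

public class IgnoreConflictsSketch {
    public static void main(String[] args) {
        Map<Integer, String> destination = new HashMap<>();
        destination.put(1, "existing destination row"); // record with primary key 1 already exists

        Map<Integer, String> source = Map.of(1, "source row 1", 2, "source row 2");
        // A source record is skipped when a record with the same primary key
        // value already exists in the destination.
        source.forEach(destination::putIfAbsent);

        System.out.println(destination); // {1=existing destination row, 2=source row 2}
    }
}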

How do I configure a cross-account synchronization task?

First, understand the scenarios for cross-account tasks. Then, use the Alibaba Cloud account of the database instance to configure RAM authorization for cross-Alibaba Cloud account tasks. Finally, configure a cross-Alibaba Cloud account task.

What should I do if I cannot select a DMS LogicDB instance?

Make sure that the region of the instance is correctly selected. If you still cannot select the instance, only one instance may exist; in this case, continue to configure the other parameters.

For a synchronization task with a SQL Server source, are functions supported for synchronization?

No. If the granularity of the selected synchronization object is a table, other objects such as views, triggers, and stored procedures will also not be synchronized to the destination database.

What should I do if a data synchronization task reports an error?

You can look up the solution in Common errors based on the error message.

How do I enable hot spot merging for a synchronization task?

You can set the trans.hot.merge.enable parameter to true. For more information, see Modify parameter values.

How do I perform synchronization when the source database has triggers?

When the synchronization object is an entire database and a trigger in that database updates a table within it, data inconsistency between the source and destination databases may occur. For synchronization operations, see How to configure a synchronization or migration job when the source database has triggers.

Does DTS support synchronizing the sys database and system databases?

No.

Does DTS support synchronizing the admin and local databases of MongoDB?

No, DTS does not support using MongoDB's admin and local databases as source or destination databases.

When can the reverse task of a two-way synchronization task be configured?

The reverse task of a two-way synchronization task can be configured only after the forward incremental task has no latency.

When PolarDB-X 1.0 is the source, can the source PolarDB-X 1.0 in a synchronization task be scaled?

No. If the source PolarDB-X 1.0 instance is scaled, you must reconfigure the task.

Can DTS guarantee the uniqueness of data synchronized to Kafka?

No. Because data is appended to Kafka, duplicate data may occur when a DTS task is restarted or when source logs are pulled repeatedly. However, DTS ensures idempotence: data is delivered in order, and the latest value of any duplicated record is placed at the end.
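Because duplicates are possible but records are delivered in order with the latest value last, a downstream consumer stays correct by applying records in order as keyed upserts. The following Java sketch illustrates this; the records, keys, and values are hypothetical.

import java.util.LinkedHashMap;
import java.util.Map;

public class LatestValueApply {
    public static void main(String[] args) {
        // Simulated delivery: a duplicate of pk1 appears, and the latest value arrives last.
        String[][] records = {
            {"pk1", "v1"}, {"pk2", "v1"}, {"pk1", "v1"}, {"pk1", "v2"}
        };
        Map<String, String> state = new LinkedHashMap<>();
        for (String[] r : records) {
            state.put(r[0], r[1]); // idempotent upsert keyed by primary key
        }
        System.out.println(state); // {pk1=v2, pk2=v1}: duplicates collapse to the latest value
    }
}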

Does DTS support data synchronization from RDS for MySQL to AnalyticDB for MySQL?

Yes. For the configuration method, see Synchronize RDS MySQL to AnalyticDB for MySQL 3.0.

Why don't synchronization tasks between Redis instances show full synchronization?

Data synchronization between Redis instances supports both full data synchronization and incremental data synchronization, which are merged and displayed as Incremental Synchronization.

Can full synchronization be skipped?

Yes. After skipping full synchronization, incremental synchronization will continue, but errors may occur. It is recommended not to skip full synchronization.

Does DTS support scheduled automatic synchronization?

DTS does not currently support scheduling the start of data synchronization tasks.

Will fragmented space in tables be synchronized during the process?

No.

What should I be aware of when synchronizing from MySQL 8.0 to MySQL 5.6?

You need to create the database in MySQL 5.6 before starting the synchronization. It is recommended to keep the source and destination database versions consistent, or to synchronize from a lower version to a higher version to ensure compatibility. If you synchronize from a higher version to a lower version, database compatibility issues may exist.

Can accounts from the source database be synchronized to the destination database?

Currently, only synchronization tasks between RDS MySQL instances support synchronizing accounts. Other synchronization tasks do not support this.

Can I configure a cross-account two-way synchronization task?

You can configure a two-way synchronization task across Alibaba Cloud accounts only between ApsaraDB RDS for MySQL instances, between PolarDB for MySQL clusters, between Tair (Enterprise Edition) instances, between ApsaraDB for MongoDB replica set instances, or between ApsaraDB for MongoDB sharded cluster instances.

Note

For tasks that do not have the Replicate Data Across Alibaba Cloud Accounts configuration item, you can use CEN to implement cross-account bidirectional sync tasks. For more information, see Access database resources across Alibaba Cloud accounts or regions.

How do I configure parameters when the destination is a Message Queue for Apache Kafka instance?

Configure the parameters as needed. For information on how to configure some special parameters, see Configure parameters for a Message Queue for Apache Kafka instance.

How do I handle the ERR invalid DB index error when using DTS for a Redis data synchronization or migration task?

Cause: The ERR invalid DB index error occurs when the destination database executes the SELECT DB operation. This is usually because the number of databases in the destination database is insufficient. If the destination database uses a proxy, check if the proxy can lift the limit on the number of databases.

Solution: Modify the databases configuration of the destination database. We recommend that you expand it to be consistent with the source database and then restart the DTS task. The command to query the databases configuration of the destination database is as follows:

CONFIG GET databases;

How do I handle the IDENTIFIER CLUSTERED error when synchronizing or migrating data from SQL Server to AnalyticDB for PostgreSQL?

Cause: When synchronizing or migrating data from SQL Server to AnalyticDB for PostgreSQL, the task cannot proceed because the CREATE CLUSTERED INDEX command is not supported.

Solution: After confirming that no other DDL statements need to be executed during the latency period, set the instance parameter sink.ignore.failed.ddl to true to filter out all DDL executions. After the incremental synchronization or migration offset advances, change sink.ignore.failed.ddl back to false.

In a Redis synchronization or migration task, does extending the time-to-live (TTL) of keys in the destination database have a practical effect?

To prevent keys from expiring during full data synchronization, you can extend the TTL of keys in the destination database for the task. For DTS tasks that include incremental synchronization or migration, if a key in the source database expires and is deleted, the corresponding key in the destination database is also deleted.

After extending the TTL of keys in the destination database, if a key in the source database expires, is the corresponding key in the destination database released immediately?

A key in the destination database is not necessarily released immediately after the source key expires. The destination key is released immediately only when the key in the source database expires and is actually cleaned up.

For example, suppose the TTL of a key in the source database is set to 5 seconds and the TTL of the corresponding key in the destination database is extended to 30 seconds. When the key in the source database expires after 5 seconds and is automatically cleaned up, a delete operation is appended to the AOF file. This operation is synchronized to and executed on the destination database, so the destination key is released at that moment.

Data migration issues

Will the data in the source database still exist after a data migration task is executed?

DTS data migration and synchronization copy data from the source database to the destination database and do not affect the source data.

Which database instances does DTS support for migration?

DTS supports data migration between various data sources, such as RDBMSs, NoSQL databases, and OLAP databases. For supported migration instances, see Migration solutions.

How does data migration work?

For information about how data migration works, see Service architecture and principles.

Can I modify the migration objects in a data migration task?

No.

Can I add a new table to a data migration task?

No.

How do I modify migration objects such as tables and fields in a running migration task?

Migration tasks do not support modifying migration objects.

If I pause a migration task and then restart it after some time, will it cause data inconsistency?

If the source database changes while the migration task is paused, data inconsistency between the source and destination databases may occur. After the migration task is restarted and the incremental data is migrated to the destination database, the data in the destination database will be consistent with the source database.

Can a migration task be converted into a synchronization task?

No, different types of tasks cannot be converted into each other.

Can I migrate only data without migrating the structure?

Yes, you can. To do so, do not select Schema Migration when you configure the migration task.

What are the possible reasons for data inconsistency between the source and destination of a data migration instance?

The possible reasons for data inconsistency are as follows:

  1. The destination data was not cleared when the task was configured, and the destination had historical data.

  2. Only the incremental migration module was selected when the task was configured, and the full migration module was not selected.

  3. Only the full migration module was selected when the task was configured, and the incremental migration module was not selected. The source data changed after the task ended.

  4. Data was written to the destination by sources other than the DTS task.

  5. There is a delay in incremental writes, and not all incremental data has been written to the destination yet.

Can I change the name of the source database in the destination database for a data migration task?

Yes. For information about how to change the name of the source database in the destination database, see Object name mapping.

Is data migration within the same instance supported?

Yes. For information about how to migrate data within the same instance, see Synchronize or migrate data between databases with different names.

Is real-time migration of DML or DDL operations supported?

Yes. For data between relational databases, the supported DML operations are INSERT, UPDATE, and DELETE. The supported DDL operations are CREATE, DROP, ALTER, RENAME, and TRUNCATE.

Note

The supported DML or DDL operations vary by scenario. In Migration solutions, select a link that fits your business scenario and check the supported DML or DDL operations in the specific link configuration document.

Can a read-only instance be used as the source for a migration task?

If the migration task does not require incremental data migration, a read-only instance can be used as the source instance. If the migration task requires incremental data migration, there are two situations:

  • If the instance is a read-only instance that records transaction logs (such as RDS MySQL 5.7 or 8.0), it can be used as a source instance.

  • If the instance is a read-only instance that does not record transaction logs (such as RDS MySQL 5.6), it cannot be used as a source instance.

Does DTS support data migration for sharded databases and tables?

Yes. For example, you can migrate sharded databases and tables from MySQL or PolarDB for MySQL to AnalyticDB for MySQL to merge multiple tables.

Does a migration task support filtering some fields or data?

Yes. You can use the mapping feature to filter columns that do not need to be migrated. You can also specify a SQL WHERE condition to filter the data to be migrated. For more information, see Synchronize or migrate partial columns and Filter data to be migrated.

Why is the data volume in the destination instance smaller than in the source instance after a migration task ends?

If data filtering was performed during the migration process, or if there is a lot of table fragmentation in the source instance, the data volume in the destination instance may be smaller than in the source instance after the migration is complete.

Why does the completed value in a migration task exceed the total?

The displayed total is an estimated value. After the migration task is completed, the total will be adjusted to the accurate value.

What is the purpose of the new increment_trx table in the destination database during data migration?

The increment_trx table added to the destination database during data migration is an offset table created by DTS incremental migration in the destination instance. It is mainly used to record the offset of the incremental migration and solve the issue of resuming from a breakpoint after an abnormal task restart. Do not delete it during the migration process, or the migration will fail.

Does a data migration task support resuming from a breakpoint during the full migration phase?

Yes. If you pause and then restart a task during the full migration phase, the task will continue migrating from where it left off, without needing to start over.

How do I migrate a non-Alibaba Cloud instance to Alibaba Cloud?

For information about how to migrate a non-Alibaba Cloud instance to Alibaba Cloud, see Migrate from a third-party cloud to Alibaba Cloud.

How do I migrate a local Oracle database to PolarDB?

For information about how to migrate a local Oracle database to PolarDB, see Migrate a self-managed Oracle database to PolarDB for PostgreSQL (Compatible with Oracle).

Can a data migration task that is not complete in the full migration phase be paused?

Yes.

How do I migrate partial data from RDS MySQL to a self-managed MySQL?

When you configure a data migration task, you can select the objects to be migrated in the Source Objects section or filter objects in the Selected Objects section as needed. The operations to migrate data between MySQL databases are similar. For more information, see Migrate data from a self-managed MySQL database to an ApsaraDB RDS for MySQL instance.

How do I migrate RDS instances under the same Alibaba Cloud account?

DTS supports migration and synchronization between RDS instances. For configuration methods, see the relevant configuration documents in Migration solutions.

After a migration task starts, the source database has an IOPS alert. How can I ensure the stability of the source database's business at this time?

If the source database instance has a high load while a DTS task is running, there are three ways to reduce the impact of the DTS task on the source database:

  1. Upgrade the specifications of the source database instance.

  2. Pause the DTS task first, and restart it after the load on the source database decreases.

  3. Reduce the rate of the DTS task. For information about how to adjust the rate, see Adjust the full migration rate.

Why can't a data migration task select a database named test?

DTS data migration does not support migrating system databases. Select a business-created database for migration.

Why is latency not displayed for a migration instance with a PolarDB-X 1.0 source?

An instance with a PolarDB-X 1.0 source database is a distributed task, and the metrics that DTS monitors exist only in subtasks. Therefore, no latency information is displayed for the instance. You can click the instance ID to view the latency information on the Task Management page's Subtask Details tab.

Why can't DTS migrate a MongoDB database?

This may be because the database to be migrated is `local` or `admin`. DTS does not support using MongoDB's `admin` and `local` databases as source or destination databases.

Why does a multi-table merge task report the error DTS-071001?

This may be because online DDL operations were performed on the source database while the multi-table merge task was running. This modified the source database's table structure, but the corresponding changes were not manually made in the destination database.

What should I do if adding a whitelist fails when configuring a task in the old console?

Use the new console to configure the task.

What should I do if a data migration task fails because a DDL operation was performed on the source database?

Based on the DDL content executed on the source database, manually execute the DDL on the destination and restart the task. During data migration, do not use tools like pt-online-schema-change to perform online DDL operations on the migration objects of the source database. Otherwise, the migration will fail. If no data other than from DTS is written to the destination database, you can use Data Management (DMS) to perform online DDL operations.

What should I do if a data migration task fails because a DDL operation was performed on the destination database?

If a database or table in the destination database is deleted during DTS incremental migration, causing the task to become abnormal, you can reconfigure the task. Do not select the database or table that caused the task to fail as a migration object.

Can a migration task be restored after it is released? Can reconfiguring the task ensure data consistency?

A migration task cannot be recovered after it is released. When you reconfigure the migration task, if you do not select Full Migration, the incremental data generated after the task is released and before the new task is started cannot be migrated to the destination database. In this case, data consistency cannot be ensured. If your business requires high data accuracy, you can delete the data from the destination database and reconfigure the migration task. When you reconfigure the task, select Schema Migration, Incremental Migration, and Full Migration for the Task Step parameter.

What should I do if a DTS full migration task shows no progress for a long time?

If a table to be migrated has no primary key, full migration is very slow. Add a primary key to the table in the source database before you start the migration.

When migrating data between tables with the same name, is it possible to transfer data from the source table only if it does not exist in the destination table?

Yes. When you configure a task, you can set Processing Mode Of Conflicting Tables to Ignore Errors And Proceed. If the table schemas are consistent, during a full data migration, when a record from the source database has a primary key value that already exists in the destination database, that source record is not migrated to the destination database.

How do I configure a cross-account migration task?

You first need to identify the scenarios for cross-account tasks, then use the Alibaba Cloud account that owns the database instance to configure RAM authorization for cross-account tasks, and finally configure a cross-account task.

How does a data migration task connect to a local database?

When you configure a data migration task, you can set the Access Method for your local database to Public IP Address. For an example, see Migrate data from a self-managed MySQL database to an ApsaraDB RDS for MySQL instance.

What should I do if data migration fails with the error DTS-31008?

You can click View Cause or use the error message to find a solution in Common errors and troubleshooting.

What should I do if the network is down when accessing a self-managed database via a leased line?

Check whether the leased line is correctly configured with the DTS-related IP whitelist. For the CIDR blocks to add, see Add the CIDR blocks of DTS servers to the IP whitelist of a self-managed database.

For a migration task with a SQL Server source, are functions supported for migration?

No. If the granularity of the selected migration object is a table, other objects such as views, triggers, and stored procedures will also not be migrated to the destination database.

What should I do if the DTS full migration speed is slow?

This may be because the volume of data to be migrated is large. Please wait patiently. You can go to the task details page and view the migration progress in the Task Management module under Full Migration.

What should I do if a schema migration error occurs?

Click the instance ID to go to the instance details page. In Task Management, view the detailed error message for the schema migration module and troubleshoot the error. For more information about how to troubleshoot common errors, see Common errors and troubleshooting.

Are schema migration and full migration billed?

No. For more billing information, see Billable items.

For a data migration task between Redis instances, will the zset data on the destination be overwritten?

The zset on the destination will be overwritten. If the destination already has a key that is the same as the source, DTS first deletes the zset of the corresponding key on the destination and then runs ZADD to write each member of the source zset to the destination.

What is the impact of full migration on the source database?

During full migration, DTS first slices the data and then reads and writes the data within each slice range. The IOPS of the source database increases during slicing, and reading data within the slice ranges affects the IOPS, cache pool, and outbound bandwidth of the source database to some extent. Based on practical experience with DTS, these effects are negligible.
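As an illustration of the slicing approach described above, the following Java sketch reads a table in bounded primary-key slices. This is a minimal sketch, not DTS's implementation: the connection URL, credentials, and the table and column names are hypothetical, and the write to the destination is left as a comment.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SlicedFullRead {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://source-host:3306/demo", "user", "password")) {
            long lastId = 0;            // lower bound of the current slice
            final int sliceSize = 1000; // bounded number of rows per query
            while (true) {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, payload FROM t WHERE id > ? ORDER BY id LIMIT ?")) {
                    ps.setLong(1, lastId);
                    ps.setInt(2, sliceSize);
                    int rows = 0;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong("id");
                            rows++; // write the row to the destination here
                        }
                    }
                    if (rows < sliceSize) {
                        break; // last slice reached
                    }
                }
            }
        }
    }
}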

When PolarDB-X 1.0 is the source, can the source PolarDB-X 1.0 in a migration task be scaled?

No. If the source PolarDB-X 1.0 instance is scaled, you must reconfigure the task.

Can DTS guarantee the uniqueness of data migrated to Kafka?

No. Because data is appended to Kafka, duplicate data may occur when a DTS task is restarted or when source logs are pulled repeatedly. DTS ensures data idempotence, meaning data is ordered sequentially, and the latest value of any duplicate data is placed at the end.

If I configure a full migration task first, and then an incremental data migration task, will data inconsistency occur?

Data inconsistency may occur. When an incremental data migration task is configured separately, the task starts migrating data only after it is launched. Incremental data generated in the source instance before the incremental migration task starts will not be synchronized to the destination instance. To perform a zero-downtime migration, select schema migration, full data migration, and incremental data migration as the migration types when you configure the task.

Do I need to select schema migration when configuring an incremental migration task?

Schema migration is the process of migrating the definitions of migration objects to the destination instance before data migration begins, such as migrating the table definition of Table A. To ensure data consistency when you perform an incremental migration, we recommend that you select schema migration, full data migration, and incremental data migration.

Why does RDS use more storage space than the source database when migrating a self-managed database to RDS?

Because DTS performs a logical migration, it encapsulates the data to be migrated into SQL statements and then executes them on the destination RDS instance. This generates binary log data in the destination RDS instance, so during migration the storage space used by RDS may be larger than that used by the source database.

Does DTS support the migration of MongoDB in a VPC network?

Yes, DTS currently supports using an ApsaraDB for MongoDB instance in a VPC network as the source database for migration.

If the source database data changes during data migration, what will be the result of the migrated data?

If the migration task is configured with schema migration, full migration, and incremental migration, any data changes that occur in the source database during the migration will be migrated to the destination database by DTS.

Will releasing a completed migration task affect the use of the migrated database?

No. After the Running Status of the migration task is Completed, you can safely release the migration task.

Does DTS support incremental migration for MongoDB?

Yes. For related configuration examples, see Migration solutions.

What is the difference between using an RDS instance and a self-managed database instance with a public IP as the source for a migration task?

If you select an RDS instance when configuring a migration task, the DTS migration task can adapt to changes such as DNS modifications and network type switches in the RDS instance, effectively ensuring link reliability.

Does DTS support migrating a self-managed database on an ECS instance in a VPC to an RDS instance?

This is supported.

  • If the source ECS instance and the destination RDS instance are in the same region, DTS can directly access the self-managed database on the ECS instance in the VPC.

  • If the source ECS instance and the destination RDS instance are in different regions, the ECS instance needs to have an Elastic IP Address attached. When configuring the migration task, select the ECS instance as the source, and DTS will automatically use the Elastic IP Address of the ECS instance to access the database on it.

Does DTS lock tables during migration? Is there any impact on the source database?

DTS does not lock tables on the source database during full data migration and incremental data migration. During full data migration and incremental data migration, the source data tables can be accessed for reading and writing normally.

When DTS performs RDS migration, does it get data from the primary or secondary RDS database?

When DTS performs data migration, it pulls data from the primary RDS database.

Does DTS support scheduled automatic migration?

DTS does not currently support scheduling the start of data migration tasks.

Does DTS support data migration for RDS instances in VPC mode?

Yes. When configuring the migration task, simply configure the RDS instance ID.

When DTS performs migration or synchronization for ECS and RDS instances within the same account or across accounts, does it use the internal network or the public network? Are there any traffic fees?

When DTS performs a synchronization or migration task, the network used (internal or public) is not related to whether it is cross-account. Whether traffic fees are charged depends on the task type.

  • Network used

    • Migration task: If data migration is performed within the same region, DTS uses the internal network to connect to ECS and RDS instances. For a cross-region migration, DTS uses the public network to connect to the source instance (ECS or RDS) and the internal network to connect to the destination RDS instance.

    • Synchronization task: Uses the internal network.

  • Traffic fees

    • Migration tasks: You are charged for outbound traffic over the public network. You are not charged for traffic for other types of DTS instances. Fees for outbound traffic over the public network are incurred only when the Access Method of the destination database instance is set to Public IP Address.

    • Synchronization task: No traffic fees are charged.

When using DTS for data migration, will the data in the source database be deleted after the migration?

No. When DTS performs data migration, it actually copies the data from the source database to the destination database, which does not affect the data in the source database.

When DTS executes data migration between RDS instances, can I specify the name of the destination database?

Yes. When you execute data migration between RDS instances, you can use the object name mapping feature provided by DTS to specify the name of the destination database. For more information, see Synchronize or migrate data between databases with different names.

What should I do if the source of a DTS migration task cannot connect to an ECS instance?

This may be because the ECS instance does not have a public IP address. Bind an elastic IP address (EIP) to the ECS instance and try again. For information about how to bind an EIP, see Elastic IP Address.

Why don't migration tasks between Redis instances show full migration?

Migration between Redis instances supports full data migration and incremental data migration, which are merged and displayed as Incremental Migration.

Can full migration be skipped?

Yes. After skipping full migration, incremental migration will continue, but errors may occur. It is recommended not to skip full migration.

Does the cluster version of Redis support access to DTS via a public IP?

No. Currently, only the standalone version of Redis supports access to a DTS migration instance via a public IP.

What should I be aware of when migrating from MySQL 8.0 to MySQL 5.6?

You need to create the database in MySQL 5.6 before starting the migration. It is recommended to keep the source and destination database versions consistent, or to migrate from a lower version to a higher version to ensure compatibility. If you migrate from a higher version to a lower version, database compatibility issues may exist.

Can accounts from the source database be migrated to the destination database?

Currently, only migration tasks between RDS MySQL instances support migrating accounts. Other migration tasks do not support this.

How do I configure parameters when the destination is a Message Queue for Apache Kafka instance?

Configure the parameters as needed. For information on how to configure some special parameters, see Configure parameters for a Message Queue for Apache Kafka instance.

How do I perform a scheduled full migration?

You can configure the scheduling policy of the data integration feature to periodically migrate the structure and historical data from the source database to the destination database. For more information, see Configure a data integration task between RDS MySQL instances.

Is it supported to migrate a self-managed SQL Server on ECS to a local self-managed SQL Server?

Yes. The local self-managed SQL Server needs to be connected to Alibaba Cloud. For more information, see Preparations.

Is it supported to migrate PostgreSQL databases from other clouds?

If a PostgreSQL database from another cloud allows DTS to access it over the public network, you can migrate its data by using DTS.

Note

If the PostgreSQL version is lower than 10.0, incremental migration is not supported.

Change tracking issues

How does change tracking work?

For information about how change tracking works, see Service architecture and principles.

Will a consumer group be deleted after a change tracking task expires?

After a DTS change tracking task expires, the data consumer group will be retained for 7 days. If the instance is not renewed for more than 7 days after expiration, it will be released, and the corresponding consumer group will also be deleted.

Can a read-only instance be used as a source instance for a tracking task?

There are two situations:

  • If the instance is a read-only instance that records transaction logs (such as RDS MySQL 5.7 or 8.0), it can be used as a source instance.

  • If the instance is a read-only instance that does not record transaction logs (such as RDS MySQL 5.6), it cannot be used as a source instance.

How do I consume tracked data?

For more information, see Consume tracked data.

Why does the date data format change after data is transmitted using the change tracking feature?

The default date storage format in DTS is YYYY:MM:DD; YYYY-MM-DD is only the display format. Therefore, regardless of the format in which the data is transmitted and written, it is ultimately converted to the default storage format.
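For example, the following Java sketch converts a date from the YYYY-MM-DD display format to the YYYY:MM:DD storage format described above; the sample date is arbitrary.

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DtsDateFormat {
    public static void main(String[] args) {
        LocalDate date = LocalDate.parse("2024-01-15");                     // display format: YYYY-MM-DD
        String stored = date.format(DateTimeFormatter.ofPattern("yyyy:MM:dd"));
        System.out.println(stored);                                         // storage format: 2024:01:15
    }
}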

How do I troubleshoot tracking task issues?

For information on how to troubleshoot tracking task issues, see Troubleshoot tracking task issues.

What should I do if the SDK suddenly pauses during normal data download and cannot track data?

Check whether the ackAsConsumed interface is called in the SDK code to report the consumer offset. If ackAsConsumed is not called, the records cached inside the SDK are not removed from its internal cache. When the cache is full, new data cannot be pulled, which causes the SDK to pause and stop tracking data.
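The following Java sketch shows the shape of a consume loop that reports the consumer offset after each record is processed. The Record and DtsClient types are hypothetical stand-ins rather than actual SDK classes; only the ackAsConsumed call comes from the behavior described above.

import java.util.List;

public class ConsumeLoopSketch {
    // Hypothetical stand-ins for the SDK types; real class names depend on the SDK version.
    interface Record {}
    interface DtsClient {
        List<Record> poll();               // pull a batch of tracked records (assumed method)
        void ackAsConsumed(Record record); // report the consumer offset for a processed record
    }

    static void consume(DtsClient client) {
        while (true) {
            for (Record record : client.poll()) {
                // ... apply the record to your business logic first ...
                client.ackAsConsumed(record); // then ACK so the SDK can free its internal cache
            }
        }
    }
}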

What should I do if the SDK fails to subscribe to data after a rerun?

Before starting the SDK, modify the consumer offset to ensure it is within the data range. For information on how to modify the consumer offset, see Save and query consumer offsets.

How can a client specify a point in time for data consumption?

You can set the initCheckpoint parameter to specify a consumer offset. For more information, see Use SDK sample code to consume subscribed data.

How do I reset the offset if a DTS tracking task is backlogged?

  1. Open the corresponding code file based on the usage mode of the SDK client. For example, DTSConsumerAssignDemo.java or DTSConsumerSubscribeDemo.java.

    Note

    For more information, see Use SDK sample code to consume tracked data.

  2. In the Data Range column of the tracking task list, view the modifiable range for the offset of the target tracking instance.

  3. Select a new consumer offset as needed and convert it to a Unix timestamp (a conversion sketch follows this list).

  4. Replace the old consumer offset (initCheckpoint parameter) in the code file with the converted new consumer offset.

  5. Rerun the client.
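
For step 3, the following sketch shows one way to produce the Unix timestamp (in seconds) that the initCheckpoint parameter expects. The date, time, and time zone are placeholders; substitute the offset you selected.

import java.time.ZoneId;
import java.time.ZonedDateTime;

public class CheckpointTimestamp {
    public static void main(String[] args) {
        // Placeholder point in time; use the consumer offset chosen in step 3.
        ZonedDateTime offsetTime =
                ZonedDateTime.of(2024, 1, 15, 8, 0, 0, 0, ZoneId.of("Asia/Shanghai"));
        long initCheckpoint = offsetTime.toEpochSecond(); // Unix timestamp in seconds
        System.out.println(initCheckpoint); // paste this value into the code file
    }
}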

What should I do if I cannot connect to a tracking task's VPC address from the client?

This may be because the machine where the client is located is not in the VPC specified when the tracking task was configured (for example, the client's VPC was changed). You need to reconfigure the task.

Why is the consumer offset on the console larger than the maximum value of the data range?

The data range of the tracking channel is updated every minute, whereas the consumer offset is updated every 10 seconds. Therefore, if you consume data in real time, the consumer offset may temporarily exceed the maximum value of the tracking channel's data range.

How does DTS ensure that the data tracked by the SDK is a complete transaction?

Based on the provided consumer offset, the server searches for the complete transaction corresponding to this offset and distributes data downstream starting from the BEGIN statement of the entire transaction. This ensures that the complete transaction content can be received.

How do I confirm if data is being consumed normally?

If data is being consumed normally, the consumer offset in the Data Transmission console will advance as expected.

What does usePublicIp=true mean in the change tracking SDK?

Configuring usePublicIp=true in the change tracking SDK means that the SDK accesses the DTS tracking channel via the public network.

When the source RDS of a change tracking task undergoes a primary-secondary switchover or the primary database restarts, will the business be affected?

When an RDS MySQL, RDS PostgreSQL, PolarDB for MySQL, PolarDB for PostgreSQL, or PolarDB-X 1.0 (storage type RDS MySQL) instance undergoes a primary-secondary switchover or restart, DTS will adaptively switch, and the business will not be affected.

Does RDS have a way to automatically download binary logs to a local server?

DTS's change tracking supports real-time tracking of RDS binary logs. You can enable the DTS change tracking service and use the DTS SDK to track RDS binary log data and synchronize it to a local server in real time.

Does the real-time incremental data of change tracking refer only to new data, or does it include modified data?

The incremental data that can be tracked by DTS includes all additions, deletions, modifications, and schema changes (DDL).

Why do I receive duplicate data after restarting the SDK when one record on the consumer end was not ACKed?

When a message has not been ACKed, the server finishes pushing all messages in the buffer, after which the SDK can no longer receive messages. At this point, the consumer offset saved by the server is the offset of the last message before the un-ACKed one. When the SDK restarts, to ensure that no messages are lost, the server re-pushes data starting from that saved offset. As a result, the SDK receives some messages repeatedly.

How often is the change tracking consumer offset updated, and why do I sometimes receive duplicate data when restarting the SDK?

After the change tracking SDK consumes each message, it must call ackAsConsumed to reply with an ACK to the server. After receiving the ACK, the server updates the consumer offset in memory and then persists the consumer offset every 10 seconds. If the SDK is restarted before the latest ACK is persisted, to ensure no messages are lost, the server will start pushing messages from the last persisted consumer offset. At this time, the SDK will receive duplicate messages.
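
Because duplicates after a restart are expected behavior, the downstream consumer should be idempotent. The following is a minimal sketch, assuming each record exposes a monotonically increasing checkpoint; the types are illustrative, not the actual SDK classes.

import java.util.List;

public class IdempotentConsumer {
    // Illustrative stand-in: any monotonically increasing position works.
    interface TrackedRecord { long checkpoint(); }

    private long lastApplied; // restore this from durable storage on startup

    void consume(List<? extends TrackedRecord> batch) {
        for (TrackedRecord record : batch) {
            if (record.checkpoint() <= lastApplied) {
                continue; // a duplicate re-pushed after an SDK restart; skip it
            }
            apply(record);
            lastApplied = record.checkpoint(); // persist together with the business write
        }
    }

    void apply(TrackedRecord record) { /* business logic */ }
}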

Can one change tracking instance track multiple RDS instances?

No. Currently, one change tracking instance can only track one RDS instance.

Do change tracking instances experience data inconsistency?

No. A change tracking task only reads changes from the source database and does not itself introduce data inconsistency. If the data consumed by the client is not what you expect, troubleshoot the consumer logic on the client side.

What should I do if UserRecordGenerator appears when consuming tracked data?

When consuming tracked data, if you see a message like UserRecordGenerator: haven't receive records from generator for 5s, you need to check if the consumer offset is within the offset range of the incremental data collection module and ensure that the consumer end is running normally.

Does one topic support creating multiple partitions?

No. To ensure the global order of messages, each tracking topic in DTS has only one partition, which is fixed at partition 0.

Does the change tracking SDK support the Go language?

Yes. For sample code, see dts-subscribe-demo.

Does the change tracking SDK support the Python language?

Yes, sample code is available in the dts-subscribe-demo repository.

Does flink-dts-connector support multi-threaded concurrent consumption of tracked data?

No.

Data validation issues

What are the causes of data inconsistency in a data validation task?

Common causes are as follows:

  1. There is latency in the migration or synchronization task.

  2. A column with a default value was added to the source database, and the task has latency.

  3. Data was written to the destination by sources other than DTS.

  4. A DDL operation was performed on the source database of a task with the multi-table merge feature enabled.

  5. The migration or synchronization task used the object name mapping feature.

Why does a schema validation task report a difference in isRelHasoids?

PostgreSQL versions earlier than 12 support adding a globally unique object identifier (OID) field by specifying WITH OIDS when a table is created. If the source for a structure validation task contains a table created with WITH OIDS and the target is a later version of PostgreSQL that does not support WITH OIDS, the validation reports an isRelHasoids difference.

Should I be concerned if a schema validation task reports a difference in isRelHasoids?

No.

Does DTS synchronize or migrate the object identifier (OID) field?

The object identifier (OID) field is automatically generated when you specify WITH OIDS. DTS does not synchronize or migrate this data, even if the destination supports the field.

How do I check if a table has an object identifier (OID) field?

Note

In the command, replace <table_name> with the table name.

  • SQL query: SELECT relname AS table_name, relhasoids AS has_oids FROM pg_class WHERE relname = '<table_name>' AND relkind = 'r';

  • Client command: \d+ <table_name>

Other issues

What is the impact of modifying data in the destination database while a data synchronization or migration task is running?

  • Modifying data in the destination database may cause the DTS task to fail. If you operate on the objects being migrated or synchronized in the destination database, problems such as primary key conflicts or missing records to update can occur and ultimately cause the DTS task to fail. Operations that do not involve these objects are safe. For example, you can create a new table in the destination instance and write data to it; because this table is not in the migration or synchronization object list, it does not affect DTS.

  • Because DTS reads information from the source database instance and migrates or synchronizes its full data, structural data, and incremental data to the destination instance, any data modified in the destination database during the task may be overwritten by data migrated or synchronized from the source.

Can data be written to both the source and destination databases simultaneously while a data synchronization or migration task is running?

Yes, but if data sources other than DTS write data to the destination database while the DTS instance is running, it may lead to abnormal data in the destination database or the DTS instance.

What happens if the password of the source or destination database is changed while a DTS instance is running?

The DTS instance returns an error and stops. You can click the instance ID to go to the instance details page. On the Basic Information tab, modify the account password for the source or destination. Then, go to the Task Management tab, find the module that is reporting an error, and restart it in the Basic Information section.

Why do some source or destination databases not have a public IP as a connection type?

This is related to the connection type of the source or destination database, the task type, and the database type. For example, for a MySQL source, migration and tracking tasks can be connected via a public IP, but synchronization tasks do not support public IP connections.

Is cross-account data migration or data synchronization supported?

Yes. For the configuration method, see Configure a cross-Alibaba Cloud account task.

Can the source and destination databases be the same database instance?

Yes. If your source and destination databases are the same instance, it is recommended to use the mapping feature to isolate and distinguish the data. Otherwise, it may lead to DTS instance failure or data loss. For more information, see Object name mapping.

Why does a task with Redis as the destination report the error "OOM command not allowed when used memory > 'maxmemory'"?

This may be because the storage space of the destination Redis instance is insufficient. If the destination Redis instance has a cluster architecture, it may also be that a shard has reached its memory limit. You need to upgrade the specifications of the destination instance.

What is the AliyunDTSRolePolicy access policy and what is it for?

The AliyunDTSRolePolicy policy is used to access cloud resources such as RDS and ECS under the current account or across accounts. It can be used to call relevant cloud resource information when configuring data migration, synchronization, or tracking tasks. For more information, see Grant DTS permission to access cloud resources.

How do I grant a RAM role authorization?

When you log on to the console for the first time, DTS will ask you to authorize the AliyunDTSDefaultRole role. Follow the console prompts to go to the RAM authorization page to grant the authorization. For more information, see Grant DTS permission to access cloud resources.

Important

You need to log on to the console with an Alibaba Cloud account to perform this operation.

Can the username and password entered for a DTS task be modified?

Yes, you can change the database account password for a DTS task. Click the instance ID to go to the instance details page. On the Basic Information tab, click Change Password to change the password of the source or destination database account.

Important

The system username and password of a DTS task cannot be modified.

Why do MaxCompute tables have a _base suffix?

  1. Initial schema synchronization.

    DTS synchronizes the schemas of the required objects from the source database to MaxCompute. During initial schema synchronization, DTS adds the _base suffix to the end of the source table name. For example, if the name of the source table is customer, the name of the table in MaxCompute is customer_base.

  2. Initial full data synchronization.

    DTS synchronizes the historical data of the table from the source database to the destination table in MaxCompute. For example, the customer table in the source database is synchronized to the customer_base table in MaxCompute. The data is the basis for subsequent incremental synchronization.

    Note

    The destination table that is suffixed with _base is known as a full baseline table.

  3. Incremental data synchronization.

    DTS creates an incremental data table in MaxCompute. The name of the incremental data table is suffixed with _log, such as customer_log. Then, DTS synchronizes the incremental data that was generated in the source database to the incremental data table.

    Note

    For more information, see Schema of an incremental data table.

What should I do if I cannot get the Kafka topic?

This may be because the currently configured Kafka broker has no topic information. Check the topic's broker distribution with the following command:

./bin/kafka-topics.sh --describe --zookeeper zk01:2181/kafka --topic topic_name
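
On Kafka versions in which the --zookeeper flag has been removed (Kafka 3.0 and later), the equivalent check uses the --bootstrap-server flag. The broker address below is a placeholder:

./bin/kafka-topics.sh --describe --bootstrap-server broker01:9092 --topic topic_name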

Can I set up a local MySQL instance as a secondary database for an RDS instance?

Yes. You can use the data migration feature of Data Transmission Service (DTS) to configure real-time data synchronization from the RDS instance to a local self-managed MySQL instance, achieving a primary-secondary architecture.

How do I copy data from an RDS instance to a newly created RDS instance?

You can use the DTS data migration feature. For the migration task, select schema migration, full migration, and incremental migration as the migration types. For the configuration method, see Data migration between RDS instances.

Does DTS support creating a copy of a database within an RDS instance with a different database name?

Yes. The object name mapping feature provided by DTS lets you create a copy of a database within an RDS instance with a different database name.

What should I do if a DTS instance always shows latency?

Possible reasons are as follows:

  • Multiple DTS tasks were created for the source database instance using different accounts, causing the instance's load to be too high. Create tasks using the same account.

  • The destination database instance has insufficient memory. Schedule a maintenance window for your business and restart the destination database instance. If the problem persists, upgrade the specifications of the destination instance or perform a primary-secondary switchover.

    Note

    A transient disconnection may occur during the primary-secondary switchover. Make sure your application has an automatic reconnection mechanism.

What should I do if fields are all lowercase after synchronizing or migrating to the destination database in the old console?

Use the new console to configure the task and use the destination database object name case policy feature. For more information, see Destination database object name case policy.

Can a DTS task be resumed after being paused?

Generally, a DTS task paused for no more than 24 hours can be resumed normally. If the data volume is small, a DTS task paused for no more than 7 days can also be resumed normally. We recommend that you do not pause a task for more than 6 hours.

Why does the progress start from 0 after a task is paused and then restarted?

After the task is restarted, DTS re-queries the data that has already been processed and then continues with the remaining data. During this process, the displayed progress may temporarily differ from the actual progress.

What is the principle of DDL lockless change?

For more information about the principles of DDL lockless change, see Main principles.

Does DTS support pausing the synchronization or migration of a specific table?

No.

If a task fails, do I need to purchase it again?

No, you can reconfigure the original task.

What happens if multiple tasks write data to the same destination?

This may lead to data inconsistency.

Why is an instance still locked after renewal?

After you renew a locked DTS instance, it takes some time for the instance to be unlocked. Wait a moment and check again.

Does a DTS instance support changing its resource group?

Yes. You can go to the Basic Information page of the instance. In the Basic Information section, click Edit next to Resource Group Name.

Does DTS have a binary log analysis tool?

DTS does not have a binary log analysis tool.

Is it normal for an incremental task to always show 95%?

Yes, it is normal. An incremental task is continuous and does not complete, so the progress will not reach 100%.

Why hasn't a DTS task been released after more than 7 days?

Occasionally, a frozen task may be saved for more than 7 days.

Can the port of a created task be modified?

No.

Can the RDS MySQL mounted under PolarDB-X 1.0 in a DTS task be downgraded?

Downgrading is not recommended. It will trigger a primary-secondary switchover, which may lead to data loss.

Can the source or destination instance be upgraded or downgraded while a DTS task is running?

Upgrading or downgrading the source or destination instance while a DTS task is running may cause task latency or data loss. It is not recommended to change the specifications of the source or destination instance.

What is the impact of a DTS task on the source and destination instances?

Initial full data synchronization will occupy some read and write resources of the source and destination databases, which may increase the database load. It is recommended to perform full tasks during off-peak hours.

What is the approximate latency of a DTS task?

The latency of a DTS task cannot be estimated because it is limited by various factors, such as the running load of the source instance, the bandwidth of the transmission network, network latency, and the write performance of the destination instance.

If the Data Transmission console automatically redirects to the Data Management DMS console, how do I return to the old Data Transmission console?

In the Data Management DMS console, click the robot icon in the lower-right corner, and then click 返回旧版 (Return to Old Version) to return to the old Data Transmission console.

Does DTS support data encryption?

DTS supports securely accessing the source or destination database via an SSL-encrypted connection to read data from the source or write data to the destination. However, it does not support data encryption during the data transmission process.

Does DTS support ClickHouse as a source or destination?

No.

Does DTS support AnalyticDB for MySQL 2.0 as a source or destination?

AnalyticDB for MySQL 2.0 can be used only as a destination. Configurations that use AnalyticDB for MySQL 2.0 as the destination are not yet available in the new console and can be configured only in the old console.

Why can't I see a newly created task on the console?

You may have selected the wrong task list or filtered the tasks. Select the correct filter options in the corresponding task list, such as the correct region and resource group.

Can the grayed-out configuration items of a created task be modified?

No.

How do I configure latency alerts and thresholds?

DTS provides a monitoring and alerting feature. You can set alert rules for important monitoring metrics through the console to stay informed about the running status. For the configuration method, see Configure monitoring and alerting.

Can I view the reason for failure for a task that has been failed for a long time?

No. If a task has been failed for a long time (for example, more than 7 days), the relevant logs will be cleared, making it impossible to view the reason for failure.

Can a task that has been failed for a long time be recovered?

No. If a task has been failed for a long time (for example, more than 7 days), the relevant logs will be cleared and it cannot be recovered. You need to reconfigure the task.

What is the rdsdt_dtsacct account?

If you did not create the rdsdt_dtsacct account, it may have been created by DTS. DTS creates a built-in account named rdsdt_dtsacct in some database instances to connect to the source and destination database instances.

How do I view information about heap tables, tables without primary keys, compressed tables, tables with computed columns, and tables with sparse columns in SQL Server?

You can execute the following SQL to check if the source database has tables in these scenarios:

  1. Execute the following SQL statement to check for heap tables:

    SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.indexes WHERE index_id = 0);
  2. Execute the following SQL statement to check for tables without primary keys:

    SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id NOT IN (SELECT parent_object_id FROM sys.objects WHERE type = 'PK');
  3. Execute the following SQL statement to check for primary key columns that are not contained in clustered index columns:

    SELECT s.name schema_name, t.name table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id WHERE t.type = 'U' AND s.name NOT IN('cdc', 'sys') AND t.name NOT IN('systranschemas') AND t.object_id IN ( SELECT pk_colums_counter.object_id AS object_id FROM (select pk_colums.object_id, sum(pk_colums.column_id) column_id_counter from (select sic.object_id object_id, sic.column_id FROM sys.index_columns sic, sys.indexes sis WHERE sic.object_id = sis.object_id AND sic.index_id = sis.index_id AND sis.is_primary_key = 'true') pk_colums group by object_id) pk_colums_counter inner JOIN ( select cluster_colums.object_id, sum(cluster_colums.column_id) column_id_counter from (SELECT sic.object_id object_id, sic.column_id FROM sys.index_columns sic, sys.indexes sis WHERE sic.object_id = sis.object_id AND sic.index_id = sis.index_id AND sis.index_id = 1) cluster_colums group by object_id ) cluster_colums_counter ON pk_colums_counter.object_id = cluster_colums_counter.object_id and pk_colums_counter.column_id_counter != cluster_colums_counter.column_id_counter);
  4. Execute the following SQL statement to check for compressed tables:

    SELECT s.name AS schema_name, t.name AS table_name FROM sys.objects t, sys.schemas s, sys.partitions p WHERE s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id = p.object_id AND p.data_compression != 0;
  5. Execute the following SQL statement to check for tables with computed columns:

    SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_computed = 1);
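  6. Execute the following SQL statement to check for tables with sparse columns (this check follows the same pattern as the previous ones and uses the is_sparse column of sys.columns):

    SELECT s.name AS schema_name, t.name AS table_name FROM sys.schemas s INNER JOIN sys.tables t ON s.schema_id = t.schema_id AND t.type = 'U' AND s.name NOT IN ('cdc', 'sys') AND t.name NOT IN ('systranschemas') AND t.object_id IN (SELECT object_id FROM sys.columns WHERE is_sparse = 1);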

What should I do if the structures of the source and destination are inconsistent?

You can try using the mapping feature to establish a mapping relationship between the columns of the source and destination. For more information, see Object name mapping.

Note

Modifying column types is not supported.

Does object name mapping support modifying column types?

No.

Does DTS support limiting the read speed of the source database?

No. You need to evaluate the performance of the source database (such as whether IOPS and network bandwidth meet the requirements) before running the task. It is also recommended to run the task during off-peak hours.

How do I clean up orphaned documents in a MongoDB (sharded cluster architecture)?

Check for orphaned documents

  1. Connect to the MongoDB sharded cluster instance using the Mongo Shell.

    For the connection method for ApsaraDB for MongoDB, see Connect to a MongoDB sharded cluster instance using the Mongo Shell.

  2. Execute the following command to switch to the destination database.

    use <db_name>
  3. Execute the following command to view information about orphaned documents.

    db.<coll_name>.find().explain("executionStats")
    Note

    Check the chunkSkips field of the SHARDING_FILTER stage in the executionStats of each shard. If it is not 0, it means there are orphaned documents on the corresponding shard.

    The following example response indicates that in the FETCH stage before the SHARDING_FILTER stage, 102 documents were returned ("nReturned" : 102). Then, in the SHARDING_FILTER stage, 2 orphaned documents were filtered out ("chunkSkips" : 2), and finally 100 documents were returned ("nReturned" : 100).

    "stage" : "SHARDING_FILTER",
    "nReturned" : 100,
    ......
    "chunkSkips" : 2,
    "inputStage" : {
        "stage" : "FETCH",
        "nReturned" : 102,

    For more information about the SHARDING_FILTER stage, see MongoDB Manual.

Clean up orphaned documents

Important

If you have multiple databases, delete orphaned documents from each database.

ApsaraDB for MongoDB instances
Note

An error occurs if a cleanup script is executed to delete orphaned documents from an ApsaraDB for MongoDB instance whose major version is earlier than 4.2 or an ApsaraDB for MongoDB instance whose minor version is earlier than 4.0.6. For information about how to view the current version of an ApsaraDB for MongoDB instance, see ApsaraDB for MongoDB minor version release notes. For information about how to update the minor version or major version of an ApsaraDB for MongoDB instance, see Upgrade the major version of an instance and Update the minor version of an instance.

The cleanupOrphaned command is required to delete orphaned documents. The method of running this command varies based on the version of the MongoDB database.

MongoDB 4.4 and later
  1. Create a JavaScript script file named cleanupOrphaned.js on a server that can connect to the sharded cluster instance.

    Note

    This script is used to delete orphaned documents from all collections in multiple databases in multiple shards. If you want to delete orphaned documents from a specific collection, you can modify some of the parameters in the script file.

    // The names of shards.
    var shardNames = ["shardName1", "shardName2"];
    // The databases from which you want to delete orphaned documents.
    var databasesToProcess = ["database1", "database2", "database3"];
    
    shardNames.forEach(function(shardName) {
        // Traverse the specified databases.
        databasesToProcess.forEach(function(dbName) {
            var dbInstance = db.getSiblingDB(dbName);
            // Obtain the names of all collections of the specified databases.
            var collectionNames = dbInstance.getCollectionNames();
            
            // Traverse all collections.
            collectionNames.forEach(function(collectionName) {
                // The complete collection name.
                var fullCollectionName = dbName + "." + collectionName;
                // Build the cleanupOrphaned command.
                var command = {
                    runCommandOnShard: shardName,
                    command: { cleanupOrphaned: fullCollectionName }
                };
    
                // Run the cleanupOrphaned command.
                var result = db.adminCommand(command); 
                if (result.ok) {
                    print("Cleaned up orphaned documents for collection " + fullCollectionName + " on shard " + shardName);
                    printjson(result);
                } else {
                    print("Failed to clean up orphaned documents for collection " + fullCollectionName + " on shard " + shardName);
                }
            });
        });
    });

    You must modify the shardNames and databasesToProcess parameters in the script file. The following content describes the two parameters:

    • shardNames: the IDs of the shards from which you want to delete orphaned documents. You can view the IDs in the Shard List section on the Basic Information page of the sharded cluster instance. Example: d-bp15a3796d3a****.

    • databasesToProcess: the names of the databases from which you want to delete orphaned documents.

  2. Run the following command in the directory in which the cleanupOrphaned.js script file is stored:

    mongo --host <Mongoshost> --port <Primaryport>  --authenticationDatabase <database> -u <username> -p <password> cleanupOrphaned.js > output.txt

    The following list describes the parameters that you can configure.

    • <Mongoshost>: the endpoint of the mongos node of the sharded cluster instance. Format: s-bp14423a2a51****.mongodb.rds.aliyuncs.com.

    • <Primaryport>: the port number of the mongos node of the sharded cluster instance. Default value: 3717.

    • <database>: the name of the database to which the database account belongs.

    • <username>: the database account.

    • <password>: the password of the database account.

    • output.txt: the file that stores the execution results.

MongoDB 4.2 and earlier
  1. Create a JavaScript script file named cleanupOrphaned.js on a server that can connect to the sharded cluster instance.

    Note

    This script is used to delete orphaned documents from a specific collection in a database in multiple shards. If you want to delete orphaned documents from multiple collections, you can modify the fullCollectionName parameter in the script file and run the script multiple times. Alternatively, you can modify the script file to traverse all collections.

    function cleanupOrphanedOnShard(shardName, fullCollectionName) {
        var nextKey = { };
        var result;
    
        while ( nextKey != null ) {
            var command = {
                runCommandOnShard: shardName,
                command: { cleanupOrphaned: fullCollectionName, startingFromKey: nextKey }
            };
    
            result = db.adminCommand(command);
            printjson(result);
    
            if (result.ok != 1 || !(result.results.hasOwnProperty(shardName)) || result.results[shardName].ok != 1 ) {
                print("Unable to complete at this time: failure or timeout.")
                break
            }
    
            nextKey = result.results[shardName].stoppedAtKey;
        }
    
        print("cleanupOrphaned done for coll: " + fullCollectionName + " on shard: " + shardName)
    }
    
    var shardNames = ["shardName1", "shardName2", "shardName3"]
    var fullCollectionName = "database.collection"
    
    shardNames.forEach(function(shardName) {
        cleanupOrphanedOnShard(shardName, fullCollectionName);
    });

    You must modify the shardNames and fullCollectionName parameters in the script file. The following content describes the two parameters:

    • shardNames: the IDs of the shards from which you want to delete orphaned documents. You can view the IDs in the Shard List section on the Basic Information page of the sharded cluster instance. Example: d-bp15a3796d3a****.

    • fullCollectionName: You must replace this parameter with the name of the collection from which you want to delete orphaned documents. Format: database name.collection name.

  2. Run the following command in the directory in which the cleanupOrphaned.js script file is stored:

    mongo --host <Mongoshost> --port <Primaryport>  --authenticationDatabase <database> -u <username> -p <password> cleanupOrphaned.js > output.txt

    The following list describes the parameters that you can configure.

    • <Mongoshost>: the endpoint of the mongos node of the sharded cluster instance. Format: s-bp14423a2a51****.mongodb.rds.aliyuncs.com.

    • <Primaryport>: the port number of the mongos node of the sharded cluster instance. Default value: 3717.

    • <database>: the name of the database to which the database account belongs.

    • <username>: the database account.

    • <password>: the password of the database account.

    • output.txt: the file that stores the execution results.

Self-managed MongoDB databases
  1. Download the cleanupOrphaned.js script file on a server that can connect to the self-managed MongoDB database.

    wget "https://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/attach/120562/cn_zh/1564451237979/cleanupOrphaned.js"
  2. Replace test in the cleanupOrphaned.js file with the name of the database from which you want to delete orphaned documents.

    Important

    If you want to delete orphaned documents from multiple databases, repeat Step 2 and Step 3.

  3. Run the following command on a shard to delete the orphaned documents from all collections in the specified database:

    Note

    You must repeat this step for each shard.

    mongo --host <Shardhost> --port <Primaryport>  --authenticationDatabase <database> -u <username> -p <password> cleanupOrphaned.js
    Note
    • <Shardhost>: the IP address of the shard.

    • <Primaryport>: the service port of the primary node in the shard.

    • <database>: the name of the database to which the database account belongs.

    • <username>: the account that is used to log on to the self-managed MongoDB database.

    • <password>: the password that is used to log on to the self-managed MongoDB database.

    Example:

    In this example, a self-managed MongoDB database has three shards, and you must delete the orphaned documents from each shard.

    mongo --host 172.16.1.10 --port 27018  --authenticationDatabase admin -u dtstest -p 'Test123456' cleanupOrphaned.js
    mongo --host 172.16.1.11 --port 27021 --authenticationDatabase admin -u dtstest -p 'Test123456' cleanupOrphaned.js
    mongo --host 172.16.1.12 --port 27024  --authenticationDatabase admin -u dtstest -p 'Test123456' cleanupOrphaned.js

Troubleshooting

If idle cursors exist on the namespace that contains the orphaned documents, the cleanup may not complete, and the following information may appear in the logs of the mongod instance that hosts the orphaned documents:

Deletion of DATABASE.COLLECTION range [{ KEY: VALUE1 }, { KEY: VALUE2 }) will be scheduled after all possibly dependent queries finish

You can connect to mongod using the Mongo Shell and execute the following command to check whether idle cursors exist on the current shard. If they do, clean up all idle cursors by restarting mongod or running the killCursors command, and then clean up the orphaned documents again. For more information, see JIRA ticket.

db.getSiblingDB("admin").aggregate( [{ $currentOp : { allUsers: true, idleCursors: true } },{ $match : { type: "idleCursor" } }] )

How do I handle uneven data distribution in a MongoDB sharded cluster architecture?

Enabling the Balancer feature and performing pre-sharding can effectively solve the problem of most data being written to a single shard (data skew).

Enable the Balancer

If the Balancer is off, or if the time set for the Balancer window has not been reached, you can enable it or temporarily cancel the Balancer's window period to start data balancing immediately.

  1. Connect to the MongoDB sharded cluster instance.

  2. In the mongos node command window, switch to the config database.

    use config
  3. Execute the following commands as needed.

    • Enable the Balancer feature

      sh.setBalancerState(true)
    • Temporarily cancel the Balancer's window period

      db.settings.updateOne( { _id : "balancer" }, { $unset : { activeWindow : true } } )

Pre-sharding

MongoDB supports two sharding methods: range sharding and hash sharding. Pre-sharding distributes chunks across multiple shard nodes in advance, which balances the load as evenly as possible during DTS data synchronization or migration.

Hash sharding

Use the numInitialChunks parameter for quick and easy pre-sharding. The default value is number of shards × 2, and the maximum can be set to number of shards × 8192. For more information, see sh.shardCollection().

sh.shardCollection("phonebook.contacts", { last_name: "hashed" }, false, {numInitialChunks: 16384})

Range sharding

  • If the source MongoDB is also a sharded cluster architecture, you can use the data in config.chunks to get the chunk range of the corresponding sharded table in the source MongoDB. This can be used as a reference for the value of <split_value> in subsequent pre-sharding commands.

  • If the source MongoDB is a replica set, you can only use the find command to determine the specific range of the sharding key, and then design reasonable split points.

    // Get the minimum value of the sharding key
    db.<coll>.find().sort({<shardKey>:1}).limit(1)
    // Get the maximum value of the sharding key
    db.<coll>.find().sort({<shardKey>:-1}).limit(1)

Command format

Note

Taking the splitAt command as an example, for more information, see sh.splitAt(), sh.splitFind(), and Split Chunks in a Sharded Cluster.

sh.splitAt("<db>.<coll>", {"<shardKey>":<split_value>})

Example statements

sh.splitAt("test.test", {"id":0})
sh.splitAt("test.test", {"id":50000})
sh.splitAt("test.test", {"id":75000})

After completing the pre-sharding operation, you can execute the sh.status() command on the mongos node to confirm the effect of the pre-sharding.

How do I set the number of instances displayed per page in the console's task list?

Note

This operation is introduced using a synchronization instance as an example.

  1. Use one of the following methods to go to the Data Synchronization page and select the region in which the data synchronization instance resides.

    DTS console

    1. Log on to the DTS console.

    2. In the left-side navigation pane, click Data Synchronization.

    3. In the upper-left corner of the page, select the region in which the data synchronization task resides.

    DMS console

    Note

    The actual operations may vary based on the mode and layout of the DMS console. For more information, see Simple mode and Customize the layout and style of the DMS console.

    1. Log on to the DMS console.

    2. In the top navigation bar, move the pointer over Data + AI and choose DTS (DTS) > Data Synchronization.

    3. From the drop-down list to the right of Data Synchronization Tasks, select the region in which the data synchronization instance resides.

  2. On the right side of the page, drag the scroll bar to the bottom of the page.

  3. In the bottom-right corner of the page, select a value for Items Per Page.

    Note

    Items Per Page can be set to only 10, 20, or 50.

What should I do if a DTS instance prompts a ZooKeeper connection timeout?

Try restarting the instance to see if it can be recovered. For the restart operation, see Start a DTS instance.

After deleting a DTS CIDR block in CEN, why is it automatically re-added?

This may be because you are using a Basic Edition transit router of Cloud Enterprise Network (CEN) to connect the database to DTS. If you use this database to create a DTS instance, even if you delete the DTS CIDR block in CEN, DTS will automatically add the server's IP address CIDR block to the corresponding router.

Can DTS tasks be exported?

No.

How do I call OpenAPI using the Java language?

The method for calling OpenAPI in Java is similar to that in Python. For more information, see Python SDK Call Example. You can go to the Data Transmission Service DTS SDK page, select your target programming language under All Languages, and view the example code.

How do I use an API to configure the ETL feature for a synchronization or migration task?

You can configure it using common parameters (such as etlOperatorCtl and etlOperatorSetting) in the Reserve parameter of the API operation. For more information, see ConfigureDtsJob and Reserve parameter description.
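
As a rough sketch of the shape of such a call: the Reserve request parameter is a JSON string, and etlOperatorCtl and etlOperatorSetting are the parameter names cited above. The toggle value and the ETL script below are placeholder assumptions; verify both against the Reserve parameter description.

public class ReserveEtlExample {
    public static void main(String[] args) {
        // Build the JSON string to pass as the Reserve parameter of ConfigureDtsJob.
        // "Y" is assumed to enable the ETL feature; the script value is a placeholder.
        String reserve = "{"
                + "\"etlOperatorCtl\":\"Y\","
                + "\"etlOperatorSetting\":\"<your ETL DSL script>\""
                + "}";
        System.out.println(reserve);
    }
}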

Does DTS support Azure SQL Database?

Yes. When Azure SQL Database is used as the source database, the SQL Server Incremental Synchronization Mode must be set to Polling and querying CDC instances for incremental synchronization.

Will the source database data be retained after DTS synchronization or migration is complete?

Yes. DTS does not delete data in the source database. If you no longer need to retain the source data, delete it manually.

Can the rate be adjusted after a synchronization or migration instance is running?

Yes. For more information, see Adjust migration rate.

Does DTS support sampling data for synchronization or migration by time period?

No.

Do I need to manually create data tables in the destination database when synchronizing or migrating data?

For DTS instances that support schema tasks (schema synchronization or schema migration), if you select Schema Synchronization for Synchronization Types or Schema Migration for Migration Types, you do not need to manually create data tables in the destination database.

Is a network connection required between the source and destination databases for data synchronization or migration?

No.

Do I need to configure RAM authorization for a cross-account DTS task configured over the public network?

No. When you configure a DTS task, you can set the Access Method of the database instance to Public IP Address, and then complete the required configuration.

Note

Data synchronization tasks do not support connecting to database instances via Public IP Address.

Does DTS overwrite existing data during data synchronization or migration?

For data synchronization, the default behavior of DTS and its related parameters are as follows:

Default DTS behavior

If a primary key or unique key conflict occurs during a sync task:

  • If the source and destination databases have the same schema and a data record in the destination database has the same primary key value or unique key value as a data record in the source database:

    • During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.

    • During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.

  • If the source and destination databases have different schemas, data may fail to be initialized. In this case, only some columns are synchronized, or the data synchronization instance fails. Proceed with caution.

Related parameters

When you configure a task, use the Processing Mode of Conflicting Tables parameter to manage how data is handled.

  • Precheck and Report Errors: checks whether the destination database contains tables that have the same names as tables in the source database. If the source and destination databases do not contain tables that have identical table names, the precheck is passed. Otherwise, an error is returned during the precheck, and the data synchronization task cannot be started.

    Note

    If the source and destination databases contain tables with identical names and the tables in the destination database cannot be deleted or renamed, you can use the object name mapping feature to rename the tables that are synchronized to the destination database. For more information, see Map object names.

  • Ignore Errors and Proceed: skips the precheck for identical table names in the source and destination databases.

    Warning

    If you select Ignore Errors and Proceed, data inconsistency may occur and your business may be exposed to potential risks.

    • If the source and destination databases have the same schema and a data record in the destination database has the same primary key value or unique key value as a data record in the source database:

      • During full data synchronization, DTS does not synchronize the data record to the destination database. The existing data record in the destination database is retained.

      • During incremental data synchronization, DTS synchronizes the data record to the destination database. The existing data record in the destination database is overwritten.

    • If the source and destination databases have different schemas, data may fail to be initialized. In this case, only some columns are synchronized, or the data synchronization instance fails. Proceed with caution.

Recommendation

To ensure data consistency, delete the destination table before you configure the DTS task, if your business requirements permit.