This topic describes error messages DTS-RETRY-ERR-0501 through DTS-RETRY-ERR-0600 that may occur when you configure a DTS task, along with their solutions.
In this topic, the (.*)? regular expression is used to indicate variables in error messages.
DTS-RETRY-ERR-0501: Data too large
Possible cause: The Elasticsearch instance has insufficient memory, which prevents data from being written.
Solution: Upgrade the configuration of the Elasticsearch instance, and then restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: IOException: es: DTS-10035: [parent] Data too large, data for [] would be [136509****/1.2gb], which is larger than the limit of ****
DTS-RETRY-ERR-0502: COPY with ON CONFLICT clause is not supported on AO/AOCO relation
Possible cause: The storage format of the destination table is Append-Optimized/Append-Optimized Column-Oriented (AO/AOCO). Tables of this type do not support the ON CONFLICT clause, which prevents data from being written.
Solution: Modify the storage format of the destination table, and then restart the DTS task.
For a sync task, you can use the Modify Synchronization Objects feature to remove the table that causes the error, and then restart the DTS task.
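Before you modify the table, you can check its storage format in the destination AnalyticDB for PostgreSQL (Greenplum) instance. The following statements are only a sketch: public.my_table is a placeholder table name, and the relstorage column exists in Greenplum 6 and earlier.

```sql
-- Check the storage format of the destination table (Greenplum 6 and earlier).
-- relstorage is 'a' for append-optimized, 'c' for AOCO, and 'h' for heap.
SELECT relname, relstorage
FROM pg_class
WHERE relname = 'my_table';

-- Recreate the table as a heap table so that ON CONFLICT is supported,
-- and copy the existing data into it.
CREATE TABLE public.my_table_heap (LIKE public.my_table) WITH (appendonly = false);
INSERT INTO public.my_table_heap SELECT * FROM public.my_table;
```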
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: full-greenplum: DTS-65708: copy error, recordRange=**** FROM STDIN DELIMITER '|' ESCAPE '\' CSV QUOTE '"' DO on conflict DO update PSQLException: ERROR: COPY with ON CONFLICT clause is not supported on AO/AOCO relation.
DTS-RETRY-ERR-0503: server closed the connection unexpectedly
Possible cause: The database is in an abnormal state, such as a server crash, resource exhaustion, or network issues. This prevents DTS from connecting to the database.
Solution: Make sure that the database is in a normal state with sufficient resources and that DTS can connect to it. Then, restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: full-greenplum: DTS-65708: copy error, recordRange={id:218227539994843****,schema:public.trade_order_status,indexName:,fields:[id BIGINT not nullable primary unique], **** FROM STDIN DELIMITER '|' ESCAPE '\' CSV QUOTE '"' DO on conflict DO update PSQLException: ERROR: Error on receive from seg73 172.31.XX.XX:XXX pid=16****: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. ****
DTS-RETRY-ERR-0504: PSQLException (.*)? could not open relation. (.*)? This can be validly caused by a concurrent delete operation on this object.
Possible cause: A concurrent operation, such as another transaction, deleted an object in the destination database. This caused its corresponding OID to be deleted.
Solution: Make sure that no other transactions are concurrently deleting the destination object, and then restart the DTS task. If the error persists after the restart, contact the technical support of AnalyticDB for PostgreSQL for assistance.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: full-greenplum: DTS-65708: copy error, recordRange= **** FROM STDIN DELIMITER '|' ESCAPE '\' CSV QUOTE '"' DO on conflict DO update PSQLException: ERROR: could not open relation with OID 358**** TRACE_ID: 63033089393406**** Detail: This can be validly caused by a concurrent delete operation on this object.
DTS-RETRY-ERR-0505: MPP detected (.*)? segment failures, system is reconnected
Possible cause: An exception occurred during massively parallel processing (MPP) in the destination database, which prevents data from being written.
Solution: Check the logs of the destination database to identify and fix the exception. Then, restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: full-greenplum: DTS-65708: copy error, recordRange={id:894369101000160****,schema:public.trade_order_header,indexName:,fields:[id BIGINT not nullable primary unique],leftValues:**** FROM STDIN DELIMITER '|' ESCAPE '\' CSV QUOTE '"' DO on conflict DO update PSQLException: ERROR: MPP detected 1 segment failures, system is reconnected TRACE_ID: ****
DTS-RETRY-ERR-0506: NotServingRegionException
Possible cause: The region partition of the destination Lindorm instance has changed.
Solution: Restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: BatchUpdateException: Got error RetriesExhaustedException from storage engine: Server side Error:{ location: ****, name: NotServingRegionException: 6times, ...unrecorded exceptions num=4} at com.alibaba.lindorm.server.ldserver.queryprocessor.ServerCaller.getRetriesExhaustedException(ServerCaller.java:253) at **** at com.alibaba.lindorm.sql.mysql.execution.executor.CommandExecutorTask.executeCommand(CommandExecutorTask.java:132) at com.alibaba.lindorm.sql.mysql.execution.executor.CommandExecutorTask.run(CommandExecutorTask.java:87) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:627) at java.lang.Thread.run(Thread.java:882) SQLException: Got error RetriesExhaustedException from storage engine: Server side Error:{ location: ****, name: NotServingRegionException: 6times, ...unrecorded exceptions num=4} at com.alibaba.lindorm.server.ldserver.queryprocessor.ServerCaller.getRetriesExhaustedException(ServerCaller.java:253) at ****
DTS-RETRY-ERR-0507: Not allowed to execute statement that may change data on polar slave
Possible cause: The destination database is a read-only node (slave) of a PolarDB for MySQL instance, which prevents data from being written.
Solution: Use the primary node of the PolarDB for MySQL instance as the destination database and reconfigure the DTS task.
Error example:
common(mysql-utils): DTS-10046: execute sql: CREATE TABLE if not exists `dts`.`pt****` ( `id` int(11) NOT NULL,`trx_count` bigint(20) NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 failed. Create TransactionTable failed. cause: SQLException: Not allowed to execute statement that may change data on polar slave
DTS-RETRY-ERR-0508: No route info for this topic
Possible cause | Solution |
The topic that receives data does not exist in the destination database. | Create the corresponding topic in the destination database, and then restart the DTS task. |
DTS does not support this type of RocketMQ instance. | Use a RocketMQ instance that is supported by DTS and reconfigure the DTS task. |
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: MQClientException: No route info for this topic, ads_rom_rtm_ord_of_**** For more information, please visit the url, ****
DTS-RETRY-ERR-0509: date/time field value out of range
Possible cause: The date format of the destination table is incompatible with the source table, which prevents data from being written.
Solution: Modify the date format of the destination table to match the date format of the data to be synchronized or migrated from the source database. Then, restart the DTS task.
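For a PostgreSQL-compatible destination, the error hint points at the datestyle setting. The statements below are only a sketch: mydb is a placeholder database name, and the value must match the date format of your source data.

```sql
-- Show the current date input/output style of the session.
SHOW datestyle;

-- Change the datestyle of the destination database, for example to ISO with
-- year-month-day ordering. New connections pick up the new setting.
ALTER DATABASE mydb SET datestyle = 'ISO, YMD';
```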
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: framework: DTS-30020: execute **** fetchSize = 1024 PSQLException: ERROR: date/time field value out of range: "0001-2023-00 00:00:00.0" Hint: Perhaps you need a different "datestyle" setting. Where: portal "C_1" parameter $1 = '...'
DTS-RETRY-ERR-0510: Error message not available
Possible cause: The destination MaxCompute project did not respond before the request timed out.
Solution: Make sure that the destination MaxCompute project is running normally and that DTS can connect to it. Then, restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: IOException: ErrorCode=Local Error, ErrorMessage=Error message not available TunnelException: ErrorCode=Local Error, ErrorMessage=Error message not available
DTS-RETRY-ERR-0511: Generated columns cannot be used in COPY
Possible cause: The table to be synchronized or migrated from the source database contains generated columns. The COPY operation is not supported for generated columns.
Solution: Modify the structure of the table, and then reconfigure the DTS task.
For a sync task, you can use the Modify Synchronization Objects feature to remove the table that causes the error, and then restart the DTS task.
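To locate the generated columns before you modify the table structure, you can query information_schema in a PostgreSQL-compatible source. This is only a sketch; public.my_table is a placeholder for your own table.

```sql
-- List the generated columns of a table. In PostgreSQL 12 and later,
-- is_generated is 'ALWAYS' for generated columns.
SELECT column_name, generation_expression
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name   = 'my_table'
  AND is_generated = 'ALWAYS';
```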
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: full-greenplum: DTS-65708: copy error, recordRange=**** BIGINT not nullable primary unique],leftValues:[null],rightValues:[null],partition:null}, copySql=**** FROM STDIN DELIMITER '|' ESCAPE '\' CSV QUOTE '"' DO on conflict DO update PSQLException: ERROR: column "computed_p****" is a generated column TRACE_ID: 489425612987842**** Detail: Generated columns cannot be used in COPY.
DTS-RETRY-ERR-0512: no partition of relation (.*)? found for row
Possible cause: The destination partitioned table is missing a corresponding partition, which prevents data from being written.
Solution: Check the definition of the partitioned table and make sure that a corresponding partition exists in the destination table to receive data. You can create a new partition or adjust the partitioning policy. Then, restart the DTS task.
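In a PostgreSQL-compatible destination, you can add the missing partition, or add a default partition that receives rows that match no other partition. The statements below are only a sketch: the table name orders and the bound values are placeholders.

```sql
-- Add a partition that covers the failing rows.
CREATE TABLE orders_2024 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- Or add a default partition to catch rows that match no other partition
-- (supported in PostgreSQL 11 and later).
CREATE TABLE orders_default PARTITION OF orders DEFAULT;
```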
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: full-greenplum: DTS-65708: copy error, recordRange={id:919367515335399****,schema:np_erpc.bms_lot_def,indexName:,fields:[],leftValues:[null],rightValues:[null],partition:null}, copySql=**** FROM STDIN DELIMITER '|' ESCAPE '\' CSV QUOTE '"' DO on conflict DO update PSQLException: ERROR: no partition of relation "bms_****" found for row TRACE_ID: 280630901049587**** TRACE_ID: 280630901049587**** Detail: Partition key of the failing row contains (invaliddate) =****
DTS-RETRY-ERR-0514: terminating connection due to conflict with recovery
Possible cause: The source database is in an abnormal state, such as a server crash, resource exhaustion, or network issues. This prevents DTS from connecting to the database.
Solution: Make sure that the source database is in a normal state with sufficient resources and that DTS can connect to it. Then, restart the DTS task.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: framework: DTS-30020: execute sql**** fetchSize = 1024 PSQLException: FATAL: terminating connection due to conflict with recovery Detail: User query might have needed to see row versions that must be removed.
DTS-RETRY-ERR-0515: xlog flush request (.*)? is not satisfied
Possible cause: The source database is in an abnormal state, such as a server crash, resource exhaustion, or network issues. This prevents DTS from connecting to the database.
Solution: Make sure that the source database is in a normal state with sufficient resources and that DTS can connect to it. Then, restart the DTS task.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: framework: DTS-30020: execute sql**** fetchSize = 1024 PSQLException: ERROR: xlog flush request 9FA0/858A**** is not satisfied --- flushed only to 9F92/FF1B**** Where: writing block 433758 of relation ****
DTS-RETRY-ERR-0516: missing chunk number (.*)? for toast value (.*)? in pg_toast_(.*)?
Possible cause: The source database is in an abnormal state, such as a server crash, resource exhaustion, or network issues. This prevents DTS from connecting to the database.
Solution: Make sure that the source database is in a normal state with sufficient resources and that DTS can connect to it. Then, restart the DTS task.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: framework: DTS-30020: execute sql:**** fetchSize = 1024 PSQLException: ERROR: missing chunk number 0 for toast value 1181**** in pg_toast_1****
DTS-RETRY-ERR-0517: could not read block (.*)? in file
Possible cause: The source database is in an abnormal state, such as a server crash, resource exhaustion, or network issues. This prevents DTS from connecting to the database.
Solution: Make sure that the source database is in a normal state with sufficient resources and that DTS can connect to it. Then, restart the DTS task.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: framework: DTS-30020: execute sql:**** fetchSize = 1024 PSQLException: ERROR: could not read block 601620 in file "pg_tbl****": read only 0 of 8192 bytes
DTS-RETRY-ERR-0518: Canceling query because of high VMEM usage.
Possible cause: The source database is in an abnormal state, such as a server crash, resource exhaustion, or network issues. This prevents DTS from connecting to the database.
Solution: Make sure that the source database is in a normal state with sufficient resources and that DTS can connect to it. Then, restart the DTS task.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: framework: DTS-31009: read source data error PSQLException: ERROR: Canceling query because of high VMEM usage. Used: 413MB, available 571MB, red zone: 5160MB (runaway_cleaner.c:200) ****
DTS-RETRY-ERR-0519: PSQLException: ERROR: lock not available
Possible cause: A deadlock exists in the source database, which prevents DTS from reading data.
Solution: Make sure that the source database is in a normal state without deadlocks and that DTS can connect to it. Then, restart the DTS task.
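To find the sessions that hold or wait for conflicting locks in a PostgreSQL-compatible source, you can join pg_locks with pg_stat_activity. This is a diagnostic sketch only; terminate blocking sessions with care.

```sql
-- Show sessions that are waiting for a lock and the query they are running.
SELECT a.pid, a.usename, a.query, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;

-- If necessary, terminate the blocking session (replace 12345 with its pid):
-- SELECT pg_terminate_backend(12345);
```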
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: framework: DTS-31009: read source data error PSQLException: ERROR: lock not available
DTS-RETRY-ERR-0520: ERR DB index is out of range
Possible cause: The DB that receives data does not exist in the destination Redis instance.
Solution: Reconfigure the DTS task and use the Schema, Table, and Column Name Mapping feature to map the data to an existing DB in the destination Redis instance.
For a sync task, you can try to use the Modify Synchronization Objects feature to remove the DB that causes the error. Then, use the Modify Synchronization Objects and Schema, Table, and Column Name Mapping features to add the removed DB back to the synchronization objects and map it.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: JedisDataException: ERR DB index is out of range
DTS-RETRY-ERR-0521: ERR DUMP payload version or checksum are wrong
Possible cause: The version of the source Redis instance is significantly different from that of the destination Redis instance. You cannot migrate only the full data of the source Redis instance.
Solution: Reconfigure the DTS task to include both full and incremental tasks.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: JedisDataException: ERR DUMP payload version or checksum are wrong
DTS-RETRY-ERR-0522: Aggregation stage not supported (.*)? changeStream
Possible cause: The Migration Method is set to ChangeStream, but Change Streams is not enabled for the source MongoDB database.
Solution: Enable Change Streams for the MongoDB database, and then reconfigure the DTS task.
If Oplog is enabled for the MongoDB database, you can reconfigure the DTS task and set the Migration Method to Oplog.
Error example:
DTS-52700: Command failed with error 304: 'Aggregation stage not supported: '$changeStream'' on server mongo.overseauat.starcharge.com:6669. The full response is {"ok": 0.0, "code": 304, "errmsg": "Aggregation stage not supported: '$changeStream'", "operationTime": **** cause: MongoCommandException: Command failed with error 304: 'Aggregation stage not supported: '$changeStream'' on server mongo.overseauat.starcharge.com:6669. The full response is {"ok": 0.0, "code": 304, "errmsg": "Aggregation stage not supported: '$changeStream'", "operationTime":****
DTS-RETRY-ERR-0523: Timed out after
Possible cause: The network communication between DTS and the database timed out.
Solution: Make sure that the database is in a normal state and that DTS can connect to it. Check the connection settings, the IP address whitelist, and the username and password. Then, restart the DTS task.
Error example:
DTS-52700: Timed out after 3600000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=REPLICA_SET, servers=**** cause: MongoTimeoutException: Timed out after 3600000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is ****
DTS-RETRY-ERR-0524: Invalid object name
Possible cause | Solution |
The destination table does not exist. | Make sure that the destination database is in a normal state and can be connected, and that the tables that receive data exist in the destination database. Then, restart the DTS task. |
The destination database account used by the DTS task has insufficient permissions. | Make sure that the destination database account used by the DTS task has sufficient permissions. Then, restart the DTS task. |
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: CriticalAnyAllException: sqlserver-reader: DTS-52340: Serial transaction log failed SQLServerException: Invalid object name ****
DTS-RETRY-ERR-0525: Not support fetch backup log MissActiveLogException: ERROR_MISS_TRANSACTION_LOG
Possible cause: The backup logs of the source database cannot be read.
Solution: Contact the administrator (DBA) of the source database to handle the issue, and then reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: RecoverableAnyAllException: dts-k-src: DTS-52110: SQLServerRecordExtractor Init Error: sqlserver-reader: DTS-52061: Failed to seek sqlserver position CriticalAnyAllException: sqlserver-reader: DTS-52061: Failed to seek sqlserver position CriticalAnyAllException: sqlserver-reader: DTS-52411: Not support fetch backup log MissActiveLogException: ERROR_MISS_TRANSACTION_LOG(timestamp:174184****)
DTS-RETRY-ERR-0526: Miss backup log
Possible cause: The backup log of the source database has been purged.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
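To check which backups are still available on the source SQL Server instance, the DBA can query the backup history in msdb. This is a diagnostic sketch only.

```sql
-- List the most recent transaction log backups (type = 'L') of each database.
SELECT TOP (20) database_name, backup_start_date, backup_finish_date, type
FROM msdb.dbo.backupset
WHERE type = 'L'
ORDER BY backup_finish_date DESC;
```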
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: RecoverableAnyAllException: dts-k-src: DTS-52110: SQLServerRecordExtractor Init Error: sqlserver-reader: DTS-52061: Failed to seek sqlserver position CriticalAnyAllException: sqlserver-reader: DTS-52061: Failed to seek sqlserver position CriticalAnyAllException: sqlserver-reader: DTS-52412: Miss backup log
DTS-RETRY-ERR-0527: is invalid as a backup device name for the specified device type. Reissue the BACKUP statement with a valid file name and device type
Possible cause: The backup log of the source database has been purged.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-52110: SQLServerRecordExtractor Init Error: sqlserver-reader: DTS-52061: Failed to seek sqlserver position cause: CriticalAnyAllException: sqlserver-reader: DTS-52061: Failed to seek sqlserver position CriticalAnyAllException: sqlserver-reader: DTS--0001: execute sql select top(1) [Current LSN], Operation, Context, CAST([Begin Time] as DATETIME2) from [CarlsbergReport].[sys].[fn_dump_dblog]**** where [Begin Time] is not null failed, case by common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds SQLServerException: The file name "****" is invalid as a backup device name for the specified device type. Reissue the BACKUP statement with a valid file name and device type.
DTS-RETRY-ERR-0528: Cannot replicate because the master purged required binary logs
Possible cause: The binary logging of the source database has been purged.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: Cannot replicate because the master purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new slave from backup. Consider increasing the master's binary log expiration period. The GTID sets and the missing purged transactions are too long to print in this message. For more information, please see the master's error log or the manual for GTID_SUBTRACT SQLException: Cannot replicate because the master purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new slave from backup. Consider increasing the master's binary log expiration period. The GTID sets and the missing purged transactions are too long to print in this message. For more information, please see the master's error log or the manual for GTID_SUBTRACT CriticalAnyAllException: mysql-reader: DTS-52200: process error failed IOException: java.sql.SQLException: Cannot replicate because the master purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new slave from backup. Consider increasing the master's binary log expiration period. The GTID sets and the missing purged transactions are too long to print in this message. For more information, please see the master's error log or the manual for GTID_SUBTRACT SQLException: Cannot replicate because the master purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new slave from backup. Consider increasing the master's binary log expiration period. The GTID sets and the missing purged transactions are too long to print in this message. 
For more information, please see the master's error log or the manual for GTID_SUBTRACT
DTS-RETRY-ERR-0529: Cannot replicate because the source purged required binary logs
Possible cause: The binary logging of the source database has been purged.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: Cannot replicate because the source purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new replica from backup. Consider increasing the source's binary log expiration period. The GTID set sent by the replica is **** and the missing transactions are **** SQLException: Cannot replicate because the source purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new replica from backup. Consider increasing the source's binary log expiration period. ****
DTS-RETRY-ERR-0530: Client requested master to start replication from position > file size
Possible cause: The binary log of the source database is abnormal, which prevents DTS from parsing it.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: Client requested master to start replication from position > file size SQLException: Client requested master to start replication from position > file size CriticalAnyAllException: mysql-reader: DTS-52200: process error failed IOException: java.sql.SQLException: Client requested master to start replication from position > file size SQLException: Client requested master to start replication from position > file size
DTS-RETRY-ERR-0531: java.sql.SQLException: could not find next log
Possible cause: The binary log of the source database is abnormal, which prevents DTS from parsing it.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: could not find next log; the first event '' at 4, the last event read from ****
DTS-RETRY-ERR-0532: Could not open log file
Possible cause: The binary log of the source database is abnormal, which prevents DTS from parsing it.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: Could not open log file SQLException: Could not open log file CriticalAnyAllException: mysql-reader: DTS-52200: process error failed IOException: java.sql.SQLException: Could not open log file SQLException: Could not open log file
DTS-RETRY-ERR-0533: mysql row image valid failed, miss
Possible cause: The binary log of the source database is abnormal, which prevents DTS from parsing it.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52230: mysql-reader: DTS-52230: mysql row image valid failed, miss 2, event: Header:**** CriticalAnyAllException: mysql-reader: DTS-52230: mysql-reader: DTS-52230: mysql row image valid failed, miss 2, event: Header:**** CriticalAnyAllException: mysql-reader: DTS-52230: mysql row image valid failed, miss 2, event: Header:****
DTS-RETRY-ERR-0534: Valid type fail
Possible cause: The binary log of the source database is abnormal, which prevents DTS from parsing it.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52230: Valid type fail, Column:planOrdinal,Type:5 cause: IllegalArgumentException: Valid type fail, Column:planOrdinal,Type:5 CriticalAnyAllException: mysql-reader: DTS-52230: Valid type fail, Column:planOrdinal,Type:5 IllegalArgumentException: Valid type fail, Column:planOrdinal,Type:5
DTS-RETRY-ERR-0535: Transaction entry load failed FileNotFoundException
Possible cause: An exception occurred when DTS was processing a large transaction.
Solution: Restart the DTS task. If the issue persists, contact Alibaba Cloud technical support.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: CriticalAnyAllException: oracle-reader: DTS-52330: Transaction entry merge failed CriticalAnyAllException: oracle-reader: DTS-52330: Transaction entry load failed FileNotFoundException: ****
DTS-RETRY-ERR-0536: Binary log is not open
Possible cause: Binary logging is not enabled for the source database.
Solution: Enable and correctly configure binary logging for the source database, and then restart the DTS task.
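For a self-managed MySQL source, you can verify the binary logging configuration with the following statements. Enabling binary logging itself requires setting log_bin in the server configuration file and restarting MySQL; the statements below only check the current state.

```sql
-- Check whether binary logging is enabled and which format is used.
-- DTS requires log_bin = ON and binlog_format = ROW.
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'binlog_row_image';
```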
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: Binary log is not open SQLException: Binary log is not open CriticalAnyAllException: mysql-reader: DTS-52200: process error failed IOException: java.sql.SQLException: Binary log is not open SQLException: Binary log is not open
DTS-RETRY-ERR-0537: Database connection failed when reading from copy PSQLException: Database connection failed when reading from copy EOFException: null
Possible cause: The native logical replication protocol of PostgreSQL uses a heartbeat mechanism: if the server does not receive a heartbeat within the wal_sender_timeout period, it closes the connection. When a large number of transactions that do not need to be synchronized are filtered out, the server may not process the heartbeat in time. Because the server only checks whether the time since the last heartbeat exceeds wal_sender_timeout, it considers the downstream connection interrupted even if the downstream has already sent a heartbeat. From the perspective of the downstream (DTS), the connection is unexpectedly interrupted during normal operation. To confirm that this is the cause, check whether the error log of the source database contains "terminating walsender process due to replication timeout".
This issue is fixed in PostgreSQL 15. For more information about the fix, see Fix for unexpected timeout error.
Solution: Run the following commands in the source database to set the wal_sender_timeout parameter to 0, which disables the timeout and works around this issue. Then, restart the DTS task.
ALTER SYSTEM SET wal_sender_timeout = '0';
SELECT pg_reload_conf();
Error example:
DTS-52111: Increment Context Is Not Running..: cause: CriticalAnyAllException: postgresql-reader: DTS-52510: Fetch postgresql logical log failed FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds IOException: org.postgresql.util.PSQLException: Database connection failed when reading from copy PSQLException: Database connection failed when reading from copy EOFException: null
DTS-RETRY-ERR-0538: [polardbx]please try later
Possible cause: The CDC feature of the source database is abnormal.
Solution:
Method 1: Contact the administrator (DBA) of the source database to resolve the issue, and then restart the DTS task.
Method 2: Reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: [19a7407da540****][10.32.XX.XX:XXX][polardbx]please try later... SQLException: [19a7407da540****][10.32.XX.XX:XXX][polardbx]please try later... CriticalAnyAllException: mysql-reader: DTS-52200: process error failed IOException: java.sql.SQLException: ****
DTS-RETRY-ERR-0539: canceling statement due to user request
Possible cause: The SQL statement executed by DTS in the database timed out.
Solution:
Method 1: Process long-running transactions that have not been committed for more than 5 minutes, and then restart the DTS instance.
Method 2: Restart the DTS instance during off-peak hours.
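To locate the long-running transactions mentioned in Method 1, a query along the following lines can be run on the source PostgreSQL database. This is a sketch that uses the standard pg_stat_activity system view:

```sql
-- List sessions whose current transaction has been open for more than 5 minutes.
SELECT pid, usename, state, xact_start, now() - xact_start AS duration
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '5 minutes'
ORDER BY xact_start;
```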
Error example:
DTS-52110: PostgresRecordExtractor Init Error: postgresql-reader: DTS-52511: alter replica identity thread exception, readerContext=ReaderContext [****] cause: CriticalAnyAllException: postgresql-reader: DTS-52511: alter replica identity thread exception ExecutionException: postgresql-reader: DTS--0001: alter table identity: "****"/d failed, case by common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds PSQLException: ERROR: canceling statement due to user request CriticalAnyAllException: postgresql-reader: DTS--0001: alter table identity: "****"/d failed, case by common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds PSQLException: ERROR: canceling statement due to user request
DTS-RETRY-ERR-0541: Only tables can be added to publications
Possible cause: The synchronization or migration object contains a foreign table.
Solution: Reconfigure the DTS task and make sure that the task object does not contain foreign tables.
For a sync task, you can use the Modify Synchronization Objects feature to remove the foreign table.
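To identify the foreign tables before you reconfigure the task, you can query the standard information_schema view on the source PostgreSQL database:

```sql
-- List all foreign tables so that they can be excluded from the task objects.
SELECT foreign_table_schema, foreign_table_name
FROM information_schema.foreign_tables;
```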
Error example:
DTS-52111: Increment Context Is Not Running..: cause: CriticalAnyAllException: postgresql-reader: DTS-52510: Fetch postgresql logical log failed CriticalAnyAllException: postgresql-reader: DTS-52514: create publication error for dts_sync_h55r44gha27**** CriticalAnyAllException: postgresql-reader: DTS--0001: execute sql alter publication "****" add table "public"."****" failed, case by common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds PSQLException: ERROR: "****" is not a table Detail: Only tables can be added to publications.
DTS-RETRY-ERR-0542: SSLProtocolException: Received fatal alert: unexpected_message
Possible cause: The network of the source database is unstable or has changed, which causes an exception in the connection between DTS and the source database.
Solution: Ensure that DTS can connect to and read data from the source database normally, and then restart the DTS task.
Error example:
DTS-52111: Increment Context Is Not Running..: cause: CriticalAnyAllException: postgresql-reader: DTS-52510: Fetch postgresql logical log failed FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds IOException: org.postgresql.util.PSQLException: Database connection failed when reading from copy PSQLException: Database connection failed when reading from copy SSLProtocolException: Received fatal alert: unexpected_message
DTS-RETRY-ERR-0543: seek timestamp for topic (.*)? with timestamp (.*)? failed NoSuchElementException: null
Possible cause: The incremental data of the TiDB database cannot be copied to the partition with an ID of 0 in the destination topic, or the offset data stored in this partition is abnormal.
Solution: Make sure that the destination topic correctly stores the incremental data from TiDB (the partition with an ID of 0 contains the corresponding incremental data and the offset data is valid), and then restart the DTS task. If the issue persists, contact Alibaba Cloud technical support.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds RuntimeException: seek timestamp for topic [****] with timestamp [174186187****] failed NoSuchElementException: null
DTS-RETRY-ERR-0544: Can’t find primary key info for target table
Possible cause: The destination table is missing a primary key.
Solution: Add a primary key to the destination table based on the error message and the structure of the source table. Then, restart the DTS task.
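For example, if the destination is MySQL-compatible, a primary key can be added with a statement such as the following. The table and column names are placeholders; use the names from the error message and the source table definition:

```sql
-- Hypothetical example: add a primary key that matches the source table.
ALTER TABLE `my_table` ADD PRIMARY KEY (`id`);
```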
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: replicate-adb30: DTS-70020: Can’t find primary key info for target table: `****`
DTS-RETRY-ERR-0545: Could not create connection to database server
Possible cause: DTS cannot connect to the destination database.
Solution: Make sure that the destination database is in a normal state and that DTS can connect to it, and then restart the DTS task.
Error example:
Connect Target DB failed. cause by [Could not create connection to database server. Attempted reconnect 1 times. Giving up.] About more information in ****
DTS-RETRY-ERR-0546: out of range for decimal
Possible cause: The data cannot be written to the destination database because the field type of the destination database is incompatible with that of the source database.
Solution: Modify the field type of the destination table based on the type of the field to be synchronized or migrated from the source database. Then, restart the DTS task.
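For example, if the destination is MySQL-compatible and a value overflows a DECIMAL(10, 2) column, the column can be widened with a statement such as the following. The table name, column name, and precision are placeholders:

```sql
-- Hypothetical example: widen the destination column so that larger values fit.
ALTER TABLE `my_table` MODIFY COLUMN `my_column` DECIMAL(20, 2);
```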
Error example:
DTS-077100: Record Replicator error in table ****.****. cause by [java.sql.SQLException: [15018, 2025042117135301001****] (Column => VIP_MEMBER_LEVEL_INDICATOR.INDEX_VALUE), Value[10000****.36] out of range for decimal(10, 2)] About more information in ****
DTS-RETRY-ERR-0547: Can't find any matching password for myadmin
Possible cause: The account or password of the destination database used by the DTS task is incorrect.
Solution:
Database password error: Go to the Basic Information page in the DTS instance console. In the Destination area, click Change Password, enter the correct database password, and then restart the DTS task.
Database account error: Use the correct database account and password to reconfigure the DTS task.
Error example:
java.sql.SQLException: [10000, 2025032923342501002000****] Can't find any matching password for myadmin
DTS-RETRY-ERR-0548: Throwable occurs in QueueStorage
Possible cause: The destination AnalyticDB for MySQL cluster is abnormal, which prevents data from being written to the destination database.
Solution: Contact the technical support of AnalyticDB for MySQL to handle the issue, and then restart the DTS task.
Error example:
DTS-077501: Target Instance error in table ****.****, Please contact ADS support. cause by [java.sql.SQLException: Error Code:60008, RealtimeDataFailedException:Insert failed with 1 data records: [partition=9] Throwable occurs in QueueStorage: null] About more information in ****
DTS-RETRY-ERR-0549: database (.*)? does not exist
Possible cause: The destination database does not exist, which prevents data from being written.
Solution: Manually create the destination database based on the error message, and then restart the DTS task.
Alternatively, reconfigure the DTS task and use DTS to synchronize or migrate the schema. For a sync task, select Schema Synchronization for Synchronization Types. For a migration task, select Schema Migration for Migration Types.
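If you create the database manually, a minimal PostgreSQL statement looks like the following. The database name and owner are placeholders; use the name from the error message:

```sql
-- Hypothetical example: create the missing destination database.
CREATE DATABASE my_database OWNER my_user;
```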
Error example:
org.postgresql.util.PSQLException: FATAL: database "****" does not exist
DTS-RETRY-ERR-0550: The connection attempt failed
Possible cause: DTS cannot connect to the database.
Solution: Make sure that the database instance is in a normal state and that DTS can connect to it. Check the database connection settings, whitelist settings, and the correctness of the account and password. Then, restart the DTS task.
Error example:
org.postgresql.util.PSQLException: The connection attempt failed.
DTS-RETRY-ERR-0551: RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server
Possible cause: The size of a single message written to Kafka exceeds the maximum value set for the destination Kafka.
Solution:
Sync task: Identify the large object corresponding to the message, and then use the Modify Synchronization Objects feature to remove or filter the large object.
Migration task: Identify the large object corresponding to the message, and then reconfigure the migration task to filter the large object from the objects to be migrated.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: ExecutionException: com.****.****.common.errors.RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server. RecordBatchTooLargeException: The request included message batch larger than the configured segment size on the server.
DTS-RETRY-ERR-0553: was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction
Possible cause: A deadlock exists in the destination database, which prevents data from being written.
Solution: Contact the database administrator (DBA) to restore the database, and then restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: TransactionReplicateException: transaction-replicate: DTS-70003: 2 execute transaction has excess max transaction retry time [64] cause:transaction-replicate: DTS-70004: EXEC statement failed, may try it again RecoverableAnyAllException: transaction-replicate: DTS-70004: EXEC statement failed, may try it again SQLServerException: Transaction (Process ID 73) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
DTS-RETRY-ERR-0554: No operations allowed after connection closed
Possible cause: The database connection has been closed, preventing DTS from connecting to the database instance.
Solution: Based on the error message and database logs, troubleshoot and address the issue, and then restart the DTS task.
Error example:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: No operations allowed after connection closed.
DTS-RETRY-ERR-0555: The server is currently in offline mode
Possible cause: The database is in offline mode and cannot process requests.
Solution: Contact the database administrator (DBA) to restore the database, and then restart the DTS task.
Error example:
java.sql.SQLException: The server is currently in offline mode
DTS-RETRY-ERR-0556: maximum (.*)? partitions allowed, or more than (.*)? partitions created
Possible cause: When DTS writes data to the destination database, the number of partitions in the destination table exceeds the limit.
Solution: Handle the issue based on the error message, and then restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: RecoverableAnyAllException: transaction-replicate: DTS-70004: execute upload failed, may try it again TransactionReplicateException: transaction-replicate: DTS-70003: 3 execute transaction has excess max transaction retry time [100] cause:Catalog Service Failed. ErrorCode:105. ErrorMessage:ODPS-0110061: Failed to run ddltask - Modify DDL meta encounter exception : ODPS-0123031:ODPS partition exception - maximum 60000 partitions allowed, or more than 10240 partitions created OdpsException: Catalog Service Failed. ErrorCode:105. ErrorMessage:ODPS-0110061: Failed to run ddltask - Modify DDL meta encounter exception : ODPS-0123031:ODPS partition exception - maximum 60000 partitions allowed, or more than 10240 partitions created
DTS-RETRY-ERR-0557: RuntimeException (.*)? not support
Possible cause: An exception occurred when DTS executed an SQL command in the database.
Solution: Check the status and logs of the database and fix the issue. Then, restart the DTS task.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds RuntimeException: not support
DTS-RETRY-ERR-0558: The server has been locked
Possible cause: The database instance is locked.
Solution: Contact the database administrator (DBA) to restore the database, and then restart the DTS task.
Error example:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: The server has been locked
DTS-RETRY-ERR-0559: missing column (.*)? in source record
Possible cause: Due to the limitations of the database, DTS cannot capture some DDL operations from the source database. For example, the operation of deleting a column in the source database is not synchronized to the destination database. This causes a mismatch between the columns of the source and destination databases when subsequent incremental changes are synchronized or migrated.
Solution: If your business permits, clear the data that has been synchronized or migrated by DTS in the destination database, and then reconfigure the DTS task.
For a sync task, you can try to use the Modify Synchronization Objects feature to remove the table that causes the error. Then, clear the data that has been synchronized or migrated by DTS from the table in the destination database. After that, use the Modify Synchronization Objects feature to add the removed table back to the synchronization objects.
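To find the column mismatch, you can compare the column lists of the source and destination tables. For a PostgreSQL database, a query such as the following (using the standard information_schema view; the schema and table names are placeholders) can be run on each side:

```sql
-- List the columns of a table in definition order.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = 'my_table'
ORDER BY ordinal_position;
```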
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: replicate-postgresql: DTS-70029: missing column was_control in source record
DTS-RETRY-ERR-0560: Syntax parse error in condition, conditionSQL
Possible cause: The filter conditions configured in the DTS instance are incorrect.
Solution:
Sync instance: Correct the filter clauses, and then restart the DTS task. For more information, see What to do next.
Migration instance: Restart the DTS task. If the instance cannot be recovered, reconfigure the DTS task.
If the issue persists, contact Alibaba Cloud technical support.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds CriticalAnyAllException: capture-dstore: DTS-51004: Syntax parse error in condition, conditionSQL:****
DTS-RETRY-ERR-0561: DbException: Table (.*)? not found
Possible cause: The filter conditions configured in the DTS instance are incorrect.
Solution:
Sync instance: Correct the filter clauses, and then restart the DTS task. For more information, see What to do next.
Migration instance: Restart the DTS task. If the instance cannot be recovered, reconfigure the DTS task.
If the issue persists, contact Alibaba Cloud technical support.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds DbException: Table "****" not found [42102-193] JdbcSQLException: ****
DTS-RETRY-ERR-0562: ERR syntax error
Possible cause: The versions of the source and destination Redis instances are incompatible.
Solution: Check the versions of the source and destination Redis instances and handle the issue to make them compatible, for example, by performing an upgrade. Then, restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: JedisDataException: ERR syntax error
DTS-RETRY-ERR-0563: No enum constant com.alibaba.amp.any.redis.writer.ExtendRedisCommand.
Possible cause: DTS cannot parse the execution commands in the source Redis instance.
Solution: Restart the DTS task. If the issue persists, contact Alibaba Cloud technical support.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds IllegalArgumentException: No enum constant com.alibaba.amp.any.redis.writer.ExtendRedisCommand.FLUSHDB_XSJ
DTS-RETRY-ERR-0564: full replication not allowed in INC mode
Possible cause: DTS cannot capture the incremental data of the remote sync task.
Solution: Reset and reconfigure the remote sync task.
Error example:
DTS-31009: In process of processing data (recordRange: 123456) failed cause: FatalAnyAllException: common: DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds RedisServerException: redis: DTS-10015: redis: DTS-10009: redis: DTS-10013: full replication not allowed in INC mode. RedisCannotRetryException: redis: DTS-10009: redis: DTS-10013: full replication not allowed in INC mode. RedisReFullSyncException: redis: DTS-10013: full replication not allowed in INC mode.
DTS-RETRY-ERR-0565: TableStoreException: The AccessKeyID is disabled
Possible cause: For a task whose destination database is Tablestore, the configured AccessKey is disabled.
Solution: Re-enable the AccessKey and then restart the DTS task, or reconfigure the DTS task with a valid AccessKey.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: ClientException: [ErrorCode]:OTSAuthFailed, [Message]:The AccessKeyID is disabled., [RequestId]:00063****-50eb-a71e-c97a-261a0481****, [TraceId]:092f****-e3fa-8066-efc4-0c1ce00****, [HttpStatus:]403 TableStoreException: The AccessKeyID is disabled. TableStoreException: The AccessKeyID is disabled.
DTS-RETRY-ERR-0566: TableStoreException: You have no permission to access the requested resource, please contact the resource owner.
Possible cause: For a task whose destination database is Tablestore, the configured AccessKey has insufficient permissions.
Solution: Make sure that the Alibaba Cloud account to which the AccessKey belongs has the write permissions on Tablestore. Then, restart the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: ClientException: [ErrorCode]:OTSNoPermissionAccess, [Message]:You have no permission to access the requested resource, please contact the resource owner., [RequestId]:0006****-65b5-b7da-c97a-261a112****, [TraceId]:f2a6****-ce44-d9bf-e4bf-205b7d92****, [HttpStatus:]403 TableStoreException: You have no permission to access the requested resource, please contact the resource owner. TableStoreException: You have no permission to access the requested resource, please contact the resource owner.
DTS-RETRY-ERR-0567: Unable to parse the date
Possible cause: The object to be synchronized or migrated in the source database contains data in an invalid date format. This prevents DTS from converting the data into an integer or a valid date format.
Solution: Modify the data in the source database based on the error message and data format requirements, and then reconfigure the DTS task.
Error example:
DTS-100047: retry 1 times, 43200 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: tablestore: DTS-11001: Failed to disperse value 0000-00-00 00:00:00.0 of type {typeName:UNKNOWN, typeId:12, isLobType:false,}tablestore: DTS-11001: Can not convert value: 0000-00-00 00:00:00.0 to Integer.java.text.ParseException: Unable to parse the date: 0000-00-00 00:00:00.0
DTS-RETRY-ERR-0569: 503 Service Temporarily Unavailable
Possible cause: A node in the destination ApsaraDB for ClickHouse cluster is in an abnormal health state or is temporarily unavailable, which makes the service inaccessible.
Solution: Check the health status of the destination ClickHouse instance to make sure that all nodes are running as expected. After the issue is resolved, restart the DTS task.
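As a quick check, the built-in system.clusters table of ClickHouse reports per-node error counters, which can help identify an unhealthy node:

```sql
-- Inspect node addresses and recent error counts in the ClickHouse cluster.
SELECT host_name, host_address, errors_count
FROM system.clusters;
```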
Error example:
java.sql.BatchUpdateException: <center><h1>503 Service Temporarily Unavailable</h1></center> <hr/><center>nginx</center> , server ClickHouseNode [uri=http://10.xxx.xxx.xxx:xxx/default]@-477967xxx
DTS-RETRY-ERR-0570: Foreign key constraint violation
Possible cause: A foreign key constraint was violated when data was being written to the destination PostgreSQL database. This usually occurs because the destination database still has usage limits and the session_replication_role parameter is not set to replica for the synchronization account. As a result, foreign key constraints remain active during data import.
Solution: Set the session_replication_role parameter for the destination database account that is used for data synchronization to temporarily disable foreign key constraints. After the issue is resolved, restart the DTS task.
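For example, the parameter can be set persistently for the account that DTS uses to write to the destination database. The account name is a placeholder:

```sql
-- Hypothetical example: disable foreign key triggers for the DTS account.
ALTER ROLE dts_user SET session_replication_role = 'replica';
```

The setting takes effect for new connections opened by that role.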
Error example:
common: DTS-100047: retry 60 times, 601 seconds, which exceed the supposed 600 seconds cause: CriticalAnyAllException: framework: DTS-30011: currentRunningSQL: /* DTS-full-q7bj8hxd128xxx */insert into "public"."ApprovalFlow"(xxx) values (xxx) on conflict do nothing,currentRow:xxx,
reason: Batch entry 0 /* DTS-full-q7bj8hxd128xxx */insert into "public"."ApprovalFlow"(xxx) values (xxx) on conflict do nothing was aborted: ERROR: insert or update on table "ApprovalFlow" violates foreign key constraint "FK_ApprovalFlow_User_CreateUserId" Detail: Key (CreateUserId)=(8219xxxx-e9eb-41d8-9fa8-40cee856xxxx) is not present in table "User".
DTS-RETRY-ERR-0571: Cannot connect to an availability database because the database replica is not in the PRIMARY or SECONDARY role. Connections to an availability database are permitted only when the database replica is in the PRIMARY or SECONDARY role
Possible cause: The role of the SQL Server availability database replica that DTS attempted to connect to is not PRIMARY or SECONDARY. This caused the connection to be rejected.
Solution: Check your SQL Server AlwaysOn High Availability Group configuration to make sure that the database replica to which DTS connects is in the PRIMARY or SECONDARY role. After the database status is fixed, restart the DTS task.
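To check the current role of each replica, you can run a query such as the following on the SQL Server instance that DTS connects to (both views are standard AlwaysOn dynamic management views):

```sql
-- Show the role and connection state of each availability replica.
SELECT r.replica_server_name, s.role_desc, s.connected_state_desc
FROM sys.dm_hadr_availability_replica_states AS s
JOIN sys.availability_replicas AS r
  ON s.replica_id = r.replica_id;
```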
Error example:
com.microsoft.sqlserver.jdbc.SQLServerException: Unable to access availability database "SIT_MSCRM" because the database replica is not in the PRIMARY or SECONDARY role. Connections to an availability database are permitted only when the database replica is in the PRIMARY or SECONDARY role.
DTS-RETRY-ERR-0572: Access Mode is AllDenied
Possible cause: The account used for data synchronization does not have the odps:Describe permission on the destination MaxCompute (formerly ODPS) table.
Solution: Grant the required permissions to the RAM user or role that is configured for the DTS task to access the destination MaxCompute table. After you grant the permissions, restart the DTS task.
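For example, the privilege can be granted in the MaxCompute project with a statement such as the following. The table name and account are placeholders; use the table from the error message:

```sql
-- Hypothetical example: grant the Describe privilege on the destination table.
GRANT Describe ON TABLE my_table TO USER ALIYUN$dts_account@example.com;
```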
Error example:
DTS-077000: Record Replicator Unknown error. cause by [ODPS-0420095: Access Denied - Authorization Failed [4093], You have NO privilege 'odps:Describe' on {acs:odps:*:projects/xxx/tables/xxx}. Access Mode is AllDenied
DTS-RETRY-ERR-0573: An error occurred when accessing your server. Please retry or report your issues.
Possible cause: An unknown exception occurred when DTS accessed the source MySQL database. This may be caused by network fluctuations, transient database unavailability, or configuration issues.
Solution: Check the running status, network connectivity, and firewall settings of the source MySQL database to make sure that the DTS server can access the database. After you confirm that everything is normal, restart the DTS task.
Error example:
java.sql.SQLException: An error occurred when accessing your server. Please retry or report your issues.
DTS-RETRY-ERR-0574: At most (.*)? new range partitions can be automatically created at once for interval partitioning
Possible cause: The table in the destination database, such as PolarDB, uses interval partitioning. During data writing, the number of partitions to be automatically created at a time exceeds the upper limit set for the database.
Solution: Adjust the relevant parameters of the destination database to increase the maximum number of partitions that can be automatically created at a time, or manually create the required partitions in advance. After you modify the database configuration, restart the DTS task.
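If you pre-create the partitions, a MySQL-style range-partitioned table can be extended with a statement such as the following. The table name, partition name, and boundary are placeholders, and the exact syntax depends on the destination database:

```sql
-- Hypothetical example: create an upcoming range partition in advance.
ALTER TABLE my_table ADD PARTITION (
  PARTITION p20250102 VALUES LESS THAN (TO_DAYS('2025-01-02'))
);
```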
Error example:
DTS-30011: currentRunningSQL: /* DTS-full-v0cg6k8717xxxx */insert ignore into `xxxx`.`xxxx`() VALUES (),currentRow:xxx,
reason: At most xxx new range partitions can be automatically created at once for interval partitioning. cause: BatchUpdateException: At most 30 new range partitions can be automatically created at once for interval partitioning. SQLException: At most xxx new range partitions can be automatically created at once for interval partitioning.
DTS-RETRY-ERR-0575: Authentication Failed For (.*)? maybe username or password is incorrect
Possible cause: The username or password used by DTS to connect to the source database is incorrect, which caused the identity verification to fail.
Solution: Check the source database connection information configured in the DTS task to confirm whether the username and password are correct. Pay special attention to whether the database password has been recently modified. After you update the connection information, restart the DTS task.
Error example:
java.sql.SQLException: Authentication Failed For RDS maybe username or password is incorrect
DTS-RETRY-ERR-0576: because the database replica is not in the PRIMARY or SECONDARY role. Connections to an availability database is permitted only when the database replica is in the PRIMARY or SECONDARY role
Possible cause: The role of the SQL Server availability database replica that DTS attempted to connect to is not PRIMARY or SECONDARY. This caused the connection to be rejected.
Solution: Check your SQL Server AlwaysOn High Availability Group configuration to make sure that the database replica to which DTS connects is in the PRIMARY or SECONDARY role. After the database status is fixed, restart the DTS task.
Error example:
com.microsoft.sqlserver.jdbc.SQLServerException: Unable to access availability database 'xxx' because the database replica is not in the PRIMARY or SECONDARY role. Connections to an availability database is permitted only when the database replica is in the PRIMARY or SECONDARY role. Try the operation again later. ClientConnectionId:f2f1xxxx-c943-4348-95b3-520074baxxxx
DTS-RETRY-ERR-0577: binlog controller is not exist, binlog dump
Possible cause: The Binlog Controller component of the source AnalyticDB for MySQL 3.0 instance is abnormal, which prevents DTS from pulling incremental logs.
Solution: This issue is caused by an internal database component. Contact the technical support of AnalyticDB for MySQL for troubleshooting.
Error example:
DTS-52110: MySQLRecordExtractor Init Error: mysql-reader: DTS-52060: initial fetch position failed cause: CriticalAnyAllException: mysql-reader: DTS-52060: initial fetch position failed IOException: java.sql.SQLException: [40089, 2025070914xxxx0210170500400315135xxxx] binlog controller is not exist, binlog dump request is not supported SQLException: [40089, 2025070914xxxx0210170500400315135xxxx] binlog controller is not exist, binlog dump request is not supported
DTS-RETRY-ERR-0578: binlog file adb-binlog.log is not a binlog file
Possible cause: When an AnalyticDB for MySQL 3.0 instance is used as the source, MySQL was incorrectly selected as the source instance type when the DTS task was created.
Solution: Recreate the DTS task. When you select the source type, select AnalyticDB for MySQL 3.0.
Error example:
DTS-100047: retry 1714 times, 43203826 seconds, which exceed the supposed 43200 seconds cause: RecoverableAnyAllException: dts-k-src: DTS-52110: MySQLRecordExtractor Init Error: mysql-reader: DTS-52060: initial fetch position failed CriticalAnyAllException: mysql-reader: DTS-52060: initial fetch position failed IOException: java.sql.SQLException: [40087, 20250718144xxxx100090801240315125xxxx] binlog file adb-binlog.log is not a binlog file SQLException: [40087, 20250718144xxxx100090801240315125xxxx] binlog file adb-binlog.log is not a binlog file
DTS-RETRY-ERR-0579: binlog file adb-klog.log is not a binlog file
Possible cause: When an AnalyticDB for MySQL 3.0 instance is used as the source, MySQL was incorrectly selected as the source instance type when the DTS task was created.
Solution: Recreate the DTS task. When you select the source type, select AnalyticDB for MySQL 3.0.
Error example:
DTS-100047: retry 1730 times, 43217716 seconds, which exceed the supposed 43200 seconds cause: RecoverableAnyAllException: dts-k-src: DTS-52110: MySQLRecordExtractor Init Error: mysql-reader: DTS-52060: initial fetch position failed CriticalAnyAllException: mysql-reader: DTS-52060: initial fetch position failed IOException: java.sql.SQLException: [40087, 20250718144xxxx100090801240315125xxxx] binlog file adb-klog.log is not a binlog file SQLException: [40087, 20250718144xxxx100090801240315125xxxx] binlog file adb-klog.log is not a binlog file
DTS-RETRY-ERR-0580: binlog file adb-system.log is not a binlog file
Possible cause: When an AnalyticDB for MySQL 3.0 instance is used as the source, MySQL was incorrectly selected as the source instance type when the DTS task was created.
Solution: Recreate the DTS task. When you select the source type, select AnalyticDB for MySQL 3.0.
Error example:
DTS-100047: retry 1706 times, 43206894 seconds, which exceed the supposed 43200 seconds cause: RecoverableAnyAllException: dts-k-src: DTS-52110: MySQLRecordExtractor Init Error: mysql-reader: DTS-52060: initial fetch position failed CriticalAnyAllException: mysql-reader: DTS-52060: initial fetch position failed IOException: java.sql.SQLException: [40087, 20250718144xxxx100090801240315125xxxx] binlog file adb-system.log is not a binlog file SQLException: [40087, 20250718144xxxx100090801240315125xxxx] binlog file adb-system.log is not a binlog file
DTS-RETRY-ERR-0581: binlog is not enable
Possible cause: When an AnalyticDB for MySQL 3.0 instance is used as the source, MySQL was incorrectly selected as the source instance type when the DTS task was created.
Solution: Recreate the DTS task. When you select the source type, select AnalyticDB for MySQL 3.0.
Error example:
java.sql.SQLException: [30000, 2025071409510xxxx1681812390315112xxxx] can't show binary logs for sg_b_phy_in_notices_wide because binlog is not enable
DTS-RETRY-ERR-0582: binlog truncated in the middle of event; consider out of disk space on master
Possible cause: The binary log file of the source database was truncated in the middle of an event and became incomplete. This is usually caused by reasons such as insufficient disk space on the primary database.
Solution:
Immediately check the disk space and health status of the source database server.
Because the binary log is corrupted, DTS cannot continue. You need to skip the corrupted log section by modifying the current offset of the synchronization or migration instance.
Note: Skipping the offset may cause data loss. Before you perform this operation, make sure that you evaluate the impact on your business.
After you fix the source database issue and adjust the offset, restart the DTS task.
Error example:
DTS-100047: retry 0 times, 1000 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: mysql-reader: DTS-52200: process error failed cause: IOException: java.sql.SQLException: binlog truncated in the middle of event; consider out of disk space on master; the first event 'mysql-bin.000100' at 63072xxxx, the last event read from './mysql-bin.000100' at 63072xxxx, the last byte read from './mysql-bin.000100' at 63072xxxx.
SQLException: binlog truncated in the middle of event; consider out of disk space on master; the first event 'mysql-bin.000100' at 63072xxxx, the last event read from './mysql-bin.000100' at 63072xxxx, the last byte read from './mysql-bin.000100' at 63072xxxx. CriticalAnyAllException: mysql-reader: DTS-52200: process error failed IOException: java.sql.SQLException: binlog truncated in the middle of event; consider out of disk space on master; the first event 'mysql-bin.000100' at 63072xxxx, the last event read from './mysql-bin.000100' at 63072xxxx, the last byte read from './mysql-bin.000100' at 63072xxxx.
DTS-RETRY-ERR-0583: (.*)? bytes when serialized which is larger than the total memory buffer you have configured with the buffer
Possible cause: When DTS processes data, the size of a single record after serialization exceeds the configured memory buffer limit.
Solution: This is an internal DTS resource configuration issue and requires no manual intervention. Restart the DTS task; the system may automatically adjust the configuration and resolve the issue.
Error example:
DTS-52122: Error on Send.The message is xxx bytes when serialized which is larger than the total memory buffer you have configured with the buffer.memory configuration. cause: RecordTooLargeException: The message is xxx bytes when serialized which is larger than the total memory buffer you have configured with the buffer.memory configuration.
DTS-RETRY-ERR-0584: can not find disperser for (.*)?
Possible cause: DTS cannot find a suitable converter during data type conversion. This is usually because the field types of the source and destination tables are incompatible, and DTS does not support automatic conversion between these types.
Solution: Check and compare the table structures of the source and destination for incompatible fields. You can try to modify the destination table field type to make it compatible with the source, or exclude the field from synchronization in the DTS task. After the adjustment, restart the DTS task.
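To locate the mismatched field, you can compare the declared column types on both sides. The following query is a sketch for MySQL-compatible sources and destinations; the schema and table names are placeholders, and the column name is taken from the error example below:

```sql
-- Compare the declared type of a column on the source and destination.
-- Replace the schema and table names with your own.
SELECT table_schema, table_name, column_name, data_type, column_type
FROM information_schema.columns
WHERE table_schema = 'your_schema'
  AND table_name = 'your_table'
  AND column_name = 'is_platform_account';
```

Run the query against the source and the destination separately and compare the results.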
Error example:
full-selectdb: DTS-61001: disperser is null,can not find disperser for field is_platform_account
DTS-RETRY-ERR-0585: cannot insert NULL into (.*)?
Possible cause: A column in the destination Oracle table is defined as NOT NULL, but the value of this column in the data synchronized from the source is NULL. This causes the insert operation to fail.
Solution:
Check and compare the table structures of the source and destination to identify the field that causes the issue.
Based on your business requirements, you can use one of the following methods:
Modify the destination table structure to allow the column to be NULL or set a default value for it.
Check the source data to make sure that the column does not contain NULL values.
After the adjustment, restart the DTS task.
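The two adjustments above can be made on the destination Oracle table with statements such as the following. This is a sketch with placeholder column names and default value; the table name is taken from the error example below:

```sql
-- Option 1: allow NULL values in the destination column.
ALTER TABLE sas2erp.orderstateflow MODIFY (your_column NULL);

-- Option 2: keep NOT NULL but supply a default for missing values.
ALTER TABLE sas2erp.orderstateflow MODIFY (your_column DEFAULT 'N/A');
```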
Error example:
DTS-077100: Record Replicator error in table sas2erp.orderstateflow. cause by [java.sql.SQLIntegrityConstraintViolationException: ORA-01400: cannot insert NULL into ("xxx"."xxx") ] About more information in [https://yq.aliyun.com/articles/508049].
DTS-RETRY-ERR-0586: cannot insert NULL into (.*)?
Possible cause: A column in the destination Oracle table is defined as NOT NULL, but the value of this column in the data synchronized from the source is NULL. This causes the insert operation to fail.
Solution:
Check and compare the table structures of the source and destination to identify the field that causes the issue.
Based on your business requirements, you can use one of the following methods:
Modify the destination table structure to allow the column to be NULL or set a default value for it.
Check the source data to make sure that the column does not contain NULL values.
After the adjustment, restart the DTS task.
Error example:
DTS-100047: retry 4300 times, 43203 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: framework: DTS-30011: currentRunningSQL: /* DTS-full-tf3u6r91200xxxx */insert into "xxx"."xxx"() VALUES (),currentRow:,
reason: ORA-01400: cannot insert NULL into ("xxx"."xxx"."xxx") BatchUpdateException: ORA-01400: cannot insert NULL into ("xxx"."xxx"."xxx")
DTS-RETRY-ERR-0588: check the etl document for the right syntax to use near script
Possible cause: The ETL script that you configured in the DTS task has a syntax error.
Solution: Check the ETL script that you wrote and correct it by following the instructions in Configure an ETL task in a DTS migration or synchronization task.
Error example:
ETL: DTS--0001: You have an error in your ETL syntax; check the etl document for the right syntax to use near script^^^: at line 1
DTS-RETRY-ERR-0589: close wait failed coz rpc error
Possible cause: An RPC error was returned from within SelectDB when DTS was writing data to the destination SelectDB instance using StreamLoad. This indicates that the destination SelectDB instance may have a health issue or an internal error.
Solution: Check the running status, logs, and load of the destination SelectDB instance. After you confirm that the database is restored to a normal state, restart the DTS task.
Error example:
java.io.IOException: Failed to stream load data to SelectDB.Key status is fail. Load result: {"Status":"Fail","Comment":"","BeginTxnTimeMs":15,"Message":"[INTERNAL_ERROR][INTERNAL_ERROR]close wait failed coz rpc error. VNodeChannel[173908364xxxx-173899408xxxx], load_id=247bc89eabexxxx-56cfe111acd6xxxx, txn_id=2946215958969xxxx, node=172.xxx.xxx.xxx:xxx, add batch req success but status isn't ok, err: [DATA_QUALITY_ERROR]PStatus: (172.xxx.xxx.xxx)[DATA_QUALITY_ERROR]Reached max column size limit 2048, host: 172.xxx.xxx.xxx","NumberUnselectedRows":0,"CommitAndPublishTimeMs":0,"Label":"DTS-SelectDB-Sink0b7170a9-xxxx-4dc4-xxxx-1bc2e043141d","LoadBytes":31146554,"StreamLoadPutTimeMs":7,"NumberTotalRows":9949,"WriteDataTimeMs":9800,"ReceiveDataTimeMs":70,"TxnId":2946215958969xxxx,"LoadTimeMs":9824,"TwoPhaseCommit":"false","ReadDataTimeMs":93,"NumberLoadedRows":0,"NumberFilteredRows":0}
DTS-RETRY-ERR-0590: CN failed to send & exec SQL to PN
Possible cause: A compute node (CN) of the destination PolarDB-X database failed to send an SQL request to a physical node (PN) for execution. This usually indicates abnormal communication between internal database nodes.
Solution: Check the health status of your PolarDB-X instance, especially whether the CNs and PNs are running as expected and can communicate with each other. After the database is restored to a normal state, restart the DTS task.
Error example:
DTS-30011: currentRunningSQL: /* DTS-full-vkxf76uu16qxxxx */insert ignore into `xxx`.`xxx`() VALUES (),currentRow:null, reason: Internal error: CN failed to send & exec SQL to PN cause: BatchUpdateException: Internal error: CN failed to send & exec SQL to PN SQLException: Internal error: CN failed to send & exec SQL to PN
DTS-RETRY-ERR-0591: Column has no default value
Possible cause: A column in the destination SelectDB table is defined as NOT NULL but does not have a default value. When the source data does not contain this column or the value of this column is NULL, the write operation fails due to the missing value.
Solution:
Check the source data and the synchronization object configuration of DTS to confirm whether this column is synchronized.
Modify the table structure of the destination SelectDB instance to set a default value for the column. Example:
ALTER TABLE your_table ALTER COLUMN your_column SET DEFAULT 'your_default_value';
After the modification, restart the DTS task.
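To find other destination columns that could trigger the same error, you can query the information schema for NOT NULL columns that lack a default value. This is a sketch for MySQL-compatible engines such as SelectDB; the table name is a placeholder:

```sql
-- List NOT NULL columns without a default value in the destination table.
SELECT column_name, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = 'your_table'
  AND is_nullable = 'NO'
  AND column_default IS NULL;
```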
Error example:
java.io.IOException: Failed to stream load data to SelectDB.Key status is fail. Load result: {"Status":"Fail","Comment":"","BeginTxnTimeMs":23,"Message":"[ANALYSIS_ERROR]TStatus: errCode = 2, detailMessage = Column has no default value. column: assets_quantity","NumberUnselectedRows":0,"CommitAndPublishTimeMs":0,"Label":"DTS-SelectDB-Sink193914c2-xxxx-4beb-xxxx-e6e4a89276c4","LoadBytes":0,"StreamLoadPutTimeMs":0,"NumberTotalRows":0,"WriteDataTimeMs":0,"TxnId":1674029191551xxxx,"LoadTimeMs":0,"TwoPhaseCommit":"false","ReadDataTimeMs":0,"NumberLoadedRows":0,"NumberFilteredRows":0}
DTS-RETRY-ERR-0592: com.ibm.as400.access.ExtendedIllegalArgumentException
Possible cause: The data format in the source is not supported by DTS, which causes a parsing failure.
Solution: This error indicates a data type or format that DTS cannot process. Check the source data.
Error example:
DTS-31009: In process of processing data (recordRange: 456002531641996xxxx) failed cause: FatalAnyAllException: common: DTS-100047: retry 0 times, 14 seconds, which exceed the supposed 43200 seconds RuntimeException: AS400DB2Source:JournalEntryParser: parser entry failed, cause java.util.concurrent.ExecutionException: java.lang.RuntimeException: com.ibm.as400.access.ExtendedIllegalArgumentException: source (07/09/2511): Parameter value is not valid.
DTS-RETRY-ERR-0593: Communications link failure during commit(). Transaction resolution unknown.
Possible cause: The network connection was interrupted when DTS was committing a transaction to the destination database. This may be caused by an unstable network or an abnormal connection to the destination database.
Solution: Check the network connection stability between the DTS server and the destination database, including firewall, security group, and routing configurations. After you make sure that the network is connected, restart the DTS task.
Error example:
DTS-30011: currentRunningSQL: /* DTS-full-q8sh6ybp20jxxx */insert ignore into `xxx`.`xxx`() VALUES (),currentRow:xxx, reason: Communications link failure during commit(). Transaction resolution unknown. cause: MySQLNonTransientConnectionException: Communications link failure during commit(). Transaction resolution unknown.
DTS-RETRY-ERR-0594: Connect Failed For Mysql Server No Response
Possible cause: DTS attempted to connect to the source MySQL server, but the server did not respond. This may be because the network is disconnected, a firewall is blocking the connection, the database service is not started, or the load is too high.
Solution: Check the running status, network connectivity, and security policies of the source MySQL server to make sure that DTS can access it. After you confirm that everything is normal, restart the DTS task.
Error example:
java.sql.SQLException: Connect Failed For Mysql Server No Response
DTS-RETRY-ERR-0595: Connection attempt timed out
Possible cause: The connection to the PostgreSQL database timed out, which indicates that a connection could not be established within the specified period.
Solution: Check the network connection, firewall settings, and whether the hostname and port of the destination database are correct to make sure that DTS can access the database. After the issue is fixed, restart the DTS task.
Error example:
org.postgresql.util.PSQLException: Connection attempt timed out.
DTS-RETRY-ERR-0596: Connection refused
Possible cause: The request from DTS to connect to the MongoDB server was rejected. Common causes include a disconnected network, the destination port not listening, or a firewall or security group blocking the connection.
Solution: Check the network connectivity between DTS and the MongoDB server, confirm that the MongoDB service is started and the port is correct, and check the relevant network security policies. Restart the DTS task.
Error example:
java.lang.RuntimeException: com.mongodb.MongoQueryException: Query failed with error code 6 and error message 'Connection refused' on server 10.27.185.136:24214
DTS-RETRY-ERR-0597: Connection to (.*)? refused
Possible cause: The request from DTS to connect to the PostgreSQL database was rejected. Common causes include a disconnected network, the destination port not listening, a firewall blocking the connection, or the database not being configured to accept external connections.
Solution: Check the network connection, firewall settings, and whether the hostname and port of the destination database are correct to make sure that DTS can access the database. After the issue is fixed, restart the DTS task.
Error example:
org.postgresql.util.PSQLException: Connection to 10.xxx.xxx.xxx:xxx refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
DTS-RETRY-ERR-0598: execute sql (.*)? could not create file
Possible cause: The destination PostgreSQL database failed to create a file when executing a DDL statement from DTS, such as creating an internal marker table. This may be due to insufficient disk space, permission issues, or file system errors.
Solution: Check the disk space, file system status, and the write permissions of the database user on the data directory of the destination database server. After you confirm that everything is normal, restart the DTS task.
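If you have SQL access to the destination PostgreSQL instance, you can locate the data directory and check the current database size before inspecting disk space and permissions at the operating system level. This is a sketch using standard PostgreSQL commands:

```sql
-- Locate the data directory so you can inspect it at the OS level.
SHOW data_directory;

-- Current database size; compare it against the free space on that volume.
SELECT pg_size_pretty(pg_database_size(current_database()));
```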
Error example:
DTS-10046: execute sql: CREATE TABLE IF NOT EXISTS "xxx"."xxx" ();CREATE INDEX "xxx" ON "xxx"."xxx"("xxx", "xxx"); CREATE INDEX "xxx" ON "xxx"."xxx"("xxx", "xxx"); failed. Create TransactionTable failed. cause: PSQLException: ERROR: could not create file "/39667142-1/data/base/xxx/xxx": Unknown error 1001
DTS-RETRY-ERR-0599: Fetch postgresql logical log failed (.*)? could not create file
Possible cause: The source PostgreSQL database failed to create a replication slot for DTS. The error message File exists indicates that the temporary file that was being created already exists, which may be a remnant of a previous abnormal interruption.
Solution: Contact the administrator (DBA) of the source PostgreSQL database to troubleshoot the logical replication feature.
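As part of the troubleshooting, the replication slots on the source PostgreSQL instance can be inspected, and a stale, inactive slot left over from an earlier interruption can be dropped. This is a sketch; dropping a slot discards its retained WAL, so confirm with your DBA that the slot is no longer needed first. The slot name is taken from the error example below:

```sql
-- List logical replication slots and whether they are in use.
SELECT slot_name, plugin, active, restart_lsn
FROM pg_replication_slots;

-- Drop a stale slot left over from a previous run (irreversible).
-- SELECT pg_drop_replication_slot('dts_sync_kh81092l29exxxx');
```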
Error example:
DTS-100047: retry 0 times, 1001 seconds, which exceed the supposed 43200 seconds cause: CriticalAnyAllException: dts-k-src: DTS-52111: Increment Context Is Not Running..: CriticalAnyAllException: postgresql-reader: DTS-52510: Fetch postgresql logical log failed IOException: org.postgresql.util.PSQLException: ERROR: could not create file "pg_replslot/dts_sync_kh81092l29exxxx/state.tmp": File exists PSQLException: ERROR: could not create file "pg_replslot/dts_sync_kh81092l29exxxx/state.tmp": File exists
DTS-RETRY-ERR-0600: Could not route this query because target db is not healthy.
Possible cause: When the source database was processing a query request from DTS, one of its internal nodes was in an unhealthy state, which prevented the query from being routed.
Solution: Check the overall health status of the source instance to make sure that all compute nodes and data nodes are running as expected. After the database cluster is restored, restart the DTS task.
Error example:
java.sql.SQLException: Could not route this query because target db is not healthy.