This topic describes the errors that may occur in Data Transmission Service (DTS). You can troubleshoot these errors based on the error codes and error messages. The examples listed in the following table are for reference only. The actual error messages may be different.

Error code | Description | Example | Troubleshooting method
DTS-51009
When the data write module reads data from the data cache module, a connection timeout occurs.

DTS-31009: In process of processing data (recordRange: 5799806987638145024) failed cause: CriticalAnyAllException: capture-dstore: DTS-51009: Failed to fetch records from dstore TimeoutException: Failed to get offsets by times in 120000 ms

  • Cause: The connection between the data write module and the data cache module times out due to network jitters.
  • Solution: Restart the task to check whether the task can be resumed.
DTS-50019
The data write module cannot read data at the specified point in time from the data cache module.

DTS-31009: In process of processing data (recordRange: 9117052685537771520) failed cause: CriticalAnyAllException: capture-dstore: DTS-51009: Failed to fetch records from dstore CriticalAnyAllException: capture-dstore: DTS-50019: seek timestamp for topic [-0] with position [{"timestamp":1621516671, "offset":-1}] failed
  • Cause 1: The task is paused for a long period of time. After the task is restarted, the data cache module has no data at the specified point in time, such as 2021-5-20 21:17:51.
    Note The time displayed in the error message is a UNIX timestamp. You can use a search engine to obtain a UNIX timestamp converter and convert the UNIX timestamp to UTC time, or convert it in the database as shown in the example after this list.

    Solution 1: Check whether the specified point in time is included in the time range of logs used for incremental data parsing in the source database, such as binary logs or redo logs. If it is included in the time range, contact Alibaba Cloud technical support to re-read data. If not, you must create another task.

    Solution 2: Create another task. After you run the task, do not pause the task for a long period of time. If the task is paused for more than 6 hours, the task can no longer be started.

  • Cause 2: The task has failed for a long period of time. After you fix the failure, the data cache module has no data at the specified point in time.

    Solution: Create another task. If an error occurs when the task is running, fix the error and resume the task as soon as possible to prevent the task from failing for a long period of time.

  • Cause 3: During full data migration or synchronization, DTS caches the data logs of the source database for incremental data parsing in a circular manner and retains only the data logs of the last 24 hours or at most 50 GB of data logs. If full data migration or synchronization lasts for a long period of time, DTS may have cleared data logs that were cached after the task started. As a result, the data logs used for incremental data parsing are incomplete.

    Solution: Create another task. Before you create the task, evaluate the amount of data to be migrated or synchronized. If the amount of data is large, we recommend that you create multiple tasks.
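For example, if you have access to a MySQL database, you can convert the UNIX timestamp from the error message (1621516671 in the preceding example) with the following statement. This is only an illustration; the result is returned in the session time zone of the database.
    SELECT FROM_UNIXTIME(1621516671);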

DTS-30018, DTS-30019, and DTS-70019
The column does not exist.

DTS-30019: Can not find column [column_in_source<-->column_in_target] in target table db_name.table_name

DTS-70019: Can not find column [column_in_source<-->column_in_target] in target table db_name.table_name

  • Possible causes:
    • Cause 1: When you configure the data migration task, you do not select Schema Migration as the migration type. As a result, the schemas of tables in the source and destination databases are inconsistent.
    • Cause 2: After the task is configured, you execute DDL statements in the source or destination database to change table schemas. As a result, the schemas of tables in the source and destination databases are inconsistent.
  • Solution: Modify the schemas of tables in the source and destination databases and make sure that the schemas are consistent.
Note If the preceding causes do not apply to your actual situation, you can try the following methods:
  • Restart the data migration task to check whether the task can be resumed.
  • Remove the tables from the selected objects and add the tables to the selected objects again to check whether the task can be resumed.
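To compare the table schemas, you can, for example, execute the following statement on both the source and destination databases (assuming both are MySQL-compatible; db_name and table_name are the names from the error message) and check whether the column mentioned in the error message exists in both results:
    SHOW CREATE TABLE `db_name`.`table_name`;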
DTS-10046 and DTS-30020
The SQL statements are not executed as expected.

DTS-10046: execute sql: CREATE TABLE if not exists `dts`.`table_name` (`id` int(11) NOT NULL, `column1` bigint(20) NOT NULL, PRIMARY KEY (`id`)) ENGINE=InnoDB DEFAULT CHARSET=utf8 failed. Create TransactionTable failed. cause: MySQLSyntaxErrorException: CREATE command denied to user 'username'@'ip_address' for table 'table_name'

In most cases, the system displays the error message specific to a database together with this error code. Check the SQL statements mentioned in the error message and fix the error.
  • Possible cause: The database account does not have the CREATE TABLE permission.
  • Solution: Grant the CREATE TABLE permission to the database account.
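For a self-managed MySQL database, a statement similar to the following grants the missing permission. The account name, host, and database name are placeholders based on the example error message; adjust them to your environment. For an ApsaraDB RDS instance, you may need to manage account permissions in the console instead.
    -- Grant the CREATE permission on the involved database to the account used by DTS (placeholders).
    GRANT CREATE ON `dts`.* TO 'username'@'%';
    FLUSH PRIVILEGES;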
DTS-70004
The SQL statements are not executed as expected.

DTS-70004: execute statement failed: /*dts_id=dtsaaaaaaaaaa*/alter table `xxx`.`yyy` modify column `aaa` INT default 0 comment 'bbb' cause: SQLException: [15018, 2021030506452201000000420803151947572] modify column type is not supported, col=x, oldType=bigint, newType=int

The error message is returned when the destination database is AnalyticDB for MySQL.
  • Possible cause: DTS does not support the DDL statement in the synchronization scenario. As a result, the DDL statement fails to be executed.
  • Solutions:
    • If you do not need to synchronize the entire database, we recommend that you synchronize the table again. You can remove the table from the selected objects, delete the table in the destination database, and then add the table to the selected objects again. DTS synchronizes the schemas and historical data again. In this way, the unsupported DDL statement is ignored.
    • If you need to synchronize the entire database, we recommend that you create a table (Table A) in AnalyticDB for MySQL. Table A must have a different name from the table that has triggered errors (Table B). Table A must have the same schema as that of the source table. You can execute the INSERT INTO SELECT statement to write data of the source table to Table A, delete Table B, execute the RENAME statement to change the name of Table A to Table B, and then restart the data synchronization task.
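The following statements are a minimal sketch of the preceding workaround, assuming Table B is `xxx`.`yyy` and Table A is named `yyy_new`. The column list is a placeholder, and the exact DDL syntax (for example, whether a distribution key is required and how tables are renamed) depends on your AnalyticDB for MySQL version, so check the product documentation before you run it.
    -- Create Table A with the same schema as the source table (placeholder columns).
    CREATE TABLE `xxx`.`yyy_new` (`id` BIGINT NOT NULL, `col1` VARCHAR(256), PRIMARY KEY (`id`));
    -- Copy the existing data into Table A.
    INSERT INTO `xxx`.`yyy_new` SELECT * FROM `xxx`.`yyy`;
    -- Delete Table B, rename Table A to Table B, and then restart the data synchronization task.
    DROP TABLE `xxx`.`yyy`;
    RENAME TABLE `xxx`.`yyy_new` TO `xxx`.`yyy`;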
DTS-70004: execute statement failed: /*aaaaaaaaaaMysqlEngine(28)*/update `xxx`.`yyy` set... error detail message: Server shutdown in progress MySQLNonTransientConnectionException: Server shutdown in progress
  • Possible cause: The SHUTDOWN operation is performed on the destination database.
  • Solution: Make sure that the destination database is running. Then, restart the data synchronization task.
DTS-70004: execute statement failed: /*aaaaaaaaaaMysqlEngine(27)*/update `xxx`.`yyy` set... cause: SQLException: null, message from server: "Host '192.10.XX.XX' is not allowed to connect to this MySQL server"
  • Possible causes: The network is unavailable, or the account does not have the required permissions. As a result, the MySQL database does not allow the account to perform remote logon.
  • Solutions: Check the network connectivity between DTS and the MySQL database, and make sure that the database account is allowed to connect from the IP address displayed in the error message, as shown in the sketch after this list.
Note After you fix the error, you must restart the task.
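For a self-managed MySQL database, statements similar to the following allow the account to connect from the IP address shown in the error message. The account name, password, host pattern, and privileges are placeholders; grant only the permissions that your task requires.
    CREATE USER 'username'@'192.10.XX.%' IDENTIFIED BY 'your_password';
    GRANT SELECT, INSERT, UPDATE, DELETE ON `xxx`.* TO 'username'@'192.10.XX.%';
    FLUSH PRIVILEGES;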
DTS-70004: execute statement failed: /*bg6l15vf29l1b0vMysqlEngine(16)*/insert into `xxx`.`yyy` ... error detail message: Table 'xxx.yyy' doesn't exist MySQLSyntaxErrorException: Table 'xxx.yyy' doesn't exist
  • Possible cause: The corresponding table does not exist.
  • Solutions:
    • Check whether the table has been deleted in the destination database.
    • Check whether the table has been synchronized during schema migration.
Note After you fix the error, you must restart the task.
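For example, you can check whether the table exists in the destination MySQL database with the following statement (xxx and yyy are the database and table names from the error message):
    SHOW TABLES IN `xxx` LIKE 'yyy';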
DTS-70004: execute statement failed: /*...o3z5bntMysqlEngine(0)*/insert into `xxx`.`yyy` ... error detail message: Column 'aaa' cannot be null MySQLIntegrityConstraintViolationException: Column 'aaa' cannot be null
  • Possible cause: The integrity constraints of the database are violated. For example, NULL values are inserted into a NOT NULL column.
  • Solutions:
    • Check the table schemas and the data records involved in the DML statements. Make sure that the schemas of the source and destination tables are consistent.
    • Revise the data records and then restart the task.
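For example, the following query counts the rows in the source table whose column contains NULL values (xxx, yyy, and aaa are the names from the error message). If the count is greater than 0 while the destination column is defined as NOT NULL, the schemas or data records need to be revised:
    SELECT COUNT(*) FROM `xxx`.`yyy` WHERE `aaa` IS NULL;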
DTS-70004: execute statement failed: /*s2gb356ds228bt8MysqlEngine(0)*/insert into `xxx`.`yyy` (`...`) values(...) error detail message: INSERT command denied to user 'user'@'100.104.175.31' for table 'yyy' MySQLSyntaxErrorException: INSERT command denied to user 'user'@'100.104.175.31' for table 'yyy'
  • Possible cause: The account of the destination database does not have the required permissions.
  • Solution: Check whether the account of the destination database has the permissions mentioned in the error message. If not, grant the permissions to the account and then restart the task.
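For a self-managed MySQL destination database, a statement similar to the following grants the write permissions mentioned in the error message. The account, host, and database names are placeholders; for an ApsaraDB RDS for MySQL instance, you can also manage account permissions in the console.
    GRANT INSERT, UPDATE, DELETE ON `xxx`.* TO 'user'@'%';
    FLUSH PRIVILEGES;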
DTS-70002
The database cannot be connected, or the corresponding database or table does not exist.

DTS-70002: get table detail info failed after 151 times (user: db_name, database: table_name) MySQLSyntaxErrorException: Table 'db_name.table_name' doesn't exist
  • Possible cause: An error occurs during schema migration. As a result, the corresponding table is not migrated to the destination database. In addition, no schema is created for the table in the destination database.
  • Solution: Check whether the database or table exists based on the error message. If not, perform the following operations:
    • For a data synchronization task, remove the table from the selected objects and add the table to the selected objects again to check whether the task can be resumed.
    • For a data migration task, create another task.
DTS-60001
The source database cannot be connected.

DTS-60001: Create connection failed (url: ip_address:port, user: username) cause: SQLException: Access denied for user 'username'@'ip_address' (using password: YES)
  • Possible cause: The username or password is invalid.
  • Solution: Enter the valid username and password. Then, restart the task.
Note If the preceding cause does not apply to your actual situation, restart the task to check whether the task can be resumed.
DTS-61015
The source database cannot be connected.

DTS-61015: Create connection failed (url: ip_address:port, user: username) cause: MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
  • Possible cause: The number of connections to the source database has reached the upper limit.
  • Solution: Increase the upper limit on the number of connections to the source database. Then, restart the task.
Note If the preceding cause does not apply to your actual situation, restart the task to check whether the task can be resumed.
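For a self-managed MySQL source database, you can check the current connection usage and, if necessary, raise the limit with statements similar to the following. The value 1000 is only an example; for an ApsaraDB RDS instance, modify the corresponding parameter in the console instead.
    SHOW VARIABLES LIKE 'max_connections';
    SHOW STATUS LIKE 'Threads_connected';
    -- Example only: raise the limit to a value that fits your instance specification.
    SET GLOBAL max_connections = 1000;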
DTS-71016
The destination database cannot be connected.
  • [Example 1] DTS-71016: connect target failed (url: jdbc:mysql://ip_address:port?parameters) cause: MySQLNonTransientConnectionException: Data source rejected establishment of connection, message from server: "Too many connections"
  • [Example 2] DTS-71016: connect target failed (url: jdbc:mysql://ip_address:port?parameters) cause: SQLException: null, message from server: "Host 'ip_address' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'"
  • Possible cause:
    • Example 1: The number of connections to the destination database has reached the upper limit.
    • Example 2: An excessive number of failed database connections are generated from the same IP address in a short period of time. As a result, new connections are blocked.
  • Solution:
    • Example 1: Increase the upper limit on the number of connections to the destination database. Then, restart the task.
    • Example 2: Fix the error on the destination database. For example, increase the value of the max_connect_errors parameter. Then, restart the task.
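For Example 2, if the destination is a self-managed MySQL database, statements similar to the following raise the error threshold and clear the host cache so that blocked hosts can reconnect. The value 1000 is only an example:
    SHOW VARIABLES LIKE 'max_connect_errors';
    SET GLOBAL max_connect_errors = 1000;
    -- Clear the host cache (equivalent to 'mysqladmin flush-hosts').
    FLUSH HOSTS;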
DTS-10015
The Redis database cannot be connected.

DTS-10015: unreachable redis network. JedisConnectionException: Could not get a resource from the pool JedisConnectionException: Failed connecting to ip_address:port ConnectException: Connection timed out (Connection timed out)
  • Possible cause: The Redis server cannot be connected because an exception has occurred.
  • Solution: Check the status of the Redis server and make sure that the server can be connected.
DTS-30005
The metadata information cannot be saved in internal processing.

DTS-30005: save changed job progress info failed cause: SQLException: result content unexpected. expect 3 time 1, actually [2, 1, 1]
  • Possible cause: Two tables in the source database have names that differ only in capitalization, such as TABLE and table. DTS does not support such conflicting table names.
  • Solution: Check the table names in the source database, rename one of the tables, and then restart the task.
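For example, if the source is a MySQL database, the following query lists table names in a given database that differ only in capitalization ('your_database' is a placeholder):
    SELECT LOWER(table_name) AS conflicting_name, COUNT(*) AS table_count
    FROM information_schema.tables
    WHERE table_schema = 'your_database'
    GROUP BY LOWER(table_name)
    HAVING COUNT(*) > 1;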
DTS-30011
An exception occurs when data is written to the destination database.

DTS-30011: put 2048 records error, currentRunningSQL: insert ignore into `xxx`.`yyy`(...), currentRunningRecordRange: {id:5362875948821643265,xxx.yyy,field:DefaultRecordField:{fieldName=aaa, dataType={typeName:BIGINT, typeId:-5, isLobType:false, encoding:null}, sourceDataType=null, aliases=[], defaultValue=null, nullable=false, unique=true, primary=true, generated=false},leftValue:1293615822270500864,rightValue:1311217506282770432,partition:null} cause: BatchUpdateException: The table 'yyy' is full SQLException: The table 'yyy' is full
  • Possible causes:
    • Cause 1: The disk space is full.
    • Cause 2: The size of the table has reached the upper limit.
  • Solutions:
    • Solution 1: Resize the disk and then restart the task.
    • Solution 2: Increase the upper limit of the table size at the database level. Fix the error and then restart the task.
      Note For information about the maximum table size for MySQL databases, see B.3.2.10 The table is full.
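For example, the following query shows the approximate size of the affected table in a MySQL destination database, which helps you decide whether to resize the disk or raise the table size limit ('yyy' is the table name from the error message):
    SELECT table_schema, table_name,
           ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
    FROM information_schema.tables
    WHERE table_name = 'yyy';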
DTS-30011: put 2048 records error, currentRunningSQL: insert ignore into `xxx`.`yyy`(...: {id:4958981218625388545,schema:xxx.yyy,field:DefaultRecordField:{fieldName=aaa, dataType={typeName:INTEGER, typeId:4, isLobType:false, encoding:null}, sourceDataType=null, aliases=[], defaultValue=null, nullable=false, unique=true, primary=true, generated=false},leftValue:512021,rightValue:1024022,partition:null} cause: BatchUpdateException: Lock wait timeout exceeded; try restarting transaction MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction
  • Possible cause: A lock wait timeout occurs in the destination database.
  • Solution: If the destination database is a MySQL database, you can execute the following statements to check the lock status of the database.
    SHOW FULL PROCESSLIST;
    SELECT * FROM information_schema.INNODB_TRX;
    Note After you fix the error, you must restart the task.
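In addition, if the destination is a self-managed MySQL 5.7 or later instance with the sys schema available, the following query shows which sessions are blocked and which session holds the blocking lock (an optional check, not part of the DTS error message):
    SELECT * FROM sys.innodb_lock_waits;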
DTS-30011: put 2048 records error, currentRunningSQL: insert ignore into `xxx`.`yyy`(`..., currentRunningRecordRange: {id:4050997257913237860,schema:xxx.yyy,field:DefaultRecordField:{fieldName=id, dataType={typeName:BIGINT, typeId:-5, isLobType:false, encoding:null}, sourceDataType=null, aliases=[], defaultValue=null, nullable=false, unique=true, primary=true, generated=false},leftValue:5424204045,rightValue:5424982251,partition:null} cause: SQLException: The MySQL server is running with the --read-only option so it cannot execute this statement
  • Possible cause: The destination database is in the Read-Only state.
  • Solutions:
    • Execute the SELECT @@read_only; statement to check whether the destination database is in the Read-Only state.
    • Execute the SET GLOBAL read_only=0; statement to disable the Read-Only state.
    Note After you disable the Read-Only state, you must restart the task.
DTS-1051009
An exception occurs when DTS parses binary logs from the MySQL database.

java.io.IOException: java.lang.IllegalArgumentException: Valid type fail, Column:column_name, Type:type_number
  • Possible cause: The schemas stored in the binary logs of the MySQL database are inconsistent with the schemas stored in DTS. Schema inconsistency occurs if a DDL statement that changes the schema of a table is not recorded in the binary logs. As a result, DTS cannot parse the DDL statement and does not update the schema. For example, you execute the SET SESSION sql_log_bin=0 statement and then change the data type of a column from INT(11) to BIGINT(20), so the DDL statement is not written to the binary logs (see the example after this list).
  • Solution: Contact Alibaba Cloud technical support to reload the schemas stored in DTS.
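For reference, a sequence of statements like the following in the source MySQL database changes a table schema without writing the DDL statement to the binary logs, which leads to the schema inconsistency described above. The table and column names are placeholders:
    SET SESSION sql_log_bin = 0;
    ALTER TABLE `xxx`.`yyy` MODIFY COLUMN `column_name` BIGINT(20);
    SET SESSION sql_log_bin = 1;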