When change tracking tasks fail or behave unexpectedly, the root cause can be difficult to identify — especially when data is consumed outside of SDKs, when SDK versions differ across environments, or when the client runs in an isolated network. The DTS diagnostic toolkit lets you run a targeted diagnostic against your change tracking client, read the structured log output, and pinpoint the exact failure.
Running the toolkit updates the consumer offset of the specified consumer group in the DTS console. However, when subscribeMode is set to SUBSCRIBE, the consumer offset recorded on the server is not updated.
Prerequisites
Before you begin, ensure that you have:
- Java Development Kit (JDK) 1.8 or later installed on the change tracking client
Set up and run the toolkit
Step 1: Download the toolkit
Download the dts_subscribe_sdk_dep_demo toolkit to your change tracking client and decompress it.
Step 2: Configure the toolkit
Edit the config file in the decompressed directory. The following table describes the parameters.
| Parameter | Description | How to get the value |
|---|---|---|
| brokerUrl | Endpoint and port number of the change tracking instance. If the Elastic Compute Service (ECS) instance running the SDK client is in the same classic network or virtual private cloud (VPC) as the change tracking instance, use the internal network endpoint to minimize latency. | In the DTS console, click the instance ID. On the Basic Information page, copy the endpoint and port number. |
| topic | Topic name of the change tracking instance. | In the DTS console, click the instance ID. On the Basic Information page, copy the tracked topic. |
| sid | ID of the consumer group. | In the DTS console, click the instance ID. In the left-side navigation pane, click Consume Data. Copy the consumer group ID and account. The password is set automatically when you create the consumer group. |
| userName | Account of the consumer group. If you are not using the SDK client described in this topic, format the value as <Username>-<Consumer group ID>, for example, dtstest-dtsae******bpv. An incorrectly formatted value causes the connection to fail. | See sid above. |
| password | Password of the consumer group account. | See sid above. |
| initCheckpoint | The consumption checkpoint — the UNIX timestamp from which the SDK client begins consuming data (for example, 1620962769). This parameter takes effect only when subscribeMode is ASSIGN and isForceUseInitCheckpoint is true. Use this parameter to resume consumption after an interruption without data loss, or to start consumption from a specific point in time. | The timestamp must fall within the data range of the change tracking instance. |
| subscribeMode | The mode in which the SDK client connects to the consumer group. Use ASSIGN if you run a single SDK client against the consumer group. Use SUBSCRIBE if you run multiple SDK clients against the same consumer group for disaster recovery. | N/A |
| isForceUseInitCheckpoint | Whether to forcefully use the specified offset to track data changes. Valid values: true and false. | N/A |
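Putting the parameters together, a completed config file might look like the following. This is a sketch that assumes the config file uses key=value (Java properties) syntax; verify the exact format against the file shipped in the toolkit. All values are placeholders except the examples taken from the table above.

```properties
# Endpoint and port of the change tracking instance (placeholder)
brokerUrl=your_endpoint:your_port
# Tracked topic of the change tracking instance (placeholder)
topic=your_topic
# Consumer group ID and credentials (example values from the table above)
sid=dtsae******bpv
userName=dtstest-dtsae******bpv
password=your_password
# UNIX timestamp to start consuming from; must be within the instance's data range
initCheckpoint=1620962769
# ASSIGN for a single SDK client; SUBSCRIBE for multiple clients on one consumer group
subscribeMode=ASSIGN
# Whether to forcefully start from initCheckpoint
isForceUseInitCheckpoint=true
```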
Step 3: Run the toolkit
In the directory where you decompressed the toolkit, run:
java -jar dts_subscribe_sdk_dep_demo-1.0-SNAPSHOT-jar-with-dependencies.jar config

Step 4: Check the log output
Open dts-new-subscribe.log in the same directory. Use the sections below to interpret the output and fix any issues.
Interpret the log output
Start by confirming whether the task is running normally. If you see a HEARTBEAT record, no action is needed. If no HEARTBEAT appears, identify the error pattern below and apply the corresponding fix.
Task is running normally
A HEARTBEAT record like the following indicates the change tracking task is running as expected:
[2022-01-04 17:10:53.949] [INFO ]
[com.aliyun.dts.subscribe.clients.recordprocessor.EtlRecordProcessor]
[com.aliyun.dts.subscribe.clients.recordprocessor.DefaultRecordPrintListener:49] -
RecordID [13082769]
RecordTimestamp [1641284702]
Source [{"sourceType": "MySQL", "version": "5.6.16-log"}]
RecordType [HEARTBEAT]

Change tracking endpoint unreachable
Error in log:
ERROR CheckResult{isOk=false, errMsg='telnet dts-cn-hangzhou.aliyuncs.com:18009 failed, please check the network and if the brokerUrl is correct'} (com.aliyun.dts.subscribe.clients.DefaultDTSConsumer)

or

telnet real node xxx failed, please check the network

What happened: The specified endpoint of the change tracking instance is invalid, or no connection can be established to the change tracking instance.
Fix: Update the brokerUrl parameter in the config file with a valid endpoint and port for your network environment.
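Before editing the config file, you can confirm whether the endpoint is reachable at all from the client machine. A minimal sketch of a TCP connectivity probe using the standard `java.net.Socket` API; the host and port below are taken from the example error message and should be replaced with your own brokerUrl:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerProbe {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Endpoint from the example error message above; replace with your brokerUrl.
        String host = "dts-cn-hangzhou.aliyuncs.com";
        int port = 18009;
        System.out.println(host + ":" + port + " reachable: "
                + canConnect(host, port, 2000));
    }
}
```

If the probe fails from the client but succeeds from another machine, the problem is the client's network (for example, an isolated VPC), not the endpoint itself.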
Authentication failed
Error in log:
ERROR CheckResult{isOk=false, errMsg='build kafka consumer failed, error: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata, probably the user name or password is wrong'} (com.aliyun.dts.subscribe.clients.DefaultDTSConsumer)

What happened: The consumer group credentials are invalid. If you are not using the SDK client described in this topic, the userName must include both the username and the consumer group ID separated by a hyphen.
Fix: Update the userName and password parameters in the config file. Make sure userName follows the format <Username>-<Consumer group ID> if required.
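The format rule above can be expressed as a trivial helper. A sketch, using the example account and (partially masked) consumer group ID from the parameter table:

```java
public class ConsumerGroupCredentials {
    // Builds the userName value in the <Username>-<Consumer group ID> format
    // required when you are not using the SDK client described in this topic.
    public static String buildUserName(String username, String consumerGroupId) {
        return username + "-" + consumerGroupId;
    }

    public static void main(String[] args) {
        // Example values from the parameter table; the group ID is masked there.
        System.out.println(buildUserName("dtstest", "dtsae******bpv"));
        // prints "dtstest-dtsae******bpv"
    }
}
```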
Consumption checkpoint out of range
Error in log:
com.aliyun.dts.subscribe.clients.exception.TimestampSeekException: RecordGenerator:seek timestamp for topic [cn_hangzhou_rm_bp11tv2923n87081s_rdsdt_dtsacct-0] with timestamp [1610249501] failed

What happened: The consumer offset is not within the data range of the change tracking instance.
Fix: Update the initCheckpoint parameter in the config file with a UNIX timestamp that falls within the current data range of the change tracking instance.
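You can sanity-check an initCheckpoint value before rerunning the toolkit. A minimal sketch; the data range bounds below are hypothetical placeholders, and the real range is shown for your instance in the DTS console:

```java
import java.time.Instant;

public class CheckpointValidator {
    // Returns true if the UNIX timestamp (in seconds) lies within the
    // instance's data range [rangeBegin, rangeEnd].
    public static boolean inDataRange(long checkpoint, long rangeBegin, long rangeEnd) {
        return checkpoint >= rangeBegin && checkpoint <= rangeEnd;
    }

    public static void main(String[] args) {
        // Hypothetical data range; read the real one from the DTS console.
        long rangeBegin = 1620000000L;
        long rangeEnd = Instant.now().getEpochSecond();
        long initCheckpoint = 1620962769L; // example timestamp from the parameter table
        System.out.println("checkpoint in range: "
                + inDataRange(initCheckpoint, rangeBegin, rangeEnd));
    }
}
```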
Diagnose slow data flow
If no errors appear in the log but data is not flowing as expected, check the queue depth values in the log:
- If DStoreRecordQueue and DefaultUserRecordQueue remain at 0, data is read from the server at a slow rate.
- If DStoreRecordQueue and DefaultUserRecordQueue remain at 512, the client consumes tracked data changes at a slow rate.
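The decision rule above can be sketched as a small helper that classifies the observed queue depths. This is an illustrative sketch of the rule of thumb, not part of the toolkit; the 512 value is assumed here to be the queue's capacity, since that is the depth at which the doc says the queues are stuck when the client is slow:

```java
public class QueueDiagnosis {
    // Classifies the bottleneck from the two queue depths reported in the log.
    public static String diagnose(int dstoreRecordQueue, int defaultUserRecordQueue) {
        if (dstoreRecordQueue == 0 && defaultUserRecordQueue == 0) {
            // Both queues empty: records are not arriving from the server.
            return "Server read is slow";
        }
        if (dstoreRecordQueue == 512 && defaultUserRecordQueue == 512) {
            // Both queues full: the client is not draining records fast enough.
            return "Client consumption is slow";
        }
        return "No clear bottleneck from queue depth alone";
    }

    public static void main(String[] args) {
        System.out.println(diagnose(0, 0));     // prints "Server read is slow"
        System.out.println(diagnose(512, 512)); // prints "Client consumption is slow"
    }
}
```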