The Kafka plug-in is included in the Application Real-Time Monitoring Service (ARMS) agent of V2.7.1.2 and later. With this plug-in, Application Monitoring of ARMS can monitor the trace data of Apache Kafka applications that use a supported version and are integrated with ARMS in a supported mode. This topic describes the Kafka versions that Application Monitoring of ARMS supports and provides notes on integrating Kafka with ARMS in different modes.

Requirements for ARMS agent versions

The Kafka plug-in is available only in the ARMS agent of V2.7.1.2 and later. To update the ARMS agent to a version that provides the Kafka plug-in, contact the official DingTalk account for ARMS (account ID: arms160804).

Requirements for Kafka versions

The Kafka plug-in of the ARMS agent can monitor trace data only if the Kafka message protocol supports headers. The Kafka message protocol has three versions.

Version of the Kafka message protocol | Magic number | Kafka version          | Whether headers are supported
V0                                    | 0            | < V0.10.0.x            | No
V1                                    | 1            | [V0.10.0.x, V0.11.0.x) | No
V2                                    | 2            | ≥ V0.11.0.x            | Yes

The earliest Kafka version that supports headers is V0.11.0. Therefore, Application Monitoring of ARMS can monitor only Kafka applications of V0.11.0 and later.

Requirements for the versions of Kafka clients (producers and consumers) and Kafka brokers

To reduce maintenance workloads, we recommend that you use Kafka clients and Kafka brokers of the same version.

However, in actual scenarios, many users run mismatched versions of Kafka clients and Kafka brokers, which adds complexity.

To handle mismatched versions, the Kafka plug-in of the ARMS agent provides the following compatibility: the plug-in can inject trace data only if both the Kafka clients and the Kafka brokers are of V0.11.0 or later, as described in the following table.

Version of Kafka brokers | Version of Kafka clients (producers and consumers) | Whether the Kafka plug-in can inject trace data
[V0.10.0.x, V0.11.0.x)   | < V0.10.0.x            | No
[V0.10.0.x, V0.11.0.x)   | [V0.10.0.x, V0.11.0.x) | No
[V0.10.0.x, V0.11.0.x)   | ≥ V0.11.0.x            | No
≥ V0.11.0.x              | < V0.10.0.x            | No
≥ V0.11.0.x              | [V0.10.0.x, V0.11.0.x) | No
≥ V0.11.0.x              | ≥ V0.11.0.x            | Yes

Kafka integration modes

ARMS supports two Kafka integration modes.

Integrate Apache Kafka clients with ARMS

Producer

You can integrate the Apache Kafka Producer API with ARMS by following the official instructions of Apache Kafka. No additional configurations are required. For more information, see the Apache Kafka official documentation.
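The following sketch shows what such a producer can look like. It is only an illustration assembled from the standard Apache Kafka Producer API; the class name KafkaProduceTest, the placeholder broker address, and the topic name test are assumptions, not values from the official documentation. When the application runs with the ARMS agent attached, the Kafka plug-in injects trace headers into outgoing messages, so the code contains no ARMS-specific logic.

 package arms.test.kafka;

 import java.util.Properties;

 import org.apache.kafka.clients.producer.KafkaProducer;
 import org.apache.kafka.clients.producer.ProducerRecord;

 public class KafkaProduceTest {
     public void testProducer() {
         Properties props = new Properties();
         // Placeholder address. Replace it with the address of your Kafka brokers.
         props.put("bootstrap.servers", "PLAINTEXT://XXXX");
         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

         KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
         // A plain send call. The ARMS agent adds its trace headers to the record
         // before it is sent, so no ARMS-specific code is required here.
         producer.send(new ProducerRecord<String, String>("test", "key", "value"));
         producer.close();
     }
 }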

Consumer

The Apache Kafka Consumer API does not deliver messages to consumers one by one. Instead, consumers continuously send poll requests to pull batches of messages. As a result, the Kafka plug-in of the ARMS agent cannot directly inject trace data for consumers, and additional configuration is required. If you directly integrate native Apache Kafka consumers with ARMS, perform the configuration that is described in the following example.

Sample file for consumers:

 package arms.test.kafka;

 import java.util.Arrays;
 import java.util.Properties;
 import java.util.UUID;

 import org.apache.kafka.clients.consumer.ConsumerRecord;
 import org.apache.kafka.clients.consumer.ConsumerRecords;
 import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

 public class KafkaConsumeTest {
     private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConsumeTest.class);

     public void testConsumer() {
         Properties props = new Properties();
         props.put("bootstrap.servers", "PLAINTEXT://XXXX");
         props.put("group.id", UUID.randomUUID().toString());
         props.put("enable.auto.commit", "true");
         props.put("auto.offset.reset", "earliest");
         props.put("auto.commit.interval.ms", "1000");
         props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
         props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
         KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<String, String>(props);
         kafkaConsumer.subscribe(Arrays.asList("test"));
         while (true) {
             ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
             for (ConsumerRecord<String, String> record : records) {
                 // Process each message in a dedicated method so that the ARMS agent
                 // can use it as the trace entry point.
                 handler(record);
             }
         }
     }

     public void handler(ConsumerRecord<String, String> record) {
         LOGGER.info(record.toString());
     }
 }

If you use the preceding sample file, add the following configuration to the arms-agent.config file in the installation package of the ARMS agent:

profiler.kafka.consumer.entryPoint=arms.test.kafka.KafkaConsumeTest.handler

In the preceding configuration, arms.test.kafka.KafkaConsumeTest.handler specifies the package, class, and method that processes individual messages in the sample file.

Integrate Spring for Apache Kafka with ARMS

Spring for Apache Kafka is built on top of the Apache Kafka clients. If you integrate Spring for Apache Kafka with ARMS, make sure that the mapping between the Spring for Apache Kafka version and the Kafka client version meets the requirements described in the following table. For more information, see the Spring for Apache Kafka official documentation.

Spring for Apache Kafka version | Spring Integration for Apache Kafka version | Kafka client version | Spring Boot version
V2.7.0 | V5.4.x           | V2.7.0 or V2.8.0 | V2.4.x or V2.5.x
V2.6.x | V5.3.x or V5.4.x | V2.6.0           | V2.3.x or V2.4.x
V2.5.x | V3.3.x           | V2.5.1           | V2.3.x
V2.4.x | V3.2.x           | V2.4.1           | V2.2.x
V2.3.x | V3.2.x           | V2.3.1           | V2.2.x

If you integrate Kafka with ARMS by using Spring for Apache Kafka, the Kafka plug-in of the ARMS agent can monitor trace data by default. No additional configurations are required.
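For reference, the following sketch shows a minimal Spring for Apache Kafka listener. The class name, topic, and group ID are placeholders, and the sketch assumes that a Kafka listener container factory is already configured in the Spring application. Because Spring for Apache Kafka dispatches each record to the listener method, no entry-point configuration such as the one used for native consumers is needed.

 package arms.test.kafka;

 import org.apache.kafka.clients.consumer.ConsumerRecord;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 import org.springframework.kafka.annotation.KafkaListener;
 import org.springframework.stereotype.Component;

 @Component
 public class SpringKafkaConsumeTest {
     private static final Logger LOGGER = LoggerFactory.getLogger(SpringKafkaConsumeTest.class);

     // Spring for Apache Kafka invokes this method once per record, so the ARMS
     // agent can trace each message without additional configuration.
     @KafkaListener(topics = "test", groupId = "arms-test-group")
     public void onMessage(ConsumerRecord<String, String> record) {
         LOGGER.info(record.toString());
     }
 }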

Trace data overhead

By default, the Kafka plug-in of the ARMS agent adds the following fields to Kafka message headers. Application Monitoring of ARMS uses these fields to monitor trace data.

  • EagleEye-TraceID
  • EagleEye-RpcID
  • EagleEye-SpanID
  • EagleEye-pSpanID
  • EagleEye-pAppName
  • EagleEye-pRpc
  • EagleEye-IP
  • EagleEye-ROOT-APP
  • EagleEye-Sampled
  • uber-trace-id
  • X-B3-TraceId
  • X-B3-SpanId
  • X-B3-ParentSpanId
  • X-B3-Flags
  • X-B3-Sampled
  • traceparent
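If you want to verify that these headers are present, for example when troubleshooting, you can read them from a consumed record by using the standard Kafka Headers API, as shown in the following sketch. The class and method names are placeholders; only the header name EagleEye-TraceID comes from the preceding list.

 package arms.test.kafka;

 import java.nio.charset.StandardCharsets;

 import org.apache.kafka.clients.consumer.ConsumerRecord;
 import org.apache.kafka.common.header.Header;

 public class TraceHeaderCheck {
     public static String traceId(ConsumerRecord<String, String> record) {
         // "EagleEye-TraceID" is one of the headers listed above. lastHeader()
         // returns null if the message was produced without the ARMS agent.
         Header header = record.headers().lastHeader("EagleEye-TraceID");
         return header == null ? null : new String(header.value(), StandardCharsets.UTF_8);
     }
 }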