This topic describes how to use the SDK for Java to connect a Java client to the default endpoint of Message Queue for Apache Kafka in a virtual private cloud (VPC), and then send and subscribe to messages.
Prerequisites
- Step 3: Create resources
- JDK 1.8 or later is installed. For more information, see Java SE Downloads.
- Maven 2.5 or later is installed. For more information, see Download Apache Maven.
Install Java dependencies
Add the following dependencies to the pom.xml file:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.4.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.6</version>
</dependency>
Note We recommend that you keep the client library version consistent with the major version of your Message Queue for Apache Kafka instance. You can obtain the major version of the instance on the Instance Details page in the Message Queue for Apache Kafka console.
Preparations
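The demo code in this topic calls JavaKafkaConfigurer.getKafkaProperties(), a small helper that is not shown here. The following is a minimal sketch of such a helper, assuming that a kafka.properties file is available on the classpath or in the working directory and defines the bootstrap.servers, topic, and group.id keys used by the demos; the class name and loading strategy are assumptions, not the exact helper shipped with the SDK.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical sketch of the JavaKafkaConfigurer helper used by the demos.
public class JavaKafkaConfigurer {
    private static volatile Properties properties;

    // Load kafka.properties once, from the classpath first and then
    // from the current working directory as a fallback.
    public static Properties getKafkaProperties() {
        if (properties == null) {
            synchronized (JavaKafkaConfigurer.class) {
                if (properties == null) {
                    Properties props = new Properties();
                    try {
                        InputStream in = JavaKafkaConfigurer.class.getClassLoader()
                                .getResourceAsStream("kafka.properties");
                        if (in == null) {
                            in = new FileInputStream("kafka.properties");
                        }
                        props.load(in);
                        in.close();
                    } catch (Exception e) {
                        throw new RuntimeException("Failed to load kafka.properties", e);
                    }
                    properties = props;
                }
            }
        }
        return properties;
    }
}
```

The kafka.properties file is expected to contain entries such as bootstrap.servers (the default endpoint from the console), topic (one or more topic names), and group.id (the consumer group created in the console).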
Send messages
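The producer listing for this step is not included above. The following is a minimal sketch in the same style as the consumer demos below, assuming the same kafka.properties keys (bootstrap.servers and topic) loaded through JavaKafkaConfigurer; the class name KafkaProducerDemo, the helper method buildProducerProperties, and the message contents are illustrative placeholders.

```java
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class KafkaProducerDemo {

    // Build producer settings from the values loaded out of kafka.properties.
    static Properties buildProducerProperties(Properties kafkaProperties) {
        Properties props = new Properties();
        // Specify the default endpoint of the instance.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
        // Serialize message keys and values as strings.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // Fail a send request that cannot complete within 30 seconds.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 30 * 1000);
        // Retry transient send failures before reporting an error.
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        return props;
    }

    public static void main(String[] args) {
        // Load the kafka.properties file.
        Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();
        Properties props = buildProducerProperties(kafkaProperties);
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        String topic = kafkaProperties.getProperty("topic");
        try {
            for (int i = 0; i < 10; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<String, String>(topic, "message key", "message value " + i);
                // Send the message and block until the broker acknowledges it.
                Future<RecordMetadata> future = producer.send(record);
                RecordMetadata metadata = future.get();
                System.out.println(String.format("Produce partition:%d offset:%d",
                        metadata.partition(), metadata.offset()));
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            producer.close();
        }
    }
}
```

Blocking on future.get() keeps the example simple; in production code you would typically pass a callback to send() instead and let acknowledgments arrive asynchronously.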
Consume messages
You can consume messages by using one of the following methods:
- Use a single consumer to consume messages
- Create a single-consumer program named KafkaConsumerDemo.java that contains the following code:
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class KafkaConsumerDemo {
    public static void main(String[] args) {
        // Load the kafka.properties file.
        Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Specify an endpoint. Obtain the default endpoint of the corresponding instance in the Message Queue for Apache Kafka console.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
        // Set the session timeout. If the broker does not receive a heartbeat from the consumer within this interval,
        // it determines that the consumer is not alive, removes it from the consumer group, and triggers load balancing.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // Set the maximum number of messages that can be polled at a time.
        // Do not set this parameter to an excessively large value. If the polled messages are not all consumed
        // before the next polling cycle starts, load balancing is triggered and performance may deteriorate.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
        // Set the deserializers for message keys and values.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Specify the consumer group of the current consumer. You must create the consumer group in the Message Queue for Apache Kafka console.
        // The consumers in a consumer group consume messages in load balancing mode.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));
        // Create a consumer object.
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        // Specify one or more topics to which the consumer group subscribes.
        // We recommend that you configure consumers with the same GROUP_ID_CONFIG value to subscribe to the same topics.
        List<String> subscribedTopics = new ArrayList<String>();
        // If you want to subscribe to multiple topics, add the topics here.
        // You must create the topics in the Message Queue for Apache Kafka console in advance.
        String topicStr = kafkaProperties.getProperty("topic");
        String[] topics = topicStr.split(",");
        for (String topic : topics) {
            subscribedTopics.add(topic.trim());
        }
        consumer.subscribe(subscribedTopics);
        // Consume messages in a loop.
        while (true) {
            try {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                // All polled messages must be processed before the next polling cycle starts. The total processing
                // time must not exceed the interval specified by MAX_POLL_INTERVAL_MS_CONFIG.
                // We recommend that you create a separate thread pool to consume messages and then asynchronously return the results.
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(String.format("Consume partition:%d offset:%d", record.partition(), record.offset()));
                }
            } catch (Exception e) {
                try {
                    Thread.sleep(1000);
                } catch (Throwable ignore) {
                }
                e.printStackTrace();
            }
        }
    }
}
- Compile and run KafkaConsumerDemo.java to consume messages.
- Use multiple consumers to consume messages
- Create a multi-consumer program named KafkaMultiConsumerDemo.java that contains the following code:
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.errors.WakeupException;

/**
 * This tutorial shows you how to use multiple consumers to simultaneously consume messages in one process.
 * Make sure that the total number of consumers in the environment does not exceed the number of partitions
 * of the topics to which the consumers subscribe.
 */
public class KafkaMultiConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Load the kafka.properties file.
        Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Specify an endpoint. Obtain the default endpoint of the corresponding instance in the Message Queue for Apache Kafka console.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
        // Set the session timeout. If the broker does not receive a heartbeat from the consumer within this interval,
        // it determines that the consumer is not alive, removes it from the consumer group, and triggers load balancing.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // Set the maximum number of messages that can be polled at a time.
        // Do not set this parameter to an excessively large value. If the polled messages are not all consumed
        // before the next polling cycle starts, load balancing is triggered and performance may deteriorate.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
        // Set the deserializers for message keys and values.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Specify the consumer group of the current consumer. You must create the consumer group in the Message Queue for Apache Kafka console.
        // The consumers in a consumer group consume messages in load balancing mode.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));

        int consumerNum = 2;
        Thread[] consumerThreads = new Thread[consumerNum];
        for (int i = 0; i < consumerNum; i++) {
            KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
            List<String> subscribedTopics = new ArrayList<String>();
            subscribedTopics.add(kafkaProperties.getProperty("topic"));
            consumer.subscribe(subscribedTopics);
            KafkaConsumerRunner kafkaConsumerRunner = new KafkaConsumerRunner(consumer);
            consumerThreads[i] = new Thread(kafkaConsumerRunner);
        }
        for (int i = 0; i < consumerNum; i++) {
            consumerThreads[i].start();
        }
        for (int i = 0; i < consumerNum; i++) {
            consumerThreads[i].join();
        }
    }

    static class KafkaConsumerRunner implements Runnable {
        private final AtomicBoolean closed = new AtomicBoolean(false);
        private final KafkaConsumer<String, String> consumer;

        KafkaConsumerRunner(KafkaConsumer<String, String> consumer) {
            this.consumer = consumer;
        }

        @Override
        public void run() {
            try {
                while (!closed.get()) {
                    try {
                        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                        // All polled messages must be processed before the next polling cycle starts. The total
                        // processing time must not exceed the interval specified by MAX_POLL_INTERVAL_MS_CONFIG.
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.println(String.format("Thread:%s Consume partition:%d offset:%d",
                                    Thread.currentThread().getName(), record.partition(), record.offset()));
                        }
                    } catch (Exception e) {
                        try {
                            Thread.sleep(1000);
                        } catch (Throwable ignore) {
                        }
                        e.printStackTrace();
                    }
                }
            } catch (WakeupException e) {
                // If the consumer is being shut down, ignore the exception.
                if (!closed.get()) {
                    throw e;
                }
            } finally {
                consumer.close();
            }
        }

        // Implement a shutdown hook that can be called by another thread.
        public void shutdown() {
            closed.set(true);
            consumer.wakeup();
        }
    }
}
- Compile and run KafkaMultiConsumerDemo.java to consume messages.