This topic describes how a Java client connects to Message Queue for Apache Kafka over a Simple Authentication and Security Layer (SASL) endpoint in a virtual private cloud (VPC), and sends and subscribes to messages by using the PLAIN mechanism.
Prerequisites
- JDK 1.8 or later is installed.
- Maven 2.5 or later is installed. For more information, see Install Maven 2.5 or later.
- SASL users are authorized. For more information, see Authorize SASL users.
Install Java dependencies
Add the following dependencies to the pom.xml file:
```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.2</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.6</version>
</dependency>
```
Note We recommend that you keep the client and broker versions consistent, that is, keep the major version of the client library consistent with the major version of your Message Queue for Apache Kafka instance. You can obtain the major version of the instance on the Instance Details page in the Message Queue for Apache Kafka console.
Preparations
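The demos in this topic call JavaKafkaConfigurer.configureSaslPlain() and JavaKafkaConfigurer.getKafkaProperties(), and read the bootstrap.servers, topic, and group.id keys from a kafka.properties file. The following is a minimal sketch of these configuration files and of the helper class, assuming that both files are placed on the classpath and that the JAAS file is named kafka_client_jaas.conf. The file names, placeholder values, and helper implementation are illustrative; replace the placeholders with the SASL endpoint, topic, consumer group, and SASL user of your own instance.

kafka_client_jaas.conf:

```
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="your-sasl-username"
  password="your-sasl-password";
};
```

kafka.properties:

```
## The SASL endpoint of the instance, obtained in the Message Queue for Apache Kafka console.
bootstrap.servers=ip1:port1,ip2:port2,ip3:port3
## The topic that you created in the Message Queue for Apache Kafka console.
topic=your-topic
## The consumer group that you created in the Message Queue for Apache Kafka console.
group.id=your-consumer-group
```

JavaKafkaConfigurer.java (a sketch of the helper class used by the demos):

```java
import java.net.URL;
import java.util.Properties;

public class JavaKafkaConfigurer {

    private static volatile Properties properties;

    // Point the JVM at the JAAS file that holds the PLAIN username and password.
    public static void configureSaslPlain() {
        // Only set the property if it has not already been configured, for example through a JVM option.
        if (null == System.getProperty("java.security.auth.login.config")) {
            URL jaasUrl = JavaKafkaConfigurer.class.getClassLoader().getResource("kafka_client_jaas.conf");
            System.setProperty("java.security.auth.login.config", jaasUrl.getPath());
        }
    }

    // Load kafka.properties from the classpath once and cache it.
    public static synchronized Properties getKafkaProperties() {
        if (null == properties) {
            Properties kafkaProperties = new Properties();
            try {
                kafkaProperties.load(JavaKafkaConfigurer.class.getClassLoader().getResourceAsStream("kafka.properties"));
            } catch (Exception e) {
                throw new RuntimeException("Failed to load kafka.properties from the classpath", e);
            }
            properties = kafkaProperties;
        }
        return properties;
    }
}
```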
Send messages
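A producer uses the same SASL_PLAINTEXT protocol and PLAIN mechanism settings as the consumer demos in the next section. The following KafkaProducerDemo.java is a minimal sketch, assuming the kafka.properties file and the JavaKafkaConfigurer helper described in Preparations; the class name and the test message are illustrative only.

```java
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.config.SaslConfigs;

public class KafkaProducerDemo {

    public static void main(String args[]) {
        // Set the path of the JAAS configuration file.
        JavaKafkaConfigurer.configureSaslPlain();

        // Load the kafka.properties file.
        Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Set the endpoint. Obtain the SASL endpoint in the Message Queue for Apache Kafka console.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
        // Specify the access protocol.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        // Specify the PLAIN mechanism.
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Set the method for serializing messages.
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        // Construct a producer object. A single producer instance is thread-safe and can be shared.
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        // The topic must be created in the Message Queue for Apache Kafka console in advance.
        String topic = kafkaProperties.getProperty("topic");
        try {
            // Send a test message and wait for the broker acknowledgment.
            Future<RecordMetadata> future = producer.send(new ProducerRecord<String, String>(topic, "test message"));
            RecordMetadata metadata = future.get();
            System.out.println(String.format("Send partition:%d offset:%d", metadata.partition(), metadata.offset()));
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            producer.close();
        }
    }
}
```

Compile and run KafkaProducerDemo.java to send messages.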
Subscribe to messages
You can subscribe to messages by using one of the following methods:
- Enable only one consumer to subscribe to messages
- Create a single-consumer subscription program named KafkaConsumerDemo.java.
```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;

public class KafkaConsumerDemo {

    public static void main(String args[]) {
        // Set the path of the JAAS configuration file.
        JavaKafkaConfigurer.configureSaslPlain();

        // Load the kafka.properties file.
        Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Set the endpoint. Obtain the endpoint of the corresponding topic in the Message Queue for Apache Kafka console.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
        // Specify the access protocol.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        // Specify the PLAIN mechanism.
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Set the session timeout. The default value is 30s. If the broker does not receive a heartbeat from the consumer within this interval,
        // the broker determines that the consumer is not alive, removes it from the consumer group, and triggers rebalancing.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // Set the maximum number of messages that can be polled at a time.
        // Do not set this parameter to an excessively large value. If polled messages are not all consumed before the next poll starts, load balancing is triggered and performance may deteriorate.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
        // Set the method for deserializing messages.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Set the consumer group of the current consumer instance. You must create the consumer group in the Message Queue for Apache Kafka console.
        // The instances in a consumer group consume messages in load balancing mode.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));

        // Construct a consumer object. This generates a consumer instance.
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        // Set one or more topics to which the consumer group subscribes.
        // We recommend that you configure consumer instances with the same GROUP_ID_CONFIG value to subscribe to the same topics.
        List<String> subscribedTopics = new ArrayList<String>();
        // If you want to subscribe to multiple topics, add the topics here.
        // You must create the topics in the Message Queue for Apache Kafka console in advance.
        String topicStr = kafkaProperties.getProperty("topic");
        String[] topics = topicStr.split(",");
        for (String topic : topics) {
            subscribedTopics.add(topic.trim());
        }
        consumer.subscribe(subscribedTopics);

        // Consume messages in a loop.
        while (true) {
            try {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the timeout interval specified by SESSION_TIMEOUT_MS_CONFIG.
                // We recommend that you create a separate thread pool to consume messages and then asynchronously return the results.
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(String.format("Consume partition:%d offset:%d", record.partition(), record.offset()));
                }
            } catch (Exception e) {
                try {
                    Thread.sleep(1000);
                } catch (Throwable ignore) {
                }
                e.printStackTrace();
            }
        }
    }
}
```
- Compile and run KafkaConsumerDemo.java to consume messages.
- Enable multiple consumers to subscribe to messages
- Create a multi-consumer subscription program named KafkaMultiConsumerDemo.java.
```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.errors.WakeupException;

/**
 * This tutorial shows you how to enable multiple consumers to simultaneously consume messages of the same topic in one process.
 * Make sure that the total number of consumers in the environment does not exceed the number of partitions of the topics to which the consumers are subscribed.
 */
public class KafkaMultiConsumerDemo {

    public static void main(String args[]) throws InterruptedException {
        // Set the path of the JAAS configuration file.
        JavaKafkaConfigurer.configureSaslPlain();

        // Load the kafka.properties file.
        Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Set the endpoint. Obtain the endpoint of the corresponding topic in the Message Queue for Apache Kafka console.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
        // Specify the access protocol.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        // Specify the PLAIN mechanism.
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Set the session timeout. The default value is 30s. If the broker does not receive a heartbeat from the consumer within this interval,
        // the broker determines that the consumer is not alive, removes it from the consumer group, and triggers rebalancing.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // Set the maximum number of messages that can be polled at a time.
        // Do not set this parameter to an excessively large value. If polled messages are not all consumed before the next poll starts, load balancing is triggered and performance may deteriorate.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
        // Set the method for deserializing messages.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // Set the consumer group of the current consumer instance. You must create the consumer group in the Message Queue for Apache Kafka console.
        // The instances in a consumer group consume messages in load balancing mode.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));

        int consumerNum = 2;
        Thread[] consumerThreads = new Thread[consumerNum];
        for (int i = 0; i < consumerNum; i++) {
            KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

            List<String> subscribedTopics = new ArrayList<String>();
            subscribedTopics.add(kafkaProperties.getProperty("topic"));
            consumer.subscribe(subscribedTopics);

            KafkaConsumerRunner kafkaConsumerRunner = new KafkaConsumerRunner(consumer);
            consumerThreads[i] = new Thread(kafkaConsumerRunner);
        }

        for (int i = 0; i < consumerNum; i++) {
            consumerThreads[i].start();
        }

        for (int i = 0; i < consumerNum; i++) {
            consumerThreads[i].join();
        }
    }

    static class KafkaConsumerRunner implements Runnable {
        private final AtomicBoolean closed = new AtomicBoolean(false);
        private final KafkaConsumer<String, String> consumer;

        KafkaConsumerRunner(KafkaConsumer<String, String> consumer) {
            this.consumer = consumer;
        }

        @Override
        public void run() {
            try {
                while (!closed.get()) {
                    try {
                        ConsumerRecords<String, String> records = consumer.poll(1000);
                        // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the timeout interval specified by SESSION_TIMEOUT_MS_CONFIG.
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.println(String.format("Thread:%s Consume partition:%d offset:%d", Thread.currentThread().getName(), record.partition(), record.offset()));
                        }
                    } catch (Exception e) {
                        try {
                            Thread.sleep(1000);
                        } catch (Throwable ignore) {
                        }
                        e.printStackTrace();
                    }
                }
            } catch (WakeupException e) {
                // If the consumer is shut down, ignore exceptions.
                if (!closed.get()) {
                    throw e;
                }
            } finally {
                consumer.close();
            }
        }

        // Implement a shutdown hook that can be called by another thread.
        public void shutdown() {
            closed.set(true);
            consumer.wakeup();
        }
    }
}
```
- Compile and run KafkaMultiConsumerDemo.java to consume messages.