This topic describes how to use the SDK for Java to connect to the SSL endpoint of a Message Queue for Apache Kafka instance and use the PLAIN mechanism to send and consume messages over the Internet.

Prerequisites

Install Java dependencies

Add the following dependencies to the pom.xml file:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.6</version>
</dependency>
Note: We recommend that you keep the client version consistent with the broker version. Specifically, the client library version must match the major version of the Message Queue for Apache Kafka instance. You can obtain the major version of the instance on the Instance Details page in the Message Queue for Apache Kafka console.

Preparations

  1. Create a Log4j configuration file named log4j.properties.
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements.  See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License.  You may obtain a copy of the License at
    #
    #    http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    log4j.rootLogger=INFO, STDOUT
    
    log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
    log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
    log4j.appender.STDOUT.layout.ConversionPattern=[%d] %p %m (%c)%n
  2. Download an SSL root certificate.
  3. Create a JAAS configuration file named kafka_client_jaas.conf.
    KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="xxxx"
      password="xxxx";
    };                       
    Note
    • If the access control list (ACL) feature is disabled for the Message Queue for Apache Kafka instance, you can obtain the username and password of the default Simple Authentication and Security Layer (SASL) user for the instance on the Instance Details page in the Message Queue for Apache Kafka console.
    • If ACL is enabled for the Message Queue for Apache Kafka instance, make sure that the SASL user to be used is of the PLAIN type and that the user is authorized to send and consume messages. For more information, see Grant permissions to SASL users.
  4. Create a Message Queue for Apache Kafka configuration file named kafka.properties.
    ## The SSL endpoint, which you can obtain in the Message Queue for Apache Kafka console. 
    bootstrap.servers=xxxx
    ## The topic, which you create in the Message Queue for Apache Kafka console. 
    topic=xxxx
    ## The consumer group, which you create in the Message Queue for Apache Kafka console. 
    group.id=xxxx
    ## The SSL root certificate. 
    ssl.truststore.location=/xxxx/kafka.client.truststore.jks
    ## The JAAS configuration file. 
    java.security.auth.login.config=/xxxx/kafka_client_jaas.conf                       
  5. Create a program named JavaKafkaConfigurer.java to load the configuration files.
    import java.util.Properties;
    
    public class JavaKafkaConfigurer {
    
        private static Properties properties;
    
        public static void configureSasl() {
            // If you have used the -D parameter or another method to set the path, do not set it again in this section. 
            if (null == System.getProperty("java.security.auth.login.config")) {
                // Replace XXX with the actual path. 
                // Make sure that the path is readable by the file system. Do not compress configuration files into JAR packages. 
                System.setProperty("java.security.auth.login.config", getKafkaProperties().getProperty("java.security.auth.login.config"));
            }
        }
    
        public synchronized static Properties getKafkaProperties() {
            if (null != properties) {
                return properties;
            }
            // Obtain the content of the kafka.properties file. 
            Properties kafkaProperties = new Properties();
            try {
                kafkaProperties.load(JavaKafkaConfigurer.class.getClassLoader().getResourceAsStream("kafka.properties"));
            } catch (Exception e) {
                // If the file cannot be loaded, print the error and exit the program. 
                e.printStackTrace();
                System.exit(1);
            }
            properties = kafkaProperties;
            return kafkaProperties;
        }
    }                    
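Before constructing the producer or consumers, it can help to fail fast if kafka.properties is missing a key that the demo programs read. The following is a minimal pure-JDK sketch of that check; the KafkaConfigValidator class name is illustrative and not part of the SDK:

```java
import java.util.Properties;

public class KafkaConfigValidator {

    // The keys that the demo programs in this topic read from kafka.properties.
    private static final String[] REQUIRED_KEYS = {
        "bootstrap.servers", "topic", "group.id",
        "ssl.truststore.location", "java.security.auth.login.config"
    };

    // Throws IllegalStateException if a required key is absent or blank.
    public static void validate(Properties props) {
        for (String key : REQUIRED_KEYS) {
            String value = props.getProperty(key);
            if (value == null || value.trim().isEmpty()) {
                throw new IllegalStateException("Missing required property: " + key);
            }
        }
    }
}
```

For example, calling KafkaConfigValidator.validate(JavaKafkaConfigurer.getKafkaProperties()) at the start of main makes a misconfigured file fail immediately instead of during the first send or poll.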

Send messages

  1. Create a producer program named KafkaProducerDemo.java that contains the following code:
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.Future;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.config.SslConfigs;
    
    public class KafkaProducerDemo {
    
        public static void main(String args[]) {
            // Specify the path of the JAAS configuration file. 
            JavaKafkaConfigurer.configureSasl();
    
            // Load the kafka.properties file. 
            Properties kafkaProperties =  JavaKafkaConfigurer.getKafkaProperties();
    
            Properties props = new Properties();
            // Specify an endpoint. Obtain the SSL endpoint of the corresponding instance in the Message Queue for Apache Kafka console. 
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
            // Specify the path of the SSL root certificate. Replace XXX with the actual path. 
            // Do not compress the certificate file into a JAR package. 
            props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
            // The password of the truststore in the root certificate store. Use the default value. 
            props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
            // Specify the access protocol. Set the value to SASL_SSL. 
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
            // Specify the SASL authentication method. Use the default value. 
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Set the method for serializing Message Queue for Apache Kafka messages. 
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
            // Set the maximum time to wait for a request. 
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 30 * 1000);
            // Set the maximum number of retries allowed for the client. 
            props.put(ProducerConfig.RETRIES_CONFIG, 5);
            // Set the interval between two consecutive retries for the client. 
            props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 3000);
    
            // Set the algorithm for hostname verification to an empty value. 
            props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
    
            // Create a thread-safe producer object. One producer object can serve one process. 
            // To improve performance, you can create multiple objects. We recommend that you create no more than five objects. 
            KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
    
            // Create a Message Queue for Apache Kafka message. 
            String topic = kafkaProperties.getProperty("topic"); // The topic of the message. Enter the topic that you created in the Message Queue for Apache Kafka console. 
            String value = "this is the message's value"; // The content of the message. 
    
            try {
                // To improve efficiency, send multiple messages first and then wait on their future objects in a batch. Do not accumulate too many future objects at a time. 
                List<Future<RecordMetadata>> futures = new ArrayList<Future<RecordMetadata>>(128);
                for (int i =0; i < 100; i++) {
                    // Send the message and obtain a future object. 
                    ProducerRecord<String, String> kafkaMessage =  new ProducerRecord<String, String>(topic, value + ": " + i);
                    Future<RecordMetadata> metadataFuture = producer.send(kafkaMessage);
                    futures.add(metadataFuture);
    
                }
                producer.flush();
                for (Future<RecordMetadata> future: futures) {
                    // Synchronize the future object. 
                    try {
                        RecordMetadata recordMetadata = future.get();
                        System.out.println("Produce ok:" + recordMetadata.toString());
                    } catch (Throwable t) {
                        t.printStackTrace();
                    }
                }
            } catch (Exception e) {
                // If the message still fails to be sent after retries, troubleshoot the error. 
                System.out.println("error occurred");
                e.printStackTrace();
            }
        }
    }  
  2. Compile and run KafkaProducerDemo.java to send messages.
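If the producer fails with an SSL handshake or truststore error, you can verify the certificate file independently of Kafka. This pure-JDK sketch loads a JKS truststore with the path and password used above; the TruststoreCheck class name is illustrative:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class TruststoreCheck {

    // Loads a JKS truststore and returns the number of entries it contains.
    // Throws if the path is wrong or the password does not match.
    public static int countEntries(String path, String password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (InputStream in = new FileInputStream(path)) {
            trustStore.load(in, password.toCharArray());
        }
        return trustStore.size();
    }
}
```

For the configuration in this topic, that would be countEntries("/xxxx/kafka.client.truststore.jks", "KafkaOnsClient"); a positive entry count suggests the certificate file is readable and intact.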

Consume messages

You can consume messages by using one of the following methods:
  • Use a single consumer to consume messages
    1. Create a single-consumer program named KafkaConsumerDemo.java that contains the following code:
      import java.util.ArrayList;
      import java.util.List;
      import java.util.Properties;
      import org.apache.kafka.clients.CommonClientConfigs;
      import org.apache.kafka.clients.consumer.ConsumerConfig;
      import org.apache.kafka.clients.consumer.ConsumerRecord;
      import org.apache.kafka.clients.consumer.ConsumerRecords;
      import org.apache.kafka.clients.consumer.KafkaConsumer;
      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.common.config.SaslConfigs;
      import org.apache.kafka.common.config.SslConfigs;
      
      public class KafkaConsumerDemo {
      
          public static void main(String args[]) {
              // Specify the path of the JAAS configuration file. 
              JavaKafkaConfigurer.configureSasl();
      
              // Load the kafka.properties file. 
              Properties kafkaProperties =  JavaKafkaConfigurer.getKafkaProperties();
      
              Properties props = new Properties();
              // Specify an endpoint. Obtain the SSL endpoint of the corresponding instance in the Message Queue for Apache Kafka console. 
              props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
              // Specify the path of the SSL root certificate. Replace XXX with the actual path. 
              // Do not compress the certificate file into a JAR package. 
              props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
              // Specify the password of the truststore in the root certificate store. Use the default value. 
              props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
              // Specify the access protocol. Set the value to SASL_SSL. 
              props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
              // Specify the SASL authentication method. Use the default value. 
              props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
              // Set the maximum interval between two consecutive polling cycles. 
              // The default interval is 30s. If the consumer does not return a heartbeat message within the interval, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers load balancing. 
              props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
              // Set the maximum message size allowed for a single poll operation. This parameter has a significant effect if data is transmitted over the Internet. 
              props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 32000);
              props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 32000);
              // Set the maximum number of messages that can be polled at a time. 
              // Do not set this parameter to an excessively large value. If the messages polled are not all consumed before the next polling cycle starts, load balancing is triggered and performance may deteriorate. 
              props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
              // Set the method for deserializing messages. 
              props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
              props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
              // Set the consumer group of the current consumer. You must create the consumer group in the Message Queue for Apache Kafka console. 
              // The consumers in a consumer group consume messages in load balancing mode. 
              props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));
              // Set the algorithm for hostname verification to an empty value. 
              props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
      
              // Create a consumer object. 
              KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
              // Specify one or more topics to which the consumer group subscribes. 
              // We recommend that you configure consumers with the same GROUP_ID_CONFIG value to subscribe to the same topics. 
              List<String> subscribedTopics =  new ArrayList<String>();
              // If you want to subscribe to multiple topics, add the topics here. 
              // You must create the topics in the Message Queue for Apache Kafka console in advance. 
              subscribedTopics.add(kafkaProperties.getProperty("topic"));
              consumer.subscribe(subscribedTopics);
      
              // Consume messages in a loop. 
              while (true){
                  try {
                      ConsumerRecords<String, String> records = consumer.poll(1000);
                      // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the interval specified by SESSION_TIMEOUT_MS_CONFIG. 
                      // We recommend that you create a separate thread pool to consume messages and then asynchronously return the results. 
                      for (ConsumerRecord<String, String> record : records) {
                          System.out.println(String.format("Consume partition:%d offset:%d", record.partition(), record.offset()));
                      }
                  } catch (Exception e) {
                      try {
                          Thread.sleep(1000);
                      } catch (Throwable ignore) {
      
                      }
                      e.printStackTrace();
                  }
              }
          }
      }
    2. Compile and run KafkaConsumerDemo.java to consume messages.
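    The comment in the poll loop above recommends handing records to a separate thread pool so that processing does not block the next poll. The hand-off pattern can be sketched with the JDK alone; the record type is simplified to String here, and the AsyncHandOff class name is illustrative:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncHandOff {

    private final ExecutorService workers;
    public final AtomicInteger processed = new AtomicInteger();

    public AsyncHandOff(int threads) {
        this.workers = Executors.newFixedThreadPool(threads);
    }

    // Submit each polled record to the pool so the poll loop returns quickly
    // and the consumer keeps sending heartbeats within the session timeout.
    public void dispatch(List<String> records) {
        for (String msg : records) {
            workers.submit(() -> {
                // Business logic for one record goes here.
                processed.incrementAndGet();
            });
        }
    }

    // Drain the pool on shutdown so no polled record is silently dropped.
    public void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

    Note that once processing is asynchronous, automatic offset commits can acknowledge records that have not finished processing yet; if that matters for your workload, disable auto commit and commit offsets only after the pool completes a batch.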
  • Use multiple consumers to consume messages
    1. Create a multi-consumer program named KafkaMultiConsumerDemo.java that contains the following code:
      import java.util.ArrayList;
      import java.util.List;
      import java.util.Properties;
      import java.util.concurrent.atomic.AtomicBoolean;
      import org.apache.kafka.clients.CommonClientConfigs;
      import org.apache.kafka.clients.consumer.ConsumerConfig;
      import org.apache.kafka.clients.consumer.ConsumerRecord;
      import org.apache.kafka.clients.consumer.ConsumerRecords;
      import org.apache.kafka.clients.consumer.KafkaConsumer;
      import org.apache.kafka.clients.producer.ProducerConfig;
      import org.apache.kafka.common.config.SaslConfigs;
      import org.apache.kafka.common.config.SslConfigs;
      import org.apache.kafka.common.errors.WakeupException;
      
      /**
       * This tutorial shows you how to use multiple consumers to simultaneously consume messages in one process. 
       * Make sure that the total number of consumers in the environment does not exceed the number of partitions of the topics to which the consumers subscribe. 
       */
      public class KafkaMultiConsumerDemo {
      
          public static void main(String args[]) throws InterruptedException {
              // Specify the path of the JAAS configuration file. 
              JavaKafkaConfigurer.configureSasl();
      
              // Load the kafka.properties file. 
              Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();
      
              Properties props = new Properties();
              // Specify an endpoint. Obtain the SSL endpoint of the corresponding instance in the Message Queue for Apache Kafka console. 
              props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
              // Specify the path of the SSL root certificate. Replace XXX with the actual path. 
              // Do not compress the certificate file into a JAR package. 
              props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
              // Specify the password of the truststore in the root certificate store. Use the default value. 
              props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
              // Specify the access protocol. Set the value to SASL_SSL. 
              props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
              // Specify the SASL authentication method. Use the default value. 
              props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
              // Set the maximum interval between two consecutive polling cycles. 
              // The default interval is 30s. If the consumer does not return a heartbeat message within the interval, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers load balancing. 
              props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
              // Set the maximum number of messages that can be polled at a time. 
              // Do not set this parameter to an excessively large value. If the messages polled are not all consumed before the next polling cycle starts, load balancing is triggered and performance may deteriorate. 
              props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
              // Set the method for deserializing messages. 
              props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
              props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
              // Set the consumer group of the current consumers. You must create the consumer group in the Message Queue for Apache Kafka console. 
              // The consumers in a consumer group consume messages in load balancing mode. 
              props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));
              // Set the algorithm for hostname verification to an empty value. 
              props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");

              int consumerNum = 2;
              Thread[] consumerThreads = new Thread[consumerNum];
              for (int i = 0; i < consumerNum; i++) {
                  // Create a consumer object. 
                  KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
      
                  List<String> subscribedTopics = new ArrayList<String>();
                  subscribedTopics.add(kafkaProperties.getProperty("topic"));
                  consumer.subscribe(subscribedTopics);
      
                  KafkaConsumerRunner kafkaConsumerRunner = new KafkaConsumerRunner(consumer);
                  consumerThreads[i] = new Thread(kafkaConsumerRunner);
              }
      
              for (int i = 0; i < consumerNum; i++) {
                  consumerThreads[i].start();
              }
      
              for (int i = 0; i < consumerNum; i++) {
                  consumerThreads[i].join();
              }
          }
      
          static class KafkaConsumerRunner implements Runnable {
              private final AtomicBoolean closed = new AtomicBoolean(false);
              private final KafkaConsumer<String, String> consumer;

              KafkaConsumerRunner(KafkaConsumer<String, String> consumer) {
                  this.consumer = consumer;
              }
      
              @Override
              public void run() {
                  try {
                      while (!closed.get()) {
                          try {
                              ConsumerRecords<String, String> records = consumer.poll(1000);
                              // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the interval specified by SESSION_TIMEOUT_MS_CONFIG. 
                              for (ConsumerRecord<String, String> record : records) {
                                  System.out.println(String.format("Thread:%s Consume partition:%d offset:%d", Thread.currentThread().getName(), record.partition(), record.offset()));
                              }
                          } catch (Exception e) {
                              try {
                                  Thread.sleep(1000);
                              } catch (Throwable ignore) {
      
                              }
                              e.printStackTrace();
                          }
                      }
                  } catch (WakeupException e) {
                      // If the consumer is shut down, ignore the exception. 
                      if (!closed.get()) {
                          throw e;
                      }
                  } finally {
                      consumer.close();
                  }
              }
      
              // Implement a shutdown hook that can be called by another thread. 
              public void shutdown() {
                  closed.set(true);
                  consumer.wakeup();
              }
          }
      }
    2. Compile and run KafkaMultiConsumerDemo.java to consume messages.
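KafkaConsumerRunner defines a shutdown() method, but the demo never calls it, so the process only stops when it is killed. A common pattern is to invoke shutdown() from a JVM shutdown hook registered with Runtime.getRuntime().addShutdownHook for each runner. The flag-and-wait mechanics can be sketched without the Kafka classes; StoppableLoop is an illustrative stand-in for KafkaConsumerRunner:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class StoppableLoop implements Runnable {

    private final AtomicBoolean closed = new AtomicBoolean(false);
    private final CountDownLatch stopped = new CountDownLatch(1);

    @Override
    public void run() {
        try {
            while (!closed.get()) {
                // Stands in for consumer.poll(...) and record processing.
            }
        } finally {
            // Stands in for consumer.close(): runs exactly once, on exit.
            stopped.countDown();
        }
    }

    // Called from another thread, such as a JVM shutdown hook.
    // Setting the flag stands in for closed.set(true) plus consumer.wakeup().
    public void shutdown() throws InterruptedException {
        closed.set(true);
        stopped.await(); // Wait until the loop has cleaned up.
    }
}
```

In the demo itself, the equivalent wiring would register a shutdown hook that calls kafkaConsumerRunner.shutdown() for each of the two runners before the JVM exits.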