This topic describes how to use the SDK for Java to connect to an endpoint of a Message Queue for Apache Kafka instance to send and receive messages.

Prerequisites

Install the Java dependency library

Add the following dependencies to the pom.xml file:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.6</version>
</dependency>
Note We recommend that you use a client library version that matches the major version of your Message Queue for Apache Kafka instance. You can view the major version of the instance on the Instance Details page in the Message Queue for Apache Kafka console.
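
For example, the major version of kafka-clients 2.4.0 is 2. The helper below is a trivial, illustrative sketch (not part of the demo) that extracts the major version from a dotted version string so that you can compare it with the instance version shown in the console:

```java
public class VersionCheck {
    // Extract the major version from a dotted version string, e.g. "2.4.0" -> 2.
    static int majorVersion(String version) {
        return Integer.parseInt(version.split("\\.")[0]);
    }

    public static void main(String[] args) {
        System.out.println(majorVersion("2.4.0")); // prints 2
    }
}
```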

Prepare a configuration file

  1. Optional: Download the SSL root certificate. If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, you must install this certificate.
  2. Go to the Aliware-kafka-demos page, click the code icon to download the demo project to your on-premises machine, and then decompress the package of the demo project.
  3. In the decompressed demo project, find the kafka-java-demo folder and import the files in the folder to IntelliJ IDEA.
  4. Optional: Modify the kafka_client_jaas.conf configuration file if you use an SSL endpoint or a Simple Authentication and Security Layer (SASL) endpoint to access the Message Queue for Apache Kafka instance. For information about different endpoints of an instance, see Comparison among endpoints.
    KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="xxxx"
      password="xxxx";
    }; 
    Set username and password to the username and password of your instance.
    • If the access control list (ACL) feature is disabled for the instance, you can obtain the default username and password in the Configuration Information section of the Instance Details page in the Message Queue for Apache Kafka console.
    • If the ACL feature is enabled for the instance, make sure that the SASL user to be used is of the PLAIN type and that the user is authorized to send and consume messages. For more information, see Grant permissions to SASL users.
  5. Modify the kafka.properties configuration file.
    ##==============================Common configuration parameters==============================
    bootstrap.servers=xxxxxxxxxxxxxxxxxxxxx
    topic=xxx
    group.id=xxx
    ##=======================Configure the following parameters based on your business requirements.========================
    ## Configure the SSL endpoint
    ssl.truststore.location=/xxxx/kafka.client.truststore.jks
    java.security.auth.login.config=/xxxx/kafka_client_jaas.conf
    ## Configure the PLAIN mechanism for the SASL endpoint
    java.security.auth.login.config.plain=/xxxx/kafka_client_jaas_plain.conf
    ## Configure the Salted Challenge Response Authentication Mechanism (SCRAM) for the SASL endpoint
    java.security.auth.login.config.scram=/xxxx/kafka_client_jaas_scram.conf
    The following list describes the parameters:
    • bootstrap.servers: The endpoint information. You can obtain the information in the Endpoint Information section of the Instance Details page in the Message Queue for Apache Kafka console.
    • topic: The name of the topic in the instance. You can obtain the name of the topic on the Topics page in the Message Queue for Apache Kafka console.
    • group.id: The ID of the consumer group in the instance. You can obtain the ID of the consumer group on the Groups page in the Message Queue for Apache Kafka console.
      Note This parameter is optional if the client runs KafkaProducerDemo.java to send messages, and required if the client runs KafkaConsumerDemo.java to subscribe to messages.
    • ssl.truststore.location: The path of the SSL root certificate. Replace xxxx with the actual value. Example: /home/doc/project/kafka-java-demo/ssl/src/main/resources/kafka.client.truststore.jks.
      Note This parameter is not required if the default endpoint or an SASL endpoint is used, and required if an SSL endpoint is used.
    • java.security.auth.login.config: The path of the JAAS configuration file. Replace xxxx with the actual value. Example: /home/doc/project/kafka-java-demo/ssl/src/main/resources/kafka_client_jaas.conf.
      Note This parameter is not required if the default endpoint is used, and required if an SSL endpoint or an SASL endpoint is used.
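
The demo's JavaKafkaConfigurer helper (shipped with kafka-java-demo) loads kafka.properties and points the JVM at the JAAS file through the standard java.security.auth.login.config system property. The sketch below shows what such helpers typically do; it assumes the property keys listed above, and the actual helper in the downloaded demo may differ:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class JaasConfigSketch {
    // Load kafka.properties from the given path.
    static Properties loadKafkaProperties(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }

    // Point the JVM at the JAAS file. The Kafka client's SASL login reads
    // this standard system property when sasl.jaas.config is not set.
    static void configureSasl(Properties kafkaProperties) {
        if (System.getProperty("java.security.auth.login.config") == null) {
            System.setProperty("java.security.auth.login.config",
                    kafkaProperties.getProperty("java.security.auth.login.config"));
        }
    }
}
```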

Send messages

Compile and run KafkaProducerDemo.java to send messages.

Sample code
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Future;
// If you use an SSL endpoint or SASL endpoint to access the Message Queue for Apache Kafka instance, comment out the first line of the following code: 
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
/*
*If you use an SSL endpoint or SASL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following two lines of code: 
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
*/

public class KafkaProducerDemo {

    public static void main(String args[]) {
          
       /*
        * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
        Specify the path of the JAAS configuration file. 
        JavaKafkaConfigurer.configureSasl();
        */
         
       /*
        * If you use an SASL endpoint that uses the PLAIN mechanism to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
        Specify the path of the JAAS configuration file. 
        JavaKafkaConfigurer.configureSaslPlain();
        */
       
       /*
        * If you use an SASL endpoint that uses the SCRAM mechanism to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
        Specify the path of the JAAS configuration file. 
        JavaKafkaConfigurer.configureSaslScram();
        */

        // Load kafka.properties. 
        Properties kafkaProperties =  JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Specify the endpoint. You can obtain the endpoint information in the Endpoint Information section of the Instance Details page in the Message Queue for Apache Kafka console. 
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
         
       /*
        * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following four lines of code: 
        * Do not compress this file into a JAR package. 
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
        * The password of the truststore in the root certificate store. Use the default value. 
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
        * The access protocol. Set the value to SASL_SSL. 
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        * The SASL authentication method. Use the default value. 
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        */

       /*
        * If you use an SASL endpoint that uses the PLAIN mechanism to access the Message Queue for Apache Kafka instance, uncomment the following two lines of code: 
        * The access protocol. 
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        * The PLAIN mechanism. 
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        */

       /*
        * If you use an SASL endpoint that uses the SCRAM mechanism to access the Message Queue for Apache Kafka instance, uncomment the following two lines of code: 
        * The access protocol. 
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        * The SCRAM mechanism. 
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
        */

        // The method for serializing the messages in Message Queue for Apache Kafka. 
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // The maximum waiting time for a request. 
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 30 * 1000);
        // Specify the maximum number of retries for the messages in the client. 
        props.put(ProducerConfig.RETRIES_CONFIG, 5);
        // Specify the interval between two consecutive retries for the messages in the client. 
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 3000);
         
       /*
        * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
        * Set the algorithm for hostname verification to an empty value. 
        props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
        */

        // Construct a thread-safe producer object. Construct one producer object for a process. 
        // To improve performance, you can construct multiple producer objects. We recommend that you construct no more than five producer objects. 
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

        // Construct a Message Queue for Apache Kafka message. 
        String topic = kafkaProperties.getProperty("topic"); // The topic to which the message belongs. Enter the topic that you created in the Message Queue for Apache Kafka console. 
        String value = "this is the message's value"; // The content of the message. 

        try {
            // Obtaining multiple future objects at a time can help improve efficiency. However, do not obtain a large number of future objects at a time. 
            List<Future<RecordMetadata>> futures = new ArrayList<Future<RecordMetadata>>(128);
            for (int i =0; i < 100; i++) {
                // Send the message and obtain a future object. 
                ProducerRecord<String, String> kafkaMessage =  new ProducerRecord<String, String>(topic, value + ": " + i);
                Future<RecordMetadata> metadataFuture = producer.send(kafkaMessage);
                futures.add(metadataFuture);

            }
            producer.flush();
            for (Future<RecordMetadata> future: futures) {
                // Obtain the results of the future object in a synchronous manner. 
                try {
                    RecordMetadata recordMetadata = future.get();
                    System.out.println("Produce ok:" + recordMetadata.toString());
                } catch (Throwable t) {
                    t.printStackTrace();
                }
            }
        } catch (Exception e) {
            // If the message still fails to be sent after retries, troubleshoot the error. 
            System.out.println("error occurred");
            e.printStackTrace();
        }
    }
}
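
KafkaProducerDemo overlaps sends by collecting the Future objects returned by producer.send() and resolving them only after flush(). The same submit-then-resolve pattern can be sketched with the JDK's ExecutorService alone (no Kafka dependency; the class name and numbers below are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchedFutures {
    // Submit all tasks first, then resolve the futures, mirroring the
    // producer demo: send() in a loop, flush(), then future.get().
    static int compute() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<Future<Integer>>(128);
        for (int i = 0; i < 100; i++) {
            final int n = i;
            futures.add(pool.submit(() -> n * 2)); // analogous to producer.send(...)
        }
        int sum = 0;
        for (Future<Integer> f : futures) {
            sum += f.get(); // resolve results afterwards, as after flush()
        }
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // prints 9900
    }
}
```

Resolving the futures only after all sends are submitted lets the client batch requests instead of blocking on each message.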

Subscribe to messages

You can subscribe to messages by using one of the following methods:
  • Subscribe to messages by using a single consumer. In this case, you can compile and run KafkaConsumerDemo.java to consume messages:
    Sample code
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    
    
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    /*
    * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following three lines of code. If you use an SASL endpoint to access the Message Queue for Apache Kafka instance, uncomment the first two lines of the following code: 
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.config.SslConfigs;
    */
    
    public class KafkaConsumerDemo {
    
        public static void main(String args[]) {
    
            // Specify the path of the JAAS configuration file. 
            /*
             * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
            JavaKafkaConfigurer.configureSasl();
             */
                            
            /*
             * If you use an SASL endpoint that uses the PLAIN mechanism to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
            JavaKafkaConfigurer.configureSaslPlain();
             */
                            
            /*
            * If you use an SASL endpoint that uses the SCRAM mechanism to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
            JavaKafkaConfigurer.configureSaslScram();
            */
    
            // Load kafka.properties.
            Properties kafkaProperties =  JavaKafkaConfigurer.getKafkaProperties();
    
            Properties props = new Properties();
            // Specify the endpoint. You can obtain the endpoint information in the Endpoint Information section of the Instance Details page in the Message Queue for Apache Kafka console. 
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
    
            // The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds. 
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
            /*
             * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following six lines of code: 
             * Specify the path of the SSL root certificate. Replace XXX with the actual path. 
             * Do not compress the certificate file into a JAR package. 
             props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
             * The password of the truststore in the root certificate store. Use the default value. 
             props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
             * The access protocol. Set the value to SASL_SSL. 
             props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
             * The SASL authentication method. Use the default value. 
             props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
             * The session timeout period. If the consumer does not return a heartbeat within this period, the broker determines that the consumer is not alive, removes the consumer from the consumer group, and triggers a rebalance. The default value is 30 seconds. 
             props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
             * Specify the maximum message size allowed for a single pull operation. This parameter may significantly influence performance if data is transmitted over the Internet. 
             props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 32000);
             props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 32000);
             */
    
            /*
             * If the instance uses an SASL endpoint and the PLAIN mechanism, uncomment the following three lines of code: 
             * The access protocol. 
             props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
             * The PLAIN mechanism. 
             props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
             * The session timeout period. If the consumer does not return a heartbeat within this period, the broker determines that the consumer is not alive, removes the consumer from the consumer group, and triggers a rebalance. The default value is 30 seconds. 
             props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
             */
    
            /*
             * If you use an SASL endpoint that uses the SCRAM mechanism to access the Message Queue for Apache Kafka instance, uncomment the following four lines of code: 
             * The access protocol. 
             props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
             * The SCRAM mechanism. 
             props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
             * The session timeout period. If the consumer does not return a heartbeat within this period, the broker determines that the consumer is not alive, removes the consumer from the consumer group, and triggers a rebalance. The default value is 30 seconds. 
             props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
             */
    
            // The maximum number of messages that can be polled at a time. 
            // Do not set this parameter to an excessively large value. If the polled messages are not all consumed before the next poll starts, a rebalance may be triggered and performance may deteriorate. 
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
            // The method for deserializing messages.
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            // The consumer group to which the current consumer instance belongs. Enter the consumer group that you created in the Message Queue for Apache Kafka console. 
            // Consumer instances that belong to the same consumer group consume messages in load balancing mode. 
            props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));
            
            // If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following line of code. 
            // Set the algorithm for hostname verification to an empty value. 
            //props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
    
            // Construct a consumer object, which represents a consumer instance. 
            KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
            // Specify one or more topics to which the consumer group subscribes. 
            // We recommend that you configure the consumers with the same GROUP_ID_CONFIG value to subscribe to the same topics. 
            List<String> subscribedTopics =  new ArrayList<String>();
            
            // Specify one or more topics in the topic property of kafka.properties. Separate multiple topics with commas. 
            // You must create the topics in the Message Queue for Apache Kafka console in advance. 
            String topicStr = kafkaProperties.getProperty("topic");
            String[] topics = topicStr.split(",");
            for (String topic: topics) {
                subscribedTopics.add(topic.trim());
            }
            consumer.subscribe(subscribedTopics);
    
            // Consume messages in a loop. 
            while (true){
                try {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the timeout interval specified by SESSION_TIMEOUT_MS_CONFIG. 
                    // We recommend that you create a separate thread pool to consume messages and return the results in an asynchronous manner. 
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(String.format("Consume partition:%d offset:%d", record.partition(), record.offset()));
                    }
                } catch (Exception e) {
                    try {
                        Thread.sleep(1000);
                    } catch (Throwable ignore) {
    
                    }
                    // For information about common client errors, see Client errors when you use Message Queue for Apache Kafka.
                    e.printStackTrace();
                }
            }
        }
    }
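
    The comments in the consumption loop above recommend handing polled records to a separate thread pool so that the loop can poll again before the session times out. Below is a minimal sketch of that hand-off with the JDK's ExecutorService (no Kafka dependency; records are simplified to strings and all names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncConsumeSketch {
    static final AtomicInteger processed = new AtomicInteger();

    // Submit each record to the pool instead of processing it inline,
    // so the caller can return to poll() quickly.
    static void handOff(ExecutorService pool, List<String> records) {
        for (String record : records) {
            pool.submit(() -> {
                // Process the record here; this demo just counts it.
                processed.incrementAndGet();
            });
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        handOff(pool, Arrays.asList("m1", "m2", "m3"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(processed.get()); // prints 3
    }
}
```

    In a real consumer, asynchronous processing means that a record may not be fully handled when offsets are committed, so manage offset commits accordingly.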
  • Subscribe to messages by using multiple consumers. In this case, compile and run KafkaMultiConsumerDemo.java.
    Sample code
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.atomic.AtomicBoolean;
    
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    /*
    * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the first three lines of the following code. If you use an SASL endpoint to access the Message Queue for Apache Kafka instance, uncomment the first two lines of the following code: 
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.config.SslConfigs;
    */
    import org.apache.kafka.common.errors.WakeupException;
    
    /**
     * This tutorial shows you how to use multiple consumers to simultaneously consume messages in one process. 
     * Make sure that the total number of consumers in the environment does not exceed the number of partitions of the topics to which the consumers subscribe. 
     */
    public class KafkaMultiConsumerDemo {
    
        public static void main(String args[]) throws InterruptedException {
            
            // Specify the path of the JAAS configuration file. 
            /* 
             * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
             JavaKafkaConfigurer.configureSasl();
             */
                                
            /* 
             * If you use an SASL endpoint that uses the PLAIN mechanism to access your Message Queue for Apache Kafka instance, uncomment the following line of code: 
             JavaKafkaConfigurer.configureSaslPlain(); 
             */
                                
            /* 
             * If you use an SASL endpoint that uses the SCRAM mechanism to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
             JavaKafkaConfigurer.configureSaslScram();
             */
    
    
            // Load kafka.properties. 
            Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();
    
            Properties props = new Properties();
            // Specify the endpoint. You can obtain the endpoint information in the Endpoint Information section of the Instance Details page in the Message Queue for Apache Kafka console. 
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
            
            /*
             * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following four lines of code: 
             * Do not compress the certificate file into a JAR package. 
             props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
             * The password of the truststore in the root certificate store. Use the default value. 
             props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
             * The access protocol. Set the value to SASL_SSL. 
             props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
             * The SASL authentication method. Use the default value. 
             props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
             */
            
            /*
             * If you use an SASL endpoint that uses the PLAIN mechanism to access your Message Queue for Apache Kafka instance, uncomment the following two lines of code: 
             * The access protocol. 
             props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
             * The PLAIN mechanism. 
             props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
             */
    
            /* 
             * If you use an SASL endpoint that uses the SCRAM mechanism to access the Message Queue for Apache Kafka instance, uncomment the following two lines of code: 
             * The access protocol. 
             props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
             * The SCRAM mechanism. 
             props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
             */
    
            // The session timeout period. If the consumer does not return a heartbeat within this period, the broker determines that the consumer is not alive, removes the consumer from the consumer group, and triggers a rebalance. The default value is 30 seconds. 
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
            // The maximum number of messages that can be polled at a time. 
            // Do not set this parameter to an excessively large value. If the polled messages are not all consumed before the next polling cycle starts, a rebalance may be triggered and performance may deteriorate. 
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
            // The method for deserializing messages. 
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
            // The consumer group to which the current consumer instance belongs. Enter the consumer group that you created in the Message Queue for Apache Kafka console. 
            // Consumer instances that belong to the same consumer group consume messages in load balancing mode. 
            props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));
    
            /* 
             * If you use an SSL endpoint to access the Message Queue for Apache Kafka instance, uncomment the following line of code: 
             * Set the algorithm for hostname verification to an empty value. 
             props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
             */
    
            int consumerNum = 2;
            Thread[] consumerThreads = new Thread[consumerNum];
            for (int i = 0; i < consumerNum; i++) {
                KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
    
                List<String> subscribedTopics = new ArrayList<String>();
                subscribedTopics.add(kafkaProperties.getProperty("topic"));
                consumer.subscribe(subscribedTopics);
    
                KafkaConsumerRunner kafkaConsumerRunner = new KafkaConsumerRunner(consumer);
                consumerThreads[i] = new Thread(kafkaConsumerRunner);
            }
    
            for (int i = 0; i < consumerNum; i++) {
                consumerThreads[i].start();
            }
    
            for (int i = 0; i < consumerNum; i++) {
                consumerThreads[i].join();
            }
        }
    
        static class KafkaConsumerRunner implements Runnable {
            private final AtomicBoolean closed = new AtomicBoolean(false);
            private final KafkaConsumer consumer;
    
            KafkaConsumerRunner(KafkaConsumer consumer) {
                this.consumer = consumer;
            }
    
            @Override
            public void run() {
                try {
                    while (!closed.get()) {
                        try {
                            ConsumerRecords<String, String> records = consumer.poll(1000);
                            // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the interval specified by SESSION_TIMEOUT_MS_CONFIG. 
                            for (ConsumerRecord<String, String> record : records) {
                                System.out.println(String.format("Thread:%s Consume partition:%d offset:%d", Thread.currentThread().getName(), record.partition(), record.offset()));
                            }
                        } catch (Exception e) {
                            try {
                                Thread.sleep(1000);
                            } catch (Throwable ignore) {
    
                            }
                            e.printStackTrace();
                        }
                    }
                } catch (WakeupException e) {
                    // If the consumer is shut down, ignore the exception. 
                    if (!closed.get()) {
                        throw e;
                    }
                } finally {
                    consumer.close();
                }
            }
            // Implement a shutdown hook that can be called by another thread. 
            public void shutdown() {
                closed.set(true);
                consumer.wakeup();
            }
        }
    }
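
    KafkaConsumerRunner defines a shutdown() method, but the demo never invokes it. To stop the loop cleanly, you can register a JVM shutdown hook that sets the flag (and, with a real consumer, calls consumer.wakeup() to interrupt poll()). The sketch below mirrors the runner's AtomicBoolean-driven loop without the Kafka dependency; all names are illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ShutdownSketch {
    static class Runner implements Runnable {
        final AtomicBoolean closed = new AtomicBoolean(false);
        volatile int iterations = 0;

        @Override
        public void run() {
            // Mirrors KafkaConsumerRunner: loop until the closed flag is set.
            while (!closed.get()) {
                iterations++;
                Thread.yield();
            }
        }

        void shutdown() {
            closed.set(true); // a real runner would also call consumer.wakeup()
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runner runner = new Runner();
        Thread t = new Thread(runner);
        // Register the hook before starting, so Ctrl+C stops the loop cleanly.
        Runtime.getRuntime().addShutdownHook(new Thread(runner::shutdown));
        t.start();
        runner.shutdown(); // simulate the hook firing
        t.join();
        System.out.println("stopped after " + runner.iterations + " iterations");
    }
}
```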