
ApsaraMQ for Kafka: Use instance endpoints to send and receive messages

Last Updated: Mar 15, 2024

You can connect an application to an ApsaraMQ for Kafka instance and send and receive messages by using an endpoint of the instance. ApsaraMQ for Kafka provides a default endpoint, a Secure Sockets Layer (SSL) endpoint, and a Simple Authentication and Security Layer (SASL) endpoint for each instance to meet different connection and security requirements. Default endpoints are suitable for messaging in virtual private clouds (VPCs), where network isolation already provides a high level of security. SASL endpoints are suitable for scenarios in which transmission encryption is not required but client authentication is required. If you want to both encrypt transmission links and authenticate clients, we recommend that you use SSL endpoints.

Environment preparation

  • JDK 1.8 or later is installed. For more information, see Java Downloads.

  • Maven 2.5 or later is installed. For more information, see Downloading Apache Maven.

  • An integrated development environment (IDE) is installed.

    IntelliJ IDEA Ultimate is used in the example of this topic.

  • An ApsaraMQ for Kafka instance is purchased and deployed. If the instance is a VPC-connected instance, only the default endpoint is displayed. If the instance is an Internet- and VPC-connected instance, the default endpoint and SSL endpoint are displayed. By default, SASL endpoints are not enabled for instances. Therefore, SASL endpoints are not displayed. If you want to use SASL endpoints, you must manually enable them. For more information, see Grant permissions to SASL users.

    • Default endpoint: allows you to send and receive messages in a VPC but does not support SASL authentication.

    • SASL endpoint: allows you to send and receive messages in a VPC and supports SASL authentication.

    • SSL endpoint: allows you to send and receive messages over the Internet and supports SASL authentication.

Install Java dependencies

The following sample code provides an example of the dependencies that are required when you use the SDK for Java to connect to an ApsaraMQ for Kafka instance. The dependencies are declared in the pom.xml file of the kafka-java-demo folder. You do not need to add them manually.

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.6</version>
</dependency>
Note

We recommend that your client version be consistent with the major version of your ApsaraMQ for Kafka instance. You can view the major version of your ApsaraMQ for Kafka instance on the Instance Details page in the ApsaraMQ for Kafka console.
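
The slf4j-log4j12 dependency binds SLF4J to Log4j 1.x, which reads a log4j.properties file from the classpath. The demo project ships with its own logging configuration; if you assemble the dependencies in a new project, a minimal configuration such as the following sketch (an illustration, not the demo's exact file) writes client logs to the console:

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d [%t] %-5p %c - %m%n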

Prepare configuration files

  1. (Optional) Download the SSL root certificate. If you use the SSL endpoint to connect to your ApsaraMQ for Kafka instance, you must install the certificate.

  2. Go to the aliware-kafka-demos page, download the demo project to your on-premises machine, and then decompress the downloaded package.

  3. In the decompressed demo project, find the kafka-java-demo folder and import the folder to IntelliJ IDEA.

  4. (Optional) If you use the SSL endpoint or the SASL endpoint to access your ApsaraMQ for Kafka instance, you must modify the kafka_client_jaas.conf configuration file. For information about the differences between endpoints, see Comparison among endpoints.

    KafkaClient {
      org.apache.kafka.common.security.plain.PlainLoginModule required
      username="xxxx"
      password="xxxx";
    }; 

    If your ApsaraMQ for Kafka instance is a VPC-connected instance, only resources that are deployed in the same VPC can access the instance, which ensures the security and privacy of data transmission. In scenarios that require higher security, you can enable the access control list (ACL) feature. After you enable this feature, messages are transmitted over a secure channel only after SASL identity authentication succeeds. You can select the PLAIN or SCRAM mechanism for identity authentication based on your security requirements. For more information, see Enable the ACL feature. A sketch of the SCRAM variant of the JAAS file is provided after this list.

    If your ApsaraMQ for Kafka instance is an Internet- and VPC-connected instance, messages that are transmitted over the Internet must be authenticated and encrypted. In this case, the PLAIN mechanism of SASL must be used together with SSL to ensure that messages are not transmitted in plaintext.

    In the example of this topic, the values of the username and password parameters are the SASL username and password of the instance.

    • If you enable Internet access but not ACL for the instance, you can obtain the username and password of the default user in the Configuration Information section of the Instance Details page in the ApsaraMQ for Kafka console.

    • If you enable ACL for the instance, make sure that the SASL user that you use is of the PLAIN type and granted the required permissions on message sending and receiving. For more information, see Grant permissions to SASL users.

  5. Modify the kafka.properties configuration file.

    ##==============================Common parameters==============================
    bootstrap.servers=xxxxxxxxxxxxxxxxxxxxx
    topic=xxx
    group.id=xxx
    ##=======================Configure the following parameters based on your business requirements.========================
    ## The SSL endpoint.
    ssl.truststore.location=/xxxx/kafka.client.truststore.jks
    java.security.auth.login.config=/xxxx/kafka_client_jaas.conf
    ## The PLAIN mechanism of the SASL endpoint.
    java.security.auth.login.config.plain=/xxxx/kafka_client_jaas_plain.conf
    ## The SCRAM mechanism of the SASL endpoint.
    java.security.auth.login.config.scram=/xxxx/kafka_client_jaas_scram.conf

    The following list describes the parameters:

    • bootstrap.servers: the endpoint of the ApsaraMQ for Kafka instance. You can obtain the endpoint in the Endpoint Information section of the Instance Details page in the ApsaraMQ for Kafka console.

    • topic: the name of the topic on the instance. You can obtain the topic name on the Topics page in the ApsaraMQ for Kafka console.

    • group.id: the ID of the consumer group on the instance. You can obtain the group ID on the Groups page in the ApsaraMQ for Kafka console.

      Note: This parameter is optional if the client runs KafkaProducerDemo.java to send messages, and required if the client runs KafkaConsumerDemo.java to subscribe to messages.

    • ssl.truststore.location: the local path to which the SSL root certificate is saved. Save the certificate file that you downloaded in Step 1 to a local path and replace xxxx in the sample code with that path. Example: /home/ssl/kafka.client.truststore.jks.

      Important: This parameter is required only if you use the SSL endpoint to access the instance. It is not required for the default endpoint or the SASL endpoint.

    • java.security.auth.login.config: the local path to which the JAAS configuration file is saved. Save the kafka_client_jaas.conf file in the demo project to a local path and replace xxxx in the sample code with that path. Example: /home/ssl/kafka_client_jaas.conf.

      Important: This parameter is required if you use the SSL endpoint or the SASL endpoint to access the instance. It is not required for the default endpoint.
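The kafka_client_jaas_plain.conf and kafka_client_jaas_scram.conf files that are referenced in the kafka.properties file follow the same pattern as the kafka_client_jaas.conf file in Step 4. For reference, a SCRAM variant declares Kafka's ScramLoginModule instead of the PlainLoginModule. The following is a sketch based on the standard Kafka login modules; see the files in the demo project for the exact content:

KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="xxxx"
  password="xxxx";
};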

Send messages

The following sample code provides an example on how to compile and run KafkaProducerDemo.java to send messages:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Future;
// If you use the SSL endpoint or the SASL endpoint to access the instance, comment out the first line of the following code: 
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
/*
* If you use the SSL endpoint or the SASL endpoint to access the instance, uncomment the following two lines of code: 
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
*/

public class KafkaProducerDemo {

    public static void main(String args[]) {
          
       /*
        * If you use the SSL endpoint to access the instance, uncomment the following line of code. 
        Specify the path of the JAAS configuration file. 
        JavaKafkaConfigurer.configureSasl();
        */
         
       /*
        * If you use the PLAIN mechanism of the SASL endpoint to access the instance, uncomment the following line of code. 
        Specify the path of the JAAS configuration file. 
        JavaKafkaConfigurer.configureSaslPlain();
        */
       
       /*
        * If you use the SCRAM mechanism of the SASL endpoint to access the instance, uncomment the following line of code. 
        Specify the path of the JAAS configuration file. 
        JavaKafkaConfigurer.configureSaslScram();
        */

        // Load the kafka.properties file. 
        Properties kafkaProperties =  JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Specify the endpoint. You can obtain the endpoint of the instance in the Endpoint Information section of the Instance Details page in the ApsaraMQ for Kafka console. 
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
         
       /*
        * If you use the SSL endpoint to access the instance, uncomment the following four lines of code. 
        * Do not compress the file into a JAR package. 
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
        * The password of the truststore in the root certificate. Use the default value. 
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
        * The access protocol. Set this parameter to SASL_SSL. 
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        * The SASL authentication method. Use the default value. 
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        */

       /*
        * If you use the PLAIN mechanism of the SASL endpoint to access the instance, uncomment the following two lines of code. 
        * The access protocol. 
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        * The PLAIN mechanism. 
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        */

       /*
        * If you use the SCRAM mechanism of the SASL endpoint to access the instance, uncomment the following two lines of code. 
        * The access protocol. 
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        * The SCRAM mechanism. 
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
        */

        // The method that is used to serialize messages in ApsaraMQ for Kafka. 
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // The maximum waiting time for a request. 
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 30 * 1000);
        // The maximum number of retries for messages in the client. 
        props.put(ProducerConfig.RETRIES_CONFIG, 5);
        // The interval between two consecutive retries for messages in the client. 
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 3000);
         
       /*
        * If you use the SSL endpoint to access the instance, uncomment the following line of code. 
        * Set the algorithm for hostname verification to an empty value. 
        props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
        */

        // Construct a thread-safe producer object. One producer object per process is typically sufficient. 
        // To improve performance, you can construct multiple producer objects, but we recommend no more than five. 
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);

        // Construct an ApsaraMQ for Kafka message. 
        String topic = kafkaProperties.getProperty("topic"); // The topic to which the message belongs. Enter the topic that you created in the ApsaraMQ for Kafka console. 
        String value = "this is the message's value"; // The message content. 

        try {
            // Obtain multiple future objects at a time. This helps improve efficiency. However, do not obtain a large number of future objects at a time. 
            List<Future<RecordMetadata>> futures = new ArrayList<Future<RecordMetadata>>(128);
            for (int i =0; i < 100; i++) {
                // Send the message and obtain a future object. 
                ProducerRecord<String, String> kafkaMessage =  new ProducerRecord<String, String>(topic, value + ": " + i);
                Future<RecordMetadata> metadataFuture = producer.send(kafkaMessage);
                futures.add(metadataFuture);

            }
            producer.flush();
            for (Future<RecordMetadata> future: futures) {
                // Obtain the results of the future object in a synchronous manner. 
                try {
                    RecordMetadata recordMetadata = future.get();
                    System.out.println("Produce ok:" + recordMetadata.toString());
                } catch (Throwable t) {
                    t.printStackTrace();
                }
            }
        } catch (Exception e) {
            // If the message still fails to be sent after the maximum number of retries is reached, troubleshoot the error. 
            System.out.println("error occurred");
            e.printStackTrace();
        }
    }
}
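
The demo code above calls JavaKafkaConfigurer, a helper class that is included in the kafka-java-demo folder but not shown in this topic. The following is a minimal sketch of what such a helper does; the class name and property keys mirror the demo, but the shipped implementation may differ:

import java.util.Properties;

public class JavaKafkaConfigurer {

    private static volatile Properties properties;

    // Load kafka.properties from the classpath once and cache it.
    public static Properties getKafkaProperties() {
        if (properties == null) {
            synchronized (JavaKafkaConfigurer.class) {
                if (properties == null) {
                    Properties kafkaProperties = new Properties();
                    try {
                        kafkaProperties.load(JavaKafkaConfigurer.class.getClassLoader()
                                .getResourceAsStream("kafka.properties"));
                    } catch (Exception e) {
                        // The demo cannot run without its configuration file.
                        e.printStackTrace();
                    }
                    properties = kafkaProperties;
                }
            }
        }
        return properties;
    }

    // Point the JVM at the JAAS file for the SSL endpoint, unless it is already set.
    public static void configureSasl() {
        if (null == System.getProperty("java.security.auth.login.config")) {
            System.setProperty("java.security.auth.login.config",
                    getKafkaProperties().getProperty("java.security.auth.login.config"));
        }
    }

    // configureSaslPlain() and configureSaslScram() would follow the same pattern and read
    // the java.security.auth.login.config.plain and java.security.auth.login.config.scram keys.
}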

Subscribe to messages

You can subscribe to messages by using one of the following methods.

Use a single consumer to subscribe to messages

The following sample code provides an example on how to compile and run KafkaConsumerDemo.java to subscribe to messages:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;


import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
/*
* If you use the SSL endpoint to access the instance, uncomment the following three lines of code. If you use the SASL endpoint to access the instance, uncomment the first two lines of the following code: 
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
*/

public class KafkaConsumerDemo {

    public static void main(String args[]) {

        // Specify the path of the JAAS configuration file. 
        /*
         * If you use the SSL endpoint to access the instance, uncomment the following line of code: 
        JavaKafkaConfigurer.configureSasl();
         */
                        
        /*
         * If you use the PLAIN mechanism of the SASL endpoint to access the instance, uncomment the following line of code: 
        JavaKafkaConfigurer.configureSaslPlain();
         */
                        
        /*
        * If you use the SCRAM mechanism of the SASL endpoint to access the instance, uncomment the following line of code: 
        JavaKafkaConfigurer.configureSaslScram();
        */

        // Load the kafka.properties file.
        Properties kafkaProperties =  JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Specify the endpoint. You can obtain the endpoint of the instance in the Endpoint Information section of the Instance Details page in the ApsaraMQ for Kafka console. 
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));

        // If you use the SSL endpoint to access the instance, comment out the following line of code. 
        // The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds. 
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        /*
         * If you use the SSL endpoint to access an instance, uncomment the following seven lines of code. 
         * Specify the path to which the SSL root certificate is saved. Replace XXX with the actual path. 
         * Do not compress the certificate file into a JAR package. 
         props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
         * The password of the truststore in the root certificate store. Use the default value. 
         props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
         * The access protocol. Set this parameter to SASL_SSL. 
         props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
         * The SASL authentication method. Use the default value. 
         props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
         * The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds. 
         props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
         * Specify the maximum message size allowed for a single pull operation. If data is transmitted over the Internet, this parameter may significantly influence performance. 
         props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 32000);
         props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 32000);
         */

        // If you use the PLAIN mechanism of the SASL endpoint to access the instance, comment out the following line of code. 
       // The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds. 
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        /*
         * If you use the PLAIN mechanism of the SASL endpoint to access the instance, uncomment the following three lines of code. 
         * The access protocol. 
         props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
         * The PLAIN mechanism. 
         props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
         * The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds. 
         props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
         */

        // If you use the SCRAM mechanism of the SASL endpoint to access the instance, comment out the following line of code. 
       // The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        /*
         * If you use the SCRAM mechanism of the SASL endpoint to access the instance, uncomment the following three lines of code. 
         * The access protocol. 
         props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
         * The SCRAM mechanism. 
         props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
         * The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds. 
         props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
         */

        // The maximum number of messages that can be polled at a time. 
        // Do not set this parameter to an excessively large value. If polled messages are not all consumed before the next poll starts, load balancing is triggered and performance may deteriorate. 
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
        // The method that is used to deserialize messages.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // The consumer group to which the current consumer instance belongs. Enter the consumer group that you created in the ApsaraMQ for Kafka console. 
        // Consumer instances that belong to the same consumer group consume messages in load balancing mode. 
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));
        
        // If you use the SSL endpoint to access the instance, uncomment the following line of code. 
        // Set the algorithm for hostname verification to an empty value. 
        //props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");

        // Construct a consumer object, which is a consumer instance. 
        KafkaConsumer<String, String> consumer = new org.apache.kafka.clients.consumer.KafkaConsumer<String, String>(props);
        // Specify one or more topics to which the consumer group subscribes. 
        // We recommend that you configure the consumers with the same GROUP_ID_CONFIG value to subscribe to the same topics. 
        List<String> subscribedTopics =  new ArrayList<String>();
        
        // The following five lines of code parse a comma-separated list of topics from kafka.properties. To subscribe to a single topic instead, comment out these five lines and uncomment the sixth line. 
        // If you want to subscribe to multiple topics, add them here. 
        // You must create the topics in the ApsaraMQ for Kafka console in advance. 
        String topicStr = kafkaProperties.getProperty("topic");
        String[] topics = topicStr.split(",");
        for (String topic: topics) {
            subscribedTopics.add(topic.trim());
        }
        //subscribedTopics.add(kafkaProperties.getProperty("topic"));
        consumer.subscribe(subscribedTopics);

        // Consume messages in a loop. 
        while (true){
            try {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the timeout interval specified by SESSION_TIMEOUT_MS_CONFIG. 
                // We recommend that you create a separate thread pool to consume messages and return the results in an asynchronous manner. 
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(String.format("Consume partition:%d offset:%d", record.partition(), record.offset()));
                }
            } catch (Exception e) {
                try {
                    Thread.sleep(1000);
                } catch (Throwable ignore) {

                }
          
                e.printStackTrace();
            }
        }
    }
}
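
KafkaConsumerDemo.java relies on the client's default automatic offset commits. If your application needs tighter control over when offsets are committed, you can disable auto-commit and commit manually after the polled records are processed. The following is a sketch of the poll loop under that assumption; it reuses the consumer constructed above:

        // Assumption: auto-commit is disabled before the consumer is constructed.
        // props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        while (true) {
            try {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(String.format("Consume partition:%d offset:%d", record.partition(), record.offset()));
                }
                // Commit the offsets of the polled records only after they are all processed.
                consumer.commitSync();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }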

Use multiple consumers to subscribe to messages

The following sample code provides an example on how to compile and run KafkaMultiConsumerDemo.java to subscribe to messages:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.ProducerConfig;
/*
* If you use the SSL endpoint to access the instance, uncomment the first three lines of the following code. If you use the SASL endpoint to access the instance, uncomment the first two lines of the following code: 
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
*/
import org.apache.kafka.common.errors.WakeupException;

/**
 * This tutorial shows you how to use multiple consumers to simultaneously consume messages in one process. 
 * Make sure that the total number of consumers in the environment does not exceed the number of partitions of the topics to which the consumers subscribe. 
 */
public class KafkaMultiConsumerDemo {

    public static void main(String args[]) throws InterruptedException {
        
        // Specify the path of the JAAS configuration file. 
        /* 
         * If you use the SSL endpoint to access the instance, uncomment the following line of code: 
         JavaKafkaConfigurer.configureSasl();
         */
                            
        /* 
         * If you use the PLAIN mechanism of the SASL endpoint to access the instance, uncomment the following line of code: 
         JavaKafkaConfigurer.configureSaslPlain(); 
         */
                            
        /* 
         * If you use the SCRAM mechanism of the SASL endpoint to access the instance, uncomment the following line of code: 
         JavaKafkaConfigurer.configureSaslScram();
         */


        // Load the kafka.properties file. 
        Properties kafkaProperties = JavaKafkaConfigurer.getKafkaProperties();

        Properties props = new Properties();
        // Specify the endpoint. You can obtain the endpoint of the instance in the Endpoint Information section of the Instance Details page in the ApsaraMQ for Kafka console. 
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getProperty("bootstrap.servers"));
        
        /*
         * If you use the SSL endpoint to access the instance, uncomment the following four lines of code. 
         * Do not compress the certificate file into a JAR package. 
         props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, kafkaProperties.getProperty("ssl.truststore.location"));
         * The password of the truststore in the root certificate store. Use the default value. 
         props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "KafkaOnsClient");
         * The access protocol. Set this parameter to SASL_SSL. 
         props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
         * The SASL authentication method. Use the default value. 
         props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
         */
        
        /*
         * If you use the PLAIN mechanism of the SASL endpoint to access the instance, uncomment the following two lines of code. 
         * The access protocol. 
         props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
         * The PLAIN mechanism. 
         props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
         */

        /* 
         * If you use the SCRAM mechanism of the SASL endpoint to access the instance, uncomment the following two lines of code. 
         * The access protocol. 
         props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
         * The SCRAM mechanism. 
         props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
         */

        // The session timeout period. If the consumer does not return a heartbeat before the session times out, the broker determines that the consumer is not alive. In this case, the broker removes the consumer from the consumer group and triggers a rebalance. The default value is 30 seconds. 
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        // The maximum number of messages that can be polled at a time. 
        // Do not set this parameter to an excessively large value. If the messages polled are not all consumed before the next polling cycle starts, load balancing is triggered and performance may deteriorate. 
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 30);
        // The method that is used to deserialize messages. 
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        // The consumer group to which the current consumer instance belongs. Enter the consumer group that you created in the ApsaraMQ for Kafka console. 
        // Consumer instances that belong to the same consumer group consume messages in load balancing mode. 
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getProperty("group.id"));

        /* 
         * If you use the SSL endpoint to access the instance, uncomment the following line of code. 
         * Set the algorithm for hostname verification to an empty value. 
         props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");
         */

        int consumerNum = 2;
        Thread[] consumerThreads = new Thread[consumerNum];
        for (int i = 0; i < consumerNum; i++) {
            KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

            List<String> subscribedTopics = new ArrayList<String>();
            subscribedTopics.add(kafkaProperties.getProperty("topic"));
            consumer.subscribe(subscribedTopics);

            KafkaConsumerRunner kafkaConsumerRunner = new KafkaConsumerRunner(consumer);
            consumerThreads[i] = new Thread(kafkaConsumerRunner);
        }

        for (int i = 0; i < consumerNum; i++) {
            consumerThreads[i].start();
        }

        for (int i = 0; i < consumerNum; i++) {
            consumerThreads[i].join();
        }
    }

    static class KafkaConsumerRunner implements Runnable {
        private final AtomicBoolean closed = new AtomicBoolean(false);
        private final KafkaConsumer consumer;

        KafkaConsumerRunner(KafkaConsumer consumer) {
            this.consumer = consumer;
        }

        @Override
        public void run() {
            try {
                while (!closed.get()) {
                    try {
                        ConsumerRecords<String, String> records = consumer.poll(1000);
                        // All messages must be consumed before the next polling cycle starts. The total duration cannot exceed the interval specified by SESSION_TIMEOUT_MS_CONFIG. 
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.println(String.format("Thread:%s Consume partition:%d offset:%d", Thread.currentThread().getName(), record.partition(), record.offset()));
                        }
                    } catch (Exception e) {
                        try {
                            Thread.sleep(1000);
                        } catch (Throwable ignore) {

                        }
                        e.printStackTrace();
                    }
                }
            } catch (WakeupException e) {
                // If the consumer is shut down, ignore the exception. 
                if (!closed.get()) {
                    throw e;
                }
            } finally {
                consumer.close();
            }
        }
        // Implement a shutdown hook that can be called by another thread. 
        public void shutdown() {
            closed.set(true);
            consumer.wakeup();
        }
    }
}
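
The shutdown() method of KafkaConsumerRunner is defined but never invoked in main(). One way to wire it up, sketched below under the assumption that main() keeps the runner objects in a list (which the demo as shown does not), is a JVM shutdown hook that stops every runner before the process exits:

        // Hypothetical addition to main(): collect the runners so that a shutdown hook can stop them.
        final List<KafkaConsumerRunner> runners = new ArrayList<KafkaConsumerRunner>();
        // ... add each KafkaConsumerRunner to runners where it is constructed ...
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            @Override
            public void run() {
                for (KafkaConsumerRunner runner : runners) {
                    // wakeup() interrupts the blocked poll() so that run() exits and the consumer is closed.
                    runner.shutdown();
                }
            }
        }));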

FAQ

How do I configure the SASL_SSL certificate in ApsaraMQ for Kafka?

You can perform the following operations to configure the SASL_SSL certificate in ApsaraMQ for Kafka: Access the link in Step 1 of the "Prepare configuration files" section of this topic to download the SSL certificate to a local path and configure the ssl.truststore.location parameter in the kafka.properties configuration file in the demo project.

Can I bind my own SSL certificate when I use the SDK for Java to access the endpoint of my ApsaraMQ for Kafka instance to send and receive messages?

No, you cannot bind your own SSL certificate when you use the SDK for Java to access the endpoint of your ApsaraMQ for Kafka instance to send and receive messages. We recommend that you use the SSL certificate provided by ApsaraMQ for Kafka.
