
ApsaraMQ for Kafka:Troubleshoot ApsaraMQ for Kafka client errors

Last Updated:Mar 11, 2026

Use the error message returned by your ApsaraMQ for Kafka client to find the matching section below. Each entry explains the root cause and provides step-by-step instructions to resolve the issue.

Connection and authentication errors

These errors occur when the client cannot reach the broker or fails Simple Authentication and Security Layer (SASL) authentication. They are most common on Internet-connected instances that require SASL.

Timeout or connection failure

Error messages:

  • TimeoutException (Java)

  • run out of brokers (Go)

  • Authentication failed for user (Python)

Cause:

The client cannot establish a connection to the broker. Common root causes:

  • Network issues -- A firewall, security group, or routing issue blocks traffic to the broker.

  • SASL authentication failure -- The sasl.mechanisms configuration is missing or contains invalid credentials. This applies only to Internet-connected instances.

Solution:

  1. Make sure that the server is correctly configured. Verify that bootstrap.servers matches the endpoint shown on the Instance Details page in the ApsaraMQ for Kafka console.

  2. Run the telnet command to check the network connection.

  3. If the connection succeeds but the client still reports an error, check the SASL authentication configuration. Verify that the SASL mechanism is configured and that the username and password are valid.

Note

SASL authentication applies only to Internet-connected instances. VPC-connected instances do not require SASL configuration.
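For reference, a minimal SASL_SSL client configuration for the Java client might look like the following. The endpoint, truststore path, and PLAIN mechanism shown are placeholders to adapt to your instance; note that Java clients use the singular property name sasl.mechanism, while C-based clients (C++, PHP, Node.js) use sasl.mechanisms.

```properties
# Replace with the endpoint shown on the Instance Details page.
bootstrap.servers=<your-instance-endpoint>:9093
security.protocol=SASL_SSL
# Placeholder mechanism; use the mechanism issued for your instance.
sasl.mechanism=PLAIN
# Credentials are supplied through the JAAS configuration or sasl.jaas.config.
```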

Missing SASL or SSL libraries

Error messages:

  • No such configuration property: "sasl.mechanisms" (C++, PHP, Node.js)

  • No worthy mechs found (C++, PHP, Node.js)

Cause:

The SASL and Secure Sockets Layer (SSL) libraries are not installed on the machine running the client. This affects C++ clients and clients that use C++ as the core runtime, such as PHP and Node.js Kafka libraries.

Solution:

Install the required libraries. The following example uses CentOS:

# Install SSL libraries
sudo yum install openssl openssl-devel

# Install SASL libraries
sudo yum install cyrus-sasl{,-plain}

For other operating systems, install the equivalent OpenSSL and Cyrus SASL packages through your OS package manager.

Missing JAAS configuration file

Error message:

  • No KafkaClient Entry (Java)

Cause:

The Java client cannot find the kafka_client_jaas.conf file, which contains the SASL login credentials.

Solution:

  1. Create the kafka_client_jaas.conf file and save it to a known path, for example /home/admin/kafka_client_jaas.conf.

  2. Point the Java client to the file by using one of the following methods:

     Option A: JVM parameter. Add the following flag when you start your application:

       -Djava.security.auth.login.config=/home/admin/kafka_client_jaas.conf

     Option B: Set the property in code:

       System.setProperty("java.security.auth.login.config", "/home/admin/kafka_client_jaas.conf");

     Note

     This line must run before the Kafka client initializes. Place it early in your application startup sequence.

     Option C: System-wide configuration. Add the following line to ${JAVA_HOME}/jre/lib/java.security:

       login.config.url.1=file:/home/admin/kafka_client_jaas.conf

For details on the JAAS file format, see JAAS Login Configuration File.
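A typical kafka_client_jaas.conf that uses the standard PLAIN login module looks like the following; the mechanism choice and the placeholder credentials are assumptions to adapt to your instance:

```
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="<instance username>"
    password="<instance password>";
};
```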

Topic and partition errors

These errors relate to topic availability and partition leader election.

Topic not available

Error messages:

  • Leader is not available (all languages)

  • leader is in election (all languages)

Cause:

These errors are normal during topic initialization or partition leader election and typically resolve within seconds. If the error persists, the topic may not exist on the instance.

Solution:

  1. Wait a few seconds and retry. If the error is transient (for example, immediately after topic creation), it resolves on its own.

  2. If the error persists, log on to the ApsaraMQ for Kafka console and verify that the topic exists.

  3. If the topic does not exist, create it. For more information, see Create a topic.
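The wait-and-retry guidance in steps 1 and 2 can be sketched as a small backoff loop. Both fetch_with_retry and the simulated poll function below are illustrative helpers, not part of any Kafka client API:

```python
import time

def fetch_with_retry(poll, retries=5, backoff_s=1.0):
    """Retry a poll call that raises a transient 'leader' error, with linear backoff."""
    for attempt in range(retries):
        try:
            return poll()
        except RuntimeError as err:  # stand-in for the client's transient error type
            if "leader" not in str(err).lower() or attempt == retries - 1:
                raise
            time.sleep(backoff_s * (attempt + 1))  # back off before retrying

# Simulated poll that fails twice during leader election, then succeeds.
attempts = {"n": 0}
def fake_poll():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("Leader is not available")
    return "message"

print(fetch_with_retry(fake_poll, backoff_s=0.01))
```

If the error survives several retries with increasing delays, treat it as persistent and move on to verifying the topic in the console.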

Consumer errors

These errors occur when the consumer fails to fetch messages from the broker.

Fetch request failure

Error messages:

  • Error sending fetch request (Java)

  • DisconnectException (Java)

Cause:

The consumer fails to pull messages from the broker. Possible causes:

  • Network issues -- The connection between the consumer and the broker is interrupted or unreliable.

  • Message pulling timeout -- The response exceeds the configured size limits, which causes the request to time out before the data transfer completes.

Solution:

  1. Make sure that the server is correctly configured.

  2. Run the telnet command to check the network connection.

  3. If the network connection is normal, message pulling may time out. You can lower the following parameters to limit the amount of data returned by each fetch request:

    • fetch.max.bytes: the maximum number of bytes that are returned by the broker from a single fetch request.

    • max.partition.fetch.bytes: the maximum number of bytes that are returned by one partition on the broker from a single fetch request.

  4. Traffic may be limited on the broker. In the ApsaraMQ for Kafka console, go to the Instance Details page:

    • For VPC-connected instances, check the Traffic Specification value.

    • For Internet-connected instances, check the Public Traffic value.
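As a starting point for tuning the fetch limits in step 3, a producer-side change is not needed; only the consumer configuration changes. The values below are illustrative examples to adjust to your message sizes and bandwidth, not recommendations from the service:

```properties
# Example values only; tune to your message sizes and network bandwidth.
fetch.max.bytes=1048576
max.partition.fetch.bytes=262144
```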

Message format errors

These errors occur when the broker rejects messages due to format or configuration mismatches.

Corrupt message

Error message:

  • CORRUPT_MESSAGE (all languages)

Cause:

The root cause depends on the storage engine your instance uses:

  • Cloud storage -- Kafka client 3.0 and later enables idempotent production by default. Cloud storage does not support idempotence, which causes the broker to reject messages with a CORRUPT_MESSAGE error.

  • Local storage -- The message key is not set, but the topic uses log compaction. Log compaction requires every message to have a key to determine which messages to retain.

Solution:

  • Cloud storage -- Disable idempotent production by adding the following producer configuration:

      enable.idempotence=false
  • Local storage -- Set the message key for every message sent to the topic.
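For the local-storage case, a guard like the following can catch keyless records before they reach a compacted topic. require_keys is an illustrative helper, not part of any Kafka client library:

```python
def require_keys(records):
    """Ensure every (key, value) record has a non-null key, as log compaction requires."""
    for key, value in records:
        if key is None:
            raise ValueError("log-compacted topics require a key for every message: %r" % (value,))
    return records

# Records destined for a compacted topic: the same key means
# later values replace earlier ones during compaction.
records = [(b"order-1", b"created"), (b"order-1", b"paid")]
print(require_keys(records))
```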

Spring Cloud message parsing failure

Error message:

  • array index out of bound exception (Java)

Cause:

Spring Cloud Stream uses a built-in header format to parse messages. When a message is produced by a non-Spring Cloud client (for example, a native Kafka Java client), the headers do not match and parsing fails.

Solution:

Choose one of the following approaches:

  • Use Spring Cloud Stream for both producing and consuming. This approach maintains consistent header formatting across the pipeline.

  • Disable header parsing on the consumer side. If messages are produced by a non-Spring Cloud client, set the headerMode parameter to raw in the Spring Cloud Stream consumer configuration:

      spring:
        cloud:
          stream:
            bindings:
              input:
                consumer:
                  headerMode: raw

    For more information, see Spring Cloud Stream Reference Guide.
