
ApsaraMQ for Kafka: Troubleshoot message accumulation in ApsaraMQ for Kafka

Last Updated: Mar 11, 2026

Message accumulation occurs when a consumer group's committed offset falls behind the broker's latest produced offset (high-water mark). The difference between these two offsets is the accumulated message count. A growing count does not always indicate a problem -- what matters is whether consumption is keeping pace with production. Use this guide to diagnose whether accumulation is normal and to resolve abnormal cases.

How message consumption works

Before diagnosing accumulation, understand the two-phase consumption cycle on each client:

  1. Pull: The client fetches messages from the broker.

  2. Process: The client runs business logic on each message, then commits the consumer offset back to the broker.
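The two-phase cycle above can be sketched in a few lines. This is a minimal illustration, not ApsaraMQ client code: an in-memory queue stands in for the broker, and `process` is a hypothetical business-logic callback.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Minimal sketch of the pull-then-process consumption cycle.
// The queue stands in for the broker; `process` is hypothetical business logic.
public class ConsumeLoop {
    static long committedOffset = 0;  // offset committed back to the "broker"
    static final Queue<String> broker = new ArrayDeque<>(List.of("m0", "m1", "m2"));

    static void process(String msg) {
        // business logic would run here
    }

    public static void main(String[] args) {
        while (!broker.isEmpty()) {
            String msg = broker.poll(); // phase 1: pull a message from the broker
            process(msg);               // phase 2: process the message,
            committedOffset++;          //          then commit the consumer offset
        }
        System.out.println("committed offset: " + committedOffset);
    }
}
```

If phase 2 blocks or slows down, the loop stops advancing `committedOffset`, which is exactly what shows up in the console as accumulation.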



The accumulated message count equals the broker's high-water mark minus the consumer group's committed offset. A large number alone does not signal a problem. Focus on the trend: is the gap stable, growing, or caused by uncommitted offsets?

Diagnose accumulation

To check whether accumulation is normal, inspect the consumer group metrics in the ApsaraMQ for Kafka console:

  1. Log on to the ApsaraMQ for Kafka console.

  2. In the top navigation bar, select the region where your instance resides.

  3. In the left-side navigation pane, click Instances.

  4. On the Instances page, click the name of the target instance.

  5. On the Instance Details page, click Groups in the left-side navigation pane.

  6. On the Groups page, find the target group and choose More > Consumer Status in the Actions column.

  7. On the Consumer Status page, check the Last Consumed At, Accumulated Messages, and Consumer Offset values.

Note

These values are refreshed at 1-minute intervals. Click Details to view the consumer offset for each partition.

Use the following decision table to interpret the metrics:

  • Symptom: Last Consumed At is close to the current time, and Accumulated Messages fluctuates within a stable range.
    Diagnosis: Normal. The client is pulling and processing messages at a steady pace.
    Action: No action required.

  • Symptom: Accumulated Messages keeps increasing, and Consumer Offset stays unchanged.
    Diagnosis: Abnormal. The consumer thread is blocked: the client has stopped processing messages and committing offsets.
    Action: See Resolve abnormal accumulation.

  • Symptom: Accumulated Messages keeps increasing, but Consumer Offset is advancing.
    Diagnosis: Abnormal. Consumption is too slow: the client is processing messages, but the processing rate is lower than the production rate. The bottleneck is in the processing phase (phase 2), not the pull phase.
    Action: See Resolve abnormal accumulation.

  • Symptom: Messages appear accumulated in partitions, but downstream processing is normal.
    Diagnosis: Likely a false positive. If the downstream system uses the assign consumption mode, offsets are managed manually; messages may already be consumed but show as accumulated because offsets have not been committed.
    Action: Commit offsets manually to clear the reported accumulation.
Note

A large Accumulated Messages value does not always mean a problem. The displayed count depends on the production rate and offset commit frequency. For example, if a topic receives 10,000 messages per second and offsets are committed once per second, the accumulated count normally fluctuates around 10,000.
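The note's figures can be checked with simple arithmetic: the steady-state accumulation reported by the console is roughly the production rate multiplied by the offset commit interval.

```java
// Rough steady-state accumulation estimate, using the note's example numbers.
public class SteadyStateLag {
    public static void main(String[] args) {
        long productionRatePerSec = 10_000;  // messages produced per second
        double commitIntervalSec  = 1.0;     // offsets committed once per second

        long expectedFluctuation = (long) (productionRatePerSec * commitIntervalSec);
        System.out.println("expected accumulation: ~" + expectedFluctuation);
    }
}
```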

Resolve abnormal accumulation

After confirming abnormal accumulation, identify the bottleneck and increase the consumption rate.

Identify the bottleneck

Determine whether the consumer thread is blocked or simply slow:

  • Blocked thread: If Consumer Offset is not advancing, the consumer thread is likely stuck. Use jstack (for Java applications) to capture a thread dump and identify the blocking point. For more information, see jstack - Stack Trace.

  • Slow processing: If Consumer Offset is advancing but falling behind production, profile the message-processing logic in your application. Look for slow I/O calls, database writes, or blocking operations in the processing phase.
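Besides running jstack externally, a Java application can inspect its own threads with the standard `ThreadMXBean` API. This sketch dumps each live thread's state and top stack frame, which is often enough to spot a consumer thread stuck in a BLOCKED or WAITING state:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// In-process alternative to an external jstack: print every live thread's
// name, state, and top stack frame to locate a blocked consumer thread.
public class ThreadDump {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            StackTraceElement[] stack = info.getStackTrace();
            System.out.printf("%s [%s]%s%n",
                    info.getThreadName(),
                    info.getThreadState(),
                    stack.length > 0 ? " at " + stack[0] : "");
        }
    }
}
```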

Increase the consumption rate

Use one or both of the following approaches:

  • Add consumers: Add more consumer instances within the same consumer group, either as additional threads in an existing process or as separate processes. Each consumer handles one or more partitions. If the number of consumers already equals or exceeds the number of partitions, adding more consumers has no effect -- the extra consumers remain idle.

  • Increase consumption threads: Use multi-threaded consumption within each consumer instance. For implementation details, see the "Increase consumption rate" section in Best practices for consumers.
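The multi-threaded approach can be sketched as follows. This is an illustration only, with a stand-in list in place of a real poll batch: one thread pulls, a fixed worker pool runs the (hypothetical) business logic, and offsets are committed only after the whole batch completes so no message is skipped if a worker fails.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: the pull phase hands each message to a worker pool so that slow
// business logic does not stall the pulling thread.
public class MultiThreadedConsume {
    public static void main(String[] args) throws InterruptedException {
        List<String> batch = List.of("m0", "m1", "m2", "m3"); // stand-in for one pulled batch
        AtomicInteger processed = new AtomicInteger();

        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (String msg : batch) {
            workers.submit(() -> {
                // business logic would run here
                processed.incrementAndGet();
            });
        }
        workers.shutdown();
        workers.awaitTermination(10, TimeUnit.SECONDS);
        // commit offsets only after the entire batch has been processed
        System.out.println("processed: " + processed.get());
    }
}
```

Committing after the batch, rather than per message, trades a small amount of possible reprocessing on restart for a much simpler correctness guarantee.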

Note

In most cases, abnormal message accumulation is caused by slow message consumption or a blocked consumption thread. Avoid long-running operations and long timeout settings in the consumption logic, because they delay polling and offset commits.
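For reference, these standard Kafka consumer settings commonly interact with slow processing. The values below are illustrative only; the right values depend on your workload and client version.

```properties
# Illustrative consumer settings related to slow processing (tune per workload).
max.poll.records=100            # smaller batches so each poll's batch finishes quickly
max.poll.interval.ms=300000     # max time between polls before the consumer is considered dead
session.timeout.ms=10000        # heartbeat session timeout used for failure detection
```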

Check for rebalances

If messages are accumulated and the consumer status appears abnormal in the console, the consumer group may be rebalancing. During a rebalance, no messages are consumed.

Frequent rebalances are typically caused by consumers connecting and disconnecting at a high rate. For more information, see Why do rebalances frequently occur on my consumer client?
