
ApsaraMQ for Kafka:Limits

Last Updated:Oct 22, 2025

ApsaraMQ for Kafka imposes limits on specific metrics. When you use ApsaraMQ for Kafka, do not exceed these limits; otherwise, your program may encounter errors.

Important

The Service-Level Agreement (SLA) and its compensation terms do not cover instability caused by instance configurations that exceed the following limits.

Limits

The following table describes the limits for ApsaraMQ for Kafka.

Item

Supported

Description

Limiting the total number of topics and partitions

Supported

ApsaraMQ for Kafka stores and coordinates data at the granularity of partitions. An excessive number of topics, and therefore partitions, causes storage fragmentation and degrades cluster performance and stability.

Minimum number of partitions per topic

  • Subscription and pay-as-you-go:

    • For topics that use cloud storage, the minimum value is 2.

    • For topics that use local storage, the minimum value is 1.

  • Serverless edition:

    • For topics that use cloud-native storage, the minimum value is 1.

If traffic is high, a single partition can cause data skew and hot spot issues. Set the number of partitions appropriately.

Reducing the number of partitions for a topic

Not supported

This is a limitation of the Apache Kafka design.

Exposing ZooKeeper

Not supported

You do not need to access ZooKeeper to use clients in Apache Kafka V0.9.0 and later. ZooKeeper in ApsaraMQ for Kafka is partially shared and is not exposed for security reasons. You do not need to understand how ZooKeeper works.

Logging on to the machine where ApsaraMQ for Kafka is deployed

Not supported

None.

Version

Supports versions 2.2.x to 3.3.x

  • Non-Serverless instances support versions 2.2.x to 2.6.x.

  • Serverless instances support version 3.3.x.

To upgrade the instance version, see Upgrade instance version.

Ratio of partitions to topics

1:1

The number of available topics is determined by the total number of partitions. For example, assume that you purchase an instance with 50 partitions and the alikafka.hw.2xlarge throughput specification, which includes 1,000 bonus partitions. The total number of partitions for this instance is 50 (purchased) + 1,000 (bonus) = 1,050, so the number of available topics is also 1,050.

Note

This applies only to non-Serverless instances.

Changing the instance region

Not supported

After an instance is purchased and deployed, its region is tied to physical resources and cannot be changed. To change the instance region, release the instance and purchase a new one.

Changing instance network properties

Supported

You can change network properties as needed. For more information, see Upgrade instance configurations.

Message size

10 MB

The message size cannot exceed 10 MB. Otherwise, the message fails to be sent.

Monitoring and alerts

Supported

Data latency is 1 minute.

Endpoints

Varies by edition

  • Non-Serverless instances:

    • Standard Edition: Supports default and SSL endpoints.

    • Professional Edition: Supports default, SSL, and SASL endpoints.

  • Serverless instances: Supports default, SSL, and SASL endpoints.

Single partition with cloud storage

May become unavailable during downtime or upgrades

Create more than one partition. If you must use a single partition, use local storage.

Note
  • This limit applies only to non-Serverless instances. Single partitions with cloud storage on Serverless instances provide high availability.

  • Only Professional Edition instances support selecting local storage as the storage engine type when you create a topic. Standard Edition does not support this feature.

Maximum number of messages per batch

32,767

If individual messages are small, set the batch.size parameter to a value that does not exceed 16384 bytes.

Note

This limit applies only to non-Serverless instances.

Note

You can no longer purchase non-Serverless ApsaraMQ for Kafka instances based on topic specifications. If your existing instance was purchased based on topic specifications, the topic-to-partition ratio is 1:16. For Professional Edition instances, the number of topics is calculated as the number of purchased topics × 2.

Quota limits

The following table describes the quota limits for ApsaraMQ for Kafka. Exceeding these limits may cause stability issues. The 'Other limits' section describes scenarios that can adversely affect the server. You must exercise caution in these scenarios to prevent server overload and related stability issues.

Unless stated otherwise, these limits apply to each cluster. To request a quota increase, submit a ticket.

In the table, `//` represents integer division, which rounds down to the nearest integer.
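For example, `//` discards the remainder, so a partial increment does not count toward a quota:

```python
# Integer division as used in the quota formulas: the quotient is
# rounded down, so only full traffic increments earn an increase.
print(250 // 100)  # 2: a 250 MB/s workload earns two full 100 MB/s increments
print(299 // 300)  # 0: still below the first 300 MB/s step, no increment yet
```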

Limit

Condition

Description

Subscription/Pay-as-you-go instances

Serverless (Basic Edition)

Serverless (Standard/Professional Edition)

Connections (single node)

  • Starts at 1,000 connections.

  • For every 100 MB/s increase in actual message sending traffic, the number of connections increases by 1,000.

  • The upper limit is 10,000.

Formula:

C = min(10000, 1000 + (F // 100) × 1000)

  • Starts at 1,000 connections.

  • For every 300 MB/s increase in reserved sending capacity, the number of connections increases by 1,000.

  • The upper limit is 10,000.

Formula:

C = min(10000, 1000 + (F // 300) × 1000)

The number of TCP connections to a single broker.

If you require a higher connection limit, submit a ticket.
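The two connection formulas above share one shape: a base of 1,000 connections, plus 1,000 per full traffic step, capped at 10,000. As a sketch (the function name and example traffic values are illustrative, not part of the product):

```python
def connection_limit(traffic_mbps: int, step_mbps: int) -> int:
    """Single-node connection limit: 1,000 base connections plus
    1,000 for each full step of sending traffic, capped at 10,000."""
    return min(10000, 1000 + (traffic_mbps // step_mbps) * 1000)

# Subscription/pay-as-you-go: steps of 100 MB/s of actual sending traffic.
print(connection_limit(250, 100))  # 3000
# Serverless: steps of 300 MB/s of reserved sending capacity.
print(connection_limit(250, 300))  # 1000
```

The same shape, with a base of 200, increments of 100, and a cap of 1,000, gives the Internet (SSL) connection limits in the next row.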

Internet (SSL) connections (single node)

  • Starts at 200 connections.

  • For every 100 MB/s increase in actual message sending traffic, the number of connections increases by 100.

  • The upper limit is 1,000.

Formula:

C = min(1000, 200 + (F // 100) × 100)

  • Starts at 200 connections.

  • For every 300 MB/s increase in reserved sending capacity, the number of connections increases by 100.

  • The upper limit is 1,000.

Formula:

C = min(1000, 200 + (F // 300) × 100)

The number of Internet (SSL) TCP connections to a single broker.

Connection frequency (single node)

50 per second

150 per second

150 per second

The number of connection attempts from a client to the server per second. This includes failed connections due to reasons such as authentication failures.

Internet (SSL) connection frequency (single node)

10 per second

The number of Internet (SSL) connection attempts from a client to the server per second. This includes failed connections due to reasons such as authentication failures.

Batch size

A batch size with a 50th percentile (TP50) of less than 4 KB is considered fragmented sending.

The size of a message batch in a PRODUCE request after the messages are batched by the sending client. To improve batching capabilities, use a client of version 2.4 or later. For more information, see Improve sending performance (reduce fragmented sending requests).
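As an illustration of the TP50 check (a pure-Python sketch; the sample batch sizes are invented):

```python
import statistics

# Hypothetical batch sizes (in bytes) observed in PRODUCE requests.
batch_sizes = [1024, 2048, 3072, 3500, 8192, 16384]

# A TP50 (median) below 4 KB counts as fragmented sending.
tp50 = statistics.median(batch_sizes)
print(tp50, tp50 < 4096)  # 3286.0 True -> this workload would be flagged
```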

Sending request frequency (cluster)

  • Starts at 10,000 requests per second.

  • For every 20 MB/s increase in actual message sending traffic, the number of requests increases by 2,000 per second.

Formula:

R = 10000 + (F // 20) × 2000

  • Starts at 10,000 requests per second.

  • For every 300 MB/s increase in reserved sending capacity, the number of requests increases by 5,000 per second.

Formula:

R = 10000 + (F // 300) × 5000

  • Starts at 10,000 requests per second.

  • For every 60 MB/s increase in reserved sending capacity, the number of requests increases by 2,000 per second.

Formula:

R = 10000 + (F // 60) × 2000

The number of PRODUCE requests that are sent by the client per second.

If you require a higher request limit, submit a ticket.
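The three sending-request formulas above differ only in the step size and the per-step increment. A sketch (function name and sample traffic values are illustrative):

```python
def produce_request_limit(traffic_mbps: int, step_mbps: int, inc: int) -> int:
    """Cluster-wide PRODUCE request limit: 10,000 requests/s base, plus
    `inc` requests/s for each full step of sending traffic. No upper cap
    is stated for this quota."""
    return 10000 + (traffic_mbps // step_mbps) * inc

# Subscription/pay-as-you-go: +2,000 req/s per 20 MB/s of actual traffic.
print(produce_request_limit(100, 20, 2000))   # 20000
# Serverless Basic Edition: +5,000 req/s per 300 MB/s of reserved capacity.
print(produce_request_limit(300, 300, 5000))  # 15000
```

The consumption-request formulas in the next row follow the same pattern with a base of 5,000 requests per second.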

Consumption request frequency (cluster)

  • Starts at 5,000 requests per second.

  • For every 20 MB/s increase in actual message consumption traffic, the number of requests increases by 1,000 per second.

Formula:

R = 5000 + (F // 20) × 1000

  • Starts at 5,000 requests per second.

  • For every 100 MB/s increase in reserved subscription capacity, the number of requests increases by 2,500 per second.

Formula:

R = 5000 + (F // 100) × 2500

  • Starts at 5,000 requests per second.

  • For every 20 MB/s increase in reserved subscription capacity, the number of requests increases by 1,000 per second.

Formula:

R = 5000 + (F // 20) × 1000

The number of FETCH requests that are sent by the client per second.

If you require a higher request limit, submit a ticket.

Consumer offset commit frequency (single node)

  • Starts at 100 requests per second.

  • For every 100 MB/s increase in actual message sending traffic, the number of requests increases by 100 per second.

  • The upper limit is 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) × 100)

  • Starts at 100 requests per second.

  • For every 100 MB/s increase in reserved sending capacity, the number of requests increases by 100 per second.

  • The upper limit is 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) × 100)

The number of `OFFSET_COMMIT` requests that are sent by the client per second.

If you require a higher request limit, submit a ticket.

Metadata request frequency (cluster)

  • Starts at 100 requests per second.

  • For every 100 MB/s increase in actual message sending traffic, the number of requests increases by 100 per second.

  • The upper limit is 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) × 100)

  • Starts at 100 requests per second.

  • For every 100 MB/s increase in reserved sending capacity, the number of requests increases by 100 per second.

  • The upper limit is 1,000 requests per second.

Formula:

R = min(1000, 100 + (F // 100) × 100)

The number of metadata requests that the server receives from the client, such as METADATA, INIT_PRODUCER_ID, CREATE_ACL, and JOIN_GROUP.

Warning

Excessive requests can affect cluster stability.

Maximum number of partitions

For information about the maximum number of partitions for each instance specification, see Instance partitions.

The number of partitions includes partitions for different types of topics that you create.

If you require a higher partition limit, submit a ticket.

Create/delete partition frequency (cluster)

900 partitions per 10 seconds

This limit includes all partition operations that are initiated from the console, OpenAPI, or Kafka Admin.

Number of consumer groups (cluster)

2,000 per cluster

The recommended ratio of topics to the consumer groups that subscribe to them is 1:1, and the ratio must not exceed 3:1.

The number of consumer groups that you use.

If you require a higher limit on the number of groups, submit a ticket.

Warning

An excessive number of consumer groups can increase the server-side coordination load and the complexity of metadata management. This may affect performance and fault recovery time.

Message format version

You must use a message format version later than V1 for sending and consuming messages.

Use a client of version 2.4 or later.

Warning

Using an earlier Kafka message format can cause issues such as increased server-side CPU utilization, decreased throughput, and compatibility and security problems.

Other limits

  • Enabling compression algorithms, such as GZIP, consumes more server resources. This affects service latency and throughput.

  • Initializing many transactions with a Producer Id at a high frequency can cause memory overflow and server overload, which affects stability. For this reason, the transactional.id.expiration.ms kernel parameter is set to 15 minutes. If you have special requirements, submit a ticket.

  • Invalid message timestamp blocking: If message.timestamp.type is set to `CreateTime`, the broker rejects a message if the difference between the broker's timestamp and the message's timestamp exceeds the value of the message.timestamp.difference.max.ms parameter. This setting prevents incorrect timestamp configurations. If the timestamp is too early, the log segment is immediately deleted. If the timestamp is too far in the future, the log segment cannot be deleted.

  • To prevent abnormal writes to a compacted topic from filling up the cluster storage and causing downtime, the default storage limit for a compacted topic partition is 5 GB. If you have special requirements, submit a ticket.

  • If the CPU utilization of an instance exceeds 85%, the stability of the instance cluster may be affected. This can cause issues such as downtime and long-tail latency jitter when you send and consume messages.

  • Kafka performance depends on the cluster as a whole. If your message sending behavior or partition allocation is skewed, the cluster cannot deliver its full capacity.

  • Open-source transactional messages have many known and unfixed issues. Use them with caution. For an example, see KAFKA-12671. For more information about other issues, see KAFKA ISSUES.

  • Kafka may re-consume messages in many scenarios, such as during rebalancing. To prevent re-consumed messages from affecting your services, you must implement idempotence checks in your consumption logic.
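A minimal idempotence sketch for the last point: deduplicate by a business key carried in each message. The key name and the in-memory set are illustrative only; in practice the set of processed keys must live in a durable store such as a database.

```python
# Illustrative in-memory store of already-processed business keys.
processed = set()

def handle(message: dict) -> bool:
    """Process a message at most once; return False for duplicates
    redelivered after a rebalance or retry."""
    key = message["order_id"]  # hypothetical business key field
    if key in processed:
        return False           # already handled: skip the duplicate
    processed.add(key)
    # ... real business processing would go here ...
    return True

print(handle({"order_id": "A1"}))  # True: first delivery is processed
print(handle({"order_id": "A1"}))  # False: redelivered message is skipped
```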
