
ApsaraMQ for RabbitMQ: Throttling peak transaction traffic of an instance

Last Updated: Mar 08, 2026

ApsaraMQ for RabbitMQ throttles the peak transactions per second (TPS) of a single instance. When traffic exceeds the configured threshold, the broker closes the channel that triggered the violation and returns an error. This guide explains how to prevent throttling, monitor peak TPS, and handle throttling errors.

How throttling works

ApsaraMQ for RabbitMQ enforces TPS limits at three levels:

  • Instance total TPS -- caps the combined send and receive throughput of the entire instance.

  • Single-node SendMessage TPS -- caps the send throughput on each backend service node within the cluster.

  • Per-API operation TPS -- caps specific operations such as basicGet, queueDeclare, and exchangeDeclare.

When any limit is exceeded, the broker returns reply-code=530 with the message reply-text=denied for too many requests and closes the channel that sent the request. The connection itself remains open -- only the channel is affected.

Prevent throttling

Instance total TPS

Choose the approach that matches your traffic pattern:

| Traffic pattern | Action |
| --- | --- |
| Testing, short-term, or unpredictable traffic | Use a serverless instance. For subscription instances, enable the elastic TPS feature. |
| Stable, high-volume production traffic | Upgrade the TPS specification to a higher tier. |

Single-node SendMessage TPS

ApsaraMQ for RabbitMQ uses a distributed architecture with multiple backend service nodes. If all traffic concentrates on a single node, the per-node limit is reached even though the instance-level limit has headroom.

To distribute load evenly across nodes:

  • Open at least 10 connections per queue. Each connection may land on a different backend node, spreading the send workload and preventing hotspots.

  • For Spring users, set CachingConnectionFactory to CONNECTION mode. This creates a new connection for each session instead of multiplexing channels on a single connection. For details, see Spring integration.
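To see why more connections help, the following self-contained Java sketch models the effect. It is a rough illustration only: the four-node count, the uniform random assignment of connections to nodes, and the even traffic split per connection are assumptions for the model, not documented behavior of the service.

```java
import java.util.Arrays;
import java.util.Random;

public class NodeSpread {
    // Model: each connection lands on one of `nodes` backend nodes uniformly
    // at random, and a producer pushing `totalTps` splits its traffic evenly
    // across its connections. Returns the busiest node's send TPS.
    static int worstNodeTps(int totalTps, int connections, int nodes, long seed) {
        Random random = new Random(seed);
        int[] perNode = new int[nodes];
        int tpsPerConnection = totalTps / connections;
        for (int c = 0; c < connections; c++) {
            perNode[random.nextInt(nodes)] += tpsPerConnection;
        }
        return Arrays.stream(perNode).max().getAsInt();
    }

    public static void main(String[] args) {
        int totalTps = 40_000;
        // With a single connection, all 40,000 TPS hits one node, exceeding a
        // 25,000 TPS per-node SendMessage limit even though the instance-level
        // limit may still have headroom.
        System.out.println("1 connection:   worst node " + worstNodeTps(totalTps, 1, 4, 7) + " TPS");
        // With 10 connections the load spreads across nodes, so each node
        // typically stays well under the per-node limit.
        System.out.println("10 connections: worst node " + worstNodeTps(totalTps, 10, 4, 7) + " TPS");
    }
}
```

The model captures the key point of the recommendation above: a single connection pins the whole workload to one node, while ten or more connections let the broker distribute it.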

Monitor peak TPS

Track actual peak TPS to detect when traffic approaches the throttling threshold. ApsaraMQ for RabbitMQ provides three monitoring methods:

CloudMonitor (recommended)

  • Granularity: minute-level peak TPS (1-minute statistical period).

  • Scope: instance-level peak TPS.

  • Cost: free.

  • View TPS changes over the past 14 days to identify traffic trends and anomalies.

  • Set alert rules on the instance peak TPS metric to receive notifications before throttling occurs.

For details, see Query the peak TPS of an instance and configure an alert rule by using CloudMonitor.

Instance Details page (recommended)

  • Granularity: second-level peak TPS.

  • Scope: instance-level and per-API-operation peak TPS.

  • Cost: free.

  • Provides second-level precision for pinpointing short traffic spikes.

  • Supports filtering by specific API operation.

To keep the result list manageable, only the first 10 minutes of query results are displayed.

For details, see Query the peak TPS of an instance on the Instance Details page.

Simple Log Service

  • Granularity: second-level peak TPS.

  • Scope: instance-level peak TPS.

  • Cost: billed by Simple Log Service. See Billable items of pay-by-feature.

  • Use SLS query statements for advanced analysis in complex business scenarios.

This method requires familiarity with Simple Log Service query syntax. Results may be harder to interpret compared to CloudMonitor or the Instance Details page.

For details, see Query the peak TPS of an instance by using Simple Log Service.

Throttling thresholds

Use the following tables to determine the throttling limit for your instance type and specification.

Instance total TPS

Serverless instances

| Cluster type | Billing model | Throttling threshold |
| --- | --- | --- |
| Shared cluster | Pay-by-provisioned-capacity-and-elastic-traffic or pay-by-messaging-request | Maximum: 50,000 TPS |
| Exclusive cluster | Pay-by-provisioned-capacity-and-elastic-traffic | 2x the peak TPS included in the basic specification |

Subscription instances

| Edition | Elastic TPS | Throttling threshold |
| --- | --- | --- |
| Enterprise Edition | Disabled | 1x the peak TPS included in the basic specification |
| Enterprise Edition | Enabled | 2x the peak TPS included in the basic specification (maximum: 50,000 TPS) |
| Enterprise Platinum Edition | Disabled | 1x the peak TPS included in the basic specification |
| Enterprise Platinum Edition | Enabled | 2x the peak TPS included in the basic specification (maximum: 50,000 TPS) |
| Professional Edition | Disabled | 1x the peak TPS included in the basic specification |
| Professional Edition | Enabled | 1.5x the peak TPS included in the basic specification |
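As a sanity check, the subscription rules above can be written as a small calculation. The class and method names below are illustrative; the rules themselves (1x when elastic TPS is disabled, 1.5x for Professional Edition, 2x capped at 50,000 TPS for the Enterprise editions) come straight from the table.

```java
public class TpsThreshold {
    static final int ELASTIC_CAP = 50_000; // cap for Enterprise editions with elastic TPS

    // Returns the throttling threshold for a subscription instance, given the
    // edition, the elastic TPS setting, and the peak TPS of its basic specification.
    static int subscriptionThreshold(String edition, boolean elasticEnabled, int baseTps) {
        if (!elasticEnabled) {
            return baseTps; // 1x the basic specification for all editions
        }
        if (edition.equals("Professional")) {
            return (int) (baseTps * 1.5); // 1.5x, no cap listed in the table
        }
        // Enterprise and Enterprise Platinum: 2x, capped at 50,000 TPS
        return Math.min(baseTps * 2, ELASTIC_CAP);
    }

    public static void main(String[] args) {
        System.out.println(subscriptionThreshold("Enterprise", true, 10_000));    // 20000
        System.out.println(subscriptionThreshold("Enterprise", true, 30_000));    // 50000 (capped)
        System.out.println(subscriptionThreshold("Professional", true, 10_000));  // 15000
        System.out.println(subscriptionThreshold("Professional", false, 10_000)); // 10000
    }
}
```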

Single-node SendMessage TPS

The broker limits the TPS for SendMessage operations on each backend service node within the instance.

| Instance type | Throttling threshold |
| --- | --- |
| Serverless -- shared (by cumulative amount) | 25,000 TPS |
| Serverless -- dedicated (reserved + elastic) | 25,000 TPS |
| Subscription -- Enterprise Edition | None |
| Subscription -- Enterprise Platinum Edition (reserved + elastic) | 25,000 TPS |
| Subscription -- Professional Edition | 25,000 TPS |

Per-API operation limits

These limits apply per instance. Serverless exclusive cluster instances have no per-API throttling.

| Operation | API method | Serverless (shared) | Serverless (exclusive) | Subscription |
| --- | --- | --- | --- | --- |
| Synchronous message receiving | basicGet | 500 TPS | None | 500 TPS |
| Queue clearance | purgeQueue | 500 TPS | None | 500 TPS |
| Exchange creation | exchangeDeclare | 500 TPS | None | 500 TPS |
| Exchange deletion | exchangeDelete | 500 TPS | None | 500 TPS |
| Queue creation | queueDeclare | 500 TPS | None | 500 TPS |
| Queue deletion | queueDelete | 500 TPS | None | 500 TPS |
| Binding creation | queueBind | 500 TPS | None | 500 TPS |
| Binding deletion | queueUnbind | 500 TPS | None | 500 TPS |
| Message restoration | basicRecover | 500 TPS | None | 500 TPS |
| Message requeuing | basicReject(requeue=true) / basicNack(requeue=true) | 20 TPS | None | 20 TPS |
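A client-side token bucket is one common way to stay under these per-API limits before the broker throttles you. The limiter below is a hypothetical sketch, not part of any ApsaraMQ SDK; the 20 TPS figure matches the requeue limit in the table above.

```java
public class ApiRateLimiter {
    private final int permitsPerSecond;
    private double available;
    private long lastRefillNanos;

    ApiRateLimiter(int permitsPerSecond) {
        this.permitsPerSecond = permitsPerSecond;
        this.available = permitsPerSecond; // start with a full bucket
        this.lastRefillNanos = System.nanoTime();
    }

    // Returns true if the caller may issue one API call now. Tokens refill
    // continuously at `permitsPerSecond` per second, capped at the bucket size.
    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        double refill = (now - lastRefillNanos) / 1_000_000_000.0 * permitsPerSecond;
        available = Math.min(permitsPerSecond, available + refill);
        lastRefillNanos = now;
        if (available >= 1.0) {
            available -= 1.0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Cap requeue operations at 20 TPS, matching the table above.
        ApiRateLimiter requeueLimiter = new ApiRateLimiter(20);
        int granted = 0;
        for (int i = 0; i < 100; i++) {
            if (requeueLimiter.tryAcquire()) {
                granted++; // in real code: channel.basicNack(deliveryTag, false, true)
            }
        }
        System.out.println("granted in one burst: " + granted); // about 20
    }
}
```

Calls that are not granted can be delayed and retried rather than dropped, which keeps the client under the broker's limit instead of triggering a channel close.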

Handle throttling errors

Error code and message

When throttling is triggered, the broker closes the affected channel and returns:

  • Error code: reply-code=530

  • Error message: reply-text=denied for too many requests

The following Java stack trace shows a typical throttling error:

Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>
(reply-code=530, reply-text=denied for too many requests, ReqId:5FB4C999314635F952FCBFF6, ErrorHelp[dstQueue=XXX_test_queue,
srcExchange=Producer.ExchangeName,bindingKey=XXX_test_bk, http://mrw.so/6rNqO8], class-id=50, method-id=20)
    at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:516)
    at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:346)
    at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:182)
    at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:114)
    at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:672)
    at com.rabbitmq.client.impl.AMQConnection.access$300(AMQConnection.java:48)
    at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:599)
    at java.lang.Thread.run(Thread.java:748)

The error context includes the request ID (ReqId), destination queue (dstQueue), source exchange (srcExchange), and binding key (bindingKey).
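If you log these errors, fields such as ReqId and dstQueue can be extracted with regular expressions. The parser below is an illustrative sketch based on the message format shown above, not an official API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ThrottleErrorParser {
    private static final Pattern REQ_ID = Pattern.compile("ReqId:([0-9A-Fa-f]+)");
    private static final Pattern DST_QUEUE = Pattern.compile("dstQueue=([^,\\]]+)");

    // Extracts the request ID from a broker error message, or null if absent.
    static String reqId(String message) { return extract(REQ_ID, message); }

    // Extracts the destination queue from a broker error message, or null if absent.
    static String dstQueue(String message) { return extract(DST_QUEUE, message); }

    private static String extract(Pattern pattern, String message) {
        Matcher m = pattern.matcher(message);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String msg = "channel error; protocol method: #method<channel.close>"
                + "(reply-code=530, reply-text=denied for too many requests, "
                + "ReqId:5FB4C999314635F952FCBFF6, ErrorHelp[dstQueue=XXX_test_queue,"
                + "srcExchange=Producer.ExchangeName,bindingKey=XXX_test_bk], "
                + "class-id=50, method-id=20)";
        System.out.println(reqId(msg));    // 5FB4C999314635F952FCBFF6
        System.out.println(dstQueue(msg)); // XXX_test_queue
    }
}
```

The request ID is the value to provide to support when investigating a throttling incident, so capturing it in structured logs is worthwhile.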

Recover from channel closure

Because the broker closes only the channel (not the connection), catch the AlreadyClosedException and recreate the channel. The following Java example shows a retry-based recovery pattern:

import com.rabbitmq.client.AlreadyClosedException;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;

private static final int MAX_RETRIES = 5; // Maximum retry attempts
private static final long WAIT_TIME_MS = 2000; // Wait time between retries in milliseconds

private void doAnythingWithReopenChannels(Connection connection, Channel channel) {
    try {
        // ......
        // Any operation to be performed in the current channel.
        // For example, sending or consuming messages.
        // ......

    } catch (AlreadyClosedException e) {
        String message = e.getMessage();
        if (isChannelClosed(message)) {
            // Channel was closed by the broker. Recreate it.
            channel = createChannelWithRetry(connection);
            // Continue with other operations after recovery.
            // ......
        } else {
            throw e;
        }
    }
}

private Channel createChannelWithRetry(Connection connection) {
    for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
        try {
            return connection.createChannel();
        } catch (Exception e) {
            System.err.println("Failed to create channel. Attempt " + attempt + " of " + MAX_RETRIES);
            // If channel creation fails (possibly still throttled), wait and retry.
            if (attempt < MAX_RETRIES) {
                try {
                    Thread.sleep(WAIT_TIME_MS);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt(); // Restore the interrupted state.
                    throw new RuntimeException("Interrupted while waiting to retry", ie);
                }
            } else {
                throw new RuntimeException("Exceeded maximum retries to create channel", e);
            }
        }
    }
    throw new RuntimeException("This line should never be reached");
}

private boolean isChannelClosed(String errorMsg) {
    // Check whether the error message contains "channel.close".
    // This covers both error code 530 (throttling) and 541 (internal error).
    if (errorMsg != null && errorMsg.contains("channel.close")) {
        System.out.println("[ChannelClosed] Error details: " + errorMsg);
        return true;
    }
    return false;
}

Key points:

  • The retry uses a fixed 2,000 ms wait between attempts. Adjust this value based on your traffic pattern.

  • The isChannelClosed method checks for channel.close in the error message, which covers both reply-code=530 (throttling) and other channel closure scenarios.

  • After recovering the channel, resume normal operations on the new channel object.