ApsaraMQ for RabbitMQ throttles the peak transactions per second (TPS) of a single instance. This topic describes the throttling rules for ApsaraMQ for RabbitMQ instances, the behavior that occurs after throttling is triggered, and best practices for managing throttling.
Throttling thresholds
Instance total TPS throttling threshold
| Instance type | Specification | Throttling threshold |
| --- | --- | --- |
| Serverless instance | Shared cluster (pay-by-provisioned-capacity-and-elastic-traffic or pay-by-messaging-request) | Maximum value: 50,000 |
| Serverless instance | Exclusive cluster (pay-by-provisioned-capacity-and-elastic-traffic) | Twice the peak TPS included in the basic specification |
| Subscription instance (elastic TPS disabled) | Enterprise Edition, Enterprise Platinum Edition, or Professional Edition | Peak TPS included in the basic specification |
| Subscription instance (elastic TPS enabled) | Enterprise Edition | Twice the peak TPS included in the basic specification. Maximum value: 50,000 |
| Subscription instance (elastic TPS enabled) | Enterprise Platinum Edition | Twice the peak TPS included in the basic specification. Maximum value: 50,000 |
| Subscription instance (elastic TPS enabled) | Professional Edition | 1.5 times the peak TPS included in the basic specification |
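For example, assume that a subscription Enterprise Platinum Edition instance includes a peak TPS of 10,000 in its basic specification (the number is only illustrative). With elastic TPS disabled, the instance is throttled at 10,000 TPS. With elastic TPS enabled, it is throttled at min(2 × 10,000, 50,000) = 20,000 TPS. If the basic specification instead included 30,000 peak TPS, the elastic threshold would be capped at 50,000 TPS rather than 60,000 TPS.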
Single-node SendMessage TPS throttling threshold
For each instance, the broker throttles the TPS of SendMessage operations on each backend service node. The following table describes the throttling thresholds.
| Instance type | Specification and billing method | Throttling threshold |
| --- | --- | --- |
| Serverless instance | Shared cluster (pay-by-messaging-request) | 25,000 TPS |
| Serverless instance | Shared cluster (pay-by-provisioned-capacity-and-elastic-traffic) | 25,000 TPS |
| Serverless instance | Exclusive cluster (pay-by-provisioned-capacity-and-elastic-traffic) | None |
| Subscription instance | Enterprise Edition | 25,000 TPS |
| Subscription instance | Enterprise Platinum Edition | None |
| Subscription instance | Professional Edition | 25,000 TPS |
API operation throttling threshold
| Item | Serverless instance (shared cluster) | Serverless instance (exclusive cluster) | Subscription instance (Enterprise Edition, Enterprise Platinum Edition, and Professional Edition) |
| --- | --- | --- | --- |
| Synchronous message receiving on an instance | 500 TPS | None | 500 TPS |
| Queue clearance on an instance | 500 TPS | None | 500 TPS |
| Exchange creation on an instance | 500 TPS | None | 500 TPS |
| Exchange deletion on an instance | 500 TPS | None | 500 TPS |
| Queue creation on an instance | 500 TPS | None | 500 TPS |
| Queue deletion on an instance | 500 TPS | None | 500 TPS |
| Binding creation on an instance | 500 TPS | None | 500 TPS |
| Binding deletion on an instance | 500 TPS | None | 500 TPS |
| Message restoration on an instance | 500 TPS | None | 500 TPS |
| Message requeuing on an instance | 20 TPS | None | 20 TPS |
Throttling rules
If the peak TPS of an ApsaraMQ for RabbitMQ instance exceeds the TPS limit of its specification, the instance is throttled.
When throttling is triggered, the following events occur:
The ApsaraMQ for RabbitMQ broker returns an error code. For more information, see Error code and error message.
The ApsaraMQ for RabbitMQ broker closes the channel of the current request. You can catch the exception in your code and reopen the channel. For more information, see Sample code for handling error codes.
Error code and error message
Error code: reply-code=530
Error message: reply-text=denied for too many requests
Sample code for handling error codes
The following sample code is in Java:
import com.rabbitmq.client.AlreadyClosedException;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;

private static final int MAX_RETRIES = 5; // The maximum number of retries.
private static final long WAIT_TIME_MS = 2000; // The wait time for each retry in milliseconds.
private void doAnythingWithReopenChannels(Connection connection, Channel channel) {
try {
// ......
// Any operation to be performed in the current channel.
// For example, sending or consuming messages.
// ......
} catch (AlreadyClosedException e) {
String message = e.getMessage();
if (isChannelClosed(message)) {
// If the channel is closed, close and re-create the channel.
channel = createChannelWithRetry(connection);
// You can continue to perform other operations after reconnection.
// ......
} else {
throw e;
}
}
}
private Channel createChannelWithRetry(Connection connection) {
for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
try {
return connection.createChannel();
} catch (Exception e) {
System.err.println("Failed to create channel. Attempt " + attempt + " of " + MAX_RETRIES);
// Check for errors. If the channel is still closed due to throttling, you can wait and then retry.
// You can also remove this retry logic.
if (attempt < MAX_RETRIES) {
try {
Thread.sleep(WAIT_TIME_MS);
} catch (InterruptedException ie) {
Thread.currentThread().interrupt(); // Restore the interrupted state.
}
} else {
throw new RuntimeException("Exceeded maximum retries to create channel", e);
}
}
}
throw new RuntimeException("This line should never be reached"); // In theory, this line of code is unreachable.
}
private boolean isChannelClosed(String errorMsg) {
// Check whether the error message contains "channel.close". This error indicates that the channel is closed.
// The error may contain error messages such as 530 and 541.
if (errorMsg != null && errorMsg.contains("channel.close")) {
System.out.println("[ChannelClosed] Error details: " + errorMsg);
return true;
}
return false;
}Query the peak TPS of an instance
You can query the actual peak TPS of an instance to understand the traffic fluctuations and peaks of your business and determine whether the instance specification meets your requirements.
ApsaraMQ for RabbitMQ provides the following three methods for querying the peak TPS of an instance:
| Method | Time granularity | Resource level |
| --- | --- | --- |
| (Recommended) Query the peak TPS of an instance and configure an alert rule by using CloudMonitor | Minute-level. The peak TPS of the instance during a 1-minute statistical period. | Peak TPS of an instance |
| (Recommended) Query the peak TPS of an instance on the Instance Details page | Second-level | |
| Query the peak TPS of an instance by using Simple Log Service | Second-level | Peak TPS of an instance |
What do I do if throttling is triggered because the TPS limit is exceeded?
If the peak TPS of an instance is not properly planned and throttling is triggered for the instance or a connection, your business may be affected. In this case, we recommend the following solutions.
Solutions to throttling caused by exceeding the total TPS limit of a single instance
For test scenarios or short-term scenarios in which peak traffic is uncertain or traffic is small, we recommend that you use a serverless ApsaraMQ for RabbitMQ instance. If you use a subscription instance in such scenarios, we recommend that you enable the elastic TPS feature for the instance. For more information, see Enable the elastic TPS feature for an instance.
For long-term scenarios with stable and large amounts of traffic, we recommend that you upgrade the TPS specification. For more information, see Upgrade instance configurations.
Solutions to throttling caused by exceeding the TPS limit of a single node
An ApsaraMQ for RabbitMQ cluster uses a distributed architecture. We recommend that you create multiple connections (at least 10) for each queue so that clients can connect to multiple service nodes in the cluster in a balanced manner. This method can effectively prevent load hotspots and improve message sending and consumption efficiency.
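The following Java sketch shows one way to implement this recommendation by opening multiple connections and publishing in round-robin order. It is a minimal example rather than a production implementation; the endpoint, credentials, vhost, exchange, and routing key are placeholders, and the connection count of 10 follows the recommendation above.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiConnectionPublisher {

    private final List<Channel> channels = new ArrayList<>();
    private final AtomicInteger next = new AtomicInteger();

    // Opens one channel on each of `connectionCount` separate connections.
    // Separate connections let the client be balanced across multiple backend service nodes.
    public MultiConnectionPublisher(ConnectionFactory factory, int connectionCount) throws Exception {
        for (int i = 0; i < connectionCount; i++) {
            Connection connection = factory.newConnection();
            channels.add(connection.createChannel());
        }
    }

    // Publishes a message on the next channel in round-robin order.
    // Note: a Channel is not thread-safe. In a multithreaded producer, give each thread its own channel.
    public void publish(String exchange, String routingKey, String body) throws Exception {
        Channel channel = channels.get(Math.floorMod(next.getAndIncrement(), channels.size()));
        channel.basicPublish(exchange, routingKey, null, body.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("your-instance-endpoint"); // Placeholder endpoint.
        factory.setUsername("your-username");      // Placeholder credentials.
        factory.setPassword("your-password");
        factory.setVirtualHost("your-vhost");      // Placeholder vhost.

        // At least 10 connections, as recommended above.
        MultiConnectionPublisher publisher = new MultiConnectionPublisher(factory, 10);
        publisher.publish("test-exchange", "test-key", "hello"); // Placeholder exchange and routing key.
    }
}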
If you are a Spring user, we recommend that you use the CONNECTION mode of CachingConnectionFactory. For more information, see Spring integration.
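The following Spring configuration sketch shows what enabling the CONNECTION cache mode can look like, based on the Spring AMQP CachingConnectionFactory API. The endpoint, credentials, vhost, connection cache size, and bean names are placeholders.

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConnectionConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        // Placeholder endpoint, port, credentials, and vhost.
        CachingConnectionFactory factory = new CachingConnectionFactory("your-instance-endpoint", 5672);
        factory.setUsername("your-username");
        factory.setPassword("your-password");
        factory.setVirtualHost("your-vhost");

        // Cache whole connections instead of channels on a single connection,
        // so that traffic is spread across multiple connections and therefore multiple nodes.
        factory.setCacheMode(CachingConnectionFactory.CacheMode.CONNECTION);
        factory.setConnectionCacheSize(10); // Illustrative size; align it with the recommendation above.
        return factory;
    }

    @Bean
    public RabbitTemplate rabbitTemplate(CachingConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }
}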