When a bucket's bandwidth or queries per second (QPS) exceeds the default limit, OSS throttles or denies requests until traffic drops back within bounds. This topic covers how to confirm the overrun and fix the root cause.
Default limits:
| Metric | Limit | Behavior when exceeded |
|---|---|---|
| Bandwidth (Chinese mainland regions) | 10 Gbit/s | Requests are throttled |
| Bandwidth (regions outside Chinese mainland) | 5 Gbit/s | Requests are throttled |
| QPS | 10,000 requests per second | Requests beyond the limit are denied |
For the full list of OSS limits and performance metrics, see Limits.
Confirm the overrun
Before investigating root causes, verify that the bucket has hit a limit.
1. Log on to the OSS console.
2. In the left-side navigation pane, click Buckets, then click the name of the target bucket.
3. In the left-side navigation pane, choose Data Usage > Basic Statistics to view bandwidth and QPS metrics.
If the metrics confirm an overrun, use the sections below to identify and resolve the root cause.
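If you also export OSS access logs, you can cross-check the console metrics by counting requests per second directly from the log timestamps. The sketch below is a minimal illustration, assuming you have already extracted one second-granularity timestamp per request from your logs (the log schema and field extraction are up to you):

```python
from collections import Counter

QPS_LIMIT = 10_000  # default OSS QPS limit


def peak_qps(timestamps):
    """Return (second, request_count) for the busiest second.

    `timestamps` is an iterable of per-request timestamps truncated to
    second granularity, e.g. "2024-05-01T12:00:03".
    """
    per_second = Counter(timestamps)
    return per_second.most_common(1)[0]


# Example with synthetic timestamps.
ts = ["12:00:01"] * 3 + ["12:00:02"] * 5 + ["12:00:03"] * 2
second, count = peak_qps(ts)
print(second, count)       # 12:00:02 5
print(count > QPS_LIMIT)   # False: below the 10,000 QPS limit
```

A peak near or above 10,000 in any single second corroborates the denied requests you see in the console.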
Identify and fix the root cause
Internet bandwidth spike
If Internet bandwidth has spiked suddenly, route requests through Alibaba Cloud CDN. CDN caches OSS resources at edge nodes, cutting direct OSS origin traffic.
For setup instructions, see Use CDN to accelerate access to OSS.
CDN back-to-origin overload
If CDN back-to-origin requests to OSS are high, the CDN cache hit rate is likely too low. To reduce back-to-origin traffic:
- Increase the CDN cache expiration time. For instructions, see Add a cache rule.
- Prefetch (warm up) the CDN cache to pre-populate edge nodes before traffic arrives:
  1. Log on to the Alibaba Cloud CDN console.
  2. In the left-side navigation pane, choose Content Delivery > Purge and Prefetch.
  3. On the Purge/Prefetch tab, configure the prefetch rules.
- On the Records tab, check whether recent CDN cache purge requests have invalidated cached content and forced back-to-origin requests.
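To quantify how low the hit rate actually is, you can compute it from the cache-status field of your CDN access logs. This is a minimal sketch assuming you have already parsed out one status value (such as "HIT" or "MISS") per request; the field name and status values depend on your log schema:

```python
def cache_hit_ratio(cache_statuses):
    """Return the fraction of requests served from the CDN cache.

    `cache_statuses` holds one entry per request, e.g. "HIT" or "MISS",
    extracted from your CDN access logs (values are an assumption;
    check your actual log format).
    """
    if not cache_statuses:
        return 0.0
    hits = sum(1 for s in cache_statuses if s == "HIT")
    return hits / len(cache_statuses)


statuses = ["HIT", "HIT", "MISS", "HIT", "MISS"]
print(f"{cache_hit_ratio(statuses):.0%}")  # 60%
```

Every "MISS" is a back-to-origin request to OSS, so raising this ratio directly lowers origin bandwidth and QPS.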
Intranet bandwidth increase
If intranet bandwidth has risen, check whether the increase reflects legitimate business growth. If it does, review application access patterns and confirm that the growth in access volume is reasonable; if it does not, identify which application or job is generating the extra requests.
QPS spike above 10,000
If QPS exceeds 10,000, the most common causes are request bursts and concentrated object key access.
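When many requests target lexically adjacent object keys (for example, sequential date-based names), they can land on a single hot key range. One common mitigation, sketched below, is to prepend a hash-derived prefix so related keys spread out; the shard count and prefix format here are illustrative choices, not an OSS requirement:

```python
import hashlib


def sharded_key(key, shards=16):
    """Prepend a stable hash-derived prefix to an object key.

    Lexically adjacent keys ("logs/2024-05-01/...", "logs/2024-05-02/...")
    get different prefixes and thus spread across key ranges instead of
    concentrating on one. The scheme is deterministic, so the same key
    always maps to the same prefixed name.
    """
    shard = int(hashlib.md5(key.encode()).hexdigest(), 16) % shards
    return f"{shard:02x}/{key}"


print(sharded_key("logs/2024-05-01/host-01.gz"))
```

Because the mapping is deterministic, readers can recompute the prefixed name from the original key without a lookup table.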
Limit concurrent requests and manage retries
Cap the number of concurrent operations in the application and bound the number of retries, using backoff between attempts so that retries do not amplify a burst, especially under unstable network conditions.
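The pattern can be sketched with a bounded worker pool plus capped, backed-off retries. The `upload` function below is a placeholder for a real OSS call in your SDK; the concurrency and retry limits are illustrative values to tune for your workload:

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENCY = 8  # cap on in-flight OSS requests (tune to your workload)
MAX_RETRIES = 3      # bound retries so failures cannot amplify a burst


def with_retries(op, *args):
    """Run `op`, retrying with exponential backoff plus jitter."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            return op(*args)
        except Exception:
            if attempt == MAX_RETRIES:
                raise
            # Backoff 0.1 s, 0.2 s, 0.4 s, ... plus jitter so clients
            # do not retry in lockstep.
            time.sleep(0.1 * 2 ** attempt + random.uniform(0, 0.05))


def upload(key):
    # Placeholder for a real OSS request made by your SDK of choice.
    return f"uploaded {key}"


# The executor's worker count doubles as the concurrency limit.
with ThreadPoolExecutor(max_workers=MAX_CONCURRENCY) as pool:
    results = list(pool.map(lambda k: with_retries(upload, k),
                            (f"obj-{i}" for i in range(20))))
print(len(results))  # 20
```

Keeping retries bounded matters as much as limiting concurrency: unbounded retries against a throttled bucket turn a brief spike into a sustained overrun.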
Suspicious or unauthorized traffic
If a sudden, unexplained traffic surge occurs, enable logging or real-time log query to identify the source.
In the logs, check whether source IP addresses, Referer headers, or User-Agent values are concentrated on a small number of entries, a pattern that often indicates scraping or unauthorized access. If you confirm abnormal traffic, see How do I fix abnormal traffic problems in OSS?
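The concentration check can be automated once the logs are parsed. This sketch assumes records have already been parsed into dictionaries; the field names ("ip", "ua") are illustrative and should be mapped to your actual log schema:

```python
from collections import Counter


def top_sources(records, field, n=3):
    """Return the `n` most frequent values of `field` in parsed log
    records, plus the share of all requests those values account for.
    A high share from very few values suggests scraping or abuse.
    """
    counts = Counter(r[field] for r in records)
    top = counts.most_common(n)
    share = sum(c for _, c in top) / len(records)
    return top, share


# Synthetic example: one IP issues 8 of 10 requests.
logs = (
    [{"ip": "203.0.113.9", "ua": "scraper/1.0"}] * 8
    + [{"ip": "198.51.100.2", "ua": "Mozilla/5.0"}] * 2
)
top, share = top_sources(logs, "ip", n=1)
print(top, f"{share:.0%}")  # [('203.0.113.9', 8)] 80%
```

Run the same check over the Referer and User-Agent fields; a single value dominating any of the three is the signature to investigate.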