This topic describes the limits on query and analysis in Simple Log Service (SLS).
Logstore
Queries
| Limit | Description |
| --- | --- |
| Number of keywords | A keyword query supports a maximum of 30 conditions, excluding Boolean operators. |
| Field value size | The maximum size of a single field value is 512 KB. Content that exceeds this limit is not indexed, so keyword searches on that field might not return the log. The complete data is still stored. Note To set the maximum length of a log field value, see Why are field values truncated during query and analysis? |
| Concurrent queries | A single project supports a maximum of 100 concurrent queries. For example, 100 users can run queries at the same time across all logstores in a single project. |
| Query results | Each query returns up to 100 results per page. Navigate through the pages to view all results. |
| Fuzzy search | During a fuzzy search, Simple Log Service finds up to 100 matching terms and returns all logs that contain these terms. For more information, see Fuzzy search. |
| Result sorting | By default, results are sorted by time in descending order (newest first) with second-level precision. If nanosecond timestamps exist, sorting uses nanosecond precision. |
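Because each query returns at most 100 results per page, a client retrieves a full result set by paging with an offset. A minimal sketch of that loop, where `fetch_page` is a hypothetical stand-in for the real query call (not an SLS SDK function):

```python
# Client-side paging sketch: each request returns at most PAGE_SIZE hits,
# matching the 100-results-per-page limit above.
PAGE_SIZE = 100

def fetch_page(all_hits, offset, line=PAGE_SIZE):
    # Stand-in for a real query call: return at most `line` hits from `offset`.
    return all_hits[offset:offset + line]

def fetch_all(all_hits):
    results, offset = [], 0
    while True:
        page = fetch_page(all_hits, offset)
        results.extend(page)
        if len(page) < PAGE_SIZE:   # a short page means no more results
            return results
        offset += PAGE_SIZE

hits = list(range(250))             # pretend the query matched 250 logs
assert fetch_all(hits) == hits      # fetched in 3 requests: 100 + 100 + 50
```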
Analysis
| Limit | General-purpose instance | Dedicated SQL (SQL enhancement) | Dedicated SQL (Full Precision) |
| --- | --- | --- | --- |
| Concurrency | Up to 15 concurrent queries per project. | Up to 100 concurrent queries per project. | Up to 5 concurrent queries per project. |
| Data volume | A single query can scan up to 400 MB of log data, excluding cached data. Data that exceeds this limit is truncated, and the result is marked as an incomplete query result. | A single query can scan up to 2 GB of log data, excluding cached data. Data that exceeds this limit is truncated, and the result is marked as an incomplete query result. | Unlimited. |
| Enabling the mode | Enabled by default. | Enable this feature using a switch. For more information, see SQL enhancement. | Enable this feature using a switch. For more information, see SQL complete accuracy. |
| Fee | Free of charge. | Charged based on the actual CPU time used. | Charged based on the actual CPU time used. |
| Data effectiveness | The analysis feature applies only to data written after the feature is enabled. To analyze historical data, reindex the data. | The analysis feature applies only to data written after the feature is enabled. To analyze historical data, reindex the data. | The analysis feature applies only to data written after the feature is enabled. To analyze historical data, reindex the data. |
| Return results | By default, an analysis operation returns a maximum of 100 rows and 100 MB of data. An error is reported for an analytic statement that returns more than 100 MB of data. To return more rows, use the LIMIT clause. | By default, an analysis operation returns a maximum of 100 rows and 100 MB of data. An error is reported for an analytic statement that returns more than 100 MB of data. To return more rows, use the LIMIT clause. | By default, an analysis operation returns a maximum of 100 rows and 100 MB of data. An error is reported for an analytic statement that returns more than 100 MB of data. To return more rows, use the LIMIT clause. |
| Field value size | The default maximum length of a single field value is 2 KB (2,048 bytes). You can increase the maximum length to 16 KB (16,384 bytes). The part of a value that exceeds the limit is not used in analysis or retrieval. Note To modify the maximum length of a field value, set Maximum Length of Text Field. The updated index setting applies only to incremental data. For more information, see Create an index. | The default maximum length of a single field value is 2 KB (2,048 bytes). You can increase the maximum length to 16 KB (16,384 bytes). The part of a value that exceeds the limit is not used in analysis or retrieval. Note To modify the maximum length of a field value, set Maximum Length of Text Field. The updated index setting applies only to incremental data. For more information, see Create an index. | The default maximum length of a single field value is 2 KB (2,048 bytes). You can increase the maximum length to 16 KB (16,384 bytes). The part of a value that exceeds the limit is not used in analysis or retrieval. Note To modify the maximum length of a field value, set Maximum Length of Text Field. The updated index setting applies only to incremental data. For more information, see Create an index. |
| Timeout period | The maximum timeout period for an analysis operation is 55 seconds. | The maximum timeout period for an analysis operation is 55 seconds. | The maximum timeout period for an analysis operation is 55 seconds. |
| Number of bits for double-typed field values | A double-typed field value can hold a maximum of 52 bits. If a floating-point number requires more than 52 bits to encode, precision is lost. | A double-typed field value can hold a maximum of 52 bits. If a floating-point number requires more than 52 bits to encode, precision is lost. | A double-typed field value can hold a maximum of 52 bits. If a floating-point number requires more than 52 bits to encode, precision is lost. |
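The 52-bit limit matches the IEEE 754 double format, whose mantissa carries 52 explicit bits (53 significant bits). The resulting precision loss can be observed in any language that uses doubles; a short Python illustration:

```python
# IEEE 754 doubles carry 52 explicit mantissa bits (53 significant bits),
# so integers above 2**53 can no longer be represented exactly.
exact = 2**53                              # 9007199254740992: still exact
assert float(exact) == exact
assert float(exact + 1) == float(exact)    # +1 is lost: rounds to the same double
assert float(exact + 2) == exact + 2       # the spacing between doubles is now 2
```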
Metricstore
| Limit | Description | Notes |
| --- | --- | --- |
| API list | Only the /query, /query_range, /labels, /label/{label}/values, and /series API operations are supported. | |
| Data specifications | | For other limits, see Metric. |
| Concurrent queries | A single project supports a maximum of 15 concurrent query operations. | For example, 15 users can run query operations in different metricstores within the same project at the same time. |
| Data read volume | A single shard can read a maximum of 2 million time series, 2 million data points, or 200 MB of data at a time. If any of these limits is reached during the read process, the process stops. | If a limit is reached, the status is recorded as "incomplete read" and returned to the query side. To support reads of large data volumes, split shards. |
| Data volume for computing | Before a PromQL calculation runs, the volume of raw data on a single node is checked. The current limit allows a maximum of 200 million time series, 200 million data points, or 2 GB of data per calculation. If any of these limits is exceeded, a calculation error is returned. | If your business requires single-execution aggregation of large data volumes, enable the concurrent computing feature. For more information, see Concurrent computing. |
| Data points for computing | During calculation, the PromQL engine performs a "point selection" operation. If the number of selected data points included in the calculation exceeds 50 million, an error is reported. | This follows the same calculation limits as open source Prometheus. If the query is an aggregation, you can use the concurrent computing feature. |
| Query queue length | When a request reaches the server, it first enters a queue to wait for execution. If the number of tasks waiting in the queue exceeds 200, subsequent requests are discarded. | If a burst of requests with a high number of queries per second (QPS) arrives in a short period, some requests are denied. |
| Query results (PromQL) | In the standard open source protocol, the /query_range API operation returns at most 11,000 data points per time series. If the query parameters meet the condition (end - start)/step > 11000, an error is reported. | For queries over a long time range, increase the step parameter as needed. |
| Query results (SQL) | A single SQL query or calculation returns a maximum of 100 rows of data by default. If you add a "limit all" clause to the SQL statement, a maximum of 1 million rows of data is returned. This limit applies to both the query scenario and the calculation scenario. | One million rows of data represent one million data points. For information about the query syntax, see Syntax for time series data query and analysis. |
| Nesting PromQL subqueries in SQL | The length of a PromQL statement is limited to 3,000 characters. | For information about the query syntax, see Syntax of query and analysis on metric data. |
| Remote Read API | The Remote Read API can return up to 1 GB of data in a single request. The maximum query time span is 30 days. Note The Remote Read API pulls all raw data, and calling it consumes a large amount of memory in the metricstore. The number of concurrent requests is limited to 10. We recommend that you do not use this API in production environments. Use the query API of the metricstore instead. To obtain raw data, use data transformation, data shipping, or data consumption and export. | The maximum query time span cannot be adjusted. For the open source Remote Read API documentation, see Prometheus Remote Read. Note When you request data using the Remote Read API, set the lookback-delta of your local Prometheus to 3 minutes to match the default lookback-delta of the metricstore. Otherwise, query results may be incomplete. |
| lookback-delta | In an SLS metricstore, this parameter is set to 3 minutes by default. | lookback-delta is a parameter specific to PromQL queries. For more information, see lookback-delta. The PromQL API supports custom settings, with a maximum value of 3 days. For information about how to configure custom settings, see Time series metric query APIs. |
| Timeout | The default timeout period is 1 minute for PromQL API queries and 55 seconds for SQL queries. | The PromQL API supports custom settings. For information about how to configure custom settings, see Time series metric query APIs. |
| Limits on Meta APIs | To ensure query performance, Meta APIs are limited to querying a maximum of 5 minutes of data. This limit applies only to the /labels, /label/{label}/values, and /series API operations. | A 5-minute window means the time range extends 5 minutes back from the end parameter: [end - 5min, end]. The PromQL API supports custom start and end times for Meta APIs. For information about how to configure the settings, see Query series API. Note By default, Meta APIs query all data. Set a suitable match parameter to narrow the query and significantly improve performance. For more information, see Query series API. |
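Given the /query_range limit of 11,000 data points per series, a client querying a long time range can derive the smallest step that keeps (end - start)/step within the cap. A minimal sketch; the helper name is illustrative and not part of any SLS SDK:

```python
import math

# /query_range caps points per series: (end - start) / step must not
# exceed MAX_POINTS, so derive the smallest step for a given time range.
MAX_POINTS = 11000

def min_step_seconds(start, end, max_points=MAX_POINTS):
    """Smallest integer step (seconds) with (end - start) / step <= max_points."""
    return max(1, math.ceil((end - start) / max_points))

span = 30 * 24 * 3600                  # a 30-day range: 2,592,000 seconds
step = min_step_seconds(0, span)       # 236 seconds
assert span / step <= MAX_POINTS       # the query now stays under the limit
```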