This topic describes the limits that apply to the query and analysis features of Simple Log Service (SLS).
Logstore
Queries
| Item | Description |
| --- | --- |
| Number of keywords | The number of keywords that are used as search conditions, excluding logical operators. You can specify up to 30 keywords in a search statement. |
| Size of a field value | The maximum size of a field value is 512 KB; the excess part is not indexed for search. If a field value exceeds 512 KB, keyword searches may not match the log, but the log is still stored in the Logstore. Note: To set the maximum length of a log field value, see Why are field values truncated when I query and analyze logs? |
| Concurrent search statements | Each project supports up to 100 concurrent search statements. For example, 100 users can execute search statements concurrently across all Logstores of a project. |
| Returned results | Returned logs are paginated, with up to 100 logs per page. |
| Fuzzy search | In a fuzzy search, SLS matches up to 100 words that meet the specified conditions and returns the logs that meet the search conditions and contain one or more of these words. For more information, see Fuzzy search. |
| Data sorting in search results | By default, search results are sorted in descending order of time with second-level precision. If logs carry nanosecond-precision timestamps, the results are sorted with nanosecond precision. |
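The 100-logs-per-page limit means a client must paginate when it needs more results than one page holds. The following sketch shows only the paging arithmetic; the helper name and the pattern of passing an offset and size per request are illustrative, not part of the SLS API:

```python
# Illustrative helper: plan the (offset, size) pages needed to fetch
# `total` matching logs when each request returns at most 100 logs.
PAGE_SIZE = 100  # per-page limit from the table above

def page_offsets(total: int, page_size: int = PAGE_SIZE):
    """Yield (offset, size) tuples that together cover `total` logs."""
    offset = 0
    while offset < total:
        yield offset, min(page_size, total - offset)
        offset += page_size

# Fetching 250 logs takes three requests:
print(list(page_offsets(250)))  # [(0, 100), (100, 100), (200, 50)]
```

Each yielded pair would map to one search request in whatever client you use.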
Analysis
| Limit | Standard instance | Dedicated SQL instance (SQL enhancement) | Dedicated SQL instance (Complete accuracy) |
| --- | --- | --- | --- |
| Concurrency | Up to 15 concurrent queries per project. | Up to 100 concurrent queries per project. | Up to 5 concurrent queries per project. |
| Data volume | A single query can scan up to 400 MB of log data (excluding cached data). Data beyond this limit is truncated, and the query results are marked as incomplete. | A single query can scan up to 2 GB of log data (excluding cached data). Data beyond this limit is truncated, and the query results are marked as incomplete. | Unlimited. |
| Method to enable | Enabled by default as part of the log analysis feature. | Manually enabled by turning on the Dedicated SQL switch. | Manually enabled by turning on the Dedicated SQL switch. |
| Fee | Free of charge. | Charged based on the actual CPU time consumed. | Charged based on the actual CPU time consumed. |
| Data effectiveness mechanism | Only data written to SLS after the log analysis feature is enabled can be analyzed. To analyze historical data, reindex it first. | Same as Standard instance. | Same as Standard instance. |
| Returned results | By default, analysis returns up to 100 rows and at most 100 MB of data. Results larger than 100 MB cause an error. To return more rows, use a LIMIT clause. | Same as Standard instance. | Same as Standard instance. |
| Maximum field length | The default maximum length of a single field value is 2,048 bytes (2 KB) and can be raised to 16,384 bytes (16 KB). Data beyond this limit is excluded from query and analysis. Note: To change this limit, adjust Maximum Field Length; the change applies only to newly written data. For more information, see Create indexes. | Same as Standard instance. | Same as Standard instance. |
| Timeout period | The maximum timeout period for an analysis operation is 55 seconds. | Same as Standard instance. | Same as Standard instance. |
| Precision of double-type field values | Double-type field values carry a 52-bit mantissa. Values that exceed this precision are subject to floating-point rounding. | Same as Standard instance. | Same as Standard instance. |
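The double-precision limit in the last row can be demonstrated directly: an IEEE 754 double carries a 52-bit mantissa, so integers larger than 2^53 are no longer all exactly representable and adjacent values collapse. A quick illustration in Python:

```python
# The 52-bit mantissa of an IEEE 754 double means integers above 2**53
# cannot all be represented exactly, so large numeric field values may
# lose precision during analysis.
exact = 2**53        # 9007199254740992: still exactly representable
beyond = 2**53 + 1   # one past the exact range

print(float(exact) == float(beyond))  # True: the two values collapse
print(int(float(beyond)))             # 9007199254740992, not ...993
```

This is a property of the double type itself, not of SLS; any value that needs more than 52 bits of mantissa is rounded the same way in any IEEE 754 system.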
Metricstore
| Limit | Description | Notes |
| --- | --- | --- |
| API list | Only the /query, /query_range, /labels, /label/{label}/values, and /series API operations are supported. | |
| Data specifications | | For other limits, see Metric. |
| Concurrent queries | A single project supports up to 15 concurrent query operations. | For example, 15 users can run query operations at the same time across the Metricstores of a project. |
| Data read volume | A single shard can read up to 2 million time series, 2 million data points, or 200 MB of data at a time. If any of these limits is reached during the read, the read stops. | When a limit is reached, the status is recorded as an incomplete read and returned to the query side. To read larger volumes of data, split shards. |
| Data volume for computing | Before a PromQL calculation runs, the volume of raw data on a single node is checked. A calculation can cover up to 200 million time series, 200 million data points, or 2 GB of data. If any of these limits is exceeded, a calculation error is returned. | If your business requires aggregating large data volumes in a single execution, enable the concurrent computing feature. For more information, see Concurrent computing. |
| Data points for computing | During a calculation, the PromQL engine performs a point selection operation. If more than 50 million data points are selected for the calculation, an error is reported. | This matches the calculation limits of open source Prometheus. For aggregation queries, you can use the concurrent computing feature. |
| Query queue length | A request sent to the server first enters a queue to wait for execution. If more than 200 tasks are waiting in the queue, subsequent requests are discarded. | If a burst of requests with a high queries-per-second (QPS) rate arrives in a short period, some requests are denied. |
| Query results (PromQL) | Following the open source protocol, the /query_range API operation returns up to 11,000 data points per time series. If the query parameters satisfy (end - start) / step > 11000, an error is reported. | For queries over a long time range, increase the step parameter as needed. |
| Query results (SQL) | A single SQL query or calculation returns up to 100 rows of data by default. If you add a LIMIT ALL clause to the SQL statement, up to 1 million rows are returned. | One million rows of data represent one million data points. For information about the query syntax, see Syntax for time series data query and analysis. |
| Nesting PromQL subqueries in SQL | A PromQL statement is limited to 3,000 characters. | For information about the query syntax, see Syntax of query and analysis on metric data. |
| Remote Read API | The Remote Read API can return up to 1 GB of data per request and supports a maximum query time span of 30 days. Note: The Remote Read API pulls all raw data, so calling it consumes a large amount of memory in the Metricstore; concurrency is limited to 10 requests. We recommend that you do not use this API in production environments. Use the Metricstore query API instead, or obtain raw data through data transformation, data shipping, or data consumption and export. | The maximum query time span cannot be adjusted. For the open source documentation, see Prometheus Remote Read. Note: When you request data through the Remote Read API, set the lookback-delta of your local Prometheus to 3 minutes to match the default lookback-delta of the Metricstore. Otherwise, query results may be incomplete. |
| lookback-delta | In an SLS Metricstore, this parameter is set to 3 minutes by default. | lookback-delta is a parameter specific to PromQL queries. For more information, see lookback-delta. The PromQL API supports custom settings, with a maximum value of 3 days. For information about how to configure custom settings, see Metric query APIs. |
| Timeout | The default timeout period is 1 minute for PromQL API queries and 55 seconds for SQL queries. | The PromQL API supports custom settings. For information about how to configure custom settings, see Metric query APIs. |
| Limits on Meta APIs | To ensure query performance, Meta APIs query at most 5 minutes of data. This limit applies only to the /labels, /label/{label}/values, and /series API operations. | The 5-minute window extends back from the end parameter, that is, [end - 5min, end]. The PromQL API supports custom start and end times for Meta APIs. For information about how to configure the settings, see Query Series API. Note: By default, Meta APIs query all data. Set a precise match parameter to narrow the query and significantly improve performance. For more information, see Query Series API. |
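The 11,000-point limit on /query_range translates into a minimum step for any given time range: the step must satisfy (end - start) / step <= 11000. A small sketch of the arithmetic (the helper function is illustrative; only the limit value comes from the table above):

```python
import math

MAX_POINTS = 11_000  # /query_range per-series limit from the table above

def min_step_seconds(start: int, end: int) -> int:
    """Smallest integer step (in seconds) that keeps
    (end - start) / step within the 11,000-point limit."""
    return max(1, math.ceil((end - start) / MAX_POINTS))

# A 7-day window (604,800 seconds) needs a step of at least 55 seconds:
print(min_step_seconds(0, 7 * 24 * 3600))  # 55
```

Choosing a step at or above this minimum avoids the "(end - start)/step > 11000" error for long-range queries.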