This topic describes the limits of the scheduled SQL feature.
Special jobs
Some Simple Log Service applications, such as Trace and CloudLens for SLB, depend on the Scheduled SQL feature. To ensure that these applications work as expected, Scheduled SQL does not allow changes to the jobs that these applications generate. These jobs are called special jobs. You cannot update, copy, or delete a special job on the Scheduled SQL page. If you want to update, copy, or delete a special job, perform the operation in the related application.
Query and analysis
Scheduled SQL supports only Dedicated SQL.
| Item | Description |
| --- | --- |
| Number of concurrent analytic statements | Each project supports up to 150 concurrent analytic statements. For example, 150 users can concurrently execute analytic statements across all Logstores of a project. |
| Data volume | An analytic statement can scan up to 200 billion rows of data. |
| Applicable scope | You can analyze only the data that is written to Simple Log Service after the log analysis feature is enabled. If you want to analyze historical data, you must reindex the historical data. For more information, see Reindex logs for a Logstore. |
| Return result | |
| Size of a field value | By default, the size of a field value is 2,048 bytes (2 KB). The maximum size of a field value is 16,384 bytes (16 KB). If the size of a field value exceeds the maximum, the excess data is not analyzed. You can modify the maximum size of a field value when you configure indexes. Valid values: 64 to 16384. Unit: bytes. For more information, see Create indexes. |
| Timeout period | The maximum timeout period for an analytic statement is 10 minutes. |
| Number of bits in the mantissa of a double-type field value | A double-type field value can contain up to 52 bits in the mantissa. If the mantissa of a double-type field value requires more than 52 bits, the precision of the field value is compromised. |
| Fuzzy search | In a fuzzy search, Simple Log Service matches up to 100 words that meet the specified conditions and returns the logs that contain one or more of these words and also meet the query conditions. |
| Inaccurate query results | If query results are inaccurate, no error is reported. However, the issue is recorded in the instance status information and included in job running records. The recording feature must be manually enabled. |
| Data latency | If data latency occurs, some data may be missed in a query. If the data for a point in time arrives after the instance for that point in time has already run, the data is not included when the next instance runs. For more information, see How do I ensure data accuracy when I execute SQL statements to analyze data? |
| Time window | The time window for a single query ranges from 1 minute to 24 hours. |
| Metastore association | Not supported. |
| LIMIT clause | Scheduled SQL supports only |
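The 52-bit mantissa limit in the table above is standard IEEE 754 double-precision behavior, which the following Python snippet (an illustration, not Simple Log Service code) demonstrates: because Python floats are IEEE 754 doubles, integers beyond 2**53 can no longer be represented exactly.

```python
# IEEE 754 doubles have a 52-bit mantissa; with the implicit leading bit,
# every integer up to 2**53 is exactly representable, but 2**53 + 1 is not.
exact_limit = 2 ** 53                                 # 9007199254740992

print(float(exact_limit) == exact_limit)              # True: still exact
print(float(exact_limit + 1) == float(exact_limit))   # True: 2**53 + 1 rounds back to 2**53
```

The same rounding applies to double-type field values in analysis results, which is why large integer IDs are often better stored as text.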
Data write
| Item | Description |
| --- | --- |
| Write threshold of a Logstore | If the write threshold of a Logstore is exceeded when data is written, the Scheduled SQL job retries the write for up to 10 minutes. If the write still fails after the retry period, an error message is returned. For more information, see Data read and write. |
| Cross-region data transmission | When data is transmitted across regions inside China, the network is stable but latency is high, and the latency varies by region. When data is transmitted across regions outside China, network quality cannot be ensured. |
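The write-threshold behavior can be pictured as a client-side retry loop with a 10-minute deadline. The sketch below is hypothetical: `write_rows` and `ThrottledError` stand in for whatever write call and throttling error the actual service uses internally.

```python
import time

RETRY_WINDOW_SECONDS = 10 * 60  # Scheduled SQL retries throttled writes for up to 10 minutes


class ThrottledError(Exception):
    """Hypothetical error raised when the Logstore write threshold is exceeded."""


def write_with_retry(write_rows, rows, clock=time.monotonic, sleep=time.sleep):
    """Retry a throttled write until it succeeds or the 10-minute window elapses."""
    deadline = clock() + RETRY_WINDOW_SECONDS
    while True:
        try:
            return write_rows(rows)
        except ThrottledError:
            if clock() >= deadline:
                raise  # mirror Scheduled SQL: report an error after the retry window
            sleep(1)   # fixed backoff for simplicity; a real client would back off progressively
```

Injecting `clock` and `sleep` keeps the sketch testable without waiting real minutes; the observable behavior matches the documented limit: succeed within the window, or surface the error after it.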
Job running
| Item | Description |
| --- | --- |
| Timeout period | The maximum timeout period of a job is 1,800 seconds. If a job exceeds this period, the job is considered failed. We recommend that you create an alert monitoring task to detect errors and retry failed instances in a timely manner. For more information, see Configure alerts for a Scheduled SQL job and Retry a scheduled SQL instance. |
| Number of retries | A job can be retried up to 100 times. If a job is retried more than 100 times, the job is considered failed. |
| Delayed running | You can delay the running of an instance for up to 120 seconds. For more information about delayed running scenarios, see Scheduling and running scenarios. |
| Historical running records | The historical running records of a single job are stored for up to 5 days. We recommend that you create an alert monitoring task to detect errors and retry failed instances in a timely manner. For more information, see Configure alerts for a Scheduled SQL job and Retry a scheduled SQL instance. |
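The timeout and retry limits above combine into a simple failure rule, which the following hypothetical helper (not part of any Simple Log Service SDK) makes explicit:

```python
MAX_RUN_SECONDS = 1800  # a job instance that runs longer than this is considered failed
MAX_RETRIES = 100       # a job retried more than this many times is considered failed


def is_failed(run_seconds: float, retries: int) -> bool:
    """Return True if either documented job-running limit has been exceeded."""
    return run_seconds > MAX_RUN_SECONDS or retries > MAX_RETRIES
```

Note that both limits are inclusive: an instance that runs for exactly 1,800 seconds or is retried exactly 100 times is still within bounds.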