Dedicated SQL is a paid feature that is provided by Log Service. You can use the Dedicated SQL feature to analyze log data by using SQL statements. Compared with the free Standard SQL feature, the Dedicated SQL feature supports far more concurrent operations and can analyze much larger amounts of data.
Prerequisites
- A Standard Logstore is created. For more information, see Create a Logstore.
- Logs are collected. For more information, see Data collection overview.
- Indexes are configured. For more information, see Configure indexes.
Background information
If you use the Standard SQL feature to analyze a large amount of log data that was generated over a long period of time, Log Service cannot scan all of the data in a single query. To ensure timeliness, Log Service limits the amount of data that is scanned in each shard, which can make the returned results inaccurate. In this case, you can increase the number of shards to obtain more computing resources. However, after you increase the number of shards, only new data that is written to the shards benefits from the additional scanning capacity; historical data cannot be rescanned in parallel. The number of required consumers also increases.
Advantages
- The Dedicated SQL feature can analyze hundreds of billions of data records with high performance.
- The Dedicated SQL feature allows up to 100 concurrent operations in each project. The Standard SQL feature allows only 15 concurrent operations.
- The Dedicated SQL feature is allocated exclusive resources. The performance of the Dedicated SQL feature is not affected by traffic bursts from other users.
Scenarios
- You need to analyze data with high performance. For example, you need to analyze data in real time.
- You need to analyze data that is generated over a long period of time. For example, you need to analyze data that is generated over a month.
- You need to analyze a large amount of data. For example, you need to analyze terabytes of data every day.
- You need to analyze data by using more than 15 concurrent SQL statements and display the analysis results based on multiple metrics from multiple dimensions.
Procedure
- Enable once: When you execute a query statement in a Logstore, click the icon. The Dedicated SQL feature takes effect only for the query statements that you execute in the current Logstore.
- Enable permanently: If you turn on Enable by Default, the Dedicated SQL feature is enabled for the current project and takes effect for all query statements that you execute in the project, including the query statements for alerts and dashboards.
SDK examples
FAQ
- How do I enable the Dedicated SQL feature by calling an API operation?
You can enable the Dedicated SQL feature by calling the GetLogs operation. When you call this operation, you can use the powerSql or query parameter to specify whether to enable the Dedicated SQL feature. For more information, see GetLogs.
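As a sketch, the per-call switch is passed in the request parameters of the GetLogs operation. The `powerSql`, `type`, `from`, `to`, and `query` parameter names follow the GetLogs operation described above; the helper function itself is a hypothetical convenience, not part of any SDK.

```python
def build_getlogs_params(query, from_ts, to_ts, power_sql=True):
    """Assemble query-string parameters for a GetLogs API call.

    `powerSql` is the documented switch for the Dedicated SQL feature;
    the other parameter names follow the GetLogs operation. This helper
    is a hypothetical convenience for illustration only.
    """
    return {
        "type": "log",                 # GetLogs requires type=log
        "from": from_ts,               # start of the time range (Unix time)
        "to": to_ts,                   # end of the time range (Unix time)
        "query": query,                # search and analytic statement
        "powerSql": "true" if power_sql else "false",  # enable Dedicated SQL
    }

# Example: run an analytic statement with Dedicated SQL enabled for this call.
params = build_getlogs_params("* | select avg(double_0) from stress_s1_mil1",
                              1672531200, 1672617600)
```

If you omit `powerSql` or set it to `false`, the call falls back to the Standard SQL feature.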
- How do I obtain the amount of CPU time that I use?
After you perform analysis and query operations, you can obtain the amount of CPU time that you use in the Log Service console. The following figure shows an example.
- Can I control the cost of the Dedicated SQL feature?
Yes. You can modify the number of compute units (CUs) to control the cost of the Dedicated SQL feature. To modify the number of CUs, go to the Project Overview page of your project and change the value of the CUs of SQL-dedicated Instance parameter.
- What are the fees for the Dedicated SQL feature each time I execute a query statement?
The fees for the Dedicated SQL feature vary based on the amount of data on which you execute query statements. The following table provides examples.
| Query statement | Amount of data (rows) | Average cost per execution (USD) |
| --- | --- | --- |
| `* \| select avg(double_0) from stress_s1_mil1` | 4 billion | 0.004435 |
| `* \| select avg(double_0), sum(double_0), max(double_0), min(double_0), count(double_0) from stress_s1_mil1` | 4 billion | 0.006504 |
| `* \| select avg(double_0), sum(double_1), max(double_2), min(double_3), count(double_4) from stress_s1_mil1` | 4 billion | 0.013600 |
| `* \| select key_0, avg(double_0) as pv from stress_s1_mil1 group by key_0 order by pv desc limit 1000` | 4 billion | 0.011826 |
| `* \| select long_0, avg(double_0) as pv from stress_s1_mil1 group by long_0 order by pv desc limit 1000` | 4 billion | 0.011087 |
| `* \| select long_0, long_1, avg(double_0) as pv from stress_s1_mil1 group by long_0, long_1 order by pv desc limit 1000` | 0.3 billion | 0.010791 |
| `* \| select avg(double_0) from stress_s1_mil1 where key_0='key_987'` | 4 billion | 0.00007 |
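The per-execution fees above are driven by the CPU time that a query consumes (see the CPU time FAQ above). A minimal sketch of the arithmetic, assuming a hypothetical unit price per CPU core-hour; substitute the real rate from the pricing page for your region:

```python
def estimate_dedicated_sql_fee(cpu_core_seconds, price_per_core_hour):
    """Estimate the fee of one query execution from its CPU time.

    `price_per_core_hour` is a hypothetical unit price in USD; look up
    the actual Dedicated SQL rate for your region before relying on this.
    """
    # Convert CPU core-seconds to core-hours, then multiply by the unit price.
    return cpu_core_seconds / 3600.0 * price_per_core_hour

# Example: a query that consumed 90 CPU core-seconds at a hypothetical
# rate of 0.20 USD per core-hour costs 90 / 3600 * 0.20 = 0.005 USD.
fee = estimate_dedicated_sql_fee(90, 0.20)
```

The console shows the CPU time consumed per query, so you can plug that value in directly to project costs before changing the CU setting.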