
Simple Log Service:Process and store data from a Logstore to a Metricstore

Last Updated: Aug 25, 2023

Simple Log Service provides the Scheduled SQL feature. You can use the feature to analyze data at a scheduled time and aggregate data for storage. You can also use the feature to project and filter data. The Scheduled SQL feature can process data in a source Logstore and store the processed data to a destination Metricstore.

Prerequisites

Important

The Logstores that are described in this topic are Standard Logstores. For more information, see Manage a Logstore.

Procedure

Note

The Scheduled SQL feature is in public preview. If you enable the feature, you are charged only for the computing resources that are consumed by Dedicated SQL. For more information, see Billable items of pay-by-feature.

  1. Log on to the Log Service console.

  2. In the Projects section, click the project that you want to manage.

  3. On the Log Storage > Logstores tab, click the Logstore that you want to manage.

  4. Enter a query statement in the search box, select or specify a time range, and then click Search & Analyze.

    A query statement consists of a search statement and an analytic statement in the Search statement|Analytic statement format. For more information, see Search syntax and Log analysis overview.
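
    For example, the following query statement is a minimal sketch in the Search statement|Analytic statement format. The search statement * matches all logs, and the analytic statement counts the matched logs per minute. The column aliases t and pv are illustrative names, not fields that must exist in your Logstore.

      * | SELECT __time__ - __time__ % 60 AS t, count(*) AS pv GROUP BY t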

    Note

    This step allows you to preview data before you create a Scheduled SQL job. You can check whether the query statement that you entered is valid and whether the query results contain data.

  5. On the Graph tab, click Save as Scheduled SQL Job.

  6. Create a Scheduled SQL job.

    1. In the Compute Settings step, configure the parameters and click Next.

      Parameter

      Description

      Job Name

      The name of the Scheduled SQL job.

      Task Description

      The description of the Scheduled SQL job.

      Resource Pool

      The resource pool that is used for data analysis. Simple Log Service provides an enhanced type of resource pool.

      The enhanced type of resource pool reuses the computing capability of Dedicated SQL. The enhanced type of resource pool can meet concurrent analysis requirements and isolate resources between Scheduled SQL and your SQL analysis operations in the console. You are charged for the enhanced type of resource pool based on the CPU time that is consumed by your SQL analysis operations. For more information, see Enable Dedicated SQL.

      Write Mode

      Select Import Data from Logstore to Metricstore. The Scheduled SQL feature processes data in the source Logstore and writes the processed data to the destination Metricstore.

      SQL Code

      The query statement. By default, the system displays the statement that you entered in Step 4. The preview operation provided for this parameter has the same effect as the preview operation in Step 4. You can click Preview to check whether the query statement is valid and whether the query results contain data. For a sample query statement and how its columns map to the following parameters, see the example at the end of this step.

      When the Scheduled SQL job runs, Simple Log Service executes this query statement to analyze data.

      SQL Settings

      Metric Column

      The metric columns. Simple Log Service aggregates data based on the query statement that you entered. You can select one or more columns of the numeric data type in the query results for this parameter. For more information, see Metric.

      Labels

      The label data. Simple Log Service aggregates data based on the query statement that you entered. You can select one or more columns in the query results for this parameter. For more information, see Metric.

      Rehash

      The switch for hashing. If you turn on Rehash, you can configure the Hash Column parameter to write data that has the same value in a column to the same shard. This improves data locality and query efficiency.

      Valid values of the Hash Column parameter vary based on the query results. You can select one or more columns in the query results as hash columns. For example, if you set Hash Column to status, data that has the same value in the status column is written to the same shard.

      Time Column

      • If you select a column whose values are UNIX timestamps from the query results for this parameter, the system uses the values of that column as the time of the metrics. Example: atime:1627025331.

      • If you set the value to Null, the system uses the start time of the query statement to indicate the time of metrics.

      For more information, see Metric.

      Additional Tags

      The static tags that are used to identify the attributes of a metric. Each tag is in the key-value pair format.

      For example, you can set label_key to app and label_value to ingress-nginx.

      Target

      Target Region

      The region where the destination project resides.

      Target Project

      The name of the destination project, which stores the results of the query statement.

      Target Logstore

      The name of the destination Metricstore, which stores the results of the query statement.

      Write Authorization

      The method that is used to authorize the Scheduled SQL job to write data to the destination Metricstore. Valid values:

      • Default Role: The Scheduled SQL job assumes the AliyunLogETLRole system role to write the analysis results to the destination Metricstore.

        Important

        Authorization is required only the first time that you create a Scheduled SQL job, and must be completed by using the Alibaba Cloud account to which the destination project belongs.

      • Custom Role: The Scheduled SQL job assumes a custom role to write the analysis results to the destination Metricstore.

        You must grant the custom role the permissions to write data to the destination Metricstore. Then, enter the Alibaba Cloud Resource Name (ARN) of the custom role in the Role ARN field.

      SQL Execution Authorization

      The method that is used to authorize the Scheduled SQL job to read data from the source Logstore and analyze the data by using query statements in the current project. Valid values:

      • Default Role: The Scheduled SQL job assumes the AliyunLogETLRole system role to perform the required operations.

        Important

        Authorization is required only the first time that you create a Scheduled SQL job, and must be completed by using the Alibaba Cloud account to which the destination project belongs.

      • Custom Role: The Scheduled SQL job assumes a custom role to perform the required operations.

        You must grant the custom role the required permissions. Then, enter the ARN of the custom role in the Role ARN field. For more information, see Step 1: Grant the RAM role the permissions to analyze log data in a source Logstore.
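
      For example, assume that the source Logstore stores access logs that contain a status field and a request_method field. The following query statement is a minimal sketch that computes a per-minute page view count and returns the minute as a UNIX timestamp column. The field names are assumptions; replace them with the columns that your own query returns.

        * | SELECT __time__ - __time__ % 60 AS atime, status, request_method, count(*) AS pv GROUP BY atime, status, request_method

      Based on this sketch, you can set Metric Column to pv, Labels to status and request_method, Hash Column to status, and Time Column to atime. The job then writes the pv values to the destination Metricstore as time series whose labels include status, request_method, and any Additional Tags that you configure, such as app:ingress-nginx.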

    2. In the Scheduling Settings step, configure the following parameters and click OK.

      Parameter

      Description

      Specify Scheduling Interval

      The frequency at which the Scheduled SQL job is scheduled. An instance is generated each time the Scheduled SQL job is scheduled. This parameter determines the scheduled time for each instance. Valid values:

      • Hourly: The Scheduled SQL job is scheduled every hour.
      • Daily: The Scheduled SQL job is scheduled at a fixed time every day.
      • Weekly: The Scheduled SQL job is scheduled at a fixed time on a fixed day of each week.
      • Fixed Interval: The Scheduled SQL job is scheduled at a fixed interval.
      • Cron: The Scheduled SQL job is scheduled at an interval that is specified by a cron expression.

        A cron expression specifies an interval that is accurate to the minute and is based on the 24-hour clock. For example, the expression 0 0/1 * * * indicates that the Scheduled SQL job is scheduled every hour from 00:00.

        If you need to specify a time zone, select Cron. For a list of common time zones, see Time zones.

      Scheduling Time Range

      The time range during which the Scheduled SQL job is scheduled. Valid values:

      • Start at a specified time: specifies the time at which the Scheduled SQL job is first scheduled.
      • Within Specific Period: specifies the time range within which the Scheduled SQL job is scheduled.
      Important

      If you specify the time range, the instances of the Scheduled SQL job can run only within the time range. After the end time, the Scheduled SQL job no longer generates instances.

      SQL Time Window

      The time window of logs that are analyzed when the Scheduled SQL job runs. This parameter must be configured together with the Specify Scheduling Interval parameter. The duration of the time window can be up to five times the scheduling interval, and the interval between the start time and the end time of the window cannot exceed 24 hours. For more information, see Time expression syntax.

      For example, Specify Scheduling Interval is set to Fixed Interval 10 Minutes, Start Time is set to 2021-04-01 00:00:00, Delay Task is set to 30 Seconds, and SQL Time Window is set to [@m-10m,@m). In this example, the first instance of the Scheduled SQL job is generated at 00:00:30 to analyze the logs that fall in the time range [23:50:00, 00:00:00). For more information, see Scheduling and running scenarios. Another sample scheduling configuration is provided at the end of this step.

      SQL Timeout

      The thresholds for automatic retries if the SQL analysis operation fails. If an instance is retried for a period that exceeds the maximum time that you specify or the number of retries exceeds the upper limit that you specify, the instance stops retrying and enters the FAILED state. You can then manually retry the instance based on the failure cause. For more information, see Retry a scheduled SQL instance.

      Delay Task

      The number of seconds for which the instance is delayed from the scheduled time. Valid values: 0 to 120. Unit: seconds.

      If latency exists when data is written to the source Logstore, you can use this parameter to ensure data integrity.
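
      For example, the following configuration is a minimal sketch for a Scheduled SQL job that runs every hour and analyzes the logs of the previous full hour. It follows the [@m-10m,@m) pattern that is described above, and the 30-second delay is an assumed value that you can adjust to the write latency of your source Logstore.

        Specify Scheduling Interval: Hourly (or the cron expression 0 0/1 * * *)
        Delay Task: 30 Seconds
        SQL Time Window: [@m-60m,@m)

      In this sketch, @m represents the scheduled time truncated to the minute, so each instance analyzes the logs of the hour that ends at its scheduled time.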

Sample SDKs

Use Log Service SDK for Java to create a Scheduled SQL task