
Overview

Last Updated: Jan 19, 2024

Simple Log Service allows you to use Function Compute to transform streaming data. You can configure an extract-transform-load (ETL) job that detects data updates in a Logstore and invokes functions to consume and transform the incremental data.

Scenarios

You can use Simple Log Service triggers to integrate Function Compute and Simple Log Service in the following scenarios:

  • Data cleansing and processing

    Simple Log Service allows you to quickly collect, process, query, and analyze logs.

  • Data shipping

    Simple Log Service allows you to ship data to specified destinations and build data pipelines between big data services on the cloud.

    1. Field preprocessing and shipping

    2. Column store creation and shipping

    3. Custom processing and result storage

ETL functions

  • Function types

  • Trigger mechanisms

    An ETL job corresponds to a Simple Log Service trigger and is used to invoke a function. After you create an ETL job for a Logstore in Simple Log Service, a timer is started that polls data from the shards of the Logstore based on the job configurations. When data is written to the Logstore, a data triple in the <shard_id, begin_cursor, end_cursor> format is generated as a function event, and the associated ETL function is invoked.

    Note

    Even if no new data is written to the Logstore, the cursor information can change when the storage system is updated. In this case, the ETL function is invoked for each shard, but when the function uses the cursor information to obtain data from the shards, no data is returned and nothing is transformed. You can ignore these invocations. For more information, see Create a custom function.

    An ETL job invokes functions based on a timer mechanism. For example, suppose you set the invocation interval of an ETL job to 60 seconds for a Logstore. If data is continuously written to Shard 0, the ETL function is invoked every 60 seconds to transform the data in the cursor range of the last 60 seconds.

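The trigger mechanism described above can be sketched as a minimal Function Compute handler in Python. This is an illustrative sketch only: the event field names used here (source, shardId, beginCursor, endCursor) are assumptions that may differ from the actual trigger payload, and the step that pulls and transforms logs from the shard is omitted.

```python
import json


def handler(event, context):
    """Entry point invoked by Function Compute for an ETL job.

    The event carries the <shard_id, begin_cursor, end_cursor> triple that
    identifies the incremental data to consume. The field names below are
    assumptions for illustration, not the definitive event schema.
    """
    evt = json.loads(event)
    source = evt["source"]
    shard_id = source["shardId"]
    begin_cursor = source["beginCursor"]
    end_cursor = source["endCursor"]
    # Here you would use the cursor range to pull logs from the shard,
    # transform them, and write the results to a destination (omitted).
    return {"shard": shard_id, "range": (begin_cursor, end_cursor)}
```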

FAQ

  • What do I do if I create a trigger but no function is called?

    You can troubleshoot the issue by using the following methods:

    • Check whether new data is written to the Logstore for which the Function Compute trigger is configured. The function is invoked only when new data is written to the Logstore.

    • Check the trigger logs and the operational logs of the function for exceptions.

  • Why is the call interval of a function longer than expected?

    A function is called separately for each shard. Even if the total number of invocations across all shards of a Logstore is large, the interval at which the function is called for each individual shard remains consistent with the specified call interval.

    The interval at which the function is called for a shard is the same as the time interval specified for data transformation. However, latency during invocation may cause the actual interval to be longer than expected. The following list describes two scenarios with a specified call interval of 60 seconds.

    • Scenario 1: The function is called, and no latency exists. The function is called at 60-second intervals to transform data generated in the time range [now - 60s, now).

      Note

      A function is separately called for each shard. If a Logstore contains 10 shards and latency does not exist when the function is called, the function is called 10 times at 60-second intervals to transform data in real time.

    • Scenario 2: The function is called, and latency exists. If the time difference between the point in time at which data in a Simple Log Service shard is transformed and the point in time at which the latest data is written to Simple Log Service is greater than 10 seconds, the trigger shortens the call interval. For example, the function can be called at 2-second intervals to transform the data that is generated within 60 seconds.
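The interval selection in the two scenarios above can be modeled with a small helper. This is an illustrative sketch, not the service's actual implementation: the function name is hypothetical, and only the 60-second interval, the 10-second lag threshold, and the 2-second catch-up interval come from the scenarios described in this section.

```python
def next_call_interval(lag_seconds: float,
                       normal_interval: float = 60.0,
                       lag_threshold: float = 10.0,
                       catchup_interval: float = 2.0) -> float:
    """Pick the delay before the next invocation for one shard.

    If data transformation lags more than lag_threshold seconds behind the
    latest data written to the shard, the trigger shortens the interval to
    catch up (Scenario 2); otherwise it keeps the configured interval
    (Scenario 1). Illustrative model only.
    """
    return catchup_interval if lag_seconds > lag_threshold else normal_interval
```

For example, `next_call_interval(5.0)` keeps the 60-second interval, while `next_call_interval(30.0)` switches to the 2-second catch-up interval.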