
Function Compute:Simple Log Service triggers

Last Updated:Dec 16, 2025

You can create a Simple Log Service trigger to connect Simple Log Service to Function Compute. The Simple Log Service trigger automatically triggers a function to process incremental logs in a Logstore based on your business requirements.

Use cases

  • Data cleansing and transformation

    Simple Log Service allows you to quickly collect, process, query, and analyze logs.

  • Data shipping

    Simple Log Service allows you to ship data to its destination and build data pipelines between big data services on the cloud.


Functions for data processing

Trigger mechanism

An extract, transform, and load (ETL) job corresponds to a Simple Log Service trigger and is used to invoke a function. After you create an ETL job for a Logstore in Simple Log Service, a timer is started to poll data from the shards of the Logstore based on the job configuration. If data is written to the Logstore, a data triple in the <shard_id, begin_cursor, end_cursor> format is generated as a function event, and Simple Log Service pushes the event to Function Compute to invoke the associated ETL function.

Note

If no data is written to the Logstore but the underlying storage system is updated, the cursor information still changes. In this case, the ETL function is invoked for each shard, but when the function uses the cursor information to pull data from the shards, no data is returned and no data is transformed. You can ignore such invocations. For more information, see Create a custom function.

An ETL job invokes a function on a schedule. For example, if the trigger interval is 60 seconds and data is continuously written to Shard0 of a Logstore, the function is triggered for that shard every 60 seconds. If no more data is written to the shard, the function is not triggered. Each invocation receives the start and end cursors that cover the most recent 60 seconds, and the function processes the data in that cursor range.
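The per-shard timing described above can be sketched as a single polling step. This is a conceptual model of the trigger, not service code; next_event is an illustrative helper name:

```python
def next_event(shard_id, begin_cursor, end_cursor):
    """Build the function event for one polling cycle.

    Returns None when the cursor did not move (no new data in the shard),
    which mirrors the case where the function is not triggered.
    """
    if end_cursor == begin_cursor:
        return None
    # The triple becomes the event that Simple Log Service pushes
    # to Function Compute.
    return {
        "shardId": shard_id,
        "beginCursor": begin_cursor,
        "endCursor": end_cursor,
    }

# One 60-second cycle in which new data arrived in Shard0
# (cursor strings here are made-up placeholders):
event = next_event(0, "CURSOR_A", "CURSOR_B")
```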


Limits

The maximum number of Simple Log Service triggers that you can create in a project is five times the number of Logstores in the project.

Note

We recommend that you configure no more than five Simple Log Service triggers for each Logstore. Otherwise, data may not be efficiently shipped to Function Compute.

Sample scenario

You can configure a Simple Log Service trigger to periodically obtain updated data and invoke a function. Simple Log Service triggers are suitable for scenarios in which you want to incrementally consume data from a Logstore. You can invoke functions to perform custom processing tasks, such as data cleansing tasks and data processing tasks, and ship data to a third-party service. This example only shows how to obtain and display log data.

Note

The function that is used to process data can be a template function that is provided by Simple Log Service or a custom function.

Prerequisites

  • Function Compute

    • A function is created.

  • Simple Log Service

    • A Simple Log Service project and two Logstores are created. For more information, see Resource management overview.

      One Logstore stores the logs to be consumed by Function Compute. The trigger fires based on incremental logs in this Logstore, so make sure that logs are continuously collected into it. The other Logstore stores the logs that are generated when the trigger invokes the function.

Important

The Simple Log Service project must reside in the same region as the function in Function Compute.

Input parameters

  • event

    When a Simple Log Service trigger fires, the event data is passed to the runtime. The runtime converts the event into a JSON object and passes the object to the input parameter event of the function. Sample event:

    {
        "parameter": {},
        "source": {
            "endpoint": "http://cn-hangzhou-intranet.log.aliyuncs.com",
            "projectName": "fc-test-project",
            "logstoreName": "fc-test-logstore",
            "shardId": 0,
            "beginCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2Mw==",
            "endCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2NA=="
        },
        "jobName": "1f7043ced683de1a4e3d8d70b5a412843d81****",
        "taskId": "c2691505-38da-4d1b-998a-f1d4bb8c****",
        "cursorTime": 1529486425
    }                       

    The following list describes the fields of the event.

    • parameter: the value of the Invocation Parameters parameter that you configure when you create the trigger.

    • source: the log block that the function reads from Simple Log Service.

      • endpoint: the endpoint of the Alibaba Cloud region in which the Simple Log Service project resides.

      • projectName: the name of the Simple Log Service project.

      • logstoreName: the name of the Logstore that you want Function Compute to consume. The trigger subscribes to data in the Logstore and sends the data to Function Compute at regular intervals for custom processing.

      • shardId: the ID of a specific shard in the Logstore.

      • beginCursor: the offset from which data consumption starts.

      • endCursor: the offset at which data consumption ends.

      Note

      You can call the GetCursor operation to obtain beginCursor and endCursor. Then, you can construct an event in the preceding format to debug the function.

    • jobName: the name of the ETL job in Simple Log Service. Each Simple Log Service trigger corresponds to an ETL job. This field is automatically generated by Function Compute and does not need to be configured.

    • taskId: the identifier of a deterministic function invocation within the ETL job. This field is automatically generated by Function Compute and does not need to be configured.

    • cursorTime: the UNIX timestamp at which the last log arrived at Simple Log Service. Unit: seconds.

  • context

    When Function Compute runs a function, the system passes a context object to the function input parameter context. The object contains the information about the invocation, service, function, and execution environment.

    In this example, the context.credentials object is used to obtain temporary credentials. For more information about the fields that the context object supports, see Context.
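As the note in the event description suggests, you can call the GetCursor operation to obtain beginCursor and endCursor and assemble a debug event yourself. The following sketch uses aliyun-log-python-sdk; build_test_event and all placeholder values are illustrative, not part of any SDK:

```python
import json

def build_test_event(client, endpoint, project, logstore, shard_id):
    """Assemble an event in the trigger's format for local debugging.

    `client` is expected to behave like aliyun.log.LogClient: its
    get_cursor(project, logstore, shard_id, start_time) call returns a
    response whose get_cursor() method yields the cursor string.
    """
    begin = client.get_cursor(project, logstore, shard_id, "begin").get_cursor()
    end = client.get_cursor(project, logstore, shard_id, "end").get_cursor()
    return json.dumps({
        "parameter": {},
        "source": {
            "endpoint": endpoint,
            "projectName": project,
            "logstoreName": logstore,
            "shardId": shard_id,
            "beginCursor": begin,
            "endCursor": end,
        },
        # jobName and taskId are generated by the real trigger; any
        # placeholder works for local debugging.
        "jobName": "local-debug",
        "taskId": "local-debug",
        "cursorTime": 0,
    })

# Usage with the real SDK (requires network access and valid credentials):
# from aliyun.log import LogClient
# client = LogClient("http://cn-hangzhou.log.aliyuncs.com", "<ak-id>", "<ak-secret>")
# event = build_test_event(client, "http://cn-hangzhou.log.aliyuncs.com",
#                          "fc-test-project", "fc-test-logstore", 0)
```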

Step 1: Create a Simple Log Service trigger

  1. Log on to the Function Compute console. In the left-side navigation pane, click Functions.

  2. In the top navigation bar, select a region. On the Functions page, click the function that you want to manage.

  3. On the Function Details page, click the Configurations tab. In the left-side navigation pane, click Triggers. Then, click Create Trigger.

  4. In the Create Trigger panel, configure parameters and click OK.

    • Trigger Type: select Simple Log Service. Example: Simple Log Service.

    • Name: the name of the trigger. If you leave this parameter empty, Function Compute automatically generates a name. Example: log_trigger.

    • Version or Alias: the version or alias for which the trigger is created. Default value: LATEST. If you want to create a trigger for another version or alias, select the version or alias in the upper-right corner of the function details page. For more information about versions and aliases, see Manage versions and Manage aliases. Example: LATEST.

    • Simple Log Service Project: the project whose data you want to consume. Example: aliyun-fc-cn-hangzhou-2238f0df-a742-524f-9f90-976ba457****.

    • Logstore: the Logstore from which data is consumed. The trigger subscribes to data in this Logstore and sends the data to Function Compute at regular intervals for custom processing. Example: function-log.

    • Trigger Interval: the interval at which Simple Log Service invokes the function. Valid values: 3 to 600. Unit: seconds. Default value: 60. Example: 60.

    • Retries: the maximum number of retries allowed for each invocation. Valid values: 0 to 100. Default value: 3. Example: 3.

      Note

      • An invocation succeeds only if the function returns status=200 and the X-Fc-Error-Type response header is neither UnhandledInvocationError nor HandledInvocationError. In all other cases, the invocation fails. For more information about X-Fc-Error-Type, see Response parameters.

      • If an invocation fails, the system retries it up to the specified number of times. If the invocation still fails after that, the system continues to retry with exponentially increasing intervals.

    • Trigger Log: the Logstore in which to store the logs that are generated when Simple Log Service invokes the function. Example: function-log2.

    • Invocation Parameters: custom parameters that are passed to the function as the value of the parameter field of the event. The value must be a JSON string. By default, this parameter is empty. Example: none.

    • Role Name: select AliyunLogETLRole. Example: AliyunLogETLRole.

      Note

      After you configure the preceding parameters, click OK. If this is the first time that you create a trigger of this type, click Authorize Now in the dialog box that appears.

    After the trigger is created, it is displayed on the Triggers tab. To modify or delete a trigger, see Trigger management.
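The value that you specify for Invocation Parameters reaches the function unchanged as the parameter field of the event. A minimal sketch (the parameter content below is made up for illustration):

```python
import json

# Suppose Invocation Parameters is set to the JSON string
# {"target": "oss", "batch": 100} in the console. The raw event that the
# function receives then contains that object under "parameter".
raw_event = '{"parameter": {"target": "oss", "batch": 100}, "source": {}}'

params = json.loads(raw_event)["parameter"]
assert params["target"] == "oss"   # custom settings are available at run time
assert params["batch"] == 100
```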

Step 2: Configure the permissions

  1. On the Function Details page of your function, click the Configurations tab. In the left-side navigation pane, click Permissions and then click Modify. In the Permissions panel, specify Function Role.

    • You can use the default role AliyunFCServerlessDevsRole. By default, this role has the read-only permissions on Simple Log Service.

    • You can also use a custom RAM role. The custom RAM role must meet the following two requirements:

      1. When you create a role in the Resource Access Management (RAM) console, you must select Alibaba Cloud Service and select Function Compute from the Select Trusted Service drop-down list. For more information, see Create a RAM role for a trusted Alibaba Cloud service.

      2. You must grant the required permissions on Simple Log Service to the RAM role based on the specific requirements of the function. For more information, see Examples of using custom policies to grant permissions to a RAM user.

  2. Click Deploy.
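For the second requirement, a minimal custom policy might look like the following sketch. The action list is an assumption based on the API calls that the sample function in this topic makes (the GetCursorOrData action appears in the permission error described in the FAQ); scope the resource to your own project and Logstore:

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "log:GetCursorOrData"
      ],
      "Resource": "acs:log:*:*:project/<your-project>/logstore/<your-logstore>"
    }
  ]
}
```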

Step 3: Deploy the function and view the logs

  1. On the Code tab of the function details page, write code in the code editor and click Deploy.

    In this example, a Python function is deployed to implement the following features:

    • Obtain information about Simple Log Service event triggers, such as endpoint, projectName, logstoreName, and beginCursor from the event parameter.

    • Obtain authorization information, such as accessKeyId, accessKey, and securityToken from the context parameter.

    • Initialize the Simple Log Service client based on the obtained information.

    • Obtain the log data based on the specified cursors from the source Logstore.

    Note

    The following sample code shows the main log processing logic.

    """
    The sample code is used to implement the following features:
    * Parse the event parameter to obtain the trigger information of Simple Log Service events.
    * Initialize the Simple Log Service client based on the preceding information.
    * Obtain real-time logs from the source Logstore.
    
    
    This sample code is mainly doing the following things:
    * Get SLS processing related information from event
    * Initiate SLS client
    * Pull logs from source log store
    
    """
    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    
    import logging
    import json
    import os
    from aliyun.log import LogClient
    
    
    logger = logging.getLogger()
    
    
    def handler(event, context):
    
        # Use context.credentials.to obtain information about keys.
        # Access keys can be fetched through context.credentials
        print("The content in context entity is: ", context)
        creds = context.credentials
        access_key_id = creds.access_key_id
        access_key_secret = creds.access_key_secret
        security_token = creds.security_token
    
        # Parse the event parameter to the object data type.
        # parse event in object
        event_obj = json.loads(event.decode())
        print("The content in event entity is: ", event_obj)
    
        # Query the following information from event.source: log project name, Logstore name, the endpoint to access the Simple Log Service project, start cursor, end cursor, and shard ID.
        # Get the name of log project, the name of log store, the endpoint of sls, begin cursor, end cursor and shardId from event.source
        source = event_obj['source']
        log_project = source['projectName']
        log_store = source['logstoreName']
        endpoint = source['endpoint']
        begin_cursor = source['beginCursor']
        end_cursor = source['endCursor']
        shard_id = source['shardId']
    
        # Initialize the Simple Log Service client.
        # Initialize client of sls
        client = LogClient(endpoint=endpoint, accessKeyId=access_key_id, accessKey=access_key_secret, securityToken=security_token)
    
        # Read logs that start from the start and end cursors in the source Logstore. In this example, the specified cursors include all logs of the function invocation.
        # Read data from source logstore within cursor: [begin_cursor, end_cursor) in the example, which contains all the logs trigger the invocation
        while True:
          response = client.pull_logs(project_name=log_project, logstore_name=log_store,
                                    shard_id=shard_id, cursor=begin_cursor, count=100,
                                    end_cursor=end_cursor, compress=False)
          log_group_cnt = response.get_loggroup_count()
          if log_group_cnt == 0:
            break
          logger.info("get %d log group from %s" % (log_group_cnt, log_store))
          logger.info(response.get_loggroup_list())
    
          begin_cursor = response.get_next_cursor()
    
        return 'success'
  2. On the Function Details page, click the Logs tab. In the left-side navigation pane, click Function Logs and view the latest data obtained when the function is executed. If the "The logging feature is not enabled for the current function." message is displayed, click Enable.

After the preceding steps are performed, the Simple Log Service trigger is configured. If you want to debug code in the Function Compute console, continue to perform the following steps.

(Optional) Step 4: Use a simulated event to test the function

  1. On the Code tab of the function details page, click the icon next to Test Function and select Configure Test Parameters from the drop-down list.

  2. In the Configure Test Parameters panel, click the Create New Test Event or Modify Existing Test Event tab, enter the event name and event content, and then click OK. If you select Create New Test Event, we recommend that you select Log Service for Event Template. For more information about how to configure test data, see the event section of this topic.

  3. After the simulated event is configured, click Test Function.

    After the function test is complete, you can view the results on the Code tab.

FAQ

  • What do I do if new logs are generated but the Simple Log Service trigger does not trigger function execution?

    You can troubleshoot the issue by using the following methods:

    • Check whether incremental data changes occurred in the Logstore associated with the trigger. The associated function is triggered if the shard data changes.

    • Check whether exceptions appear in the trigger logs or in the invocation logs of the function.

  • Why is the execution frequency of a Simple Log Service trigger higher than expected?

    A function is triggered separately for each shard. Even if the total number of invocations for a Logstore seems large, the interval between invocations for each individual shard is consistent with the specified trigger interval.

    The trigger interval for a shard equals the time span of the data that each invocation transforms. If transformation falls behind the latest written data, the trigger shortens the interval to catch up, so the observed invocation frequency can be higher than expected. The following list describes two scenarios with a specified trigger interval of 60 seconds.

    • Scenario 1: The function is triggered without latency. The function is triggered at a 60-second interval to transform the data that is generated in the time range [now - 60s, now).

      Note

      A function is separately triggered for each shard. If a Logstore contains 10 shards and latency does not exist when the function is triggered, the function is triggered 10 times at a 60-second interval to transform data in real time.

    • Scenario 2: The function is triggered, and latency exists. The time difference between the point in time at which data in a Simple Log Service shard is transformed and the point in time at which the latest data is written to Simple Log Service is greater than 10 seconds. In this case, the trigger shortens the interval. For example, the function can be triggered at a 2-second interval to transform data that is generated within 60 seconds.

  • What do I do if the error message "denied by sts or ram, action: log:GetCursorOrData, resource: ****" is returned?

    If this error message is displayed in the function log, the related permissions may not be configured for the function or the permissions are incorrectly configured. For more information, see the Step 2: Configure the permissions section of this topic.