By configuring a Simple Log Service (SLS) trigger, you can integrate SLS with Function Compute. When new logs are generated, the trigger automatically executes a function to incrementally consume data from an SLS Logstore and perform custom processing tasks.
Use cases
Data cleansing and processing
Simple Log Service allows you to quickly collect, process, query, and analyze logs.
Data shipping
This feature supports data delivery to various destinations and builds data pipelines between cloud-based big data products.
Data processing functions
Function types
Template functions
For more information, see aliyun-log-fc-functions.
Custom functions
The function configuration format depends on the specific implementation of the function. For more information, see ETL function development guide.
Trigger mechanism
An ETL job in Simple Log Service corresponds to a trigger in Function Compute. When you create an ETL job, Simple Log Service starts a timer that periodically polls for shard information in the Logstore. When new data is written, the system generates an event as a triplet in the format of <shard_id,begin_cursor,end_cursor> and triggers the function.
When the storage system is upgraded, a cursor change may occur even if no new data is written. In this case, each shard is triggered once with an empty payload. Handle this in your function by pulling data from the shard with the cursor. If no data is returned, it indicates an empty trigger, and you can ignore the invocation. For more information, see Custom function development guide.
The trigger mechanism for ETL tasks in Simple Log Service is time-based. For example, if you set the trigger interval for an ETL job to 60 seconds and data is continuously written to shard 0 of the Logstore, the shard triggers a function execution every 60 seconds. If no new data is written to the shard, the function is not triggered. The function's input is the cursor range for the last 60 seconds. Within the function, you can read data from shard 0 based on the cursor for further processing.
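The cursor-window mechanism above can be sketched in a few lines of Python: the function receives one <shard_id, begin_cursor, end_cursor> triplet per invocation and recovers it from the JSON event. This is a minimal illustration; the field names match the event format documented in the Input parameters section, and the cursor values are placeholders.

```python
import json


def parse_shard_window(event_bytes):
    """Extract the <shard_id, begin_cursor, end_cursor> triplet
    from the JSON event that an SLS trigger passes to the function."""
    event = json.loads(event_bytes.decode())
    source = event["source"]
    return source["shardId"], source["beginCursor"], source["endCursor"]


# Simulated trigger payload (cursor values are placeholders).
sample_event = json.dumps({
    "source": {
        "shardId": 0,
        "beginCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2Mw==",
        "endCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2NA==",
    }
}).encode()

print(parse_shard_window(sample_event))
# (0, 'MTUyOTQ4MDIwOTY1NTk3ODQ2Mw==', 'MTUyOTQ4MDIwOTY1NTk3ODQ2NA==')
```

Inside a real function, you would pass this triplet to the SLS SDK to pull the log block; if the pull returns no data, treat the invocation as an empty trigger and return.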
Limitations
The maximum number of Simple Log Service triggers that you can associate with a single Project is five times the number of Logstores in that Project.
We recommend that you configure no more than five Simple Log Service triggers for each Logstore. Otherwise, the efficiency of data shipping to Function Compute may be affected.
Example scenario
You can configure a Simple Log Service trigger to periodically fetch updated data and invoke a function, which allows you to incrementally consume data from a Logstore. In the function, you can perform custom tasks, such as data cleansing and processing, and ship the data to third-party services. This example only demonstrates how to obtain and print log data.
The function used for data processing can be a template provided by Simple Log Service or a custom function that you create.
Prerequisites
Function Compute
Simple Log Service (SLS)
Create one Project and two Logstores: a source Logstore for the collected logs and a second Logstore for the trigger's execution logs. Ensure that logs are continuously collected, because Function Compute is triggered only by new data.
The Simple Log Service Project and the Function Compute function must reside in the same region.
Input parameters
event

When a Simple Log Service trigger fires, it passes a JSON object that represents the event data to the function's event input parameter. The format is as follows:

```json
{
  "parameter": {},
  "source": {
    "endpoint": "http://cn-hangzhou-intranet.log.aliyuncs.com",
    "projectName": "fc-test-project",
    "logstoreName": "fc-test-logstore",
    "shardId": 0,
    "beginCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2Mw==",
    "endCursor": "MTUyOTQ4MDIwOTY1NTk3ODQ2NA=="
  },
  "jobName": "1f7043ced683de1a4e3d8d70b5a412843d81****",
  "taskId": "c2691505-38da-4d1b-998a-f1d4bb8c****",
  "cursorTime": 1529486425
}
```

The following table describes the parameters.
Parameter
Description
parameter
The value of the Invocation Parameters that you specified when you configured the trigger.
source
The information about the log block to be read by the function.
endpoint: The Simple Log Service endpoint of the region where the Project resides.
projectName: The name of the Project.
logstoreName: The name of the Logstore that Function Compute consumes. The current trigger subscribes to data from this Logstore and sends it to the function service for custom processing at regular intervals.
shardId: A specific shard in the Logstore.
beginCursor: The position where data consumption begins.
endCursor: The position where data consumption stops.
Note: When you debug the function, you can call the GetCursor API operation to obtain the beginCursor and endCursor for a point in time, and then construct a test event based on the preceding example.
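Once you have a pair of cursors (for example, from GetCursor calls), assembling a test event is plain JSON construction. The sketch below is a hedged helper, not part of any SDK; the jobName and taskId values are placeholders, since real jobs generate them automatically.

```python
import json
import time


def build_test_event(project, logstore, endpoint, shard_id,
                     begin_cursor, end_cursor):
    """Assemble a test event in the format that an SLS trigger passes
    to the function. Cursors would normally come from GetCursor."""
    return json.dumps({
        "parameter": {},
        "source": {
            "endpoint": endpoint,
            "projectName": project,
            "logstoreName": logstore,
            "shardId": shard_id,
            "beginCursor": begin_cursor,
            "endCursor": end_cursor,
        },
        "jobName": "test-job",   # placeholder; real jobs are auto-generated
        "taskId": "test-task",   # placeholder
        "cursorTime": int(time.time()),
    })


event = build_test_event("fc-test-project", "fc-test-logstore",
                         "http://cn-hangzhou-intranet.log.aliyuncs.com",
                         0, "BEGIN_CURSOR==", "END_CURSOR==")
print(event)
```

You can paste the resulting JSON string into the console's test-event panel to exercise the function without waiting for real log data.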
jobName
The name of the Simple Log Service ETL job. A Simple Log Service trigger configured for a function corresponds to an ETL job in Simple Log Service.
Function Compute generates this parameter automatically. No user configuration is required.
taskId
For an ETL job, taskId is a deterministic identifier for a function invocation.
Function Compute generates this parameter automatically. No user configuration is required.
cursorTime
The Unix timestamp, in seconds, when the last log arrived at the Simple Log Service server.
context

When Function Compute runs your function, it passes a context object to the function. This object contains information about the invocation, service, function, tracing, and execution environment. This topic uses context.credentials to obtain the credential information. For more information about other fields, see Context.
Step 1: Create an SLS trigger
Log on to the Function Compute console. In the left-side navigation pane, click Functions.
In the top navigation bar, select a region. On the Functions page, click the function that you want to manage.
On the function details page, click the Trigger tab and then click Create Trigger. In the Create Trigger panel, select Log Service for Trigger Type, configure other parameters, and then click OK.
Parameter
Actions
Example
Name
Enter a custom name for the trigger. If you leave this parameter empty, Function Compute automatically generates a name.
log_trigger
Version or Alias
The default value is LATEST. If you want to create a trigger for a different version or alias, you must first switch to that version or alias in the upper-right corner of the function details page. For an introduction to versions and aliases, see Manage versions and Manage aliases.
LATEST
Log Service Project
Select the Simple Log Service Project from which you want to consume data.
aliyun-fc-cn-hangzhou-2238f0df-a742-524f-9f90-976ba457****
Logstore
Select the Logstore from which you want to consume data. The trigger periodically subscribes to data from this Logstore and sends it to the function service for custom processing.
function-log
Trigger Interval
Specify the interval at which Simple Log Service triggers the function.
Valid values: 3 to 600. Unit: seconds. Default value: 60.
60
Retries
Specify the maximum number of retries allowed for a single invocation.
Valid values: 0 to 100. Default value: 3.
Note: A successful execution is one whose status code is 200 and whose X-Fc-Error-Type response header is neither UnhandledInvocationError nor HandledInvocationError. All other cases are failed executions, which trigger retries. For more information about the X-Fc-Error-Type parameter, see Response parameters. If an execution fails, the system retries it based on your configuration. If all configured retries fail, the system enters a backoff retry phase with increasing intervals.
3
Trigger Log
Select a created Logstore. Logs from the function execution triggered by Simple Log Service are recorded in this Logstore.
function-log2
Invocation Parameters
Custom parameters to pass to the function. The value must be a JSON-formatted string and is passed as the parameter field of the event.
This parameter is empty by default.
None
Role Name
Select AliyunLogETLRole.
Note: If you are creating this type of trigger for the first time, you must click OK and then click Authorize Now in the dialog box that appears.
AliyunLogETLRole
After the trigger is created, it is displayed on the Triggers tab. To modify or delete a trigger, see Trigger Management.
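If you set Invocation Parameters when you create the trigger, the function receives them in the event's parameter field. A minimal sketch of reading them, assuming the JSON event format shown in the Input parameters section (the "target" key is a made-up example, not a reserved name):

```python
import json


def read_invocation_parameters(event_bytes, default=None):
    """Return the custom parameters configured on the trigger,
    or a default when none were configured (parameter is {} by default)."""
    event = json.loads(event_bytes.decode())
    return event.get("parameter") or (default if default is not None else {})


sample = json.dumps({"parameter": {"target": "oss"}, "source": {}}).encode()
print(read_invocation_parameters(sample))  # {'target': 'oss'}
```

This lets one function body serve several triggers, each configured with different parameters.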
Step 2: Configure permissions
On the Function Details page, select the Configuration tab. In the Advanced Settings section, click Modify. In the Advanced Settings panel, select a Function Role.
You can use the default role AliyunFCServerlessDevsRole, which has read-only permissions on Simple Log Service by default.
You can also customize a RAM role. A custom RAM role must meet the following two requirements:
When you create a RAM role, select Alibaba Cloud Service as the trusted entity and select Function Compute as the trusted service. For more information, see Create a RAM role for a trusted Alibaba Cloud service.
Grant the necessary Simple Log Service permissions to the RAM role based on the specific requirements of the function. For more information, see Examples of custom RAM policies.
When you are finished, click Deploy.
Step 3: Deploy and view logs
On the Code tab of the function details page, enter your code in the code editor and then click Deploy.
This example deploys a Python function that performs the following actions:
Obtains the Simple Log Service trigger information, such as endpoint, projectName, logstoreName, and beginCursor, from the event parameter.
Obtains the credential information, such as accessKeyId, accessKey, and securityToken, from the context parameter.
Initializes the SLS client based on the obtained information.
Obtains log data from the source Logstore at the specified cursor position.
Note: You can use the following sample code as a template; it already contains most of the common log-pulling logic.
""" This sample code shows how to perform the following actions: * Parse SLS event trigger information from the event. * Initialize the SLS client based on the obtained information. * Pull real-time log data from the source Logstore. """ #!/usr/bin/env python # -*- coding: utf-8 -*- import logging import json import os from aliyun.log import LogClient logger = logging.getLogger() def handler(event, context): # Access keys can be fetched through context.credentials. print("The content in context entity is: ", context) creds = context.credentials access_key_id = creds.access_key_id access_key_secret = creds.access_key_secret security_token = creds.security_token # Parse event into an object. event_obj = json.loads(event.decode()) print("The content in event entity is: ", event_obj) # Get the name of the Project, the name of the Logstore, the endpoint of SLS, beginCursor, endCursor, and shardId from event.source. source = event_obj['source'] log_project = source['projectName'] log_store = source['logstoreName'] endpoint = source['endpoint'] begin_cursor = source['beginCursor'] end_cursor = source['endCursor'] shard_id = source['shardId'] # Initialize the SLS client. client = LogClient(endpoint=endpoint, accessKeyId=access_key_id, accessKey=access_key_secret, securityToken=security_token) # Read logs from the source Logstore within cursor range [begin_cursor, end_cursor). In this example, the range contains all logs that trigger the invocation. while True: response = client.pull_logs(project_name=log_project, logstore_name=log_store, shard_id=shard_id, cursor=begin_cursor, count=100, end_cursor=end_cursor, compress=False) log_group_cnt = response.get_loggroup_count() if log_group_cnt == 0: break logger.info("get %d log group from %s" % (log_group_cnt, log_store)) logger.info(response.get_loggroup_list()) begin_cursor = response.get_next_cursor() return 'success'On the Function Details page, choose to view the latest data obtained when the function runs. 
If the message "The logging feature is not enabled for the current function." appears, click Enable.
You have now completed the configuration of the Simple Log Service trigger. To debug the code in the console, follow these steps.
(Optional) Step 4: Test with a simulated event
On the Code tab of the function details page, click the icon next to Test Function and select Configure Test Parameters from the drop-down list. In the Configure Test Parameters panel, select Create New Test Event or Modify Existing Test Event, enter the event name and content, and then click OK. If you create a new test event, we recommend that you select the Log Service template. For more information about how to configure the test data, see event.
After the simulated event is configured, click Test Function.
After the execution is complete, you can view the execution result above the Code tab.
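A minimal test event for the Log Service template might look like the following. All values here are placeholders; substitute your own Project, Logstore, endpoint, and cursors (for real cursors, see the GetCursor note in Input parameters):

```json
{
  "parameter": {},
  "source": {
    "endpoint": "http://cn-hangzhou-intranet.log.aliyuncs.com",
    "projectName": "my-test-project",
    "logstoreName": "my-test-logstore",
    "shardId": 0,
    "beginCursor": "BEGIN_CURSOR==",
    "endCursor": "END_CURSOR=="
  },
  "jobName": "test-job",
  "taskId": "test-task",
  "cursorTime": 1529486425
}
```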
FAQ
Troubleshooting: SLS trigger fails to invoke
You can troubleshoot the issue by checking the following:
Confirm that new data exists in the Logstore configured for the trigger. The function is triggered when the shard data changes.
Check the trigger logs and function run logs for any exceptions.
Why is invocation frequency high?
Each shard is triggered separately. You may observe a high overall trigger count for a Logstore, but the triggers for each shard still occur within the specified interval.
For a single shard, the trigger interval equals the time range of the data processed in each invocation. During function execution, the trigger interval falls into two scenarios. Assume that the trigger interval is 60 seconds.
No trigger delay: The function is triggered periodically at the configured interval, once every 60 seconds, and processes data in the range [now - 60s, now).
Note: Function triggers are independent for each shard. If a Logstore has 10 shards and data is processed in real time (no trigger delay), 10 function invocations occur every 60 seconds.
Trigger delay occurs (when the current processing position in the Simple Log Service shard lags behind the latest written data by more than 10 seconds): The trigger accelerates to catch up, potentially reducing the interval to 2 seconds per trigger while continuing to process data in 60-second windows.
Error: "denied by sts or ram"
If this error appears in the function logs, it may be because the function has not been configured with the required permissions, or the permission policy is set incorrectly. For more information, see Step 2: Configure permissions.