EventBridge: Stream log events from Simple Log Service

Last Updated: Mar 11, 2026

Simple Log Service (SLS) is a cloud-native observability and analytics platform that provides large-scale, low-cost, and real-time services to process multiple types of data such as logs, metrics, and traces. SLS allows you to collect, transform, query, analyze, consume, and deliver data, and configure alerts.

When log data in SLS changes, you may need to react in real time -- triggering a function, forwarding to a message queue, or routing to another cloud service. EventBridge allows you to obtain events from SLS in real time and load them to event targets. An EventBridge event stream connects an SLS Logstore to a downstream target through a continuous pipeline, delivering each batch of log entries as a structured CloudEvents event.

How it works

An event stream follows a four-stage pipeline:

Source (SLS) --> Filter --> Transform --> Sink (target)

Source: Reads log entries from the SLS Logstore and batches them.
Filter: Drops events that do not match a pattern you define.
Transform: Reshapes event payloads before delivery.
Sink: Delivers the final event to the target service.
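
The four stages above can be modeled as a small pipeline. The functions below are illustrative placeholders, not EventBridge APIs; they only show how each stage hands off to the next.

```python
# Sketch of the four-stage event-stream pipeline (Source -> Filter -> Transform -> Sink).
# These functions model the flow only; they are not EventBridge APIs.

def source(logstore):
    """Source: read log entries from the Logstore, one at a time."""
    for entry in logstore:
        yield entry

def matches_filter(event, pattern):
    """Filter: keep only events whose fields equal the pattern's values."""
    return all(event.get(k) == v for k, v in pattern.items())

def transform(event):
    """Transform: reshape the payload before delivery (identity here)."""
    return event

def sink(event, delivered):
    """Sink: deliver the final event to the target service."""
    delivered.append(event)

def run_stream(logstore, pattern):
    delivered = []
    for event in source(logstore):
        if matches_filter(event, pattern):
            sink(transform(event), delivered)
    return delivered

logs = [{"type": "sls:connector"}, {"type": "other"}]
print(run_stream(logs, {"type": "sls:connector"}))  # [{'type': 'sls:connector'}]
```

Events dropped by the filter stage never reach the transform or sink stages, which is why filtering early keeps downstream costs low.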

EventBridge sends a batch when the backlog reaches the Messages threshold or when the Interval timer expires.

Prerequisites

Before you begin, make sure that you have:

  1. An SLS project and Logstore that contain the logs you want to consume.

  2. A RAM role that EventBridge can assume to read logs from the Logstore. For the required permissions, see Create a custom event source of the Log Service type.

Create an event stream for SLS

Step 1: Start the event stream wizard

  1. Log on to the EventBridge console.

  2. In the left-side navigation pane, click Event Streams.

  3. In the top navigation bar, select a region, and then click Create Event Stream.

  4. Enter a Task Name and Description.

Step 2: Configure the source

In the Source step, set Data Provider to Simple Log Service and configure the following parameters. Then click Next Step.

Project: The SLS project to consume logs from. Example: test-Project.
Logstore: The SLS Logstore to consume logs from. Example: test-LogStore.
Consumer Offset: The position in the log stream where consumption starts. Valid values: Earliest Offset, Latest Offset, and Specified Time. Example: Latest Offset.
Role: The RAM role that EventBridge assumes to read logs from SLS. For the required permissions, see Create a custom event source of the Log Service type. Example: testRole.
Messages: The maximum number of messages that can be sent in each batch. A batch is sent when the number of messages in the backlog reaches this value or the Interval elapses. Valid values: 1 to 10000. Example: 100.
Interval (Unit: Seconds): The time interval, in seconds, at which aggregated messages are sent to the target. Set to 0 to send messages immediately after aggregation. Valid values: 0 to 15. Example: 3.

If you select Latest Offset, logs written before the event stream starts are skipped. To process all existing logs, select Earliest Offset.

How Messages and Interval work together: EventBridge sends a batch when the backlog reaches the Messages threshold or when the Interval timer expires. A lower interval delivers smaller batches more frequently, while a higher Messages value groups more log entries per batch.
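
The batching behavior described above can be sketched as follows. This is a simplified model of the documented Messages/Interval triggers, not the actual EventBridge implementation.

```python
import time

def batch_logs(entries, messages=100, interval=3.0):
    """Yield a batch when the backlog reaches `messages` entries or when
    `interval` seconds have elapsed since the batch was started.
    Sketch of the documented behavior, not the real implementation."""
    batch, started = [], time.monotonic()
    for entry in entries:
        batch.append(entry)
        if len(batch) >= messages or time.monotonic() - started >= interval:
            yield batch
            batch, started = [], time.monotonic()
    if batch:  # flush any remaining entries
        yield batch

# With messages=3 and a long interval, ten entries produce batches of 3, 3, 3, 1.
batches = list(batch_logs(range(10), messages=3, interval=60.0))
print([len(b) for b in batches])  # [3, 3, 3, 1]
```

With a short interval, partially filled batches are flushed sooner, trading throughput for latency.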

Step 3: Configure filtering, transformation, and sink

In the Filtering, Transformation, and Sink steps, configure the event filtering pattern, transformation rule, and event target.

For details on event transformation, see Use Function Compute to perform message cleansing.
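
A filtering pattern is matched against the event envelope. As a rough model, a pattern key matches when the event's value is one of the listed values; the function below uses simplified exact-match semantics only (real EventBridge patterns also support prefix matching and other operators), and the field values are taken from the sample event later in this topic.

```python
def pattern_matches(event, pattern):
    """Return True if, for every pattern key, the event's value is one of
    the allowed values. Simplified exact-match semantics; actual
    EventBridge patterns support additional operators."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())

event = {"source": "test-SLS", "type": "sls:connector"}
print(pattern_matches(event, {"source": ["test-SLS"], "type": ["sls:connector"]}))  # True
print(pattern_matches(event, {"source": ["other-source"]}))  # False
```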

Step 4: Configure the retry policy and dead-letter queue

Under Task Property, configure the retry policy and dead-letter queue (DLQ) for the event stream. The retry policy controls how EventBridge handles delivery failures. Events that still fail after all retries are sent to the DLQ so they are not lost.

For configuration details, see Retry policies and dead-letter queues.
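
The retry-then-DLQ flow can be sketched as below. This is an illustrative model of the failure-handling described above, with hypothetical function names; it is not the EventBridge retry policy itself.

```python
import time

def always_fails(event):
    """Stand-in for an unreachable target service."""
    raise RuntimeError("target unavailable")

def deliver_with_retry(event, send, dlq, max_retries=3, backoff=1.0):
    """Attempt delivery with exponential backoff; park the event in the
    dead-letter queue after all retries fail so it is not lost.
    Sketch only, not the EventBridge retry policy."""
    for attempt in range(max_retries + 1):
        try:
            send(event)
            return True
        except Exception:
            if attempt < max_retries:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    dlq.append(event)  # all retries exhausted: dead-letter the event
    return False

dlq = []
deliver_with_retry({"id": "evt-1"}, always_fails, dlq, max_retries=1, backoff=0.0)
print(dlq)  # [{'id': 'evt-1'}]
```

Events parked in the DLQ can later be inspected or redelivered once the target recovers.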

Step 5: Save and enable the event stream

  1. Click Save to create the event stream.

  2. On the Event Streams page, find the event stream and click Enable in the Actions column.

Enabling takes 30 to 60 seconds. Track the progress in the Status column.

Sample event

Use the following sample event to design filtering patterns and transformation rules.

{
    "datacontenttype": "application/json;charset=utf-8",
    "aliyunaccountid": "175299981560****",
    "data": {
        "key1": "value1",
        "key2": "value2",
        "__topic__": "test_topic",
        "__source__": "test_source",
        "__client_ip__": "122.231.XX.XX",
        "__receive_time__": "1663487595",
        "__pack_id__": "59b662b2257796****"
    },
    "subject": "acs:log:cn-qingdao:175299981560****:project/qiingdaoproject/logstore/qingdao-logstore-1",
    "aliyunoriginalaccountid": "175299981560****",
    "source": "test-SLS",
    "type": "sls:connector",
    "aliyunpublishtime": "2022-09-18T07:53:15.387Z",
    "specversion": "1.0",
    "aliyuneventbusname": "qingdaoBus",
    "id": "qiingdaoproject-qingdao-logstore-1-1-MTY2MzExODM5ODY4NjAxOTQyMw****",
    "time": "2022-09-18T07:53:12Z",
    "aliyunregionid": "cn-qingdao",
    "aliyunpublishaddr": "10.50.XX.XX"
}

For details on the CloudEvents specification fields (specversion, type, source, id, time, and others), see Event overview.

Data fields

The data object contains your log content. Fields prefixed and suffixed with double underscores (__) are SLS system fields. For more information, see Reserved fields.

key1 (String): A user-defined log field. Example: testKey.
__topic__ (String): The log topic. Example: testTopic.
__source__ (String): The device from which the log is collected. Example: testSource.
__client_ip__ (String): The IP address of the host where the log resides. Example: 122.231.XX.XX.
__receive_time__ (String): The Unix timestamp when the server received the log. Example: 1663487595.
__pack_id__ (String): The unique identifier of the log group this entry belongs to. Example: 59b662b2257796****.
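
Because system fields follow the double-underscore naming convention, they can be separated from user-defined fields with a simple check. The sample data below is taken from the sample event above.

```python
# Split the `data` object of a received event into SLS system fields
# (double-underscore names such as __topic__) and user-defined log fields.
sample_data = {
    "key1": "value1",
    "key2": "value2",
    "__topic__": "test_topic",
    "__source__": "test_source",
    "__receive_time__": "1663487595",
}

system = {k: v for k, v in sample_data.items()
          if k.startswith("__") and k.endswith("__")}
user = {k: v for k, v in sample_data.items() if k not in system}

print(sorted(system))  # ['__receive_time__', '__source__', '__topic__']
print(sorted(user))    # ['key1', 'key2']
```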