Simple Log Service:Manage stores

Last Updated: Nov 04, 2025

A store is the basic unit for data storage and queries in Simple Log Service. Simple Log Service provides three types of stores to handle different data types: Logstore, MetricStore, and EventStore.

How to choose a store type

The primary difference among the three store types is the kind of data each is designed to store. Choose the store type that matches your data. If you have no special requirements, use a Logstore. The scenarios for each store type are as follows:

Logstore

  • Log data (Log): An abstract representation of changes in a system over time. Its content is an ordered collection of operations on a specified object and the results of those operations. In a broad sense, this includes almost all types of data.

  • Trace data (Trace): Records processing information for a single request, including service invocations and processing durations.

MetricStore

  • Time series data (Metric): Composed of a metric identifier and data points. Data with the same metric identifier forms a time series. Use a MetricStore when you need to store time series data.

EventStore

  • Event data (Event): An event is a noteworthy and valuable piece of data. Examples include alert monitoring data and the results of periodic inspection jobs. Use an EventStore when you need to store event data.

Logstore

A Logstore is the unit for storing and querying log data in Simple Log Service. Each Logstore belongs to a project. You can create multiple Logstores in a project as needed. Typically, you create separate Logstores for different types of logs from the same application. For example, to collect operation logs (operation_log), application logs (application_log), and access logs (access_log) for App A, you can create a project named app-a. In this project, you can create Logstores named operation_log, application_log, and access_log to store each log type separately.
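As a minimal sketch of this layout, the following Python snippet uses the aliyun-log-python-sdk to create the app-a project and its three Logstores. The endpoint and credentials are placeholders, and the retention and shard settings are illustrative assumptions, not recommendations.

from aliyun.log import LogClient  # pip install aliyun-log-python-sdk

# Placeholder endpoint and credentials.
client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "<access_key_id>", "<access_key_secret>")

client.create_project("app-a", "Stores for App A logs")
for name in ("operation_log", "application_log", "access_log"):
    # 30-day retention and 2 shards are illustrative defaults.
    client.create_logstore("app-a", name, ttl=30, shard_count=2)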

Specify a Logstore when you write, query, analyze, process, consume, or deliver logs. The details are as follows:

  • A Logstore is used as the unit for log collection.

  • A Logstore is used as the unit for log storage, processing, consumption, and delivery.

  • An index is created in a Logstore to query and analyze logs.

MetricStore

A MetricStore is the unit for storing and querying time series data in Simple Log Service. Each MetricStore belongs to a project. You can create multiple MetricStores in a project as needed. Typically, you create different MetricStores for different types of time series data. For example, to collect basic host monitoring data, cloud service monitoring data, and business application monitoring data, you can create a project named demo-monitor. Then, in this project, you can create MetricStores named host-metrics, cloud-service-metrics, and app-metrics to store these data types separately.

Specify a MetricStore when you write, query, analyze, or consume time series data. The details are as follows:

  • A MetricStore is used as the unit for time series data collection.

  • A MetricStore is used as the unit for time series data storage and consumption.

  • Time series data in a MetricStore can be queried and analyzed using Prometheus Query Language (PromQL), SQL-92, or SQL+PromQL syntax.
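For example, assuming a MetricStore contains a metric named http_request_total, the PromQL query sum(rate(http_request_total[5m])) by (method) returns the per-method request rate over the last five minutes. The SQL example in the Time series data (Metric) section later in this topic shows how to access the same raw data with SQL.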

EventStore

An EventStore is the unit for storing and querying event data in Simple Log Service. Each EventStore belongs to a project. You can create multiple EventStores in a project as needed. Typically, you create different EventStores for different types of event data. For example, you can categorize data into infrastructure anomaly events, business application events, and custom events, and use a separate EventStore to store and analyze each category.

Specify an EventStore when you write, query, analyze, or consume event data. The details are as follows:

  • An EventStore is used as the unit for event data collection.

  • An EventStore is used as the unit for event data storage and consumption.

References

LogGroup

A log group (LogGroup) is a collection of logs and is the basic unit for writing and reading logs. The logs in a LogGroup share the same metadata, such as the IP address and source information. When you write logs to or read logs from Simple Log Service, multiple logs are packaged into a LogGroup. This method reduces the number of read and write operations and improves efficiency. Each LogGroup can be up to 5 MB in size.
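As a minimal sketch of this batching, the following Python snippet uses the aliyun-log-python-sdk to send three logs in one request, which the service receives as a single LogGroup. The project, Logstore, topic, and source values are placeholders.

import time
from aliyun.log import LogClient, LogItem, PutLogsRequest

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "<access_key_id>", "<access_key_secret>")

# Pack several logs into one request; they share the same topic
# and source metadata and travel as one LogGroup.
items = []
for path in ("/index.html", "/cart", "/checkout"):
    item = LogItem()
    item.set_time(int(time.time()))
    item.set_contents([("method", "GET"), ("path", path), ("status", "200")])
    items.append(item)

request = PutLogsRequest("app-a", "access_log", topic="http_module",
                         source="203.0.113.10", logitems=items)
client.put_logs(request)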


Log data (Log)

Log data is an abstract representation of changes in a system over time. A log consists of an ordered collection of operations on a specified object and the results of those operations. Log files, events, database binary logs (BinLogs), and time series data (metrics) are all different forms of logs. Simple Log Service uses a semi-structured data model to define a log. A log consists of five data domains: topic, time, content, source, and tags. Simple Log Service has different format requirements for each data domain, as described below.

Topic

  • Description: Simple Log Service uses the reserved field (__topic__) to identify the log topic. This helps distinguish logs from different services, users, or instances. For example, if System A has modules for frontend HTTP request handling, caching, logic processing, and storage, you can set a topic for each module's logs, such as http_module, cache_module, logic_module, and store_module. After logs are collected into the same Logstore, you can use the topic to quickly identify their source. You can set the log topic in the Global Settings of a collection configuration. (Figure: the relationship among a Logstore, topics, and shards.)

  • Format: A string of 0 to 128 bytes in size, including an empty string. If you do not need to distinguish between logs in a Logstore, you can set the topic to an empty string when you collect logs. An empty string is a valid topic.

Event time

  • Description: The reserved field (__time__) identifies the log time. For more information, see Reserved fields.

  • Format: A UNIX timestamp.

Log content

  • Description: The content of the log, which consists of one or more content items in the Key:Value format. When you use Logtail in simple mode (single-line or multi-line) to collect logs, Logtail does not parse the log content. The entire raw log is uploaded to the content field.

  • Format: Each content item is a Key:Value pair.

    • Key: The field name. The key must be a UTF-8 string of 1 to 128 bytes that contains only letters, digits, and underscores, and it cannot start with a digit. The key cannot be any of the following reserved fields: __time__, __source__, __topic__, __partition_time__, _extract_others_, __extract_others__. A minimal key validator is sketched after this list.

    • Value: The field value. The value can be any string up to 1 MB in size.

Log source

  • Description: The reserved field (__source__) identifies the log source, such as the IP address of the server that generated the log.

  • Format: A string of 0 to 128 bytes in size.

Log tags

  • Description: Log tags, which include the following:

    • Custom tags: Tags that you add when you write logs by calling the PutLogs operation.

    • System tags: Tags that Simple Log Service adds to logs, including __client_ip__ and __receive_time__.

  • Format: A dictionary in which both keys and values are strings. In logs, tags are displayed with the __tag__: prefix.
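The key rules above can be checked mechanically. The following is a minimal Python sketch (not part of any SLS SDK; the function name is hypothetical) that validates a candidate content key against the documented constraints.

import re

# Reserved field names that cannot be used as content keys.
RESERVED_KEYS = {
    "__time__", "__source__", "__topic__",
    "__partition_time__", "_extract_others_", "__extract_others__",
}

# Letters, digits, and underscores only; must not start with a digit.
KEY_PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_content_key(key: str) -> bool:
    # 1 to 128 bytes when encoded as UTF-8.
    if not 1 <= len(key.encode("utf-8")) <= 128:
        return False
    if key in RESERVED_KEYS:
        return False
    return bool(KEY_PATTERN.match(key))

print(is_valid_content_key("status_code"))  # True
print(is_valid_content_key("__time__"))     # False: reserved field
print(is_valid_content_key("2xx_count"))    # False: starts with a digit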

Example

The following example uses a website access log to show the mapping between a raw log and the data model in Simple Log Service.

  • Raw log

    127.0.0.1 - - [01/Mar/2021:12:36:49 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
  • Log collected in simple mode. The entire raw log is saved in the content field. (Image: log sample)

  • Log collected in regular expression mode. The log content is structured by extracting it into multiple key-value pairs based on the configured regular expression, as sketched after this example. (Image: log sample)
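The following is a minimal Python sketch of this kind of extraction, assuming the common nginx combined log format. The pattern and field names are illustrative and are not a Logtail collection configuration.

import re

NGINX_PATTERN = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d+) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"'
)

raw = ('127.0.0.1 - - [01/Mar/2021:12:36:49 +0800] "GET /index.html HTTP/1.1" '
       '200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) '
       'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"')

match = NGINX_PATTERN.match(raw)
if match:
    # Each named group becomes one Key:Value content item.
    for key, value in match.groupdict().items():
        print(f"{key}: {value}")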

Time series data (Metric)

Time series data consists of a metric identifier and data points. Data with the same identifier forms a time series. The time series data type in Simple Log Service follows the Prometheus data model. All data in a MetricStore is stored as time series data.


Metric identifier

Each time series has a unique identifier, which consists of a metric name and labels.

  • A metric name is a string identifier that specifies the type of metric. The metric name must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. For example, http_request_total represents the total number of HTTP requests received.

  • Labels are a set of key-value pairs that identify the attributes of a metric. The key must match the regular expression [a-zA-Z_][a-zA-Z0-9_]*, and the value cannot contain a vertical bar (|). For example, method=POST and url=/api/v1/get.

Data point

A data point represents the value of a time series at a specific point in time. Each data point consists of a timestamp and a value. The timestamp is recorded with nanosecond precision, and the value is a double type.

Data structure

The write protocol for time series data is the same as the log write protocol and uses the Protobuf data encoding method. The metric identifier and data points are both located in the content field, as described below.

__name__

  • Description: The metric name.

  • Example: nginx_ingress_controller_response_size

__labels__

  • Description: The label information, in the format {key}#$#{value}|{key}#$#{value}|{key}#$#{value}. Note the following:

    • Sort label keys in alphabetical order.

    • We recommend that you do not write a label whose value is an empty string. For example, if the label information is app#$#|controller_class#$#nginx, writing the label with the key app to the MetricStore may cause errors during PromQL aggregations. A helper that builds a well-formed label string is sketched after this list.

  • Example: app#$#ingress-nginx|controller_class#$#nginx|controller_namespace#$#kube-system|controller_pod#$#nginx-ingress-controller-589877c6b7-hw9cj

__time_nano__

  • Description: The timestamp. You can write timestamps with various precisions, such as seconds (s), milliseconds (ms), microseconds (us), and nanoseconds (ns). For SQL queries, all timestamps are converted to microsecond (us) precision in the output to ensure consistent calculations.

  • Example: 1585727297293000

__value__

  • Description: The metric value, which is a double.

  • Example: 36.0
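The following is a minimal Python sketch (the function name is hypothetical, not an SLS SDK API) that serializes a label dictionary into the __labels__ format, sorting keys alphabetically and skipping empty values as recommended above.

def build_labels(labels: dict) -> str:
    # Serialize to {key}#$#{value}|{key}#$#{value} with keys sorted
    # alphabetically, skipping labels whose value is an empty string.
    parts = []
    for key in sorted(labels):
        value = labels[key]
        if value == "":
            continue  # empty values may break PromQL aggregations
        parts.append(f"{key}#$#{value}")
    return "|".join(parts)

print(build_labels({
    "controller_class": "nginx",
    "app": "ingress-nginx",
    "controller_namespace": "kube-system",
}))
# app#$#ingress-nginx|controller_class#$#nginx|controller_namespace#$#kube-system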

Example

Query all raw time series data for the process_resident_memory_bytes metric within a specified time range.

* | select * from "sls-mall-k8s-metrics.prom" where __name__ = 'process_resident_memory_bytes' limit all


Event data (Event)

An event is a noteworthy and valuable piece of data. Examples include alert monitoring data and the results of periodic inspection jobs. Event data in Simple Log Service adheres to the CloudEvents protocol specification, as described below.

Protocol fields:

  • specversion (required, String): The version of the CloudEvents specification that the event complies with. The default value is 1.0.

  • id (required, String): The event ID. You can use source+id to uniquely identify an event.

  • source (required, String): The context in which the event occurred, such as the event source or the instance that published the event.

  • type (required, String): The event type, such as sls.alert.

  • subject (optional, String): The subject of the event. This field supplements the source field, for example, by describing the object that triggered the event.

  • datacontenttype (optional, String): The content type of the data field. The default value is application/cloudevents+json.

  • dataschema (optional, URI): The schema that the data field must adhere to. The default value is empty.

  • data (optional, JSON): The specific event content. The format varies based on the source and type of the event.

  • time (required, Timestamp): The event time, in RFC 3339 format. Example: 2022-10-17T11:20:45.984+08:00.

Extension fields:

  • title (required, String): The event title.

  • message (required, String): The event description.

  • status (required, String): The event status. Valid values: ok, info, warning, and error.
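As a minimal sketch, the following Python snippet assembles an event that carries the required protocol and extension fields from the list above. The helper name and sample values are hypothetical.

import json
import time
import uuid

def make_event(source, type_, title, message, status, data):
    # Assemble the required protocol and extension fields.
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),  # source+id uniquely identifies the event
        "source": source,
        "type": type_,
        "datacontenttype": "application/cloudevents+json",
        "data": data,
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),  # RFC 3339
        "title": title,
        "message": message,
        "status": status,  # ok | info | warning | error
    }

event = make_event("acs:sls", "sls.alert", "Disk usage high",
                   "Disk usage exceeded 90% on host-01", "warning",
                   {"host": "host-01", "usage": "0.93"})
print(json.dumps(event, indent=2))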

Example

The following example shows the data for an alert event:

{
    "specversion": "1.0",
    "id": "af****6c",
    "source": "acs:sls",
    "type": "sls.alert",
    "subject": "https://sls.console.aliyun.com/lognext/project/demo-alert-chengdu/logsearch/nginx-access-log?encode=base64&endTime=1684312259&queryString=c3RhdHVzID49IDQwMCB8IHNlbGVjdCByZXF1ZXN0X21ldGhvZCwgY291bnQoKikgYXMgY250IGdyb3VwIGJ5IHJlcXVlc3RfbWV0aG9kIA%3D%3D&queryTimeType=99&startTime=1684311959",
    "datacontenttype": "application/cloudevents+json",
    "data": {
        "aliuid": "16****50",
        "region": "cn-chengdu",
        "project": "demo-alert-chengdu",
        "alert_id": "alert-16****96-247190",
        "alert_name": "Nginx Access Error",
        "alert_instance_id": "77****e4-1aad9f7",
        "alert_type": "sls_alert",
        "next_eval_interval": 300,
        "fire_time": 1684299959,
        "alert_time": 1684312259,
        "resolve_time": 0,
        "status": "firing",
        "severity": 10,
        "labels": {
            "request_method": "GET"
        },
        "annotations": {
            "__count__": "1",
            "cnt": "49",
            "desc": "Nginx has had 49 GET request errors in the last five minutes",
            "title": "Nginx Access Error Alert Triggered"
        },
        "results": [
            {
                "region": "cn-chengdu",
                "project": "demo-alert-chengdu",
                "store": "nginx-access-log",
                "store_type": "log",
                "role_arn": "",
                "query": "status >= 400 | select request_method, count(*) as cnt group by request_method ",
                "start_time": 1684311959,
                "end_time": 1684312259,
                "fire_result": {
                    "cnt": "49",
                    "request_method": "GET"
                },
                "raw_results": [
                    {
                        "cnt": "49",
                        "request_method": "GET"
                    },
                    {
                        "cnt": "3",
                        "request_method": "DELETE"
                    },
                    {
                        "cnt": "7",
                        "request_method": "POST"
                    },
                    {
                        "cnt": "6",
                        "request_method": "PUT"
                    }
                ],
                "raw_result_count": 4,
                "truncated": false,
                "dashboard_id": "",
                "chart_title": "",
                "is_complete": true,
                "power_sql_mode": "auto"
            }
        ],
        "fire_results": [
            {
                "cnt": "49",
                "request_method": "GET"
            }
        ],
        "fire_results_count": 1,
        "condition": "Count:[1] > 0; Condition:[49] > 20",
        "raw_condition": "Count:__count__ > 0; Condition:cnt > 20"
    },
    "time": "2023-05-17T08:30:59Z",
    "title": "Nginx Access Error Alert Triggered",
    "message": "Nginx has had 49 GET request errors in the last five minutes",
    "status": "error"
}

Trace data (Trace)

Trace data records processing information for a single request, including service invocations and processing durations. A piece of trace data corresponds to a call chain. For more information about the format, see Trace data format. In a broad sense, a call chain represents the execution process of a transaction or flow in a distributed system. In the OpenTracing standard, a call chain is a directed acyclic graph (DAG) composed of multiple spans. Each span represents a named and timed continuous execution segment within the call chain.
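To make the span relationship concrete, the following is an illustrative Python sketch of a two-span call chain. The field names follow common tracing conventions and are assumptions, not the SLS trace data format.

# A root span and one child span from the same call chain (trace).
# All identifiers and timings are made up for illustration.
parent_span = {
    "traceID": "0af7651916cd43dd8448eb211c80319c",
    "spanID": "b7ad6b7169203331",
    "parentSpanID": "",  # empty parent: this is the root span
    "name": "GET /checkout",
    "startTimeUnixNano": 1684312259000000000,
    "durationNano": 120_000_000,  # 120 ms
}
child_span = {
    "traceID": parent_span["traceID"],  # same trace
    "spanID": "00f067aa0ba902b7",
    "parentSpanID": parent_span["spanID"],  # edge in the DAG
    "name": "SELECT orders",
    "startTimeUnixNano": 1684312259020000000,
    "durationNano": 35_000_000,  # 35 ms
}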