A store is the basic unit for data storage and queries in Simple Log Service. Simple Log Service provides three types of stores to handle different data types: Logstore, MetricStore, and EventStore.
How to choose a store type
The three store types differ mainly in the type of data they are designed to store. Choose the store type that matches your data. If you do not have special requirements, use a Logstore by default.
Store type | Scenarios |
Logstore | Log data (Log): An abstract representation of changes in a system over time, such as log files and access logs. Use a Logstore when you need to store log data. |
MetricStore | Time series data (Metric): Composed of a metric identifier and data points. Data with the same metric identifier forms a time series. Use a MetricStore when you need to store time series data. |
EventStore | Event data (Event): An event is a noteworthy and valuable piece of data. Examples include alert monitoring data and the results of periodic inspection jobs. Use an EventStore when you need to store event data. |
Logstore
A Logstore is the unit for storing and querying log data in Simple Log Service. Each Logstore belongs to a project. You can create multiple Logstores in a project as needed. Typically, you create separate Logstores for different types of logs from the same application. For example, to collect operation logs (operation_log), application logs (application_log), and access logs (access_log) for App A, you can create a project named app-a. In this project, you can create Logstores named operation_log, application_log, and access_log to store each log type separately.
Specify a Logstore when you write, query, analyze, process, consume, or deliver logs. The details are as follows:
A Logstore is used as the unit for log collection.
A Logstore is used as the unit for log storage, processing, consumption, and delivery.
An index is created in a Logstore to query and analyze logs.
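For example, a query and analysis request is addressed to a single Logstore and runs against that Logstore's index. The following is a minimal sketch that assumes the aliyun-log-python-sdk package; the endpoint, credentials, and query string are placeholders, and the project and Logstore names reuse the app-a example above.

import time

from aliyun.log import GetLogsRequest, LogClient

# Placeholder endpoint and credentials.
client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# Query and analyze logs in the access_log Logstore of project app-a
# over the last hour. The query runs against the Logstore's index.
now = int(time.time())
request = GetLogsRequest(project="app-a", logstore="access_log",
                         fromTime=now - 3600, toTime=now,
                         query="status >= 400 | select count(*) as cnt")
response = client.get_logs(request)
response.log_print()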
MetricStore
A MetricStore is the unit for storing and querying time series data in Simple Log Service. Each MetricStore belongs to a project. You can create multiple MetricStores in a project as needed. Typically, you create different MetricStores for different types of time series data. For example, to collect basic host monitoring data, cloud service monitoring data, and business application monitoring data, you can create a project named demo-monitor. Then, in this project, you can create MetricStores named host-metrics, cloud-service-metrics, and app-metrics to store these data types separately.
Specify a MetricStore when you write, query, analyze, or consume time series data. The details are as follows:
A MetricStore is used as the unit for time series data collection.
A MetricStore is used as the unit for time series data storage and consumption.
Time series data in a MetricStore can be queried and analyzed using Prometheus Query Language (PromQL), SQL-92, or SQL+PromQL syntax.
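For example, the same series can be read with any of the three syntaxes. The queries below are sketches that assume a metric named host_cpu_util in the host-metrics MetricStore from the example above; the __name__, __value__, and __labels__ fields are described in the time series data structure later in this topic, and the promql_query_range function in the last query is an assumption about the SQL+PromQL form.

# PromQL
avg(host_cpu_util) by (host)

# SQL-92
* | select __labels__, avg(__value__) from "host-metrics.prom" where __name__ = 'host_cpu_util' group by __labels__

# SQL+PromQL
* | select promql_query_range('avg(host_cpu_util) by (host)') from metrics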
EventStore
An EventStore is the unit for storing and querying event data in Simple Log Service. Each EventStore belongs to a project. You can create multiple EventStores in a project as needed. Typically, you create different EventStores for different types of event data. For example, you can categorize data by infrastructure anomalous activity, business application events, and custom events, and use different EventStores for storage and analysis.
Specify an EventStore when you write, query, analyze, or consume event data. The details are as follows:
An EventStore is used as the unit for event data collection.
An EventStore is used as the unit for event data storage and consumption.
LogGroup
A log group (LogGroup) is a collection of logs and is the basic unit for writing and reading logs. The logs in a LogGroup share the same metadata, such as the IP address and source information. When you write logs to or read logs from Simple Log Service, multiple logs are packaged into a LogGroup. This method reduces the number of read and write operations and improves efficiency. Each LogGroup can be up to 5 MB in size.
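As a sketch of this packaging, the following snippet writes several logs in a single request, which Simple Log Service receives as one LogGroup. It assumes the aliyun-log-python-sdk package; the endpoint, credentials, and field values are placeholders.

import time

from aliyun.log import LogClient, LogItem, PutLogsRequest

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "your-access-key-id", "your-access-key-secret")

# Build several logs; they travel together in a single write request.
items = []
for path in ("/index.html", "/cart", "/checkout"):
    item = LogItem()
    item.set_time(int(time.time()))
    item.push_back("request_uri", path)
    items.append(item)

# One request is received as one LogGroup: the topic and source are the
# shared metadata, and the whole group must stay under the 5 MB limit.
request = PutLogsRequest(project="app-a", logstore="access_log",
                         topic="web", source="192.168.0.1", logitems=items)
client.put_logs(request)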

Log data (Log)
Log data is an abstract representation of changes in a system over time. A log consists of an ordered collection of operations on a specified object and the results of those operations. Log files, events, database binary logs (BinLogs), and time series data (metrics) are all different forms of logs. Simple Log Service uses a semi-structured data model to define a log. A log consists of five data domains: topic, time, content, source, and tags. Simple Log Service has different format requirements for each data domain. The following table describes the requirements.
Data domain | Description | Format |
Topic | Simple Log Service uses the reserved field (__topic__) to record the topic of a log. You can use topics to group the logs in a Logstore. | A string of 0 to 128 bytes in size, including an empty string. If you do not need to distinguish between logs in a Logstore, you can set the topic to an empty string when you collect logs. An empty string is a valid topic. |
Event time | The reserved field (__time__) records the time when the log was generated. | UNIX timestamp. |
Log content | The content of the log, which consists of one or more content items in the key-value pair format. When you use Logtail in simple mode (single-line or multi-line) to collect logs, Logtail does not parse the log content. The entire raw log is uploaded to the content field. | Key-value pairs in which both the key and value are strings. |
Log source | The reserved field (__source__) records the source of the log, such as the IP address of the machine on which the log is generated. | A string of 0 to 128 bytes in size. |
Log tags | Log tags, which include the custom tags that you specify when you write logs and the system tags that Simple Log Service adds. | A dictionary format where both the key and value are strings. In logs, tags are displayed with the __tag__: prefix. |
Example
The following example uses a website access log to show the mapping between a raw log and the data model in Simple Log Service.
Raw log
127.0.0.1 - - [01/Mar/2021:12:36:49 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
Log collected in simple mode. The entire raw log is saved in the content field.

Log collected in regular expression mode. The log content is structured by extracting the content into multiple key-value pairs based on the configured regular expression.
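To make this concrete, the following sketch splits the raw access log above into key-value pairs the same way a regular expression collection rule would; the field names and the regular expression itself are illustrative, not an exact Logtail configuration.

import re

# Split the raw access log into key-value pairs, the way a regular
# expression collection rule does. Field names are illustrative.
raw = ('127.0.0.1 - - [01/Mar/2021:12:36:49 +0800] '
       '"GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"')

pattern = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request_method>\S+) (?P<request_uri>\S+) (?P<http_protocol>[^"]+)" '
    r'(?P<status>\d+) (?P<body_bytes_sent>\d+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)"')

match = pattern.match(raw)
if match:
    # Each named group becomes one content item (key-value pair) of the log.
    for key, value in match.groupdict().items():
        print(key, "=", value)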

Time series data (Metric)
Time series data consists of a metric identifier and data points. Data with the same identifier forms a time series. The time series data type in Simple Log Service follows the Prometheus data model. All data in a MetricStore is stored as time series data.

Metric identifier
Each time series has a unique identifier, which consists of a metric name and labels.
A metric name is a string identifier that specifies the type of metric. The metric name must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. For example, http_request_total represents the total number of HTTP requests received.
Labels are a set of key-value pairs that identify the attributes of a metric. The key must match the regular expression [a-zA-Z_][a-zA-Z0-9_]*, and the value cannot contain a vertical bar (|). For example, method=POST and url=/api/v1/get.
Data point
A data point represents the value of a time series at a specific point in time. Each data point consists of a timestamp and a value. The timestamp is recorded with nanosecond precision, and the value is a double-precision floating-point number.
Data structure
The write protocol for time series data is the same as the log write protocol and uses the Protobuf data encoding method. The metric identifier and data points are both located in the content field, as shown in the following table.
Field | Description | Example |
__name__ | The metric name. | nginx_ingress_controller_response_size |
__labels__ | The label information. The format is {key}#$#{value} pairs joined by vertical bars (|). Note: Label keys must be sorted alphabetically. | app#$#ingress-nginx|controller_class#$#nginx|controller_namespace#$#kube-system|controller_pod#$#nginx-ingress-controller-589877c6b7-hw9cj |
__time_nano__ | The timestamp. You can write timestamps with various precisions, such as seconds (s), milliseconds (ms), microseconds (us), and nanoseconds (ns). For SQL queries, all timestamps are converted to microsecond (us) precision in the output to ensure consistent calculations. | 1585727297293000 |
__value__ | The value. | 36.0 |
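The following sketch assembles these four fields for one data point; the metric name, labels, and values reuse the examples in the table above, and the alphabetical ordering of label keys matches the __labels__ example.

# Assemble the four time series fields for one data point. The metric
# name, labels, and values reuse the examples from the table above.
labels = {
    "app": "ingress-nginx",
    "controller_class": "nginx",
    "controller_namespace": "kube-system",
    "controller_pod": "nginx-ingress-controller-589877c6b7-hw9cj",
}

contents = {
    "__name__": "nginx_ingress_controller_response_size",
    # Sort label keys alphabetically, join each pair with '#$#' and the
    # pairs with '|', as in the __labels__ format above.
    "__labels__": "|".join(f"{k}#$#{v}" for k, v in sorted(labels.items())),
    "__time_nano__": "1585727297293000",  # microsecond-precision timestamp
    "__value__": "36.0",
}
print(contents["__labels__"])
# app#$#ingress-nginx|controller_class#$#nginx|controller_namespace#$#kube-system|controller_pod#$#nginx-ingress-controller-589877c6b7-hw9cj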
Example
Query all raw time series data for the process_resident_memory_bytes metric within a specified time range.
* | select * from "sls-mall-k8s-metrics.prom" where __name__ = 'process_resident_memory_bytes' limit all
Event data (Event)
An event is a noteworthy and valuable piece of data. Examples include alert monitoring data and the results of periodic inspection jobs. Event data in Simple Log Service adheres to the CloudEvents protocol specification, as described in the following table.
Field type | Field name | Required | Data format | Description |
Protocol fields | specversion | Yes | String | The version of the CloudEvents specification that the event follows. The default value is 1.0. |
Protocol fields | id | Yes | String | The event ID, which identifies the event. |
Protocol fields | source | Yes | String | The context in which an event occurred, such as the event source or the instance that published the event. |
Protocol fields | type | Yes | String | The event type, such as sls.alert. |
Protocol fields | subject | No | String | The subject of the event. This field supplements the source field. |
Protocol fields | datacontenttype | No | String | The content type of the data field. The default value is application/cloudevents+json. |
Protocol fields | dataschema | No | URI | The schema that the data field follows. |
Protocol fields | data | No | JSON | The specific event content. The format varies based on the source and type of the event. |
Protocol fields | time | Yes | Timestamp | The event time. For more information about the format, see RFC 3339. Example: 2023-05-17T08:30:59Z. |
Extension fields | title | Yes | String | The event title. |
Extension fields | message | Yes | String | The event description. |
Extension fields | status | Yes | String | The event status, such as error. |
Example
The following example shows the data for an alert event:
{
"specversion": "1.0",
"id": "af****6c",
"source": "acs:sls",
"type": "sls.alert",
"subject": "https://sls.console.alibabacloud.com/lognext/project/demo-alert-chengdu/logsearch/nginx-access-log?encode=base64&endTime=1684312259&queryString=c3RhdHVzID49IDQwMCB8IHNlbGVjdCByZXF1ZXN0X21ldGhvZCwgY291bnQoKikgYXMgY250IGdyb3VwIGJ5IHJlcXVlc3RfbWV0aG9kIA%3D%3D&queryTimeType=99&startTime=1684311959",
"datacontenttype": "application/cloudevents+json",
"data": {
"aliuid": "16****50",
"region": "cn-chengdu",
"project": "demo-alert-chengdu",
"alert_id": "alert-16****96-247190",
"alert_name": "Nginx Access Error",
"alert_instance_id": "77****e4-1aad9f7",
"alert_type": "sls_alert",
"next_eval_interval": 300,
"fire_time": 1684299959,
"alert_time": 1684312259,
"resolve_time": 0,
"status": "firing",
"severity": 10,
"labels": {
"request_method": "GET"
},
"annotations": {
"__count__": "1",
"cnt": "49",
"desc": "Nginx has had 49 GET request errors in the last five minutes",
"title": "Nginx Access Error Alert Triggered"
},
"results": [
{
"region": "cn-chengdu",
"project": "demo-alert-chengdu",
"store": "nginx-access-log",
"store_type": "log",
"role_arn": "",
"query": "status >= 400 | select request_method, count(*) as cnt group by request_method ",
"start_time": 1684311959,
"end_time": 1684312259,
"fire_result": {
"cnt": "49",
"request_method": "GET"
},
"raw_results": [
{
"cnt": "49",
"request_method": "GET"
},
{
"cnt": "3",
"request_method": "DELETE"
},
{
"cnt": "7",
"request_method": "POST"
},
{
"cnt": "6",
"request_method": "PUT"
}
],
"raw_result_count": 4,
"truncated": false,
"dashboard_id": "",
"chart_title": "",
"is_complete": true,
"power_sql_mode": "auto"
}
],
"fire_results": [
{
"cnt": "49",
"request_method": "GET"
}
],
"fire_results_count": 1,
"condition": "Count:[1] > 0; Condition:[49] > 20",
"raw_condition": "Count:__count__ > 0; Condition:cnt > 20"
},
"time": "2023-05-17T08:30:59Z",
"title": "Nginx Access Error Alert Triggered",
"message": "Nginx has had 49 GET request errors in the last five minutes",
"status": "error"
}
Trace data (Trace)
Trace data records processing information for a single request, including service invocations and processing durations. A piece of trace data corresponds to a call chain. For more information about the format, see Trace data format. In a broad sense, a call chain represents the execution process of a transaction or flow in a distributed system. In the OpenTracing standard, a call chain is a directed acyclic graph (DAG) composed of multiple spans. Each span represents a named and timed continuous execution segment within the call chain.
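As a minimal illustration of the span model, the following sketch builds a two-span call chain and recovers the parent-child edges of the DAG; the field names are illustrative and are not the exact Simple Log Service trace schema.

# A minimal call chain: two spans that share a trace ID, where the
# child span represents a downstream service invocation. Field names
# are illustrative, not the exact Simple Log Service trace schema.
spans = [
    {"trace_id": "trace-0001", "span_id": "span-a", "parent_span_id": "",
     "operation": "GET /checkout", "start_us": 1684312259000000, "duration_us": 1200},
    {"trace_id": "trace-0001", "span_id": "span-b", "parent_span_id": "span-a",
     "operation": "payment.Charge", "start_us": 1684312259000300, "duration_us": 800},
]

# Recover the edges of the DAG: parent span ID -> child span IDs.
edges = {}
for span in spans:
    edges.setdefault(span["parent_span_id"], []).append(span["span_id"])
print(edges)  # {'': ['span-a'], 'span-a': ['span-b']}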
Trace data records processing information for a single request, including service invocations and processing durations. A piece of trace data corresponds to a call chain. For more information about the format, see Trace data format. In a broad sense, a call chain represents the execution process of a transaction or flow in a distributed system. In the OpenTracing standard, a call chain is a directed acyclic graph (DAG) composed of multiple spans. Each span represents a named and timed continuous execution segment within the call chain.