In containerized environments, application logs are spread across the standard output and log files of different Docker containers, making management and retrieval challenging. Use LoongCollector from Simple Log Service (SLS) to gather logs from multiple nodes into one logstore. This allows for centralized storage, structured parsing, data masking, filtering, and efficient querying and analysis.
Usage notes
Permission requirements: The Alibaba Cloud account or RAM user that you use for deployment must have the
`AliyunLogFullAccess` permission.
Docker version and LoongCollector requirements:
If your Docker Engine version is 29.0 or later, or if the minimum supported Docker API version is 1.42 or later, you must use LoongCollector 3.2.4 or later. Otherwise, LoongCollector cannot collect standard container outputs or file logs.
LoongCollector versions 3.2.4 and later support Docker API versions from 1.24 to 1.48.
LoongCollector versions 3.2.3 and earlier support Docker API versions from 1.18 to 1.41.
Limits on collecting standard output:
You must add `"log-driver": "json-file"` to the Docker configuration file, daemon.json.
For CentOS 7.4 and later versions, excluding CentOS 8.0, you must set `fs.may_detach_mounts=1`.
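For reference, a minimal sketch of these two prerequisites, assuming the default daemon.json location at /etc/docker/daemon.json and a systemd-managed Docker service:

```bash
# daemon.json must contain the json-file log driver setting, for example:
#   {
#     "log-driver": "json-file"
#   }
cat /etc/docker/daemon.json

# Restart Docker so that a changed log-driver setting takes effect.
sudo systemctl restart docker

# On CentOS 7.4 and later (excluding 8.0), apply the kernel setting and persist it.
sudo sysctl -w fs.may_detach_mounts=1
echo "fs.may_detach_mounts=1" | sudo tee -a /etc/sysctl.conf
```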
Text log collection limitations: Only the overlay and overlay2 storage drivers are supported. For other driver types, you must mount the log directory manually.
Process for creating a collection configuration
Preparations: Create a project and a logstore. A project is a resource management unit that is used to isolate logs from different applications, and a logstore is used to store logs.
Configure a machine group (install LoongCollector): Install LoongCollector on the servers where you want to collect logs and add the servers to a machine group. Use machine groups to centrally manage collection nodes, distribute configurations, and manage server status.
Create and configure log collection rules
Global and input configuration: Define the name of the collection configuration, and the source and scope of the log collection.
Log processing and structuring: Configure processing rules based on the log format.
Multi-line logs: This applies to single logs that span multiple lines, such as Java exception stacks or Python tracebacks. You must use a regular expression to identify the start of each log.
Structured parsing: Configure parsing plugins, such as the regular expression, separator, or NGINX mode plugin, to extract raw strings into structured key-value pairs. This facilitates subsequent queries and analysis.
Log filtering: Configure collection blacklists and content filtering rules to screen for valid log content. This practice reduces the transmission and storage of redundant data.
Log categorization: Configure topics and log tags to flexibly distinguish logs from different applications, containers, or path sources.
Query and analysis configuration: The system enables full-text index by default, which supports keyword searches. We recommend that you enable field index for precise queries and analysis of structured fields to improve search efficiency.
Validation and troubleshooting: After you complete the configuration, verify that logs are collected successfully. If you encounter issues such as no data being collected, heartbeat failures, or parsing errors, see the FAQ section.
Preparations
Before you collect logs, you must plan and create a project and a logstore to manage and store the logs. If you already have the required resources, skip this step and proceed to Step 1: Configure a machine group (install LoongCollector).
Create a project
Create a logstore
Step 1: Configure a machine group (install LoongCollector)
Deploy LoongCollector as a container on the Docker host and add the host to a machine group. Use the machine group to centrally manage multiple collection nodes, distribute configurations, and monitor status.
Pull the image
On a host that has Docker installed, run the following command to pull the LoongCollector image. Replace
`${region_id}` with the region ID of the host or a nearby region, such as `cn-hangzhou`, to improve download speed and stability.

```bash
# LoongCollector image address
docker pull aliyun-observability-release-registry.${region_id}.cr.aliyuncs.com/loongcollector/loongcollector:v3.0.12.0-25723a1-aliyun
# Logtail image address
docker pull registry.${region_id}.aliyuncs.com/log-service/logtail:v2.1.11.0-aliyun
```

Start the LoongCollector container
Run the following command to start the container. Make sure that you correctly mount the directories and set the required environment variables:
```bash
docker run -d \
  -v /:/logtail_host:ro \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --env ALIYUN_LOGTAIL_CONFIG=/etc/ilogtail/conf/${sls_upload_channel}/ilogtail_config.json \
  --env ALIYUN_LOGTAIL_USER_ID=${aliyun_account_id} \
  --env ALIYUN_LOGTAIL_USER_DEFINED_ID=${user_defined_id} \
  aliyun-observability-release-registry.${region_id}.cr.aliyuncs.com/loongcollector/loongcollector:v3.0.12.0-25723a1-aliyun
```

Parameter description:
`${sls_upload_channel}`: The log upload channel. The format is `<project region>-<network transfer type>`. Examples:

| Transfer type | Configuration value format | Example | Scenarios |
| --- | --- | --- | --- |
| Internal network transfer | `regionId` | `cn-hangzhou` | The ECS instance and the project are in the same region. |
| Internet transfer | `regionId-internet` | `cn-hangzhou-internet` | The ECS instance and the project are in different regions, or the server is from another cloud provider or a self-built data center. |
| Transfer acceleration | `regionId-acceleration` | `cn-hangzhou-acceleration` | Cross-region communication within and outside China. |
`${aliyun_account_id}`: The Alibaba Cloud account ID.
`${user_defined_id}`: The custom ID of the machine group. This ID is used to bind the machine group. For example, use `user-defined-docker-1`. The ID must be unique within the region.
Important: The following startup conditions must be met:
The three key environment variables are correctly configured: `ALIYUN_LOGTAIL_CONFIG`, `ALIYUN_LOGTAIL_USER_ID`, and `ALIYUN_LOGTAIL_USER_DEFINED_ID`.
The `/var/run/docker.sock` socket is mounted. It is used to listen for container lifecycle events.
The root directory `/` is mounted to `/logtail_host`. This is used to access the host file system.
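For reference, a hedged example of the startup command with the placeholders filled in for the cn-hangzhou region, internal network transfer, and the custom ID user-defined-docker-1 used in this topic. The account ID 1234567890123456 is a placeholder that you must replace with your own:

```bash
# Example only: replace 1234567890123456 with your Alibaba Cloud account ID.
docker run -d \
  -v /:/logtail_host:ro \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --env ALIYUN_LOGTAIL_CONFIG=/etc/ilogtail/conf/cn-hangzhou/ilogtail_config.json \
  --env ALIYUN_LOGTAIL_USER_ID=1234567890123456 \
  --env ALIYUN_LOGTAIL_USER_DEFINED_ID=user-defined-docker-1 \
  aliyun-observability-release-registry.cn-hangzhou.cr.aliyuncs.com/loongcollector/loongcollector:v3.0.12.0-25723a1-aliyun
```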
Verify the running status of the container
```bash
docker ps | grep loongcollector
```

Expected output example:

```
6ad510001753 aliyun-observability-release-registry.cn-beijing.cr.aliyuncs.com/loongcollector/loongcollector:v3.0.12.0-25723a1-aliyun "/usr/local/ilogtail…" About a minute ago Up About a minute recursing_shirley
```

Configure the machine group
In the navigation pane on the left, open the machine group list, click Create Machine Group, configure the following parameters, and click OK:
Name: Enter a custom name for the machine group, such as `docker-host-group`.
Machine Group Identifier: Select Custom Identifier.
Custom Identifier: Enter the `${user_defined_id}` that you set when you started the container. The ID must be an exact match. Otherwise, the association fails.
Verify the machine group heartbeat status
Click the name of the new machine group to go to the details page and check the Heartbeat status:
OK: Indicates that LoongCollector is connected to SLS.
FAIL: For more information about how to troubleshoot the issue, see Troubleshoot heartbeat errors.
Step 2: Create and configure log collection rules
Define which logs LoongCollector collects, how to parse their structure, how to filter content, and how to bind the configuration to the registered machine group.
On the Logstores page, click the icon next to the name of the target logstore to expand it.
Click the + icon next to Data Collection. In the Quick Data Import dialog box, select a template based on the log source and click Integrate Now.
Docker standard output: Select Docker Stdout and Stderr - New Version.
Two templates, a new version and an old version, are available for collecting container standard output. We recommend that you use the new version. For more information about the differences between the new and old versions, see Appendix: Comparison of old and new versions of container standard output. To use the old version for collection, see Collect standard output from Docker containers (Old version).
Docker file logs: Select Docker File - Container.
Configure the Machine Group and click Next.
Scenario: Select Docker Containers.
Move the machine group that you created in Step 1 from the source machine group list to the applied machine group list.
On the Logtail Configuration page, configure the following parameters and click Next.
1. Global and input configurations
Before you start, make sure that you selected a data import template and bound the machine group. In this step, define the name of the collection configuration, the log source, and the collection scope.
Collect Docker standard output
Global Configurations
Configuration Name: Enter a custom name for the collection configuration. The name must be unique within the project and cannot be changed after the configuration is created. The name must meet the following conventions:
Can contain only lowercase letters, digits, hyphens (-), and underscores (_).
Must start and end with a lowercase letter or a digit.
Input Configurations
Turn on the Standard Output (stdout) or Standard Error (stderr) switch. Both switches are turned on by default.
Important: We recommend that you do not enable both standard output and standard error at the same time, because this may cause confusion in the collected logs.
Collect Docker container text logs
Global Configurations:
Configuration Name: Enter a custom name for the collection configuration. The name must be unique within the project and cannot be changed after the configuration is created. The name must meet the following conventions:
Can contain only lowercase letters, digits, hyphens (-), and underscores (_).
Must start and end with a lowercase letter or a digit.
Input Configurations:
File Path Type:
Path in Container: Collect log files from within the container.
Host Path: Collect logs from local services on the host.
File Path: The absolute path of the log file to be collected.
Linux: The path must start with a forward slash (`/`). For example, `/data/mylogs/**/*.log` indicates all files that have the .log extension in the `/data/mylogs` directory.
Windows: The path must start with a drive letter. For example, `C:\Program Files\Intel\**\*.Log`.
Maximum Directory Monitoring Depth: The maximum directory depth that the wildcard character
`**` can match in the File Path. The default value is 0, which indicates the current directory. The value can range from 0 to 1000. We recommend that you set this parameter to 0 and configure the path to the directory that contains the file.
2. Log processing and structuring
Configure log processing rules to transform raw, unstructured logs into structured data. This improves the efficiency of log queries and analysis. We recommend that you add a log sample before you configure the rules:
In the Processor Configurations section of the Logtail Configuration page, click Add Log Sample and enter the content of the log to be collected. The system identifies the log format based on the sample and helps you generate regular expressions and parsing rules. This simplifies the configuration.
Scenario 1: Multi-line log processing (such as Java stack logs)
Logs such as Java exception stacks and JSON objects often span multiple lines. In the default collection mode, these logs are split into multiple incomplete records, which leads to a loss of context. To address this issue, enable multi-line collection mode and configure a regular expression for the start of a line to merge consecutive lines of the same log into a single, complete log entry.
Example effect:
In the default collection mode, each line is treated as an independent log, which breaks up the stack information and loses context. With multi-line mode enabled, a regular expression for the start of a line identifies each complete log and preserves the full semantic structure.
Configuration: In the Processor Configurations section of the Logtail Configuration page, turn on Multi-line Mode:
Type: Select Custom or Multi-line JSON.
Custom: The format of the raw log is not fixed. You must configure a Regex to Match First Line to identify the starting line of each log entry.
Regex to Match First Line: Supports automatic generation or manual input. The regular expression must match a complete line of data. For example, the matching regular expression in the preceding example is
`\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*`.
Automatic generation: Click Generate. Then, in the Log Sample text box, select the log content to be extracted and click Automatically Generate.
Manual input: Click Manually Enter Regular Expression. After you enter the expression, click Validate.
Multi-line JSON: If the raw logs are all in standard JSON format, SLS automatically handles line breaks within a single JSON log.
Processing Method If Splitting Fails:
Discard: If a piece of text does not match the start-of-line rule, it is discarded.
Retain Single Line: The unmatched text is split and retained as individual single-line logs.
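To make the start-of-line regex concrete, the following is a hypothetical Java exception log in a format that the example regex `\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*` matches. Only the first line matches the regex, so the stack trace lines that follow are merged into the same log entry:

```
[2024-03-18T14:23:08,305] [ERROR] Request handling failed
java.lang.NullPointerException: null
    at com.example.demo.OrderService.process(OrderService.java:42)
    at com.example.demo.OrderController.submit(OrderController.java:18)
```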
Scenario 2: Structured logging
If raw logs are unstructured or semi-structured text, such as NGINX access logs or application output logs, direct querying and analysis are often inefficient. SLS provides a variety of data parsing plugins that can automatically convert raw logs of different formats into structured data. This provides a solid data foundation for subsequent analysis, monitoring, and alerting.
Example effect: A raw, unstructured log line is converted into structured key-value pairs that can be queried and analyzed by field.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page:
Add a parsing plugin: Click Add Processor and configure a plugin such as regular expression parsing, separator parsing, or JSON parsing that matches the log format. For example, to collect NGINX logs, select the NGINX mode parsing plugin.
NGINX Log Configuration: Copy the complete
`log_format` definition from the NGINX server configuration file (nginx.conf) and paste it into this text box. Example:

```
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$request_time $request_length '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent"';
```

Important: The format definition must be exactly the same as the format that is used to generate the logs on the server. Otherwise, log parsing fails.
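As an illustration, a hypothetical access log line written with the log_format above looks like the following. During parsing, each NGINX variable in the log_format (remote_addr, time_local, request, status, body_bytes_sent, and so on) is extracted into its own key-value pair:

```
192.168.1.10 - - [18/Mar/2024:14:23:08 +0800] "GET /index.html HTTP/1.1" 0.003 512 200 1024 "-" "curl/8.4.0"
```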
Common parameters: The following parameters appear in multiple data parsing plugins. Their functions and usage are consistent.
Original Field: The name of the source field to be parsed. The default value is
`content`, which indicates the entire collected log content.
Retain Original Field if Parsing Fails: We recommend that you turn on this switch. If a log fails to be parsed by the plugin due to a format mismatch, this option ensures that the original log content is not lost and is retained in the specified raw field.
Retain Original Field if Parsing Succeeds: If you select this option, the original log content is retained even if the log is successfully parsed.
3. Log filtering
During log collection, indiscriminately collecting large amounts of low-value or irrelevant logs, such as DEBUG- or INFO-level logs, wastes storage resources, increases costs, affects query efficiency, and poses data breach risks. To address these issues, implement fine-grained filtering strategies for efficient and secure log collection.
Reduce costs through content filtering
Filter logs based on field content. For example, collect only logs of the WARNING or ERROR level.
Example effect: Without filtering, logs of all levels are collected. With content filtering, only logs that match the condition, such as WARNING or ERROR level logs, are collected.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page:
Click Add Processor and select the data filtering plugin:
Field Name: The log field to be used for filtering.
Field Value: The regular expression to be used for filtering. Only full-text matching is supported. Partial keyword matching is not supported.
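For example, a hedged filter that keeps only WARNING and ERROR logs could look like the following. The field name `level` is an assumption about your parsed log schema, and the regular expression must match the entire field value:

```
Field Name:  level
Field Value: WARNING|ERROR
```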
Control collection scope with a blacklist
Use a blacklist to exclude specified directories or files. This prevents irrelevant or sensitive logs from being uploaded.
Configuration steps: In the Input Configurations section of the Logtail Configuration page, enable Collection Blacklist and click Add.
Full matching and wildcard matching are supported for directories and filenames. The supported wildcard characters are the asterisk (`*`) and the question mark (`?`).
File Path Blacklist: The file paths to be ignored. Example:
`/home/admin/private*.log`: Ignores all files in the `/home/admin/` directory that start with "private" and end with ".log" during collection.
`/home/admin/private*/*_inner.log`: Ignores files that end with "_inner.log" in directories that start with "private" under the `/home/admin/` directory during collection.
File Blacklist: The filenames to be ignored during collection. Example:
`app_inner.log`: Ignores all files named `app_inner.log` during collection.
Directory Blacklist: The directory path cannot end with a forward slash (`/`). Example:
`/home/admin/dir1/`: The directory blacklist does not take effect.
`/home/admin/dir*`: Ignores files in all subdirectories of the `/home/admin/` directory that start with "dir" during collection.
`/home/admin/*/dir`: Ignores all files in subdirectories named "dir" at the second level of the `/home/admin/` directory during collection. For example, files in the `/home/admin/a/dir` directory are ignored, while files in the `/home/admin/a/b/dir` directory are collected.
Container filtering
Set collection conditions based on container metadata, such as environment variables, pod labels, namespaces, and container names, to precisely control which containers' logs are collected.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, enable Container Filtering and click Add.
Multiple conditions are combined using the AND logical operator. All regular expression matching is based on the RE2 regular expression engine of Go, which has some limitations compared with engines such as PCRE. When you write regular expressions, follow the limits described in Appendix: Regular expression limits (container filtering).
Environment Variable Blacklist/Whitelist: Specify the environment variable conditions for the containers from which you want to collect logs.
K8s Pod Label Blacklist/Whitelist: Specify the label conditions for the pods where the containers to be collected are located.
K8s Pod Name Regular Matching: Specify the containers to be collected by pod name.
K8s Namespace Regular Matching: Specify the containers to be collected by namespace name.
K8s Container Name Regular Matching: Specify the containers to be collected by container name.
Container Label Blacklist/Whitelist: Collect logs from containers whose labels meet the specified conditions. This parameter is used in Docker scenarios and is not recommended for Kubernetes scenarios.
4. Log categorization
In scenarios where multiple applications or instances share the same log format, it is difficult to distinguish the log source. This leads to a lack of context during queries and low analysis efficiency. To address this issue, configure topics and log tags to achieve automated context association and logical categorization.
Configure a topic
If multiple applications or instances have the same log format but different paths, such as /apps/app-A/run.log and /apps/app-B/run.log, it is difficult to distinguish the source of the collected logs. In this case, you can generate a topic based on the machine group, a custom name, or file path extraction to flexibly distinguish logs from different applications or path sources.
Configuration steps: Select the topic generation method. The following three types are supported:
Machine Group Topic: If a collection configuration is applied to multiple machine groups, LoongCollector automatically uses the name of the machine group to which the server belongs as the value of the
`__topic__` field for upload. This is suitable for scenarios where logs are divided by host cluster.
Custom: The format is `customized://<custom_topic_name>`, for example, `customized://app-login`. This is suitable for static topic scenarios with fixed business identifiers.
If multiple users or services write logs to different top-level directories but the sub-paths and filenames are the same, the source cannot be distinguished by the filename alone. For example:
```
/data/logs
├── userA
│   └── serviceA
│       └── service.log
├── userB
│   └── serviceA
│       └── service.log
└── userC
    └── serviceA
        └── service.log
```

Configure File Path Extraction and use a regular expression to extract key information from the full path. The matched result is uploaded to the logstore as the topic.
Extraction rule: Based on regular expression capturing groups
When you configure a regular expression, the system automatically determines the output field format based on the number and naming of the capturing groups. The rules are as follows:
In the regular expression for the file path, you must escape the forward slash (/).
| Capturing group type | Scenario | Generated field | Regex example | Matching path example | Generated result |
| --- | --- | --- | --- | --- | --- |
| Single capturing group (only one `(.*?)`) | Only one dimension is needed to distinguish the source (such as username or environment) | Generates the `__topic__` field | `\/logs\/(.*?)\/app\.log` | `/logs/userA/app.log` | `__topic__: userA` |
| Multiple non-named capturing groups (multiple `(.*?)`) | Multiple dimensions are needed but no semantic labels | Generates tag fields `__tag__:__topic_{i}__`, where `{i}` is the ordinal number of the capturing group | `\/logs\/(.*?)\/(.*?)\/app\.log` | `/logs/userA/svcA/app.log` | `__tag__:__topic_1__: userA`; `__tag__:__topic_2__: svcA` |
| Multiple named capturing groups (using `(?P<name>.*?)`) | Multiple dimensions are needed and you want the field meanings to be clear for easy query and analysis | Generates tag fields `__tag__:{name}` | `\/logs\/(?P<user>.*?)\/(?P<service>.*?)\/app\.log` | `/logs/userA/svcA/app.log` | `__tag__:user: userA`; `__tag__:service: svcA` |
Log tagging
Enable the log tag enrichment feature to extract key information from container environment variables or Kubernetes pod labels and attach the information as tags. This achieves fine-grained grouping of logs.
Configuration steps: In the Input Configurations section of the Logtail Configuration page, enable Log Tag Enrichment and click Add.
Environment Variables: Configure the environment variable name and tag name. The environment variable value is stored in the tag name.
Environment Variable Name: Specify the name of the environment variable to be extracted.
Tag Name: The name of the environment variable tag.
Pod Labels: Configure the pod label name and tag name. The pod label value is stored in the tag name.
Pod Label Name: The name of the Kubernetes pod label to be extracted.
Tag Name: The name of the tag.
Step 3: Configure query and analysis
After you complete the log processing and plugin configuration, click Next to go to the Query and Analysis Configurations page:
The system enables Full-text Index by default, which supports keyword searches on the original log content.
To perform precise queries by field, click Automatic Index Generation after the Preview Data is loaded on the page. SLS generates a field index based on the first entry in the preview data.
After you complete the configuration, click Next to complete the setup of the entire collection process.
Step 4: Validate and troubleshoot
After the collection configuration is complete and applied to the machine group, the system automatically distributes the configuration and starts to collect incremental logs.
View uploaded logs
Confirm that there is new content in the log file: LoongCollector collects only incremental logs. Run
`tail -f /path/to/your/log/file` and trigger a business operation to ensure that new logs are being written.
Query logs: Go to the query and analysis page of the target logstore, click Search & Analyze (the default time range is the last 15 minutes), and check whether new logs are flowing in. Each collected Docker container text log contains the following fields by default:
| Field name | Description |
| --- | --- |
| `__source__` | The IP address of the LoongCollector (Logtail) container. |
| `_container_ip_` | The IP address of the application container. |
| `__tag__:__hostname__` | The name of the Docker host where LoongCollector (Logtail) is located. |
| `__tag__:__path__` | The log collection path. |
| `__tag__:__receive_time__` | The time when the log arrived at the server. |
| `__tag__:__user_defined_id__` | The custom ID of the machine group. |
FAQ
Machine group heartbeat connection is FAIL
Check the user ID: If your server type is not ECS, or if the ECS instance and the project belong to different Alibaba Cloud accounts, check whether the correct user ID exists in the specified directory based on the following table.
Linux: Run the
`cd /etc/ilogtail/users/ && touch <uid>` command to create a user ID file.
Windows: Go to the `C:\LogtailData\users\` directory and create an empty file named `<uid>`.
If a file named with the Alibaba Cloud account ID of the current project exists in the specified path, the user ID is configured correctly.
Check the machine group ID: If you are using a custom ID for the machine group, check whether a
`user_defined_id` file exists in the specified directory. If it exists, check whether the content of the file is consistent with the custom ID configured for the machine group.

| System | Specified directory | Solution |
| --- | --- | --- |
| Linux | `/etc/ilogtail/user_defined_id` | Configure the custom ID. If the directory does not exist, create it manually: `echo "user-defined-1" > /etc/ilogtail/user_defined_id` |
| Windows | `C:\LogtailData\user_defined_id` | Create a new `user_defined_id` file in the `C:\LogtailData` directory and write the custom ID into it. (If the directory does not exist, create it manually.) |

If both the user ID and the machine group ID are configured correctly, see Troubleshoot LoongCollector (Logtail) machine group issues for further troubleshooting.
No data collected for logs
Check for incremental logs: After you configure LoongCollector (Logtail) for collection, if there are no new logs in the log file to be collected, LoongCollector (Logtail) does not collect logs from that file.
Check the machine group heartbeat status: Go to the machine group list, click the name of the target machine group, and check the Heartbeat status.
If the heartbeat is OK, the machine group is connected to the SLS project.
If the heartbeat is FAIL: For more information about how to troubleshoot the issue, see Machine group heartbeat connection is FAIL.
Confirm whether the LoongCollector (Logtail) collection configuration has been applied to the machine group: Even if the LoongCollector (Logtail) collection configuration is created, logs are not collected if the configuration is not applied to the machine group.
Go to the machine group list and click the name of the target machine group to go to the Machine Group Configurations page.
On the page, view Manage Configuration. The left side shows All Logtail Configurations, and the right side shows Applied Logtail Configurations. If the target LoongCollector (Logtail) collection configuration has been moved to the applied area on the right, the configuration has been successfully applied to the target machine group.
If the target LoongCollector (Logtail) collection configuration has not been moved to the applied area on the right, click Modify. In the All Logtail Configurations list on the left, select the name of the target LoongCollector (Logtail) configuration, click the arrow to move it to the applied area on the right, and then click Save.
Log collection error or incorrect format
Troubleshooting approach: This situation indicates that the network connection and basic configuration are normal. The problem is mainly a mismatch between the log content and the parsing rules. You need to view the specific error message to locate the problem:
On the Logtail Configuration page, click the name of the LoongCollector (Logtail) configuration that has the collection error. On the Log Collection Error tab, click Select Time Range to set the query time.
View the alarm metric of the error log and find the corresponding solution in Common error types in data collection.
What to do next
Data visualization: Use visualization dashboards to monitor key metric trends.
Automatic alerting for data anomalies: Set up alert policies to be aware of system anomalies in real time.
Common commands
View the running status of LoongCollector (Logtail)
```bash
docker exec ${logtail_container_id} /etc/init.d/ilogtaild status
```

View information such as the version number, IP address, and startup time of LoongCollector (Logtail)
```bash
docker exec ${logtail_container_id} cat /usr/local/ilogtail/app_info.json
```

View the running logs of LoongCollector (Logtail)
The running logs of LoongCollector (Logtail) are saved in the /usr/local/ilogtail/ directory inside the container. The filename is ilogtail.LOG, and rotated files are compressed and stored as ilogtail.LOG.x.gz. Example:
```bash
# View LoongCollector running logs
docker exec a287de895e40 tail -n 5 /usr/local/ilogtail/loongcollector.LOG
# View Logtail running logs
docker exec a287de895e40 tail -n 5 /usr/local/ilogtail/ilogtail.LOG
```

Example output:
```
[2025-08-25 09:17:44.610496] [info] [22] /build/loongcollector/file_server/polling/PollingModify.cpp:75 polling modify resume:succeeded
[2025-08-25 09:17:44.610497] [info] [22] /build/loongcollector/file_server/polling/PollingDirFile.cpp:100 polling discovery resume:starts
[2025-08-25 09:17:44.610498] [info] [22] /build/loongcollector/file_server/polling/PollingDirFile.cpp:103 polling discovery resume:succeeded
[2025-08-25 09:17:44.610499] [info] [22] /build/loongcollector/file_server/FileServer.cpp:117 file server resume:succeeded
[2025-08-25 09:17:44.610500] [info] [22] /build/loongcollector/file_server/EventDispatcher.cpp:1019 checkpoint dump:succeeded
```

Restart LoongCollector (Logtail)
```bash
# Stop loongcollector
docker exec a287de895e40 /etc/init.d/ilogtaild stop
# Start loongcollector
docker exec a287de895e40 /etc/init.d/ilogtaild start
```

FAQ
Common error messages
| Cause | Solution |
| --- | --- |
| The project region is inconsistent with the region of the LoongCollector (Logtail) container. | Check the region configuration of the LoongCollector (Logtail) container. |
| Incorrect file path configuration. | Confirm that the log path in the application container matches the collection configuration. |
Error log: The parameter is invalid : uuid=none
Problem description: The LoongCollector (Logtail) log (/usr/local/ilogtail/ilogtail.LOG) contains the error log The parameter is invalid : uuid=none.
Solution: Create a product_uuid file on the host, enter any valid UUID, such as 169E98C9-ABC0-4A92-B1D2-AA6239C0D261, and mount this file to the /sys/class/dmi/id/product_uuid path in the LoongCollector (Logtail) container.
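A minimal sketch of this fix, assuming the host file is created at /etc/ilogtail/product_uuid (the host path is an arbitrary choice):

```bash
# Write any valid UUID to a file on the host (example UUID from this topic).
echo "169E98C9-ABC0-4A92-B1D2-AA6239C0D261" > /etc/ilogtail/product_uuid

# When you start the LoongCollector (Logtail) container, add a bind mount for the file, for example:
#   -v /etc/ilogtail/product_uuid:/sys/class/dmi/id/product_uuid
```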
How can the same log file or container standard output be collected by multiple collection configurations simultaneously?
By default, to prevent data duplication, SLS restricts each log source to be collected by only one collection configuration:
A text log file can match only one Logtail configuration.
Container standard output (stdout):
If you are using the new version of the standard output template, it can be collected by only one standard output collection configuration by default.
If you are using the old version of the standard output template, no extra configuration is needed, and it supports collecting multiple copies by default.
Log on to the Simple Log Service console and go to the target project.
In the navigation pane on the left, choose
Logstores and find the target logstore.
Click the icon next to its name to expand the logstore.
Click Logtail Configuration. In the configuration list, find the target Logtail configuration and click Manage Logtail Configuration in the Actions column.
On the Logtail configuration page, click Edit and scroll down to the Input Configurations section:
To collect text file logs: Enable Allow File to Be Collected for Multiple Times.
To collect container standard output: Turn on Allow Collection by Different Logtail Configurations.
Appendix: Detailed explanation of native parsing plugins
In the Processing Configuration section of the Logtail Configuration page, add processing plugins to configure structured processing for raw logs. To add a processing plugin to an existing collection configuration, follow these steps:
In the navigation pane on the left, choose
Logstores and find the target logstore.
Click the icon next to its name to expand the logstore.
Click Logtail Configuration. In the configuration list, find the target Logtail configuration and click Manage Logtail Configuration in the Actions column.
On the Logtail configuration page, click Edit.
This section introduces only commonly used processing plugins that cover common log processing scenarios. For more information about other features, see Extension processing plugins.
Rules for combining plugins (applicable to LoongCollector / Logtail 2.0 and later versions):
Native processing plugins and extension processing plugins can be used independently or combined as needed.
We recommend that you prioritize native processing plugins because they offer better performance and higher stability.
If native features cannot meet your business needs, add extension processing plugins after the configured native processing plugins to perform supplementary processing.
Order constraint:
All plugins are executed sequentially in the order in which they are configured, which forms a processing chain. Note that all native processing plugins must precede any extension processing plugins. After you add an extension processing plugin, you cannot add any more native processing plugins.
Regular expression parsing
Extract log fields using regular expressions and parse the logs into key-value pairs. Each field can be independently queried and analyzed.
Example effect: The regex parsing plugin extracts segments of the raw log line into independent key-value fields.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, click Add Processor and select the regular expression parsing plugin:
Regular Expression: Used to match the log. Supports automatic generation or manual input:
Automatic generation:
Click Generate.
In the Log Sample, highlight the log content to be extracted.
Click Generate Regular Expression.
Manual input: Click Manually Enter Regular Expression and enter a regular expression that matches the log format.
After you complete the configuration, click Validate to test whether the regular expression can correctly parse the log content.
Extracted Field: Set the corresponding field name (Key) for the extracted log content (Value).
For more information about the other parameters, see the common configuration parameter descriptions in Scenario 2: Structured logging.
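A hedged illustration of the idea, assuming a simple access-style log line. The regular expression and the field names below are examples, not values generated by the console:

```
# Raw log (hypothetical):
127.0.0.1 - admin [18/Mar/2024:14:23:08 +0800] "GET /index.html HTTP/1.1" 200 1024

# Regular Expression (example):
(\S+)\s-\s(\S+)\s\[([^]]+)]\s"([^"]+)"\s(\d+)\s(\d+)

# Extracted Field names (Key), one per capturing group:
ip, user, time, request, status, body_bytes
```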
Separator parsing
Structure the log content using a separator, and parse it into multiple key-value pairs. Single-character and multi-character separators are supported.
Example effect: The raw log line is split by the specified character into multiple key-value fields.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, click Add Processor and select the separator parsing plugin.
Delimiter: Specify the character that is used to chunk the log content.
Example: For a CSV format file, select Custom and enter a comma (,).
Quote: If a field value contains the separator, you must specify a quote to wrap the field to prevent incorrect chunking.
Extracted Field: Set the corresponding field name (Key) for each column in order of separation. The rules are as follows:
Field names can contain only letters, digits, and underscores (_).
Must start with a letter or an underscore (_).
The maximum length is 128 bytes.
For more information about the other parameters, see the common configuration parameter descriptions in Scenario 2: Structured logging.
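For instance, a hedged CSV-style example with a comma delimiter. The column names are assumptions about the data:

```
# Raw log (hypothetical CSV line):
2024-03-18 14:23:08,10.0.0.21,GET,/api/orders,200,35

# Delimiter: ,    Quote: "
# Extracted Field names (Key), in column order:
time, client_ip, method, path, status, latency_ms
```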
Standard JSON parsing
Structure an Object-type JSON log by parsing it into key-value pairs.
Example effect: Each key in a standard JSON log is automatically extracted as a separate field.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, click Add Processor and select the JSON parsing plugin.
Original Field: The default value is content. This field is used to store the raw log content to be parsed.
For more information about the other parameters, see the common configuration parameter descriptions in Scenario 2: Structured logging.
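As a hedged illustration, a standard JSON log and the key-value pairs it would be parsed into (the keys come from the JSON itself):

```
# Raw log (hypothetical):
{"url": "/request/path", "method": "GET", "status": "200", "latency": "18"}

# Parsed fields:
url: /request/path
method: GET
status: 200
latency: 18
```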
Nested JSON parsing
Parse a nested JSON log into key-value pairs by specifying the expansion depth.
Example effect: A nested JSON log is expanded into key-value pairs. The result depends on the configured expansion depth (0 for full expansion, 1 for the current level) and the configured key prefix.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, click Add Processor and select the nested JSON parsing plugin:
Original Field: The name of the raw field to be expanded, for example,
`content`.
JSON Expansion Depth: The expansion level of the JSON object. The value 0 indicates full expansion, which is the default value. The value 1 indicates the current level.
Character to Concatenate Expanded Keys: The connector for field names during JSON expansion. The default connector is an underscore (_).
Name Prefix of Expanded Keys: Specify the prefix for field names after JSON expansion.
Expand Array: Turn on this switch to expand the array into key-value pairs with indexes.
Example:
`{"k":["a","b"]}` is expanded to `{"k[0]":"a","k[1]":"b"}`.
To rename the expanded fields, for example, from prefix_s_key_k1 to new_field_name, add a Rename Field plugin to complete the mapping.
For more information about the other parameters, see the common configuration parameter descriptions in Scenario 2: Structured logging.
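A hedged sketch of how a nested log could be expanded, assuming the default underscore (_) connector and no key prefix; the exact key names depend on your settings:

```
# Raw log (hypothetical):
{"s_key": {"k1": {"k2": "v"}}}

# JSON Expansion Depth: 0 (full expansion)
s_key_k1_k2: v

# JSON Expansion Depth: 1 (current level only)
s_key: {"k1": {"k2": "v"}}
```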
JSON array parsing
Use the json_extract function to extract JSON objects from a JSON array.
Example effect: JSON objects are extracted from a JSON array in the raw log and stored in new fields.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, switch the Processing Method to SPL, configure the SPL Statement, and use the json_extract function to extract JSON objects from the JSON array.
Example: Extract elements from the JSON array in the log field content and store the results in new fields json1 and json2.
```
* | extend json1 = json_extract(content, '$[0]'), json2 = json_extract(content, '$[1]')
```

Apache log parsing
Structure the log content based on the definitions in the Apache log configuration file and parse it into multiple key-value pairs.
Example effect: An Apache access log line is parsed into key-value fields according to the configured LogFormat.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, click Add Processor and select the Apache parsing plugin:
Log Format: combined
APACHE LogFormat Configuration: The system automatically fills in the configuration based on the Log Format.
Important: Verify the auto-filled content to ensure that it is exactly the same as the LogFormat defined in the server's Apache configuration file. The file is usually located at /etc/apache2/apache2.conf.
For more information about the other parameters, see the common configuration parameter descriptions in Scenario 2: Structured logging.
Data masking
Mask sensitive data in logs.
Example effect: Sensitive content in the raw log is replaced with a constant string or an MD5 hash.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, click Add Processor and select the data masking plugin.
Original Field: The raw field that contains the log content before parsing.
Data Masking Method:
const: Replace the sensitive content with a specified string.
md5: Replace the sensitive content with its corresponding MD5 hash.
Replacement String: If you set the Masking Method to const, you must enter a string to replace the sensitive content.
Content Expression that Precedes Replaced Content: Used to find the sensitive content. Configure this parameter using the RE2 syntax.
Content Expression to Match Replaced Content: The expression for the sensitive content. Configure this parameter using the RE2 syntax.
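A hedged example of masking a password value with the const method. The log layout and both expressions are illustrative only:

```
# Raw log (hypothetical):
account: user01, password: Abc12345

# Content Expression that Precedes Replaced Content: password: 
# Content Expression to Match Replaced Content:    [^,]+
# Data Masking Method: const
# Replacement String:  ********

# Masked result:
account: user01, password: ********
```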
Time parsing
Parse the time field in the log and set the parsing result as the value of the __time__ field of the log.
Example effect: The time field in the raw log is parsed, and the result is written to the __time__ field of the log.
Configuration steps: In the Processor Configurations section of the Logtail Configuration page, click Add Processor and select the time parsing plugin:
Original Field: The raw field that contains the log content before parsing.
Time Format: Set the corresponding time format based on the time content in the log.
Time Zone: Select the time zone of the log time field. By default, the machine time zone is used, which is the time zone of the environment where the LoongCollector (Logtail) process is located.
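As a hedged illustration, a strptime-style time format for a common timestamp layout; confirm the exact format syntax that your plugin version expects:

```
# Time field in the log (hypothetical):
time: 2024-03-18 14:23:08

# Time Format:
%Y-%m-%d %H:%M:%S
```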
Appendix: Regular expression limits (container filtering)
The regular expressions that are used for Container Filtering are based on the RE2 engine of Go, which has some syntax limitations compared with other engines such as PCRE. Note the following when you write regular expressions:
1. Differences in named group syntax
Go uses the (?P<name>...) syntax to define named groups and does not support the (?<name>...) syntax from PCRE.
Correct example:
`(?P<year>\d{4})`
Incorrect syntax:
(?<year>\d{4})
2. Unsupported regex features
The following common but complex regex features are not available in RE2. Avoid using them:
Assertion:
`(?=...)`, `(?!...)`, `(?<=...)`, `(?<!...)`
Conditional expression: `(?(condition)true|false)`
Recursion matching: `(?R)`, `(?0)`
Subprogram reference: `(?&name)`, `(?P>name)`
Atomic group: `(?>...)`
3. Usage recommendations
We recommend that you use tools such as Regex101 to debug regular expressions. Select the Golang (RE2) mode for validation to ensure compatibility. If you use any of the unsupported syntaxes mentioned above, the plugin does not parse or match correctly.
Appendix: Comparison of old and new versions of container standard output
To improve log storage efficiency and collection consistency, the log metadata format for container standard output has been upgraded. The new format consolidates metadata into the __tag__ field, which achieves storage optimization and format standardization.
Core advantages of the new standard output version
Significant performance improvement
The collection logic is refactored in C++, and performance is improved by 180% to 300% compared with the old Go implementation.
Supports native plugins for data processing and multi-threading parallel processing, which fully utilizes system resources.
Supports flexible combination of native and Go plugins to meet complex scenario requirements.
Greater reliability
Supports a standard output log rotation queue. The log collection mechanism is unified with the file collection mechanism, which provides high reliability in scenarios with rapid standard output log rotation.
Lower resource consumption
CPU usage is reduced by 20% to 25%.
Memory usage is reduced by 20% to 25%.
Enhanced O&M consistency
Unified parameter configuration: The configuration parameters of the new standard output collection plugin are consistent with the file collection plugin.
Unified metadata management: The naming of container metadata fields and the storage location of tag logs are unified with the file collection scenario. The consumer side needs to maintain only one set of processing logic.
Comparison of new and old version features
| Feature dimension | Old version | New version |
| --- | --- | --- |
| Storage method | Metadata is directly embedded in the log content as a normal field. | Metadata is centrally stored in the `__tag__` tag. |
| Storage efficiency | Each log carries the full metadata repeatedly, which consumes more storage space. | Multiple logs in the same context can reuse metadata, which saves storage costs. |
| Format consistency | Inconsistent with the container file collection format. | Field naming and storage structure are fully aligned with container file collection, which provides a unified experience. |
| Query access method | Can be queried directly by field name, such as `_container_name_`. | Requires accessing the corresponding key-value through `__tag__`, such as `__tag__:_container_name_`. |

Container metadata field mapping table
| Old version field name | New version field name |
| --- | --- |
| `_container_ip_` | `__tag__:_container_ip_` |
| `_container_name_` | `__tag__:_container_name_` |
| `_image_name_` | `__tag__:_image_name_` |
| `_namespace_` | `__tag__:_namespace_` |
| `_pod_name_` | `__tag__:_pod_name_` |
| `_pod_uid_` | `__tag__:_pod_uid_` |

In the new version, all metadata fields are stored in the tag area of the log in the format `__tag__:<key>`, rather than being embedded in the log content.
Impact of new version changes on users
Consumer-side adaptation: Because the storage location has changed from "content" to "tag", the user's log consumption logic needs to be adjusted accordingly. For example, you must access fields through __tag__ during queries.
SQL compatibility: Query SQL has been automatically adapted for compatibility, so users do not need to modify their query statements to process both new and old version logs simultaneously.