Defining log collection settings as a Kubernetes CustomResourceDefinition (CRD) unifies management across all clusters, including Container Service for Kubernetes (ACK) and self-managed ones. This approach replaces inconsistent, error-prone manual processes with versioned automation through kubectl or CI/CD pipelines. When combined with LoongCollector's hot reloading capability, configuration changes take effect immediately without restarting collection components. This improves Operations and Maintenance (O&M) efficiency and system maintainability.
The legacy AliyunLogConfig CRD is no longer maintained. Use the new AliyunPipelineConfig CRD instead. For a comparison of the new and legacy versions, see CRD types. Collection configurations created using a CRD can be modified only by updating the corresponding CRD. Changes made in the Simple Log Service console are not synchronized to the CRD and do not take effect.
Applicability
Operating environment:
Supports ACK (managed and dedicated editions) and self-managed Kubernetes clusters.
Kubernetes version 1.16.0 or later that supports mount propagation (HostToContainer).
Container runtime: Docker and containerd only.
Docker:
Requires access permissions for docker.sock.
Standard output collection supports only the JSON log driver.
Supports only the overlay and overlay2 storage drivers. For other types, you must manually mount the log directories.
Containerd: Requires access permissions for containerd.sock.
Resource requirements: LoongCollector (Logtail) runs with system-cluster-critical high priority. Do not deploy it if cluster resources are insufficient, because it may evict existing pods on the node.
CPU: Reserve at least 0.1 Core.
Memory: At least 150 MB for the collection component and at least 100 MB for the controller component.
Actual usage depends on the collection rate, the number of monitored directories and files, and any sending blockages. Ensure that actual usage remains below 80% of the configured limit.
Permission requirements: The Alibaba Cloud account or RAM user used for deployment must have the AliyunLogFullAccess permission. To create custom permission policies, see the AliyunCSManagedLogRolePolicy system policy. Copy the permissions from this policy and grant them to the target RAM user or role to configure fine-grained permissions.
Collection configuration creation flow
Install LoongCollector: Deploy LoongCollector as a DaemonSet to ensure that a collection container runs on each node in the cluster. This enables unified collection of logs from all containers on that node.
Create a Logstore: A Logstore is a storage unit for log data. You can create multiple Logstores in a project.
Create a collection configuration YAML file: Connect to the cluster using kubectl. Create the collection configuration file in one of the following ways:
Method 1: Use the collection configuration generator
Use the collection configuration generator in the Simple Log Service console to enter parameters in a graphical user interface and automatically generate a standard YAML file.
Method 2: Manually write the YAML file
Write a YAML file based on the examples and workflows in this topic. Start with a minimal configuration and progressively add processing logic and advanced features.
For more information about complex use cases not covered in this topic or fields that require deep customization, see AliyunPipelineConfig parameters for a complete list of fields, value rules, and plugin capabilities.
A complete collection configuration usually includes the following parts:
Minimal configuration (Required): Builds the data tunnel from the cluster to Simple Log Service. It includes two parts:
Inputs (inputs): Defines the log source. Container logs have the following two sources. To collect other types of logs, such as MySQL query results, see Input plugins.
Container standard output (stdout and stderr): Log content that the container program prints to the console.
Text log files: Log files written to a specified path inside the container.
Outputs (flushers): Defines the log destination and sends collected logs to the specified Logstore. If the destination project or Logstore does not exist, the system automatically creates it. You can also manually create a project and a Logstore in advance.
Common processing configurations (Optional): Defines the processors field to perform structured parsing (such as regular expression or delimiter parsing), masking, or filtering on raw logs. This topic describes only native processing plugins that cover common log processing use cases. For more features, see Extended processing plugins.
Other advanced configurations (Optional): Implements features such as multi-line log collection and log tag enrichment to meet more fine-grained collection requirements.
Structure example:
apiVersion: telemetry.alibabacloud.com/v1alpha1 # Use the default value. Do not modify.
kind: ClusterAliyunPipelineConfig # Use the default value. Do not modify.
metadata:
  name: test-config # Set the resource name. It must be unique within the Kubernetes cluster.
spec:
  project: # Set the name of the destination project.
    name: k8s-your-project
  config: # Set the Logtail collection configuration.
    inputs: # Set the input plugins for the Logtail collection configuration.
      ...
    processors: # Set the processing plugins for the Logtail collection configuration.
      ...
    flushers: # Set the output plugins for the Logtail collection configuration.
      ...
Apply the configuration
kubectl apply -f <your_yaml>
Install LoongCollector (Logtail)
LoongCollector is a next-generation log collection agent from Simple Log Service (SLS) and is an upgraded version of Logtail. LoongCollector and Logtail cannot be installed at the same time. To install Logtail, see Install and configure Logtail.
This topic describes only the basic installation steps for LoongCollector. For detailed parameters, see Install LoongCollector (Kubernetes). If you have already installed LoongCollector or Logtail, skip this step and proceed to create a Logstore to store the collected logs.
ACK cluster
Install LoongCollector from the Container Service for Kubernetes (ACK) console. By default, logs are sent to a Simple Log Service (SLS) project that belongs to the current Alibaba Cloud account.
Log on to the ACK console. In the left navigation pane, click Clusters.
On the Clusters page, click the name of the target cluster to open its details page.
In the navigation pane on the left, click Add-ons.
On the Logs And Monitoring tab, find loongcollector and click Install.
Note: For a new cluster, on the Component Configurations page, select Enable Log Service. Then, select Create New Project or Use Existing Project.
After the installation is complete, SLS automatically creates related resources in the region where the ACK cluster is located. You can log on to the Simple Log Service console to view them.
Resource type | Resource name | Function
Project | k8s-log-${cluster_id} | A resource management unit that isolates logs for different services. To create a project for more flexible log resource management, see Create a project.
Machine group | k8s-group-${cluster_id} | A collection of log collection nodes.
Logstore | config-operation-log | Stores logs for the loongcollector-operator component. Important: Do not delete this Logstore. Its billing method is the same as that of a normal Logstore. For more information, see Billable items for the pay-by-ingested-data mode. Do not create collection configurations in this Logstore.
Self-managed cluster
Connect to the Kubernetes cluster and run the command for your region to download LoongCollector and its dependent components:
Regions in China:
wget https://aliyun-observability-release-cn-shanghai.oss-cn-shanghai.aliyuncs.com/loongcollector/k8s-custom-pkg/3.0.12/loongcollector-custom-k8s-package.tgz; tar xvf loongcollector-custom-k8s-package.tgz; chmod 744 ./loongcollector-custom-k8s-package/k8s-custom-install.sh
Regions outside China:
wget https://aliyun-observability-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/loongcollector/k8s-custom-pkg/3.0.12/loongcollector-custom-k8s-package.tgz; tar xvf loongcollector-custom-k8s-package.tgz; chmod 744 ./loongcollector-custom-k8s-package/k8s-custom-install.sh
Go to the loongcollector-custom-k8s-package directory and modify the ./loongcollector/values.yaml configuration file.
# ===================== Required parameters =====================
# The name of the project that manages collected logs. Example: k8s-log-custom-sd89ehdq.
projectName: ""
# The region where the project is located. Example for Shanghai: cn-shanghai
region: ""
# The ID of the Alibaba Cloud account that owns the project. Enclose the ID in quotation marks. Example: "123456789"
aliUid: ""
# The network type. Valid values: Internet and Intranet. Default value: Internet.
net: Internet
# The AccessKey ID and AccessKey secret of the Alibaba Cloud account or RAM user. The account or user must have the AliyunLogFullAccess system policy.
accessKeyID: ""
accessKeySecret: ""
# The custom cluster ID. The ID can contain only uppercase letters, lowercase letters, digits, and hyphens (-).
clusterID: ""
In the loongcollector-custom-k8s-package directory, run the following command to install LoongCollector and other dependent components:
bash k8s-custom-install.sh install
After the installation is complete, check the running status of the components. If a pod fails to start, check whether the values.yaml configuration is correct and whether the relevant images were pulled successfully.
# Check the pod status.
kubectl get po -n kube-system | grep loongcollector-ds
SLS also automatically creates the following resources. You can log on to the Simple Log Service console to view them.
Resource type | Resource name | Function
Project | The value of projectName that you specified in the values.yaml file | A resource management unit that isolates logs for different services.
Machine group | k8s-group-${cluster_id} | A collection of log collection nodes.
Logstore | config-operation-log | Stores logs for the loongcollector-operator component. Important: Do not delete this Logstore. Its billing method is the same as that of a normal Logstore. For more information, see Billable items for the pay-by-ingested-data mode. Do not create collection configurations in this Logstore.
Create a Logstore
If you have already created a Logstore, skip this step and proceed to configure collection.
Log on to the Simple Log Service console and click the name of the target project.
In the navigation pane on the left, choose Logstores and click the + icon.
On the Create Logstore page, complete the following core configurations:
Logstore Name: Set a name that is unique within the project. This name cannot be changed after creation.
Logstore Type: Choose Standard or Query based on a comparison of their specifications.
Billing Method:
Pay-By-Feature: Billed independently for each resource, such as storage, indexing, and read/write operations. Suitable for small-scale use cases or when feature usage is uncertain.
Pay-By-Ingested-Data: Billed only by the amount of raw data ingested. Provides a 30-day free storage period and free features such as data transformation and delivery. The cost model is simple and suitable for use cases where the storage period is close to 30 days or the data processing pipeline is complex.
Data Retention Period: Set the number of days to retain logs. The value ranges from 1 to 3650 days. A value of 3650 indicates permanent storage. The default is 30 days.
Keep the default settings for other configurations and click OK. For more information about other configurations, see Manage Logstores.
Minimal configuration
In spec.config, you configure the input (inputs) and output (flushers) plugins. These plugins define the core log collection path, which includes the log source and destination.
Container standard output - new version
Purpose: Collects container standard output logs (stdout/stderr) that are printed directly to the console.
The starting point of the collection configuration. Defines the log source. Currently, only one input plugin can be configured.
Example: configure the container standard output input plugin and the Logstore output in spec.config, as in the sketch below.
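The following minimal sketch shows this structure. The plugin types input_container_stdio and flusher_sls are assumptions based on common LoongCollector configurations and are not defined in this topic; verify the exact names and fields in AliyunPipelineConfig parameters.
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: stdout-config-example          # Hypothetical name; must be unique within the cluster.
spec:
  project:
    name: k8s-your-project             # Destination project.
  config:
    inputs:
      - Type: input_container_stdio    # Assumed plugin name for the new-version stdout input.
        IgnoringStdout: false          # Collect stdout.
        IgnoringStderr: false          # Collect stderr.
    flushers:
      - Type: flusher_sls              # Assumed output plugin that writes to a Logstore.
        Logstore: your-logstore        # Destination Logstore.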
Collect container text files
Purpose: Collects logs written to a specific file path within a container, such as traditional access.log or app.log files.
The starting point of the collection configuration. Defines the log source. Currently, only one input plugin can be configured.
Example: configure the text file input plugin with the in-container log path and the Logstore output in spec.config, as in the sketch below.
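A minimal sketch for text file collection follows. The plugin type input_file, the EnableContainerDiscovery switch, and the flusher_sls output are assumptions not confirmed by this topic; the log path is hypothetical. Check AliyunPipelineConfig parameters for the authoritative field names.
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: file-config-example            # Hypothetical name; must be unique within the cluster.
spec:
  project:
    name: k8s-your-project
  config:
    inputs:
      - Type: input_file                 # Assumed plugin name for text file collection.
        FilePaths:
          - /var/log/app/*.log           # Hypothetical in-container log path.
        EnableContainerDiscovery: true   # Assumed switch that enables collection from containers.
    flushers:
      - Type: flusher_sls                # Assumed output plugin.
        Logstore: your-logstore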
Common processing configurations
After you complete the minimal configuration, you can add processors plugins to perform structured parsing, masking, or filtering on raw logs.
Core configuration: Add processors to spec.config to configure processing plugins. You can enable multiple plugins at the same time.
This topic describes only native processing plugins that cover common log processing use cases. For information about additional features, see Extended processing plugins.
For Logtail 2.0 and later versions and the LoongCollector component, follow these plugin combination rules:
Use native plugins first.
If native plugins cannot meet your needs, configure extension plugins after the native plugins.
Native plugins can be used only before extension plugins.
Structured configuration
Regular expression parsing
Use a regular expression to extract log fields and parse the log into key-value pairs.
Key fields:
Type: The plugin type.
SourceKey: The name of the source field.
Regex: The regular expression that matches the log.
Keys: A list of the extracted fields.
KeepingSourceWhenParseFail: Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey: When the source field is kept, this parameter specifies the new name for the field. By default, the field is not renamed.
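A hedged sketch of the processors section follows. The plugin name processor_parse_regex_native is an assumption (the exact value is not shown in this topic), and the sample pattern and field names are hypothetical.
processors:
  - Type: processor_parse_regex_native              # Assumed native regex parsing plugin name.
    SourceKey: content                               # Raw log field to parse.
    Regex: '(\d+-\d+-\d+\s\d+:\d+:\d+)\s(\w+)\s(.*)' # Hypothetical pattern: time, level, message.
    Keys:
      - time
      - level
      - message
    KeepingSourceWhenParseFail: true                 # Keep the raw field if parsing fails.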
Delimiter parsing
Structures log content using a delimiter and parses the content into key-value pairs. This method supports single-character and multi-character delimiters.
Key field details:
Type: The plugin type.
SourceKey: The name of the source field.
Separator: The field separator. For example, CSV files use a comma (,).
Keys: A list of the extracted fields.
Quote: The quote character. Use this to wrap field content that contains special characters, such as a comma.
AllowingShortenedFields: Specifies whether the number of extracted fields can be less than the number of keys.
OverflowedFieldsTreatment: Specifies the action to take when the number of extracted fields is greater than the number of keys.
KeepingSourceWhenParseFail: Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey: When the source field is kept, this parameter specifies the new name for the field. By default, the field is not renamed.
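A hedged example follows. The plugin name processor_parse_delimiter_native is an assumption, and the sample fields are hypothetical.
processors:
  - Type: processor_parse_delimiter_native   # Assumed native delimiter parsing plugin name.
    SourceKey: content
    Separator: ","                            # Comma-separated fields, as in CSV.
    Quote: '"'                                # Quote character for fields that contain the separator.
    Keys:
      - time
      - method
      - status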
Standard JSON parsing
Structures object-type JSON logs and parses them into key-value pairs.
Key field details:
Type: The plugin type.
SourceKey: The name of the source field.
KeepingSourceWhenParseFail: Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey: When the source field is kept, this parameter specifies the new name for the field. By default, the field is not renamed.
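A hedged example follows. The plugin name processor_parse_json_native is an assumption not confirmed by this topic.
processors:
  - Type: processor_parse_json_native    # Assumed native JSON parsing plugin name.
    SourceKey: content                   # Field that holds the JSON object.
    KeepingSourceWhenParseFail: true     # Keep the raw field if the line is not valid JSON.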
Nested JSON parsing
Parses nested JSON logs into key-value pairs by specifying an expansion depth.
Key field details:
Type: The plugin type.
SourceKey: The name of the source field.
ExpandDepth: The JSON expansion depth. The default value is 0.
ExpandConnector: The connector used for field names during JSON expansion. The default value is an underscore (_).
Prefix: A prefix for the names of the expanded JSON fields.
IgnoreFirstConnector: Specifies whether to ignore the first connector. This determines whether a connector is added before the top-level field name.
ExpandArray: Specifies whether to expand array types. Note: This parameter is supported in Logtail 1.8.0 and later.
KeepSource: Specifies whether to keep the source field in the parsed log.
NoKeyError: Specifies whether to report an error if the specified source field is not found in the raw log.
UseSourceKeyAsPrefix: Specifies whether to use the source field name as a prefix for all expanded JSON field names.
KeepSourceIfParseError: Specifies whether to keep the raw log data if parsing fails.
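A hedged sketch follows. The plugin name processor_json is an assumption (the exact type value is not shown in this topic), and the expansion settings are illustrative only.
processors:
  - Type: processor_json            # Assumed extended plugin for nested JSON expansion.
    SourceKey: content              # Field that holds the nested JSON object.
    ExpandDepth: 2                  # Expand at most two levels of nesting.
    ExpandConnector: "_"            # Join nested keys with an underscore, for example a_b.
    KeepSource: true                # Keep the original field after expansion.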
JSON array parsing
Use the json_extract function to extract JSON objects from a JSON array. For more information, see JSON functions.
Key field details:
Type: The plugin type for the Structured Process Language (SPL) plugin.
Script: The content of the SPL script. This script is used to extract elements from the JSON array in the content field.
TimeoutMilliSeconds: The script timeout period in milliseconds. The value must be in the range of 0 to 10,000. The default value is 1,000.
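A hedged sketch follows. The plugin name processor_spl and the exact SPL statement are assumptions; only the json_extract function is named in this topic.
processors:
  - Type: processor_spl                      # Assumed SPL processing plugin name.
    TimeoutMilliSeconds: 1000                # Abort the script after 1 second.
    Script: |
      * | extend first_item = json_extract(content, '$[0]')   # Illustrative: extract the first array element.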
NGINX log parsing
Structures log content into key-value pairs based on the definition in log_format. If the default format does not meet your requirements, you can use a custom format.
Key field details:
Type: The plugin type for NGINX log parsing.
SourceKey: The name of the source field.
Regex: The regular expression.
Keys: A list of the extracted fields.
Extra
KeepingSourceWhenParseFail: Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey: When the source field is kept, this parameter specifies the new name for the field. By default, the field is not renamed.
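A hedged sketch follows. It assumes that NGINX access logs are parsed with the native regex plugin (processor_parse_regex_native); the exact NGINX-specific plugin type is not shown in this topic, and the pattern below is a hypothetical match for the default combined log_format.
processors:
  - Type: processor_parse_regex_native       # Assumption: parse NGINX access logs with the native regex plugin.
    SourceKey: content
    Regex: '(\S+)\s-\s(\S+)\s\[([^\]]+)\]\s"(\S+)\s(\S+)\s+(\S+)"\s(\d+)\s(\d+)\s"([^"]*)"\s"([^"]*)"'   # Hypothetical pattern for the default combined log_format.
    Keys:
      - remote_addr
      - remote_user
      - time_local
      - request_method
      - request_uri
      - http_protocol
      - status
      - body_bytes_sent
      - http_referer
      - http_user_agent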
Apache log parsing
Structures log content into key-value pairs based on the definition in the Apache log configuration file.
Key field details:
Type: The plugin type.
SourceKey: The name of the source field.
Regex: The regular expression.
Keys: A list of the extracted fields.
Extra
KeepingSourceWhenParseFail: Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed: Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey: When the source field is kept, this parameter specifies the new name for the field. By default, the field is not renamed.
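A hedged sketch follows, mirroring the NGINX example above. It assumes Apache combined logs are parsed with the native regex plugin; the Apache-specific plugin type is not shown in this topic, and the pattern is hypothetical.
processors:
  - Type: processor_parse_regex_native   # Assumption: parse Apache combined logs with the native regex plugin.
    SourceKey: content
    Regex: '(\S+)\s(\S+)\s(\S+)\s\[([^\]]+)\]\s"([^"]*)"\s(\d+)\s(\S+)\s"([^"]*)"\s"([^"]*)"'   # Hypothetical pattern for the Apache combined LogFormat.
    Keys:
      - remote_addr
      - ident
      - remote_user
      - time_local
      - request
      - status
      - response_size
      - referer
      - user_agent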
Data masking
Use the processor_desensitize_native plugin to mask sensitive data in logs.
Key fields:
Type: The plugin type. Set the value to processor_desensitize_native.
SourceKey: The source field name.
Method: The masking method.
ReplacingString: The constant string used to replace sensitive content.
ContentPatternBeforeReplacedString: The regular expression for the prefix of the sensitive content.
ReplacedContentPattern: The regular expression for the sensitive content.
ReplacingAll: Specifies whether to replace all matched sensitive content.
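A hedged example follows. The plugin name processor_desensitize_native is given in this topic; the Method value const and the sample patterns are assumptions and purely illustrative.
processors:
  - Type: processor_desensitize_native               # Named in this topic.
    SourceKey: content
    Method: const                                    # Assumption: replace sensitive content with a constant string.
    ReplacingString: "********"                      # Replacement text.
    ContentPatternBeforeReplacedString: 'password:'  # Hypothetical prefix before the sensitive value.
    ReplacedContentPattern: '[^,]+'                  # Hypothetical pattern for the sensitive value itself.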
Content filtering
Configure the processor_filter_regex_native plugin to match log field values based on a regular expression and keep only the logs that meet the conditions.
Key fields:
Type: The plugin type. The value is processor_filter_regex_native.
FilterRegex: The regular expression to match the log field value.
FilterKey: The name of the log field to match.
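A hedged example follows. The plugin name processor_filter_regex_native is given in this topic; whether FilterKey and FilterRegex take list values, and the sample values, are assumptions.
processors:
  - Type: processor_filter_regex_native   # Named in this topic.
    FilterKey:
      - level                             # Field to match (assumed to be a list).
    FilterRegex:
      - 'WARNING|ERROR'                   # Keep only logs whose level is WARNING or ERROR.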
Time parsing
Configure the processor_parse_timestamp_native plugin to parse the time field in a log and set the parsing result as the log's __time__ field.
Key fields:
Type: The plugin type. Set to processor_parse_timestamp_native.
SourceKey: The source field name.
SourceFormat: The time format. This format must exactly match the format of the time field in the log.
SourceTimezone: The time zone of the log time. By default, the machine's time zone is used, which is the time zone of the environment where the LoongCollector process is located.
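A hedged example follows. The plugin name processor_parse_timestamp_native is given in this topic; the source field name, format string, and the GMT+08:00 time zone notation are assumptions for illustration.
processors:
  - Type: processor_parse_timestamp_native   # Named in this topic.
    SourceKey: time                          # Hypothetical field that holds the log time.
    SourceFormat: '%Y-%m-%d %H:%M:%S'        # Must exactly match the time string in the log.
    SourceTimezone: 'GMT+08:00'              # Assumed time zone notation.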
Other advanced configurations
After you complete the minimal configuration, you can perform the following operations to collect multi-line logs, configure log topic types, and configure more fine-grained log collection. The following are common advanced configurations and their functions:
Configure multi-line log collection: When a single log entry, such as an exception stack trace, spans multiple lines, you need to enable multi-line mode and configure a regular expression for the start of a line to match the beginning of a log. This ensures that the multi-line entry is collected and stored as a single log in an SLS Logstore.
Configure log topic types: Set different topics for different log streams to organize and categorize log data. This helps you better manage and retrieve relevant logs.
Specify containers for collection (filtering and blacklists): Specify specific containers and paths for collection, including whitelist and blacklist configurations.
Enrich log tags: Add metadata related to environment variables and pod labels to logs as extended fields.
Configure multi-line log collection
By default, Simple Log Service uses single-line mode, which splits and stores logs line by line. This causes multi-line logs that contain stack trace information to be split, with each line stored and displayed as an independent log, which is not conducive to analysis.
To address this issue, you can enable multi-line mode to change how Simple Log Service splits logs. By configuring a regular expression to match the start of a log entry, you can ensure that raw logs are split and stored according to the start-of-line rule.
Core configuration: In the spec.config.inputs configuration, add the Multiline parameter.
Key fields:
Multiline: Enables multi-line log collection.
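A hedged sketch follows. The Multiline parameter is named in this topic, but its sub-fields Mode and StartPattern, the input_file plugin type, and the sample pattern are assumptions.
inputs:
  - Type: input_file                    # Assumed text file input plugin.
    FilePaths:
      - /var/log/app/*.log              # Hypothetical in-container log path.
    EnableContainerDiscovery: true
    Multiline:
      Mode: custom                      # Assumed field: use a custom start-of-line pattern.
      StartPattern: '\d+-\d+-\d+.*'     # Assumed field: a new log entry starts with a date.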
Configure log topic types
Core configuration: In spec.config, add the global parameter to set the topic.
Key fields:
TopicType: The topic type. Optional values: machine group topic, file path extraction, and custom.
TopicFormat: The topic format. This is required when TopicType is set to filepath or custom.
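A hedged example follows. The global, TopicType, and TopicFormat field names and the filepath value are given in this topic; the exact semantics of the TopicFormat expression (a regex whose capture group becomes the topic) and the sample path are assumptions.
global:
  TopicType: filepath                      # Extract the topic from the collected file path.
  TopicFormat: '/var/log/(.*)/app\.log'    # Hypothetical regex; the capture group becomes the topic.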
Specify containers for collection (filtering and blacklists)
Filtering
Collects logs only from containers that meet the specified conditions. Multiple conditions are combined with a logical AND. An empty condition is ignored. Conditions support regular expressions.
Core configuration: In spec.config.inputs, configure the ContainerFilters parameters for container filtering.
Key field details:
ContainerFilters: Container filtering conditions. All regular expression matching uses Go's RE2 engine. This engine has fewer features than engines such as PCRE. Write regular expressions according to the limits described in Appendix: Regular expression usage limits (container filtering).
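A hedged sketch follows. The ContainerFilters parameter is named in this topic, but the sub-field names (K8sNamespaceRegex, K8sContainerRegex, IncludeK8sLabel) and the stdout plugin type are assumptions; the filter values are hypothetical.
inputs:
  - Type: input_container_stdio          # Assumed stdout input plugin.
    IgnoringStdout: false
    IgnoringStderr: false
    ContainerFilters:
      K8sNamespaceRegex: '^production$'  # Assumed field: only pods in the production namespace.
      K8sContainerRegex: '^app$'         # Assumed field: only containers named app.
      IncludeK8sLabel:
        app: nginx                       # Assumed field: only pods with label app=nginx.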
Blacklist
To exclude files that meet specified conditions, use the following parameters under config.inputs in the YAML file as needed:
Key field details:
ExcludeFilePaths: Blacklist for file paths. Excludes files that meet specified conditions. The path must be an absolute path. The asterisk (*) wildcard character is supported.
ExcludeFiles: Blacklist for file names. Excludes files that meet specified conditions. The asterisk (*) wildcard character is supported.
ExcludeDirs: Blacklist for directories. Excludes directories that meet specified conditions. The path must be an absolute path. The asterisk (*) wildcard character is supported.
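A hedged example follows. The ExcludeFilePaths, ExcludeFiles, and ExcludeDirs field names are given in this topic; the input_file plugin type and the sample paths are assumptions.
inputs:
  - Type: input_file                     # Assumed text file input plugin.
    FilePaths:
      - /var/log/app/*.log               # Hypothetical collection path.
    EnableContainerDiscovery: true
    ExcludeFilePaths:
      - /var/log/app/debug*.log          # Skip debug log files.
    ExcludeDirs:
      - /var/log/app/tmp*                # Skip temporary directories.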
Enrich log tags
Core configuration: Configure ExternalEnvTag and ExternalK8sLabelTag in spec.config.inputs to add tags related to container environment variables and pod labels to logs.
Key fields:
ExternalEnvTag: Maps the value of a specified container environment variable to a log tag field.
ExternalK8sLabelTag: Maps the value of a Kubernetes pod label to a log tag field.
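A hedged sketch follows. The ExternalEnvTag and ExternalK8sLabelTag field names are given in this topic, but the mapping format (source name to tag name) and the stdout plugin type are assumptions; the sample names are hypothetical.
inputs:
  - Type: input_container_stdio          # Assumed stdout input plugin.
    IgnoringStdout: false
    IgnoringStderr: false
    ExternalEnvTag:
      APP_VERSION: app_version           # Assumed format: environment variable name to log tag name.
    ExternalK8sLabelTag:
      app: k8s_label_app                 # Assumed format: pod label key to log tag name.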
Configuration examples
Scenario 1: Collect and parse NGINX access logs into structured fields
Parses NGINX logs and structures the log content into multiple key-value pairs based on the definition in log_format.
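A hedged end-to-end sketch follows. As in the earlier sections, the plugin types (input_file, processor_parse_regex_native, flusher_sls) are assumptions, and the log path, pattern, and Logstore name are hypothetical.
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: nginx-access-log                          # Hypothetical name.
spec:
  project:
    name: k8s-your-project
  config:
    inputs:
      - Type: input_file                          # Assumed text file input plugin.
        FilePaths:
          - /var/log/nginx/access.log             # Hypothetical NGINX access log path.
        EnableContainerDiscovery: true
    processors:
      - Type: processor_parse_regex_native        # Assumed native regex plugin.
        SourceKey: content
        Regex: '(\S+)\s-\s(\S+)\s\[([^\]]+)\]\s"([^"]*)"\s(\d+)\s(\d+)\s"([^"]*)"\s"([^"]*)"'   # Hypothetical pattern for the default combined log_format.
        Keys:
          - remote_addr
          - remote_user
          - time_local
          - request
          - status
          - body_bytes_sent
          - http_referer
          - http_user_agent
    flushers:
      - Type: flusher_sls                         # Assumed output plugin.
        Logstore: nginx-access                    # Hypothetical Logstore name.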
Scenario 2: Collect and process multi-line logs
By default, Simple Log Service uses single-line mode, which splits and stores logs line by line. This causes multi-line logs that contain stack trace information to be split, with each line stored and displayed as an independent log, which is not conducive to analysis.
To address this issue, you can enable multi-line mode to change how Simple Log Service splits logs. By configuring a regular expression to match the start of a log entry, you can ensure that raw logs are split and stored according to the start-of-line rule. The following is an example:
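The sketch below is a hedged example. The plugin types and the Multiline sub-fields are assumptions carried over from the earlier sections, and the path, pattern, and Logstore name are hypothetical.
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: java-multiline-log                  # Hypothetical name.
spec:
  project:
    name: k8s-your-project
  config:
    inputs:
      - Type: input_file                    # Assumed text file input plugin.
        FilePaths:
          - /var/log/app/error.log          # Hypothetical application log path.
        EnableContainerDiscovery: true
        Multiline:
          Mode: custom                      # Assumed field names, as in the multi-line section above.
          StartPattern: '\d{4}-\d{2}-\d{2}.*'   # A new entry starts with a date such as 2024-01-01.
    flushers:
      - Type: flusher_sls                   # Assumed output plugin.
        Logstore: app-error                 # Hypothetical Logstore name.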
FAQ
How do I send logs from an ACK cluster to a project in another Alibaba Cloud account?
Manually install the Simple Log Service LoongCollector (Logtail) component in the ACK cluster and configure it with the destination account's Alibaba Cloud account ID or access credential (AccessKey). This lets you send container logs to a Simple Log Service project in another Alibaba Cloud account.
Scenario: For reasons such as organizational structure, permission isolation, or unified monitoring, you need to collect log data from an ACK cluster to a Simple Log Service project in a separate Alibaba Cloud account. To do this, manually install LoongCollector (Logtail) for cross-account configuration.
Procedure: This section uses the manual installation of LoongCollector as an example. For information about how to install Logtail, see Install and configure Logtail.
Connect to the Kubernetes cluster and run the command for your region to download LoongCollector and its dependent components:
Regions in China:
wget https://aliyun-observability-release-cn-shanghai.oss-cn-shanghai.aliyuncs.com/loongcollector/k8s-custom-pkg/3.0.12/loongcollector-custom-k8s-package.tgz; tar xvf loongcollector-custom-k8s-package.tgz; chmod 744 ./loongcollector-custom-k8s-package/k8s-custom-install.sh
Regions outside China:
wget https://aliyun-observability-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/loongcollector/k8s-custom-pkg/3.0.12/loongcollector-custom-k8s-package.tgz; tar xvf loongcollector-custom-k8s-package.tgz; chmod 744 ./loongcollector-custom-k8s-package/k8s-custom-install.sh
Go to the loongcollector-custom-k8s-package directory and modify the ./loongcollector/values.yaml configuration file.
# ===================== Required parameters =====================
# The name of the project that manages collected logs. Example: k8s-log-custom-sd89ehdq.
projectName: ""
# The region where the project is located. Example for Shanghai: cn-shanghai
region: ""
# The ID of the Alibaba Cloud account that owns the project. Enclose the ID in quotation marks. Example: "123456789"
aliUid: ""
# The network type. Valid values: Internet and Intranet. Default value: Internet.
net: Internet
# The AccessKey ID and AccessKey secret of the Alibaba Cloud account or RAM user. The account or user must have the AliyunLogFullAccess system policy.
accessKeyID: ""
accessKeySecret: ""
# The custom cluster ID. The ID can contain only uppercase letters, lowercase letters, digits, and hyphens (-).
clusterID: ""
In the loongcollector-custom-k8s-package directory, run the following command to install LoongCollector and other dependent components:
bash k8s-custom-install.sh install
After the installation is complete, check the running status of the components. If a pod fails to start, check whether the values.yaml configuration is correct and whether the relevant images were pulled successfully.
# Check the pod status.
kubectl get po -n kube-system | grep loongcollector-ds
SLS also automatically creates the following resources. You can log on to the Simple Log Service console to view them.
Resource type | Resource name | Function
Project | The value of projectName that you specified in the values.yaml file | A resource management unit that isolates logs for different services.
Machine group | k8s-group-${cluster_id} | A collection of log collection nodes.
Logstore | config-operation-log | Stores logs for the loongcollector-operator component. Important: Do not delete this Logstore. Its billing method is the same as that of a normal Logstore. For more information, see Billable items for the pay-by-ingested-data mode. Do not create collection configurations in this Logstore.
How do I collect a single log file or container standard output with multiple collection configurations?
By default, to prevent data duplication, Simple Log Service restricts each log source to a single collection configuration:
A text log file can match only one Logtail collection configuration.
The standard output (stdout) of a container can be collected by only one standard output collection configuration. To allow multiple configurations to collect the same source, perform the following steps:
Log on to the Simple Log Service console and go to the target project.
In the navigation pane, choose Logstores and find the target Logstore.
Click the expand icon next to its name to expand the Logstore.
Click Logtail Configurations. In the configuration list, find the target Logtail configuration and click Manage Logtail Configuration in the Actions column.
On the Logtail configuration page, click Edit and scroll down to the Input Configurations section:
To collect logs from text files: Enable Allow File To Be Collected Multiple Times.
To collect container standard output: Enable Allow Standard Output To Be Collected Multiple Times.
Appendix: Regular expression usage limits (container filtering)
The regular expressions used for container filtering are based on Go's RE2 engine, which has syntax limitations compared to other engines such as PCRE. Keep the following points in mind when you write regular expressions:
1. Differences in named group syntax
Go uses the (?P<name>...) syntax to define named groups. It does not support the (?<name>...) syntax from PCRE.
Correct example: (?P<year>\d{4})
Incorrect example: (?<year>\d{4})
2. Unsupported regular expression features
The following common but complex regular expression features are not available in RE2. Avoid using them:
Assertions: (?=...), (?!...), (?<=...), and (?<!...)
Conditional expressions: (?(condition)true|false)
Recursive matching: (?R) and (?0)
Subroutine references: (?&name) and (?P>name)
Atomic groups: (?>...)
3. Recommendations
When you debug regular expressions with a tool such as Regex101, select the Golang (RE2) mode for validation to ensure compatibility. If you use any unsupported syntax, the plugin cannot parse or match the expression correctly.