
Simple Log Service:UpdateLogtailPipelineConfig

Last Updated: Dec 23, 2025

Updates a Logtail pipeline configuration.

Try it now

You can try this API in OpenAPI Explorer without manually signing the request. After a call succeeds, OpenAPI Explorer automatically generates SDK sample code that matches your parameters. You can download the sample code, which handles your credentials securely, and run it locally.


RAM authorization

The table below describes the authorization required to call this API. You can define it in a Resource Access Management (RAM) policy. The table's columns are detailed below:

  • Action: the value that you can specify in the Action element of a RAM policy statement to grant permission to perform the operation.

  • API: The API that you can call to perform the action.

  • Access level: The predefined level of access granted for each API. Valid values: create, list, get, update, and delete.

  • Resource type: The type of the resource that supports authorization for the action. It indicates whether the action supports resource-level permissions. The specified resource type must be compatible with the action; otherwise, the policy does not take effect.

    • For APIs with resource-level permissions, required resource types are marked with an asterisk (*). Specify the corresponding Alibaba Cloud Resource Name (ARN) in the Resource element of the policy.

    • For APIs without resource-level permissions, it is shown as All Resources. Use an asterisk (*) in the Resource element of the policy.

  • Condition key: The condition keys defined by the service. The key allows for granular control, applying to either actions alone or actions associated with specific resources. In addition to service-specific condition keys, Alibaba Cloud provides a set of common condition keys applicable across all RAM-supported services.

  • Dependent action: The dependent actions required to run the action. To complete the action, the RAM user or the RAM role must have the permissions to perform all dependent actions.

Action | Access level | Resource type | Condition key | Dependent action

log:UpdateLogtailPipelineConfig | update | All Resources (*) | log:TLSVersion | None

Request syntax

PUT /pipelineconfigs/{configName} HTTP/1.1

Path parameters

Parameter | Type | Required | Description | Example

configName | string | Yes | The name of the Logtail pipeline configuration. | test-config

Request parameters

Parameter

Type

Required

Description

Example

project

string

Yes

The name of the project.

test-project

body

object

No

The content of the Logtail pipeline configuration.

configName

string

Yes

The name of the configuration.

Important The name must be the same as the value of the configName parameter in the request path.

test-config

logSample

string

No

A sample log. Multiple logs are supported.

2022-06-14 11:13:29.796 | DEBUG | __main__::1 - hello world

global

object

No

The global configuration.

inputs

array<object>

Yes

The list of input plug-ins.

Important Currently, you can configure only one input plug-in.

object

No

The input plug-in.

Note

For the parameters of the file input plug-in, see File plug-in. For the parameters of other input plug-ins, see Processing plug-ins.

{ "Type": "input_file", "FilePaths": ["/var/log/*.log"] }

processors

array<object>

No

The list of processing plug-ins.

Note

Processing plug-ins are classified into native processing plug-ins and extension processing plug-ins. For more information, see Processing plug-ins.

Important
  • Native plug-ins can be used only to collect text logs.

  • You cannot add native plug-ins and extension plug-ins at the same time.

  • When you use native plug-ins, the following requirements must be met:
    • The first processing plug-in must be a regular expression-based parsing plug-in, a separator-based parsing plug-in, a JSON-based parsing plug-in, an NGINX-based parsing plug-in, an Apache-based parsing plug-in, or an IIS-based parsing plug-in.

    • After the first processing plug-in, you can add only one time parsing plug-in, one filter plug-in, and multiple data masking plug-ins.

object

No

The processing plug-in.

Note

For more information, see Processing plug-ins.

{ "Type": "processor_parse_json_native", "SourceKey": "content" }

aggregators

array<object>

No

The list of aggregator plug-ins.

Important This parameter is valid only when you use an extension processing plug-in. You can use a maximum of one aggregator plug-in.

object

No

The aggregator plug-in.

flushers

array<object>

Yes

The list of output plug-ins.

Important Currently, you can add only one SLS output plug-in.

object

No

The output plug-in.

{ "Type": "flusher_sls", "Logstore": "test" }

task

object

No
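
Putting the body parameters together, a complete request body might look like the following sketch. The plug-in objects reuse the examples above and are illustrative; see the reference tables below for the full parameter lists.

{
  "configName": "test-config",
  "logSample": "2022-06-14 11:13:29.796 | DEBUG | __main__::1 - hello world",
  "global": {
    "TopicType": "machine_group_topic"
  },
  "inputs": [
    { "Type": "input_file", "FilePaths": ["/var/log/*.log"] }
  ],
  "processors": [
    { "Type": "processor_parse_json_native", "SourceKey": "content" }
  ],
  "flushers": [
    { "Type": "flusher_sls", "Logstore": "test" }
  ]
}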

Global configuration

Parameter | Type | Required | Default value | Example | Description
TopicType | string | No | Empty | filepath | The type of the topic. Valid values:
  • filepath: extracts information from the log file path as the topic. This value is valid only when the input plug-in is input_file.

  • machine_group_topic: uses the topic of the machine group to which the configuration is applied.

  • custom: uses a custom topic. For more information, see Log topic.

TopicFormat | string | Yes, if TopicType is set to filepath or custom. | / | /var/log/(.*).log | The format of the topic.
EnableTimestampNanosecond | bool | No | false | false | Specifies whether to enable nanosecond precision for timestamps.
PipelineMetaTagKey | object | No | Empty | {"HOST_NAME":"__hostname__"}
Important This parameter is supported only by LoongCollector 3.0.10 and later.
Controls the tags related to LoongCollector information. The key is the tag parameter name, and the value is the field name of the tag in the log. If the value is __default__, the default value is used. If the value is an empty string, the tag is deleted. The following tags can be configured:
  • HOST_NAME: the hostname. This tag is added by default. The default value is "__hostname__".

  • AGENT_TAG: the custom identifier. This tag is added by default. The default value is "__user_defined_id__".

  • HOST_ID: the host ID. This tag is not added by default. The default value is "__host_id__".

  • CLOUD_PROVIDER: This tag is not added by default. The default value is "__cloud_provider__".
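
For example, a global object that derives the topic from the log file path (assuming the input plug-in is input_file) might look like the following sketch. PipelineMetaTagKey requires LoongCollector 3.0.10 or later, as noted above.

{
  "TopicType": "filepath",
  "TopicFormat": "/var/log/(.*).log",
  "EnableTimestampNanosecond": false,
  "PipelineMetaTagKey": { "HOST_ID": "__default__" }
}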

Input plug-ins

File input plug-in

Parameter | Type | Required | Default value | Example | Description
TypestringYes/input_fileThe type of the plug-in. Set the value to input_file.
FilePaths[string]Yes/["/var/log/*.log"]The list of paths to the log files to be collected. Currently, only one path is allowed. You can use the asterisk (*) and double asterisk (**) wildcard characters in the path. The double asterisk (**) wildcard character can appear only once and must be used before the filename.
MaxDirSearchDepthuintNo00The maximum depth of the subdirectories that the double asterisk (**) wildcard character can match in a file path. This parameter is valid only when the double asterisk (**) wildcard character is used in the log path. Valid values: 0 to 1000.
ExcludeFilePaths[string]NoEmpty["/home/admin/*.log"]The blacklist of file paths. The path must be an absolute path. The asterisk (*) wildcard character is supported.
ExcludeFiles[string]NoEmpty["app*.log", "password"]The blacklist of filenames. The asterisk (*) wildcard character is supported.
ExcludeDirs[string]NoEmpty["/home/admin/dir1", "/home/admin/dir2*"]The blacklist of directories. The path must be an absolute path. The asterisk (*) wildcard character is supported.
FileEncodingstringNoutf8utf8The encoding format of the file. Valid values: utf8 and gbk.
TailSizeKBuintNo10241024The starting collection position of a matched file when the configuration first takes effect, measured from the end of the file. If the file size is smaller than this value, collection starts from the beginning. Valid values: 0 to 10485760 KB.
MultilineobjectNoEmpty/The multiline aggregation options.
Multiline.ModestringNocustomcustomThe multiline aggregation mode. Valid values: custom and JSON.
Multiline.StartPatternstringRequired when Multiline.Mode is set to custom.Empty\d+-\d+-\d+.*The regular expression for the start of a line.
EnableContainerDiscoveryboolNofalsetrueSpecifies whether to enable container discovery. This parameter is valid only when Logtail runs in DaemonSet mode and the collected file path is a path within the container.
ContainerFiltersobjectNoEmpty/The container filtering options. The relationship between multiple options is "AND". This parameter is valid only when EnableContainerDiscovery is set to true.
ContainerFilters.K8sNamespaceRegexstringNoEmptydefaultFor containers deployed in a Kubernetes environment, specifies the condition for the namespace of the pod to which the container to be collected belongs. If you do not add this parameter, all containers are collected. Regular expressions are supported.
ContainerFilters.K8sPodRegexstringNoEmptytest-podFor containers deployed in a Kubernetes environment, specifies the condition for the name of the pod to which the container to be collected belongs. If you do not add this parameter, all containers are collected. Regular expressions are supported.
ContainerFilters.IncludeK8sLabelmapNoEmpty/For containers deployed in a Kubernetes environment, specifies the label condition for the pod to which the container to be collected belongs. The relationship between multiple conditions is "OR". If you do not add this parameter, all containers are collected. Regular expressions are supported. The key of the map is the name of a pod label, and the value is the value of the pod label. The following rules apply:
  • If the value in the map is empty, any pod that has a label with the specified key is matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, a pod is matched if it has a label with the specified key and the label value matches the regular expression.

    • In other cases, a pod is matched if it has a label with the specified key and the label value is the same as the specified value.

ContainerFilters.ExcludeK8sLabelmapNoEmpty/For containers deployed in a Kubernetes environment, specifies the label condition for the pod to which the container to be excluded from collection belongs. The relationship between multiple conditions is "OR". If you do not add this parameter, all containers are collected. Regular expressions are supported. The key of the map is the name of a pod label, and the value is the value of the pod label. The following rules apply:
  • If the value in the map is empty, any pod that has a label with the specified key is matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, a pod is matched if it has a label with the specified key and the label value matches the regular expression.

    • In other cases, a pod is matched if it has a label with the specified key and the label value is the same as the specified value.

ContainerFilters.K8sContainerRegexstringNoEmptytest-containerFor containers deployed in a Kubernetes environment, specifies the condition for the name of the container to be collected. If you do not add this parameter, all containers are collected. Regular expressions are supported.
ContainerFilters.IncludeEnvmapNoEmpty/Specifies the environment variable condition for the container to be collected. The relationship between multiple conditions is "OR". If you do not add this parameter, all containers are collected. Regular expressions are supported. The key of the map is the name of an environment variable, and the value is the value of the environment variable. The following rules apply:
  • If the value in the map is empty, any container that has an environment variable with the specified key is matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, a container is matched if it has an environment variable with the specified key and the variable value matches the regular expression.

    • In other cases, a container is matched if it has an environment variable with the specified key and the variable value is the same as the specified value.

ContainerFilters.ExcludeEnvmapNoEmpty/Specifies the environment variable condition for the container to be excluded from collection. The relationship between multiple conditions is "OR". If you do not add this parameter, all containers are collected. Regular expressions are supported. The key of the map is the name of an environment variable, and the value is the value of the environment variable. The following rules apply:
  • If the value in the map is empty, any container that has an environment variable with the specified key is matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, a container is matched if it has an environment variable with the specified key and the variable value matches the regular expression.

    • In other cases, a container is matched if it has an environment variable with the specified key and the variable value is the same as the specified value.

ContainerFilters.IncludeContainerLabelmapNoEmpty/Specifies the label condition for the container to be collected. The relationship between multiple conditions is "OR". If you do not add this parameter, the default value is empty, which means all containers are collected. Regular expressions are supported. The key of the map is the name of a container label, and the value is the value of the container label. The following rules apply:
  • If the value in the map is empty, any container that has a label with the specified key is matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, a container is matched if it has a label with the specified key and the label value matches the regular expression.

    • In other cases, a container is matched if it has a label with the specified key and the label value is the same as the specified value.

ContainerFilters.ExcludeContainerLabelmapNoEmpty/Specifies the label condition for the container to be excluded from collection. The relationship between multiple conditions is "OR". If you do not add this parameter, the default value is empty, which means all containers are collected. Regular expressions are supported. The key of the map is the name of a container label, and the value is the value of the container label. The following rules apply:
  • If the value in the map is empty, any container that has a label with the specified key is matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, a container is matched if it has a label with the specified key and the label value matches the regular expression.

    • In other cases, a container is matched if it has a label with the specified key and the label value is the same as the specified value.

ExternalK8sLabelTagmapNoEmpty/For containers deployed in a Kubernetes environment, specifies the pod label-related tags to be added to the logs. The key of the map is the name of a pod label, and the value is the corresponding tag name. For example, if you add app: k8s_label_app to the map, and a pod has the label app=serviceA, this information is added to the log as a tag, which means the field __tag__:k8s_label_app: serviceA is added. If the pod does not have the app label, the empty field __tag__:k8s_label_app: is added.
ExternalEnvTagmapNoEmpty/For containers deployed in a Kubernetes environment, specifies the container environment variable-related tags to be added to the logs. The key of the map is the name of an environment variable, and the value is the corresponding tag name. For example, if you add VERSION: env_version to the map, and a container has the environment variable VERSION=v1.0.0, this information is added to the log as a tag, which means the field __tag__:env_version: v1.0.0 is added. If the container does not have the VERSION environment variable, the empty field __tag__:env_version: is added.
CollectingContainersMetaboolNofalsetrueSpecifies whether to enable container metadata preview.
AppendingLogPositionMetaboolNofalsefalseSpecifies whether to add the metadata of the file to which the log belongs to the log. The metadata includes the __tag__:__inode__ field and the __file_offset__ field.
AllowingIncludedByMultiConfigsboolNofalsefalseSpecifies whether to allow the current configuration to collect files that have been matched by other configurations.
TagsobjectNoEmpty{"FileInodeTagKey":"__inode__"}
Important This parameter is supported only by LoongCollector 3.0.10 and later.
Controls the tags related to file collection. The key is the tag parameter name, and the value is the field name of the tag in the log. If the value is __default__, the default value is used. If the value is an empty string, the tag is deleted. The following tags can be configured:
  • FileInodeTagKey: the file inode. This tag is not added by default. The default value is "__inode__".

  • FilePathTagKey: the file path. This tag is added by default. The default value is "__path__".

The following parameters are valid only when the EnableContainerDiscovery parameter is set to true.
  • K8sNamespaceTagKey: the namespace of the container where the file is located. This tag is added by default. The default value is "_namespace_".

  • K8sPodNameTagKey: the name of the pod where the file is located. This tag is added by default. The default value is "_pod_name_".

  • K8sPodUidTagKey: the UID of the pod where the file is located. This tag is added by default. The default value is "_pod_uid_".

  • ContainerNameTagKey: the name of the container where the file is located. This tag is added by default. The default value is "_container_name_".

  • ContainerIpTagKey: the IP address of the container where the file is located. This tag is added by default. The default value is "_container_ip_".

  • ContainerImageNameTagKey: the image of the container where the file is located. This tag is added by default. The default value is "_image_name_".

FileOffsetKeystringNoEmpty__file_offset__
Important This parameter is supported only by LoongCollector 3.0.10 and later.
The tag for the log's position in the file. This tag is not added by default. The default value is __file_offset__. If the value is __default__, the default value is used. If the value is an empty string, the tag is deleted. If the EnableLogPositionMeta parameter exists at the same time as the Tags.FileInodeTagKey or FileOffsetKey parameter, the EnableLogPositionMeta parameter is ignored.
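
A sketch of an input_file plug-in that collects multiline logs from containers in the default namespace. The paths, start pattern, and filter values are illustrative.

{
  "Type": "input_file",
  "FilePaths": ["/data/logs/**/*.log"],
  "MaxDirSearchDepth": 5,
  "FileEncoding": "utf8",
  "Multiline": {
    "Mode": "custom",
    "StartPattern": "\\d+-\\d+-\\d+.*"
  },
  "EnableContainerDiscovery": true,
  "ContainerFilters": {
    "K8sNamespaceRegex": "default"
  }
}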

Container stdout (legacy)

Parameter | Type | Required | Default value | Example | Description
TypestringYes/service_docker_stdoutThe type of the plug-in. Set the value to service_docker_stdout.
StdoutBooleanNotruetrueSpecifies whether to collect standard output (stdout).
StderrBooleanNotruetrueSpecifies whether to collect standard error (stderr).
StartLogMaxOffsetIntegerNo128 × 1024131072The length of historical data to be retrieved during the first collection, in bytes. We recommend a value between 131072 and 1048576.
IncludeLabelMap, where LabelKey and LabelValue are of the String typeNoEmpty

The whitelist of container labels, used to specify the containers to be collected. The default value is empty, which means the standard output of all containers is collected. If you want to set a container label whitelist, LabelKey is required, and LabelValue is optional.

  • If LabelValue is empty, any container that has a label with the specified LabelKey is matched.

  • If LabelValue is not empty, only containers that have a label with the specified LabelKey=LabelValue are matched.

    By default, LabelValue is matched as a string, meaning a match occurs only if the LabelValue is identical to the container label's value. If the value starts with ^ and ends with $, it is a regular expression match. For example, if you set LabelKey to io.kubernetes.container.name and LabelValue to ^(nginx|cube)$, containers named nginx or cube are matched.

The relationship between multiple whitelist conditions is "OR", meaning a container is matched if its label satisfies any of the whitelist conditions.

ExcludeLabelMap, where LabelKey and LabelValue are of the String typeNoEmpty

The blacklist of container labels, used to exclude containers from collection. The default value is empty, which means no containers are excluded. If you want to set a container label blacklist, LabelKey is required, and LabelValue is optional.

  • If LabelValue is empty, any container that has a label with the specified LabelKey is excluded.

  • If LabelValue is not empty, only containers that have a label with the specified LabelKey=LabelValue are excluded.

    By default, LabelValue is matched as a string, meaning a match occurs only if the LabelValue is identical to the container label's value. If the value starts with ^ and ends with $, it is a regular expression match. For example, if you set LabelKey to io.kubernetes.container.name and LabelValue to ^(nginx|cube)$, containers named nginx or cube are matched.

The relationship between multiple blacklist conditions is "OR", meaning a container is excluded if its label satisfies any of the blacklist conditions.

IncludeEnvMap, where EnvKey and EnvValue are of the String typeNoEmpty

The whitelist of environment variables, used to specify the containers to be collected. The default value is empty, which means the standard output of all containers is collected. If you want to set an environment variable whitelist, EnvKey is required, and EnvValue is optional.

  • If EnvValue is empty, any container that has an environment variable with the specified EnvKey is matched.

  • If EnvValue is not empty, only containers that have an environment variable with the specified EnvKey=EnvValue are matched.

    By default, EnvValue is matched as a string, meaning a match occurs only if the EnvValue is identical to the environment variable's value. If the value starts with ^ and ends with $, it is a regular expression match. For example, if you set EnvKey to NGINX_SERVICE_PORT and EnvValue to ^(80|6379)$, containers with service port 80 or 6379 are matched.

The relationship between multiple whitelist conditions is "OR", meaning a container is matched if its environment variable satisfies any of the key-value pairs.

ExcludeEnvMap, where EnvKey and EnvValue are of the String typeNoEmpty

The blacklist of environment variables, used to exclude containers from collection. The default value is empty, which means no containers are excluded. If you want to set an environment variable blacklist, EnvKey is required, and EnvValue is optional.

  • If EnvValue is empty, the logs of any container that has an environment variable with the specified EnvKey are excluded.

  • If EnvValue is not empty, only containers that have an environment variable with the specified EnvKey=EnvValue are excluded.

    By default, EnvValue is matched as a string, meaning a match occurs only if the EnvValue is identical to the environment variable's value. If the value starts with ^ and ends with $, it is a regular expression match. For example, if you set EnvKey to NGINX_SERVICE_PORT and EnvValue to ^(80|6379)$, containers with service port 80 or 6379 are matched.

The relationship between multiple blacklist conditions is "OR", meaning a container is excluded if its environment variable satisfies any of the key-value pairs.

IncludeK8sLabelMap, where LabelKey and LabelValue are of the String typeNoEmpty

The whitelist of Kubernetes labels (defined in template.metadata), used to specify the containers to be collected. If you want to set a Kubernetes label whitelist, LabelKey is required, and LabelValue is optional.

  • If LabelValue is empty, any container that has a Kubernetes label with the specified LabelKey is matched.

  • If LabelValue is not empty, only containers that have a Kubernetes label with the specified LabelKey=LabelValue are matched.

    By default, LabelValue is matched as a string, meaning a match occurs only if the LabelValue is identical to the Kubernetes label's value. If the value starts with ^ and ends with $, it is a regular expression match. For example, if you set LabelKey to app and LabelValue to ^(test1|test2)$, containers with Kubernetes labels app:test1 or app:test2 are matched.

The relationship between multiple whitelist conditions is "OR", meaning a container is matched if its Kubernetes label satisfies any of the whitelist conditions.

ExcludeK8sLabelMap, where LabelKey and LabelValue are of the String typeNoEmpty

The blacklist of Kubernetes labels (defined in template.metadata), used to exclude containers from collection. If you want to set a Kubernetes label blacklist, LabelKey is required, and LabelValue is optional.

  • If LabelValue is empty, any container that has a Kubernetes label with the specified LabelKey is excluded.

  • If LabelValue is not empty, only containers that have a Kubernetes label with the specified LabelKey=LabelValue are excluded.

    By default, LabelValue is matched as a string, meaning a match occurs only if the LabelValue is identical to the Kubernetes label's value. If the value starts with ^ and ends with $, it is a regular expression match. For example, if you set LabelKey to app and LabelValue to ^(test1|test2)$, containers with Kubernetes labels app:test1 or app:test2 are matched.

The relationship between multiple blacklist conditions is "OR", meaning a container is excluded if its Kubernetes label satisfies any of the blacklist conditions.

K8sNamespaceRegexStringNoEmpty^(default|nginx)$Specifies the containers to be collected by namespace name. Regular expressions are supported. For example, if you set this to ^(default|nginx)$, all containers in the nginx and default namespaces are matched.
K8sPodRegexStringNoEmpty^(nginx-log-demo.*)$Specifies the containers to be collected by pod name. Regular expressions are supported. For example, if you set this to ^(nginx-log-demo.*)$, all containers in pods whose names start with nginx-log-demo are matched.
K8sContainerRegexStringNoEmpty^(container-test)$Specifies the containers to be collected by container name (Kubernetes container names are defined in spec.containers). Regular expressions are supported. For example, if you set this to ^(container-test)$, all containers named container-test are matched.
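
A sketch of the legacy service_docker_stdout plug-in that collects only the stdout of containers named nginx or cube, reusing the label example above:

{
  "Type": "service_docker_stdout",
  "Stdout": true,
  "Stderr": false,
  "IncludeLabel": {
    "io.kubernetes.container.name": "^(nginx|cube)$"
  }
}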

Data processing parameters

Parameter | Type | Required | Default value | Example | Description
BeginLineRegex | String | No | Empty

The regular expression to match the start of a line.

If this configuration item is empty, it indicates single-line mode.

If this expression matches the beginning of a line, that line is treated as a new log. Otherwise, the line is appended to the previous log.

BeginLineCheckLength | Integer | No | Empty

The length to check for a line start match, in bytes.

The default value is 10 × 1024 bytes.

If the regular expression for the start of a line can be matched within the first N bytes, we recommend setting this parameter to improve line start matching efficiency.

BeginLineTimeoutMs | Integer | No | Empty

The timeout period for matching the start of a line, in milliseconds.

The default value is 3000 milliseconds.

If no new log appears within 3000 milliseconds, the matching ends, and the last log is uploaded to Simple Log Service.

MaxLogSize | Integer | No | Empty

The maximum length of a log, in bytes.

The default value is 512 × 1024 bytes.

If the log length exceeds this value, the system stops searching for the start of a line and uploads the log directly.

ExternalK8sLabelTag | Map, where LabelKey and LabelValue are of the String type | No | Empty

After setting the Kubernetes label (defined in template.metadata) log tag, iLogtail adds Kubernetes label-related fields to the logs.

For example, if you set LabelKey to app and LabelValue to k8s_label_app, and a pod has the label app=serviceA, iLogtail adds this information to the log, which means the field k8s_label_app: serviceA is added. If the pod does not have a label named app, the empty field k8s_label_app: is added.

ExternalEnvTag | Map, where EnvKey and EnvValue are of the String type | No | Empty

After setting the container environment variable log tag, iLogtail adds container environment variable-related fields to the logs.

For example, if you set EnvKey to VERSION and EnvValue to env_version, and a container has the environment variable VERSION=v1.0.0, this information is added to the log as a tag, which means the field env_version: v1.0.0 is added. If the container does not have an environment variable named VERSION, the empty field env_version: is added.

Data processing environment variables

Environment variable | Type | Required | Default value | Example | Description
ALIYUN_LOG_ENV_TAGS | String | No | Empty

After setting the global environment variable log tag, iLogtail adds fields related to the environment variables of the iLogtail container to the logs. Multiple environment variable names are separated by a VERTICAL LINE (|).

For example, if you set this to node_name|node_ip, and the iLogtail container exposes the relevant environment variables, this information is added to the log as a tag, which means the fields node_ip:172.16.0.1 and node_name:worknode are added.

MySQL input plug-in

Parameter | Type | Required | Default value | Example | Description
TypestringYes/service_mysqlThe type of the plug-in. Set the value to service_mysql.
AddressstringNo127.0.0.1:3306rm-*.mysql.rds.aliyuncs.comThe MySQL address.
UserstringNorootrootThe username for logging on to the MySQL database.
PasswordstringNoEmptyThe password for the user to log on to the MySQL database. If you have high security requirements, set the username and password to xxx. After the collection configuration is synchronized to your machine, find the configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify a local configuration.
Important If you modify this parameter in the console, the local configuration is overwritten after synchronization.
DataBasestringNo/project_databaseThe name of the database.
DialTimeOutMsintNo50005000The timeout period for connecting to the MySQL database, in ms.
ReadTimeOutMsintNo50005000The timeout period for reading the MySQL query results, in ms.
StateMentstringNo/The SELECT statement. When CheckPoint is set to true, the WHERE condition in the SELECT statement must include the checkpoint column (CheckPointColumn). You can use a question mark (?) as a placeholder that works with the checkpoint column. For example, set CheckPointColumn to id, CheckPointStart to 0, and StateMent to SELECT * from ... where id > ?. After each collection, the system saves the ID of the last data entry as a checkpoint. In the next collection, the question mark (?) in the query statement is replaced with the ID corresponding to that checkpoint.
LimitboolNofalsetrueSpecifies whether to use LIMIT for paging.
  • true: Use LIMIT.

  • false (default): Do not use LIMIT.

We recommend using LIMIT for paging. If Limit is set to true, the system automatically appends a LIMIT clause to the SELECT statement during SQL queries.
PageSizeintNo/10The page size. This must be configured if Limit is set to true.
MaxSyncSizeintNo00The maximum number of records to synchronize at a time. The default value is 0, which means no limit.
CheckPointboolNofalsetrueSpecifies whether to use a checkpoint.
  • true: Use a checkpoint.

  • false (default): Do not use a checkpoint.

A checkpoint can be used as the starting point for the next data collection to implement incremental data collection.
CheckPointColumnstringNoEmpty1The name of the checkpoint column. This must be configured if CheckPoint is set to true. Warning The values in this column must be incremental. Otherwise, data may be lost. The maximum value in each query result is used as the input for the next query.
CheckPointColumnTypestringNoEmptyintThe data type of the checkpoint column. Supported types are int and time. The internal storage for the int type is int64. The time type supports MySQL's date, datetime, and time types. This must be configured if CheckPoint is set to true.
CheckPointStartstringNoEmptyThe initial value of the checkpoint column. This must be configured if CheckPoint is set to true.
CheckPointSavePerPageboolNotruetrueSpecifies whether to save a checkpoint for each page.
  • true (default): Save a checkpoint for each page.

  • false: Save a checkpoint after each synchronization is complete.

IntervalMsintNo6000060000The synchronization interval. The default value is 60000, in ms.
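
A sketch of a service_mysql input that collects incrementally with a checkpoint on an id column. The table name and connection settings are illustrative.

{
  "Type": "service_mysql",
  "Address": "127.0.0.1:3306",
  "User": "root",
  "Password": "xxx",
  "DataBase": "project_database",
  "StateMent": "SELECT * FROM orders WHERE id > ?",
  "CheckPoint": true,
  "CheckPointColumn": "id",
  "CheckPointColumnType": "int",
  "CheckPointStart": "0",
  "Limit": true,
  "PageSize": 100,
  "IntervalMs": 60000
}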

HTTP input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | metric_http | The type of the plug-in. Set the value to metric_http.
Address | string | Yes | / | The list of URLs. Important: Each URL must start with http or https.
IntervalMs | int | Yes | / | 10 | The interval for each request, in ms.
Method | string | No | GET | GET | The name of the request method. Must be in uppercase.
Body | string | No | Empty | The content of the HTTP Body field.
Headers | map | No | Empty | {"key":"value"} | The content of the HTTP Header, for example, {"key":"value"}. Replace with the actual value.
PerAddressSleepMs | int | No | 100 | 100 | The interval between requests for each URL in the Addresses list, in ms.
ResponseTimeoutMs | int | No | 5000 | 5000 | The request timeout period, in ms.
IncludeBody | bool | No | false | true | Specifies whether to collect the request Body. The default value is false. If set to true, the request Body content is stored in a key named content.
FollowRedirects | bool | No | false | false | Specifies whether to automatically handle redirection.
InsecureSkipVerify | bool | No | false | false | Specifies whether to skip HTTPS security checks.
ResponseStringMatch | string | No | / | Performs a regular expression check on the returned Body content. The check result is stored in a key named _response_match_. If it matches, the value is yes. If it does not match, the value is no.

Syslog input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_syslog | The type of the plug-in. Set the value to service_syslog.
Address | string | No | tcp://127.0.0.1:9999 | Specifies the protocol, address, and port for Logtail to listen on. Logtail listens and obtains log data based on the Logtail configuration. The format is [tcp/udp]://[ip]:[port]. If not configured, the default is tcp://127.0.0.1:9999, which means it can only receive locally forwarded logs. Note:
  • The listening protocol, address, and port number set in the Logtail configuration must be the same as the forwarding rule set in the rsyslog configuration file.

  • If the server where Logtail is installed has multiple IP addresses that can receive logs, you can set the address to 0.0.0.0 to listen on all IP addresses of the server.

ParseProtocol | string | No | Empty | rfc3164 | Specifies the protocol used to parse logs. The default is empty, which means no parsing. The options are:
  • Empty: No parsing.

  • rfc3164: Specifies using the RFC3164 protocol to parse logs.

  • rfc5424: Specifies using the RFC5424 protocol to parse logs.

  • auto: Specifies that Logtail automatically selects the appropriate parsing protocol based on the log content.

IgnoreParseFailure | bool | No | true | true | Specifies the action to take after a parsing failure. If not configured, the default is true, which means parsing is abandoned and the returned content field is filled directly. If set to false, the log is discarded upon parsing failure.
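
A sketch of a service_syslog input that listens on all interfaces and parses logs as RFC3164 (the port is illustrative):

{
  "Type": "service_syslog",
  "Address": "tcp://0.0.0.0:9999",
  "ParseProtocol": "rfc3164",
  "IgnoreParseFailure": true
}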

Systemd Journal input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_journal | The type of the plug-in. Set the value to service_journal.
JournalPaths | [string] | Yes | Empty | /var/log/journal | The Journal log path. We recommend configuring this as the directory where the Journal logs are located.
SeekPosition | string | No | tail | tail | The initial collection method. Can be set to head or tail.
  • head means collect all data.

  • tail means only collect new data after the Logtail collection configuration is applied.

Kernel | bool | No | true | true | Specifies whether to collect kernel logs.
Units | [string] | No | Empty | "" | The list of Units to collect. The default is empty, which means all are collected.
ParseSyslogFacility | bool | No | false | false | Specifies whether to parse the facility field of syslog logs. If not configured, it is not parsed.
ParsePriority | bool | No | false | false | Specifies whether to parse the Priority field. If not configured, it is not parsed. If set to true, the Priority field is mapped as follows: "0": "emergency", "1": "alert", "2": "critical", "3": "error", "4": "warning", "5": "notice", "6": "informational", "7": "debug".
UseJournalEventTime | bool | No | false | false | Specifies whether to use the field from the Journal log as the log time. If not configured, the collection time is used as the log time. Real-time log collection typically has a difference of less than 3 seconds.
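
A sketch of a service_journal input that collects only new entries, including kernel logs, from the default journal directory:

{
  "Type": "service_journal",
  "JournalPaths": ["/var/log/journal"],
  "SeekPosition": "tail",
  "Kernel": true,
  "ParsePriority": true
}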

SQL Server input plug-in

Parameter | Type | Required | Default value | Example | Description
TypestringYes/service_mssqlThe type of the plug-in. Set the value to service_mssql.
AddressstringNo127.0.0.1:1433rm-*.sqlserver.rds.aliyuncs.comThe SQL Server address.
UserstringNorootrootThe account name for logging on to the SQL Server database.
PasswordstringNoEmptyThe password for the account to log on to the SQL Server database. If you have high security requirements, set the username and password to xxx. After the collection configuration is synchronized to your machine, find the configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify a local configuration.
Important If you modify this parameter in the console, the local configuration is overwritten after synchronization.
DataBasestringNo/project_databaseThe name of the database.
DialTimeOutMsintNo50005000The timeout period for connecting to the SQL Server database, in ms.
ReadTimeOutMsintNo50005000The timeout period for reading the SQL Server query results, in ms.
StateMentstringNo/The SELECT statement. When CheckPoint is set to true, the WHERE condition in the SELECT statement must include the checkpoint column (CheckPointColumn). You can use a question mark (?) as a placeholder that works with the checkpoint column. For example, set CheckPointColumn to id, CheckPointStart to 0, and StateMent to SELECT * from ... where id > ?. After each collection, the system saves the ID of the last data entry as a checkpoint. In the next collection, the question mark (?) in the query statement is replaced with the ID corresponding to that checkpoint.
LimitboolNofalsetrueSpecifies whether to use LIMIT for paging.
  • true: Use LIMIT.

  • false (default): Do not use LIMIT.

We recommend using LIMIT for paging. If Limit is set to true, the system automatically appends a LIMIT clause to the SELECT statement during SQL queries.
PageSizeintNo/10The page size. This must be configured if Limit is set to true.
MaxSyncSizeintNo00The maximum number of records to synchronize at a time. The default value is 0, which means no limit.
CheckPointboolNofalsetrueSpecifies whether to use a checkpoint.
  • true: Use a checkpoint.

  • false (default): Do not use a checkpoint.

A checkpoint can be used as the starting point for the next data collection to implement incremental data collection.
CheckPointColumnstringNoEmpty1The name of the checkpoint column. This must be configured if CheckPoint is set to true. Warning The values in this column must be incremental. Otherwise, data may be lost. The maximum value in each query result is used as the input for the next query.
CheckPointColumnTypestringNoEmptyintThe data type of the checkpoint column. Supported types are int and time. The internal storage for the int type is int64. The time type supports SQL Server's date, datetime, and time types. This must be configured if CheckPoint is set to true.
CheckPointStartstringNoEmptyThe initial value of the checkpoint column. This must be configured if CheckPoint is set to true.
CheckPointSavePerPageboolNotruetrueSpecifies whether to save a checkpoint for each page.
  • true (default): Save a checkpoint for each page.

  • false: Save a checkpoint after each synchronization is complete.

IntervalMsintNo6000060000The synchronization interval. The default value is 60000, in ms.

PostgreSQL input plug-in

Parameter | Type | Required | Default value | Example | Description
TypestringYes/service_pgsqlThe type of the plug-in. Set the value to service_pgsql.
AddressstringNo127.0.0.1:5432rm-*.pg.rds.aliyuncs.comThe PostgreSQL address.
UserstringNorootrootThe account name for logging on to the PostgreSQL database.
PasswordstringNoEmptyThe password for the account to log on to the PostgreSQL database. If you have high security requirements, set the username and password to xxx. After the collection configuration is synchronized to your machine, find the configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify a local configuration.
Important If you modify this parameter in the console, the local configuration is overwritten after synchronization.
DataBasestringNo/project_databaseThe name of the PostgreSQL database.
DialTimeOutMsintNo50005000The timeout period for connecting to the PostgreSQL database, in ms.
ReadTimeOutMsintNo50005000The timeout period for reading the PostgreSQL query results, in ms.
StateMentstringNo/The SELECT statement. When CheckPoint is set to true, the WHERE condition in the StateMent's SELECT statement must include the checkpoint column (CheckPointColumn parameter), and its value must be set to $1. For example, set CheckPointColumn to id and StateMent to SELECT * from ... where id > $1
LimitboolNofalsetrueSpecifies whether to use LIMIT for paging.
  • true: Use LIMIT.

  • false (default): Do not use LIMIT.

We recommend using LIMIT for paging. If Limit is set to true, the system automatically appends a LIMIT clause to the SELECT statement during SQL queries.
PageSizeintNo/10The page size. This must be configured if Limit is set to true.
MaxSyncSizeintNo00The maximum number of records to synchronize at a time. The default value is 0, which means no limit.
CheckPointboolNofalsetrueSpecifies whether to use a checkpoint.
  • true: Use a checkpoint.

  • false (default): Do not use a checkpoint.

A checkpoint can be used as the starting point for the next data collection to implement incremental data collection.
CheckPointColumnstringNoEmpty1The name of the checkpoint column. This must be configured if CheckPoint is set to true. Warning The values in this column must be incremental. Otherwise, data may be lost. The maximum value in each query result is used as the input for the next query.
CheckPointColumnTypestringNoEmptyintThe data type of the checkpoint column. Supported types are int and time. The internal storage for the int type is int64. The time type supports PostgreSQL's time types. This must be configured if CheckPoint is set to true.
CheckPointStartstringNoEmptyThe initial value of the checkpoint column. This must be configured if CheckPoint is set to true.
CheckPointSavePerPageboolNotruetrueSpecifies whether to save a checkpoint for each page.
  • true (default): Save a checkpoint for each page.

  • false: Save a checkpoint after each synchronization is complete.

IntervalMsintNo6000060000The synchronization interval. The default value is 60000, in ms.
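
A sketch of a service_pgsql input. Note that the checkpoint placeholder is $1 rather than a question mark; the table name and connection settings are illustrative.

{
  "Type": "service_pgsql",
  "Address": "127.0.0.1:5432",
  "User": "root",
  "Password": "xxx",
  "DataBase": "project_database",
  "StateMent": "SELECT * FROM orders WHERE id > $1",
  "CheckPoint": true,
  "CheckPointColumn": "id",
  "CheckPointColumnType": "int",
  "CheckPointStart": "0",
  "IntervalMs": 60000
}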

SNMP input plug-in

Parameter | Type | Required | Default value | Example | Description
Targets | [string] | Yes | / | 127.0.0.1 | The IP address of the target machine group.
Port | string | No | 161 | 161 | The port used by the SNMP protocol.
Community | string | No | public | public | The community name. SNMPv1 and SNMPv2 use community names for authentication.
UserName | string | No | Empty | root | The username. SNMPv3 supports authentication using a username.
AuthenticationProtocol | string | No | NoAuth | NoAuth | The authentication protocol. SNMPv3 supports authentication using an authentication protocol.
AuthenticationPassphrase | string | No | Empty | The authentication password. The default value is empty. If you set AuthenticationProtocol to MD5 or SHA, you need to set AuthenticationPassphrase.
PrivacyProtocol | string | No | NoPriv | NoPriv | The privacy protocol. SNMPv3 supports authentication using a privacy protocol.
PrivacyPassphrase | string | No | Empty | The privacy protocol password. By default, it is the same as the authentication password. If you set PrivacyProtocol to DES or AES, you must set PrivacyPassphrase.
Timeout | int | No | 5 | 5 | The timeout period for a single query operation, in seconds.
Version | int | No | 2 | 2 | The SNMP protocol version. Valid values are 1, 2, and 3.
Transport | string | No | udp | udp | The SNMP communication method. Valid values are udp and tcp.
MaxRepetitions | int | No | 0 | 0 | The number of retries after a query timeout.
Oids | [string] | No | Empty | 1 | The object identifiers to query in the target machine.
Fields | [string] | No | Empty | int | The fields to query in the target machine. This plug-in first translates the fields by looking them up in the local Management Information Base (MIB), translates them into object identifiers, and queries them together.
Tables | [string] | No | Empty | The tables to query in the target machine. This plug-in first queries all fields in the table, then looks them up in the local Management Information Base (MIB), translates them into object identifiers, and queries them together.

Script input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | input_command | The type of the plug-in. Set the value to input_command.
ScriptType | string | Yes | Empty | shell | Specifies the type of the script content. Currently supports bash, shell, python2, and python3.
User | string | Yes | / | public | The username used to run the command. Only non-root users are supported. Note: Ensure that the specified username exists on the machine. We recommend configuring the least privilege, granting only rwx permissions to the necessary directories or files.
ScriptContent | string | Yes | Empty | The script content. Supports PlainText and Base64-encoded content, with a length of up to 512 × 1024 bytes.
ContentEncoding | string | No | PlainText | PlainText | The text format of the script content. Valid values:
  • PlainText (default): Plain text, not encoded.

  • Base64: Base64 encoding.

LineSplitSep | string | No | Empty | The separator for the script output content. If empty, no splitting is performed, and the entire output is returned as a single data entry.
CmdPath | string | No | Empty | /usr/bin/bash | The path to execute the script command. If empty, the default path is used. The default paths are as follows:
  • bash: /usr/bin/bash

  • shell: /usr/bin/sh

  • python2: /usr/bin/python2

  • python3: /usr/bin/python3

TimeoutMilliSeconds | int | No | 3000 | 3000 | The timeout period for executing the script, in milliseconds.
IgnoreError | bool | No | false | false | Specifies whether to ignore Error logs when the plug-in execution fails. The default value is false, which means they are not ignored.
Environments | [string] | No | The environment variables. The default is the value of os.Environ(). If Environments is set, the set environment variables are appended to os.Environ().
IntervalMs | int | No | 5000 | 5000 | The collection trigger frequency or script execution frequency, in milliseconds.
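
A sketch of an input_command plug-in that runs a short shell script once per minute. The user name and script content are illustrative.

{
  "Type": "input_command",
  "ScriptType": "shell",
  "User": "nobody",
  "ScriptContent": "df -h | tail -n +2",
  "ContentEncoding": "PlainText",
  "LineSplitSep": "\n",
  "IntervalMs": 60000
}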

Native processing plug-ins

Native regular expression parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_regex_native | The type of the plug-in. Set the value to processor_parse_regex_native.
SourceKey | string | Yes | / | content | The name of the source field.
Regex | string | Yes | / | (\d+-\d+-\d+)\s+(.*) | The regular expression.
Keys | [string] | Yes | / | ["time", "msg"] | The list of extracted fields.
KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not filled, the name is not changed by default.
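
For example, a sketch that uses the values from the table above to split a timestamped line into a time field and a message field, keeping the source field if parsing fails:

{
  "Type": "processor_parse_regex_native",
  "SourceKey": "content",
  "Regex": "(\\d+-\\d+-\\d+)\\s+(.*)",
  "Keys": ["time", "msg"],
  "KeepingSourceWhenParseFail": true
}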

Native JSON parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_json_native | The type of the plug-in. Set the value to processor_parse_json_native.
SourceKey | string | Yes | / | content | The name of the source field.
KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not filled, the name is not changed by default.

Native separator parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_delimiter_native | The type of the plug-in. Set the value to processor_parse_delimiter_native.
SourceKey | string | Yes | / | content | The name of the source field.
Separator | string | Yes | / | , | The separator.
Quote | string | No | " | The quote.
Keys | [string] | Yes | / | ["time", "msg"] | The list of extracted fields.
AllowingShortenedFields | bool | No | true | true | Specifies whether to allow the number of extracted fields to be less than the number of Keys. If not allowed, this scenario is treated as a parsing failure.
OverflowedFieldsTreatment | string | No | extend | extend | The behavior when the number of extracted fields is greater than the number of Keys. Valid values:
  • extend: Keeps the extra fields, and each extra field is added to the log as a separate field. The field name for the extra fields is __column$i__, where $i represents the extra field sequence number, starting from 0.

  • keep: Keeps the extra fields, but adds the extra content as a single field to the log. The field name is __column0__.

  • discard: Discards the extra fields.

KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not filled, the name is not changed by default.
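
For example, a sketch that splits a comma-separated line and keeps any overflow columns as separate fields:

{
  "Type": "processor_parse_delimiter_native",
  "SourceKey": "content",
  "Separator": ",",
  "Quote": "\"",
  "Keys": ["time", "msg"],
  "OverflowedFieldsTreatment": "extend"
}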

Native Apsara parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_apsara_native | The type of the plug-in. Set the value to processor_parse_apsara_native.
SourceKey | string | Yes | / | content | The name of the source field.
Timezone | string | No | Empty | GMT+08:00 | The time zone of the log time. The format is GMT+HH:MM (East) or GMT-HH:MM (West).
KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not filled, the name is not changed by default.

Native time parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_timestamp_native | The type of the plug-in. Set the value to processor_parse_timestamp_native.
SourceKey | string | Yes | / | content | The name of the source field.
SourceFormat | string | Yes | / | %Y/%m/%d %H:%M:%S | The format of the log time. For more information, see Time formats.
SourceTimezone | string | No | Empty | GMT+08:00 | The time zone of the log time. The format is GMT+HH:MM (East) or GMT-HH:MM (West).

Native filter plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_filter_regex_native | The type of the plug-in. Set the value to processor_filter_regex_native.
Include | map | Yes | / | / | The whitelist of log fields, where the key is the field name and the value is a regular expression. This indicates the condition that the content of the field specified by the key must meet for the current event to be collected. The relationship between multiple conditions is "AND". The log is collected only when all conditions are met.

Native data masking plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_desensitize_native | The type of the plug-in. Set the value to processor_desensitize_native.
SourceKey | string | Yes | / | content | The name of the source field.
Method | string | Yes | / | const | The data masking method. Valid values: const (replaces sensitive content with a constant) and md5 (replaces sensitive content with its MD5 value).
ReplacingString | string | No; required when Method is set to const. | / | ****** | The constant string used to replace sensitive content.
ContentPatternBeforeReplacedString | string | Yes | / | 'password:' | The prefix regular expression for sensitive content.
ReplacedContentPattern | string | Yes | / | [^']* | The regular expression for sensitive content.
ReplacingAll | bool | No | true | true | Specifies whether to replace all matched sensitive content.
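
For example, a sketch that masks the quoted value following password: with a constant, reusing the patterns from the table above:

{
  "Type": "processor_desensitize_native",
  "SourceKey": "content",
  "Method": "const",
  "ReplacingString": "******",
  "ContentPatternBeforeReplacedString": "password:'",
  "ReplacedContentPattern": "[^']*",
  "ReplacingAll": true
}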

Extension processing plug-ins

Extract fields

Regular expression mode

Parameter | Type | Required | Default value | Example | Description
TypestringYes/processor_regexThe type of the plug-in. Set the value to processor_regex.
SourceKeystringYes/contentThe name of the source field.
RegexstringYes/(\d+-\d+-\d+)\s+(.*)The regular expression. You need to use parentheses () to mark the fields to be extracted.
Keys[string]Yes/["ip", "time", "method"]Specifies field names for the extracted content, for example, ["ip", "time", "method"].
NoKeyErrorbooleanNofalsefalseSpecifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.

NoMatchErrorbooleanNofalsefalseSpecifies whether the system reports an error if the regular expression you specified does not match the value of the source field.
  • true: Report an error.

  • false (default): Do not report an error.

KeepSourcebooleanNofalsefalseSpecifies whether to keep the source field in the parsed log.
  • true: Keep.

  • false (default): Do not keep.

FullMatchbooleanNotruetrueSpecifies whether to extract only when fully matched.
  • true (default): The field values are extracted only if all fields you set in the Keys parameter can be matched with the value of the source field through the regular expression in the Regex parameter.

  • false: Partial matches are also extracted.

KeepSourceIfParseError | boolean | No | true | true | Specifies whether to keep the source field in the parsed log if parsing fails.
  • true (default): Keep.

  • false: Do not keep.

Anchor mode

Parameter | Type | Required | Default value | Example | Description
TypestringYes/processor_anchorThe type of the plug-in. Set the value to processor_anchor.
SourceKey | String | Yes | / | content | The name of the source field.
Anchors | Anchor array | Yes | / | The list of anchor items.
StartStringYesEmptytimeThe starting keyword. If empty, it matches the beginning of the string.
StopStringYesEmpty\tThe ending keyword. If empty, it matches the end of the string.
FieldNameStringYesEmptytimeSpecifies a field name for the extracted content.
FieldTypeStringYesEmptystringThe type of the field. Valid values are string or json.
ExpondJsonbooleanNofalsefalseSpecifies whether to perform JSON expansion.
  • true: Expand.

  • false (default): Do not expand.

ExpondConnecterStringNo__The connector for JSON expansion. The default value is an underscore (_).
MaxExpondDepthIntNo00The maximum depth for JSON expansion. The default value is 0, which means no limit.
NoAnchorErrorBooleanNofalsefalseSpecifies whether the system reports an error when an anchor item cannot be found.
  • true: Report an error.

  • false (default): Do not report an error.

NoKeyErrorBooleanNofalsefalseSpecifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.

KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log.
  • true: Keep.

  • false (default): Do not keep.

CSV mode

Parameter | Type | Required | Default value | Example | Description
TypestringYes/processor_csvThe type of the plug-in. Set the value to processor_csv.
SourceKeyStringYes/csvThe name of the source field.
SplitKeysString arrayYes/["date", "ip", "content"]Specifies field names for the extracted content, for example, ["date", "ip", "content"]. Important If the number of fields to be split is less than the number of fields in the SplitKeys parameter, the extra fields in the SplitKeys parameter are ignored.
PreserveOthersBooleanNofalsefalseSpecifies whether to keep the excess part if the number of fields to be split is greater than the number of fields in the SplitKeys parameter.
  • true: Keep.

  • false (default): Do not keep.

ExpandOthersBooleanNofalsefalseSpecifies whether to parse the excess part.
  • true: Parse. The excess part is parsed, and you can use the ExpandKeyPrefix parameter to specify the naming prefix for the extra fields.

  • false (default): Do not parse. If you set PreserveOthers to true and ExpandOthers to false, the content of the excess part is stored in the _decode_preserve_ field.

Note If the content of the extra fields contains non-standard content, you need to normalize it according to the CSV format before storing it.
ExpandKeyPrefixStringNoThe naming prefix for the excess fields. For example, if configured as expand_, the field names will be expand_1, expand_2.
TrimLeadingSpaceBooleanNofalsefalseSpecifies whether to ignore leading spaces in field values.
  • true: Ignore.

  • false (default): Do not ignore.

SplitSep | String | No | , | , | The separator. The default value is a comma (,).
KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log.
  • true: Keep.

  • false (default): Do not keep.

NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.
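
A minimal CSV-mode processors entry might look like the following; all values are illustrative.

{
  "Type": "processor_csv",
  "SourceKey": "csv",
  "SplitKeys": ["date", "ip", "content"],
  "TrimLeadingSpace": true
}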

Single-character separator mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_split_char | The type of the plug-in. Set the value to processor_split_char.
SourceKey | String | Yes | The name of the source field.
SplitSep | String | Yes | The separator. It must be a single character and can be an invisible character, such as \u0001.
SplitKeys | String array | Yes | ["ip", "time", "method"] | Specifies field names for the extracted content, for example, ["ip", "time", "method"].
PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of fields to be split is greater than the number of fields in the SplitKeys parameter.
  • true: Keep.

  • false (default): Do not keep.

QuoteFlag | Boolean | No | false | false | Specifies whether to use a quote.
  • true: Use.

  • false (default): Do not use.

Quote | String | No | / | \u0001 | The quote. It must be a single character and can be an invisible character, such as \u0001. This parameter takes effect only when QuoteFlag is set to true.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.

NoMatchError | Boolean | No | false | false | Specifies whether the system reports an error if the separator you specified does not match the separator in the log.
  • true: Report an error.

  • false (default): Do not report an error.

KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log.
  • true: Keep.

  • false (default): Do not keep.
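
A single-character separator entry might look like the following; the separator and field names are illustrative.

{
  "Type": "processor_split_char",
  "SourceKey": "content",
  "SplitSep": ",",
  "SplitKeys": ["ip", "time", "method"]
}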

Multi-character separator mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_split_string | The type of the plug-in. Set the value to processor_split_string.
SourceKey | String | Yes | The name of the source field.
SplitSep | String | Yes | The separator. It can contain multiple characters and can be set to invisible characters, such as \u0001\u0002.
SplitKeys | String array | Yes | ["key1","key2"] | Specifies field names for the extracted content, for example, ["key1","key2"]. Note: If the number of fields to be split is less than the number of fields in the SplitKeys parameter, the extra fields in the SplitKeys parameter are ignored.
PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of fields to be split is greater than the number of fields in the SplitKeys parameter.
  • true: Keep.

  • false (default): Do not keep.

ExpandOthers | Boolean | No | false | false | Specifies whether to parse the excess part.
  • true: Parse.

  • false (default): Do not parse.

ExpandKeyPrefix | String | No | / | expand_ | The naming prefix for the excess fields. For example, if this parameter is set to expand_, the field names are expand_1 and expand_2.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.

NoMatchError | Boolean | No | false | false | Specifies whether the system reports an error if the separator you specified does not match the separator in the log.
  • true: Report an error.

  • false (default): Do not report an error.

KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log.
  • true: Keep.

  • false (default): Do not keep.
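
A multi-character separator entry might look like the following; the separator and field names are illustrative.

{
  "Type": "processor_split_string",
  "SourceKey": "content",
  "SplitSep": "||",
  "SplitKeys": ["key1", "key2"]
}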

Key-value pair mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_split_key_value | The type of the plug-in. Set the value to processor_split_key_value.
SourceKey | string | Yes | The name of the source field.
Delimiter | string | No | \t | \t | The separator between key-value pairs. The default value is a tab character (\t).
Separator | string | No | : | : | The separator between the key and the value in a single key-value pair. The default value is a colon (:).
KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log.
  • true: Keep.

  • false (default): Do not keep.

ErrIfSourceKeyNotFound | Boolean | No | true | false | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true (default): Report an error.

  • false: Do not report an error.

DiscardWhenSeparatorNotFound | Boolean | No | false | false | Specifies whether to discard the key-value pair if no matching separator is found.
  • true: Discard.

  • false (default): Do not discard.

ErrIfSeparatorNotFound | Boolean | No | true | false | Specifies whether the system reports an error when the specified separator does not exist.
  • true (default): Report an error.

  • false: Do not report an error.

ErrIfKeyIsEmpty | Boolean | No | true | false | Specifies whether the system reports an error when the key after separation is empty.
  • true (default): Report an error.

  • false: Do not report an error.

Quote | String | No | The quote. If this parameter is set and a value is enclosed in quotes, the content within the quotes is extracted as the value. Multi-character quotes are supported. By default, the quote function is disabled. Important: If the quote is a double quotation mark ("), you must escape it with a backslash (\). When a backslash (\) is used with a quote inside the quoted value, the backslash (\) is output as part of the value.
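
A key-value pair entry might look like the following; the delimiter and separator shown are the documented defaults.

{
  "Type": "processor_split_key_value",
  "SourceKey": "content",
  "Delimiter": "\t",
  "Separator": ":"
}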

Grok mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_grok | The type of the plug-in. Set the value to processor_grok.
CustomPatternDir | String array | No | The directory that contains custom Grok pattern files. The processor_grok plug-in reads all files in the directory. If this parameter is not specified, no custom Grok pattern files are imported. Important: After you update a custom Grok pattern file, you must restart Logtail for the change to take effect.
CustomPatterns | Map | No | The custom Grok patterns, where the key is the rule name and the value is the Grok expression. For the expressions that are supported by default, see processor_grok. If the expression you need is not included, enter a custom Grok expression in the Match parameter. If this parameter is not specified, no custom Grok patterns are used.
SourceKey | String | No | content | content | The name of the source field. The default value is the content field.
Match | String array | Yes | The array of Grok expressions. The processor_grok plug-in matches the log against the expressions in this list from top to bottom and returns the first successful extraction result. Note: Configuring multiple Grok expressions may affect performance. We recommend configuring no more than five.
TimeoutMilliSeconds | Long | No | 0 | The maximum time allowed for extracting fields with a Grok expression, in milliseconds. If this parameter is not specified or is set to 0, no timeout applies.
IgnoreParseFailure | Boolean | No | true | true | Specifies whether to ignore logs that fail to be parsed.
  • true (default): Ignore.

  • false: Discard logs that fail to be parsed.

KeepSource | Boolean | No | true | true | Specifies whether to keep the source field after successful parsing.
  • true (default): Keep.

  • false: Discard.

NoKeyError | Boolean | No | false | true | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.

NoMatchError | Boolean | No | true | true | Specifies whether the system reports an error when none of the expressions in the Match parameter match the log.
  • true (default): Report an error.

  • false: Do not report an error.

TimeoutError | Boolean | No | true | true | Specifies whether the system reports an error on a match timeout.
  • true (default): Report an error.

  • false: Do not report an error.
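
A Grok-mode entry might look like the following sketch; the Grok expression is only an illustration and must be adapted to your log format.

{
  "Type": "processor_grok",
  "SourceKey": "content",
  "Match": [
    "%{IP:ip} %{WORD:method} %{NUMBER:status}"
  ]
}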

Add fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_add_fields | The type of the plug-in. Set the value to processor_add_fields.
Fields | Map | Yes | The fields to add, in key-value pair format. Multiple fields can be added.
IgnoreIfExist | Boolean | No | false | false | Specifies whether to ignore a field to be added if a field with the same name already exists.
  • true: Ignore.

  • false (default): Do not ignore.
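
An add-fields entry might look like the following; the field names and values are illustrative.

{
  "Type": "processor_add_fields",
  "Fields": {
    "region": "cn-hangzhou",
    "env": "test"
  },
  "IgnoreIfExist": false
}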

Drop fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_drop | The type of the plug-in. Set the value to processor_drop.
DropKeys | String array | Yes | The fields to drop. Multiple fields can be specified.
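
A drop-fields entry might look like the following; the field names are illustrative.

{
  "Type": "processor_drop",
  "DropKeys": ["key1", "key2"]
}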

Rename fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_rename | The type of the plug-in. Set the value to processor_rename.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if a source field you specified is not in the log.
  • true: Report an error.

  • false (default): Do not report an error.

SourceKeys | String array | Yes | The source fields to be renamed.
DestKeys | String array | Yes | The field names after renaming.
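
A rename-fields entry might look like the following; the field names are illustrative.

{
  "Type": "processor_rename",
  "SourceKeys": ["ip"],
  "DestKeys": ["client_ip"]
}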

Pack fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_packjson | The type of the plug-in. Set the value to processor_packjson.
SourceKeys | String array | Yes | The source fields to be packed.
DestKey | String | No | The destination field after packing.
KeepSource | Boolean | No | true | true | Specifies whether to keep the source fields in the parsed log.
  • true (default): Keep.

  • false: Discard.

AlarmIfIncomplete | Boolean | No | true | true | Specifies whether the system reports an error if a source field you specified is not in the raw log.
  • true (default): Report an error.

  • false: Do not report an error.
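
A pack-fields entry might look like the following; the field names are illustrative.

{
  "Type": "processor_packjson",
  "SourceKeys": ["ip", "method"],
  "DestKey": "packed",
  "KeepSource": true
}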

Expand JSON fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_json | The type of the plug-in. Set the value to processor_json.
SourceKey | String | Yes | The name of the source field to be expanded.
NoKeyError | Boolean | No | true | true | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true (default): Report an error.

  • false: Do not report an error.

ExpandDepth | Int | No | 0 | 1 | The depth of JSON expansion. The default value is 0, which means no limit. A value of 1 indicates the current level, and so on.
ExpandConnector | String | No | _ | _ | The connector for JSON expansion. The default value is an underscore (_).
Prefix | String | No | The prefix added to field names during JSON expansion.
KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log.
  • true (default): Keep.

  • false: Discard.

UseSourceKeyAsPrefix | Boolean | No | Specifies whether to use the source field name as a prefix for all expanded JSON field names.
KeepSourceIfParseError | Boolean | No | true | true | Specifies whether to keep the source log if parsing fails.
  • true (default): Keep.

  • false: Discard.

ExpandArray | Boolean | No | false | false | Specifies whether to expand array types. This parameter is supported by Logtail 1.8.0 and later.
  • false (default): Do not expand.

  • true: Expand. For example, {"k":["1","2"]} is expanded to {"k[0]":"1","k[1]":"2"}.
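
An expand-JSON entry might look like the following; all values are illustrative.

{
  "Type": "processor_json",
  "SourceKey": "content",
  "ExpandDepth": 1,
  "ExpandConnector": "_",
  "KeepSource": true
}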

Filter logs

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_filter_regex | The type of the plug-in. Set the value to processor_filter_regex.
Include | JSON Object | No | The key is a log field name and the value is the regular expression that the field value must match. Key-value pairs are combined with the AND operator. A log is collected only if the values of all specified fields match their regular expressions.
Exclude | JSON Object | No | The key is a log field name and the value is the regular expression that the field value must match. Key-value pairs are combined with the OR operator. A log is discarded if the value of any specified field matches its regular expression.
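
A filter entry might look like the following; the field names and regular expressions are illustrative.

{
  "Type": "processor_filter_regex",
  "Include": {
    "level": "WARNING|ERROR"
  },
  "Exclude": {
    "method": "^HEAD$"
  }
}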

Extract log time

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_gotime | The type of the plug-in. Set the value to processor_gotime.
SourceKey | String | Yes | The name of the source field.
SourceFormat | String | Yes | The format of the source time.
SourceLocation | Int | Yes | The time zone of the source time. If this parameter is empty, the time zone of the host or container on which Logtail runs is used.
DestKey | String | Yes | The destination field after parsing.
DestFormat | String | Yes | The time format after parsing.
DestLocation | Int | No | The time zone after parsing. If this parameter is empty, the local time zone is used.
SetTime | Boolean | No | true | true | Specifies whether to set the parsed time as the log time.
  • true (default): Yes.

  • false: No.

KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log.
  • true (default): Keep.

  • false: Do not keep.

NoKeyError | Boolean | No | true | true | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true (default): Report an error.

  • false: Do not report an error.

AlarmIfFail | Boolean | No | true | true | Specifies whether the system reports an error if it fails to extract the log time.
  • true (default): Report an error.

  • false: Do not report an error.
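
An extract-log-time entry might look like the following sketch. The field names and time zone are illustrative, and the time formats shown assume Go-style layouts.

{
  "Type": "processor_gotime",
  "SourceKey": "time",
  "SourceFormat": "2006-01-02 15:04:05",
  "SourceLocation": 8,
  "DestKey": "new_time",
  "DestFormat": "2006/01/02 15:04:05",
  "SetTime": true
}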

Convert IP addresses

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_geoip | The type of the plug-in. Set the value to processor_geoip.
SourceKey | String | Yes | The name of the source field that contains the IP address to be converted.
DBPath | String | Yes | /user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb | The full path of the GeoIP database, for example, /user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.

NoMatchError | Boolean | No | true | true | Specifies whether the system reports an error if the IP address is invalid or not found in the database.
  • true (default): Report an error.

  • false: Do not report an error.

KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log.
  • true (default): Keep.

  • false: Do not keep.

Language | String | No | zh-CN | zh-CN | The language attribute. The default value is zh-CN. Important: Make sure that your GeoIP database contains the corresponding language.
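
A convert-IP-addresses entry might look like the following; the source field name is illustrative and the database path is the example path from the table.

{
  "Type": "processor_geoip",
  "SourceKey": "ip",
  "DBPath": "/user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb",
  "Language": "zh-CN"
}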

Data masking

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_desensitize | The type of the plug-in. Set the value to processor_desensitize.
SourceKey | String | Yes | The name of the log field.
Method | String | Yes | const | The data masking method. Valid values:
  • const: Replaces sensitive content with a string. You can specify the target string with the ReplaceString parameter.

  • md5: Replaces sensitive content with its corresponding MD5 value.

Match | String | No | full | full | The method used to extract sensitive content. Valid values:
  • full (default): Extracts all, which means replacing all content in the target field value.

  • regex: Uses a regular expression to extract sensitive content.

ReplaceString | String | No | The string used to replace sensitive content. This parameter is required when Method is set to const.
RegexBegin | String | No | The regular expression that matches the prefix of the sensitive content. This parameter is required when Match is set to regex.
RegexContent | String | No | The regular expression that matches the sensitive content. This parameter is required when Match is set to regex.
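
A data-masking entry might look like the following sketch; the regular expressions and replacement string are illustrative.

{
  "Type": "processor_desensitize",
  "SourceKey": "content",
  "Method": "const",
  "Match": "regex",
  "ReplaceString": "********",
  "RegexBegin": "password:",
  "RegexContent": "[^,]+"
}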

Map field values

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_dict_map | The type of the plug-in. Set the value to processor_dict_map.
SourceKey | String | Yes | The name of the source field.
MapDict | Map | No | The mapping dictionary. If the dictionary is small, you can set it directly with this parameter instead of providing a local CSV dictionary file. Important: If you set the DictFilePath parameter, the MapDict parameter does not take effect.
DictFilePath | String | No | The path of a dictionary file in CSV format. The separator of the CSV file is a comma (,), and fields are quoted with double quotation marks (").
DestKey | String | No | The name of the field after mapping.
HandleMissing | Boolean | No | false | false | Specifies whether the system processes the target field if it is missing from the raw log.
  • true: Process. The system fills it with the value from the Missing parameter.

  • false (default): Do not process.

Missing | String | No | Unknown | Unknown | The fill value used when the target field is missing from the raw log. The default value is Unknown. This parameter takes effect only when HandleMissing is set to true.
MaxDictSize | Int | No | 1000 | 1000 | The maximum size of the mapping dictionary. The default value is 1000, which means that up to 1,000 mapping rules can be stored. To limit the plug-in's memory usage on the server, you can reduce this value.
Mode | String | No | overwrite | overwrite | The processing method used when the mapped field already exists in the raw log.
  • overwrite (default): Overwrites the original field.

  • fill: Does not overwrite the original field.
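
A map-field-values entry might look like the following; the dictionary contents and field names are illustrative.

{
  "Type": "processor_dict_map",
  "SourceKey": "status",
  "MapDict": {
    "200": "OK",
    "404": "Not Found"
  },
  "DestKey": "status_text",
  "HandleMissing": true,
  "Missing": "Unknown"
}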

Encrypt fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_encrypt | The type of the plug-in. Set the value to processor_encrypt.
SourceKey | String array | Yes | The names of the source fields to encrypt.
EncryptionParameters | Object | Yes | The key-related configuration.
Key | String | Yes | The encryption key. It must be 64 hexadecimal characters.
IV | String | No | 00000000000000000000000000000000 | The initial vector for encryption. It must be 32 hexadecimal characters. The default value is 00000000000000000000000000000000.
KeyFilePath | String | No | The path of the file from which the encryption parameters are read. If this parameter is not configured, the encryption parameters are read from the Logtail configuration (Input Configuration - File Path).
KeepSourceValueIfError | Boolean | No | false | false | Specifies whether to keep the value of the source field if encryption fails.
  • true: Keep.

  • false (default): Do not keep.

If encryption fails, the field value is replaced with ENCRYPT_ERROR.
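
An encrypt-fields entry might look like the following sketch. Nesting Key and IV under EncryptionParameters follows the parameter grouping in the table above; the key shown is a placeholder, not a real key.

{
  "Type": "processor_encrypt",
  "SourceKey": ["password"],
  "EncryptionParameters": {
    "Key": "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
    "IV": "00000000000000000000000000000000"
  }
}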

Replace strings

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_string_replace | The type of the plug-in. Set the value to processor_string_replace.
SourceKey | String | Yes | The name of the source field.
Method | String | Yes | The matching method. Valid values:
  • const: Replaces with a string.

  • regex: Replaces using a regular expression.

  • unquote: Removes escape characters.

Match | String | No | The content to match.
  • When Method is set to const, enter the string that matches the content to be replaced. If multiple strings match, all are replaced.

  • When Method is set to regex, enter the regular expression that matches the content to be replaced. If multiple strings match, all are replaced. You can also use regex grouping to match a specific group.

  • When Method is set to unquote, you do not need to configure this parameter.

ReplaceString | String | No | The string for replacement. The default value is an empty string ("").
  • When Method is set to const, enter the string to replace the original content.

  • When Method is set to regex, enter the string to replace the original content. Supports replacement based on regex groups.

  • When Method is set to unquote, you do not need to configure this parameter.

DestKey | String | No | Specifies a new field for the replaced content. By default, no new field is added.
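
A replace-strings entry might look like the following; the match and replacement strings are illustrative.

{
  "Type": "processor_string_replace",
  "SourceKey": "content",
  "Method": "const",
  "Match": "error",
  "ReplaceString": "ERROR"
}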

Encode and decode data

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_base64_encoding | The type of the plug-in. Set the value to processor_base64_encoding.
SourceKey | String | Yes | The name of the source field.
NewKey | String | Yes | The name of the result field after encoding.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the source field you specified is not in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.
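
An encode entry might look like the following; the field names are illustrative.

{
  "Type": "processor_base64_encoding",
  "SourceKey": "content",
  "NewKey": "content_base64"
}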

Convert logs to metrics

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_log_to_sls_metric | The type of the plug-in. Set the value to processor_log_to_sls_metric.
MetricTimeKey | String | No | The time field in the log, which maps to the __time_nano__ field in the time series data. By default, the value of the __time__ field in the log is used. Make sure that the specified field is a valid, properly formatted timestamp. Unix timestamps in seconds (10 digits), milliseconds (13 digits), microseconds (16 digits), and nanoseconds (19 digits) are supported.
MetricLabelKeys | []String | Yes | The list of keys for the __labels__ field. Each key must match the regular expression ^[a-zA-Z_][a-zA-Z0-9_]*$, and the value cannot contain a vertical bar (|) or #$#. For more information, see Time series data (Metric). The __labels__ field itself cannot be added to the MetricLabelKeys parameter. If the source field contains a __labels__ field, its value is appended to the new __labels__ field.
MetricValues | Map | Yes | Specifies the metric name and metric value. The metric name corresponds to the __name__ field and must match the regular expression ^[a-zA-Z_:][a-zA-Z0-9_:]*$. The metric value corresponds to the __value__ field and must be of the Double type. For more information, see Time series data (Metric).
CustomMetricLabels | Map | No | The custom __labels__ field. Each key must match the regular expression ^[a-zA-Z_][a-zA-Z0-9_]*$, and the value cannot contain a vertical bar (|) or #$#. For more information, see Time series data (Metric).
IgnoreError | Boolean | No | Specifies whether to output an error log when no logs match. The default value is false, which means no error log is output.
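
A convert-logs-to-metrics entry might look like the following sketch. The names metric_name_field and metric_value_field are hypothetical placeholders for the log fields that carry the metric name and value; adjust them to your data.

{
  "Type": "processor_log_to_sls_metric",
  "MetricLabelKeys": ["host", "region"],
  "MetricValues": {
    "metric_name_field": "metric_value_field"
  },
  "CustomMetricLabels": {
    "app": "nginx"
  }
}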

Convert logs to traces

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_otel_trace | The type of the plug-in. Set the value to processor_otel_trace.
SourceKey | String | Yes | The name of the source field.
Format | String | Yes | json | The format after conversion. Valid values: protobuf, json, and protojson.
NoKeyError | Boolean | No | false | true | Specifies whether to report an error when the source field is not in the log. The default value is false.
TraceIDNeedDecode | Boolean | No | Specifies whether to perform Base64 decoding on the TraceID. The default value is false. When Format is set to protojson and the TraceID is Base64-encoded, you must set TraceIDNeedDecode to true. Otherwise, the conversion fails.
SpanIDNeedDecode | Boolean | No | Specifies whether to perform Base64 decoding on the SpanID. The default value is false. When Format is set to protojson and the SpanID is Base64-encoded, you must set SpanIDNeedDecode to true. Otherwise, the conversion fails.
ParentSpanIDNeedDecode | Boolean | No | Specifies whether to perform Base64 decoding on the ParentSpanID. The default value is false. When Format is set to protojson and the ParentSpanID is Base64-encoded, you must set ParentSpanIDNeedDecode to true. Otherwise, the conversion fails.
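
A convert-logs-to-traces entry might look like the following; the values are illustrative.

{
  "Type": "processor_otel_trace",
  "SourceKey": "content",
  "Format": "json",
  "NoKeyError": true
}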

Output plug-ins

SLS output plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | flusher_sls | The type of the plug-in. Set the value to flusher_sls.
Logstore | string | Yes | / | test-logstore | The name of the Logstore.
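
For reference, an SLS output plug-in entry might look like the following; the Logstore name is illustrative.

{
  "Type": "flusher_sls",
  "Logstore": "test-logstore"
}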

Response elements

None defined. A successful response returns an empty JSON object ({}).

Examples

Success response

JSON format

{}

Error codes

See Error Codes for a complete list.

Release notes

See Release Notes for a complete list.