
Simple Log Service: CreateLogtailPipelineConfig

Last Updated: Dec 23, 2025

Creates a Logtail pipeline configuration.

Try it now

You can call this API in OpenAPI Explorer without manually signing requests. After a successful call, OpenAPI Explorer automatically generates SDK code that matches your request parameters, which you can download for local use with built-in credential security.

RAM authorization

The table below describes the authorization required to call this API. You can define it in a Resource Access Management (RAM) policy. The table's columns are detailed below:

  • Action: The action that you can specify in the Action element of a RAM policy statement to grant permission to perform this operation.

  • API: The API that you can call to perform the action.

  • Access level: The predefined level of access granted for each API. Valid values: create, list, get, update, and delete.

  • Resource type: The type of resource that supports authorization for the action. It indicates whether the action supports resource-level permissions. The specified resource must be compatible with the action; otherwise, the policy does not take effect.

    • For APIs with resource-level permissions, required resource types are marked with an asterisk (*). Specify the corresponding Alibaba Cloud Resource Name (ARN) in the Resource element of the policy.

    • For APIs without resource-level permissions, it is shown as All Resources. Use an asterisk (*) in the Resource element of the policy.

  • Condition key: The condition keys defined by the service. The key allows for granular control, applying to either actions alone or actions associated with specific resources. In addition to service-specific condition keys, Alibaba Cloud provides a set of common condition keys applicable across all RAM-supported services.

  • Dependent action: The dependent actions required to run the action. To complete the action, the RAM user or the RAM role must have the permissions to perform all dependent actions.

Action | Access level | Resource type | Condition key | Dependent action
log:CreateLogtailPipelineConfig | create | All Resources * | log:TLSVersion | None
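For example, the following RAM policy statement, a minimal sketch based on the table above, allows a RAM user or role to call this operation on all resources:

{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "log:CreateLogtailPipelineConfig",
      "Resource": "*"
    }
  ]
}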

Request syntax

POST /pipelineconfigs HTTP/1.1
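The request body is a JSON object that carries the pipeline configuration described in the following table. The sketch below assembles the example values used on this page (configuration name test-config, file path /var/log/*.log, Logstore test); replace them with your own values.

{
  "configName": "test-config",
  "logSample": "2022-06-14 11:13:29.796 | DEBUG | __main__::1 - hello world",
  "inputs": [
    { "Type": "input_file", "FilePaths": ["/var/log/*.log"] }
  ],
  "processors": [
    { "Type": "processor_parse_json_native", "SourceKey": "content" }
  ],
  "flushers": [
    { "Type": "flusher_sls", "Logstore": "test" }
  ]
}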

Request parameters

Parameter

Type

Required

Description

Example

project

string

Yes

The name of the project.

test-project

body

object

No

The content of the Logtail pipeline configuration.

configName

string

Yes

The name of the configuration.

Note

The configuration name must be unique within the project and cannot be modified after the configuration is created. The name must meet the following requirements:

  • It can contain only lowercase letters, digits, hyphens (-), and underscores (_).

  • It must start and end with a lowercase letter or a digit.

  • It must be 2 to 128 characters in length.

test-config

logSample

string

No

A sample log. Multiple logs are supported.

2022-06-14 11:13:29.796 | DEBUG | __main__::1 - hello world

global

object

No

The global configuration.

inputs

array<object>

Yes

The list of input plug-ins.

Important Currently, you can configure only one input plug-in.

object

No

The input plug-in.

Note

For more information about the parameters of the file input plug-in, see File plug-in. For more information about the parameters of other input plug-ins, see Processing plug-ins.

{ "Type": "input_file", "FilePaths": ["/var/log/*.log"] }

processors

array<object>

No

The list of processing plug-ins.

Note

Processing plug-ins are classified into native processing plug-ins and extension processing plug-ins. For more information, see Processing plug-ins.

Important
  • Native plug-ins can be used only to collect text logs.

  • You cannot add native plug-ins and extension plug-ins at the same time.

  • When you use native plug-ins, the following requirements must be met:
    • The first processing plug-in must be a regular expression parsing plug-in, a separator parsing plug-in, a JSON parsing plug-in, an NGINX parsing plug-in, an Apache parsing plug-in, or an IIS parsing plug-in.

    • After the first processing plug-in, you can add only one time parsing plug-in, one filter plug-in, and multiple data masking plug-ins.

object

No

The processing plug-in.

Note

For more information about native processing plug-ins and extension processing plug-ins, see Processing plug-ins.

{ "Type": "processor_parse_json_native", "SourceKey": "content" }

aggregators

array<object>

No

The list of aggregation plug-ins.

Important This parameter is valid only when an extension processing plug-in is used. You can use a maximum of one aggregation plug-in.

object

No

The aggregation plug-in.

flushers

array<object>

Yes

The list of output plug-ins.

Important Currently, you can configure only one flusher_sls plug-in.

object

No

The output plug-in.

{ "Type": "flusher_sls", "Logstore": "test" }

task

object

No

Global configuration

Parameter | Type | Required | Default value | Example | Description
TopicType | string | No | Empty | filepath | The topic type. Valid values:
  • filepath: extracts information from the log file path to use as a topic. This value is valid only when the input plug-in is input_file.

  • machine_group_topic: uses the topic of the machine group to which the configuration is applied as the topic.

  • custom: a custom topic. For more information, see Log topics.

TopicFormat | string | No (required if TopicType is set to filepath or custom) | / | /var/log/(.*).log | The topic format.
EnableTimestampNanosecond | bool | No | false | false | Specifies whether to enable nanosecond precision for timestamps.
PipelineMetaTagKey | object | No | Empty | {"HOST_NAME":"__hostname__"}
Important This parameter is supported only by LoongCollector 3.0.10 and later.
Controls the tags related to LoongCollector information. The key is the tag parameter name, and the value is the field name of the tag in the log. If the value is __default__, the default value is used. If the value is an empty string, the tag is deleted. The following tags can be configured:
  • HOST_NAME: the hostname. This tag is added by default. The default value is "__hostname__".

  • AGENT_TAG: the custom identifier. This tag is added by default. The default value is "__user_defined_id__".

  • HOST_ID: the host ID. This tag is not added by default. The default value is "__host_id__".

  • CLOUD_PROVIDER: This tag is not added by default. The default value is "__cloud_provider__".
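For example, a global block that extracts the topic from the file path and keeps the default hostname tag might look as follows. This is a sketch; TopicFormat reuses the example value from the table above.

"global": {
  "TopicType": "filepath",
  "TopicFormat": "/var/log/(.*).log",
  "EnableTimestampNanosecond": false,
  "PipelineMetaTagKey": { "HOST_NAME": "__hostname__" }
}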

Input plug-ins

File input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | input_file | The type of the plug-in. Set the value to input_file.
FilePaths | [string] | Yes | / | ["/var/log/*.log"] | The list of paths to the log files that you want to collect. Currently, you can specify only one path. You can use the wildcard characters (*) and (**) in the path. The wildcard character (**) can appear only once and only before the filename.
MaxDirSearchDepth | uint | No | 0 | 0 | The maximum depth of the directories that the wildcard character (**) in the file path can match. This parameter is valid only when the wildcard character (**) is used in the log path. Valid values: 0 to 1000.
ExcludeFilePaths | [string] | No | Empty | ["/home/admin/*.log"] | The blacklist of file paths. The paths must be absolute paths. You can use the wildcard character (*).
ExcludeFiles | [string] | No | Empty | ["app*.log", "password"] | The blacklist of filenames. You can use the wildcard character (*).
ExcludeDirs | [string] | No | Empty | ["/home/admin/dir1", "/home/admin/dir2*"] | The blacklist of directories. The paths must be absolute paths. You can use the wildcard character (*).
FileEncoding | string | No | utf8 | utf8 | The encoding format of the file. Valid values: utf8 and gbk.
TailSizeKB | uint | No | 1024 | 1024 | The size of the data to be collected from the end of a file when the configuration first takes effect. If the file size is smaller than this value, data is collected from the beginning of the file. Valid values: 0 to 10485760 KB.
Multiline | object | No | Empty | / | The multiline aggregation options.
Multiline.Mode | string | No | custom | custom | The multiline aggregation mode. Valid values: custom and JSON.
Multiline.StartPattern | string | No (required if Multiline.Mode is set to custom) | Empty | \d+-\d+-\d+.* | The regular expression for the start of a log entry.
EnableContainerDiscovery | bool | No | false | true | Specifies whether to enable container discovery. This parameter is valid only when Logtail runs in DaemonSet mode and the collection file path is a path within a container.
ContainerFilters | object | No | Empty | / | The container filtering options. Multiple options are combined with a logical AND. This parameter is valid only if you set EnableContainerDiscovery to true.
ContainerFilters.K8sNamespaceRegex | string | No | Empty | default | For containers deployed in a Kubernetes environment, specifies the namespace condition for the pods where the containers to be collected are located. If you do not add this parameter, all containers are collected. Regular expressions are supported.
ContainerFilters.K8sPodRegex | string | No | Empty | test-pod | For containers deployed in a Kubernetes environment, specifies the name condition for the pods where the containers to be collected are located. If you do not add this parameter, all containers are collected. Regular expressions are supported.
ContainerFilters.IncludeK8sLabel | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the label conditions for the pods where the containers to be collected are located. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the pod label name, and the value is the pod label value. The following rules apply:
  • If the value in the map is empty, all pods that have a label with the specified key are matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, pods are matched if they have a label with the specified key and the label value matches the regular expression.

    • In other cases, pods are matched if they have a label with the specified key and the specified label value.

ContainerFilters.ExcludeK8sLabel | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the label conditions for the pods where the containers to be excluded from collection are located. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the pod label name, and the value is the pod label value. The following rules apply:
  • If the value in the map is empty, all pods that have a label with the specified key are matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, pods are matched if they have a label with the specified key and the label value matches the regular expression.

    • In other cases, pods are matched if they have a label with the specified key and the specified label value.

ContainerFilters.K8sContainerRegex | string | No | Empty | test-container | For containers deployed in a Kubernetes environment, specifies the name condition for the containers to be collected. If you do not add this parameter, all containers are collected. Regular expressions are supported.
ContainerFilters.IncludeEnv | map | No | Empty | / | Specifies the environment variable conditions for the containers to be collected. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the environment variable name, and the value is the environment variable value. The following rules apply:
  • If the value in the map is empty, all containers that have an environment variable with the specified key are matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, containers are matched if they have an environment variable with the specified key and the environment variable value matches the regular expression.

    • In other cases, containers are matched if they have an environment variable with the specified key and the specified environment variable value.

ContainerFilters.ExcludeEnv | map | No | Empty | / | Specifies the environment variable conditions for the containers to be excluded from collection. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the environment variable name, and the value is the environment variable value. The following rules apply:
  • If the value in the map is empty, all containers that have an environment variable with the specified key are matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, containers are matched if they have an environment variable with the specified key and the environment variable value matches the regular expression.

    • In other cases, containers are matched if they have an environment variable with the specified key and the specified environment variable value.

ContainerFilters.IncludeContainerLabel | map | No | Empty | / | Specifies the label conditions for the containers to be collected. Multiple conditions are combined with a logical OR. If you do not add this parameter, the default value is empty, which means all containers are collected. Regular expressions are supported. The key in the map is the container label name, and the value is the container label value. The following rules apply:
  • If the value in the map is empty, all containers that have a label with the specified key are matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, containers are matched if they have a label with the specified key and the label value matches the regular expression.

    • In other cases, containers are matched if they have a label with the specified key and the specified label value.

ContainerFilters.ExcludeContainerLabel | map | No | Empty | / | Specifies the label conditions for the containers to be excluded from collection. Multiple conditions are combined with a logical OR. If you do not add this parameter, the default value is empty, which means all containers are collected. Regular expressions are supported. The key in the map is the container label name, and the value is the container label value. The following rules apply:
  • If the value in the map is empty, all containers that have a label with the specified key are matched.

  • If the value in the map is not empty:
    • If the value starts with ^ and ends with $, containers are matched if they have a label with the specified key and the label value matches the regular expression.

    • In other cases, containers are matched if they have a label with the specified key and the specified label value.

ExternalK8sLabelTag | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the tags related to pod labels that you want to add to logs. The key in the map is the pod label name, and the value is the corresponding tag name. For example, if you add app: k8s_label_app to the map and a pod has the label app=serviceA, the tag __tag__:k8s_label_app: serviceA is added to the log. If the pod does not have the app label, the empty field __tag__:k8s_label_app: is added.
ExternalEnvTag | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the tags related to container environment variables that you want to add to logs. The key in the map is the environment variable name, and the value is the corresponding tag name. For example, if you add VERSION: env_version to the map and a container has the environment variable VERSION=v1.0.0, the tag __tag__:env_version: v1.0.0 is added to the log. If the container does not have the VERSION environment variable, the empty field __tag__:env_version: is added.
CollectingContainersMeta | bool | No | false | true | Specifies whether to enable container metadata preview.
AppendingLogPositionMeta | bool | No | false | false | Specifies whether to add the metadata of the file to which the log belongs to the log. The metadata includes the __tag__:__inode__ field and the __file_offset__ field.
AllowingIncludedByMultiConfigs | bool | No | false | false | Specifies whether to allow the current configuration to collect files that are already matched by other configurations.
Tags | object | No | Empty | {"FileInodeTagKey":"__inode__"}
Important This parameter is supported only by LoongCollector 3.0.10 and later.
Controls the tags related to file collection. The key is the tag parameter name, and the value is the field name of the tag in the log. If the value is __default__, the default value is used. If the value is an empty string, the tag is deleted. The following tags can be configured:
  • FileInodeTagKey: the file inode. This tag is not added by default. The default value is "__inode__".

  • FilePathTagKey: the file path. This tag is added by default. The default value is "__path__".

The following parameters are valid only if you set the EnableContainerDiscovery parameter to true.
  • K8sNamespaceTagKey: the namespace of the container where the file is located. This tag is added by default. The default value is "_namespace_".

  • K8sPodNameTagKey: the name of the pod where the file is located. This tag is added by default. The default value is "_pod_name_".

  • K8sPodUidTagKey: the UID of the pod where the file is located. This tag is added by default. The default value is "_pod_uid_".

  • ContainerNameTagKey: the name of the container where the file is located. This tag is added by default. The default value is "_container_name_".

  • ContainerIpTagKey: the IP address of the container where the file is located. This tag is added by default. The default value is "_container_ip_".

  • ContainerImageNameTagKey: the image of the container where the file is located. This tag is added by default. The default value is "_image_name_".

FileOffsetKey | string | No | Empty | __file_offset__
Important This parameter is supported only by LoongCollector 3.0.10 and later.
The tag for the position of the log in the file. This tag is not added by default. The default value is __file_offset__. If the value is __default__, the default value is used. If the value is an empty string, the tag is deleted. If the AppendingLogPositionMeta parameter is specified along with the Tags.FileInodeTagKey or FileOffsetKey parameter, the AppendingLogPositionMeta parameter is ignored.
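As an illustration, the following input_file sketch combines the parameters above to collect *.log files from containers in the default namespace and attach the pod's app label as a tag. The label value serviceA and the tag name k8s_label_app reuse the examples from the table above; adjust them to your workload.

{
  "Type": "input_file",
  "FilePaths": ["/var/log/*.log"],
  "TailSizeKB": 1024,
  "EnableContainerDiscovery": true,
  "ContainerFilters": {
    "K8sNamespaceRegex": "default",
    "IncludeK8sLabel": { "app": "^(serviceA)$" }
  },
  "ExternalK8sLabelTag": { "app": "k8s_label_app" }
}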

Container standard output (legacy)

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_docker_stdout | The type of the plug-in. Set the value to service_docker_stdout.
Stdout | Boolean | No | true | true | Specifies whether to collect standard output (stdout).
Stderr | Boolean | No | true | true | Specifies whether to collect standard error (stderr).
StartLogMaxOffset | Integer | No | 128 × 1024 | 131072 | The length of historical data to be retrieved during the first collection, in bytes. We recommend that you set this value to a number between 131072 and 1048576.
IncludeLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty

The whitelist of container labels, which is used to specify the containers to be collected. By default, this parameter is empty, which indicates that the standard output of all containers is collected. If you want to set a whitelist of container labels, LabelKey is required and LabelValue is optional.

  • If LabelValue is empty, all containers that have a label with the specified LabelKey are matched.

  • If LabelValue is not empty, only containers that have a label with the specified LabelKey and LabelValue are matched.

    By default, LabelValue is matched as a string. A match is found only if LabelValue is identical to the value of the container label. If the value starts with ^ and ends with $, it is matched as a regular expression. For example, if you set LabelKey to io.kubernetes.container.name and LabelValue to ^(nginx|cube)$, containers named nginx and cube are matched.

Multiple whitelists are combined with a logical OR. A container is matched if its label meets the condition of any whitelist.

ExcludeLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty

The blacklist of container labels, which is used to exclude containers from collection. By default, this parameter is empty, which indicates that no containers are excluded. If you want to set a blacklist of container labels, LabelKey is required and LabelValue is optional.

  • If LabelValue is empty, all containers that have a label with the specified LabelKey are excluded.

  • If LabelValue is not empty, only containers that have a label with the specified LabelKey and LabelValue are excluded.

    By default, LabelValue is matched as a string. A match is found only if LabelValue is identical to the value of the container label. If the value starts with ^ and ends with $, it is matched as a regular expression. For example, if you set LabelKey to io.kubernetes.container.name and LabelValue to ^(nginx|cube)$, containers named nginx and cube are matched.

Multiple blacklists are combined with a logical OR. A container is excluded if its label meets the condition of any blacklist.

IncludeEnv | Map, where EnvKey and EnvValue are of the String type | No | Empty

The whitelist of environment variables, which is used to specify the containers to be collected. By default, this parameter is empty, which indicates that the standard output of all containers is collected. If you want to set a whitelist of environment variables, EnvKey is required and EnvValue is optional.

  • If EnvValue is empty, all containers that have an environment variable with the specified EnvKey are matched.

  • If EnvValue is not empty, only containers that have an environment variable with the specified EnvKey and EnvValue are matched.

    By default, EnvValue is matched as a string. A match is found only if EnvValue is identical to the value of the environment variable. If the value starts with ^ and ends with $, it is matched as a regular expression. For example, if you set EnvKey to NGINX_SERVICE_PORT and EnvValue to ^(80|6379)$, containers with a service port of 80 or 6379 are matched.

Multiple whitelists are combined with a logical OR. A container is matched if its environment variable meets the condition of any key-value pair.

ExcludeEnv | Map, where EnvKey and EnvValue are of the String type | No | Empty

The blacklist of environment variables, which is used to exclude containers from collection. By default, this parameter is empty, which indicates that no containers are excluded. If you want to set a blacklist of environment variables, EnvKey is required and EnvValue is optional.

  • If EnvValue is empty, the logs of all containers that have an environment variable with the specified EnvKey are excluded.

  • If EnvValue is not empty, only containers that have an environment variable with the specified EnvKey and EnvValue are excluded.

    By default, EnvValue is matched as a string. A match is found only if EnvValue is identical to the value of the environment variable. If the value starts with ^ and ends with $, it is matched as a regular expression. For example, if you set EnvKey to NGINX_SERVICE_PORT and EnvValue to ^(80|6379)$, containers with a service port of 80 or 6379 are matched.

Multiple blacklists are combined with a logical OR. A container is excluded if its environment variable meets the condition of any key-value pair.

IncludeK8sLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty

Specifies the containers to be collected using a whitelist of Kubernetes labels that are defined in template.metadata. If you want to set a whitelist of Kubernetes labels, LabelKey is required and LabelValue is optional.

  • If LabelValue is empty, all containers that have a Kubernetes label with the specified LabelKey are matched.

  • If LabelValue is not empty, only containers that have a Kubernetes label with the specified LabelKey and LabelValue are matched.

    By default, LabelValue is matched as a string. A match is found only if LabelValue is identical to the value of the Kubernetes label. If the value starts with ^ and ends with $, it is matched as a regular expression. For example, if you set LabelKey to app and LabelValue to ^(test1|test2)$, containers that have the Kubernetes label app:test1 or app:test2 are matched.

Multiple whitelists are combined with a logical OR. A container is matched if its Kubernetes label meets the condition of any whitelist.

ExcludeK8sLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty

Excludes containers from collection using a blacklist of Kubernetes labels that are defined in template.metadata. If you want to set a blacklist of Kubernetes labels, LabelKey is required and LabelValue is optional.

  • If LabelValue is empty, all containers that have a Kubernetes label with the specified LabelKey are excluded.

  • If LabelValue is not empty, only containers that have a Kubernetes label with the specified LabelKey and LabelValue are excluded.

    By default, LabelValue is matched as a string. A match is found only if LabelValue is identical to the value of the Kubernetes label. If the value starts with ^ and ends with $, it is matched as a regular expression. For example, if you set LabelKey to app and LabelValue to ^(test1|test2)$, containers that have the Kubernetes label app:test1 or app:test2 are matched.

Multiple blacklists are combined with a logical OR. A container is excluded if its Kubernetes label meets the condition of any blacklist.

K8sNamespaceRegex | String | No | Empty | ^(default|nginx)$ | Specifies the containers to be collected by namespace name. Regular expressions are supported. For example, if you set this parameter to ^(default|nginx)$, all containers in the nginx and default namespaces are matched.
K8sPodRegex | String | No | Empty | ^(nginx-log-demo.*)$ | Specifies the containers to be collected by pod name. Regular expressions are supported. For example, if you set this parameter to ^(nginx-log-demo.*)$, all containers in pods whose names start with nginx-log-demo are matched.
K8sContainerRegex | String | No | Empty | ^(container-test)$ | Specifies the containers to be collected by container name. The Kubernetes container name is defined in spec.containers. Regular expressions are supported. For example, if you set this parameter to ^(container-test)$, all containers named container-test are matched.
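The following service_docker_stdout sketch collects stdout and stderr only from containers in the default or nginx namespaces whose Kubernetes label app is test1 or test2. All values are the examples given in the table above.

{
  "Type": "service_docker_stdout",
  "Stdout": true,
  "Stderr": true,
  "IncludeK8sLabel": { "app": "^(test1|test2)$" },
  "K8sNamespaceRegex": "^(default|nginx)$"
}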

Data processing parameters

Parameter | Type | Required | Default value | Example | Description
BeginLineRegex | String | No | Empty

The regular expression to match the start of a log entry.

If this parameter is empty, single-line mode is used.

If the expression matches the beginning of a line, that line is treated as a new log entry. Otherwise, the line is appended to the previous log entry.

BeginLineCheckLength | Integer | No | Empty

The length to check for a match at the start of a line, in bytes.

The default value is 10 × 1024 bytes.

If the regular expression for the start of a line can be matched within the first N bytes, we recommend that you set this parameter to improve matching efficiency.

BeginLineTimeoutMs | Integer | No | Empty

The timeout period for matching the start of a line, in milliseconds.

The default value is 3000 milliseconds.

If no new log appears within 3000 milliseconds, the matching ends, and the last log entry is uploaded to Simple Log Service.

MaxLogSize | Integer | No | Empty

The maximum length of a log entry, in bytes.

The default value is 512 × 1024 bytes.

If the length of a log entry exceeds this value, the system stops searching for the start of the line and uploads the log directly.

ExternalK8sLabelTag | Map, where LabelKey and LabelValue are of the String type | No | Empty

After you set the Kubernetes label (defined in template.metadata) log tag, iLogtail adds fields related to the Kubernetes label to the log.

For example, if you set LabelKey to app and LabelValue to k8s_label_app, and a pod has the label app=serviceA, iLogtail adds this information to the log by adding the field k8s_label_app: serviceA. If the pod does not have a label named app, the empty field k8s_label_app: is added.

ExternalEnvTag | Map, where EnvKey and EnvValue are of the String type | No | Empty

After you set the container environment variable log tag, iLogtail adds fields related to the container environment variable to the log.

For example, if you set EnvKey to VERSION and EnvValue to env_version, and a container has the environment variable VERSION=v1.0.0, this information is added to the log as a tag by adding the field env_version: v1.0.0. If the container does not have an environment variable named VERSION, the empty field env_version: is added.

Data processing environment variables

Environment variable | Type | Required | Default value | Example | Description
ALIYUN_LOG_ENV_TAGS | String | No | Empty

After you set the global environment variable log tag, iLogtail adds fields related to the environment variables of the container where iLogtail resides to the log. Separate multiple environment variable names with a vertical bar (|).

For example, if you set this parameter to node_name|node_ip, and the iLogtail container exposes the relevant environment variables, this information is added to the log as tags by adding the fields node_ip:172.16.0.1 and node_name:worknode.

MySQL input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_mysql | The type of the plug-in. Set the value to service_mysql.
Address | string | No | 127.0.0.1:3306 | rm-*.mysql.rds.aliyuncs.com | The MySQL address.
User | string | No | root | root | The username that is used to log on to the MySQL database.
Password | string | No | Empty | / | The password of the user that is used to log on to the MySQL database. For higher security, set the username and password to xxx. After the collection configuration is synchronized to your local machine, find the corresponding configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify local configurations.
Important If you modify this parameter in the console, the local configuration is overwritten after synchronization.
DataBase | string | No | / | project_database | The name of the database.
DialTimeOutMs | int | No | 5000 | 5000 | The timeout period for connecting to the MySQL database, in ms.
ReadTimeOutMs | int | No | 5000 | 5000 | The timeout period for reading the MySQL query results, in ms.
StateMent | string | No | / | / | The SELECT statement. If you set CheckPoint to true, the WHERE condition in the SELECT statement must contain the checkpoint column (CheckPointColumn). You can use a question mark (?) as a placeholder to be used with the checkpoint column. For example, you can set CheckPointColumn to id, CheckPointStart to 0, and StateMent to SELECT * from ... where id > ?. After each collection, the system saves the ID of the last data entry as a checkpoint. In the next collection, the question mark (?) in the query statement is replaced with the ID of this checkpoint.
Limit | bool | No | false | true | Specifies whether to use LIMIT for paging. Valid values:
  • true: Use LIMIT.

  • false (default): Do not use LIMIT.

We recommend that you use LIMIT for paging. If you set Limit to true, the system automatically appends a LIMIT clause to the SELECT statement when performing an SQL query.
PageSize | int | No | / | 10 | The page size. This parameter is required if you set Limit to true.
MaxSyncSize | int | No | 0 | 0 | The maximum number of records to synchronize at a time. The default value is 0, which means no limit.
CheckPoint | bool | No | false | true | Specifies whether to use a checkpoint. Valid values:
  • true: Use a checkpoint.

  • false (default): Do not use a checkpoint.

A checkpoint can be used as the starting point for the next data collection, enabling incremental data collection.
CheckPointColumn | string | No | Empty | 1 | The name of the checkpoint column. This parameter is required if you set CheckPoint to true. Warning The values in this column must be incremental. Otherwise, data may be missed during collection. The maximum value in each query result is used as the input for the next query.
CheckPointColumnType | string | No | Empty | int | The data type of the checkpoint column. Supported values: int and time. The int type is stored internally as int64. The time type supports the date, datetime, and time types of MySQL. This parameter is required if you set CheckPoint to true.
CheckPointStart | string | No | Empty | / | The initial value of the checkpoint column. This parameter is required if you set CheckPoint to true.
CheckPointSavePerPage | bool | No | true | true | Specifies whether to save a checkpoint for each page. Valid values:
  • true (default): Save a checkpoint for each page.

  • false: Save a checkpoint after each synchronization is complete.

IntervalMs | int | No | 60000 | 60000 | The synchronization interval, in ms. The default value is 60000.
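The following service_mysql sketch shows incremental collection with an id-based checkpoint, following the StateMent and CheckPoint descriptions above. The address, database, table name, and credentials are placeholders.

{
  "Type": "service_mysql",
  "Address": "rm-*.mysql.rds.aliyuncs.com",
  "User": "xxx",
  "Password": "xxx",
  "DataBase": "project_database",
  "StateMent": "SELECT * FROM orders WHERE id > ?",
  "CheckPoint": true,
  "CheckPointColumn": "id",
  "CheckPointColumnType": "int",
  "CheckPointStart": "0",
  "Limit": true,
  "PageSize": 10,
  "IntervalMs": 60000
}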

HTTP input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | metric_http | The type of the plug-in. Set the value to metric_http.
Addresses | [string] | Yes | / | / | The list of URLs. Important The URLs must start with http or https.
IntervalMs | int | Yes | / | 10 | The interval between requests, in ms.
Method | string | No | GET | GET | The request method name. It must be in uppercase.
Body | string | No | Empty | / | The content of the HTTP Body field.
Headers | map | No | Empty | {"key":"value"} | The content of the HTTP header, such as {"key":"value"}. Replace the content with the actual value.
PerAddressSleepMs | int | No | 100 | 100 | The interval between requests for each URL in the Addresses list, in ms.
ResponseTimeoutMs | int | No | 5000 | 5000 | The request timeout period, in ms.
IncludeBody | bool | No | false | true | Specifies whether to collect the request body. The default value is false. If you set this parameter to true, the request body content is stored in a key named content.
FollowRedirects | bool | No | false | false | Specifies whether to automatically handle redirections.
InsecureSkipVerify | bool | No | false | false | Specifies whether to skip HTTPS security checks.
ResponseStringMatch | string | No | / | / | Performs a regular expression check on the returned body content. The check result is stored in a key named _response_match_. If a match is found, the value is yes. If no match is found, the value is no.
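A metric_http sketch that probes a health endpoint and checks the response body is shown below; the URL, interval, and match pattern are hypothetical placeholders, not values taken from this page.

{
  "Type": "metric_http",
  "Addresses": ["http://127.0.0.1:8080/status"],
  "IntervalMs": 60000,
  "Method": "GET",
  "IncludeBody": true,
  "ResponseStringMatch": "ok"
}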

Syslog input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_syslog | The type of the plug-in. Set the value to service_syslog.
Address | string | No | tcp://127.0.0.1:9999 | / | Specifies the protocol, address, and port that Logtail listens on. Logtail listens and obtains log data based on the Logtail configuration. The format is [tcp/udp]://[ip]:[port]. If this parameter is not configured, the default value tcp://127.0.0.1:9999 is used, which means that only logs forwarded locally can be received. Note:
  • The listening protocol, address, and port number specified in the Logtail configuration must be the same as the forwarding rule specified in the rsyslog configuration file.

  • If the server where Logtail is installed has multiple IP addresses that can receive logs, you can set the address to 0.0.0.0 to listen on all IP addresses of the server.

ParseProtocol | string | No | Empty | rfc3164 | Specifies the protocol used to parse logs. The default value is empty, which means logs are not parsed. Valid values:
  • rfc3164: Use the RFC3164 protocol to parse logs.

  • rfc5424: Use the RFC5424 protocol to parse logs.

  • auto: Logtail automatically selects the appropriate parsing protocol based on the log content.

IgnoreParseFailure | bool | No | true | true | Specifies the operation to perform after a parsing failure. If this parameter is not configured, the default value true is used, which means that parsing is abandoned and the raw content is written to the content field. If you set this parameter to false, the log is discarded upon parsing failure.
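A service_syslog sketch that listens on all IP addresses of the server over TCP port 9999 and parses logs in RFC3164 format, following the Address and ParseProtocol descriptions above:

{
  "Type": "service_syslog",
  "Address": "tcp://0.0.0.0:9999",
  "ParseProtocol": "rfc3164",
  "IgnoreParseFailure": true
}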

Systemd journal input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_journal | The type of the plug-in. Set the value to service_journal.
JournalPaths | [string] | Yes | Empty | /var/log/journal | The Journal log path. We recommend that you set this to the directory where the Journal logs are located.
SeekPosition | string | No | tail | tail | The method for the first collection. Valid values: head and tail.
  • head indicates that all data is collected.

  • tail indicates that only new data generated after the Logtail collection configuration is applied is collected.

Kernel | bool | No | true | true | Specifies whether to collect kernel logs.
Units | [string] | No | Empty | "" | The list of units to collect. By default, this is empty, which means all units are collected.
ParseSyslogFacility | bool | No | false | false | Specifies whether to parse the facility field of syslog logs. If this parameter is not configured, the field is not parsed.
ParsePriority | bool | No | false | false | Specifies whether to parse the Priority field. If this parameter is not configured, the field is not parsed. If you set this parameter to true, the Priority field is mapped as follows: "0": "emergency", "1": "alert", "2": "critical", "3": "error", "4": "warning", "5": "notice", "6": "informational", "7": "debug".
UseJournalEventTime | bool | No | false | false | Specifies whether to use the field in the Journal log as the log time. If this parameter is not configured, the collection time is used as the log time. The time difference for real-time log collection is generally within 3 seconds.

SQL Server input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_mssql | The type of the plug-in. Set the value to service_mssql.
Address | string | No | 127.0.0.1:1433 | rm-*.sqlserver.rds.aliyuncs.com | The SQL Server address.
User | string | No | root | root | The username that is used to log on to the SQL Server database.
Password | string | No | Empty | / | The password of the user that is used to log on to the SQL Server database. For higher security, set the username and password to xxx. After the collection configuration is synchronized to your local machine, find the corresponding configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify local configurations.
Important If you modify this parameter in the console, the local configuration is overwritten after synchronization.
DataBase | string | No | / | project_database | The name of the database.
DialTimeOutMs | int | No | 5000 | 5000 | The timeout period for connecting to the SQL Server database, in ms.
ReadTimeOutMs | int | No | 5000 | 5000 | The timeout period for reading the SQL Server query results, in ms.
StateMent | string | No | / | / | The SELECT statement. If you set CheckPoint to true, the WHERE condition in the SELECT statement must contain the checkpoint column (CheckPointColumn). You can use a question mark (?) as a placeholder to be used with the checkpoint column. For example, you can set CheckPointColumn to id, CheckPointStart to 0, and StateMent to SELECT * from ... where id > ?. After each collection, the system saves the ID of the last data entry as a checkpoint. In the next collection, the question mark (?) in the query statement is replaced with the ID of this checkpoint.
Limit | bool | No | false | true | Specifies whether to use LIMIT for paging. Valid values:
  • true: Use LIMIT.

  • false (default): Do not use LIMIT.

We recommend that you use LIMIT for paging. If you set Limit to true, the system automatically appends a LIMIT clause to the SELECT statement when performing an SQL query.
PageSize | int | No | / | 10 | The page size. This parameter is required if you set Limit to true.
MaxSyncSize | int | No | 0 | 0 | The maximum number of records to synchronize at a time. The default value is 0, which means no limit.
CheckPoint | bool | No | false | true | Specifies whether to use a checkpoint. Valid values:
  • true: Use a checkpoint.

  • false (default): Do not use a checkpoint.

A checkpoint can be used as the starting point for the next data collection, enabling incremental data collection.
CheckPointColumn | string | No | Empty | 1 | The name of the checkpoint column. This parameter is required if you set CheckPoint to true. Warning The values in this column must be incremental. Otherwise, data may be missed during collection. The maximum value in each query result is used as the input for the next query.
CheckPointColumnType | string | No | Empty | int | The data type of the checkpoint column. Supported values: int and time. The int type is stored internally as int64. The time type supports the date, datetime, and time types of SQL Server. This parameter is required if you set CheckPoint to true.
CheckPointStart | string | No | Empty | / | The initial value of the checkpoint column. This parameter is required if you set CheckPoint to true.
CheckPointSavePerPage | bool | No | true | true | Specifies whether to save a checkpoint for each page. Valid values:
  • true (default): Save a checkpoint for each page.

  • false: Save a checkpoint after each synchronization is complete.

IntervalMs | int | No | 60000 | 60000 | The synchronization interval, in ms. The default value is 60000.

PostgreSQL input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | service_pgsql | The type of the plug-in. Set the value to service_pgsql.
Address | string | No | 127.0.0.1:5432 | rm-*.pg.rds.aliyuncs.com | The PostgreSQL address.
User | string | No | root | root | The username that is used to log on to the PostgreSQL database.
Password | string | No | Empty | / | The password of the user that is used to log on to the PostgreSQL database. For higher security, set the username and password to xxx. After the collection configuration is synchronized to your local machine, find the corresponding configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify local configurations.
Important If you modify this parameter in the console, the local configuration is overwritten after synchronization.
DataBase | string | No | / | project_database | The name of the PostgreSQL database.
DialTimeOutMs | int | No | 5000 | 5000 | The timeout period for connecting to the PostgreSQL database, in ms.
ReadTimeOutMs | int | No | 5000 | 5000 | The timeout period for reading the PostgreSQL query results, in ms.
StateMent | string | No | / | / | The SELECT statement. If you set CheckPoint to true, the WHERE condition in the SELECT statement must contain the checkpoint column (the CheckPointColumn parameter), and the value of this column must be set to $1. For example, you can set CheckPointColumn to id and StateMent to SELECT * from ... where id > $1.
Limit | bool | No | false | true | Specifies whether to use LIMIT for paging. Valid values:
  • true: Use LIMIT.

  • false (default): Do not use LIMIT.

We recommend that you use LIMIT for paging. If you set Limit to true, the system automatically appends a LIMIT clause to the SELECT statement when performing an SQL query.
PageSize | int | No | / | 10 | The page size. This parameter is required if you set Limit to true.
MaxSyncSize | int | No | 0 | 0 | The maximum number of records to synchronize at a time. The default value is 0, which means no limit.
CheckPoint | bool | No | false | true | Specifies whether to use a checkpoint. Valid values:
  • true: Use a checkpoint.

  • false (default): Do not use a checkpoint.

A checkpoint can be used as the starting point for the next data collection, enabling incremental data collection.
CheckPointColumn | string | No | Empty | 1 | The name of the checkpoint column. This parameter is required if you set CheckPoint to true. Warning The values in this column must be incremental. Otherwise, data may be missed during collection. The maximum value in each query result is used as the input for the next query.
CheckPointColumnType | string | No | Empty | int | The data type of the checkpoint column. Supported values: int and time. The int type is stored internally as int64. The time type supports the time types of PostgreSQL. This parameter is required if you set CheckPoint to true.
CheckPointStart | string | No | Empty | / | The initial value of the checkpoint column. This parameter is required if you set CheckPoint to true.
CheckPointSavePerPage | bool | No | true | true | Specifies whether to save a checkpoint for each page. Valid values:
  • true (default): Save a checkpoint for each page.

  • false: Save a checkpoint after each synchronization is complete.

IntervalMs | int | No | 60000 | 60000 | The synchronization interval, in ms. The default value is 60000.

SNMP input plug-in

Parameter | Type | Required | Default value | Example | Description
Targets | [string] | Yes | / | 127.0.0.1 | The IP address of the target machine group.
Port | string | No | 161 | 161 | The port used by the SNMP protocol.
Community | string | No | public | public | The community name. SNMPv1 and SNMPv2 use community names for authentication.
UserName | string | No | Empty | root | The username. SNMPv3 supports authentication using a username.
AuthenticationProtocol | string | No | NoAuth | NoAuth | The authentication protocol. SNMPv3 supports authentication using an authentication protocol.
AuthenticationPassphrase | string | No | Empty | / | The authentication password. The default value is empty. If you set AuthenticationProtocol to MD5 or SHA, you must set AuthenticationPassphrase.
PrivacyProtocol | string | No | NoPriv | NoPriv | The privacy protocol. SNMPv3 supports authentication using a privacy protocol.
PrivacyPassphrase | string | No | Empty | / | The privacy protocol password. By default, it is the same as the authentication password. If you set PrivacyProtocol to DES or AES, you must set PrivacyPassphrase.
Timeout | int | No | 5 | 5 | The timeout period for a single query operation, in seconds.
Version | int | No | 2 | 2 | The SNMP protocol version. Valid values: 1, 2, and 3.
Transport | string | No | udp | udp | The SNMP communication method. Valid values: udp and tcp.
MaxRepetitions | int | No | 0 | 0 | The number of retries after a query timeout.
Oids | [string] | No | Empty | 1 | The object identifiers to query on the target machine.
Fields | [string] | No | Empty | int | The fields to query on the target machine. This plug-in first looks up the fields in the local Management Information Base (MIB) to translate them into object identifiers, and then queries them together.
Tables | [string] | No | Empty | / | The tables to query on the target machine. This plug-in first queries all fields in the table, then looks them up in the local MIB, translates them into object identifiers, and queries them together.

Script input plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | input_command | The type of the plug-in. Set the value to input_command.
ScriptType | string | Yes | Empty | shell | Specifies the type of script content. Currently, bash, shell, python2, and python3 are supported.
User | string | Yes | / | public | The username used to run the command. Only non-root users are supported. Note Make sure the specified username exists on the machine. We recommend that you configure the least privilege and grant only rwx permissions to the directories or files that you want to monitor.
ScriptContent | string | Yes | Empty | / | The script content. Plain text and Base64-encoded content are supported. The length cannot exceed 512 × 1024 bytes.
ContentEncoding | string | No | PlainText | PlainText | The text format of the script content. Valid values:
  • PlainText (default): plain text, not encoded.

  • Base64: Base64 encoding.

LineSplitSep | string | No | Empty | / | The separator for the script output content. If this is empty, no splitting is performed, and the entire output is returned as a single data entry.
CmdPath | string | No | Empty | /usr/bin/bash | The path to execute the script command. If this is empty, the default path is used. The default paths are as follows:
  • bash: /usr/bin/bash

  • shell: /usr/bin/sh

  • python2: /usr/bin/python2

  • python3: /usr/bin/python3

TimeoutMilliSeconds | int | No | 3000 | 3000 | The timeout period for executing the script, in milliseconds.
IgnoreError | bool | No | false | false | Specifies whether to ignore error logs when the plug-in execution fails. The default value is false, which means they are not ignored.
Environments | [string] | No | / | / | The environment variables. The default is the value of os.Environ(). If Environments is set, the specified environment variables are appended to the value of os.Environ().
IntervalMs | int | No | 5000 | 5000 | The collection trigger frequency or script execution frequency, in milliseconds.
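An input_command sketch that runs a shell command once per minute as a non-root user is shown below; the username and script content are hypothetical placeholders.

{
  "Type": "input_command",
  "ScriptType": "shell",
  "User": "someuser",
  "ScriptContent": "df -h",
  "ContentEncoding": "PlainText",
  "IntervalMs": 60000,
  "TimeoutMilliSeconds": 3000
}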

Native processing plug-ins

Regular expression parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_regex_native | The type of the plug-in. Set the value to processor_parse_regex_native.
SourceKey | string | Yes | / | content | The source field name.
Regex | string | Yes | / | (\d+-\d+-\d+)\s+(.*) | The regular expression.
Keys | [string] | Yes | / | ["time", "msg"] | The list of extracted fields.
KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name used to store the source field when it is kept. If not specified, the source field is not renamed by default.
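For example, the following processor_parse_regex_native sketch extracts the time and msg fields using the regular expression from the table above and, when parsing fails, keeps the raw line under a hypothetical field named raw.

{
  "Type": "processor_parse_regex_native",
  "SourceKey": "content",
  "Regex": "(\\d+-\\d+-\\d+)\\s+(.*)",
  "Keys": ["time", "msg"],
  "KeepingSourceWhenParseFail": true,
  "RenamedSourceKey": "raw"
}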

JSON parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_json_native | The type of the plug-in. Set the value to processor_parse_json_native.
SourceKey | string | Yes | / | content | The source field name.
KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name used to store the source field when it is kept. If not specified, the source field is not renamed by default.

Separator parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_delimiter_native | The type of the plug-in. Set the value to processor_parse_delimiter_native.
SourceKey | string | Yes | / | content | The source field name.
Separator | string | Yes | / | , | The separator.
Quote | string | No | " | " | The quote.
Keys | [string] | Yes | / | ["time", "msg"] | The list of extracted fields.
AllowingShortenedFields | bool | No | true | true | Specifies whether to allow the number of extracted fields to be less than the number of keys. If not allowed, this scenario is considered a parsing failure.
OverflowedFieldsTreatment | string | No | extend | extend | The behavior when the number of extracted fields is greater than the number of keys. Valid values:
  • extend: retains the extra fields. Each extra field is added to the log as a separate field. The field names for the extra fields are __column$i__, where $i is the sequence number of the extra field, starting from 0.

  • keep: retains the extra fields, but adds the extra content to the log as a single field named __column0__.

  • discard: discards the extra fields.

KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name used to store the source field when it is kept. If not specified, the source field is not renamed by default.
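For example, a processor_parse_delimiter_native sketch that splits a comma-separated line into time and msg and appends any extra columns as separate fields:

{
  "Type": "processor_parse_delimiter_native",
  "SourceKey": "content",
  "Separator": ",",
  "Quote": "\"",
  "Keys": ["time", "msg"],
  "AllowingShortenedFields": true,
  "OverflowedFieldsTreatment": "extend"
}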

Apsara parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_apsara_native | The type of the plug-in. Set the value to processor_parse_apsara_native.
SourceKey | string | Yes | / | content | The source field name.
Timezone | string | No | Empty | GMT+08:00 | The time zone of the log time. The format is GMT+HH:MM (east of UTC) or GMT-HH:MM (west of UTC).
KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails.
KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds.
RenamedSourceKey | string | No | Empty | key | The field name used to store the source field when it is kept. If not specified, the source field is not renamed by default.

Time parsing plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_parse_timestamp_native | The type of the plug-in. Set the value to processor_parse_timestamp_native.
SourceKey | string | Yes | / | content | The source field name.
SourceFormat | string | Yes | / | %Y/%m/%d %H:%M:%S | The log time format. For more information, see Time formats.
SourceTimezone | string | No | Empty | GMT+08:00 | The time zone of the log time. The format is GMT+HH:MM (east of UTC) or GMT-HH:MM (west of UTC).
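For example, a processor_parse_timestamp_native sketch that reads the log time from a field named time (a hypothetical field produced by an upstream parsing plug-in) in the format shown above:

{
  "Type": "processor_parse_timestamp_native",
  "SourceKey": "time",
  "SourceFormat": "%Y/%m/%d %H:%M:%S",
  "SourceTimezone": "GMT+08:00"
}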

Filtering plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_filter_regex_native | The type of the plug-in. Set the value to processor_filter_regex_native.
Include | map | Yes | / | / | The whitelist of log fields. The key is the field name and the value is a regular expression. This specifies the condition that the content of the field specified by the key must meet for the event to be collected. Multiple conditions are combined with a logical AND. The log is collected only when all conditions are met.
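For example, the following processor_filter_regex_native sketch keeps only logs whose level field is ERROR or WARN and whose method field is POST; the field names and patterns are hypothetical.

{
  "Type": "processor_filter_regex_native",
  "Include": {
    "level": "^(ERROR|WARN)$",
    "method": "^POST$"
  }
}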

Data masking plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_desensitize_native | The type of the plug-in. Set the value to processor_desensitize_native.
SourceKey | string | Yes | / | content | The source field name.
Method | string | Yes | / | const | The data masking method. Valid values: const (replaces sensitive content with a constant) and md5 (replaces sensitive content with its MD5 value).
ReplacingString | string | No (required if Method is set to const) | / | ****** | The constant string that replaces the sensitive content.
ContentPatternBeforeReplacedString | string | Yes | / | 'password:' | The regular expression for the prefix of the sensitive content.
ReplacedContentPattern | string | Yes | / | [^']* | The regular expression for the sensitive content.
ReplacingAll | bool | No | true | true | Specifies whether to replace all matched sensitive content.
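For example, the following processor_desensitize_native sketch replaces the value that follows password:' with a constant, using the example patterns from the table above:

{
  "Type": "processor_desensitize_native",
  "SourceKey": "content",
  "Method": "const",
  "ReplacingString": "******",
  "ContentPatternBeforeReplacedString": "password:'",
  "ReplacedContentPattern": "[^']*",
  "ReplacingAll": true
}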

Extension processors

Extract fields

Regular expression mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_regex | The type of the plug-in. Set the value to processor_regex.
SourceKey | string | Yes | / | content | The source field name.
Regex | string | Yes | / | (\d+-\d+-\d+)\s+(.*) | The regular expression. Use parentheses () to mark the fields to be extracted.
Keys | [string] | Yes | / | ["ip", "time", "method"] | Specifies the field names for the extracted content, such as ["ip", "time", "method"].
NoKeyError | boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values:
  • true: Report an error.

  • false (default): Do not report an error.

NoMatchError | boolean | No | false | false | Specifies whether the system reports an error if the specified regular expression does not match the value of the source field. Valid values:
  • true: Report an error.

  • false (default): Do not report an error.

KeepSource | boolean | No | false | false | Specifies whether to keep the source field in the parsed log. Valid values:
  • true: Keep.

  • false (default): Do not keep.

FullMatch | boolean | No | true | true | Specifies whether to extract only when a full match is found. Valid values:
  • true (default): The field values are extracted only if all fields specified in the Keys parameter can be matched with the value of the source field using the regular expression in the Regex parameter.

  • false: Partial matches are also extracted.

KeepSourceIfParseError | boolean | No | true | true | Specifies whether to keep the source field in the parsed log if parsing fails. Valid values:
  • true (default): Keep.

  • false: Do not keep.

Anchor mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_anchor | The type of the plug-in. Set the value to processor_anchor.
SourceKey | String | Yes | / | content | The source field name.
Anchors | Anchor array | Yes | / | / | The list of anchor items.
Start | String | Yes | Empty | time | The starting keyword. If empty, it matches the beginning of the string.
Stop | String | Yes | Empty | \t | The ending keyword. If empty, it matches the end of the string.
FieldName | String | Yes | Empty | time | Specifies the field name for the extracted content.
FieldType | String | Yes | Empty | string | The type of the field. Valid values: string and json.
ExpondJson | boolean | No | false | false | Specifies whether to expand JSON fields. Valid values:
  • true: Expand.

  • false (default): Do not expand.

ExpondConnecter | String | No | _ | _ | The connector for JSON expansion. The default value is an underscore (_).
MaxExpondDepth | Int | No | 0 | 0 | The maximum depth for JSON expansion. The default value is 0, which means no limit.
NoAnchorError | Boolean | No | false | false | Specifies whether the system reports an error if the anchor item cannot be found. Valid values:
  • true: Report an error.

  • false (default): Do not report an error.

NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values:
  • true: Report an error.

  • false (default): Do not report an error.

KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. Valid values:
  • true: Keep.

  • false (default): Do not keep.

CSV mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_csv | The type of the plug-in. Set the value to processor_csv.
SourceKey | String | Yes | / | csv | The source field name.
SplitKeys | String array | Yes | / | ["date", "ip", "content"] | Specifies the field names for the extracted content, such as ["date", "ip", "content"]. Important If the number of fields to be split is less than the number of fields in the SplitKeys parameter, the extra fields in the SplitKeys parameter are ignored.
PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of fields to be split is greater than the number of fields in the SplitKeys parameter. Valid values:
  • true: Keep.

  • false (default): Do not keep.

ExpandOthers | Boolean | No | false | false | Specifies whether to parse the excess part. Valid values:
  • true: Parse. You can parse the excess part through the ExpandOthers parameter and then specify the naming prefix for the excess fields through the ExpandKeyPrefix parameter.

  • false (default): Do not parse. If you set PreserveOthers to true and ExpandOthers to false, the content of the excess part is stored in the _decode_preserve_ field.

Note If the content of the extra fields is not in a standard format, you need to normalize it according to the CSV format before storing it.
ExpandKeyPrefixStringNoThe naming prefix for the excess fields. For example, if you configure it as expand_, the field names will be expand_1, expand_2.
TrimLeadingSpaceBooleanNofalsefalseSpecifies whether to ignore leading spaces in field values.
  • true: Ignore.

  • false (default): Do not ignore.

SplitSepStringNo,,The separator. The default value is a comma (,).
KeepSourceBooleanNofalsefalseSpecifies whether to keep the source field in the parsed log.
  • true: Keep.

  • false (default): Do not keep.

NoKeyErrorBooleanNofalsefalseSpecifies whether the system reports an error if the specified source field does not exist in the raw log.
  • true: Report an error.

  • false (default): Do not report an error.
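
A minimal sketch of a processors entry for this plug-in, using the example values from the table above:

{
  "Type": "processor_csv",
  "SourceKey": "csv",
  "SplitKeys": ["date", "ip", "content"]
}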

Single-character separator

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_split_char | The type of the plug-in. Set the value to processor_split_char.
SourceKey | String | Yes | | | The source field name.
SplitSep | String | Yes | | | The separator. It must be a single character and can be a non-printable character, such as \u0001.
SplitKeys | String array | Yes | | ["ip", "time", "method"] | The field names for the extracted content, such as ["ip", "time", "method"].
PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of split fields is greater than the number of fields in the SplitKeys parameter. Valid values: true (keep) and false (default, do not keep).
QuoteFlag | Boolean | No | false | false | Specifies whether to use a quote. Valid values: true (use) and false (default, do not use).
Quote | String | No | / | \u0001 | The quote. It must be a single character and can be a non-printable character, such as \u0001. This parameter takes effect only when QuoteFlag is set to true.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (report an error) and false (default, do not report an error).
NoMatchError | Boolean | No | false | false | Specifies whether the system reports an error if the specified separator does not match the separator in the log. Valid values: true (report an error) and false (default, do not report an error).
KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. Valid values: true (keep) and false (default, do not keep).
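
A minimal sketch of a processors entry for this plug-in; the source field name and separator are illustrative:

{
  "Type": "processor_split_char",
  "SourceKey": "content",
  "SplitSep": ",",
  "SplitKeys": ["ip", "time", "method"]
}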

Multi-character separator

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_split_string | The type of the plug-in. Set the value to processor_split_string.
SourceKey | String | Yes | | | The source field name.
SplitSep | String | Yes | | | The separator. It can contain multiple characters, including non-printable characters, such as \u0001\u0002.
SplitKeys | String array | Yes | | ["key1","key2"] | The field names for the extracted content, such as ["key1","key2"]. Note: If the number of split fields is less than the number of fields in the SplitKeys parameter, the extra fields in the SplitKeys parameter are ignored.
PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of split fields is greater than the number of fields in the SplitKeys parameter. Valid values: true (keep) and false (default, do not keep).
ExpandOthers | Boolean | No | false | false | Specifies whether to parse the excess part. Valid values: true (parse) and false (default, do not parse).
ExpandKeyPrefix | String | No | / | expand_ | The naming prefix for the excess part. For example, if you set this parameter to expand_, the field names are expand_1, expand_2, and so on.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (report an error) and false (default, do not report an error).
NoMatchError | Boolean | No | false | false | Specifies whether the system reports an error if the specified separator does not match the separator in the log. Valid values: true (report an error) and false (default, do not report an error).
KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. Valid values: true (keep) and false (default, do not keep).
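
A minimal sketch of a processors entry for this plug-in; the source field, separator, and keys are illustrative:

{
  "Type": "processor_split_string",
  "SourceKey": "content",
  "SplitSep": "\u0001\u0002",
  "SplitKeys": ["key1", "key2"]
}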

Key-value pairs

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_split_key_value | The type of the plug-in. Set the value to processor_split_key_value.
SourceKey | string | Yes | | | The source field name.
Delimiter | string | No | \t | \t | The separator between key-value pairs. The default value is the tab character (\t).
Separator | string | No | : | : | The separator between the key and the value within a single key-value pair. The default value is a colon (:).
KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. Valid values: true (keep) and false (default, do not keep).
ErrIfSourceKeyNotFound | Boolean | No | true | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (default, report an error) and false (do not report an error).
DiscardWhenSeparatorNotFound | Boolean | No | false | false | Specifies whether to discard the key-value pair if no matching separator is found. Valid values: true (discard) and false (default, do not discard).
ErrIfSeparatorNotFound | Boolean | No | true | false | Specifies whether the system reports an error if the specified separator does not exist. Valid values: true (default, report an error) and false (do not report an error).
ErrIfKeyIsEmpty | Boolean | No | true | false | Specifies whether the system reports an error if the key is empty after splitting. Valid values: true (default, report an error) and false (do not report an error).
Quote | String | No | | | The quote. If this parameter is set and a value is enclosed in quotes, the content within the quotes is extracted as the value. Multi-character quotes are supported. By default, the quote feature is not enabled. Important: If the quote is a double quotation mark ("), you must add an escape character, which is a backslash (\). If a backslash (\) is used together with a quote inside the quotes, the backslash (\) is output as part of the value.
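
A minimal sketch of a processors entry for this plug-in; the source field name is illustrative, and the separators match the defaults from the table above:

{
  "Type": "processor_split_key_value",
  "SourceKey": "content",
  "Delimiter": "\t",
  "Separator": ":"
}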

Grok mode

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_grok | The type of the plug-in. Set the value to processor_grok.
CustomPatternDir | String array | No | | | The directory that contains the custom Grok pattern files. The processor_grok plug-in reads all files in the directory. If this parameter is not specified, no custom Grok pattern files are imported. Important: After you update a custom Grok pattern file, you must restart Logtail for the change to take effect.
CustomPatterns | Map | No | | | The custom Grok patterns. The key is the rule name and the value is the Grok expression. For more information about the supported expressions, see processor_grok. If the required expression is not included, specify a custom Grok expression in the Match parameter. If this parameter is not specified, no custom Grok patterns are used.
SourceKey | String | No | content | content | The source field name. The default value is content.
Match | String array | Yes | | | An array of Grok expressions. The processor_grok plug-in matches the log against the expressions in this list from top to bottom and returns the extraction result of the first successful match. Note: Configuring multiple Grok expressions may degrade performance. We recommend that you configure no more than five expressions.
TimeoutMilliSeconds | Long | No | 0 | | The maximum time allowed for extracting fields with a Grok expression, in milliseconds. If this parameter is not specified or is set to 0, no timeout applies.
IgnoreParseFailure | Boolean | No | true | true | Specifies whether to ignore logs that fail to be parsed. Valid values: true (default, ignore) and false (delete the logs).
KeepSource | Boolean | No | true | true | Specifies whether to keep the source field after parsing succeeds. Valid values: true (default, keep) and false (discard).
NoKeyError | Boolean | No | false | true | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (report an error) and false (default, do not report an error).
NoMatchError | Boolean | No | true | true | Specifies whether the system reports an error if none of the expressions specified in the Match parameter match the log. Valid values: true (default, report an error) and false (do not report an error).
TimeoutError | Boolean | No | true | true | Specifies whether the system reports an error when a match times out. Valid values: true (default, report an error) and false (do not report an error).
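
A minimal sketch of a processors entry for this plug-in. The Grok expression is illustrative only and assumes common built-in patterns such as IP, WORD, and NUMBER:

{
  "Type": "processor_grok",
  "SourceKey": "content",
  "Match": [
    "%{IP:ip} %{WORD:method} %{NUMBER:status}"
  ]
}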

Add fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_add_fields | The type of the plug-in. Set the value to processor_add_fields.
Fields | Map | Yes | | | The names and values of the fields to add, in key-value pair format. Multiple fields can be added.
IgnoreIfExist | Boolean | No | false | false | Specifies whether to ignore a field to be added if a field with the same name already exists. Valid values: true (ignore) and false (default, do not ignore).
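
A minimal sketch of a processors entry for this plug-in; the field names and values are illustrative:

{
  "Type": "processor_add_fields",
  "Fields": {
    "service": "nginx",
    "env": "prod"
  },
  "IgnoreIfExist": false
}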

Drop fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_drop | The type of the plug-in. Set the value to processor_drop.
DropKeys | String array | Yes | | | The fields to drop. Multiple fields can be specified.
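
A minimal sketch of a processors entry for this plug-in; the field names are illustrative:

{
  "Type": "processor_drop",
  "DropKeys": ["password", "token"]
}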

Rename fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_rename | The type of the plug-in. Set the value to processor_rename.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if a specified source field does not exist in the log. Valid values: true (report an error) and false (default, do not report an error).
SourceKeys | String array | Yes | | | The source fields to rename.
DestKeys | String array | Yes | | | The field names after renaming.
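
A minimal sketch of a processors entry for this plug-in; the field names are illustrative. The n-th entry in SourceKeys is renamed to the n-th entry in DestKeys:

{
  "Type": "processor_rename",
  "SourceKeys": ["ip"],
  "DestKeys": ["client_ip"]
}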

Package fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_packjson | The type of the plug-in. Set the value to processor_packjson.
SourceKeys | String array | Yes | | | The source fields to package.
DestKey | String | No | | | The destination field after packaging.
KeepSource | Boolean | No | true | true | Specifies whether to keep the source fields in the parsed log. Valid values: true (default, keep) and false (discard).
AlarmIfIncomplete | Boolean | No | true | true | Specifies whether the system reports an error if a specified source field does not exist in the raw log. Valid values: true (default, report an error) and false (do not report an error).
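
A minimal sketch of a processors entry for this plug-in; the field names are illustrative:

{
  "Type": "processor_packjson",
  "SourceKeys": ["ip", "method"],
  "DestKey": "request"
}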

Expand JSON fields

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_json | The type of the plug-in. Set the value to processor_json.
SourceKey | String | Yes | | | The name of the source field to expand.
NoKeyError | Boolean | No | true | true | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (default, report an error) and false (do not report an error).
ExpandDepth | Int | No | 0 | 1 | The depth of JSON expansion. The default value is 0, which indicates no limit. A value of 1 indicates the current level, and so on.
ExpandConnector | String | No | _ | _ | The connector for JSON expansion. The default value is an underscore (_).
Prefix | String | No | | | The prefix to add to field names during JSON expansion.
KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log. Valid values: true (default, keep) and false (discard).
UseSourceKeyAsPrefix | Boolean | No | | | Specifies whether to use the source field name as a prefix for all expanded JSON field names.
KeepSourceIfParseError | Boolean | No | true | true | Specifies whether to keep the raw log if parsing fails. Valid values: true (default, keep) and false (discard).
ExpandArray | Boolean | No | false | false | Specifies whether to expand array-type values. This parameter is supported in Logtail 1.8.0 and later. Valid values: false (default, do not expand) and true (expand; for example, {"k":["1","2"]} is expanded to {"k[0]":"1","k[1]":"2"}).
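
A minimal sketch of a processors entry for this plug-in; the source field name and depth are illustrative:

{
  "Type": "processor_json",
  "SourceKey": "content",
  "ExpandDepth": 1,
  "ExpandConnector": "_"
}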

Filter logs

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_filter_regex | The type of the plug-in. Set the value to processor_filter_regex.
Include | JSON Object | No | | | The key is a log field and the value is the regular expression that the field value must match. The key-value pairs are combined with a logical AND. A log is collected only if the values of all specified fields match the corresponding regular expressions.
Exclude | JSON Object | No | | | The key is a log field and the value is the regular expression that the field value must match. The key-value pairs are combined with a logical OR. A log is discarded if the value of any specified field matches the corresponding regular expression.
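
A minimal sketch of a processors entry for this plug-in; the field names and regular expressions are illustrative:

{
  "Type": "processor_filter_regex",
  "Include": {
    "level": "WARNING|ERROR"
  },
  "Exclude": {
    "message": "DEBUG.*"
  }
}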

Extract log time

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_gotime | The type of the plug-in. Set the value to processor_gotime.
SourceKey | String | Yes | | | The source field name.
SourceFormat | String | Yes | | | The format of the source time.
SourceLocation | Int | Yes | | | The time zone of the source time. If this parameter is empty, the time zone of the host or container on which Logtail runs is used.
DestKey | String | Yes | | | The destination field after parsing.
DestFormat | String | Yes | | | The time format after parsing.
DestLocation | Int | No | | | The time zone after parsing. If this parameter is empty, the local time zone is used.
SetTime | Boolean | No | true | true | Specifies whether to set the parsed time as the log time. Valid values: true (default, yes) and false (no).
KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log. Valid values: true (default, keep) and false (do not keep).
NoKeyError | Boolean | No | true | true | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (default, report an error) and false (do not report an error).
AlarmIfFail | Boolean | No | true | true | Specifies whether the system reports an error if the log time fails to be extracted. Valid values: true (default, report an error) and false (do not report an error).
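
A minimal sketch of a processors entry for this plug-in. The field names, time formats (written here as Go-style reference layouts), and time zone offset are illustrative assumptions:

{
  "Type": "processor_gotime",
  "SourceKey": "time",
  "SourceFormat": "2006-01-02 15:04:05",
  "SourceLocation": 8,
  "DestKey": "new_time",
  "DestFormat": "2006/01/02 15:04:05",
  "SetTime": true
}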

Transform IP addresses

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_geoip | The type of the plug-in. Set the value to processor_geoip.
SourceKey | String | Yes | | | The name of the source field on which IP address transformation is performed.
DBPath | String | Yes | | /user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb | The full path of the GeoIP database, for example, /user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (report an error) and false (default, do not report an error).
NoMatchError | Boolean | No | true | true | Specifies whether the system reports an error if the IP address is invalid or not found in the database. Valid values: true (default, report an error) and false (do not report an error).
KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log. Valid values: true (default, keep) and false (do not keep).
Language | String | No | zh-CN | zh-CN | The language attribute. The default value is zh-CN. Important: Make sure that your GeoIP database contains the corresponding language.
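
A minimal sketch of a processors entry for this plug-in; the source field name is illustrative, and the database path reuses the example from the table above:

{
  "Type": "processor_geoip",
  "SourceKey": "client_ip",
  "DBPath": "/user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb",
  "Language": "zh-CN"
}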

Data masking

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_desensitize | The type of the plug-in. Set the value to processor_desensitize.
SourceKey | String | Yes | | | The log field name.
Method | String | Yes | | const | The data masking method. Valid values: const (replace sensitive content with the string that you specify in the ReplaceString parameter) and md5 (replace sensitive content with its MD5 hash).
Match | String | No | full | full | The method used to extract sensitive content. Valid values: full (default, all content in the value of the target field is replaced) and regex (a regular expression is used to extract the sensitive content).
ReplaceString | String | No | | | The string used to replace sensitive content. This parameter is required when Method is set to const.
RegexBegin | String | No | | | The regular expression that matches the prefix of the sensitive content. This parameter is required when Match is set to regex.
RegexContent | String | No | | | The regular expression that matches the sensitive content. This parameter is required when Match is set to regex.
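
A minimal sketch of a processors entry for this plug-in; the field name, replacement string, and regular expressions are illustrative:

{
  "Type": "processor_desensitize",
  "SourceKey": "content",
  "Method": "const",
  "Match": "regex",
  "ReplaceString": "********",
  "RegexBegin": "password:",
  "RegexContent": "[^,]+"
}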

Field value mapping

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_dict_map | The type of the plug-in. Set the value to processor_dict_map.
SourceKey | String | Yes | | | The source field name.
MapDict | Map | No | | | The mapping dictionary. If the dictionary is small, you can set it directly in this parameter without providing a local CSV dictionary file. Important: If you set the DictFilePath parameter, the configuration in the MapDict parameter does not take effect.
DictFilePath | String | No | | | The dictionary file in CSV format. The separator of the CSV file is a comma (,), and fields are quoted with double quotation marks (").
DestKey | String | No | | | The field name after mapping.
HandleMissing | Boolean | No | false | false | Specifies whether the system processes a log in which the target field is missing. Valid values: true (process; the system fills in the value specified by the Missing parameter) and false (default, do not process).
Missing | String | No | Unknown | Unknown | The value used to fill in a missing target field in the raw log. The default value is Unknown. This parameter takes effect only when HandleMissing is set to true.
MaxDictSize | Int | No | 1000 | 1000 | The maximum size of the mapping dictionary. The default value is 1000, which means that up to 1,000 mapping rules can be stored. To limit the plug-in's memory usage on the server, reduce this value.
Mode | String | No | overwrite | overwrite | The processing method used when the mapped field already exists in the raw log. Valid values: overwrite (default, overwrite the original field) and fill (do not overwrite the original field).
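
A minimal sketch of a processors entry for this plug-in; the field names and dictionary entries are illustrative:

{
  "Type": "processor_dict_map",
  "SourceKey": "status",
  "DestKey": "status_desc",
  "MapDict": {
    "200": "OK",
    "404": "Not Found"
  }
}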

Field encryption

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_encrypt | The type of the plug-in. Set the value to processor_encrypt.
SourceKey | String array | Yes | | | The source field names.
EncryptionParameters | Object | Yes | | | The key-related configuration.
Key | String | Yes | | | The key. It must be 64 hexadecimal characters.
IV | String | No | 00000000000000000000000000000000 | 00000000000000000000000000000000 | The initialization vector for encryption. It must be 32 hexadecimal characters. The default value is 00000000000000000000000000000000.
KeyFilePath | String | No | | | The path of the file from which the encryption parameters are read. If this parameter is not configured, the parameters are read according to Logtail Configuration - Input Configuration - File Path.
KeepSourceValueIfError | Boolean | No | false | false | Specifies whether to keep the value of the source field if encryption fails. Valid values: true (keep) and false (default, do not keep). If encryption fails, the field value is replaced with ENCRYPT_ERROR.
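
A minimal sketch of a processors entry for this plug-in. The field name is illustrative, and the Key and IV values are placeholders, not real credentials:

{
  "Type": "processor_encrypt",
  "SourceKey": ["password"],
  "EncryptionParameters": {
    "Key": "<64 hexadecimal characters>",
    "IV": "00000000000000000000000000000000"
  }
}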

String replacement

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_string_replace | The type of the plug-in. Set the value to processor_string_replace.
SourceKey | String | Yes | | | The source field name.
Method | String | Yes | | | The matching method. Valid values: const (replace by string), regex (replace by regular expression), and unquote (remove escape characters).
Match | String | No | | | The content to match. If Method is set to const, specify the string that matches the content to be replaced; if multiple strings match, all of them are replaced. If Method is set to regex, specify the regular expression that matches the content to be replaced; if multiple strings match, all of them are replaced, and you can also match a specific group by using regex grouping. If Method is set to unquote, you do not need to configure this parameter.
ReplaceString | String | No | "" | | The replacement string. The default value is an empty string (""). If Method is set to const or regex, specify the string that replaces the original content; for regex, replacement based on regex groups is supported. If Method is set to unquote, you do not need to configure this parameter.
DestKey | String | No | | | The new field that stores the replaced content. By default, no new field is added.
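
A minimal sketch of a processors entry for this plug-in; the field name, regular expression, and replacement string are illustrative:

{
  "Type": "processor_string_replace",
  "SourceKey": "content",
  "Method": "regex",
  "Match": "\\d{3}-\\d{4}",
  "ReplaceString": "***"
}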

Data encoding and decoding

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_base64_encoding | The type of the plug-in. Set the value to processor_base64_encoding.
SourceKey | String | Yes | | | The source field name.
NewKey | String | Yes | | | The field that stores the encoded result.
NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. Valid values: true (report an error) and false (default, do not report an error).
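
A minimal sketch of a processors entry for this plug-in; the field names are illustrative:

{
  "Type": "processor_base64_encoding",
  "SourceKey": "content",
  "NewKey": "content_base64"
}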

Convert log to metric

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_log_to_sls_metric | The type of the plug-in. Set the value to processor_log_to_sls_metric.
MetricTimeKey | String | No | | | The time field in the log, which is mapped to the __time_nano__ field in the time series data. By default, the value of the __time__ field in the log is used. Make sure that the specified field is a valid, properly formatted timestamp. Unix timestamps in seconds (10 digits), milliseconds (13 digits), microseconds (16 digits), and nanoseconds (19 digits) are supported.
MetricLabelKeys | []String | Yes | | | The list of keys for the __labels__ field. The keys must match the regular expression ^[a-zA-Z_][a-zA-Z0-9_]*$, and the values cannot contain a vertical bar (|) or #$#. For more information, see Time series data (Metric). You cannot add the __labels__ field to the MetricLabelKeys parameter. If the source field contains a __labels__ field, its value is appended to the new __labels__ field.
MetricValues | Map | Yes | | | The metric names and metric values. The metric name corresponds to the __name__ field and must match the regular expression ^[a-zA-Z_:][a-zA-Z0-9_:]*$. The metric value corresponds to the __value__ field and must be of the Double type. For more information, see Time series data (Metric).
CustomMetricLabels | Map | No | | | The custom __labels__ field. The keys must match the regular expression ^[a-zA-Z_][a-zA-Z0-9_]*$, and the values cannot contain a vertical bar (|) or #$#. For more information, see Time series data (Metric).
IgnoreError | Boolean | No | false | | Specifies whether to output an error log when no log matches. The default value is false, which means that no error log is output.
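
A minimal sketch of a processors entry for this plug-in. All keys and values below are illustrative placeholders for the log fields that supply the label, metric name, and metric value:

{
  "Type": "processor_log_to_sls_metric",
  "MetricLabelKeys": ["host", "region"],
  "MetricValues": {
    "metric_name_field": "metric_value_field"
  }
}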

Convert log to trace

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | processor_otel_trace | The type of the plug-in. Set the value to processor_otel_trace.
SourceKey | String | Yes | | | The source field name.
Format | String | Yes | | json | The format after conversion. Valid values: protobuf, json, and protojson.
NoKeyError | Boolean | No | false | true | Specifies whether to report an error if the specified source field is not found in the log. The default value is false.
TraceIDNeedDecode | Boolean | No | false | | Specifies whether to Base64-decode the TraceID. The default value is false. If you set Format to protojson and the TraceID is Base64-encoded, set TraceIDNeedDecode to true. Otherwise, the conversion fails.
SpanIDNeedDecode | Boolean | No | false | | Specifies whether to Base64-decode the SpanID. The default value is false. If you set Format to protojson and the SpanID is Base64-encoded, set SpanIDNeedDecode to true. Otherwise, the conversion fails.
ParentSpanIDNeedDecode | Boolean | No | false | | Specifies whether to Base64-decode the ParentSpanID. The default value is false. If you set Format to protojson and the ParentSpanID is Base64-encoded, set ParentSpanIDNeedDecode to true. Otherwise, the conversion fails.
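
A minimal sketch of a processors entry for this plug-in; the source field name is illustrative:

{
  "Type": "processor_otel_trace",
  "SourceKey": "content",
  "Format": "json"
}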

Output plug-ins

SLS output plug-in

Parameter | Type | Required | Default value | Example | Description
Type | string | Yes | / | flusher_sls | The type of the plug-in. Set the value to flusher_sls.
Logstore | string | Yes | / | test-logstore | The name of the Logstore.
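
Putting the pieces together, a request body for this operation might combine an input plug-in, processing plug-ins, and this output plug-in roughly as in the following sketch. It assumes that output plug-ins are supplied in a flushers array alongside inputs and processors; the configuration name, file path, added field, and Logstore name are illustrative only:

{
  "configName": "test-config",
  "inputs": [
    { "Type": "input_file", "FilePaths": ["/var/log/*.log"] }
  ],
  "processors": [
    { "Type": "processor_add_fields", "Fields": { "env": "prod" } }
  ],
  "flushers": [
    { "Type": "flusher_sls", "Logstore": "test-logstore" }
  ]
}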

Response elements

Element | Type | Description | Example

None defined.

Examples

Success response

JSON format

{}

Error codes

See Error Codes for a complete list.

Release notes

See Release Notes for a complete list.