Creates a Logtail pipeline configuration.
RAM authorization
| Action | Access level | Resource type | Condition key | Dependent action |
| log:CreateLogtailPipelineConfig | create | *All Resource | None | None |
Request syntax
POST /pipelineconfigs HTTP/1.1
Request parameters
| Parameter | Type | Required | Description | Example |
| project | string | Yes | The name of the project. | test-project |
| body | object | No | The content of the Logtail pipeline configuration. | |
| configName | string | Yes | The name of the configuration. Note: The configuration name must be unique within the project and cannot be modified after the configuration is created. | test-config |
| logSample | string | No | A sample log. Multiple logs are supported. | 2022-06-14 11:13:29.796 \| DEBUG \| __main__: |
| global | object | No | The global configuration. | |
| inputs | array<object> | Yes | The list of input plug-ins. Important: Currently, you can configure only one input plug-in. | |
| | object | No | The input plug-in. Note: For more information about the parameters of the file input plug-in, see File plug-in. For more information about the parameters of other input plug-ins, see Processing plug-ins. | { "Type": "input_file", "FilePaths": ["/var/log/*.log"] } |
| processors | array<object> | No | The list of processing plug-ins. Note: Processing plug-ins are classified into native processing plug-ins and extension processing plug-ins. For more information, see Processing plug-ins. | |
| | object | No | The processing plug-in. Note: For more information about native processing plug-ins and extension processing plug-ins, see Processing plug-ins. | { "Type": "processor_parse_json_native", "SourceKey": "content" } |
| aggregators | array<object> | No | The list of aggregation plug-ins. Important: This parameter is valid only when an extension processing plug-in is used. You can use a maximum of one aggregation plug-in. | |
| | object | No | The aggregation plug-in. | |
| flushers | array<object> | Yes | The list of output plug-ins. Important: Currently, you can configure only one flusher_sls plug-in. | |
| | object | No | The output plug-in. | { "Type": "flusher_sls", "Logstore": "test" } |
| task | object | No | | |
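The sketch below is a minimal, illustrative request body that combines the parameters above: one file input, one native JSON processor, and one flusher_sls output. The configuration name, file path, and Logstore name are placeholders rather than values prescribed by this reference.

```python
import json

# Minimal sketch of a Logtail pipeline configuration body (all values are placeholders).
pipeline_config = {
    "configName": "test-config",   # unique within the project; cannot be changed later
    "inputs": [                    # exactly one input plug-in is allowed
        {"Type": "input_file", "FilePaths": ["/var/log/*.log"]}
    ],
    "processors": [                # optional processing plug-ins
        {"Type": "processor_parse_json_native", "SourceKey": "content"}
    ],
    "flushers": [                  # exactly one flusher_sls plug-in is allowed
        {"Type": "flusher_sls", "Logstore": "test"}
    ],
}

# Serialize the body for POST /pipelineconfigs against the target project.
print(json.dumps(pipeline_config, indent=2))
```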
Global configuration
| Parameter | Type | Required | Default value | Example | Description |
| TopicType | string | No | Empty | filepath | The topic type. |
| TopicFormat | string | No. This parameter is required if you set TopicType to filepath or custom. | / | /var/log/(.*).log | The topic format. |
| EnableTimestampNanosecond | bool | No | false | false | Specifies whether to enable nanosecond precision for timestamps. |
| PipelineMetaTagKey | object | No | Empty | {"HOST_NAME":"__hostname__"} | Important: This parameter is supported only by LoongCollector 3.0.10 and later. |
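As an illustration of the global options above, the snippet below derives the log topic from the file path and enables nanosecond timestamps; the regular expression is a placeholder and should match your own file layout.

```python
# Sketch of the "global" block, using only parameters documented above (placeholder values).
global_config = {
    "TopicType": "filepath",                # derive the topic from the collected file path
    "TopicFormat": "/var/log/(.*).log",     # required when TopicType is filepath or custom
    "EnableTimestampNanosecond": True,      # nanosecond-precision log timestamps
}
```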
Input plug-ins
File input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | input_file | The type of the plug-in. Set the value to input_file. |
| FilePaths | [string] | Yes | / | ["/var/log/*.log"] | The list of paths to the log files that you want to collect. Currently, you can specify only one path. You can use the wildcard characters (*) and (**) in the path. The wildcard character (**) can appear only once and only before the filename. |
| MaxDirSearchDepth | uint | No | 0 | 0 | The maximum depth of the directories that the wildcard character (**) in the file path can match. This parameter is valid only when the wildcard character (**) is used in the log path. Valid values: 0 to 1000. |
| ExcludeFilePaths | [string] | No | Empty | ["/home/admin/*.log"] | The blacklist of file paths. The paths must be absolute paths. You can use the wildcard character (*). |
| ExcludeFiles | [string] | No | Empty | ["app*.log", "password"] | The blacklist of filenames. You can use the wildcard character (*). |
| ExcludeDirs | [string] | No | Empty | ["/home/admin/dir1", "/home/admin/dir2*"] | The blacklist of directories. The paths must be absolute paths. You can use the wildcard character (*). |
| FileEncoding | string | No | utf8 | utf8 | The encoding format of the file. Valid values: utf8 and gbk. |
| TailSizeKB | uint | No | 1024 | 1024 | The size of the data to be collected from the end of a file when the configuration first takes effect. If the file size is smaller than this value, data is collected from the beginning of the file. Valid values: 0 to 10485760 KB. |
| Multiline | object | No | Empty | / | The multiline aggregation options. |
| Multiline.Mode | string | No | custom | custom | The multiline aggregation mode. Valid values: custom and JSON. |
| Multiline.StartPattern | string | This parameter is required if you set Multiline.Mode to custom. | Empty | \d+-\d+-\d+.* | The regular expression for the start of a log entry. |
| EnableContainerDiscovery | bool | No | false | true | Specifies whether to enable container discovery. This parameter is valid only when Logtail runs in DaemonSet mode and the collection file path is a path within a container. |
| ContainerFilters | object | No | Empty | / | The container filtering options. Multiple options are combined with a logical AND. This parameter is valid only if you set EnableContainerDiscovery to true. |
| ContainerFilters.K8sNamespaceRegex | string | No | Empty | default | For containers deployed in a Kubernetes environment, specifies the namespace condition for the pods where the containers to be collected are located. If you do not add this parameter, all containers are collected. Regular expressions are supported. |
| ContainerFilters.K8sPodRegex | string | No | Empty | test-pod | For containers deployed in a Kubernetes environment, specifies the name condition for the pods where the containers to be collected are located. If you do not add this parameter, all containers are collected. Regular expressions are supported. |
| ContainerFilters.IncludeK8sLabel | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the label conditions for the pods where the containers to be collected are located. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the pod label name, and the value is the pod label value. |
| ContainerFilters.ExcludeK8sLabel | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the label conditions for the pods where the containers to be excluded from collection are located. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the pod label name, and the value is the pod label value. |
| ContainerFilters.K8sContainerRegex | string | No | Empty | test-container | For containers deployed in a Kubernetes environment, specifies the name condition for the containers to be collected. If you do not add this parameter, all containers are collected. Regular expressions are supported. |
| ContainerFilters.IncludeEnv | map | No | Empty | / | Specifies the environment variable conditions for the containers to be collected. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the environment variable name, and the value is the environment variable value. |
| ContainerFilters.ExcludeEnv | map | No | Empty | / | Specifies the environment variable conditions for the containers to be excluded from collection. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the environment variable name, and the value is the environment variable value. |
| ContainerFilters.IncludeContainerLabel | map | No | Empty | / | Specifies the label conditions for the containers to be collected. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the container label name, and the value is the container label value. |
| ContainerFilters.ExcludeContainerLabel | map | No | Empty | / | Specifies the label conditions for the containers to be excluded from collection. Multiple conditions are combined with a logical OR. If you do not add this parameter, all containers are collected. Regular expressions are supported. The key in the map is the container label name, and the value is the container label value. |
| ExternalK8sLabelTag | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the tags related to pod labels that you want to add to logs. The key in the map is the pod label name, and the value is the corresponding tag name. For example, if you add app: k8s_label_app to the map and a pod has the label app=serviceA, the tag __tag__:k8s_label_app: serviceA is added to the log. If the pod does not have the app label, the empty field __tag__:k8s_label_app: is added. |
| ExternalEnvTag | map | No | Empty | / | For containers deployed in a Kubernetes environment, specifies the tags related to container environment variables that you want to add to logs. The key in the map is the environment variable name, and the value is the corresponding tag name. For example, if you add VERSION: env_version to the map and a container has the environment variable VERSION=v1.0.0, the tag __tag__:env_version: v1.0.0 is added to the log. If the container does not have the VERSION environment variable, the empty field __tag__:env_version: is added. |
| CollectingContainersMeta | bool | No | false | true | Specifies whether to enable container metadata preview. |
| AppendingLogPositionMeta | bool | No | false | false | Specifies whether to add the metadata of the file to which the log belongs to the log. The metadata includes the __tag__:__inode__ field and the __file_offset__ field. |
| AllowingIncludedByMultiConfigs | bool | No | false | false | Specifies whether to allow the current configuration to collect files that are already matched by other configurations. |
| Tags | object | No | Empty | {"FileInodeTagKey":"__inode__"} | Important: This parameter is supported only by LoongCollector 3.0.10 and later. |
| FileOffsetKey | string | No | Empty | __file_offset__ | Important: This parameter is supported only by LoongCollector 3.0.10 and later. |
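The following sketch shows one way to combine the file input parameters above: multi-line collection from an in-container path with container discovery enabled. The path, search depth, and namespace filter are assumptions for illustration only.

```python
# Sketch of an input_file plug-in for multi-line logs from containers in a
# placeholder "default" namespace (DaemonSet mode, in-container path).
input_file = {
    "Type": "input_file",
    "FilePaths": ["/data/logs/**/*.log"],   # ** may appear once, only before the file name
    "MaxDirSearchDepth": 3,                 # how deep ** may descend
    "TailSizeKB": 1024,                     # initial backfill from the end of each file
    "Multiline": {
        "Mode": "custom",
        "StartPattern": r"\d+-\d+-\d+.*",   # a new entry starts with a date
    },
    "EnableContainerDiscovery": True,
    "ContainerFilters": {
        "K8sNamespaceRegex": "default",     # collect only pods in this namespace
    },
}
```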
Container standard output (legacy)
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | service_docker_stdout | The type of the plug-in. Set the value to service_docker_stdout. |
| Stdout | Boolean | No | true | true | Specifies whether to collect standard output (stdout). |
| Stderr | Boolean | No | true | true | Specifies whether to collect standard error (stderr). |
| StartLogMaxOffset | Integer | No | 128 × 1024 | 131072 | The length of historical data to be retrieved during the first collection, in bytes. We recommend that you set this value to a number between 131072 and 1048576. |
| IncludeLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty | | The whitelist of container labels, which is used to specify the containers to be collected. By default, this parameter is empty, which indicates that the standard output of all containers is collected. If you want to set a whitelist of container labels, LabelKey is required and LabelValue is optional. Multiple whitelists are combined with a logical OR. A container is matched if its label meets the condition of any whitelist. |
| ExcludeLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty | | The blacklist of container labels, which is used to exclude containers from collection. By default, this parameter is empty, which indicates that no containers are excluded. If you want to set a blacklist of container labels, LabelKey is required and LabelValue is optional. Multiple blacklists are combined with a logical OR. A container is excluded if its label meets the condition of any blacklist. |
| IncludeEnv | Map, where EnvKey and EnvValue are of the String type | No | Empty | | The whitelist of environment variables, which is used to specify the containers to be collected. By default, this parameter is empty, which indicates that the standard output of all containers is collected. If you want to set a whitelist of environment variables, EnvKey is required and EnvValue is optional. |
| ExcludeEnv | Map, where EnvKey and EnvValue are of the String type | No | Empty | | The blacklist of environment variables, which is used to exclude containers from collection. By default, this parameter is empty, which indicates that no containers are excluded. If you want to set a blacklist of environment variables, EnvKey is required and EnvValue is optional. Multiple blacklists are combined with a logical OR. A container is excluded if its environment variable meets the condition of any key-value pair. |
| IncludeK8sLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty | | Specifies the containers to be collected using a whitelist of Kubernetes labels that are defined in template.metadata. If you want to set a whitelist of Kubernetes labels, LabelKey is required and LabelValue is optional. Multiple whitelists are combined with a logical OR. A container is matched if its Kubernetes label meets the condition of any whitelist. |
| ExcludeK8sLabel | Map, where LabelKey and LabelValue are of the String type | No | Empty | | Excludes containers from collection using a blacklist of Kubernetes labels that are defined in template.metadata. If you want to set a blacklist of Kubernetes labels, LabelKey is required and LabelValue is optional. Multiple blacklists are combined with a logical OR. A container is excluded if its Kubernetes label meets the condition of any blacklist. |
| K8sNamespaceRegex | String | No | Empty | ^(default|nginx)$ | Specifies the containers to be collected by namespace name. Regular expressions are supported. For example, if you set this parameter to ^(default|nginx)$, all containers in the nginx and default namespaces are matched. |
| K8sPodRegex | String | No | Empty | ^(nginx-log-demo.*)$ | Specifies the containers to be collected by pod name. Regular expressions are supported. For example, if you set this parameter to ^(nginx-log-demo.*)$, all containers in pods whose names start with nginx-log-demo are matched. |
| K8sContainerRegex | String | No | Empty | ^(container-test)$ | Specifies the containers to be collected by container name. The Kubernetes container name is defined in spec.containers. Regular expressions are supported. For example, if you set this parameter to ^(container-test)$, all containers named container-test are matched. |
Data processing parameters
| Parameter | Type | Required | Default value | Example | Description |
| BeginLineRegex | String | No | Empty | | The regular expression to match the start of a log entry. If this parameter is empty, single-line mode is used. If the expression matches the beginning of a line, that line is treated as a new log entry. Otherwise, the line is appended to the previous log entry. |
| BeginLineCheckLength | Integer | No | Empty | | The length to check for a match at the start of a line, in bytes. The default value is 10 × 1024 bytes. If the regular expression for the start of a line can be matched within the first N bytes, we recommend that you set this parameter to improve matching efficiency. |
| BeginLineTimeoutMs | Integer | No | Empty | | The timeout period for matching the start of a line, in milliseconds. The default value is 3000 milliseconds. If no new log appears within 3000 milliseconds, the matching ends, and the last log entry is uploaded to Simple Log Service. |
| MaxLogSize | Integer | No | Empty | | The maximum length of a log entry, in bytes. The default value is 512 × 1024 bytes. If the length of a log entry exceeds this value, the system stops searching for the start of the line and uploads the log directly. |
| ExternalK8sLabelTag | Map, where LabelKey and LabelValue are of the String type | No | Empty | | After you set the Kubernetes label (defined in template.metadata) log tag, iLogtail adds fields related to the Kubernetes label to the log. For example, if you set LabelKey to app and LabelValue to |
| ExternalEnvTag | Map, where EnvKey and EnvValue are of the String type | No | Empty | | After you set the container environment variable log tag, iLogtail adds fields related to the container environment variable to the log. For example, if you set EnvKey to |
Data processing environment variables
| Environment variable | Type | Required | Default value | Example | Description |
| ALIYUN_LOG_ENV_TAGS | String | No | Empty | | After you set the global environment variable log tag, iLogtail adds fields related to the environment variables of the container where iLogtail resides to the log. Separate multiple environment variable names with a vertical bar (\|). For example, if you set this parameter to node_name\|node_ip, and the iLogtail container exposes the relevant environment variables, this information is added to the log as tags by adding the fields node_ip:172.16.0.1 and node_name:worknode. |
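To tie the legacy standard output parameters together, here is a hedged sketch that collects stdout and stderr from pods whose names start with nginx-log-demo and stitches multi-line entries that begin with a date; the filters and regular expression are placeholders.

```python
# Sketch of the legacy service_docker_stdout plug-in (placeholder filters).
docker_stdout = {
    "Type": "service_docker_stdout",
    "Stdout": True,                         # collect standard output
    "Stderr": True,                         # collect standard error
    "K8sNamespaceRegex": "^(default)$",
    "K8sPodRegex": "^(nginx-log-demo.*)$",
    "BeginLineRegex": r"\d+-\d+-\d+.*",     # lines starting with a date begin a new entry
}
```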
MySQL input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | service_mysql | The type of the plug-in. Set the value to service_mysql. |
| Address | string | No | 127.0.0.1:3306 | rm-*.mysql.rds.aliyuncs.com | The MySQL address. |
| User | string | No | root | root | The username that is used to log on to the MySQL database. |
| Password | string | No | Empty | | The password of the user that is used to log on to the MySQL database. For higher security, set the username and password to xxx. After the collection configuration is synchronized to your local machine, find the corresponding configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify local configurations. Important: If you modify this parameter in the console, the local configuration is overwritten after synchronization. |
| DataBase | string | No | / | project_database | The name of the database. |
| DialTimeOutMs | int | No | 5000 | 5000 | The timeout period for connecting to the MySQL database, in ms. |
| ReadTimeOutMs | int | No | 5000 | 5000 | The timeout period for reading the MySQL query results, in ms. |
| StateMent | string | No | / | | The SELECT statement. If you set CheckPoint to true, the WHERE condition in the SELECT statement must contain the checkpoint column (CheckPointColumn). You can use a question mark (?) as a placeholder to be used with the checkpoint column. For example, you can set CheckPointColumn to id, CheckPointStart to 0, and StateMent to SELECT * from ... where id > ?. After each collection, the system saves the ID of the last data entry as a checkpoint. In the next collection, the question mark (?) in the query statement is replaced with the ID of this checkpoint. |
| Limit | bool | No | false | true | Specifies whether to use LIMIT for paging. |
| PageSize | int | No | / | 10 | The page size. This parameter is required if you set Limit to true. |
| MaxSyncSize | int | No | 0 | 0 | The maximum number of records to synchronize at a time. The default value is 0, which means no limit. |
| CheckPoint | bool | No | false | true | Specifies whether to use a checkpoint. |
| CheckPointColumn | string | No | Empty | 1 | The name of the checkpoint column. This parameter is required if you set CheckPoint to true. Warning: The values in this column must be incremental. Otherwise, data may be missed during collection. The maximum value in each query result is used as the input for the next query. |
| CheckPointColumnType | string | No | Empty | int | The data type of the checkpoint column. Supported values: int and time. The int type is stored internally as int64. The time type supports the date, datetime, and time types of MySQL. This parameter is required if you set CheckPoint to true. |
| CheckPointStart | string | No | Empty | | The initial value of the checkpoint column. This parameter is required if you set CheckPoint to true. |
| CheckPointSavePerPage | bool | No | true | true | Specifies whether to save a checkpoint for each page. |
| IntervalMs | int | No | 60000 | 60000 | The synchronization interval. The default value is 60000, in ms. |
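A hedged example of the MySQL parameters above: incremental collection driven by an auto-increment id column with LIMIT paging. The address, database, table, and credentials are placeholders; keep real passwords out of the console, as the Password description recommends.

```python
# Sketch of a service_mysql input with checkpoint-based incremental collection.
mysql_input = {
    "Type": "service_mysql",
    "Address": "rm-example.mysql.rds.aliyuncs.com",   # placeholder instance address
    "User": "xxx",
    "Password": "xxx",                                # replace locally in user_log_config.json
    "DataBase": "project_database",
    "StateMent": "SELECT * FROM logs WHERE id > ?",   # ? is filled with the saved checkpoint
    "CheckPoint": True,
    "CheckPointColumn": "id",
    "CheckPointColumnType": "int",
    "CheckPointStart": "0",
    "Limit": True,
    "PageSize": 100,
    "IntervalMs": 60000,
}
```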
HTTP input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | metric_http | The type of the plug-in. Set the value to metric_http. |
| Address | string | Yes | / | | The list of URLs. Important: The URLs must start with http or https. |
| IntervalMs | int | Yes | / | 10 | The interval between requests, in ms. |
| Method | string | No | GET | GET | The request method name. It must be in uppercase. |
| Body | string | No | Empty | | The content of the HTTP Body field. |
| Headers | map | No | Empty | {"key":"value"} | The content of the HTTP header, such as {"key":"value"}. Replace the content with the actual value. |
| PerAddressSleepMs | int | No | 100 | 100 | The interval between requests for each URL in the Addresses list, in ms. |
| ResponseTimeoutMs | int | No | 5000 | 5000 | The request timeout period, in ms. |
| IncludeBody | bool | No | false | true | Specifies whether to collect the request body. The default value is false. If you set this parameter to true, the request body content is stored in a key named content. |
| FollowRedirects | bool | No | false | false | Specifies whether to automatically handle redirections. |
| InsecureSkipVerify | bool | No | false | false | Specifies whether to skip HTTPS security checks. |
| ResponseStringMatch | string | No | / | | Performs a regular expression check on the returned body content. The check result is stored in a key named _response_match_. If a match is found, the value is yes. If no match is found, the value is no. |
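The sketch below probes a placeholder health endpoint with the HTTP input parameters above. The address list follows the Addresses list referenced by PerAddressSleepMs; confirm the exact key name against the plug-in reference for your Logtail version.

```python
# Sketch of a metric_http input that polls a placeholder endpoint every 30 seconds.
http_input = {
    "Type": "metric_http",
    "Addresses": ["http://127.0.0.1:8080/health"],   # assumed list form of the address parameter
    "IntervalMs": 30000,
    "Method": "GET",
    "IncludeBody": True,                             # store the response body in the content key
    "ResponseStringMatch": r'"status":\s*"ok"',      # match result stored in _response_match_
}
```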
Syslog input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | service_syslog | The type of the plug-in. Set the value to service_syslog. |
| Address | string | No | tcp://127.0.0.1:9999 | | Specifies the protocol, address, and port that Logtail listens on. Logtail listens and obtains log data based on the Logtail configuration. The format is [tcp/udp]://[ip]:[port]. If this parameter is not configured, the default value tcp://127.0.0.1:9999 is used, which means that only logs forwarded locally can be received. |
| ParseProtocol | string | No | Empty | rfc3164 | Specifies the protocol used to parse logs. The default value is empty, which means logs are not parsed. |
| IgnoreParseFailure | bool | No | true | true | Specifies the operation to perform after a parsing failure. If this parameter is not configured, the default value true is used, which means the parsing is abandoned and the returned content field is directly filled. If you set this parameter to false, the log is discarded upon parsing failure. |
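A minimal sketch of the syslog parameters above, assuming a UDP listener on port 514 and RFC 3164 parsing; the listen address is a placeholder.

```python
# Sketch of a service_syslog input (placeholder listen address).
syslog_input = {
    "Type": "service_syslog",
    "Address": "udp://0.0.0.0:514",   # [tcp/udp]://[ip]:[port]
    "ParseProtocol": "rfc3164",       # parse messages as RFC 3164
    "IgnoreParseFailure": True,       # keep unparsed messages in the content field
}
```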
Systemd journal input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | service_journal | The type of the plug-in. Set the value to service_journal. |
| JournalPaths | [string] | Yes | Empty | /var/log/journal | The Journal log path. We recommend that you set this to the directory where the Journal logs are located. |
| SeekPosition | string | No | tail | tail | The method for the first collection. You can set this to head or tail. |
| Kernel | bool | No | true | true | Specifies whether to collect kernel logs. |
| Units | [string] | No | Empty | "" | The list of units to collect. By default, this is empty, which means all units are collected. |
| ParseSyslogFacility | bool | No | false | false | Specifies whether to parse the facility field of syslog logs. If this parameter is not configured, the field is not parsed. |
| ParsePriority | bool | No | false | false | Specifies whether to parse the Priority field. If this parameter is not configured, the field is not parsed. If you set this parameter to true, the Priority field is mapped as follows: "0": "emergency", "1": "alert", "2": "critical", "3": "error", "4": "warning", "5": "notice", "6": "informational", "7": "debug". |
| UseJournalEventTime | bool | No | false | false | Specifies whether to use the field in the Journal log as the log time. If this parameter is not configured, the collection time is used as the log time. The time difference for real-time log collection is generally within 3 seconds. |
SQL Server input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | service_mssql | The type of the plug-in. Set the value to service_mssql. |
| Address | string | No | 127.0.0.1:1433 | rm-*.sqlserver.rds.aliyuncs.com | The SQL Server address. |
| User | string | No | root | root | The username that is used to log on to the SQL Server database. |
| Password | string | No | Empty | | The password of the user that is used to log on to the SQL Server database. For higher security, set the username and password to xxx. After the collection configuration is synchronized to your local machine, find the corresponding configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify local configurations. Important: If you modify this parameter in the console, the local configuration is overwritten after synchronization. |
| DataBase | string | No | / | project_database | The name of the database. |
| DialTimeOutMs | int | No | 5000 | 5000 | The timeout period for connecting to the SQL Server database, in ms. |
| ReadTimeOutMs | int | No | 5000 | 5000 | The timeout period for reading the SQL Server query results, in ms. |
| StateMent | string | No | / | | The SELECT statement. If you set CheckPoint to true, the WHERE condition in the SELECT statement must contain the checkpoint column (CheckPointColumn). You can use a question mark (?) as a placeholder to be used with the checkpoint column. For example, you can set CheckPointColumn to id, CheckPointStart to 0, and StateMent to SELECT * from ... where id > ?. After each collection, the system saves the ID of the last data entry as a checkpoint. In the next collection, the question mark (?) in the query statement is replaced with the ID of this checkpoint. |
| Limit | bool | No | false | true | Specifies whether to use LIMIT for paging. |
| PageSize | int | No | / | 10 | The page size. This parameter is required if you set Limit to true. |
| MaxSyncSize | int | No | 0 | 0 | The maximum number of records to synchronize at a time. The default value is 0, which means no limit. |
| CheckPoint | bool | No | false | true | Specifies whether to use a checkpoint. |
| CheckPointColumn | string | No | Empty | 1 | The name of the checkpoint column. This parameter is required if you set CheckPoint to true. Warning: The values in this column must be incremental. Otherwise, data may be missed during collection. The maximum value in each query result is used as the input for the next query. |
| CheckPointColumnType | string | No | Empty | int | The data type of the checkpoint column. Supported values: int and time. The int type is stored internally as int64. The time type supports the date, datetime, and time types of SQL Server. This parameter is required if you set CheckPoint to true. |
| CheckPointStart | string | No | Empty | | The initial value of the checkpoint column. This parameter is required if you set CheckPoint to true. |
| CheckPointSavePerPage | bool | No | true | true | Specifies whether to save a checkpoint for each page. |
| IntervalMs | int | No | 60000 | 60000 | The synchronization interval. The default value is 60000, in ms. |
PostgreSQL input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | service_pgsql | The type of the plug-in. Set the value to service_pgsql. |
| Address | string | No | 127.0.0.1:5432 | rm-*.pg.rds.aliyuncs.com | The PostgreSQL address. |
| User | string | No | root | root | The username that is used to log on to the PostgreSQL database. |
| Password | string | No | Empty | | The password of the user that is used to log on to the PostgreSQL database. For higher security, set the username and password to xxx. After the collection configuration is synchronized to your local machine, find the corresponding configuration in the /usr/local/ilogtail/user_log_config.json file and modify it. For more information, see Modify local configurations. Important: If you modify this parameter in the console, the local configuration is overwritten after synchronization. |
| DataBase | string | No | / | project_database | The name of the PostgreSQL database. |
| DialTimeOutMs | int | No | 5000 | 5000 | The timeout period for connecting to the PostgreSQL database, in ms. |
| ReadTimeOutMs | int | No | 5000 | 5000 | The timeout period for reading the PostgreSQL query results, in ms. |
| StateMent | string | No | / | | The SELECT statement. If you set CheckPoint to true, the WHERE condition in the SELECT statement must contain the checkpoint column (the CheckPointColumn parameter), and the value of this column must be set to $1. For example, you can set CheckPointColumn to id and StateMent to SELECT * from ... where id > $1. |
| Limit | bool | No | false | true | Specifies whether to use LIMIT for paging. |
| PageSize | int | No | / | 10 | The page size. This parameter is required if you set Limit to true. |
| MaxSyncSize | int | No | 0 | 0 | The maximum number of records to synchronize at a time. The default value is 0, which means no limit. |
| CheckPoint | bool | No | false | true | Specifies whether to use a checkpoint. |
| CheckPointColumn | string | No | Empty | 1 | The name of the checkpoint column. This parameter is required if you set CheckPoint to true. Warning: The values in this column must be incremental. Otherwise, data may be missed during collection. The maximum value in each query result is used as the input for the next query. |
| CheckPointColumnType | string | No | Empty | int | The data type of the checkpoint column. Supported values: int and time. The int type is stored internally as int64. The time type supports the time types of PostgreSQL. This parameter is required if you set CheckPoint to true. |
| CheckPointStart | string | No | Empty | | The initial value of the checkpoint column. This parameter is required if you set CheckPoint to true. |
| CheckPointSavePerPage | bool | No | true | true | Specifies whether to save a checkpoint for each page. |
| IntervalMs | int | No | 60000 | 60000 | The synchronization interval. The default value is 60000, in ms. |
SNMP input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Targets | [string] | Yes | / | 127.0.0.1 | The IP address of the target machine group. |
| Port | string | No | 161 | 161 | The port used by the SNMP protocol. |
| Community | string | No | public | public | The community name. SNMPv1 and SNMPv2 use community names for authentication. |
| UserName | string | No | Empty | root | The username. SNMPv3 supports authentication using a username. |
| AuthenticationProtocol | string | No | NoAuth | NoAuth | The authentication protocol. SNMPv3 supports authentication using an authentication protocol. |
| AuthenticationPassphrase | string | No | Empty | | The authentication password. The default value is empty. If you set AuthenticationProtocol to MD5 or SHA, you must set AuthenticationPassphrase. |
| PrivacyProtocol | string | No | NoPriv | NoPriv | The privacy protocol. SNMPv3 supports authentication using a privacy protocol. |
| PrivacyPassphrase | string | No | Empty | | The privacy protocol password. By default, it is the same as the authentication password. If you set PrivacyProtocol to DES or AES, you must set PrivacyPassphrase. |
| Timeout | int | No | 5 | 5 | The timeout period for a single query operation, in seconds. |
| Version | int | No | 2 | 2 | The SNMP protocol version. Valid values: 1, 2, and 3. |
| Transport | string | No | udp | udp | The SNMP communication method. Valid values: udp and tcp. |
| MaxRepetitions | int | No | 0 | 0 | The number of retries after a query timeout. |
| Oids | [string] | No | Empty | 1 | The object identifiers to query on the target machine. |
| Fields | [string] | No | Empty | int | The fields to query on the target machine. This plug-in first translates the fields by looking them up in the local Management Information Base (MIB), translates them into object identifiers, and then queries them together. |
| Tables | [string] | No | Empty | | The tables to query on the target machine. This plug-in first queries all fields in the table, then looks them up in the local MIB, translates them into object identifiers, and queries them together. |
Script input plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | input_command | The type of the plug-in. Set the value to input_command. |
| ScriptType | string | Yes | Empty | shell | Specifies the type of script content. Currently, bash, shell, python2, and python3 are supported. |
| User | string | Yes | / | public | The username used to run the command. Only non-root users are supported. Note: Make sure the specified username exists on the machine. We recommend that you configure the least privilege and grant only rwx permissions to the directories or files that you want to monitor. |
| ScriptContent | string | Yes | Empty | | The script content. Plain text and Base64-encoded content are supported. The length cannot exceed 512 × 1024 bytes. |
| ContentEncoding | string | No | PlainText | PlainText | The text format of the script content. Valid values: PlainText and Base64. |
| LineSplitSep | string | No | Empty | | The separator for the script output content. If this is empty, no splitting is performed, and the entire output is returned as a single data entry. |
| CmdPath | string | No | Empty | /usr/bin/bash | The path to execute the script command. If this is empty, the default path is used. |
| TimeoutMilliSeconds | int | No | 3000 | 3000 | The timeout period for executing the script, in milliseconds. |
| IgnoreError | bool | No | false | false | Specifies whether to ignore error logs when the plug-in execution fails. The default value is false, which means they are not ignored. |
| Environments | [string] | No | os.Environ() | | The environment variables. The default is the value of os.Environ(). If Environments is set, the specified environment variables are appended to the value of os.Environ(). |
| IntervalMs | int | No | 5000 | 5000 | The collection trigger frequency or script execution frequency, in milliseconds. |
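As a hedged illustration of the script input, the snippet below runs a one-line shell command every 60 seconds under a hypothetical non-root user; the user name and command are assumptions.

```python
# Sketch of an input_command plug-in (hypothetical user and command).
command_input = {
    "Type": "input_command",
    "ScriptType": "shell",
    "User": "logtail-runner",                        # non-root user that must exist on the host
    "ScriptContent": "df -h | awk 'NR>1 {print $5, $6}'",
    "ContentEncoding": "PlainText",
    "LineSplitSep": "\n",                            # one record per output line
    "IntervalMs": 60000,
    "TimeoutMilliSeconds": 3000,
}
```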
Native processing plug-ins
Regular expression parsing plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_parse_regex_native | The type of the plug-in. Set the value to processor_parse_regex_native. |
| SourceKey | string | Yes | / | content | The source field name. |
| Regex | string | Yes | / | (\d+-\d+-\d+)\s+(.*) | The regular expression. |
| Keys | [string] | Yes | / | ["time", "msg"] | The list of extracted fields. |
| KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails. |
| KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds. |
| RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not specified, the source field is not renamed by default. |
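A minimal sketch of the regular expression parser, assuming log lines of the form "2024-01-01 12:00:00 message"; the pattern and field names are placeholders.

```python
# Sketch of processor_parse_regex_native splitting a line into time and msg fields.
regex_processor = {
    "Type": "processor_parse_regex_native",
    "SourceKey": "content",
    "Regex": r"(\d+-\d+-\d+ \d+:\d+:\d+)\s+(.*)",
    "Keys": ["time", "msg"],
    "KeepingSourceWhenParseFail": True,   # keep the raw line if the regex does not match
    "RenamedSourceKey": "raw",            # store the kept raw line under this key
}
```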
JSON parsing plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_parse_json_native | The type of the plug-in. Set the value to processor_parse_json_native. |
| SourceKey | string | Yes | / | content | The source field name. |
| KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails. |
| KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds. |
| RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not specified, the source field is not renamed by default. |
Separator parsing plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_parse_delimiter_native | The type of the plug-in. Set the value to processor_parse_delimiter_native. |
| SourceKey | string | Yes | / | content | The source field name. |
| Separator | string | Yes | / | , | The separator. |
| Quote | string | No | " | " | The quote. |
| Keys | [string] | Yes | / | ["time", "msg"] | The list of extracted fields. |
| AllowingShortenedFields | bool | No | true | true | Specifies whether to allow the number of extracted fields to be less than the number of keys. If not allowed, this scenario is considered a parsing failure. |
| OverflowedFieldsTreatment | string | No | extend | extend | The behavior when the number of extracted fields is greater than the number of keys. |
| KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails. |
| KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds. |
| RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not specified, the source field is not renamed by default. |
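The sketch below applies the separator parser to a hypothetical comma-separated access log; the field names are assumptions.

```python
# Sketch of processor_parse_delimiter_native for a comma-separated line.
delimiter_processor = {
    "Type": "processor_parse_delimiter_native",
    "SourceKey": "content",
    "Separator": ",",
    "Quote": '"',
    "Keys": ["time", "ip", "method", "status"],
    "AllowingShortenedFields": False,       # rows with fewer fields count as parse failures
    "OverflowedFieldsTreatment": "extend",  # default behavior for extra fields
}
```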
Apsara parsing plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_parse_apsara_native | The type of the plug-in. Set the value to processor_parse_apsara_native. |
| SourceKey | string | Yes | / | content | The source field name. |
| Timezone | string | No | Empty | GMT+08:00 | The time zone of the log time. The format is GMT+HH:MM (east of UTC) or GMT-HH:MM (west of UTC). |
| KeepingSourceWhenParseFail | bool | No | false | false | Specifies whether to keep the source field when parsing fails. |
| KeepingSourceWhenParseSucceed | bool | No | false | false | Specifies whether to keep the source field when parsing succeeds. |
| RenamedSourceKey | string | No | Empty | key | The field name to store the source field when it is kept. If not specified, the source field is not renamed by default. |
Time parsing plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_parse_timestamp_native | The type of the plug-in. Set the value to processor_parse_timestamp_native. |
| SourceKey | string | Yes | / | content | The source field name. |
| SourceFormat | string | Yes | / | %Y/%m/%d %H:%M:%S | The log time format. For more information, see Time formats. |
| SourceTimezone | string | No | Empty | GMT+08:00 | The time zone of the log time. The format is GMT+HH:MM (east of UTC) or GMT-HH:MM (west of UTC). |
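A minimal sketch of the time parser, assuming a time field written as 2024/01/01 12:00:00 in UTC+8; the format and time zone are placeholders.

```python
# Sketch of processor_parse_timestamp_native using a previously extracted "time" field.
timestamp_processor = {
    "Type": "processor_parse_timestamp_native",
    "SourceKey": "time",
    "SourceFormat": "%Y/%m/%d %H:%M:%S",
    "SourceTimezone": "GMT+08:00",
}
```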
Filtering plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_filter_regex_native | The type of the plug-in. Set the value to processor_filter_regex_native. |
| Include | map | Yes | / | / | The whitelist of log fields. The key is the field name and the value is a regular expression. This specifies the condition that the content of the field specified by the key must meet for the event to be collected. Multiple conditions are combined with a logical AND. The log is collected only when all conditions are met. |
Data masking plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_desensitize_native | The type of the plug-in. Set the value to processor_desensitize_native. |
| SourceKey | string | Yes | / | content | The source field name. |
| Method | string | Yes | / | const | The data masking method. Valid values: const: replaces sensitive content with a constant. md5: replaces sensitive content with its MD5 value. |
| ReplacingString | string | No. This parameter is required if you set Method to const. | / | ****** | The constant string to replace the sensitive content. |
| ContentPatternBeforeReplacedString | string | Yes | / | 'password:' | The regular expression for the prefix of the sensitive content. |
| ReplacedContentPattern | string | Yes | / | [^']* | The regular expression for the sensitive content. |
| ReplacingAll | bool | No | true | true | Specifies whether to replace all matched sensitive content. |
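The sketch below masks everything between password:' and the closing quote with a constant, reusing the example patterns from the table above.

```python
# Sketch of processor_desensitize_native with constant replacement.
desensitize_processor = {
    "Type": "processor_desensitize_native",
    "SourceKey": "content",
    "Method": "const",
    "ReplacingString": "******",
    "ContentPatternBeforeReplacedString": "password:'",   # prefix of the sensitive content
    "ReplacedContentPattern": "[^']*",                    # the sensitive content itself
    "ReplacingAll": True,
}
```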
Extension processors
Extract fields
Regular expression mode
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_regex | The type of the plug-in. Set the value to processor_regex. |
| SourceKey | string | Yes | / | content | The source field name. |
| Regex | string | Yes | / | (\d+-\d+-\d+)\s+(.*) | The regular expression. You need to use parentheses () to mark the fields to be extracted. |
| Keys | [string] | Yes | / | ["ip", "time", "method"] | Specifies the field names for the extracted content, such as ["ip", "time", "method"]. |
| NoKeyError | boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| NoMatchError | boolean | No | false | false | Specifies whether the system reports an error if the specified regular expression does not match the value of the source field. |
| KeepSource | boolean | No | false | false | Specifies whether to keep the source field in the parsed log. |
| FullMatch | boolean | No | true | true | Specifies whether to extract only when a full match is found. |
| KeepSourceIfParseError | boolean | No | true | false | Specifies whether to keep the source field in the parsed log if parsing fails. |
Anchor mode
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_anchor | The type of the plug-in. Set the value to processor_anchor. |
| SourceKey | String | Yes | / | content | The source field name. |
| Anchors | Anchor array | Yes | / | | The list of anchor items. |
| Start | String | Yes | Empty | time | The starting keyword. If empty, it matches the beginning of the string. |
| Stop | String | Yes | Empty | \t | The ending keyword. If empty, it matches the end of the string. |
| FieldName | String | Yes | Empty | time | Specifies the field name for the extracted content. |
| FieldType | String | Yes | Empty | string | The type of the field. Valid values: string and json. |
| ExpondJson | boolean | No | false | false | Specifies whether to expand JSON fields. |
| ExpondConnecter | String | No | _ | _ | The connector for JSON expansion. The default value is an underscore (_). |
| MaxExpondDepth | Int | No | 0 | 0 | The maximum depth for JSON expansion. The default value is 0, which means no limit. |
| NoAnchorError | Boolean | No | false | false | Specifies whether the system reports an error if the anchor item cannot be found. |
| NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. |
CSV mode
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_csv | The type of the plug-in. Set the value to processor_csv. |
| SourceKey | String | Yes | / | csv | The source field name. |
| SplitKeys | String array | Yes | / | ["date", "ip", "content"] | Specifies the field names for the extracted content, such as ["date", "ip", "content"]. Important: If the number of fields to be split is less than the number of fields in the SplitKeys parameter, the extra fields in the SplitKeys parameter are ignored. |
| PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of fields to be split is greater than the number of fields in the SplitKeys parameter. |
| ExpandOthers | Boolean | No | false | false | Specifies whether to parse the excess part. |
| ExpandKeyPrefix | String | No | | | The naming prefix for the excess fields. For example, if you configure it as expand_, the field names will be expand_1, expand_2. |
| TrimLeadingSpace | Boolean | No | false | false | Specifies whether to ignore leading spaces in field values. |
| SplitSep | String | No | , | , | The separator. The default value is a comma (,). |
| KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. |
| NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
Single-character separator
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_split_char | The type of the plug-in. Set the value to processor_split_char. |
| SourceKey | String | Yes | | | The source field name. |
| SplitSep | String | Yes | | | The separator. It must be a single character and can be a non-printable character, such as \u0001. |
| SplitKeys | String array | Yes | | ["ip", "time", "method"] | Specifies the field names for the extracted content, such as ["ip", "time", "method"]. |
| PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of fields to be split is greater than the number of fields in the SplitKeys parameter. |
| QuoteFlag | Boolean | No | false | false | Specifies whether to use a quote. |
| Quote | String | No | / | \u0001 | The quote. It must be a single character and can be a non-printable character, such as \u0001. This is valid only when QuoteFlag is set to true. |
| NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| NoMatchError | Boolean | No | false | false | Specifies whether the system reports an error if the specified separator does not match the separator in the log. |
| KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. |
Multi-character separator
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_split_string | The type of the plug-in. Set the value to processor_split_string. |
| SourceKey | String | Yes | | | The source field name. |
| SplitSep | String | Yes | | | The separator. Multiple characters are supported, and non-printable characters can be used, such as \u0001\u0002. |
| SplitKeys | String array | Yes | | ["key1","key2"] | Specifies the field names for the extracted content, such as ["key1","key2"]. Note: If the number of fields to be split is less than the number of fields in the SplitKeys parameter, the extra fields in the SplitKeys parameter are ignored. |
| PreserveOthers | Boolean | No | false | false | Specifies whether to keep the excess part if the number of fields to be split is greater than the number of fields in the SplitKeys parameter. |
| ExpandOthers | Boolean | No | false | false | Specifies whether to parse the excess part. |
| ExpandKeyPrefix | String | No | / | expand_ | The naming prefix for the excess part. For example, if you configure it as expand_, the field names will be expand_1, expand_2. |
| NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| NoMatchError | Boolean | No | false | false | Specifies whether the system reports an error if the specified separator does not match the separator in the log. |
| KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. |
Key-value pairs
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_split_key_value | The type of the plug-in. Set the value to processor_split_key_value. |
| SourceKey | string | Yes | | | The source field name. |
| Delimiter | string | No | \t | \t | The separator between key-value pairs. The default value is the tab character \t. |
| Separator | string | No | : | : | The separator between the key and value in a single key-value pair. The default value is a colon (:). |
| KeepSource | Boolean | No | false | false | Specifies whether to keep the source field in the parsed log. |
| ErrIfSourceKeyNotFound | Boolean | No | true | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| DiscardWhenSeparatorNotFound | Boolean | No | false | false | Specifies whether to discard the key-value pair if no matching separator is found. |
| ErrIfSeparatorNotFound | Boolean | No | true | false | Specifies whether the system reports an error if the specified separator does not exist. |
| ErrIfKeyIsEmpty | Boolean | No | true | false | Specifies whether the system reports an error if the key is empty after splitting. |
| Quote | String | No | | | The quote. If set, and the value is enclosed in quotes, the value within the quotes is extracted. Multi-character quotes are supported. By default, the quote feature is not enabled. Important: If the quote is a double quotation mark (""), you need to add an escape character, which is a backslash (\). When a backslash (\) is used with a quote inside the quotes, the backslash (\) is output as part of the value. |
Grok mode
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_grok | The type of the plug-in. Set the value to processor_grok. |
| CustomPatternDir | String array | No | | | The directory where the custom Grok pattern files are located. The processor_grok plug-in reads all files in the directory. If this parameter is not added, no custom Grok pattern files are imported. Important: After updating the custom Grok pattern file, you need to restart Logtail for the changes to take effect. |
| CustomPatterns | Map | No | | | The custom Grok patterns, where the key is the rule name and the value is the Grok expression. For more information about the supported expressions, see processor_grok. If the link does not contain the expression you need, enter a custom Grok expression in Match. If this parameter is not added, no custom Grok patterns are used. |
| SourceKey | String | No | content | content | The source field name. The default value is the content field. |
| Match | String array | Yes | | | An array of Grok expressions. The processor_grok plug-in matches the log against the expressions in this list from top to bottom and returns the extraction result of the first successful match. Note: Configuring multiple Grok expressions may affect performance. We recommend no more than 5. |
| TimeoutMilliSeconds | Long | No | 0 | | The maximum time to attempt to extract fields using a Grok expression, in milliseconds. If this parameter is not added or is set to 0, it means there is no timeout. |
| IgnoreParseFailure | Boolean | No | true | true | Specifies whether to ignore logs that fail to be parsed. |
| KeepSource | Boolean | No | true | true | Specifies whether to keep the source field after successful parsing. |
| NoKeyError | Boolean | No | false | true | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| NoMatchError | Boolean | No | true | true | Specifies whether the system reports an error if none of the expressions set in the Match parameter match the log. |
| TimeoutError | Boolean | No | true | true | Specifies whether the system reports an error on a match timeout. |
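To illustrate the Grok parameters, here is a hedged sketch with a single expression for a hypothetical "client method status" line; %{IP}, %{WORD}, and %{NUMBER} are standard Grok patterns.

```python
# Sketch of processor_grok with one Match expression (hypothetical log shape).
grok_processor = {
    "Type": "processor_grok",
    "SourceKey": "content",
    "Match": ["%{IP:client} %{WORD:method} %{NUMBER:status}"],
    "KeepSource": True,            # keep the original content field after parsing
    "IgnoreParseFailure": True,    # do not discard logs that fail to match
}
```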
Add fields
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_add_fields | The type of the plug-in. Set the value to processor_add_fields. |
| Fields | Map | Yes | | | The field names and values to be added, in key-value pair format. You can add multiple fields. |
| IgnoreIfExist | Boolean | No | false | false | Specifies whether to ignore duplicate fields if a field with the same name exists. |
Drop fields
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_drop | The type of the plug-in. Set the value to processor_drop. |
| DropKeys | String array | Yes | | | Specifies the fields to be dropped. Multiple fields can be configured. |
Rename fields
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_rename | The type of the plug-in. Set the value to processor_rename. |
| NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the log. |
| SourceKeys | String array | Yes | | | The source fields to be renamed. |
| DestKeys | String array | Yes | | | The fields after renaming. |
Package fields
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_packjson | The type of the plug-in. Set the value to processor_packjson. |
| SourceKeys | String array | Yes | | | The source fields to be packaged. |
| DestKey | String | No | | | The field after packaging. |
| KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log. |
| AlarmIfIncomplete | Boolean | No | true | true | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
Expand JSON fields
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_json | The type of the plug-in. Set the value to processor_json. |
| SourceKey | String | Yes | | | The name of the source field to be expanded. |
| NoKeyError | Boolean | No | true | true | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| ExpandDepth | Int | No | 0 | 1 | The depth of JSON expansion. The default value is 0, which means no limit. 1 indicates the current level, and so on. |
| ExpandConnector | String | No | _ | _ | The connector for JSON expansion. The default value is an underscore (_). |
| Prefix | String | No | | | The prefix to add to the field names during JSON expansion. |
| KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log. |
| UseSourceKeyAsPrefix | Boolean | No | | | Specifies whether to use the source field name as a prefix for all expanded JSON field names. |
| KeepSourceIfParseError | Boolean | No | true | true | Specifies whether to keep the raw log if parsing fails. |
| ExpandArray | Boolean | No | false | false | Specifies whether to expand array types. This parameter is supported by Logtail 1.8.0 and later. |
Filter logs
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_filter_regex | The type of the plug-in. Set the value to processor_filter_regex. |
| Include | JSON Object | No | | | The key is the log field, and the value is the regular expression that the field value must match. The key-value pairs are combined with a logical AND. If the value of a log field matches the corresponding regular expression, the log is collected. |
| Exclude | JSON Object | No | | | The key is the log field, and the value is the regular expression that the field value must match. The key-value pairs are combined with a logical OR. If the value of any field in the log matches the corresponding regular expression, the log is discarded. |
Extract log time
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_gotime | The type of the plug-in. Set the value to processor_gotime. |
| SourceKey | String | Yes | / | | The source field name. |
| SourceFormat | String | Yes | / | | The format of the source time. |
| SourceLocation | Int | Yes | / | | The time zone of the source time. If this parameter is empty, the time zone of the host or container where Logtail runs is used. |
| DestKey | String | Yes | / | | The destination field after parsing. |
| DestFormat | String | Yes | / | | The time format after parsing. |
| DestLocation | Int | No | | | The time zone after parsing. If this parameter is empty, the local time zone is used. |
| SetTime | Boolean | No | true | true | Specifies whether to set the parsed time as the log time. |
| KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log. |
| NoKeyError | Boolean | No | true | true | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
| AlarmIfFail | Boolean | No | true | true | Specifies whether the system reports an error if it fails to extract the log time. |
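For illustration, a processors entry for this plug-in might look as follows. The field names, time formats, and time zone offsets are placeholders.
{ "Type": "processor_gotime", "SourceKey": "time", "SourceFormat": "2006-01-02 15:04:05", "SourceLocation": 8, "DestKey": "new_time", "DestFormat": "2006/01/02 15:04:05", "DestLocation": 8, "SetTime": true }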
Transform IP addresses
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_geoip | The type of the plug-in. Set the value to processor_geoip. |
| SourceKey | String | Yes | / | | The name of the source field for which you want to perform IP address transformation. |
| DBPath | String | Yes | / | /user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb | The full path of the GeoIP database. For example, /user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb. |
| NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field name does not exist in the raw log. |
| NoMatchError | Boolean | No | true | true | Specifies whether the system reports an error if the IP address is invalid or not found in the database. |
| KeepSource | Boolean | No | true | true | Specifies whether to keep the source field in the parsed log. |
| Language | String | No | zh-CN | zh-CN | The language attribute. The default value is zh-CN. Important Make sure your GeoIP database contains the corresponding language. |
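For illustration, a processors entry for IP address transformation might look as follows. The source field name ip is a placeholder, and the database path is the example value from the table.
{ "Type": "processor_geoip", "SourceKey": "ip", "DBPath": "/user/data/GeoLite2-City_20180102/GeoLite2-City.mmdb", "Language": "zh-CN" }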
Data masking
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_desensitize | The type of the plug-in. Set the value to processor_desensitize. |
| SourceKey | String | Yes | / | | The log field name. |
| Method | String | Yes | / | const | The data masking method. Valid values: const and md5. |
| Match | String | No | full | full | The method for extracting sensitive content. Valid values: full and regex. |
| ReplaceString | String | No | | | The string used to replace sensitive content. This parameter is required when Method is set to const. |
| RegexBegin | String | No | | | The regular expression that matches the prefix of the sensitive content. This parameter is required when Match is set to regex. |
| RegexContent | String | No | | | The regular expression that matches the sensitive content. This parameter is required when Match is set to regex. |
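For illustration, a processors entry that masks a password value with a constant might look as follows. The field name content and the regular expressions are placeholders.
{ "Type": "processor_desensitize", "SourceKey": "content", "Method": "const", "Match": "regex", "ReplaceString": "********", "RegexBegin": "password:", "RegexContent": "[^,]+" }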
Field value mapping
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_dict_map | The type of the plug-in. Set the value to processor_dict_map. |
| SourceKey | String | Yes | / | | The source field name. |
| MapDict | Map | No | | | The mapping dictionary. If the mapping dictionary is small, you can set it directly with this parameter instead of providing a local CSV dictionary file. Important If you set the DictFilePath parameter, the configuration in the MapDict parameter does not take effect. |
| DictFilePath | String | No | | | The dictionary file in CSV format. Fields in the CSV file are separated by commas (,) and quoted with double quotation marks ("). |
| DestKey | String | No | | | The field name after mapping. |
| HandleMissing | Boolean | No | false | false | Specifies whether the system processes the log if the target field is missing from the raw log. |
| Missing | String | No | Unknown | Unknown | The fill value to use when processing a missing target field in the raw log. The default value is Unknown. This parameter takes effect when HandleMissing is set to true. |
| MaxDictSize | Int | No | 1000 | 1000 | The maximum size of the mapping dictionary. The default value is 1000, which means up to 1000 mapping rules can be stored. To limit the plug-in's memory usage on the server, you can reduce this value. |
| Mode | String | No | overwrite | overwrite | The processing method when the mapped field already exists in the raw log. |
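For illustration, a processors entry that maps status codes to readable text might look as follows. The field names and dictionary entries are placeholders.
{ "Type": "processor_dict_map", "SourceKey": "status", "MapDict": { "200": "OK", "404": "Not Found" }, "DestKey": "status_text", "HandleMissing": true, "Missing": "Unknown" }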
Field encryption
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_encrypt | The type of the plug-in. Set the value to processor_encrypt. |
| SourceKey | String array | Yes | / | | The names of the source fields. |
| EncryptionParameters | Object | Yes | / | | The key-related configuration. |
| Key | String | Yes | / | | The encryption key. It must be 64 hexadecimal characters. |
| IV | String | No | 00000000000000000000000000000000 | | The initialization vector for encryption. It must be 32 hexadecimal characters. The default value is 00000000000000000000000000000000. |
| KeyFilePath | String | No | | | The path of the file from which the encryption parameters are read. If this parameter is not configured, the parameters are read according to Logtail Configuration - Input Configuration - File Path. |
| KeepSourceValueIfError | Boolean | No | false | false | Specifies whether the system keeps the value of the source field if encryption fails. If the value is not kept, it is replaced with ENCRYPT_ERROR. |
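For illustration, a processors entry for this plug-in might look as follows. The field name password is a placeholder, and the Key value must be replaced with your own 64-character hexadecimal key.
{ "Type": "processor_encrypt", "SourceKey": ["password"], "EncryptionParameters": { "Key": "<64 hexadecimal characters>", "IV": "00000000000000000000000000000000" } }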
String replacement
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_string_replace | The type of the plug-in. Set the value to processor_string_replace. |
| SourceKey | String | Yes | / | | The source field name. |
| Method | String | Yes | / | | The matching method. Valid values: regex, const, and unquote. |
| Match | String | No | | | The content to match. |
| ReplaceString | String | No | | | The string used for replacement. The default value is an empty string (""). |
| DestKey | String | No | | | Specifies a new field to store the replaced content. By default, no new field is added. |
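For illustration, a processors entry that replaces a constant string might look as follows. The field names and the matched content are placeholders.
{ "Type": "processor_string_replace", "SourceKey": "content", "Method": "const", "Match": "error", "ReplaceString": "err", "DestKey": "content_new" }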
Data encoding and decoding
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_base64_encoding | The type of the plug-in. Set the value to processor_base64_encoding. |
| SourceKey | String | Yes | / | | The source field name. |
| NewKey | String | Yes | / | | The field name for the encoded result. |
| NoKeyError | Boolean | No | false | false | Specifies whether the system reports an error if the specified source field does not exist in the raw log. |
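For illustration, a processors entry for this plug-in might look as follows. The field names are placeholders.
{ "Type": "processor_base64_encoding", "SourceKey": "content", "NewKey": "content_base64", "NoKeyError": false }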
Convert log to metric
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_log_to_sls_metric | The type of the plug-in. Set the value to processor_log_to_sls_metric. |
| MetricTimeKey | String | No | | | Specifies the time field in the log, which is mapped to the __time_nano__ field in the time series data. By default, the value of the __time__ field in the log is extracted. Make sure the specified field is a valid, properly formatted timestamp. Currently, Unix timestamps in seconds (10 digits), milliseconds (13 digits), microseconds (16 digits), and nanoseconds (19 digits) are supported. |
| MetricLabelKeys | []String | Yes | / | | Specifies the list of keys for the __labels__ field. The keys must follow the regular expression ^[a-zA-Z_][a-zA-Z0-9_]*$. The value cannot contain a vertical bar (|) or #$#. For more information, see Time series data (Metric). You cannot add the __labels__ field in the MetricLabelKeys parameter. If the source field contains a __labels__ field, its value is appended to the new __labels__ field. |
| MetricValues | Map | Yes | / | | Used to specify the metric name and metric value. The metric name corresponds to the __name__ field and must follow the regular expression ^[a-zA-Z_:][a-zA-Z0-9_:]*$. The metric value corresponds to the __value__ field and must be of the Double type. For more information, see Time series data (Metric). |
| CustomMetricLabels | Map | No | | | The custom __labels__ field. The key must follow the regular expression ^[a-zA-Z_][a-zA-Z0-9_]*$, and the value cannot contain a vertical bar (|) or #$#. For more information, see Time series data (Metric). |
| IgnoreError | Boolean | No | false | | Specifies whether to output an error log when there are no matching logs. The default value is false, which means no error log is output. |
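For illustration, a processors entry for this plug-in might be sketched as follows. The label keys and the MetricValues mapping are placeholders; the exact semantics of each parameter follow the table above.
{ "Type": "processor_log_to_sls_metric", "MetricLabelKeys": ["host", "region"], "MetricValues": { "metric_name_field": "metric_value_field" }, "CustomMetricLabels": { "app": "nginx" } }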
Convert log to trace
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | processor_otel_trace | The type of the plug-in. Set the value to processor_otel_trace. |
| SourceKey | String | Yes | / | | The source field name. |
| Format | String | Yes | / | json | The format after conversion. Valid values: protobuf, json, and protojson. |
| NoKeyError | Boolean | No | false | true | Specifies whether to report an error if the corresponding source field is not found in the log. The default value is false. |
| TraceIDNeedDecode | Boolean | No | false | | Specifies whether to Base64-decode the TraceID. The default value is false. If you set Format to protojson and the TraceID has been Base64-encoded, you need to set TraceIDNeedDecode to true. Otherwise, the conversion will fail. |
| SpanIDNeedDecode | Boolean | No | false | | Specifies whether to Base64-decode the SpanID. The default value is false. If you set Format to protojson and the SpanID has been Base64-encoded, you need to set SpanIDNeedDecode to true. Otherwise, the conversion will fail. |
| ParentSpanIDNeedDecode | Boolean | No | false | | Specifies whether to Base64-decode the ParentSpanID. The default value is false. If you set Format to protojson and the ParentSpanID has been Base64-encoded, you need to set ParentSpanIDNeedDecode to true. Otherwise, the conversion will fail. |
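For illustration, a processors entry that converts Base64-encoded protojson trace data might look as follows. The source field name content is a placeholder.
{ "Type": "processor_otel_trace", "SourceKey": "content", "Format": "protojson", "TraceIDNeedDecode": true, "SpanIDNeedDecode": true, "ParentSpanIDNeedDecode": true }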
Output plug-ins
SLS output plug-in
| Parameter | Type | Required | Default value | Example | Description |
| Type | string | Yes | / | flusher_sls | The type of the plug-in. Set the value to flusher_sls. |
| Logstore | string | Yes | / | test-logstore | The name of the Logstore. |
Response elements
None defined.
Examples
Success response
JSON format
{}
Error codes
See Error Codes for a complete list.
Release notes
See Release Notes for a complete list.