
Hello, iLogtail 2.0!

The article discusses the limitations of the existing iLogtail architecture and collection configuration and introduces the new features in iLogtail 2.0.

By Dumin

1. Overview

As the requirements for observable data collection continue to evolve, diversified data input and output options, flexible data processing capabilities, and high-performance processing throughput have become prerequisites for a top-tier observable data collector. However, due to historical reasons, the existing iLogtail architecture and collection configuration structure can no longer meet these requirements and have gradually become a bottleneck restricting the rapid evolution of iLogtail:

▶ iLogtail was initially designed for scenarios where file logs are collected to Simple Log Service:

(1) It simply classifies logs into several formats, and each log format supports only one processing method (such as regex parsing or JSON parsing).

(2) Function implementation is strongly bound to Simple Log Service concepts (such as Logstore). Based on this design, the existing iLogtail architecture leans towards a monolithic architecture, resulting in serious coupling between modules, poor scalability and universality, and difficulty in cascading multiple processing steps.

▶ The introduction of the Golang plug-in system greatly expands the input and output channels of iLogtail and improves its processing capability to a certain extent. However, because of how the C++ part is implemented, the ability to combine I/O with the processing modules is still severely limited:

(1) The native high-performance processing capabilities of the C++ part are still limited to scenarios where log files are collected and delivered to Simple Log Service.

(2) The processing ability of the C++ part cannot be combined with the processing ability of the plug-in system; only one of the two can be used at a time, which lowers performance in complex log processing scenarios.

▶ The existing iLogtail collection configuration mirrors the iLogtail architecture and also uses a flat structure, which lacks the concept of a processing pipeline and cannot express cascaded processing semantics.

For these reasons, on the 10th anniversary of the birth of iLogtail, Simple Log Service started to upgrade iLogtail, hoping to make iLogtail easier to use, more scalable, and more performant, so as to better serve our users.

After more than half a year of reconstruction and optimization, iLogtail 2.0 is ready to be released. Let's dive into the new features of iLogtail 2.0!

2. New Features

2.1 [Commercial Edition] Comprehensive Upgrade of the Collection Configuration Pipeline Structure

To address the limitation of the previous tiled structure in expressing complex collection behaviors, iLogtail 2.0 introduces a new pipeline configuration. Each configuration corresponds to a processing pipeline, consisting of an input module, a processing module, and an output module. Each module comprises multiple plugins with the following functions:

  • Input plug-ins: Obtain data from a specified input source. For more information about the functions of each plug-in, see Input plug-ins [1].
  • Processing plug-ins: Parse and process logs. For more information about the functions of each plug-in, see Processing plug-ins [2]. Processing plug-ins are further classified into native processing plug-ins and extended processing plug-ins:
    • Native processing plug-ins: Provide high performance and are suitable for most business scenarios; they are therefore recommended.
    • Extended processing plug-ins: Offer broader functionality but lower performance than native processing plug-ins. Use extended processing plug-ins only when native processing plug-ins cannot fulfill all processing requirements.
  • Output plug-ins: Send the processed data to a specified storage destination.

A JSON object can be used to represent a pipeline configuration:


The inputs, processors, and flushers fields represent the input module, the processing module, and the output module respectively. Each element {...} in a list represents a plug-in, and global holds pipeline-level configurations. For more information about the pipeline configuration structure, see iLogtail pipeline configuration structure [3].
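
In outline, a pipeline configuration therefore looks like the following, where each {...} placeholder stands for a plug-in's parameter object (the same abbreviated notation is used in later examples in this article):

{
    "configName": "example-config",
    "global": {...},
    "inputs": [{...}],
    "processors": [{...}, {...}],
    "flushers": [{...}]
}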

Example: Collect the test.log in the /var/log directory, parse the logs in JSON format, and send the logs to Simple Log Service. The old and new configurations for this collection requirement are shown below. You can see that the new configuration is more concise, and the operations performed are clear at a glance.

Old Configuration:

{
    "configName": "test-config",
    "inputType": "file",
    "inputDetail": {
        "topicFormat": "none",
        "priority": 0,
        "logPath": "/var/log",
        "filePattern": "test.log",
        "maxDepth": 0,
        "tailExisted": false,
        "fileEncoding": "utf8",
        "logBeginRegex": ".*",
        "dockerFile": false,
        "dockerIncludeLabel": {},
        "dockerExcludeLabel": {},
        "dockerIncludeEnv": {},
        "dockerExcludeEnv": {},
        "preserve": true,
        "preserveDepth": 1,
        "delaySkipBytes": 0,
        "delayAlarmBytes": 0,
        "logType": "json_log",
        "timeKey": "",
        "timeFormat": "",
        "adjustTimezone": false,
        "logTimezone": "",
        "filterRegex": [],
        "filterKey": [],
        "discardNonUtf8": false,
        "sensitive_keys": [],
        "mergeType": "topic",
        "sendRateExpire": 0,
        "maxSendRate": -1,
        "localStorage": true
    },
    "outputType": "LogService",
    "outputDetail": {
        "logstoreName": "test_logstore"
    }
}

New Pipeline Configuration:

{
    "configName": "test-config",
    "inputs": [
        {
            "Type": "file_log",
            "FilePaths": "/var/log/test.log"
        }
    ],
    "processors": [
        {
            "Type": "processor_parse_json_native"
            "SourceKey": "content"
        }
    ],
    "flushers": [
        {
            "Type": "flusher_sls",
            "Logstore": "test_logstore"
        }
    ]
}

If further processing is required after JSON parsing, you only need to add another processing plug-in to the pipeline configuration; the old configuration cannot express this requirement at all.

For more information about the compatibility between the new pipeline configuration and the old configuration, see the Compatibility Description section at the end of this article.

New APIs

To support the new pipeline configuration and distinguish it from the old configuration structure, we provide new APIs for managing pipeline configuration, including:

  • CreateLogtailPipelineConfig
  • UpdateLogtailPipelineConfig
  • GetLogtailPipelineConfig
  • DeleteLogtailPipelineConfig
  • ListLogtailPipelineConfig

For more information about these operations, see the OpenAPI documentation [4].

New Console

Corresponding to the pipeline collection configuration structure, the console has also been upgraded; it is now divided into global configuration, input configuration, processing configuration, and output configuration sections.


Compared with the old console, the new console has the following features:

Parameter cohesion: Parameters related to one feature are displayed together, which avoids the missed configurations caused by the scattered parameters of the old console.

Example: The maximum directory monitoring depth is closely related to the ** wildcard in the log path. In the old console, these two settings are far apart and easy to overlook. In the new console, they are placed together for easier understanding.

Old console: (screenshot)

New console: (screenshot)

All parameters are valid: In the old console, some parameters become invalid after plug-in processing is enabled, which causes unnecessary confusion. In the new console, all parameters are valid.

New CRDs

Similarly, the CRD resources used for collection configuration in Kubernetes scenarios have been upgraded to match the new collection configuration. Compared with the old CRDs, the new CRDs have the following features:

  • Support new pipeline collection configurations.
  • The CRD type is changed to the cluster level, and the CRD name is directly used as the collection configuration name to avoid conflicts caused by multiple different CRD resources in the same cluster pointing to the same collection configuration.
  • Define the results of all operations to avoid the undefined behavior that could follow multiple operations on the old CRDs.

The following is an example of the new CRD:
apiVersion: log.alibabacloud.com/v1alpha1
kind: ClusterAliyunLogConfig
metadata:
  name: test-config
spec:
  project:
    name: test-project
  logstore:
    name: test-logstore
  machineGroup:
    name: test-machine_group
  config:
    inputs:
      - Type: input_file
        FilePaths:
          - /var/log/test.log
    processors:
      - Type: processor_parse_json_native
        SourceKey: content

2.2 More Flexibility in Processing Plug-in Combinations

In text log collection scenarios, when your logs are complex and need to be parsed multiple times, are you confused because you can only use extended processing plug-ins? Are you worried about the performance loss and various inconsistencies caused by this?

Upgrade to iLogtail 2.0, and the above problems will be a thing of the past!

The processing pipeline of iLogtail 2.0 supports the new cascade mode. Compared with the 1.x series, iLogtail 2.0 has the following capabilities:

  • Native processing plug-ins can be combined with each other: The original dependencies between native processing plug-ins no longer exist, and you can freely combine them to meet your processing requirements.
  • Native processing plug-ins and extended processing plug-ins can be used together: If native processing plug-ins alone cannot handle complex log parsing, you can append extended processing plug-ins.

Note: An extended processing plug-in can only appear after all native processing plug-ins, but not before any native processing plug-in.

Example: If your text log is as follows:
{"time": "2024-01-22T14:00:00.745074", "level": "warning", "module": "box", "detail": "127.0.0.1 GET 200"}
You need to parse the time, level, and module fields from the JSON, perform regex parsing on the detail field to split it into the ip, method, and status fields, and then discard the module field. In this case, you can use the Data Parsing (JSON Mode) native plug-in, the Data Parsing (Regex Mode) native plug-in, and the Drop Field extended plug-in in sequence to meet these requirements.

[Commercial Edition]

(Console screenshots)

[Open Source Edition]

{
  "configName": "test-config"
  "inputs": [...],
  "processors": [
    {
      "Type": "processor_parse_json_native",
      "SourceKey": "content"
    },
    {
      "Type": "processor_parse_regex_native",
      "SourceKey": "detail",
      "Regex": "(\\S)+\\s(\\S)+\\s(.*)",
      "Keys": [
        "ip",
        "method",
        "status"
      ]
    },
    {
      "Type": "processor_drop",
      "DropKeys": [
        "module"
      ]
    }
  ],
  "flushers": [...]
}

The results are as follows:

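Based on the example log above and the plug-in defaults described in section 2.4 (source fields such as content and detail are not retained after successful parsing), the processed log should contain roughly the following fields:

{
  "time": "2024-01-22T14:00:00.745074",
  "level": "warning",
  "ip": "127.0.0.1",
  "method": "GET",
  "status": "200"
}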

2.3 New SPL Processing Mode

In addition to combining processing plug-ins, iLogtail 2.0 adds the SPL (SLS Processing Language) processing mode, which uses the unified syntax that Simple Log Service provides for query and data processing to process data on the client side. The advantages of the SPL processing mode are:

  • Rich operators and functions: The SPL processing mode supports multi-level pipe operations and provides a rich set of built-in operators and functions.
  • Low barrier to entry: Low code and easy to learn.
  • [Commercial Edition] Unified syntax: one language for log collection, query, processing, and consumption.


SPL Syntax

Overall structure

  • Instruction-based statements that support unified processing of structured and unstructured data
  • Exploratory syntax driven by the pipe operator (|): complex logic, simple orchestration
<data-source>
| <spl-cmd> -option=<option> -option ... <expression>, ... as <output>, ...
| <spl-cmd> ...
| <spl-cmd> ...

SQL computing instructions for structured data:

  • extend uses an SQL expression to compute a result and generate a new field
  • where filters data entries based on the result of an SQL expression
*
| extend latency=cast(latency as BIGINT)
| where status='200' AND latency>100

Extraction instructions for unstructured data:

  • parse-regexp extracts the information that matches groups in the specified regular expression from the specified field
  • parse-json extracts the first layer of JSON information from the specified field
  • parse-csv extracts information in CSV format from the specified field
*
| parse-csv -delim='^_^' content as time, body
| parse-regexp body, '(\S+)\s+(\w+)' as msg, user

2.4 Finer-grained Log Parsing Control

For native parsing plug-ins, iLogtail 2.0 provides finer-grained parsing control, including the following parameters:

  • KeepingSourceWhenParseFail: Specifies whether to retain the original field if the parsing fails. If you do not configure this parameter, it is not retained by default.
  • KeepingSourceWhenParseSucceed: Specifies whether to retain the original field if the parsing is successful. If you do not configure this parameter, it is not retained by default.
  • RenameSourceKey: The field name used to store the original field when it is retained. If you do not configure this parameter, the original field name is kept unchanged.

Example: If you want to retain the original content field and rename it raw when it fails to be parsed, you can configure the following parameters:

  • KeepingSourceWhenParseFail: true
  • RenameSourceKey: raw
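
For example, attached to the JSON parsing plug-in used in the earlier examples, the processing module would look roughly like this (the other pipeline modules are omitted):

{
  "processors": [
    {
      "Type": "processor_parse_json_native",
      "SourceKey": "content",
      "KeepingSourceWhenParseFail": true,
      "RenameSourceKey": "raw"
    }
  ]
}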

2.5 [Commercial Edition] Log Time Parsing Supports Nanosecond Accuracy

In iLogtail 1.x, if you want to extract the time field with nanosecond precision, Simple Log Service can only add a separate nanosecond timestamp field to your logs. In iLogtail 2.0, the nanosecond information is appended directly to the log collection time (__time__) without any additional field. This not only reduces unnecessary log storage space but also makes it easy to sort logs by nanosecond precision in the Simple Log Service console.

To extract time fields from logs to nanosecond precision in iLogtail 2.0, you need to first configure the native time parsing plug-in. Next, add .%f to the end of SourceFormat. Then, add "EnableTimestampNanosecond": true to the global parameters.

Example: A log contains the time field, whose value is 2024-01-23T14:00:00.745074, and the time zone is UTC+8. You need to parse the time with nanosecond precision and set the log time (__time__) to this value.

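In the open-source pipeline configuration style used earlier, this setup would look roughly as follows. The plug-in name processor_parse_timestamp_native and the SourceTimezone parameter are assumptions based on the naming convention of the other native plug-ins; SourceFormat and EnableTimestampNanosecond are the parameters described above:

{
  "processors": [
    {
      "Type": "processor_parse_timestamp_native",
      "SourceKey": "time",
      "SourceFormat": "%Y-%m-%dT%H:%M:%S.%f",
      "SourceTimezone": "GMT+08:00"
    }
  ],
  "global": {
    "EnableTimestampNanosecond": true
  }
}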

After parsing, the log collection time (__time__) carries the nanosecond component of the time field.

Note: iLogtail 2.0 no longer supports the nanosecond timestamp extraction method used in versions 1.x. If you used that feature in versions 1.x, you must manually enable the new nanosecond extraction method after you upgrade to iLogtail 2.0. For more information, see the Compatibility Description at the end of this article.

2.6 [Commercial Edition] Clearer Status Observation

Compared with the simple metrics exposed by iLogtail 1.x, iLogtail 2.0 greatly improves its observability:

  • All collection configurations have complete metrics. You can perform statistics and comparisons on different collection configurations in dimensions such as Project and Logstore.
  • All plug-ins have their own metrics and can build a complete pipeline topology. The status of each plug-in can be clearly observed.
  • C++ native plug-ins provide more detailed metrics that can be used to monitor and optimize plug-in configuration parameters.


2.7 Faster and Safer Operation

iLogtail 2.0 supports C++17 syntax. The C++ compiler has been upgraded to GCC 9, and the versions of the C++ dependency libraries have been updated. This makes iLogtail run faster and more securely.

Table: Performance of iLogtail 2.0 in single-threaded log processing (the length of a single log is 1 KB in this example)

Scenario                     CPU (cores)   Memory (MB)   Processing Rate (MB/s)
Single-line Log Collection   1.06          33            400
Multi-line Log Collection    1.04          33            150

3. Compatibility Description

3.1 Collection Configuration

Commercial Edition

  • The new pipeline collection configuration is fully forward-compatible with the old collection configuration:
    • When you upgrade iLogtail to 2.0, Simple Log Service automatically converts the old collection configuration to the new pipeline configuration when the configuration is delivered. You do not need to perform any additional operations. You can call the GetLogtailPipelineConfig operation to obtain the pipeline configuration converted from the old configuration.
  • The old collection configuration is not fully backward-compatible with new pipeline configurations:
    • If the collection and processing capabilities described by a pipeline configuration can be expressed by the old configuration, the pipeline configuration can still be used by iLogtail versions 0.x and 1.x. Simple Log Service automatically converts the new pipeline configuration to the old configuration when it is delivered to iLogtail.
    • Otherwise, the pipeline configuration is ignored by iLogtail versions 0.x and 1.x.

Open Source Edition

There is a small amount of incompatibility between the new collection configuration and the old collection configuration. For more information, see iLogtail 2.0 collection configuration incompatibility change description [5].

3.2 iLogtail Client

(1) Storing Tags When Using Extended Processing Plugins

When extended plug-ins are used to process logs, iLogtail 1.x stored some tags in ordinary log fields for implementation reasons, which caused inconvenience when using features such as query, search, and consumption in the Simple Log Service console. To resolve this, iLogtail 2.0 restores all tags to their original locations by default. If you still want to keep the 1.x behavior, add "UsingOldContentTag": true to the global parameters in the configuration (see the sketch after the list below).

  • For collection configurations created through the old console and old APIs, the tag storage location remains the same as version 1.x after upgrading to iLogtail 2.0.
  • For collection configurations created through the new console and new APIs, the tag storage location will be reverted to the default after upgrading to iLogtail 2.0.
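
A minimal sketch of where this flag sits in the pipeline configuration structure described earlier (the other fields are omitted):

{
  "global": {
    "UsingOldContentTag": true
  }
}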

(2) High-Precision Log Time Extraction

Version 2.0 no longer supports the PreciseTimestampKey and PreciseTimestampUnit parameters from versions 1.x. After upgrading to iLogtail 2.0, the previous nanosecond timestamp extraction function will no longer work. If you still need to parse nanosecond timestamps, manually update the configuration based on the Log Time Parsing Supports Nanosecond Accuracy section (2.5).

(3) Time Zone Adjustment for Microsecond Timestamps in Apsara Logs

The native Apsara parsing plug-in in version 2.0 no longer supports the AdjustingMicroTimezone parameter from versions 1.x. By default, the microsecond timestamp is adjusted to the correct time zone based on the configured time zone.

(4) Log Parsing Control

For native parsing plugins, in addition to the three parameters mentioned in the Finer-grained Log Parsing Control section (2.4), there is also the CopyingRawLog parameter. This parameter is only valid when both KeepingSourceWhenParseFail and KeepingSourceWhenParseSucceed are true. It adds an additional raw_log field to the log when parsing fails, containing the content that failed to be parsed.

This parameter is included for compatibility with earlier configurations. After upgrading to iLogtail 2.0, it is recommended to delete this parameter to reduce unnecessary duplicate log uploads.

Summary

The goal of Simple Log Service is to provide users with a comfortable and convenient user experience. Compared to iLogtail 1.x, the changes in iLogtail 2.0 are more noticeable, but they are just the beginning of iLogtail's journey towards a modern observable data collector. We highly recommend trying iLogtail 2.0 if feasible. You may experience some initial discomfort during the transition, but we believe that you will soon be drawn to the more powerful features and improved performance of iLogtail 2.0.

References

[1] Input plug-ins
https://www.alibabacloud.com/help/en/sls/user-guide/overview-19
[2] Processing plug-ins
https://www.alibabacloud.com/help/en/sls/user-guide/overview-22
[3] iLogtail pipeline configuration structure
https://next.api.aliyun.com/struct/Sls/2020-12-30/LogtailPipelineConfig?spm=api-workbench.api_explorer.0.0.65e61a47jWtoir
[4] OpenAPI documentation
https://next.api.aliyun.com/document/Sls/2020-12-30/CreateLogtailPipelineConfig?spm=api-workbench.api_explorer.0.0.65e61a47jWtoir
[5] iLogtail 2.0 collection configuration incompatibility change description
https://github.com/alibaba/ilogtail/discussions/1294
