
Simple Log Service: Manage Logtail configurations for log collection

Last Updated: Jun 26, 2025

This topic describes how to create, view, modify, and delete Logtail configurations for log collection in the Simple Log Service console. In addition to console operations, Simple Log Service also supports API and SDK methods.

Overview of Logtail configurations

Logtail configurations define the core rules for how to collect and process log data. Their purpose is to enable efficient log collection, structured parsing, filtering, and processing through flexible configuration.

Logtail configurations in the console comprise three parts:

  • Global configurations are used to organize collected text logs into different log topics.

    Global configurations

    Parameter

    Description

    Configuration Name

    The name of the Logtail configuration. It must be unique in a project. After the Logtail configuration is created, its name cannot be changed.

    Log Topic Type

    Select a method to generate log topics.

    • Machine Group Topic: Simple Log Service allows you to apply a Logtail configuration to multiple machine groups. Use the topic to distinguish logs from different machine groups. When Logtail reports data, it uploads the topic of the server's machine group as the log topic to the project. Specify the log topic as a query condition when querying logs.

    • File Path Extraction: If different users or applications store logs in different top-level directories but have the same subdirectories and log file names, Simple Log Service cannot clearly distinguish which user or application generated the logs during collection. You can use File Path Extraction to distinguish log data generated by different users or applications. Use a regular expression to fully match the file path and upload the matched result (username or application name) as the log topic to Simple Log Service.

      File path extraction scenario examples

      Note

      You must escape the forward slashes (/) in a regular expression that is used to match a log file path.

      Scenario 1: Different users record logs in different directories, but the log file names are the same. The directory paths are as follows:

      /data/logs
      ├── userA
      │   └── serviceA
      │       └── service.log
      ├── userB
      │   └── serviceA
      │       └── service.log
      └── userC
          └── serviceA
              └── service.log

      If you only configure the file path as /data/logs and the file name as service.log in the Logtail configuration, Logtail will collect the content of all three service.log files into the same logstore, making it impossible to distinguish which user generated the logs. You can specify the following regular expression to extract a value from each log file path. Each value is used as a unique log topic.

      • Regular expression

        \/data\/logs\/(.*)\/serviceA\/.*
      • Extraction results

        __topic__: userA
        __topic__: userB
        __topic__: userC

      Scenario 2: If a single log topic is not sufficient to distinguish the source of logs, configure multiple capturing groups in the log file path to extract key information. Capturing groups include named (?P<name>) and unnamed capturing groups. If you use named capturing groups, the generated tag field is __tag__:{name}. If you use unnamed ones, the generated tag field is __tag__:__topic_{i}__, where {i} is the ordinal number of the capturing group.

      Note

      When multiple capturing groups exist in the regular expression, the __topic__ field will not be generated.

      For example, if the file path is /data/logs/userA/serviceA/service.log, you can extract multiple values from the file path using the following methods:

      • Example 1: Extract multiple values by using unnamed capturing groups in a regular expression.

        • Regular expression

          \/data\/logs\/(.*?)\/(.*?)\/service.log
        • Extraction results

          __tag__:__topic_1__: userA
          __tag__:__topic_2__: serviceA
      • Example 2: Extract multiple values by using named capturing groups in a regular expression.

        • Regular expression

          \/data\/logs\/(?P<user>.*?)\/(?P<service>.*?)\/service.log
        • Extraction results

          __tag__:user: userA
          __tag__:service: serviceA

      Verification: After the configuration is complete, you can query logs by log topic. On the log query and analysis page, enter the generated log topic, such as __topic__: userA or __tag__:__topic_1__: userA, to query logs with that topic. For more information, see Query syntax and features.
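
      The following minimal Python sketch uses the standard re module to check the two extraction regular expressions against a sample file path before you apply the configuration. The __topic__ and __tag__ fields themselves are generated by Logtail; the script only verifies what the capturing groups would extract.

        import re

        path = "/data/logs/userA/serviceA/service.log"

        # Scenario 1: a single capturing group; the captured value becomes __topic__.
        single = re.match(r"\/data\/logs\/(.*)\/serviceA\/.*", path)
        print("__topic__:", single.group(1))               # userA

        # Scenario 2: named capturing groups; each one becomes a __tag__:{name} field.
        multi = re.match(r"\/data\/logs\/(?P<user>.*?)\/(?P<service>.*?)\/service.log", path)
        print("__tag__:user:", multi.group("user"))        # userA
        print("__tag__:service:", multi.group("service"))  # serviceA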


    • Custom: Enter customized:// + custom topic name to use a custom static log topic.

    Advanced Parameters

    For other optional advanced parameters related to global configuration, see Create a Logtail pipeline configuration.

  • Input configurations are used to define the collection details.

    Input configurations

    Parameter

    Description

    File Path

    Specify the directory and name of log files based on the location of the logs on your server, such as an Elastic Compute Service (ECS) instance.

    • Linux file paths must start with a forward slash (/). Example: /apsara/nuwa/**/app.Log.

    • Windows file paths must start with a drive letter. Example: C:\Program Files\Intel\**\*.Log.

    You can specify an exact directory and an exact name. You can also use wildcard characters to specify the directory and name. When you configure this parameter, use only the asterisk (*) or question mark (?) as wildcard characters.

    Simple Log Service scans all levels of the specified directory to find the log files that match the specified conditions. Examples:

    • If you specify /apsara/nuwa/**/*.log, Simple Log Service collects logs from the log files suffixed by .log in the /apsara/nuwa directory and its recursive subdirectories.

    • If you specify /var/logs/app_*/**/*.log, Simple Log Service collects logs from the log files that meet the following conditions: The file name is suffixed by .log. The file is stored in a subdirectory of the /var/logs directory or one of its recursive subdirectories. The name of the subdirectory matches the app_* pattern.

    • If you specify /var/log/nginx/**/access*, Simple Log Service collects logs from the log files whose names start with access in the /var/log/nginx directory and its recursive subdirectories.

    Maximum Directory Monitoring Depth

    Specify the maximum number of levels of subdirectories that you want to monitor. The subdirectories are in the log file directory that you specify. This parameter specifies the levels of subdirectories that can be matched by the ** wildcard characters included in the value of File Path. A value of 0 specifies that only the log file directory that you specify is monitored.

    File Encoding

    Select the encoding format of log files.

    First Collection Size

    Specify the size of data that Logtail can collect from a log file the first time Logtail collects logs from the file. Default value: 1024. Unit: KB.

    • If the file is smaller than 1,024 KB, Logtail collects data from the beginning of the file.

    • If the file is larger than 1,024 KB, Logtail collects only the last 1,024 KB of data in the file.

    You can configure First Collection Size based on your business requirements. Valid values: 0 to 10485760. Unit: KB.

    Collection Blacklist

    If you enable Collection Blacklist, configure a blacklist to specify the directories or files that you want Simple Log Service to skip when it collects logs. You can specify exact directories and file names, or use wildcard characters. Only the asterisk (*) and question mark (?) are supported as wildcard characters.

    Important
    • If you use wildcard characters to specify a value for File Path and you want to skip some subdirectories in the specified directory, configure Collection Blacklist and specify the complete subdirectory paths.

      For example, if you set File Path to /home/admin/app*/log/*.log and you want to skip all subdirectories in the /home/admin/app1* directory, select Directory Blacklist and enter /home/admin/app1*/** in the Directory Name field. If you enter /home/admin/app1*, the blacklist does not take effect.

    • Using a blacklist adds computational overhead. We recommend that you add no more than 10 entries to a blacklist.

    • You cannot specify a directory that ends with a forward slash (/). For example, if you specify the /home/admin/dir1/ directory, the directory blacklist does not take effect.

    The following types of blacklists are supported:

    File Path Blacklist

    • If you select File Path Blacklist and enter /home/admin/private*.log in the File Path Name field, all files prefixed by private and suffixed by .log in the /home/admin/ directory are skipped.

    • If you select File Path Blacklist and enter /home/admin/private*/*_inner.log in the File Path Name field, all files suffixed by _inner.log in the subdirectories prefixed by private in the /home/admin/ directory are skipped. For example, the /home/admin/private/app_inner.log file is skipped, but the /home/admin/private/app.log file is not.

    File Blacklist

    If you select File Blacklist and enter app_inner.log in the File Name field, all files whose names are app_inner.log are skipped.

    Directory Blacklist

    • If you select Directory Blacklist and enter /home/admin/dir1 in the Directory Name field, all files in the /home/admin/dir1 directory are skipped.

    • If you select Directory Blacklist and enter /home/admin/dir* in the Directory Name field, all files in the subdirectories prefixed by dir in the /home/admin/ directory are skipped.

    • If you select Directory Blacklist and enter /home/admin/*/dir in the Directory Name field, all files in the dir subdirectory in each second-level subdirectory of the /home/admin/ directory are skipped. For example, the files in the /home/admin/a/dir directory are skipped, but those in the /home/admin/a/b/dir directory are not.

    Allow File to Be Collected Multiple Times

    By default, you can use only one Logtail configuration to collect logs from a log file. If you want to collect multiple copies of logs from a log file, turn on Allow File to Be Collected Multiple Times.

    Advanced Parameters

    Optional. Configure the advanced parameters that are related to input processors. For more information, see CreateLogtailPipelineConfig.

  • Processor configurations are used to transform raw logs into structured data.

    Processor configurations

    Parameter

    Description

    Log Sample

    Add a sample log collected from an actual scenario. Sample logs help you configure log processing parameters. You can add multiple sample logs, but ensure that their total length does not exceed 1,500 characters.

    [2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
        at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
        at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
        at TestPrintStackTrace.main(TestPrintStackTrace.java:16)

    Multi-line Mode

    • Specify the type of multi-line logs. A multi-line log spans multiple consecutive lines. You can configure this parameter to identify each multi-line log in a log file.

      • Custom: A multi-line log is identified based on the value of Regex to Match First Line.

      • Multi-line JSON: Each JSON object is expanded into multiple lines. Example:

        {
          "name": "John Doe",
          "age": 30,
          "address": {
            "city": "New York",
            "country": "USA"
          }
        }
    • Configure Processing Method If Splitting Fails.

      Exception in thread "main" java.lang.NullPointerException
          at com.example.MyClass.methodA(MyClass.java:12)
          at com.example.MyClass.methodB(MyClass.java:34)
          at com.example.MyClass.main(MyClass.java:50)

      For the preceding sample log, Simple Log Service can discard the log or retain each single line as a log if it fails to split it.

      • Discard: The log is discarded.

      • Retain Single Line: Each line of log text is retained as a log. A total of four logs are retained.

    Processing Method

    Add processors as needed. You can add native and extended processors for data processing.

    Important

    Refer to the console page prompts for usage restrictions on the processors.

    • Logtail V2.0

      • You can arbitrarily combine native processors for data processing.

      • You can combine native and extended processors. Extended processors must follow native processors in the sequence.

    • Logtail earlier than V2.0

      • You cannot add native and extended processors at the same time.

      • You can use native processors only to collect text logs. When you add them, note the following:

        • You must first add one of the following Logtail processors: Data Parsing (Regex Mode), Data Parsing (Delimiter Mode), Data Parsing (JSON Mode), Data Parsing (NGINX Mode), Data Parsing (Apache Mode), and Data Parsing (IIS Mode).

        • After you add the first processor, you can add a Time Parsing processor, a Data Filtering processor, and multiple Data Masking processors.

      • When you configure the Retain Original Field if Parsing Fails and Retain Original Field if Parsing Succeeds parameters, you can use only the following parameter combinations. For other combinations, Simple Log Service does not guarantee the configuration results.

        • Upload logs that are parsed.


        • Upload logs that are obtained after successful parsing, and raw ones if the parsing fails.


        • Upload the logs that are obtained after parsing. If the parsing succeeds, a field that stores the raw log is added to each parsed log. If the parsing fails, the raw logs are uploaded.

          For example, if a raw log is "content": "{"request_method":"GET", "request_time":"200"}" and the log is parsed successfully, the system adds a field that stores the raw log. The field name is specified by the New Name of Original Field parameter. If you do not configure that parameter, the original field name is used. The field value is {"request_method":"GET", "request_time":"200"}.


Create a Logtail configuration

  1. Log on to the Simple Log Service console. In the Projects section, click the project that you want to manage.


  2. Find your logstore, and select Data Collection > Logtail Configurations > Add Logtail Configuration. Click Integrate Now. In this example, Regular Expression - Text Log is used, which means that text logs are parsed by using regular expression matching.

  3. Select Servers and ECS. Select the machine group you created earlier, and click the > button to add it to the applied machine group. Then click Next. If no machine group is available, see Create a machine group.

  4. In Global Configurations, enter the configuration name. In Other Global Configurations, set the log topic.

    The log topic configuration items are described as follows. For detailed parameters, see Global configuration parameters.

    • Machine Group Topic: If you select this option, you must configure the topic when you create a machine group.

    • File Path Extraction: If you select this option, you must configure a regular expression to extract the log topic from the log file path.

    • Custom: If you select it, you must enter customized:// + custom topic name to use a custom static log topic.

  5. In Input Configurations, configure the File Path, which is the path from which logs are collected. On Linux, the path must start with a forward slash (/), for example, /data/wwwlogs/main/**/*.Log, which matches files with the .Log suffix in the /data/wwwlogs/main directory and its subdirectories. To set the maximum depth of the log directory to be monitored (that is, the maximum number of directory levels that the ** wildcard in File Path can match), modify the value of Maximum Directory Monitoring Depth. A value of 0 specifies that only the specified log file directory is monitored. For detailed parameters, see Input configuration parameters.

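    Logtail itself evaluates the wildcards on the server. If you want a rough local preview of which files a pattern such as /data/wwwlogs/main/**/*.Log would pick up, the following Python sketch uses the standard glob module. Note that glob's ** semantics are only an approximation: it does not enforce Maximum Directory Monitoring Depth or Logtail's other matching rules.

      import glob

      # Rough local preview of the collection path configured in this step.
      # recursive=True lets ** cross directory levels, similar in spirit to
      # Logtail's ** wildcard (the depth limit is not modeled here).
      pattern = "/data/wwwlogs/main/**/*.Log"
      for path in glob.glob(pattern, recursive=True):
          print(path)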

  6. In Processor Configurations, set Log Sample, Multi-line Mode, and Processing Method.

    1. We recommend that you add a sample log in the Log Sample field. Sample logs can help you easily configure log processing-related parameters. If you configure this field, use a sample log from an actual collection scenario.

    2. Turn on Multi-line Mode as needed. A multi-line log spans multiple consecutive lines. If you turn it off, Simple Log Service collects logs in single-line mode. Each log is placed in a line. If you turn it on, configure the following parameters:

      • Type

        • Custom: If the format of the raw logs is not fixed, configure Regex to Match First Line to identify the first line of each log. For example, you can use the regular expression \[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.* to split the following five lines of raw data into two logs. Note that the value of Regex to Match First Line must match the entire first line. A sketch that reproduces this splitting locally is provided after this list.

          [2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
              at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
              at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
              at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
          [2023-10-01T10:31:01,000] [INFO] java.lang.Exception: exception happened
        • Multi-line JSON: If the raw logs are in standard JSON format, set Type to Multi-line JSON. Logtail automatically processes the line feeds that occur within a JSON-formatted log.

      • Processing Method If Splitting Fails:

        • Discard: Discards the text.

        • Retain Single Line: Saves each line of the text as a log.
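
      The following Python sketch reproduces the splitting behavior locally so that you can confirm that your first-line regular expression matches the entire first line of each log. Logtail performs the actual splitting; this is only a quick check of the regular expression from the example above.

        import re

        # The first-line regex from the example; it must match the whole line.
        first_line = re.compile(r"\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*")

        lines = [
            "[2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened",
            "    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)",
            "    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)",
            "    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)",
            "[2023-10-01T10:31:01,000] [INFO] java.lang.Exception: exception happened",
        ]

        logs = []
        for line in lines:
            if first_line.fullmatch(line):   # a new multi-line log starts here
                logs.append(line)
            elif logs:                       # continuation line of the current log
                logs[-1] += "\n" + line
        print(len(logs))                     # 2, matching the example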

    3. Processors: Set Processing Method to Processors and configure processors to parse logs. In this example, Logtail collects text logs in full regex mode, so a Data Parsing (Regex Mode) processor is automatically generated. You can add other processors as needed.

      The following describes common processors. For more processor capabilities such as time parsing, filtering, and data masking, see Processing plugins. Simple Log Service also provides SPL-based data processing, which features higher processing efficiency while implementing functions similar to traditional processors. For more information, see Use Logtail SPL to parse logs.

      Data Parsing (Regex Mode)

      1. Click Data Parsing (Regex Mode) to enter the processor configuration page.

      2. Configure the regular expression and specify keys for the extracted values. Click Generate under Regular Expression, select content in the log sample, and then click Generate Regular Expression to automatically generate a regular expression for the selected content.


      3. After the regular expression is generated, specify keys based on the extracted values in the Extracted Field. These key-value pairs can be used to create indexes.


      For more information, see Data Parsing (Regex Mode).
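
      The console generates the regular expression for you, but you can verify the result locally. The following Python sketch applies an illustrative regular expression with one capturing group per field to the first line of the sample log; the keys (time, level, message) are example names that you could assign in Extracted Field, not console defaults.

        import re

        line = "[2023-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened"

        # Illustrative regex; the console-generated expression may differ.
        # Each capturing group corresponds to one key in Extracted Field.
        pattern = re.compile(r"\[(\S+)]\s\[(\w+)]\s(.*)")
        keys = ["time", "level", "message"]

        match = pattern.match(line)
        print(dict(zip(keys, match.groups())))
        # {'time': '2023-10-01T10:30:01,000', 'level': 'INFO',
        #  'message': 'java.lang.Exception: exception happened'}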

      Data Parsing (JSON Mode)

      Important

      To process collected JSON logs, add a Data Parsing (JSON Mode) processor. JSON logs can be in object or array format. An object log contains key-value pairs, and an array log has an ordered list of values. The Data Parsing (JSON Mode) processor can parse object-type JSON logs and extract key-value pairs from the first layer. The extracted keys become field names, and the values become field values. The processor cannot parse JSON logs of the array type. For more granular processing, see Extended plugin: expand JSON fields.

      Turn on Multi-line Mode as needed. If you turn it on, perform the following steps:

      1. Set Type to Multi-line JSON.

      2. Set Processing Method If Splitting Fails to Retain Single Line.


      3. Delete Data Parsing (Regex Mode) from the Processing Method list and add Data Parsing (JSON Mode).


      The following table describes the parameters of Data Parsing (JSON Mode):

      Parameter name

      Description

      Original Field

      The original field that stores log content before parsing. Default value: content.

      Retain Original Field if Parsing Fails

      If selected, the original field is retained when parsing fails.

      Retain Original Field if Parsing Succeeds

      If selected, the original field is retained when parsing succeeds.

      New Name of Original Field

      After you select Retain Original Field if Parsing Fails or Retain Original Field if Parsing Succeeds, rename the original field storing the raw log content.

      For more information, see Data Parsing (JSON Mode).
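
      The following Python sketch illustrates what first-layer extraction means for an object-type JSON log, and why an array-type log cannot be parsed by this processor. The actual parsing is performed by the Data Parsing (JSON Mode) processor; the field names here come from the sample log only.

        import json

        # Object-type JSON log: first-layer keys become field names.
        content = '{"request_method": "GET", "request_time": "200", "extra": {"client": "curl"}}'
        fields = json.loads(content)
        print(fields["request_method"])  # GET
        print(fields["extra"])           # nested objects stay as a single field value

        # Array-type JSON log: the top layer has no key-value pairs to extract,
        # so the processor cannot parse it.
        print(isinstance(json.loads('["GET", "200"]'), list))  # True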

      Data Parsing (Delimiter Mode)

      Note

      Use a Data Parsing (Delimiter Mode) processor to parse logs into multiple key-value pairs based on a specific delimiter.

      Delete the Data Parsing (Regex Mode) processor from the Processing Method list and add a Data Parsing (Delimiter Mode) processor.

      The following table describes the parameters of the Data Parsing (Delimiter Mode) processor.

      Parameter

      Description

      Original Field

      The original field that stores log content before parsing. Default value: content.

      Delimiter

      The delimiter based on which you want to extract log fields. Select a delimiter based on the actual log content, such as Vertical Bar (|).

      Note

      If you select Non-printable Character as the delimiter, find the hexadecimal value of the invisible character in the ASCII table and enter it in the format 0x<hexadecimal value of the invisible character in the ASCII table>. For example, the first invisible character in the ASCII table is 0x01.

      Quote

      If a log field contains the delimiter, you must specify a quote to enclose the field content. Simple Log Service parses the content that is enclosed in a pair of quotes as a complete field. Select a quote based on the format of the logs that you want to collect.

      Note

      If you select Non-printable Character as the quote, find the hexadecimal value of the invisible character in the ASCII table and enter it in the format 0x<hexadecimal value of the invisible character in the ASCII table>. For example, the first invisible character in the ASCII table is 0x01.

      Extracted Field

      • If you configure a log sample, Simple Log Service extracts log content based on the log sample and the delimiter you select, and defines the content as values. Specify a key for each value.

      • If you do not specify a sample log, the Value column is unavailable. Specify keys based on the actual logs and the delimiter.

      A key can contain only letters, digits, and underscores (_) and must start with a letter or an underscore (_). A key can be up to 128 bytes in length.

      Allow Missing Field

      Specifies whether to upload logs to Simple Log Service if the number of values extracted from logs is less than the number of keys. If you select Allow Missing Field, the logs are uploaded.

      For example, if the log is 11|22|33|44, the delimiter is the vertical bar (|), and the keys are A, B, C, D, and E.

      • If you select Allow Missing Field, the value of the E field is empty, and the log is uploaded to Simple Log Service.

      • Otherwise, the log is discarded.

        Note

        Linux Logtail 1.0.28 and later or Windows Logtail 1.0.28.0 and later support the Allow Missing Field parameter for delimiter mode configuration.

      Processing Method of Field to which Excess Part is Assigned

      The method for handling extra values when the number of extracted values exceeds the specified keys. Valid values:

      • Expand: Retains the excess values and adds them to fields in the __column$i__ format, where $i represents the sequence number of the excess field, starting from 0. Examples: __column0__ and __column1__.

      • Retain: Retains the excess values and adds them to a field named __column0__.

      • Discard: Discards the excess values.

      Retain Original Field if Parsing Fails

      Retains the original field when parsing fails.

      Retain Original Field if Parsing Succeeds

      Retains the original field when parsing succeeds.

      New Name of Original Field

      After you select Retain Original Field if Parsing Fails or Retain Original Field if Parsing Succeeds, rename the original field storing the raw log content.

      For more information, see Data Parsing (Delimiter Mode).
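
      The following Python sketch mimics the 11|22|33|44 example above to show how Allow Missing Field behaves when fewer values than keys are extracted. Quote handling and the processing of excess values are performed by the processor itself and are not modeled here.

        keys = ["A", "B", "C", "D", "E"]
        log = "11|22|33|44"

        values = log.split("|")              # delimiter: vertical bar (|)
        if len(values) < len(keys):
            # With Allow Missing Field selected, the missing keys get empty
            # values and the log is still uploaded; otherwise it is discarded.
            values += [""] * (len(keys) - len(values))
        print(dict(zip(keys, values)))
        # {'A': '11', 'B': '22', 'C': '33', 'D': '44', 'E': ''}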

      Data Parsing (Apache Mode)

      Note

      Use a Data Parsing (Apache Mode) processor to parse Apache logs into structured data based on the log format that you specify in the Apache configuration file. A log is parsed into multiple key-value pairs.

      Procedure

      Delete the Data Parsing (Regex Mode) processor from the Processing Method list and add a Data Parsing (Apache Mode) processor.


      The following table describes the parameters of the Data Parsing (Apache Mode) processor.

      Parameter name

      Description

      Log Format

      Select the log format defined in the Apache configuration file, such as common, combined, or custom.

      APACHE LogFormat Configuration

      The log configuration section specified in the Apache configuration file. In most cases, the section starts with LogFormat.

      • When you set Log Format to common or combined, the corresponding configuration fields are automatically filled in. Confirm that the format is consistent with the format defined in the Apache configuration file.

      • When you set Log Format to Custom, fill in this field as needed, for example, LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D %f %k %p %q %R %T %I %O" customized.

      Original Field

      The original field storing the log content before parsing. Default value: content.

      Regular Expression

      The regular expression extracting Apache logs. Simple Log Service automatically generates this regular expression based on the content in the APACHE LogFormat Configuration.

      Extracted Field

      Automatically generates log fields (keys) based on the content in the APACHE LogFormat Configuration.

      Retain Original Field if Parsing Fails

      Retains the original field when parsing fails.

      Retain Original Field if Parsing Succeeds

      Retains the original field when parsing succeeds.

      New Name of Original Field

      After you select Retain Original Field if Parsing Fails or Retain Original Field if Parsing Succeeds, rename the original field storing the raw log content.

      For more information, see Data Parsing (Apache Mode).
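
      As a rough local illustration of the parsing result, the following Python sketch applies a hand-written regular expression for the combined LogFormat (%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i") to a sample access log line. The expression and field names are illustrative; the regular expression that Simple Log Service generates from the APACHE LogFormat Configuration may differ in detail.

        import re

        combined = re.compile(
            r'(?P<remote_addr>\S+) (?P<ident>\S+) (?P<remote_user>\S+) '
            r'\[(?P<time_local>[^\]]+)\] "(?P<request>[^"]*)" '
            r'(?P<status>\d+) (?P<body_bytes_sent>\S+) '
            r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
        )

        line = ('127.0.0.1 - frank [10/Oct/2023:13:55:36 +0800] '
                '"GET /index.html HTTP/1.1" 200 2326 '
                '"http://example.com/start.html" "Mozilla/5.0"')
        print(combined.match(line).groupdict())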

      Data Parsing (NGINX Mode)

      Note

      Use a Data Parsing (NGINX Mode) processor to parse NGINX logs into structured data based on the log format that you specify in the NGINX configuration file. A log is parsed into multiple key-value pairs.

      Delete the Data Parsing (Regex Mode) processor from the Processing Method list, then add a Data Parsing (NGINX Mode) processor.


      The following table describes the parameters of the Data Parsing (NGINX Mode) processor.

      Parameter name

      Description

      NGINX Log Configuration

      The log configuration section in the Nginx configuration file starts with log_format. For example:

      log_format main  '$remote_addr - $remote_user [$time_local] "$request" '
                       '$request_time $request_length '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent"';

      For more information, see Introduction to Nginx logs.

      Original Field

      The original field that stores the log content before parsing. Default value: content.

      Regular Expression

      The regular expression that is used to extract NGINX logs. Simple Log Service automatically generates this regular expression based on the content in NGINX Log Configuration.

      Extracted Field

      Automatically extracts the corresponding log fields (keys) based on the NGINX Log Configuration.

      Retain Original Field if Parsing Fails

      Retains the original field when parsing fails.

      Retain Original Field if Parsing Succeeds

      Retains the original field when parsing succeeds.

      New Name of Original Field

      After you select Retain Original Field if Parsing Fails or Retain Original Field if Parsing Succeeds, rename the original field storing the raw log content.

      For more information, see Data Parsing (NGINX Mode).
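
      The following Python sketch shows, for the log_format main definition above, roughly how one access log line is parsed into key-value pairs. The regular expression is hand-written for illustration; the expression that Simple Log Service generates from NGINX Log Configuration may differ.

        import re

        nginx = re.compile(
            r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
            r'"(?P<request>[^"]*)" (?P<request_time>\S+) (?P<request_length>\S+) '
            r'(?P<status>\d+) (?P<body_bytes_sent>\d+) "(?P<http_referer>[^"]*)" '
            r'"(?P<http_user_agent>[^"]*)"'
        )

        line = ('192.168.1.2 - - [10/Oct/2023:13:55:36 +0800] "GET /index.html HTTP/1.1" '
                '0.005 1024 200 612 "-" "curl/8.0.1"')
        print(nginx.match(line).groupdict())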

      Data Parsing (IIS Mode)

      Note

      Use a Data Parsing (IIS Mode) processor to parse IIS logs into structured data based on the log format that you specify in the IIS configuration file. A log is parsed into multiple key-value pairs.

      Delete the Data Parsing (Regex Mode) processor from the Processing Method list and add a Data Parsing (IIS Mode) processor.

      The following table describes the parameters of the Data Parsing (IIS Mode) processor.

      Parameter name

      Description

      Log Format

      Select the log format used by your IIS server logs.

      • IIS: Microsoft IIS log file format.

      • NCSA: NCSA common log file format.

      • W3C: W3C extended log file format.

      IIS Configuration Fields

      The IIS configuration fields:

      • If you set Log Format to IIS or NCSA, the system automatically specifies the IIS configuration fields.

      • If you set Log Format to W3C, specify the content of the logExtFileFlags parameter in the IIS configuration file. Example:

        logExtFileFlags="Date, Time, ClientIP, UserName, SiteName, ComputerName, ServerIP, Method, UriStem, UriQuery, HttpStatus, Win32Status, BytesSent, BytesRecv, TimeTaken, ServerPort, UserAgent, Cookie, Referer, ProtocolVersion, Host, HttpSubStatus"
        • Default path of the IIS5 configuration file: C:\WINNT\system32\inetsrv\MetaBase.bin.

        • Default path of the IIS6 configuration file: C:\WINDOWS\system32\inetsrv\MetaBase.xml.

        • Default path of the IIS7 configuration file: C:\Windows\System32\inetsrv\config\applicationHost.config.

      Original Field

      The original field that stores the log content before parsing. Default value: content.

      Regular Expression

      The regular expression that is used to extract IIS logs. Simple Log Service automatically generates this regular expression based on the content in the IIS Configuration Fields.

      Extracted Field

      Automatically generates log fields (keys) based on the content in the IIS Configuration Fields.

      Retain Original Field if Parsing Fails

      Retains the original field when parsing fails.

      Retain Original Field if Parsing Succeeds

      Retains the original field when parsing succeeds.

      New Name of Original Field

      After you select Retain Original Field if Parsing Fails or Retain Original Field if Parsing Succeeds, rename the original field storing the raw log content.

      For more information, see Data Parsing (IIS Mode).
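
      For W3C extended logs, the field order is defined by logExtFileFlags (or by the #Fields header in the log file), and each log line is a space-separated list of values. The following Python sketch shows that mapping for a shortened, hypothetical field list; the processor performs this mapping automatically based on the IIS Configuration Fields.

        # Hypothetical field order; in practice it comes from logExtFileFlags
        # or the #Fields header of the W3C log file.
        fields = ["date", "time", "c-ip", "cs-method", "cs-uri-stem", "sc-status"]

        line = "2023-10-01 10:30:01 192.168.1.2 GET /index.htm 200"
        print(dict(zip(fields, line.split(" "))))
        # {'date': '2023-10-01', 'time': '10:30:01', 'c-ip': '192.168.1.2',
        #  'cs-method': 'GET', 'cs-uri-stem': '/index.htm', 'sc-status': '200'}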

      SPL-based data processing

      Simple Log Service offers custom SPL-based data processing. Compared with traditional processors, SPL-based processing is faster, more efficient, and easier to use, and lets you process data by using SPL statements and their computing capabilities. For more information, see Use Logtail SPL to parse logs.

View a Logtail configuration

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the one you want to manage.


  3. On the Log Storage > Logstores tab, click the > icon in front of the target logstore, then choose Data Collection > Logtail Configuration.

  4. Click the target Logtail configuration to view its details.

Modify a Logtail configuration

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the one you want to manage.


  3. On the Log Storage > Logstores tab, click the > icon in front of the target logstore, then choose Data Collection > Logtail Configuration.

  4. In the Logtail Configuration list, click the target Logtail configuration.

  5. On the Logtail Configuration page, click Edit.

  6. Modify the configuration and click Save.

    For more information, see Overview of Logtail configurations.

Delete a Logtail configuration

  1. In the Logtail Configuration list, select the target Logtail configuration and click Delete in the Actions column.

  2. In the Delete dialog box, click OK.

    After the Logtail configuration is deleted, it is detached from the machine group, and Logtail stops collecting the logs based on the configuration.

    Note

    To delete a logstore, you must first delete all Logtail configurations associated with it.