This topic describes how to collect JSON logs and configure indexes. You can specify the required settings in the Log Service console.

Background information

JSON logs can be written in the following two types of structures:
  • Object: a collection of key-value pairs.
  • Array: an ordered list of values.

Logtail can parse JSON objects from logs and extract the keys and values from the first layer of an object. Each extracted key is used as a field name, and the corresponding value is used as the field value. Valid data types of field values include objects, arrays, and elementary data types such as strings and numbers.

Logtail cannot automatically parse JSON arrays from logs. To collect such logs, you can use the simple mode or the full regex mode. For more information, see Collect logs in the simple mode or Use the full regex mode to collect logs.

JSON logs are separated by \n. Each line contains only one log entry.

The following examples list some JSON log entries:

{"url": "POST /PutData? Category=YunOsAccountOpLog&AccessKeyId=U0Ujpek********&Date=Fri%2C%2028%20Jun%202013%2006%3A53%3A30%20GMT&Topic=raw&Signature=pD12XYLmGxKQ%2Bmkd6x7hAgQ7b1c%3D HTTP/1.1", "ip": "10.200.98.220", "user-agent": "aliyun-sdk-java", "request": {"status": "200", "latency": "18204"}, "time": "05/May/2016:13:30:28"}
{"url": "POST /PutData? Category=YunOsAccountOpLog&AccessKeyId=U0Ujpek********&Date=Fri%2C%2028%20Jun%202013%2006%3A53%3A30%20GMT&Topic=raw&Signature=pD12XYLmGxKQ%2Bmkd6x7hAgQ7b1c%3D HTTP/1.1", "ip": "10.200.98.210", "user-agent": "aliyun-sdk-java", "request": {"status": "200", "latency": "10204"}, "time": "05/May/2016:13:30:29"}

Procedure

  1. Log on to the Log Service console.
  2. In the Import Data section, select JSON - Text Log.
  3. In the Specify Logstore step, select the target project and Logstore, and click Next.
    You can also click Create Now to create a project and a Logstore. For more information, see Step 1: Create a project and a Logstore.
  4. In the Create Machine Group step, create a machine group.
    • If a machine group is available, click Use Existing Machine Groups.
    • This section uses ECS instances as an example to describe how to create a machine group. To create a machine group, perform the following steps:
      1. Install Logtail on ECS instances. For more information, see Install Logtail on ECS instances.

        If Logtail is already installed on the ECS instances, click Complete Installation.

        Note If you need to collect logs from user-created clusters or servers of third-party cloud service providers, you must install Logtail on these servers. For more information, see Install Logtail in Linux or Install Logtail in Windows.
      2. After the installation is complete, click Complete Installation.
      3. On the page that appears, specify the parameters for the machine group. For more information, see Create an IP address-based machine group or Create a custom ID-based machine group.
  5. In the Machine Group Settings step, apply the configurations to the machine group.
    Select the created machine group and move the group from Source Server Groups to Applied Server Groups.
  6. In the Logtail Config step, create a Logtail configuration.
    The following descriptions explain the parameters.
    Config Name: The name of the Logtail configuration. The name cannot be modified after the Logtail configuration is created.

    You can also click Import Other Configuration to import Logtail configurations from other projects.

    Log Path: The path and names of the log files to collect.
    The file names can be complete names or names that contain wildcards. For more information, see Wildcard matching. Log files in all levels of subdirectories under the specified directory are monitored if they match the specified pattern. Examples (a short wildcard illustration follows the note below):
    • /apsara/nuwa/ … /*.log: files whose extension is .log in the /apsara/nuwa directory and its subdirectories are monitored.
    • /var/logs/app_* … /*.log*: each monitored file meets all of the following conditions: the file name contains .log, the file is stored in a subdirectory (at any level) of the /var/logs directory, and the name of the subdirectory matches the app_* pattern.
    Note
    • Each log file can be collected by using only one Logtail configuration.
    • You can include only asterisks (*) and question marks (?) as wildcard characters in the log path.
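
    For illustration only, the following Python sketch uses the standard fnmatch module (not Logtail's own matcher) to show how the asterisk (*) and question mark (?) wildcards behave on hypothetical names:

    from fnmatch import fnmatch

    # Hypothetical file and directory names; * matches any run of
    # characters, ? matches exactly one character.
    print(fnmatch("service.log", "*.log"))    # True: matches the *.log pattern
    print(fnmatch("error.log.1", "*.log*"))   # True: *.log* also matches rotated files
    print(fnmatch("app_web", "app_*"))        # True: the subdirectory matches app_*
    print(fnmatch("app1.log", "app?.log"))    # True: ? matches the single character 1
    print(fnmatch("access.txt", "*.log"))     # False: the extension does not match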
    Blacklist: If this switch is turned on, you can configure a blacklist in the Add Blacklist section. A blacklist skips the specified directories or files during log collection. The names of the specified directories and files support exact match and wildcard match. Examples:
    • If you select Filter by Directory from the Filter Type drop-down list and enter /tmp/mydir in the Content column, all files in the directory are skipped.
    • If you select Filter by File from the Filter Type drop-down list and enter /tmp/mydir/file in the Content column, only the specified file is skipped.
    Docker File: Specifies whether the log file is in a Docker container. If it is, you can directly specify the log path and container tags. Logtail automatically monitors the creation and destruction of containers, and collects log entries from the specified containers based on the tags. For more information about container text logs, see Use the console to collect Kubernetes text logs in the DaemonSet mode.
    Mode: The default value is JSON Mode. You can select other modes. For more information about how to configure other modes, see Overview.
    Use System Time
    • If you turn on the Use System Time switch, the current system time of the server where Logtail is installed is used as the log time.
    • If you turn off the Use System Time switch, you must specify a key and format for the time field. For more information about how to configure the time format, see Time formats. A brief parsing example follows.
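
    For example, the sample entries in this topic carry a time field such as 05/May/2016:13:30:28. The following minimal Python sketch parses that value, assuming strftime-style format specifiers (an assumption for illustration; see the Time formats topic for the exact syntax that Logtail accepts):

    from datetime import datetime

    # Parse the time field of the sample entries with the format
    # %d/%b/%Y:%H:%M:%S (day/abbreviated month/year:hour:minute:second).
    t = datetime.strptime("05/May/2016:13:30:28", "%d/%b/%Y:%H:%M:%S")
    print(t)  # 2016-05-05 13:30:28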
    Drop Failed to Parse Logs
    • If you enable the Drop Failed to Parse Logs feature, logs that fail to be parsed are not uploaded to Log Service.
    • If you disable this feature, raw logs that fail to be parsed are uploaded to Log Service as they are.
    Maximum Directory Monitoring Depth: The maximum number of directory layers that can be recursively monitored during log collection. Valid values: 0 to 1000. The value 0 indicates that only the directory specified in the Log Path parameter is monitored.
    You can configure advanced options based on your business requirements. We recommend that you do not modify the settings. The following descriptions explain the parameters in the advanced options.
    Enable Plug-in Processing: Specifies whether to enable the plug-in processing feature. If you enable this feature, plug-ins are used to process logs. For more information, see Process data.
    Upload Raw Log: Specifies whether to upload raw logs. If you enable this feature, raw logs are written to the __raw__ field and uploaded together with the parsed logs.
    Topic Generation Mode
    • Null - Do not generate topic: This mode is selected by default. In this mode, the topic field is set to an empty string. You can query logs without the need to enter a topic.
    • Machine Group Topic Attributes: This mode is used to differentiate logs that are generated by different servers.
    • File Path Regex: In this mode, you must configure a regular expression in the Custom RegEx field. The part of a log path that matches the regular expression is used as the topic name. This mode is used to differentiate logs that are generated by different users or instances.
    Log File Encoding
    • utf8: indicates that UTF-8 encoding is used.
    • gbk: indicates that GBK encoding is used.
    Timezone: The time zone where logs are collected. Valid values:
    • System Timezone: This option is selected by default. It indicates that the time zone where logs are collected is the same as the time zone to which the server belongs.
    • Custom: Select a time zone.
    Timeout: The timeout period of log files. If a log file is not updated within the specified period, Logtail considers the file to be timed out. Valid values:
    • Never: All log files are continuously monitored and never time out.
    • 30 Minute Timeout: If a log file is not updated within 30 minutes, Logtail considers the file to be timed out and no longer monitors the file.

      If you select 30 Minute Timeout, you must specify the Maximum Timeout Directory Depth parameter. Valid values: 1 to 3.

    Filter Configuration: The filter conditions that are used to collect logs. Only logs that match the specified filter conditions are collected. Examples (a short verification sketch follows the references below):
    • Collect logs that meet a condition: Set the filter condition to Key:level Regex:WARNING|ERROR if you need to collect only logs of the WARNING or ERROR severity level.
    • Filter out logs that meet a condition:
      • Set the filter condition to Key:level Regex:^(?!.*(INFO|DEBUG)).* if you need to filter out logs of the INFO or DEBUG severity level.
      • Set the filter condition to Key:url Regex:^(?!.*healthcheck).* if you need to filter out logs whose URL contains the keyword healthcheck. For example, logs in which the value of the url key is /inner/healthcheck/jiankong.html are not collected.

    For more examples, visit regex-exclude-word and regex-exclude-pattern.
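
    For illustration only, the following Python sketch checks the filter regular expressions above against sample field values. Python's re module with full-match semantics is used here as an approximation; it is not Logtail's matching engine.

    import re

    # The three filter conditions from the examples above.
    collect_levels = re.compile(r"WARNING|ERROR")
    exclude_levels = re.compile(r"^(?!.*(INFO|DEBUG)).*")
    exclude_health = re.compile(r"^(?!.*healthcheck).*")

    # Collect only logs of the WARNING or ERROR severity level.
    print(bool(collect_levels.fullmatch("ERROR")))    # True: collected
    print(bool(collect_levels.fullmatch("INFO")))     # False: not collected

    # Filter out logs of the INFO or DEBUG severity level.
    print(bool(exclude_levels.fullmatch("WARNING")))  # True: kept
    print(bool(exclude_levels.fullmatch("DEBUG")))    # False: filtered out

    # Filter out URLs that contain the keyword healthcheck.
    print(bool(exclude_health.fullmatch("/inner/healthcheck/jiankong.html")))  # False: filtered out
    print(bool(exclude_health.fullmatch("/api/orders")))                       # True: kept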

  7. In the Configure Query and Analysis step, configure the indexes.
    Indexes are configured by default. You can re-configure the indexes based on your business requirements. For more information, see Enable and configure the index feature for a Logstore.
    Note
    • You must configure Full Text Index or Field Search. If you configure both of them, the settings of Field Search are applied.
    • If the data type of an index field is long or double, the Case Sensitive and Delimiter settings are unavailable.

After all configurations are completed, Log Service starts to collect logs.