This topic describes how to use Logtail to collect the logs of an Alibaba Cloud Elastic Compute Service (ECS) instance in full regex mode. This topic also describes how to query and analyze the collected logs.

Prerequisites

  • An ECS instance is available. For more information, see ECS quick start.
  • The ECS instance continuously generates logs.
    Important Logtail collects only incremental logs. If a log file on the server is not updated after the Logtail configuration is applied and delivered to the server, Logtail does not collect logs from the file. For more information, see Read log files.

Background information

In this example, the logs are stored in the /var/log/nginx/access.log file. The following sample log is used:

127.0.0.1 - - [10/Jun/2022:12:36:49 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"

The full regex mode is used to collect logs based on this sample log. For more information, see Collect logs in full regex mode.
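If the ECS instance does not already generate NGINX access logs, you can append test lines to the file yourself so that Logtail has incremental content to collect. The following is a minimal sketch, assuming the log path and log format shown above; it is only for testing and requires write permission on the file:

  # append_test_log.py: append one NGINX-style access log line for testing
  from datetime import datetime

  LOG_FILE = "/var/log/nginx/access.log"  # path used in this topic

  # Build a line in the same format as the sample log above.
  now = datetime.now().astimezone().strftime("%d/%b/%Y:%H:%M:%S %z")
  line = (
      f'127.0.0.1 - - [{now}] "GET /index.html HTTP/1.1" 200 612 "-" '
      '"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 '
      '(KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"\n'
  )

  with open(LOG_FILE, "a") as f:  # append so that Logtail sees new content
      f.write(line)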

Step 1: Activate Log Service

  1. Log on to the Log Service console.
  2. Follow the on-screen instructions to activate Log Service.
    For more information about the billing of Log Service, see Billing overview.

Step 2: Create a project and a Logstore

  1. Create a project.
    1. In the Projects section, click Create Project.
    2. In the Create Project panel, configure the following parameters. For other parameters, retain the default settings. For more information, see Create a project.
      • Project Name: The name of the project. The name must be unique within your Alibaba Cloud account. After the project is created, you cannot change the name of the project.
      • Region: The region of the data center for the project. We recommend that you select the region where the ECS instance resides. This way, you can use an internal network of Alibaba Cloud to accelerate log collection. After the project is created, you cannot change the region or migrate the project to another region.

    3. Click OK.
  2. Create a Logstore.
    After the project is created, you are prompted to create a Logstore.

    In the Create Logstore panel, configure the following parameters. For other parameters, retain the default settings. For more information, see Create a Logstore.

    • Logstore Name: The name of the Logstore. The name must be unique in the project to which the Logstore belongs. After the Logstore is created, you cannot change the name of the Logstore.
    • Shards: The number of shards that you want to use. Log Service provides shards that can be used to read and write data. Each shard supports a write capacity of 5 MB/s and 500 writes/s and a read capacity of 10 MB/s and 100 reads/s. If one shard can meet your business requirements, you can set Shards to 1.
    • Automatic Sharding: Specifies whether to enable the automatic sharding feature. If you turn on Automatic Sharding, Log Service automatically increases the number of shards when the specified number of shards cannot meet your write requirements. If the specified number of shards can meet your business requirements, you can turn off Automatic Sharding.

Step 3: Collect logs

After the Logstore is created, you are prompted to collect data.

Important By default, you can use only one Logtail configuration to collect logs from a log file. For more information about how to use multiple Logtail configurations to collect logs from a log file, see What do I do if I want to use multiple Logtail configurations to collect logs from a log file?
  1. In the Created dialog box, click OK.
  2. In the Import Data dialog box, click the On-premises Open Source/Commercial Software tab. Then, click RegEx - Text Log.
  3. Install Logtail.
    1. On the ECS Instances tab, select the ECS instance and click Execute Now.
    2. Confirm that the value of Execution Status is Success. Then, click Complete Installation.
  4. Create an IP address-based machine group and click Next.
    Configure the following parameters and retain the default settings for other parameters. For more information, see Create an IP address-based machine group.
    • Name: The name of the machine group. The name must be unique in the current project. After the machine group is created, you cannot change the name of the machine group.
    • IP Addresses: The IP address of the ECS instance. If you enter multiple IP addresses, separate them with line feeds.
      Important Windows and Linux servers cannot be added to the same machine group.
  5. Select the new machine group from Source Server Groups and move the machine group to Applied Server Groups. Then, click Next.
    Important If you apply a machine group immediately after you create the machine group, the heartbeat status of the machine group may be FAIL. This issue occurs because the machine group is not connected to Log Service. To resolve this issue, you can click Automatic Retry. If the issue persists, see What do I do if no heartbeat connections are detected on Logtail?
  6. Create a Logtail configuration and click Next.
    Configure the following parameters and retain the default settings for other parameters. For more information, see Collect logs in full regex mode.
    • Config Name: The name of the Logtail configuration. The name must be unique in the current project. After the Logtail configuration is created, you cannot change the name of the Logtail configuration.
    • Log Path: Specify the directory and name of log files based on the location of the logs on the server.
      • If you specify a log path in a Linux operating system, the path must start with a forward slash (/). Example: /apsara/nuwa/.../app.Log.
      • If you specify a log path in a Windows operating system, the path must start with a drive letter. Example: C:\Program Files\Intel\...\*.Log.
      You can specify an exact directory and an exact name. You can also use wildcards to specify the directory and name. For more information, see Wildcard matching. Log Service scans all levels of the specified directory for the log files that match the specified conditions. Examples:
      • If you specify /apsara/nuwa/**/*.log, Log Service collects logs from the log files whose names are suffixed by .log in the /apsara/nuwa directory and the recursive subdirectories of the directory.
      • If you specify /var/logs/app_*/*.log, Log Service collects logs from the log files that meet the following conditions: The file name is suffixed by .log. The file is stored in a subdirectory under the /var/logs directory or in a recursive subdirectory of the subdirectory. The name of the subdirectory matches the app_* pattern.
      • If you specify /var/log/nginx/**/access*, Log Service collects logs from the log files whose names start with access in the /var/log/nginx directory and the recursive subdirectories of the directory.
      Note When you configure this parameter, you can use only asterisks (*) or question marks (?) as wildcards.
      • You can use an asterisk (*) to match multiple characters.
      • You can use a question mark (?) to match a single character.
    • Log Sample: A valid sample log that is collected from an actual scenario. Example:
      127.0.0.1 - - [10/Jun/2022:12:36:49 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
    • Extract Field: If you turn on Extract Field, Log Service can extract key-value pairs by using a regular expression.
    • RegEx: If you turn on Extract Field, you must configure this parameter.
      • Automatic generation: In the Log Sample field, select the content that you want to extract and click Generate Regular Expression. A regular expression is automatically generated. Example: (\S+)\s-\s(\S+)\s\[([^]]+)]\s"(\w+)([^"]+)"\s(\d+)\s(\d+)[^-]+([^"]+)"\s"([^"]+).*.
      • Manual configuration: Click Manual to specify a regular expression. Then, click Validate to check whether the regular expression can be used to parse logs or extract content from logs. For more information, see How do I test a regular expression?
    • Extracted Content: If you turn on Extract Field, you must configure this parameter. After log content is extracted as values by using the regular expression, you must specify a key for each value. A local sanity check of the generated regular expression is sketched after this procedure.

    After you configure the parameters, click Next. Then, Log Service starts to collect logs.
  7. Preview data, configure indexes, and then click Next.
    By default, full-text indexing is enabled for Log Service. You can also configure field indexes based on collected logs in manual mode or automatic mode. To configure field indexes in automatic mode, click Automatic Index Generation. This way, Log Service automatically creates field indexes. For more information, see Create indexes.
    Important If you want to query and analyze logs, you must enable full-text indexing or field indexing. If you enable both full-text indexing and field indexing, the system uses only field indexes.
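Before you move on, you can optionally verify the extraction regular expression against the sample log on your own machine. The following is a minimal sketch that uses the automatically generated regular expression from step 6; the field names are illustrative assumptions, and the keys that you actually assign under Extracted Content can differ:

  # regex_check.py: local sanity check of the full regex used in this topic
  import re

  SAMPLE = (
      '127.0.0.1 - - [10/Jun/2022:12:36:49 +0800] "GET /index.html HTTP/1.1" '
      '200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) '
      'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"'
  )

  # Regular expression generated from the sample log in step 6.
  PATTERN = r'(\S+)\s-\s(\S+)\s\[([^]]+)]\s"(\w+)([^"]+)"\s(\d+)\s(\d+)[^-]+([^"]+)"\s"([^"]+).*'

  # Illustrative keys; replace them with the keys that you configure under Extracted Content.
  KEYS = ["remote_addr", "remote_user", "time_local", "request_method",
          "request", "status", "body_bytes_sent", "http_referer", "http_user_agent"]

  match = re.match(PATTERN, SAMPLE)
  if match:
      for key, value in zip(KEYS, match.groups()):
          print(f"{key}: {value.strip()}")
  else:
      print("The regular expression does not match the sample log.")

If the script prints nine key-value pairs, the regular expression parses the sample log as expected.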

Step 4: Query and analyze logs

After you configure indexes, you can query and analyze logs.

  1. In the End step of the configuration wizard, click Log Query.
    You must wait approximately 1 minute for the indexes to take effect. Then, you can view the collected logs on the Raw Logs tab. For more information, see Query and analyze logs.
  2. On the query and analysis page of the Logstore that you specify, enter a query statement, and select a time range.
    For example, you can execute the following query statement to count the number of requests that correspond to each status code. The query and analysis results are displayed in a table.
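    The exact statement depends on the keys that you assign in the Extracted Content parameter. A minimal example, assuming that the status code is extracted into a field named status and that a field index with analytics enabled exists for the field:

    * | SELECT status, COUNT(*) AS request_count GROUP BY status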

FAQ

  • Am I charged if I only create projects and Logstores?

    Log Service provides shards that can be used to read and write data. By default, shard resources are reserved when you create a Logstore. You are charged for active shards. For more information, see Why am I charged for active shards?

  • What do I do if logs fail to be collected?

    When you use Logtail to collect logs, a failure may occur due to Logtail heartbeat failures, collection errors, or invalid Logtail configurations. For more information, see What do I do if errors occur when I use Logtail to collect logs?

  • What do I do if I can query logs but cannot analyze logs on the query and analysis page of a Logstore?

    If you want to analyze logs, you must configure indexes for log fields and turn on the switches in the Enable Analytics column. For more information, see Create indexes.