This topic describes how to collect the logs of an Alibaba Cloud Elastic Compute Service (ECS) instance in the Log Service console. This topic also describes how to query and analyze the collected logs.

Prerequisites

  • An ECS instance is available. For more information, see ECS quick start.
  • Logs are available on the ECS instance.

Background information

In this example, the logs are stored in the /var/log/nginx/access.log file, and the sample log is 127.0.0.1|#|-|#|13/Apr/2020:09:44:41 +0800|#|GET /1 HTTP/1.1|#|0.000|#|74|#|404|#|3650|#|-|#|curl/7.29.0. The delimiter mode is used to collect the sample log. For more information, see Collect logs in delimiter mode.
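
The following Python sketch illustrates what delimiter-mode parsing does to the sample log. It is only a local illustration of the concept: Logtail performs the actual parsing based on the configuration that you create in Step 3, and the field names (keys) below are assumptions made for this example.

  # Local illustration of delimiter-mode parsing. Logtail performs the real
  # parsing; the keys below are example field names, not a predefined schema.
  SAMPLE_LOG = ("127.0.0.1|#|-|#|13/Apr/2020:09:44:41 +0800|#|GET /1 HTTP/1.1"
                "|#|0.000|#|74|#|404|#|3650|#|-|#|curl/7.29.0")
  DELIMITER = "|#|"

  # One key per extracted value, in the order in which the values appear.
  KEYS = ["remote_addr", "remote_user", "time_local", "request",
          "request_time", "request_length", "status", "body_bytes_sent",
          "http_referer", "http_user_agent"]

  values = SAMPLE_LOG.split(DELIMITER)
  record = dict(zip(KEYS, values))
  for key, value in record.items():
      print(f"{key}: {value}")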

Step 1: Activate Log Service

  1. Log on to the Log Service console.
  2. Activate Log Service as prompted.

Step 2: Create a project and a Logstore

  1. Log on to the Log Service console.
  2. Create a project.
    1. In the Projects section, click Create Project.
    2. In the Create Project panel, configure the following parameters and use the default settings for the other parameters.
      • Project Name: The name of the project. The name must be unique in a region. After the project is created, you cannot change its name.
      • Region: The region of the data center that hosts the project. We recommend that you select the region where your ECS instance resides. This way, Log Service can collect logs over the Alibaba Cloud internal network, which accelerates collection. After the project is created, you cannot change the region or migrate the project to another region.

    3. Click OK.
  3. Create a Logstore.
    After the project is created, you are prompted to create a Logstore.

    In the Create Logstore panel, configure the following parameters and use the default settings for the other parameters.

    • Logstore Name: The name of the Logstore. The name must be unique in the project to which the Logstore belongs. After the Logstore is created, you cannot change its name.
    • Shards: The number of shards. Log Service uses shards to read and write data. Each shard supports a write speed of 5 MB/s with 500 write operations per second, and a read speed of 10 MB/s with 100 read operations per second. If one shard can meet your business requirements, set this parameter to 1.
    • Automatic Sharding: Specifies whether to enable the automatic sharding feature. If you turn on Automatic Sharding and the read or write capacity of the existing shards cannot meet your business requirements, Log Service automatically increases the number of shards. If the specified number of shards can meet your business requirements, you can turn off Automatic Sharding.
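
If you prefer to create the project and the Logstore programmatically instead of in the console, the following minimal sketch uses the aliyun-log-python-sdk (the Log Service SDK for Python). The endpoint, credentials, and resource names are placeholders, and the sketch assumes that the LogClient, create_project, and create_logstore calls behave as in recent SDK versions; verify the parameters against the SDK reference before you use the code.

  # Minimal sketch: create a project and a Logstore with the Log Service
  # Python SDK. Endpoint, credentials, and names are placeholders.
  from aliyun.log import LogClient

  ENDPOINT = "cn-hangzhou.log.aliyuncs.com"      # region endpoint (placeholder)
  ACCESS_KEY_ID = "your-access-key-id"           # placeholder credential
  ACCESS_KEY_SECRET = "your-access-key-secret"   # placeholder credential

  client = LogClient(ENDPOINT, ACCESS_KEY_ID, ACCESS_KEY_SECRET)

  # Project names must be unique in a region and cannot be changed later.
  client.create_project("my-nginx-project", "Tutorial project for ECS logs")

  # One shard (5 MB/s write, 10 MB/s read) is enough for this tutorial.
  # ttl is the retention period of the logs, in days.
  client.create_logstore("my-nginx-project", "nginx-access-log",
                         ttl=30, shard_count=1)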

Step 3: Collect logs

After the Logstore is created, you are prompted to import data.

  1. In the Created dialog box, click OK.
  2. In the Import Data dialog box, click Delimiter Mode - Text Log.
  3. Install Logtail.
    1. On the ECS Instances tab, select the ECS instance from which you want to collect logs and click Execute Now.
    2. Make sure that the value of the Execution Status parameter is Success. Then, click Complete Installation.
  4. Create an IP address-based machine group and click Next.
    Configure the following parameters and use the default settings for the other parameters.
    • Name: The name of the machine group. The name must be unique in a project. After the machine group is created, you cannot change its name.
    • IP Address: The IP address of the ECS instance. Separate multiple IP addresses with line feeds.
    Notice You cannot add Windows and Linux servers to the same machine group.
  5. In the Source Server Groups section, select the new machine group and move it to the Applied Server Groups section. Then, click Next.
    Notice If you enable a machine group immediately after you create the machine group, the heartbeat status of the machine group may be FAIL. This issue occurs because the machine group is not connected to Log Service. To resolve this issue, you can click Automatic Retry. If the issue persists, see What do I do if a Logtail machine group has no heartbeats?
  6. Create a Logtail configuration and click Next.
    Configure the following parameters and use the default settings for the other parameters.
    • Config Name: The name of the Logtail configuration. The name must be unique in a project. After the Logtail configuration is created, you cannot change its name.
    • Log Path: The directory and name of the log file.
      You can specify an exact directory and an exact name. You can also use wildcards to specify the directory and name. For more information, see Wildcard matching. Log Service scans all levels of the specified directory for log files that meet the specified conditions. A local sketch of this matching behavior appears after this procedure. Examples:
      • If you specify /apsara/nuwa/…/*.log, Log Service matches the log files whose names are suffixed by .log in the /apsara/nuwa directory and its recursive subdirectories.
      • If you specify /var/logs/app_*/*.log, Log Service matches the log files that meet the following conditions: The file name is suffixed by .log. The file is stored in a subdirectory of the /var/logs directory or in a recursive subdirectory of that subdirectory. The name of the subdirectory matches the app_* pattern.
      Note
      • By default, logs in each log file can be collected by using only one Logtail configuration.
      • You can use only asterisks (*) or question marks (?) as wildcards when you specify a log path.
    • Log Sample: A valid sample log that is collected from an actual scenario. Example:
      127.0.0.1|#|-|#|13/Apr/2020:09:44:41 +0800|#|GET /1 HTTP/1.1|#|0.000|#|74|#|404|#|3650|#|-|#|curl/7.29.0
    • Delimiter: The delimiter that is used in the sample log. Example: |#|.
      Note If you set the Delimiter parameter to Hidden Characters, you must enter the character in the 0x<hexadecimal ASCII code of the non-printable character> format. For example, if you want to use the non-printable character whose hexadecimal ASCII code is 01, enter 0x01.
    • Extracted Content: The log content that Log Service extracts from the sample log based on the specified delimiter. The extracted content is delimited into values, and you must specify a key for each value.
    After you click Next, the Logtail configuration is created, and Log Service starts to collect logs.
  7. Configure indexes.
    Note
    • An index takes effect only on the log data that is written to Log Service after the index is created.
    • If you want to query and analyze logs, you must configure field indexes for log fields and turn on Enable Analytics for the fields. For more information, see Configure indexes.
    1. After the collected logs are displayed in the Preview Data section, click Automatic Index Generation.
    2. In the Automatically Generate Index Attributes dialog box, confirm the index settings and click OK.
    3. Click Next.
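
To make the Log Path wildcard examples in step 6 more concrete, the following sketch uses Python's glob module to list which local files a pattern such as /var/logs/app_*/*.log is intended to cover. This is only a local illustration: Logtail performs its own matching on the server, and its recursive scan of subdirectories is approximated here with glob's ** pattern.

  # Local illustration of which files a Log Path pattern is intended to cover.
  # Logtail does the real matching; its recursive directory scan is
  # approximated with glob's ** pattern.
  import glob

  # /var/logs/app_*/*.log covers .log files in subdirectories that match
  # app_* and in their recursive subdirectories.
  pattern = "/var/logs/app_*/**/*.log"

  for path in sorted(glob.glob(pattern, recursive=True)):
      print(path)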

Step 4: Query and analyze logs

  1. In the Projects section, click the project in which you want to query and analyze logs.
  2. Choose Log Storage > Logstores. On the Logstores tab, click the Logstore where logs are stored.
  3. Enter a query statement in the search box, select a time range, and then click Search & Analyze.
    For example, you can execute the following query statement to obtain the geographical distribution of source IP addresses on the previous day. Log Service displays the query and analysis result in a table.
    • Query statement
      * | select count(1) as c, ip_to_province(remote_addr) as address group by address limit 100

      For more information, see Log search overview and Log analysis overview.

    • Query and analysis result

      The following figure shows that 329 requests were sent from Guangdong Province and 313 requests were sent from Beijing on the previous day. Log Service can also display the query and analysis results on charts. For more information, see Chart overview. A local sketch of what the analytic statement computes follows this step.

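
    To show what the analytic part of the statement computes, the following sketch reproduces the grouping and counting locally on a small, made-up list of remote_addr values. The IP-to-province lookup is a hypothetical stand-in for the ip_to_province function, and the output is not the tutorial's actual result.

      # Local illustration of `select count(1) as c, ip_to_province(remote_addr)
      # as address group by address`: count requests per province.
      # The lookup table is a hypothetical stand-in for ip_to_province(), and
      # the sample addresses are made up.
      from collections import Counter

      sample_remote_addrs = ["203.0.113.10", "203.0.113.10", "198.51.100.7",
                             "198.51.100.7", "198.51.100.7", "192.0.2.99"]

      ip_to_province = {"203.0.113.10": "Guangdong", "198.51.100.7": "Beijing",
                        "192.0.2.99": "Zhejiang"}

      counts = Counter(ip_to_province.get(ip, "unknown") for ip in sample_remote_addrs)
      for address, c in counts.most_common(100):   # limit 100, as in the statement
          print(address, c)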

FAQ

  • Am I charged if I only create projects and Logstores?

    Log Service provides shards to read and write data. By default, shard resources are reserved when you create a Logstore. You are charged for active shards. For more information, see Why am I charged for active shards?

  • What do I do if logs fail to be collected?

    When you use Logtail to collect logs, the collection may fail due to Logtail heartbeat failures, collection errors, or invalid Logtail configurations. For more information, see What do I do if errors occur when I use Logtail to collect logs?

  • What do I do if I can query logs but cannot analyze logs on the Query & Analysis page of a Logstore?

    If you want to analyze logs, you must configure field indexes for log fields and turn on Enable Analytics for the fields. For more information, see Configure indexes.