This topic describes how to collect the logs of an Alibaba Cloud Elastic Compute Service (ECS) instance in the Log Service console. This topic also describes how to query and analyze the collected logs.
- An ECS instance is available. For more information, see ECS quick start.
- Logs are available on the ECS instance.
The following sample log entry is used in this topic:
127.0.0.1|#|-|#|13/Apr/2020:09:44:41 +0800|#|GET /1 HTTP/1.1|#|0.000|#|74|#|404|#|3650|#|-|#|curl/7.29.0
The delimiter mode is used in this example to collect the sample log entry. For more information, see Collect logs in delimiter mode.
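As a rough sketch of what delimiter mode does with this entry, the following Python snippet splits the sample log on the `|#|` delimiter and pairs each value with a key. The key names are assumptions for illustration; choose keys that match your own schema.

```python
# Hypothetical sketch: tokenize the sample entry on the "|#|" delimiter,
# the way delimiter mode splits each line into values.
SAMPLE = ("127.0.0.1|#|-|#|13/Apr/2020:09:44:41 +0800|#|GET /1 HTTP/1.1"
          "|#|0.000|#|74|#|404|#|3650|#|-|#|curl/7.29.0")

# Assumed field names for illustration only.
KEYS = ["remote_addr", "remote_user", "time_local", "request",
        "request_time", "request_length", "status", "body_bytes_sent",
        "http_referer", "http_user_agent"]

values = SAMPLE.split("|#|")       # 10 values for this sample entry
record = dict(zip(KEYS, values))   # one key per extracted value
print(record["status"])            # prints "404"
```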
Step 1: Activate Log Service
- Log on to the Log Service console.
- Activate Log Service as prompted.
Step 2: Create a project and a Logstore
- Log on to the Log Service console.
- Create a project.
- In the Projects section, click Create Project.
- In the Create Project panel, configure the following parameters. For other parameters, retain the default settings.
  - Project Name: The name of the project. The project name must be unique within an Alibaba Cloud account. After the project is created, you cannot change the name of the project.
  - Region: The region of the data center for the project. We recommend that you select the region where the ECS instance resides. This way, you can use the Alibaba Cloud internal network to accelerate log collection.
After the project is created, you cannot change the region or migrate the project to another region.
- Click OK.
- Create a Logstore. After the project is created, you are prompted to create a Logstore.
In the Create Logstore panel, configure the following parameters. For other parameters, retain the default settings.
- Logstore Name: The name of the Logstore. The name must be unique in the project to which the Logstore belongs. After the Logstore is created, you cannot change the name of the Logstore.
- Shards: The number of shards that you want to use. Log Service provides shards to read and write data. Each shard supports a write capacity of 5 MB/s and 500 writes/s, and a read capacity of 10 MB/s and 100 reads/s. If one shard can meet your business requirements, you can set Shards to 1.
- Automatic Sharding: The switch for the automatic sharding feature. If you turn on Automatic Sharding, Log Service automatically increases the number of shards when the specified number of shards cannot meet your write requirements. If the specified number of shards can meet your business requirements, you can turn off Automatic Sharding.
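The per-shard limits above give a simple way to size a Logstore. The following sketch estimates the shard count from an assumed peak write throughput; the 18 MB/s workload is a made-up example, not a value from this topic.

```python
import math

# Per-shard write limit stated above: 5 MB/s.
WRITE_MBPS_PER_SHARD = 5

# Hypothetical workload: peak write throughput of 18 MB/s.
peak_write_mbps = 18

# Round up: any remainder still needs a whole extra shard.
shards_needed = math.ceil(peak_write_mbps / WRITE_MBPS_PER_SHARD)
print(shards_needed)  # prints 4
```

If the write rate also approaches the 500 writes/s per-shard limit, size against whichever limit is hit first.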
Step 3: Collect logs
- In the Import Data section, click Delimiter Mode - Text Log.
- Select the newly created project and Logstore. Then, click Next.
- Install Logtail.
- On the ECS Instances tab, select the ECS instance from which you want to collect logs and click Execute Now.
- Confirm that the value of Execution Status is Success. Then, click Complete Installation.
- Create an IP address-based machine group. Configure the following parameters, retain the default settings for other parameters, and then click Next.
  - Name: The name of the machine group. The name must be unique in a project. After the machine group is created, you cannot change the name of the machine group.
  - IP Address: The IP address of the ECS instance. If you enter multiple IP addresses, separate them with line feeds.
  Notice: Windows and Linux servers cannot be added to the same machine group.
- Select the newly created machine group and move it from the Source Server Groups section to the Applied Server Groups section. Then, click Next.
  Notice: If you apply a machine group immediately after it is created, the heartbeat status of the machine group may be FAIL. This issue occurs because the machine group is not yet connected to Log Service. In this case, you can click Automatic Retry. If the issue persists, see What do I do if no heartbeat connections are detected on Logtail?
- Create a Logtail configuration. Configure the following parameters, retain the default settings for other parameters, and then click Next.
  - Config Name: The name of the Logtail configuration. The name must be unique in a project. After the Logtail configuration is created, you cannot change the name of the Logtail configuration.
  - Log Path: The directory and name of the log file. The file name can be a complete name or a name that contains wildcards. For more information, see Wildcard matching. Log Service scans all levels of the specified directory to match log files. Examples:
    - If you specify /apsara/nuwa/…/*.log, Log Service matches the files whose names are suffixed by .log in the /apsara/nuwa directory and its recursive subdirectories.
    - If you specify /var/logs/app_*/*.log, Log Service matches the files that meet the following conditions: The file name contains .log. The file is stored in a subdirectory of /var/logs or in a recursive subdirectory of that subdirectory. The name of the subdirectory matches the app_* pattern.
    Note:
    - By default, logs in each log file can be collected by using only one Logtail configuration.
    - You can use only asterisks (*) and question marks (?) as wildcards in the log path.
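The /var/logs/app_*/*.log matching rules above can be sketched with Python's fnmatch: the directory component is checked against app_* and the file name against *.log. The paths below are made up for illustration, and the sketch checks only the immediate parent directory, not the recursive-subdirectory case.

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

# Hypothetical candidate paths for illustration.
candidates = [
    "/var/logs/app_web/access.log",
    "/var/logs/app_api/error.log",
    "/var/logs/other/access.log",    # directory does not match app_*
    "/var/logs/app_web/access.txt",  # file name does not match *.log
]

def matches(path: str) -> bool:
    """Simplified check for the /var/logs/app_*/*.log pattern."""
    p = PurePosixPath(path)
    return fnmatch(p.parent.name, "app_*") and fnmatch(p.name, "*.log")

print([p for p in candidates if matches(p)])  # the first two paths match
```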
  - Log Sample: A valid sample log entry that is collected from an actual scenario. Example:
    127.0.0.1|#|-|#|13/Apr/2020:09:44:41 +0800|#|GET /1 HTTP/1.1|#|0.000|#|74|#|404|#|3650|#|-|#|curl/7.29.0
  - Delimiter: The delimiter that is used in the sample log entry. In this example, the delimiter is |#|.
    Note: If you select Hidden Characters for Quote, you must enter the character in the following format: 0x<hexadecimal ASCII code of the non-printable character>. For example, if you want to use the non-printable character whose hexadecimal ASCII code is 01, you must enter 0x01.
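To make the 0x01 notation concrete, the snippet below shows that it denotes the single non-printable byte chr(0x01), which can then act as a field separator. The field values are invented for illustration.

```python
# The hidden character entered as 0x01 corresponds to chr(0x01).
DELIM = chr(0x01)  # non-printable character, hexadecimal ASCII code 01

# Join and re-split some sample values with the hidden delimiter.
line = DELIM.join(["127.0.0.1", "404", "curl/7.29.0"])
print(line.split(DELIM))  # prints ['127.0.0.1', '404', 'curl/7.29.0']
```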
  - Extracted Content: The log content to extract. Log Service extracts log content based on the specified sample log entry and delimiter. The extracted log content is delimited into values. You must specify a key for each value.
  After you click Next, the Logtail configuration is created, and Log Service starts to collect logs.
  Note:
  - The Logtail configuration requires a maximum of 3 minutes to take effect.
  - If an error occurs when you use Logtail to collect logs, see Diagnose collection errors.
- Configure indexes.
  Note:
- An index takes effect only on the log data that is written to Log Service after the index is created.
- If you want to query and analyze logs, you must configure indexes for log fields and turn on the switches in the Enable Analytics column. For more information, see Configure indexes.
- After the collected logs are displayed in the Preview Data section, click Automatic Index Generation.
- In the Automatically Generate Index Attributes dialog box, confirm the index settings and click OK.
- Click Next.
Step 4: Query and analyze logs
- In the Projects section, click the project in which you want to query and analyze logs.
- Click the Logstore that you want to view.
- Enter a query statement in the search box, select a time range, and then click Search & Analyze. For example, you can execute the following query statement to obtain the geographical distribution of source IP addresses over the last day. Log Service can display the query result in a table.
- Query statement
* | select count(1) as c, ip_to_province(remote_addr) as address group by address limit 100
- Query and analysis results
The following figure shows that 329 IP addresses are from Guangdong Province and 313 IP addresses are from Beijing over the last day. Log Service can display the query result on a chart. For more information, see Chart overview.
FAQ
- Am I charged if I only create projects and Logstores?
Log Service provides shards to read and write data. By default, shard resources are reserved when you create a Logstore. You are charged for active shards. For more information, see Why am I charged for active shards?
- What do I do if logs fail to be collected?
When you use Logtail to collect logs, a failure may occur due to Logtail heartbeat failures, collection errors, or invalid Logtail configurations. For more information, see Troubleshoot collection errors.
- What do I do if I can query logs but cannot analyze logs on the query and analysis page of a Logstore?
If you want to analyze logs, you must configure indexes for log fields and turn on the switches in the Enable Analytics column. For more information, see Configure indexes.