This guide shows how to collect NGINX logs from an Elastic Compute Service (ECS) instance by using LoongCollector, the collection agent of Simple Log Service (SLS). You will learn how to configure log collection, query data with SQL, view a visualization dashboard, set up alerts, and clean up resources to avoid fees.
Prerequisites
Activate services and prepare an account
Activate SLS: If this is your first time using SLS, log on to the Simple Log Service console and activate the service as prompted.
Prepare an account:
Log on with an Alibaba Cloud account: This account has all permissions by default and can be used directly.
Log on with a RAM user: The Alibaba Cloud account must grant the required access policies to the RAM user:
AliyunLogFullAccess: Used to create and manage SLS resources such as projects and logstores.
AliyunECSFullAccess: Used to install the collection agent on an ECS instance.
AliyunOOSFullAccess: Used to automatically install the collection agent on an ECS instance through Alibaba Cloud Operation Orchestration Service (OOS).
In a production environment, you can create custom permission policies for more fine-grained permission management of RAM users.
Prepare an ECS instance
Ensure that the security group of the ECS instance allows outbound traffic on port 80 (HTTP) and port 443 (HTTPS).
Generate mock logs
Create a script file named generate_nginx_logs.sh. The script should append a standard NGINX access log entry to the /var/log/nginx/access.log file every 5 seconds.
Grant execution permissions: chmod +x generate_nginx_logs.sh.
Run the script in the background: nohup ./generate_nginx_logs.sh &.
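The original script body is not reproduced in this guide. A minimal sketch of such a generator is shown below; it emits lines in the field order of the log_format configured later in this guide, and the IP ranges, URIs, status codes, and user agent are illustrative placeholders, not values from the original script.

```shell
#!/bin/bash
# Sketch of generate_nginx_logs.sh. Field order follows the log_format
# configured later in this guide:
#   $remote_addr - $remote_user [$time_local] "$request" $status
#   $body_bytes_sent "$http_referer" "$http_user_agent" $request_time $request_length
# All values below are illustrative placeholders.

make_log_line() {
  local ip="192.168.$((RANDOM % 256)).$((RANDOM % 256))"
  local now uris codes uri status
  now=$(LC_ALL=C date '+%d/%b/%Y:%H:%M:%S %z')
  uris=(/index.html /nginx-logo.png /api/items)
  codes=(200 200 200 404 500)
  uri=${uris[RANDOM % ${#uris[@]}]}
  status=${codes[RANDOM % ${#codes[@]}]}
  # remote_addr - remote_user [time_local] "request" status bytes "referer" "ua" time length
  printf '%s - - [%s] "GET %s HTTP/1.1" %s %s "-" "Mozilla/5.0" 0.%03d %s\n' \
    "$ip" "$now" "$uri" "$status" "$((RANDOM % 900 + 100))" \
    "$((RANDOM % 500))" "$((RANDOM % 1000 + 200))"
}

# Print one sample line. The real script would instead loop forever:
#   while :; do make_log_line >> /var/log/nginx/access.log; sleep 5; done
make_log_line
```

Any generator works as long as each emitted line matches the log_format that you configure for parsing.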
Create a project and a logstore
A project is the resource management unit in SLS and is used to isolate data for different projects. A logstore is the storage unit for log data.
Log on to the Simple Log Service console.
Click Create Project:
Region: Select the same region as your ECS instance. This lets you collect logs over the Alibaba Cloud internal network, which speeds up log collection.
Project Name: Enter a name that is globally unique within Alibaba Cloud, such as nginx-quickstart-abc.
Keep the default settings for other configurations and click Create.
On the page that indicates the project was created, click Create Logstore.
Enter a Logstore Name, such as nginx-access-log, keep the other configurations unchanged, and click OK.
By default, the created logstore is billed based on the volume of data written.
Install LoongCollector
In the dialog box that appears after the logstore is created, click OK to open the Quick Data Import panel.
On the Nginx - Text Logs card, click Integrate Now.
Machine Group Configurations:
Scenario: Servers
Installation Environment: ECS
Click Create Machine Group. In the panel that appears, select the target ECS instance.
Click Install and Create Machine Group. After the installation succeeds, enter a machine group Name, such as my-nginx-server, and then click OK.
Note: If the installation fails or remains pending, check whether the ECS region is the same as the project region.
Click Next to check the heartbeat status of the machine group.
When you first create a machine group, if the heartbeat status is FAIL, click Automatic Retry. The status changes to OK after about two minutes.
Create a collection configuration
After the heartbeat status is OK, click Next to go to the Logtail Configuration page:
Configuration Name: Enter a configuration name, such as nginx-access-log-config.
File Path: Enter the path for log collection, /var/log/nginx/access.log.
Processor Configurations:
Log Sample: Click Add Log Sample and paste a sample log entry:
192.168.*.* - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36"
Processing Method: Select Data Parsing (NGINX Mode). In the NGINX Log Configuration field, configure the log_format. Copy and paste the following content, and then click OK.
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" $request_time $request_length';
In a production environment, the log_format here must be consistent with the definition in your NGINX configuration file, which is usually located at /etc/nginx/nginx.conf.
Log parsing example:
Raw log:
192.168.*.* - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36"
Structured parsed log:
body_bytes_sent: 368
http_referer: -
http_user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36
remote_addr: 192.168.*.*
remote_user: -
request_length: 514
request_method: GET
request_time: 0.000
request_uri: /nginx-logo.png
status: 200
time_local: 15/Apr/2025:16:40:00
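To make the field mapping concrete, the following sketch (not how SLS implements parsing) matches a line laid out in the log_format order above against an equivalent regular expression in bash; both the rearranged sample line and the regex here are illustrative.

```shell
#!/bin/bash
# Illustrative parse of one access-log line, with fields in the
# log_format order: addr, user, time, request, status, bytes,
# referer, user agent, request_time, request_length.
line='192.168.1.2 - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 200 368 "-" "Mozilla/5.0" 0.000 514'
re='^([^ ]+) - ([^ ]+) \[([^]]+)\] "([^"]+)" ([0-9]+) ([0-9]+) "([^"]*)" "([^"]*)" ([0-9.]+) ([0-9]+)$'
if [[ $line =~ $re ]]; then
  echo "remote_addr:     ${BASH_REMATCH[1]}"
  echo "time_local:      ${BASH_REMATCH[3]}"
  echo "request:         ${BASH_REMATCH[4]}"
  echo "status:          ${BASH_REMATCH[5]}"
  echo "body_bytes_sent: ${BASH_REMATCH[6]}"
  echo "request_time:    ${BASH_REMATCH[9]}"
  echo "request_length:  ${BASH_REMATCH[10]}"
fi
```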
Click Next to go to the Query and Analysis Configurations page. The collection configuration takes about one minute to be applied. Click Automatic Refresh. If preview data appears, the configuration is applied.
Query and analyze logs
Click Next to go to the final page. Click Query Logs. You are redirected to the query and analysis page of the target logstore. You can write SQL analytic statements to extract key business and O&M metrics from the parsed logs. Set the time range to Last 15 Minutes:
If an error message appears, the index has not taken effect yet. Close the message and wait for about one minute. You can then view the log content collected from the access.log file.
Example 1: Website page views (PVs)
Count the total number of log entries within the specified time range.
* | SELECT count(*) AS pv
Example 2: Statistics on requests and error rate by minute
Calculate the total number of requests, the number of error requests (HTTP status code ≥ 400), and the error rate per minute.
* | SELECT date_trunc('minute', __time__) as time, count(1) as total_requests, count_if(status >= 400) as error_requests, round(count_if(status >= 400) * 100.0 / count(1), 2) as error_rate GROUP BY time ORDER BY time DESC LIMIT 100
Example 3: Statistics on PVs by request method (GET, POST, etc.)
Group and count page views by minute and request method (GET, POST, etc.).
* | SELECT date_format(minute, '%m-%d %H:%i') AS time, request_method, pv FROM ( SELECT date_trunc('minute', __time__) AS minute, request_method, count(*) AS pv FROM log GROUP BY minute, request_method ) ORDER BY minute ASC LIMIT 10000
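As a further example not taken from the original guide: unique visitors (UVs) can be approximated with the approx_distinct analytic function over the parsed client IP field. This is a sketch; with mock logs from a single host, the result is of course not meaningful.

```sql
* | SELECT approx_distinct(remote_addr) AS uv
```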
Visualize data on a dashboard
After you configure the NGINX parsing plug-in, SLS automatically creates a preset dashboard named nginx-access-log_NGINX Access Log.
In the navigation pane on the left, open the dashboard list. Find and click the dashboard name to view charts for core metrics such as page views (PVs), unique visitors (UVs), error rate, and request method distribution.
All charts can be customized and modified as needed.
Configure monitoring and alerts
Configure an alert rule to automatically send notifications when a service is abnormal, such as when the number of errors surges.
In the navigation pane on the left, click Alerts.
Create an action policy:
On the Action Policy tab, click Create.
Configure the ID and Name, for example, send-notification-to-admin.
In the Primary Action Policy, click Action Group.
Select a Notification Method, such as SMS Message, configure the Recipient, and select an Alert Template.
Click Confirm.
Create an alert rule:
Switch to the Alert Rules tab and click Create Alert.
Enter a rule name, such as Too many server 5xx errors.
In the Query Statistics field, click Create to set query conditions.
Logstore: Select the nginx-access-log logstore that you created.
Time Range: 15 minutes (Relative).
Query: Enter status >= 500 | SELECT *.
Click Preview to confirm that data can be queried, and then click OK.
Trigger Condition: Configure the rule to trigger a critical alert when the query result contains more than 100 entries.
This configuration means an alert is triggered if more than 100 5xx errors occur within 15 minutes.
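An equivalent way to express the condition above (a sketch, not the configuration from this guide) is to aggregate in the query itself and trigger when the computed value exceeds the threshold:

```sql
status >= 500 | SELECT count(*) AS error_count
```

With this query, the trigger condition would compare error_count > 100 instead of counting returned rows.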
Select Simple Log Service Notification and enable it.
Action Policy: Select the action policy that you created in the previous step.
Repeat Interval: Set to 15 minutes to avoid excessive repeated notifications.
Click OK to save the alert rule.
Verification: When the alert condition is met, the configured notification channel receives an alert. View all triggered alert records on the Alert History page.
Clean up resources
To avoid unnecessary charges, clean up all the resources that you created after you are finished.
Stop the log generation script
Log on to the ECS instance and run the following command to stop the log generation script that is running in the background.
kill $(ps aux | grep '[g]enerate_nginx_logs.sh' | awk '{print $2}')
Uninstall LoongCollector (Optional)
Download the uninstall script. Replace ${region_id} with the region ID of your ECS instance, such as cn-hangzhou, so that the download uses a nearby endpoint:
wget https://aliyun-observability-release-${region_id}.oss-${region_id}.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh -O loongcollector.sh
Run the uninstall command:
chmod +x loongcollector.sh; sudo ./loongcollector.sh uninstall
Delete the project.
On the Project List page of the Simple Log Service console, find the project that you created, such as nginx-quickstart-abc.
In the Actions column, click Delete.
In the delete panel, enter the project name and select a reason for deletion.
Click OK. Deleting a project also deletes all its associated resources, including logstores, collection configurations, dashboards, and alert rules.
Warning: After a project is deleted, all of its log data and configurations are released and cannot be recovered. Confirm that the data is no longer needed before you delete a project.
What to do next
You have now completed the process of log collection, query and analysis, dashboard visualization, and alert configuration. We recommend that you read the following documents to better understand the core concepts and plan your log resource system based on your business needs:
Familiarize yourself with data collection methods and choose the appropriate one for your business scenario.
Understand the storage resource hierarchy, plan the resource lifecycle, and allocate a reasonable number of shards.
FAQ
What should I do if the display time is inconsistent with the original log time after collection?
By default, the time field (__time__) in SLS uses the time when the log arrives at the server. To use the time from the original log, add a Time parsing plugin to the collection configuration.
Will I be charged for only creating a project and a logstore?
When you create a logstore, SLS reserves shard resources by default. This may incur active shard lease fees. For more information, see Why am I charged for active shard leases?
How do I troubleshoot log collection failures?
Log collection using Logtail may fail due to reasons such as abnormal Logtail heartbeats, collection errors, or incorrect Logtail collection configurations. For troubleshooting information, see Troubleshoot Logtail log collection failures.
Why can I query logs but not analyze them?
To analyze logs, you must configure a field index for the relevant fields and enable the statistics feature. Check the index configuration of your logstore.
How do I stop billing for SLS?
SLS cannot be disabled after it is activated. If you no longer use SLS, stop billing by deleting all projects under your account.