Simple Log Service:Collect and analyze ECS text logs using LoongCollector

Last Updated:Dec 10, 2025

This guide shows how to collect NGINX logs from an Elastic Compute Service (ECS) instance using LoongCollector of Simple Log Service (SLS). You will learn how to configure log collection, query data with SQL, view a visualization dashboard, set up alerts, and clean up resources to avoid fees.

Prerequisites

Activate services and prepare an account

  • Activate SLS: If this is your first time using SLS, log on to the Simple Log Service console and activate the service as prompted.

  • Prepare an account:

    • Log on with an Alibaba Cloud account: This account has all permissions by default and can be used directly.

    • Log on with a RAM user: The Alibaba Cloud account must grant the required access policies to the RAM user:

      • AliyunLogFullAccess: Used to create and manage SLS resources such as projects and logstores.

      • AliyunECSFullAccess: Used to install the collection agent on an ECS instance.

      • AliyunOOSFullAccess: Used to automatically install the collection agent on an ECS instance through Alibaba Cloud Operation Orchestration Service (OOS).

      In a production environment, you can create custom permission policies for more fine-grained permission management of RAM users.

Prepare an ECS instance

Ensure that the security group of the ECS instance allows outbound traffic on port 80 (HTTP) and port 443 (HTTPS).

Generate mock logs

  1. Log on to the ECS instance.

  2. Create a script file named generate_nginx_logs.sh and paste the following content into the file. This script appends a mock NGINX access log entry to the /var/log/nginx/access.log file every 5 seconds.

    generate_nginx_logs.sh

    #!/bin/bash
    
    #==============================================================================
    # Script Name: generate_nginx_logs.sh
    # Script Description: Simulates an NGINX server and continuously writes logs to access.log.
    #==============================================================================
    
    # --- Configurable Parameters ---
    
    # Log file path
    LOG_FILE="/var/log/nginx/access.log"
    
    # --- Mock Data Pools ---
    
    # Random IP address pool
    IP_ADDRESSES=(
        "192.168.1.10" "10.0.0.5" "172.16.31.40" "203.0.113.15"
        "8.8.8.8" "1.1.X.X" "91.198.XXX.XXX" "114.114.114.114"
        "180.76.XX.XX" "223.5.5.5"
    )
    
    # HTTP request method pool
    HTTP_METHODS=("GET" "POST" "PUT" "DELETE" "HEAD")
    
    # Common request path pool
    REQUEST_PATHS=(
        "/index.html" "/api/v1/users" "/api/v1/products?id=123" "/images/logo.png"
        "/static/js/main.js" "/static/css/style.css" "/login" "/admin/dashboard"
        "/robots.txt" "/sitemap.xml" "/non_existent_page.html"
    )
    
    # HTTP status code pool (You can adjust the weights. For example, add more 200s to increase their probability.)
    HTTP_STATUSES=(200 200 200 200 201 301 404 404 500 502 403)
    
    # Common User-Agent pool
    USER_AGENTS=(
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36"
        "Mozilla/5.0 (iPhone; CPU iPhone OS 14_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Mobile/15E148 Safari/604.1"
        "Mozilla/5.0 (Linux; Android 11; SM-G991U) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.120 Mobile Safari/537.36"
        "curl/7.68.0"
        "Googlebot/2.1 (+http://www.google.com/bot.html)"
    )
    
    # Common Referer pool
    REFERERS=(
        "https://www.google.com/"
        "https://www.bing.com/"
        "https://github.com/"
        "https://stackoverflow.com/"
        "-"
        "-"
        "-"
    )
    #  Check and create the log directory
    LOG_DIR=$(dirname "$LOG_FILE")
    if [ ! -d "$LOG_DIR" ]; then
        echo "Log directory '$LOG_DIR' does not exist. Attempting to create..."
        # Use sudo to create the directory because root permissions are usually required
        sudo mkdir -p "$LOG_DIR"
        if [ $? -ne 0 ]; then
            echo "Error: Failed to create directory '$LOG_DIR'. Check permissions or create it manually."
            exit 1
        fi
        echo "Directory created successfully."
    fi
    
    # Check write permissions for the log file
    if ! sudo touch "$LOG_FILE" 2>/dev/null; then
        echo "Error: Cannot write to '$LOG_FILE'. Check permissions."
        exit 1
    fi
    
    # --- Core Function ---
    
    # Define a function to randomly select an element from an array
    # Usage: random_element "array_name"
    function random_element() {
        local arr=("${!1}")
        echo "${arr[$((RANDOM % ${#arr[@]}))]}"
    }
    
    # Catch the Ctrl+C interrupt signal for a graceful exit
    trap 'echo -e "\n\nScript interrupted. Stopping log generation..."; exit 0;' SIGINT
    
    # --- Main Loop ---
    
    echo "Start generating mock NGINX logs to $LOG_FILE ..."
    echo "A log entry is generated every 5 seconds."
    echo "Press Ctrl+C to stop."
    sleep 2
    
    # Infinite loop to continuously generate logs
    while true; do
        # 1. Get the current time in the default NGINX format: [dd/Mon/YYYY:HH:MM:SS +ZZZZ]
        timestamp=$(date +'%d/%b/%Y:%H:%M:%S %z')
    
        # 2. Randomly select data from the pools
        ip=$(random_element IP_ADDRESSES[@])
        method=$(random_element HTTP_METHODS[@])
        path=$(random_element REQUEST_PATHS[@])
        status=$(random_element HTTP_STATUSES[@])
        user_agent=$(random_element USER_AGENTS[@])
        referer=$(random_element REFERERS[@])
    
        # 3. Generate random request metrics
        bytes_sent=$((RANDOM % 5000 + 100)) # Response body size in bytes: a random number between 100 and 5099
        request_time=$(printf '0.%03d' $((RANDOM % 1000))) # Response time in seconds
        request_length=$((RANDOM % 1000 + 200)) # Request size in bytes
    
        # 4. Concatenate into a complete NGINX log entry
        # Format: $remote_addr - $remote_user [$time_local] "$request" $request_time $request_length $status $body_bytes_sent "$http_referer" "$http_user_agent"
        log_line="$ip - - [$timestamp] \"$method $path HTTP/1.1\" $request_time $request_length $status $bytes_sent \"$referer\" \"$user_agent\""
    
        # 5. Append the log line to the file (sudo tee works even if the file is root-owned)
        echo "$log_line" | sudo tee -a "$LOG_FILE" > /dev/null
        
        # 6. Wait for 5 seconds before the next loop
        sleep 5
    done
  3. Grant execution permissions: chmod +x generate_nginx_logs.sh.

  4. Run the script in the background: nohup ./generate_nginx_logs.sh &.
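On the ECS instance, you can confirm that logs are flowing with sudo tail -f /var/log/nginx/access.log. The following self-contained sketch writes a single line of the same shape to a temporary file so you can inspect the expected format anywhere; all field values are illustrative:

```shell
#!/bin/sh
# Write one mock access-log line, shaped like the sample log used later in the
# collection configuration, to a temporary file and print it back.
TMP_LOG=$(mktemp)
timestamp=$(date +'%d/%b/%Y:%H:%M:%S %z')
log_line="203.0.113.15 - - [$timestamp] \"GET /index.html HTTP/1.1\" 0.002 514 200 368 \"-\" \"curl/7.68.0\""
echo "$log_line" >> "$TMP_LOG"
cat "$TMP_LOG"
rm -f "$TMP_LOG"
```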

Create a project and a logstore

A project is the resource management unit in SLS and is used to isolate data for different projects. A logstore is the storage unit for log data.

  1. Log on to the Simple Log Service console.

  2. Click Create Project:

    • Region: Select the same region as your ECS instance. This lets you collect logs over the Alibaba Cloud internal network, which speeds up log collection.

    • Project Name: Enter a globally unique name within Alibaba Cloud, such as nginx-quickstart-abc.

  3. Keep the default settings for other configurations and click Create.

  4. On the page that indicates the project was created, click Create Logstore.

  5. Enter a Logstore Name, such as nginx-access-log, do not change the other configurations, and click OK.

    By default, a medium logstore is created, which is billed based on the volume of data written.

Install LoongCollector

  1. In the dialog box that appears after the logstore is created, click OK to open the Quick Data Import panel.

  2. On the Nginx - Text Logs card, click Integrate Now.

  3. Machine Group Configurations:

    • Scenario: Servers

    • Installation Environment: ECS

  4. Click Create Machine Group. In the panel that appears, select the target ECS instance.

  5. Click Install and Create Machine Group. After the installation succeeds, enter a machine group Name, such as my-nginx-server, and then click OK.

    Note

    If the installation fails or remains pending, check whether the ECS region is the same as the project region.

  6. Click Next to check the heartbeat status of the machine group.

    When you first create a machine group, if the heartbeat status is FAIL, click Automatic Retry. The status changes to OK after about two minutes.
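If the heartbeat stays FAIL after retrying, log on to the ECS instance and check whether the collector process is running at all. The process names below are assumptions and may differ across agent versions:

```shell
#!/bin/sh
# Count collector-like processes. The bracketed first letter keeps grep from
# matching its own command line; a count of 0 means no agent is running.
agent_count=$(ps aux | grep -cE '[l]oongcollector|[i]logtail' || true)
echo "collector processes found: $agent_count"
```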

Create a collection configuration

  1. After the heartbeat status is OK, click Next to go to the Logtail Configuration page:

    • Configuration Name: Enter a configuration name, such as nginx-access-log-config.

    • File Path: The path for log collection, /var/log/nginx/access.log.

    • Processor Configurations:

      • Log Sample: Click Add Log Sample and paste a sample log entry:

        192.168.*.* - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36"
      • Processing Method: Select Data Parsing (NGINX Mode). In the NGINX Log Configuration field, configure the log_format. Copy and paste the following content, then click OK.

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$request_time $request_length '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent"';
        In a production environment, the log_format here must match the log_format defined in your NGINX configuration file, which is usually located at /etc/nginx/nginx.conf.

        Log parsing example:

        Raw log:

        192.168.*.* - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36"

        Structured parsed log:

        body_bytes_sent: 368
        http_referer: -
        http_user_agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.*.* Safari/537.36
        remote_addr: 192.168.*.*
        remote_user: -
        request_length: 514
        request_method: GET
        request_time: 0.000
        request_uri: /nginx-logo.png
        status: 200
        time_local: 15/Apr/2025:16:40:00
  2. Click Next to go to the Query and Analysis Configurations page. The collection configuration takes about one minute to be applied. Click Automatic Refresh. If preview data appears, the configuration is applied.
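The field mapping shown in the parsing example can be emulated locally. This sed sketch only illustrates the kind of regular expression that the log_format implies; it is not the parser that SLS actually runs:

```shell
#!/bin/sh
# Pull a few fields out of a sample access-log line with sed capture groups.
line='192.168.0.1 - - [15/Apr/2025:16:40:00 +0800] "GET /nginx-logo.png HTTP/1.1" 0.000 514 200 368 "-" "curl/7.68.0"'

remote_addr=${line%% *}  # everything before the first space
request=$(printf '%s\n' "$line" | sed -E 's/[^"]*"([^"]*)".*/\1/')
status=$(printf '%s\n' "$line" | sed -E 's/.*" ([0-9.]+) ([0-9]+) ([0-9]+) ([0-9]+) ".*/\3/')

echo "remote_addr=$remote_addr"
echo "request=$request"
echo "status=$status"
```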

Query and analyze logs

Click Next to go to the final page. Click Query Logs. You are redirected to the query and analysis page of the target logstore. You can write SQL analytic statements to extract key business and O&M metrics from the parsed logs. Set the time range to Last 15 Minutes:

Note

If an error message appears, the index for the logstore has not been created yet. Close the message, wait about one minute, and run the query again. You can then view the log content collected from the access.log file.

  • Example 1: Website page views (PVs)

    Count the total number of log entries within the specified time range.

    * | SELECT count(*) AS pv
  • Example 2: Statistics on requests and error rate by minute

    Calculate the total number of requests, the number of error requests (HTTP status code ≥ 400), and the error rate per minute.

    * | SELECT 
      date_trunc('minute', __time__) as time,
      count(1) as total_requests,
      count_if(status >= 400) as error_requests,
      round(count_if(status >= 400) * 100.0 / count(1), 2) as error_rate
    GROUP BY time 
    ORDER BY time DESC 
    LIMIT 100
    
  • Example 3: Statistics on PVs by request method (GET, POST, etc.)

    Group and count page views by minute and request method (GET, POST, etc.).

    * |
    SELECT
        date_format(minute, '%m-%d %H:%i') AS time,
        request_method,
        pv
    FROM (
        SELECT
            date_trunc('minute', __time__) AS minute,
            request_method,
            count(*) AS pv
        FROM
            log
        GROUP BY
            minute,
            request_method
    )
    ORDER BY
        minute ASC
    LIMIT 10000
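As a quick local illustration of Example 2's error-rate arithmetic (errors × 100 / total, rounded to two decimal places), using a hypothetical sample of status codes:

```shell
#!/bin/sh
# 3 of these 8 hypothetical status codes are >= 400, so the error rate is 37.50%.
printf '%s\n' 200 200 404 200 500 301 404 200 |
awk '{ total++; if ($1 >= 400) errors++ }
     END { printf "total=%d errors=%d error_rate=%.2f%%\n",
                  total, errors, errors * 100.0 / total }'
```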

Visualize data on a dashboard

After you configure the NGINX parsing plug-in, SLS automatically creates a preset dashboard named nginx-access-log_NGINX Access Log.

  1. In the navigation pane on the left, choose Dashboard > Dashboards.

  2. Find and click the dashboard name to view charts for core metrics such as page views (PVs), unique visitors (UVs), error rate, and request method distribution.

  3. All charts can be customized and modified as needed.

Configure monitoring and alerts

Configure an alert rule to automatically send notifications when a service is abnormal, such as when the number of errors surges.

  1. In the navigation pane on the left, click Alerts.

  2. Create an action policy:

    • On the Notification Management > Action Policy tab, click Create.

    • Configure the ID and Name, for example, send-notification-to-admin.

    • In the Primary Action Policy section, click Action Group.

    • Select a Notification Method, such as SMS Message, configure the Recipient, and select an Alert Template.

    • Click Confirm.

  3. Create an alert rule:

    1. Switch to the Alert Rules tab and click Create Alert.

    2. Enter a rule name, such as Too many server 5xx errors.

    3. In the Query Statistics field, click Create to set query conditions.

      • Logstore: Select nginx-access-log that you created.

      • Time Range: 15 minutes (Relative).

      • Query: Enter status >= 500 | SELECT * .

      • Click Preview to confirm that data can be queried, and then click OK.

    4. Trigger Condition: Configure the rule to trigger a critical alert when the query result contains more than 100 entries.

      This configuration means an alert is triggered if more than 100 5xx errors occur within 15 minutes.
    5. Destination: Select Simple Log Service Notification and enable it.

      • Action Policy: Select the action policy that you created in the previous step.

      • Repeat Interval: Set to 15 minutes to avoid excessive repeated notifications.

    6. Click OK to save the alert rule.

  4. Verification: When the alert condition is met, the configured notification channel receives an alert. View all triggered alert records on the Alert History page.

Clean up resources

To avoid unnecessary charges, clean up all the resources that you created after you are finished.

  1. Stop the log generation script

    Log on to the ECS instance and run the following command to stop the log generation script that runs in the background. The brackets in [g]enerate prevent the grep command from matching its own process.

    kill $(ps aux | grep '[g]enerate_nginx_logs.sh' | awk '{print $2}')
  2. Uninstall LoongCollector (Optional)

    1. Replace ${region_id} in the following command with the region ID of your ECS instance, such as cn-hangzhou, to download the script from the nearest endpoint.

      wget https://aliyun-observability-release-${region_id}.oss-${region_id}.aliyuncs.com/loongcollector/linux64/latest/loongcollector.sh -O loongcollector.sh;
    2. Run the uninstall command.

      chmod +x loongcollector.sh; sudo ./loongcollector.sh uninstall;
  3. Delete the project.

    1. On the Simple Log Service console, go to the Project List page and find the project that you created, such as nginx-quickstart-xxx.

    2. In the Actions column, click Delete.

    3. In the delete panel, enter the project name and select a reason for deletion.

    4. Click OK. Deleting a project also deletes all its associated resources, including logstores, collection configurations, dashboards, and alert rules.

    Warning

    After a project is deleted, all its log data and configuration information are released and cannot be recovered. Before you delete a project, confirm the action to prevent data loss.

What to do next

You have now completed the full workflow of log collection, query and analysis, dashboard visualization, and alert configuration. Next, learn the core concepts of SLS, such as projects, logstores, and indexes, and plan your log resource hierarchy based on your business needs.

FAQ

What should I do if the display time is inconsistent with the original log time after collection?

By default, the time field (__time__) in SLS uses the time when the log arrives at the server. To use the time from the original log, add a Time parsing plugin to the collection configuration.

Will I be charged for only creating a project and a logstore?

When you create a logstore, SLS reserves shard resources by default. This may incur active shard lease fees. For more information, see Why am I charged for active shard leases?

How do I troubleshoot log collection failures?

Log collection using Logtail may fail due to reasons such as abnormal Logtail heartbeats, collection errors, or incorrect Logtail collection configurations. For troubleshooting information, see Troubleshoot Logtail log collection failures.

Why can I query logs but not analyze them?

To analyze logs, you must configure a field index for the relevant fields and enable the statistics feature. Check the index configuration of your logstore.

How do I stop billing for SLS?

SLS cannot be disabled after it is activated. If you no longer use SLS, stop billing by deleting all projects under your account.