
Simple Log Service:Connect Simple Log Service to Grafana

Last Updated: Jan 25, 2025

This topic describes how to use Grafana to display and analyze NGINX logs that are collected by Simple Log Service.

Important

Simple Log Service allows you to export a Grafana dashboard and import it into Simple Log Service. For more information, see Export a Grafana dashboard and import the dashboard to Simple Log Service.

Prerequisites

  • NGINX logs are collected. For more information, see Collect text logs in NGINX configuration mode.

  • The indexing feature is enabled and indexes are created. For more information, see Collect and analyze NGINX access logs.

  • The aliyun-log-grafana-datasource-plugin software package is downloaded. For ease of understanding, the plug-in is also referred to as the Simple Log Service plug-in in this topic.

    The download command is wget https://github.com/aliyun/aliyun-log-grafana-datasource-plugin/archive/refs/heads/master.zip.

    Note

    In this topic, aliyun-log-grafana-datasource-plugin V2.36 is used.

  • Grafana is installed. For more information, see the Grafana documentation.

    Note
    • In this topic, Grafana 11.4.0 is used.

    • If you install Grafana on your computer, make sure that port 3000 is accessible, for example by allowing it in your firewall or security group rules, so that you can open the Grafana web UI in a browser.

    • If you want to use pie charts, you must run the following command to install the Pie Chart plug-in:

      grafana-cli plugins install grafana-piechart-panel

Version compatibility between Grafana and aliyun-log-grafana-datasource-plugin

The following list describes the version compatibility between Grafana and aliyun-log-grafana-datasource-plugin.

  • Grafana 8.0.0 and later: aliyun-log-grafana-datasource-plugin V2.x

  • Grafana earlier than 8.0.0: aliyun-log-grafana-datasource-plugin V1.0

Usage notes

If you want to configure Security Token Service (STS)-based redirection and ensure security, take note of the following points:

  • For the Resource Access Management (RAM) user whose AccessKey pair is used when you add a data source in Grafana, you must attach the AliyunRAMReadOnlyAccess policy to the RAM user.

  • For the RAM role whose Alibaba Cloud Resource Name (ARN) is specified for the data source, you must attach only the AliyunLogReadOnlyAccess policy to the RAM role. For more information, see Embed console pages and share log data.

After you configure STS-based redirection in password-free mode, check whether public access and sharing are allowed for Grafana dashboards that are generated based on the data source. Public access may lead to an increase in traffic costs and public exposure of log content.

Note

For more information about system policies, see System policies for Simple Log Service.

Step 1: Install the Simple Log Service plug-in

  1. Run one of the following commands to decompress the aliyun-log-grafana-datasource-plugin software package to the plug-in directory of Grafana.

    • If Grafana is installed by using a Yellowdog Updater, Modified (YUM) repository or RedHat Package Manager (RPM) package, run the following command:

      unzip aliyun-log-grafana-datasource-plugin-master.zip -d /var/lib/grafana/plugins
    • If Grafana is installed by using a .tar.gz file, run the following command:

      In the following command, {PATH_TO} specifies the Grafana installation path.

      unzip aliyun-log-grafana-datasource-plugin-master.zip -d {PATH_TO}/grafana-11.4.0/data/plugins
  2. Modify the configuration file of Grafana.

    1. Open the configuration file.

      • If Grafana is installed by using a YUM repository or an RPM package, open the /etc/grafana/grafana.ini file.

      • If Grafana is installed by using a .tar.gz file, open the {PATH_TO}/grafana-11.4.0/conf/defaults.ini file.

    2. Find [plugins] in the configuration file and configure the allow_loading_unsigned_plugins parameter.

      allow_loading_unsigned_plugins = aliyun-log-service-datasource
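
      The [plugins] section might look similar to the following after the change (a minimal sketch; other settings in the section are omitted):

      [plugins]
      # Allow Grafana to load the unsigned Simple Log Service plug-in.
      allow_loading_unsigned_plugins = aliyun-log-service-datasource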
  3. Restart Grafana.

    1. Run the kill command to terminate the Grafana process (a sample stop-and-start sequence is provided after these substeps).

    2. Run one of the following commands to start Grafana.

      • If Grafana is installed by using a YUM repository or an RPM package, run the following command:

        systemctl restart grafana-server
      • If Grafana is installed by using a .tar.gz file, run the following command:

        ./bin/grafana-server web
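
    For example, if Grafana is installed by using a .tar.gz file, the stop-and-start sequence in this step might look like the following (a minimal sketch; the process name and path depend on your installation):

      # Stop the running Grafana process.
      kill $(pgrep -f grafana-server)
      # Start Grafana again from the installation directory.
      cd {PATH_TO}/grafana-11.4.0
      ./bin/grafana-server web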

Step 2: Add a data source for Grafana

  1. Log on to Grafana.

  2. In the left-side navigation pane, choose Connections > Data Sources.

  3. On the Data Sources tab, click Add data source.

  4. On the Add data source page, search for log-service-datasource. When log-service-datasource appears in the results, click it.

  5. On the aliyun-log-service-datasource page, configure the parameters.

    • The following list describes the required parameters.

      • Endpoint: The endpoint of the Simple Log Service project. Example: http://cn-qingdao.log.aliyuncs.com. Enter an endpoint based on your business requirements. For more information, see Endpoints.

      • Project: The name of the Simple Log Service project.

      • AccessKeyID: The AccessKey ID is used to identify a user. For more information, see Access key pair.

        We recommend that you grant only the required permissions to a RAM user based on the principle of least privilege. For more information about how to grant permissions to a RAM user, see Create a RAM user and authorize the RAM user to access Simple Log Service and Examples of using custom policies to grant permissions to a RAM user.

      • AccessKeySecret: The AccessKey secret is used to encrypt and verify a signature string. The AccessKey secret must be kept confidential.

    • The following list describes the optional parameters.

      • Name: The name of the data source. Default value: aliyun-log-service-datasource.

      • Default: Specifies whether this data source is used as the default data source. By default, the switch is turned on.

      • Default Logstore: The default logstore. If you do not specify a logstore, make sure that the specified AccessKey pair has the ListProject permission on the specified project.

      • RoleArn: The ARN of the RAM role that is specified for STS-based redirection.

      • HTTP headers: You can specify custom headers. This parameter takes effect only when the type of the data source is MetricStore (PromQL). For more information, see the fields in the FormValue parameter of the Query acceleration topic. The following list describes the supported headers (a sample configuration is provided after this list):

        • x-sls-parallel-enable: specifies whether to enable concurrent computing. By default, concurrent computing is disabled.

        • x-sls-parallel-time-piece-interval: the time interval based on which a query is split. Unit: seconds. Valid values: [3600, 86400 × 30]. Default value: 21600, which is equivalent to 6 hours.

        • x-sls-parallel-time-piece-count: the number of subqueries that can be obtained after splitting based on the specified time interval. Valid values: 1 to 16. Default value: 8.

        • x-sls-parallel-count: the number of global concurrent tasks. Valid values: 2 to 64. Default value: 8.

        • x-sls-parallel-count-per-host: the number of concurrent tasks on a server. Valid values: 1 to 8. Default value: 2.

        • x-sls-global-cache-enable: specifies whether to enable global cache. By default, global cache is disabled.

      • Region: The region in which the project resides. If you specify a region, V4 signatures are supported, which provide higher security.
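
      For example, a configuration that enables concurrent computing and raises the global concurrency might look like the following. This is a hypothetical sketch; confirm the accepted header values against the Query acceleration topic before you use them.

        x-sls-parallel-enable: true
        x-sls-parallel-time-piece-interval: 21600
        x-sls-parallel-count: 16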

  6. After the settings are configured, click Save & test.

Step 3: Add a dashboard

Perform the following steps to add a dashboard for Grafana:

  1. In the left-side navigation pane, click Dashboards.

  2. On the Dashboards page, click + Create dashboard. Then, click + Add visualization.

  3. On the Select data source page, select aliyun-log-service-datasource from the Data source drop-down list.

  4. Add charts based on your business requirements.

    The following list describes the parameters that you must configure when you add a chart:

    • Data source type: The type of the data source varies based on the syntax and the store type. The syntax includes SQL and PromQL. The supported data source types include ALL(SQL), Logstore(SQL), MetricStore(SQL), and MetricStore(PromQL).

      • You can use the SQL syntax to query and analyze data in logstores. For more information, see Overview of log query and analysis.

      • You can use the SQL syntax and the PromQL syntax to query and analyze data in Metricstores. For more information, see Overview of log query and analysis.

      • You can add custom headers for the data source of the MetricStore(PromQL) type on the configuration page of the data source.

    • Logstores: the names of logstores to query.

    • Query: the query statement. Example:

      * | select count(*) as c, __time__-__time__%60 as t group by t
    • ycol: None

    • xcol: Select TimeSeries / Custom from the drop-down list and enter t.

    • goto SLS: the link that navigates you to the Simple Log Service console.

      You can click goto SLS on the Explore and Dashboards pages to go to the Simple Log Service console and compare data between the two consoles based on your business requirements. The Simple Log Service console provides more powerful analysis capabilities and more flexible log retrieval. After you are redirected, the query and time range that you specified in Grafana are automatically carried over, so you do not need to specify them again in the Simple Log Service console.

      No additional configuration is required for this redirection. Make sure that you are logged on to the Simple Log Service console in your browser. Otherwise, you are redirected to the logon page of the Simple Log Service console.

      Note

      This feature is available when the Simple Log Service plug-in V2.30 or later is used.

  5. In the Panel options section, configure the Title parameter. Then, in the upper-right corner of the page, click Save dashboard. In the dialog box that appears, click Save.

Configure template variables

After you configure template variables in Grafana, you can select different variable values to view different results in the same chart.

Configure a template variable for a time interval

  1. In the upper-right corner of the New dashboard page, choose Edit > Settings.

  2. Click Variables.

  3. Click New variable.

  4. Configure the parameters for the template variable. Then, click Add.

    The following list describes the parameters.

    • Name: The name of the template variable. Example: myinterval. If you use the template variable in a query statement, you must add two dollar signs ($$) before the variable name. Example: $$myinterval. A sample query is provided after these steps.

    • Type: Select Interval.

    • Label: Enter time interval.

    • Values: Enter 1m,10m,30m,1h,6h,12h,1d,7d,14d,30d.

    • Auto Option: Turn on Auto Option. Retain the default values for the other parameters.

  5. View the configuration result.
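
    For example, the template variable can be referenced in a query statement similar to the following sketch, which is based on the NGINX access log query used later in this topic:

      * | select __time__ - __time__ % $${myinterval} as time, count(*) as pv group by time order by time limit 2000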

Configure a template variable for a domain

  1. On the Variables page, click New.

  2. Configure the parameters for the template variable. Then, click Add. The following list describes the parameters.

    • Name: The name of the template variable. Example: hostname. If you use the template variable in a query statement, you must add a dollar sign ($) before the variable name. Example: $hostname. A sample query is provided after these steps.

    • Type: Select Custom.

    • Label: Enter the name of the domain.

    • Custom Options: Enter *,example.com,example.org,example.net. You can view information about access to all domains. You can also view information about access to the example.com, example.org, or example.net domain.

    • Selection Options: Retain the default value for the parameter.

  3. View the configuration result.
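
    For example, the template variable can be referenced in a query statement similar to the following sketch, which assumes that the collected NGINX logs contain the host field used elsewhere in this topic:

      host: "$hostname" | select count(*) as pv, approx_distinct(remote_addr) as uv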

Configure a template variable for the list of logstores

  1. On the Variables page, set the variable type to Custom. The name of the template variable is a unique identifier and must contain the string logstore. The string logstore is not case-sensitive. In the Custom options field, enter the optional values and separate them with commas (,).


  2. In the Logstores drop-down list, select the name of the template variable that you created, and then refresh the dashboard to obtain the latest results.

Overview of standard chart configurations

Single value charts (Stat and Gauge charts)

xcol: stat

ycol: <Numeric column>, <Numeric column>

Important

If you specify a non-numeric column for ycol, 0 is displayed for the column.

  • Example 1

    Chart type: Stat

    xcol: stat

    ycol: PV, deltaPercent

    query: * | select diff[1] as "PV", round((diff[1] - diff[2])/diff[2] * 100, 2) as deltaPercent from (select compare("PV", 86400) as diff from (select count(*) as "PV" from log))

  • Example 2

    Chart type: Gauge

    xcol: stat

    ycol: c

    query: * | select count(distinct labels['hostname']) as c from (select promql_query('${metricName}{cluster =~ "${cluster}"}') from metrics ) limit 100000


Pie charts (Pie)

xcol: pie

ycol: <Aggregate column>, <Aggregate column>

  • Example 1

    Chart type: Pie

    xcol: pie

    ycol: request_method, c

    query: request_method: "$method" | select count(*) as c, request_method group by request_method

  • Example 2

    Chart type: Pie

    xcol: pie

    ycol: http_user_agent, pv

    query: * | select count(1) as pv, case when http_user_agent like '%Chrome%' then 'Chrome' when http_user_agent like '%Firefox%' then 'Firefox' when http_user_agent like '%Safari%' then 'Safari' else 'unKnown' end as http_user_agent group by case when http_user_agent like '%Chrome%' then 'Chrome' when http_user_agent like '%Firefox%' then 'Firefox' when http_user_agent like '%Safari%' then 'Safari' else 'unKnown' end order by pv desc limit 10

  • Other scenarios

    The configuration rules for Stat charts also apply to Pie charts, and data is displayed as expected.

    Chart type: Pie

    xcol: stat

    ycol: hostNameNum, ipNum

    query: * | select count(distinct labels['hostname']) as hostNameNum, count(distinct labels['ip']) + 20 as ipNum from (select promql_query('${metricName}{cluster =~ ".*"}') from metrics ) limit 100000


Line charts (Time series)

xcol: <Time column>

ycol: <Numeric column> [, <Numeric column>, ...] for log data, or <labels / Aggregate column>#:#<Numeric column> for Metricstore data or aggregated log data

  • Example 1

    Chart type: Time series

    xcol: time

    ycol: pv, uv

    query: * | select __time__ - __time__ % $${myinterval} as time, COUNT(*)/ 100 as pv, approx_distinct(remote_addr)/ 60 as uv GROUP BY time order by time limit 2000


  • Example 2

    Chart type: Time series

    xcol: time

    ycol: labels#:#value

    query: * | select time, * from (select promql_query_range('${metricName}') from metrics) limit 1000


  • Example 3

    You can use SQL statements to display the labels of time series data based on your business requirements.

    Chart type: Time series

    xcol: time

    ycol: customLabelsExtract#:#value

    query: * | select concat(labels['ip'], ' -> ', labels['cluster']) as customLabelsExtract, value from (select promql_query_range('${metricName}') from metrics) limit 1000


Column charts (Bar)

xcol: bar

ycol: <Aggregate column>, <Numeric column> [, <Numeric column>, ...]

  • Example 1

    Chart type: Bar

    xcol: bar

    ycol: host, pv, pv2, uv

    query: * | select host, COUNT(*)+10 as pv, COUNT(*)+20 as pv2, approx_distinct(remote_addr) as uv GROUP BY host ORDER BY uv desc LIMIT 5


Tables (Table)

If a field value is accurate to the nanosecond, the values of the time field can be sorted by nanosecond.

You can modify the totalLogs parameter. The totalLogs parameter specifies the total number of logs to query. Default value: 100. Minimum value: 1. Maximum value: 5000. This parameter takes effect only in search statements.

xcol: <None>

ycol: <None> or <Display column> [, <Display column>, ...]

  • Example 1

    Chart type: Table

    xcol: Table/Log

    ycol: <None>

    query: * | select __time__ - __time__ % 60 as time, COUNT(*)/ 100 as pv, approx_distinct(remote_addr)/ 60 as uv GROUP BY time order by time limit 2000


Logs (Logs)

xcol: <None>

ycol: <None>

Example

Chart type: Logs

xcol: <None>

ycol: <None>

query: host: www.vt.mock.com


Traces (Traces)

Chart type: Traces

xcol: trace

ycol: None

query: traceID: "f88271003ab7d29ffee1eb8b68c58237"


Note

In this example, a logstore is used to collect trace data. Before you can use the logstore to collect trace data, you must enable the Trace application of Simple Log Service. You can use OpenTelemetry to collect trace data to Simple Log Service in native mode. You can also use other trace systems to collect trace data to Simple Log Service. For more information, see Overview.

Grafana versions later than 10.0 support span filtering in trace data. If you use Grafana 10.0 or earlier, you can add custom span filters to your query statement. Example:

traceID: "f88271003ab7d29ffee1eb8b68c58237" and resource.deployment.environment : "dev" and service : "web_request" and duration > 10

Maps (Map)

xcol: map

ycol: <Country column>, <Geographic location column>, <Numeric column>

Example

Chart type: GeoMap

xcol: map

ycol: country, geo, pv

query: * | select count(1) as pv ,geohash(ip_to_geo(arbitrary(remote_addr))) as geo,ip_to_country(remote_addr) as country from log group by country having geo <>'' limit 1000


FAQ

  • Where are Grafana logs stored?

    Grafana logs are stored in the following directories:

    • macOS: /usr/local/var/log/grafana

    • Linux: /var/log/grafana

  • What do I do if aliyun-log-plugin_linux_amd64: permission denied appears in logs?

    Grant the execute permission on the dist/aliyun-log-plugin_linux_amd64 file in the plug-in installation directory.
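
    For example, if the plug-in is decompressed to the default plug-in directory, a command similar to the following might be used (adjust the path to match your installation):

      chmod +x /var/lib/grafana/plugins/aliyun-log-grafana-datasource-plugin-master/dist/aliyun-log-plugin_linux_amd64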