
Simple Log Service:Consume data with Logstash

Last Updated: Mar 25, 2026

Simple Log Service lets you use Logstash to consume data. To do this, configure the Logstash input plugin for Simple Log Service to pull data from Simple Log Service and send it to other systems, such as Kafka or HDFS.

Features

  • Distributed consumption: Configure multiple servers to consume data from a single Logstore simultaneously.

  • High performance: Based on the Java consumer group implementation, the plugin can process up to 20 MB/s of uncompressed data on a single core.

  • High reliability: Consumption progress is saved on the server. If a consumer is interrupted, it automatically resumes from the last checkpoint.

  • Automatic load balancing: The system automatically allocates shards based on the number of consumers and rebalances the load when consumers are added or removed.

Procedure

Download the Logstash installation package for your operating system from the official Logstash download page.

The following example uses a Linux (RPM-based) environment:

  1. Install Logstash. For more information, see the official Logstash documentation.

    1. Download and install the public signature key.

      sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
    2. In the /etc/yum.repos.d/ directory, create a repository file with a .repo extension, such as logstash.repo, and add the following content:

      [logstash-9.x]
      name=Elastic repository for 9.x packages
      baseurl=https://artifacts.elastic.co/packages/9.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=1
      autorefresh=1
      type=rpm-md
    3. Download and install Logstash.

      sudo yum install logstash
  2. Install the input plugin.

    1. Download the input plugin from logstash-input-sls.

    2. Install the input plugin.

      /usr/share/logstash/bin/logstash-plugin install logstash-input-sls.zip
      Note

      For information about potential installation failures and their solutions, see plugin installation and configuration.

  3. Create the logstash user. Logstash must run as a non-root user.

    1. Create the logstash user.

      sudo useradd --system --no-create-home --user-group logstash
    2. Set permissions for the logstash user. Ensure that the logstash user owns the Logstash-related directories, such as /usr/share/logstash, /etc/logstash, and /var/log/logstash.

      sudo chown -R logstash:logstash /usr/share/logstash /etc/logstash /var/log/logstash
    3. Verify that the logstash user was created.

      id logstash

      The output should display the User ID (UID) and Group ID (GID) for the logstash user, confirming that the user was successfully created.

  4. Start Logstash as the logstash user.

    1. In the /etc/logstash directory, create a configuration file with a .conf extension. This topic uses logstash-sample.conf as an example.

    2. Add the sample code to the logstash-sample.conf file and start Logstash as the logstash user.

      sudo -u logstash /usr/share/logstash/bin/logstash -f /etc/logstash/logstash-sample.conf

      The following example configures Logstash to consume data from a Logstore and print it to standard output. The parameters used in the example are described below.

      Sample code

      input {
        logservice{
        endpoint => "your project endpoint"
        access_id => "your_accesskey_id"
        access_key => "your_accesskey_secret"
        project => "your project name"
        logstore => "your logstore name"
        consumer_group => "consumer group name"
        consumer_name => "consumer name"
        position => "end"
        checkpoint_second => 30
        include_meta => true
        consumer_name_with_ip => true
        }
      }
      
      output {
        stdout {}
      }
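
      To send the consumed data to a downstream system instead of standard output, replace the stdout output with the corresponding output plugin. The following sketch assumes that the logstash-output-kafka plugin is installed; the broker address and topic name are placeholder values:

      output {
        kafka {
          bootstrap_servers => "localhost:9092"  # placeholder broker address
          topic_id => "sls-logs"                 # placeholder topic name
          codec => json                          # serialize each event as JSON
        }
      }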

      Parameters

      endpoint (string, required)

      The service endpoint of the SLS Project. For more information, see Service endpoints.

      access_id (string, required)

      The AccessKey ID of your Alibaba Cloud account. The AccessKey ID must have the required permissions to manage consumer groups. For more information, see Grant permissions to consume data from a Logstore.

      access_key (string, required)

      The AccessKey Secret of your Alibaba Cloud account. The AccessKey Secret must have the required permissions to manage consumer groups. For more information, see Grant permissions to consume data from a Logstore.

      project (string, required)

      The name of the SLS Project.

      logstore (string, required)

      The name of the SLS Logstore.

      consumer_group (string, required)

      The name of the consumer group.

      consumer_name (string, required)

      The name of the consumer. This name must be unique within the consumer group.

      position (string, required)

      The position where data consumption starts.

      • begin: Starts consumption from the first log entry in the Logstore.

      • end: Starts consumption from the current time.

      • yyyy-MM-dd HH:mm:ss: Starts consumption from the specified point in time.

      checkpoint_second (number, optional)

      The interval, in seconds, for saving a checkpoint. The value must be an integer from 10 to 60. Default: 30.

      include_meta (boolean, optional)

      Specifies whether to include metadata in the output. Metadata includes fields such as source, time, tag, and topic. Default: true.

      consumer_name_with_ip (boolean, optional)

      Specifies whether to append the consumer's IP address to the consumer name. This parameter must be set to true to enable distributed consumption. Default: true.
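
      To use the distributed consumption and automatic load balancing described in Features, run Logstash with the same configuration on multiple servers. The following sketch, with placeholder values, relies on consumer_name_with_ip => true: the appended IP address keeps each consumer name unique within the shared consumer group, and Simple Log Service then distributes the Logstore shards across the consumers.

      input {
        logservice {
          endpoint => "your project endpoint"
          access_id => "your_accesskey_id"
          access_key => "your_accesskey_secret"
          project => "your project name"
          logstore => "your logstore name"
          consumer_group => "shared_consumer_group"  # same group on every server
          consumer_name => "consumer"                # IP address is appended automatically
          position => "end"
          consumer_name_with_ip => true              # required for distributed consumption
        }
      }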