
Simple Log Service: Collect standard output from a self-managed cluster - new version (DaemonSet)

Last Updated: Apr 11, 2025

This topic describes how to deploy Logtail in DaemonSet mode to collect standard output from a self-managed Kubernetes cluster.

Prerequisites

  • Simple Log Service is activated. For more information, see Activate Simple Log Service.

  • Logtail version 2.1 or later is required. For more information about how to upgrade Logtail, see Upgrade Logtail.

  • A cluster of Kubernetes 1.6 or later is available.

  • kubectl is installed in the Kubernetes cluster.

Solution overview

When you deploy Logtail in DaemonSet mode to collect standard output from a Kubernetes cluster, you need to perform the following steps:

  1. Install Logtail components: Install Logtail components in your Kubernetes cluster. The Logtail components include DaemonSet logtail-ds, ConfigMap alibaba-log-configuration, and Deployment alibaba-log-controller. After Logtail is installed, Simple Log Service can deliver a Logtail configuration to Logtail and use Logtail to collect logs from the Kubernetes cluster.

  2. Create a Logtail configuration: After the Logtail configuration is created, Logtail collects incremental logs based on it, processes them, and uploads them to the specified Logstore.

  3. Query logs: After a Logtail configuration is created, Simple Log Service automatically creates a Logstore to store the collected logs. You can view the logs in the Logstore.

Step 1: Install Logtail

Important
  • The alibaba-log-controller component is available only in Kubernetes 1.6 and later.

  • Make sure that the kubectl command-line tool is installed on the machine on which you want to run commands.

  1. Log on to the Simple Log Service console. Create a project. For more information, see Create a project.

    We recommend that you create a project whose name starts with k8s-log-. Example: k8s-log-${your_k8s_cluster_id}.

  2. Log on to your Kubernetes cluster and run the following commands to install Logtail and the required dependent components:

    1. Download and decompress the installation package:

      • Chinese mainland

        wget https://logtail-release-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/kubernetes/0.5.3/alibaba-cloud-log-all.tgz; tar xvf alibaba-cloud-log-all.tgz; chmod 744 ./alibaba-cloud-log-all/k8s-custom-install.sh
      • Outside the Chinese mainland

        wget https://logtail-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/kubernetes/0.5.3/alibaba-cloud-log-all.tgz; tar xvf alibaba-cloud-log-all.tgz; chmod 744 ./alibaba-cloud-log-all/k8s-custom-install.sh
    2. Modify the ./alibaba-cloud-log-all/values.yaml configuration file:

      # ===================== Required settings =====================
      # The name of the project. 
      SlsProjectName: 
      # The ID of the region where the project resides. 
      Region: 
      # The ID of the Alibaba Cloud account to which the project belongs. You must enclose the ID in double quotation marks (""). 
      AliUid: "11**99"
      # The AccessKey ID and AccessKey secret of the Alibaba Cloud account or Resource Access Management (RAM) user. The RAM user must have the AliyunLogFullAccess permission. 
      AccessKeyID: 
      AccessKeySercret: 
      # The custom ID of the cluster. The ID can contain only letters, digits, and hyphens (-). 
      ClusterID: 
      # ==========================================================
      # Specifies whether to enable metric collection for the related components. Valid values: true and false. Default value: true. 
      SlsMonitoring: true
      # The network type. Valid values: Internet and Intranet. Default value: Internet. 
      Net: Internet
      # Specifies whether the container runtime of the cluster is containerd. Valid values: true and false. Default value: false. 
      SLS_CONTAINERD_USED: true

      The following list describes the parameters in the preceding configuration file. Configure them based on your business requirements.

      • SlsProjectName: The name of the created project.

      • Region: The ID of the region where the project resides. For example, the ID of the China (Hangzhou) region is cn-hangzhou. For more information, see Supported regions.

      • AliUid: The ID of the Alibaba Cloud account to which the project belongs. You must enclose the ID in double quotation marks (""). Example: AliUid: "11**99". For more information, see Obtain the ID of the Alibaba Cloud account to which your Simple Log Service project belongs.

      • AccessKeyID: The AccessKey ID of the Alibaba Cloud account to which the project belongs. We recommend that you use the AccessKey pair of a RAM user and attach the AliyunLogFullAccess policy to the RAM user. For more information, see Create a RAM user and authorize the RAM user to access Simple Log Service.

      • AccessKeySercret: The AccessKey secret of the Alibaba Cloud account to which the project belongs. The same recommendation applies: use the AccessKey pair of a RAM user that has the AliyunLogFullAccess policy attached.

      • ClusterID: The custom ID of the cluster. The ID can contain only letters, digits, and hyphens (-). This parameter corresponds to the ${your_k8s_cluster_id} variable in the following operations. Important: Do not specify the same cluster ID for different Kubernetes clusters.

      • SlsMonitoring: Specifies whether to enable metric collection for the related components. Valid values: true (default) and false.

      • Net: The network type. Valid values: Internet (default) and Intranet.

      • SLS_CONTAINERD_USED: Specifies whether the container runtime of the cluster is containerd. Valid values: true and false (default). Important: If you do not set this parameter to true for a self-managed Kubernetes cluster whose container runtime is containerd, Logtail may fail to collect logs.
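      For reference, a filled-in values.yaml might look like the following. Every value here is a placeholder (account ID, AccessKey pair, and cluster ID are invented for illustration); replace all of them with your own values before you install the components.

      ```yaml
      # Example values.yaml with placeholder values only.
      SlsProjectName: k8s-log-my-cluster-123
      Region: cn-hangzhou
      AliUid: "11**99"
      AccessKeyID: <your-access-key-id>
      AccessKeySercret: <your-access-key-secret>
      ClusterID: my-cluster-123
      SlsMonitoring: true
      Net: Internet
      # Set to true if the cluster runtime is containerd.
      SLS_CONTAINERD_USED: true
      ```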

    3. Install Logtail and the required components:

      bash k8s-custom-install.sh; kubectl apply -R -f result
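      After the command completes, you can check that the components are running before you proceed. The commands below assume that the components are installed in the kube-system namespace with the label k8s-app=logtail-ds; adjust the namespace and label if your installation differs.

      ```
      # Verify that the Logtail DaemonSet has one ready pod per node.
      kubectl get ds logtail-ds -n kube-system
      # Verify that the alibaba-log-controller Deployment is available.
      kubectl get deployment alibaba-log-controller -n kube-system
      # If a pod is not Ready, inspect its recent logs.
      kubectl logs -n kube-system -l k8s-app=logtail-ds --tail=50
      ```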

The following table describes the Simple Log Service resources that are automatically created after you install Logtail and the required components.

Important
  • If you install Logtail and the required dependent components in a self-managed Kubernetes cluster, Logtail is automatically granted privileged permissions. This prevents the "text file busy" error that may occur when other pods are deleted. For more information, see Bug 1468249, Bug 1441737, and Issue 34538.

Machine groups:

  • k8s-group-<YOUR_CLUSTER_ID> (example: k8s-group-my-cluster-123): The machine group of logtail-daemonset, used in log collection scenarios.

  • k8s-group-<YOUR_CLUSTER_ID>-statefulset (example: k8s-group-my-cluster-123-statefulset): The machine group of logtail-statefulset, used in metric collection scenarios.

  • k8s-group-<YOUR_CLUSTER_ID>-singleton (example: k8s-group-my-cluster-123-singleton): The machine group of a single instance, used to create a Logtail configuration for the single instance.

Logstores:

  • config-operation-log: Stores the operation logs of the alibaba-log-controller component. We recommend that you do not create a collection configuration for this Logstore. You can delete the Logstore; after you do, the operation logs of alibaba-log-controller are no longer collected. This Logstore is billed at the same rate as regular Logstores. For more information, see Billable items of the pay-as-you-go billing method.

Step 2: Create a Logtail configuration

This section describes two methods to create a Logtail configuration. We recommend that you use only one method to manage a Logtail configuration.

  • CRD - AliyunPipelineConfig (recommended): Manage a Logtail configuration by using the AliyunPipelineConfig custom resource definition (CRD) in Kubernetes. This method is suitable for scenarios that involve complex collection and processing requirements, or that require the Logtail configuration to stay consistent with application versions in the cluster.

  • Simple Log Service console: Manage a Logtail configuration in the GUI for quick deployment and configuration. This method is suitable for simple scenarios; specific advanced features and custom settings are not available.

CRD - AliyunPipelineConfig (recommended)

To create a Logtail configuration, you only need to create a custom resource (CR) from the AliyunPipelineConfig CRD. After the CR is created, the Logtail configuration takes effect.

Important

If you create a Logtail configuration by creating a CR and you want to modify the Logtail configuration, you can only modify the CR. If you modify the Logtail configuration in the Simple Log Service console, the new settings are not synchronized to the CR.

  1. Log on to the ACK console.

  2. In the left navigation bar, select Clusters.

  3. On the Clusters page, click More in the Actions column of the cluster that you want to manage, and then click Manage Cluster.

  4. Create a file named example-k8s-stdout.yaml.

    Note

    You can use the configuration generator to generate a YAML script for your scenario. This tool helps you quickly complete the configuration and reduces manual operations.

    The following example YAML file collects standard output in multiline mode from pods that have the app: ^(.*test.*)$ label in the default namespace, and sends the collected logs to the k8s-stdout Logstore (automatically created) in the k8s-log-test project. You need to modify the following parameters in the YAML file based on your business requirements:

    1. project, for example, k8s-log-test.

      Log on to the Simple Log Service console and check the name of the project that is generated by the installed Logtail. The name is typically in the k8s-log-<YOUR_CLUSTER_ID> format.

    2. IncludeK8sLabel, for example, app: ^(.*test.*)$. This parameter is used to filter target pods. In this example, pods that have the app label whose value contains test are collected.

    For more information about the config item in the YAML file, including the supported input, output, and processing plug-ins and container filtering methods, see Kubernetes standard output (new version). For more information about the complete YAML parameters, see CR parameters.

    apiVersion: telemetry.alibabacloud.com/v1alpha1
    # Create a CR from the ClusterAliyunPipelineConfig CRD.
    kind: ClusterAliyunPipelineConfig
    metadata:
      # The name of the resource. The name must be unique in your Kubernetes cluster. This name is also the name of the created Logtail configuration. If the name is duplicated, the Logtail configuration does not take effect.
      name: example-k8s-stdout
    spec:
      # Define the Logtail configuration
      config:
        aggregators: [ ]
        global: {}
        # Configure the Logtail input plug-ins.
        inputs:
          # Use the input_container_stdio plug-in to collect standard output from containers.
          - Type: input_container_stdio
            # Do not ignore stderr, so stderr is collected.
            IgnoringStderr: false
            # Do not ignore stdout, so stdout is collected.
            IgnoringStdout: false
            # Collect container metadata.
            CollectingContainersMeta: true
            # Container filtering.
            ContainerFilters:
              IncludeK8sLabel:
                app: ^(.*test.*)$
            # Configure multiline collection
            Multiline:
              # Specify the regular expression that is used to match the beginning of the first line of a log.
              StartPattern: \d+-\d+-\d+.*
              Mode: custom
              UnmatchedContentTreatment: single_line
        # Use processing plug-ins to parse logs.
        processors:
            # Parse logs in regular expression mode
          - Type: processor_parse_regex_native
            SourceKey: content
            # Regular expression
            Regex: (\d+-\d+-\d+\s\S+)(.*)
            # Field index
            Keys:
              - time
              - detail
        # Configure the Logtail output plug-ins.
        flushers:
          - Type: flusher_sls
            Logstore: k8s-stdout
        sample: |-
          2025-04-02 16:00:03
          1
          2
          3
          4
          5
          6
          7
          8
          9
      project:
        name: k8s-log-test
      logstores:
        - name: k8s-stdout
  5. Run kubectl apply -f example-k8s-stdout.yaml, where example-k8s-stdout.yaml is the name of the YAML file that you created. Logtail starts to collect standard output from containers and sends the collected logs to Simple Log Service.
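    The multiline grouping and regex parsing configured above can be sketched in Python. This is only an illustration of how the Multiline StartPattern groups the sample output into one log and how the processor Regex splits it into the time and detail fields; it is not the actual Logtail implementation, and the re.S flag is an assumption made so that "." can span the joined multiline log.

    ```python
    import re

    # Settings copied from the Logtail configuration above.
    start_pattern = re.compile(r"\d+-\d+-\d+.*")               # Multiline.StartPattern
    parse_regex = re.compile(r"(\d+-\d+-\d+\s\S+)(.*)", re.S)  # processor_parse_regex_native Regex

    # The sample block from the CR above.
    sample = "2025-04-02 16:00:03\n1\n2\n3\n4\n5\n6\n7\n8\n9"

    # Group raw lines into logs: a new log begins on each line that matches StartPattern.
    logs, current = [], []
    for line in sample.splitlines():
        if start_pattern.match(line) and current:
            logs.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        logs.append("\n".join(current))

    # Parse each grouped log into the time and detail fields.
    parsed = []
    for log in logs:
        m = parse_regex.match(log)
        if m:
            parsed.append({"time": m.group(1), "detail": m.group(2).strip()})

    print(parsed)
    # → [{'time': '2025-04-02 16:00:03', 'detail': '1\n2\n3\n4\n5\n6\n7\n8\n9'}]
    ```

    Because none of the numeric lines match the start pattern, the entire sample becomes a single log whose first token is extracted as the time field.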

Simple Log Service console

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the project that you used when you installed the Logtail components, such as k8s-log-<your_cluster_id>. On the project page, click the Logtail configuration entry of the target Logstore, add a Logtail configuration, and then click Access Now in the Kubernetes - Standard Output - New Version section.


  3. Because you have installed the Logtail components for the cluster in the previous step, click Use Existing Machine Group.

  4. On the Machine Group Configuration page, select the k8s-group-${your_k8s_cluster_id} machine group in the ACK DaemonSet Mode section of the K8s Scenario section, click > to add the machine group to the Selected Machine Groups section, and then click Next.

  5. Create a Logtail configuration. Configure the required parameters as described in the following section and click Next. Approximately 1 minute is required to create a Logtail configuration.

    This section describes only the required parameters. For more information about the parameters, see Kubernetes standard output (new version).

    In the Global Configuration section, enter a configuration name.


  6. Create Index and Preview Data: Simple Log Service enables full-text indexing by default, which indexes all fields in logs for queries. You can also manually create field indexes based on the collected logs, or click Auto-Generate Index to have Simple Log Service generate field indexes. Field indexes let you perform term queries on specific fields, which reduces indexing fees and improves query efficiency. For more information, see Create indexes.

Step 3: Query and analyze logs

  1. Log on to the Simple Log Service console.

  2. In the Projects section, click the target project to go to the project details page.


  3. Click the icon to the right of the target Logstore, and select Search & Analysis to view the logs from the Kubernetes cluster.


Default fields of standard output (new version)

The following table describes the fields that are uploaded by default for each log in a Kubernetes cluster.

  • _time_: The time when the log was collected.

  • _source_: The type of the log source. Valid values: stdout and stderr.

  • __tag__:_image_name_: The name of the image.

  • __tag__:_container_name_: The name of the container.

  • __tag__:_pod_name_: The name of the pod.

  • __tag__:_namespace_: The namespace of the pod.

  • __tag__:_pod_uid_: The unique identifier of the pod.
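Once these fields are indexed, you can reference them in search statements. For example, the following hypothetical query returns only stderr output from pods in the default namespace; adjust the field values to match your workloads:

```
_source_: stderr and __tag__:_namespace_: default
```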
