Simple Log Service provides two methods to deploy Logtail for collecting Kubernetes logs: DaemonSet and Sidecar. For information about the differences between these two methods, see Logtail installation and collection guide for Kubernetes clusters. This topic describes how to deploy Logtail as a DaemonSet to collect standard output from Alibaba Cloud ACK clusters.
Prerequisites
Simple Log Service is activated. For more information, see Activate Simple Log Service.
Considerations
If you want to collect text logs from a cluster, see Collect text logs from ACK clusters (DaemonSet).
This topic applies only to ACK managed and dedicated clusters.
If you want to collect logs from container applications in an ACK Serverless cluster, see Collect application logs by using Pod environment variables.
If you use a self-managed Kubernetes cluster or your Alibaba Cloud ACK cluster and Simple Log Service belong to different Alibaba Cloud accounts, see Collect stdout and stderr from a self-managed cluster in DaemonSet mode (old version).
Solution overview
When you deploy Logtail as a DaemonSet to collect standard output from an ACK cluster, you need to perform the following steps:
Install the Logtail component: Install the Logtail component for your ACK cluster. The component includes the logtail-ds DaemonSet, the alibaba-log-configuration ConfigMap, and the alibaba-log-controller Deployment. These resources are used by Simple Log Service to deliver collection configurations to Logtail and perform log collection operations.
Create a Logtail configuration: After a Logtail configuration is created, Logtail collects incremental logs based on the configuration, processes them, and uploads them to the specified Logstore. You can create a Logtail configuration by using one of the following methods: CRD - AliyunPipelineConfig (recommended), CRD - AliyunLogConfig, environment variables, or the Simple Log Service console.
Query and analyze logs: After a Logtail configuration is created, Simple Log Service automatically creates a Logstore to store the collected logs. You can view the logs in the Logstore.
Step 1: Install Logtail components
Install Logtail components in an existing ACK cluster
Log on to the ACK console. In the navigation pane on the left, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the navigation pane on the left, click Add-ons.
On the Logs and Monitoring tab of the Add-ons page, find the logtail-ds component (loongcollector) and click Install.
Install Logtail components when you create an ACK cluster
Log on to the ACK console. In the navigation pane on the left, click Clusters.
On the Clusters page, click Create Kubernetes Cluster. In the Component Configurations step of the wizard, select Enable Log Service.
This topic describes only the settings related to Simple Log Service. For more information about other settings, see Create an ACK managed cluster.
After you select Enable Log Service, the system prompts you to create a Simple Log Service project. You can use one of the following methods to create a project:
Select Project: You can select an existing project to manage the collected container logs.
Create Project: Simple Log Service automatically creates a project named k8s-log-{ClusterID} to manage the collected container logs. ClusterID indicates the unique identifier of the created Kubernetes cluster.
In the Component Configurations step of the wizard, Enable is selected for the Control Plane Component Logs parameter by default. If Enable is selected, the system automatically configures collection settings and collects logs from the control plane components of a cluster, and you are charged for the collected logs based on the pay-as-you-go billing method. You can determine whether to select Enable based on your business requirements. For more information, see Collect logs of control plane components in ACK managed clusters.
After the Logtail components are installed, Simple Log Service automatically generates a project named k8s-log-<YOUR_CLUSTER_ID> and resources in the project. You can log on to the Simple Log Service console to view the resources. The following table describes the resources.
Resource type | Resource name | Description | Example |
Machine group | k8s-group-<cluster_id> | The machine group of logtail-daemonset, which is used in log collection scenarios. | k8s-group-my-cluster-123 |
| k8s-group-<cluster_id>-statefulset | The machine group of logtail-statefulset, which is used in metric collection scenarios. | k8s-group-my-cluster-123-statefulset |
| k8s-group-<cluster_id>-singleton | The machine group of a single instance, which is used to create a Logtail configuration for the single instance. | k8s-group-my-cluster-123-singleton |
Logstore | config-operation-log | The Logstore is used to store logs of the alibaba-log-controller component. We recommend that you do not create a Logtail configuration for this Logstore. You can delete the Logstore. After the Logstore is deleted, the system no longer collects the operational logs of the alibaba-log-controller component. You are charged for this Logstore in the same manner as for regular Logstores. For more information, see Billable items of pay-by-ingested-data. | None |
Step 2: Create a Logtail configuration
The following table describes the methods that you can use to create a Logtail configuration. We recommend that you use only one method to manage a given Logtail configuration:
Method | Configuration description | Scenario |
CRD - AliyunPipelineConfig (recommended) | You can use the AliyunPipelineConfig Custom Resource Definition (CRD), which is a Kubernetes CRD, to manage a Logtail configuration. | This method is suitable for scenarios that require complex collection and processing, and version consistency between the Logtail configuration and the Logtail container in an ACK cluster. Note The logtail-ds component installed on an ACK cluster must be later than V1.8.10. For more information about how to update Logtail, see Update Logtail to the latest version. |
Simple Log Service console | You can manage a Logtail configuration in the GUI based on quick deployment and configuration. | This method is suitable for scenarios in which simple settings are required to manage a Logtail configuration. If you use this method to manage a Logtail configuration, specific advanced features and custom settings cannot be used. |
Environment variable | You can use environment variables to configure parameters used to manage a Logtail configuration in an efficient manner. | You can use environment variables only to configure simple settings, such as the destination project and Logstore. Complex processing logic is not supported. Only single-line text logs are supported. |
CRD - AliyunLogConfig | You can use the AliyunLogConfig CRD, which is an old version CRD, to manage a Logtail configuration. | This method is suitable for known scenarios in which you can use the old version CRD to manage Logtail configurations. You must gradually replace the AliyunLogConfig CRD with the AliyunPipelineConfig CRD to obtain better extensibility and stability. For more information about the differences between the two CRDs, see CRDs. |
CRD - AliyunPipelineConfig (recommended)
You need only to create an AliyunPipelineConfig custom resource to create a collection configuration. After the resource is created, the collection configuration automatically takes effect.
For a collection configuration that is created by using a custom resource, you can modify the configuration only by updating the custom resource. Modifications to the collection configuration in the Simple Log Service console are not synchronized to the custom resource.
Log on to the ACK console.
On the Clusters page, find the cluster that you want to manage and click its name. In the navigation pane on the left, go to the Custom Resources page.
On the Custom Resources page, click the CRDs tab, then click Create from YAML.
Modify the parameters in the following YAML template based on your business requirements, copy and paste the template to the editor, and then click Create.
Note: You can use the Logtail configuration generator to generate a YAML script for your scenario. This tool helps you quickly complete the configuration and reduces manual operations.
The following YAML template collects standard output in multi-line text mode from pods that have the `app: ^(.*test.*)$` label in the default namespace. The collected logs are sent to the k8s-stdout Logstore (automatically created) in the k8s-log-<YOUR_CLUSTER_ID> project. Modify the following parameters in the YAML template based on your business requirements:
- project: for example, k8s-log-<YOUR_CLUSTER_ID>. Log on to the Simple Log Service console and check the name of the project that was generated when the log collection component was installed.
- IncludeK8sLabel: for example, app: ^(.*test.*)$. The label used to filter pods. In this example, pods whose label key is app and whose label value contains test are collected.
- Endpoint and Region: for example, cn-hangzhou.log.aliyuncs.com and cn-hangzhou.

For information about the config field in the YAML template, including the supported input and output plug-ins, processing plug-in types, and container filtering methods, see PipelineConfig. For information about all parameters in the YAML template, see CR parameters.

```yaml
apiVersion: telemetry.alibabacloud.com/v1alpha1 # Create a CR from the ClusterAliyunPipelineConfig CRD.
kind: ClusterAliyunPipelineConfig
metadata:
  # The name of the resource. The name must be unique in your Kubernetes cluster.
  # This name is also the name of the log collection configuration. If the name is duplicated, the configuration does not take effect.
  name: example-k8s-stdout
spec:
  # Specify the project to which logs are collected.
  project:
    name: k8s-log-<YOUR_CLUSTER_ID>
  # Create a Logstore to store logs.
  logstores:
    - name: k8s-stdout
  # Define the log collection configuration.
  config:
    # Enter a sample log. You can leave this parameter empty.
    sample: |
      2024-06-19 16:35:00 INFO test log
      line-1
      line-2
      end
    # Configure the input plug-in.
    inputs:
      # Use the service_docker_stdout plug-in to collect stdout and stderr from containers.
      - Type: service_docker_stdout
        Stdout: true
        Stderr: true
        # Configure conditions to filter containers. Multiple options are evaluated by using a logical AND.
        # Specify the namespace of the pods to which the required containers belong. Regular expression matching is supported.
        K8sNamespaceRegex: "^(default)$"
        # Enable container metadata preview.
        CollectContainersFlag: true
        # Collect pods whose labels meet the specified conditions. Multiple entries are evaluated by using a logical OR.
        IncludeK8sLabel:
          app: ^(.*test.*)$
        # Configure settings for multi-line log collection. This configuration is invalid for single-line log collection.
        # Specify the regular expression that is used to match the beginning of the first line of a log.
        BeginLineRegex: \d+-\d+-\d+.*
    # Configure the output plug-in.
    flushers:
      # Use the flusher_sls plug-in to send logs to a specific Logstore.
      - Type: flusher_sls
        # Make sure that the Logstore exists.
        Logstore: k8s-stdout
        # Make sure that the endpoint is valid.
        Endpoint: cn-hangzhou.log.aliyuncs.com
        Region: cn-hangzhou
        TelemetryType: logs
```
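For comparison, a single-line stdout variant of this CR can omit the multi-line settings and filter containers by name instead of pod labels. The following is a minimal sketch; the resource name, Logstore name, and K8sContainerRegex value are placeholder assumptions, not part of the original template:

```yaml
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: example-k8s-stdout-singleline   # Hypothetical name. Must be unique in the cluster.
spec:
  project:
    name: k8s-log-<YOUR_CLUSTER_ID>     # Replace with your project name.
  logstores:
    - name: app-stdout                  # Hypothetical Logstore name.
  config:
    inputs:
      - Type: service_docker_stdout
        Stdout: true
        Stderr: true
        K8sContainerRegex: "^(app.*)$"  # Assumed container-name filter. Regular expression matching is supported.
    flushers:
      - Type: flusher_sls
        Logstore: app-stdout
        Endpoint: cn-hangzhou.log.aliyuncs.com
        Region: cn-hangzhou
        TelemetryType: logs
```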
CRD - AliyunLogConfig
You need only to create an AliyunLogConfig custom resource to create a collection configuration. After the resource is created, the collection configuration automatically takes effect.
For a collection configuration that is created by using a custom resource, you can modify the configuration only by updating the custom resource. Modifications to the collection configuration in the Simple Log Service console are not synchronized to the custom resource.
Log on to the ACK console.
On the Clusters page, find the cluster that you want to manage and click its name. In the navigation pane on the left, go to the Custom Resources page.
On the Custom Resources page, click the CRDs tab, then click Create from YAML.
Modify the parameters in the following YAML template based on your business requirements, copy and paste the template to the editor, and then click Create.
This YAML script creates a collection configuration named simple-stdout-example. The configuration collects standard output in multi-line mode from all containers whose names start with app in the default namespace. The collected logs are sent to the k8s-stdout Logstore in the k8s-log-<YOUR_CLUSTER_ID> project.

For information about the logtailConfig field in the YAML template, including the supported input and output plug-ins, processing plug-in types, and container filtering methods, see AliyunLogConfigDetail. For information about all parameters in the YAML template, see CR parameters.

```yaml
# Standard output configuration
apiVersion: log.alibabacloud.com/v1alpha1
kind: AliyunLogConfig
metadata:
  # The name of the resource. The name must be unique in your Kubernetes cluster.
  name: simple-stdout-example
spec:
  # Specify the name of the project. If you leave this parameter empty, the project named k8s-log-<your_cluster_id> is used.
  # project: k8s-log-test
  # Specify the name of the Logstore. If the Logstore that you specify does not exist, Simple Log Service automatically creates one.
  logstore: k8s-stdout
  # Configure the log collection settings.
  logtailConfig:
    # The type of the data source. To collect stdout logs, you must set the value to plugin.
    inputType: plugin
    # The name of the Logtail configuration. The name must be the same as the resource name that is specified in metadata.name.
    configName: simple-stdout-example
    inputDetail:
      plugin:
        inputs:
          - type: service_docker_stdout
            detail:
              # The settings that allow Logtail to collect both stdout and stderr logs.
              Stdout: true
              Stderr: true
              # Specify the namespace of the pods to which the required containers belong. Regular expression matching is supported.
              K8sNamespaceRegex: "^(default)$"
              # Specify the names of the required containers. Regular expression matching is supported.
              K8sContainerRegex: "^(app.*)$"
              # Configure settings for multi-line log collection.
              # Specify the regular expression that is used to match the beginning of the first line of a log.
              BeginLineRegex: \d+-\d+-\d+.*
```
Simple Log Service console
Log on to the Simple Log Service console.
In the Projects section, click the project that you specified when you installed the log collection component, for example, k8s-log-<YOUR_CLUSTER_ID>. On the project details page, click the Logtail configuration of the destination Logstore, add a collection configuration, and then click K8S-Standard Output-Old Version.
Because you installed the log collection component for the ACK cluster in the previous step, click Use Existing Machine Group.
On the Machine Group Settings page, select the k8s-group-${your_k8s_cluster_id} machine group in the ACK DaemonSet section for the Kubernetes scenario, click > to add the machine group to the Selected Server Groups section, and then click Next.
Create a Logtail configuration. Configure the required parameters and click Next. Approximately 1 minute is required to create a Logtail configuration.
This section describes only the required parameters. For information about all parameters, see Logtail configuration.
Global Settings
In the Global Settings section, enter a configuration name.
Create Index and Preview Data: Simple Log Service enables full-text indexing by default, which indexes all fields in logs for queries. You can also manually create field indexes based on the collected logs, or click Generate Index to have Simple Log Service generate them automatically. Field indexes allow term queries on specific fields, which reduces indexing fees and improves query efficiency. For more information, see Create indexes.
Environment variables
Create an application and configure Simple Log Service.
Use the ACK console
Log on to the Container Service for Kubernetes (ACK) console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose .
On the Deployments page, set Namespace in the upper part of the page, and then click Create From Image in the upper-right corner of the page.
On the Basic Information tab, set Name, click Next, and then go to the Container page.
Only parameters related to Simple Log Service are described in the following section. For more information about other application configurations, see Create a stateless application by using a Deployment.
In the Log Collection section, configure log-related settings.
Configure Collection Configuration.
Click Collection Configuration to create a collection configuration. Each collection configuration consists of Logstore and Log Path In Container (stdout Available).
Logstore: Specify the name of the Logstore that is used to store the collected log data. If the Logstore does not exist, ACK automatically creates a Logstore in the Simple Log Service project that is associated with your ACK cluster.
Note: The default log retention period of Logstores is 90 days.
Log Path in Container (stdout available): To collect the stdout of a container, set the value to stdout.
All settings are added as configuration entries to the corresponding Logstore. By default, logs are collected in simple mode (by row).
Configure Custom Tag.
Click Custom Tag to create a custom tag. Each custom tag is a key-value pair that is appended to the collected logs. You can use custom tags to mark container log data, such as version numbers.
After you complete all configurations, you can click Next in the upper-right corner to proceed to the next step.
For subsequent operations, see Create a stateless application by using a Deployment.
Use a YAML template
Log on to the Container Service for Kubernetes (ACK) console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane, choose .
On the Deployments page, set Namespace in the upper part of the page, and then click Create From YAML in the upper-right corner of the page.
Configure the YAML template.
The syntax of the YAML template is the same as the Kubernetes syntax. However, to specify a collection configuration for a container, you must use env to add collection configurations and custom tags to the container. You must also create the corresponding volumeMounts and volumes based on the collection configuration. The following code is an example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '1'
  labels:
    app: deployment-stdout
    cluster_label: CLUSTER-LABEL-A
  name: deployment-stdout
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: deployment-stdout
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: deployment-stdout
        cluster_label: CLUSTER-LABEL-A
    spec:
      containers:
        - args:
            - >-
              while true; do date '+%Y-%m-%d %H:%M:%S'; echo 1; echo 2; echo 3;
              echo 4; echo 5; echo 6; echo 7; echo 8; echo 9; sleep 10; done
          command:
            - /bin/sh
            - '-c'
            - '--'
          env:
            - name: cluster_id
              value: CLUSTER-A
            # Create a collection configuration that collects stdout to the log-stdout Logstore.
            - name: aliyun_logs_log-stdout
              value: stdout
          image: 'mirrors-ssl.aliyuncs.com/busybox:latest'
          imagePullPolicy: IfNotPresent
          name: timestamp-test
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
```
Use environment variables to create collection configurations and custom tags. All environment variables related to collection configurations use aliyun_logs_ as the prefix.

The following code shows how to create a collection configuration:

```yaml
- name: aliyun_logs_log-varlog
  value: /var/log/*.log
```

In this example, a collection configuration is created. The environment variable name follows the aliyun_logs_{key} format, where {key} is log-varlog. The variable creates a configuration, also named log-varlog, that collects the content of the /var/log/*.log files in the container to a Logstore named log-varlog.
The following code shows how to create a custom tag:

```yaml
- name: aliyun_logs_mytag1_tags
  value: tag1=v1
```

After you configure a tag, the corresponding field is automatically appended to the logs that are collected from the container. mytag1 is a name that does not contain underscores (_).
If you specify a collection path other than stdout in your collection configuration, you must create the corresponding volumeMounts in this section. In this example, a collection configuration is added to collect logs from the /var/log/*.log path. Therefore, the corresponding volumeMounts for /var/log is added.
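For reference, the following is a minimal sketch of the volumeMounts and volumes entries that could back the /var/log/*.log collection path. The volume name and the use of an emptyDir volume are assumptions, not part of the original template:

```yaml
# Sketch only. Indentation matches the Deployment template above.
# In spec.template.spec.containers[*]:
          volumeMounts:
            - name: varlog            # Hypothetical volume name.
              mountPath: /var/log     # Must cover the /var/log/*.log collection path.
# In spec.template.spec:
      volumes:
        - name: varlog
          emptyDir: {}                # Assumed volume type; any writable volume works.
```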
After you complete the YAML template, click Create to submit the configuration to the Kubernetes cluster.
Use environment variables to configure advanced settings.
You can use environment variables to configure various advanced parameters to meet your log collection requirements.
Important: You cannot use environment variables to configure log collection in edge computing scenarios.
Field | Description | Example | Note |
aliyun_logs_{key} | This variable is required. {key} can contain only lowercase letters, digits, and hyphens (-). If the aliyun_logs_{key}_logstore variable is not configured, a Logstore named {key} is created to store the collected log data. To collect the stdout of a container, set the value to stdout. You can also set the value to a log file path in the container. | - name: aliyun_logs_catalina value: stdout or - name: aliyun_logs_access-log value: /var/log/nginx/access.log | By default, logs are collected in simple mode. If you want to parse log content, we recommend that you use the Simple Log Service console and refer to Collect Kubernetes text logs by using the DaemonSet method or Collect Kubernetes stdout logs by using the DaemonSet method (old version). {key} indicates the name of the log collection configuration in Simple Log Service and must be unique in the Kubernetes cluster. |
aliyun_logs_{key}_tags | This variable is used to add tags to log data. The value must be in the {tag-key}={tag-value} format. | - name: aliyun_logs_catalina_tags value: app=catalina | N/A |
aliyun_logs_{key}_project | This variable is optional. It specifies a project in Simple Log Service. The default project is the one that you specified when you created the cluster. | - name: aliyun_logs_catalina_project value: my-k8s-project | The project must be in the same region as the log collection component. |
aliyun_logs_{key}_logstore | This variable is optional. It specifies a Logstore in Simple Log Service. Default value: {key}. | - name: aliyun_logs_catalina_logstore value: my-logstore | N/A |
aliyun_logs_{key}_shard | This variable is optional. It specifies the number of shards of the Logstore. Valid values: 1 to 10. Default value: 2. Note: If the specified Logstore already exists, this variable does not take effect. | - name: aliyun_logs_catalina_shard value: '4' | N/A |
aliyun_logs_{key}_ttl | This variable is optional. It specifies the log retention period. Valid values: 1 to 3650. To retain log data permanently, set the value to 3650. The default retention period is 90 days. Note: If the specified Logstore already exists, this variable does not take effect. | - name: aliyun_logs_catalina_ttl value: '3650' | N/A |
aliyun_logs_{key}_machinegroup | This variable is optional. It specifies the node group in which the application is deployed. The default value is the same as the default machine group that is used when the log collection component is installed. For more information about how to use this parameter, see Collect container logs from an ACK cluster. | - name: aliyun_logs_catalina_machinegroup value: my-machine-group | N/A |
aliyun_logs_{key}_logstoremode | This variable is optional. It specifies the type of the Logstore. Default value: standard. Valid values: standard (supports the all-in-one data analytics feature of Simple Log Service; suitable for real-time monitoring, interactive analysis, and building a complete observability system) and query (supports high-performance queries at about half the indexing traffic fee of the standard type, but does not support SQL analysis; suitable for storing a large amount of data for a long period, such as weeks or months, when log analysis is not required). Note: If the specified Logstore already exists, this variable does not take effect. | - name: aliyun_logs_catalina_logstoremode value: standard or - name: aliyun_logs_catalina_logstoremode value: query | To use this variable, make sure that the logtail-ds image version is 1.3.1 or later. |
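As an illustration of how these variables combine, a container's env section might look like the following sketch. The configuration name catalina and all values are illustrative assumptions, not prescribed settings:

```yaml
env:
  # Collect stdout under a collection configuration named catalina.
  - name: aliyun_logs_catalina
    value: stdout
  # Store the logs in a custom Logstore instead of the default {key}.
  - name: aliyun_logs_catalina_logstore
    value: my-logstore
  # Create the Logstore with 4 shards and a 30-day retention period.
  # Both settings take effect only if the Logstore does not already exist.
  - name: aliyun_logs_catalina_shard
    value: '4'
  - name: aliyun_logs_catalina_ttl
    value: '30'
  # Append an app=catalina tag to every collected log entry.
  - name: aliyun_logs_catalina_tags
    value: app=catalina
```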
Custom requirement 1: Collect data from multiple applications to the same Logstore
If you want to collect data from multiple applications to the same Logstore, you can set the aliyun_logs_{key}_logstore parameter. For example, the following configurations collect stdout logs from two applications to the stdout-logstore Logstore.
In this example, the {key} value for Application 1 is app1-stdout, and the {key} value for Application 2 is app2-stdout.

Environment variables for Application 1:
```yaml
# Configure environment variables
- name: aliyun_logs_app1-stdout
  value: stdout
- name: aliyun_logs_app1-stdout_logstore
  value: stdout-logstore
```
Environment variables for Application 2:
```yaml
# Configure environment variables
- name: aliyun_logs_app2-stdout
  value: stdout
- name: aliyun_logs_app2-stdout_logstore
  value: stdout-logstore
```
Custom requirement 2: Collect data from multiple applications to different projects
If you want to collect data from different applications to multiple projects, perform the following steps:
Create a machine group in each project. Select Custom ID as the identifier. Set the custom identifier to k8s-group-{cluster-id}, where {cluster-id} is the ID of your cluster. You can customize the machine group name.

Specify the project, Logstore, and machine group in the environment variables for each application. The name of the machine group is the same as the one that you created in the previous step.

In the following example, the {key} value for Application 1 is app1-stdout, and the {key} value for Application 2 is app2-stdout. If the two applications are deployed in the same Kubernetes cluster, you can use the same machine group for the applications.

Environment variables for Application 1:
```yaml
# Configure environment variables
- name: aliyun_logs_app1-stdout
  value: stdout
- name: aliyun_logs_app1-stdout_project
  value: app1-project
- name: aliyun_logs_app1-stdout_logstore
  value: app1-logstore
- name: aliyun_logs_app1-stdout_machinegroup
  value: app1-machine-group
```
Environment variables for Application 2:
```yaml
# Configure environment variables for Application 2
- name: aliyun_logs_app2-stdout
  value: stdout
- name: aliyun_logs_app2-stdout_project
  value: app2-project
- name: aliyun_logs_app2-stdout_logstore
  value: app2-logstore
- name: aliyun_logs_app2-stdout_machinegroup
  value: app1-machine-group
```
Step 3: Query and analyze logs
Log on to the Simple Log Service console.
In the Projects section, click the project that you want to manage to go to its details page.
In the left-side navigation pane, click the icon of the Logstore that you want to manage. In the drop-down list, select Search & Analysis to view the logs that are collected from your Kubernetes cluster.
Default fields of container standard output (old version)
The following table describes the fields uploaded by default for each log in a Kubernetes cluster.
Field name | Description |
_time_ | The time when the log was collected. |
_source_ | The type of the log source. Valid values: stdout and stderr. |
_image_name_ | The name of the image. |
_container_name_ | The name of the container. |
_pod_name_ | The name of the pod. |
_namespace_ | The namespace of the pod. |
_pod_uid_ | The unique identifier of the pod. |
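For illustration only, a collected stdout entry with these default fields might resemble the following sketch; all values are hypothetical:

```yaml
content: "2024-06-19 16:35:00 INFO test log"   # Assumed field name for the log line itself.
_time_: "2024-06-19T16:35:01.000000000Z"       # Collection time (hypothetical format).
_source_: "stdout"
_image_name_: "mirrors-ssl.aliyuncs.com/busybox:latest"
_container_name_: "timestamp-test"
_pod_name_: "deployment-stdout-5d4f8c7b9-abcde"
_namespace_: "default"
_pod_uid_: "1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d"
```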
References
Create a dashboard to monitor the status of systems, applications, and services.
Configure alert rules to automatically generate alerts for exceptions in logs.
Troubleshoot collection errors.