Collect the logs of a container to Log Service

Last Updated: May 19, 2022

This topic describes how to collect the standard outputs and log files of containers in a Serverless Kubernetes (ASK) cluster to Log Service.

Prerequisites

  • An ASK cluster is created. For more information, see Create an ASK cluster.

  • Log Service is activated for the ASK cluster.

    If Log Service is not activated, you can activate Log Service as prompted when you log on to the Log Service console.

Background information

Log Service is an end-to-end data logging service. You can use Log Service to collect, consume, deliver, query, and analyze log data without additional development work. For more information, see What is Log Service?

If you use an ASK cluster, you can configure a custom resource definition (CRD) or use environment variables to collect the logs of a container to Log Service.

Method 1: Configure a CRD to collect the logs of a container to Log Service

  1. Log on to the ACK console.

  2. Install ack-sls-logtail to create the alibaba-log-controller Deployment in the cluster.

    1. In the left-side navigation pane, choose Marketplace > App Catalog.

    2. On the Alibaba Cloud Apps tab, click the ack-sls-logtail application.

    3. Configure the required parameters and select the cluster on which you want to install the ack-sls-logtail application.

      1. Select the namespace of the cluster in which you want to install the ack-sls-logtail application, and enter a release name in the Release Name field.

      2. On the Parameters tab, specify the AccessKeyId and AccessKeySecret parameters in the YAML file.

      3. Click Create.

    4. Check the installation status.

      On the Clusters page, click the name of the cluster on which you have installed the ack-sls-logtail application to go to the Cluster Information page. In the left-side navigation pane, choose Applications > Helm and check whether ack-sls-logtail-default is in the Deployed state. The default release name of the ack-sls-logtail application is ack-sls-logtail-default.

  3. Create a CRD.

    Connect to the cluster from which you want to collect logs, create a YAML configuration file named log.yaml, and then run the kubectl create -f log.yaml command to create the CRD for log configuration.

    The logs that are collected to Log Service are categorized into standard outputs and log files. Standard outputs include error outputs (stderr).

    • Example of the YAML configuration file for the CRD that is used to collect the standard outputs.

      apiVersion: log.alibabacloud.com/v1alpha1      
      kind: AliyunLogConfig                         
      metadata:
        name: test-stdout # The name of the resource, which is unique in the cluster.     
      spec:
        project: k8s-log-c326bc86**** # The name of the project. You can customize the name of the project. We recommend that you use the ID of the cluster as the name of the project.
        logstore: test-stdout # The name of the Logstore. A Logstore is automatically created if it does not exist.                  
        shardCount: 2 # Optional. The number of shards. Default value: 2. Valid values: 1 to 10.                         
        lifeCycle: 90 # Optional. The retention period of logs in the Logstore. Unit: days. Default value: 90. Valid values: 1 to 7300. The value of 7300 indicates that the logs are retained permanently. 
        logtailConfig:                      
          inputType: plugin # The type of the data source. The value of file indicates that log files are collected, and the value of plugin indicates that standard outputs are collected.                                
          configName: test-stdout # The name of the collection configuration, which is the same as the metadata.name.    
          inputDetail:
            plugin:
              inputs:
                - type: service_docker_stdout
                  detail:
                    Stdout: true
                    Stderr: true
      #              IncludeEnv:
      #                aliyun_logs_test-stdout: "stdout"
    • Example of the YAML configuration file for the CRD that is used to collect log files.

      apiVersion: log.alibabacloud.com/v1alpha1
      kind: AliyunLogConfig
      metadata:
        name: test-file # The name of the resource, which is unique in the cluster.
      spec:
        project: k8s-log-c326bc86**** # The name of the project. You can customize the name of the project. We recommend that you use the ID of the cluster as the name of the project.
        logstore: test-file # The name of the Logstore. A Logstore is automatically created if it does not exist.
        logtailConfig:
          inputType: file # The type of the data source. The value of file indicates that log files are collected, and the value of plugin indicates that standard outputs are collected. 
          configName: test-file # The name of the collection configuration, which is the same as the metadata.name.  
          inputDetail:
            logType: common_reg_log # The type of the logs to be collected. For logs in the JSON format, you can set the logType parameter to json_log.
            logPath: /log/ # The folder where logs are stored.
            filePattern: "*.log" # The name of the file. Wildcards are supported. Example: log_*.log.
            dockerFile: true # To collect files in the container, set the dockerFile parameter to true.
            # The key that is used to parse time.
            #timeKey: 'time'
            # The format of time parsing.
            #timeFormat: '%Y-%m-%dT%H:%M:%S'
            # Avoids conflicts caused by the same collection directory in different collection configurations.
            #dockerIncludeEnv:
            #  aliyun_logs_test-file: "/log/*.log"
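    The logPath and filePattern parameters in the file-collection example above work together: logPath selects the directory and filePattern selects file names within it. The following local sketch (outside Kubernetes, with /tmp/logdemo standing in for the container path /log/) shows which files the "*.log" pattern would select:

```shell
# Local sketch of filePattern matching; /tmp/logdemo stands in for /log/.
rm -rf /tmp/logdemo && mkdir -p /tmp/logdemo
touch /tmp/logdemo/busy.log /tmp/logdemo/app.log /tmp/logdemo/readme.txt

# Only files whose names end in .log match the "*.log" pattern; readme.txt is ignored.
ls /tmp/logdemo/*.log
```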

    Run the following command to create a CRD for log configuration:

    kubectl create -f log.yaml
    Note

    After the CRD for log configuration is created, you can view the generated Logstore and logtail configurations in the Log Service console. You can modify the CRD to update the log configurations. The system automatically synchronizes the updated configurations to Log Service.

  4. Deploy the application.

    After you create the CRD for log configuration, the logs of service pods that are deployed afterward are collected to Log Service.

    The following code provides an example of a YAML configuration file that is used to deploy a pod whose logs are collected. The container runs a while loop that continuously writes to both standard output and a log file.

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        app: sls
      name: eci-sls-demo
      namespace: default
    spec:
      containers:
      - args:
        - -c
        - mkdir -p /log;while true; do echo hello world; date; echo hello sls >> /log/busy.log; sleep 1;
          done
        command:
        - /bin/sh
        image: busybox:latest
        imagePullPolicy: Always
        name: sls
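    To see what the pod's command produces, you can run the same loop locally for a few iterations (a bounded for loop replaces while true, and /tmp/sls-demo stands in for the container path /log):

```shell
# Bounded local version of the pod's command; /tmp/sls-demo stands in for /log.
rm -rf /tmp/sls-demo && mkdir -p /tmp/sls-demo
for i in 1 2 3; do
  echo hello world                            # standard output (collected by test-stdout)
  date                                        # standard output (collected by test-stdout)
  echo hello sls >> /tmp/sls-demo/busy.log    # log file (collected by test-file)
done
cat /tmp/sls-demo/busy.log
```

    The lines printed by echo and date are what the stdout configuration collects, and the lines appended to busy.log are what the file configuration collects.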
  5. View logs in the Log Service console.

    In the project of your cluster, find the Logstore where the logs are stored, and then click the name of the Logstore to view the logs. For more information, see View the configuration result.

Method 2: Configure environment variables to collect the logs of a container to Log Service

  1. Log on to the ACK console.

  2. In the left-side navigation pane, click Clusters.

  3. On the Clusters page, click the name of the cluster for which you want to configure log configurations.

  4. In the left-side navigation pane of the cluster details page, choose Workloads > Deployments.

  5. Create or modify the YAML configuration file that is used to collect the logs of a pod. Configure environment variables to specify log-related configurations.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: alpine
      name: alpine
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: alpine
      template:
        metadata:
          labels:
            app: alpine
        spec:
          containers:
          - image: alpine
            imagePullPolicy: Always
            args:
            - ping
            - 127.0.0.1
            name: alpine
            env:
            # Configure environment variables.
            # Configure a project or use the default project of the Kubernetes cluster.
            - name: aliyun_logs_test-stdout_project
              value: k8s-log-xxx
            - name: aliyun_logs_test-file_project
              value: k8s-log-xxx
            # Configure a machine group or use the default machine group of the Kubernetes cluster.
            - name: aliyun_logs_test-stdout_machinegroup
              value: k8s-group-app-alpine
            - name: aliyun_logs_test-file_machinegroup
              value: k8s-group-app-alpine
            # Configure the Logstore and path for standard outputs and error outputs.
            - name: aliyun_logs_test-stdout
              value: stdout
            # Collect log files that match /log/*.log to a Logstore named test-file.
            - name: aliyun_logs_test-file
              value: /log/*.log
            # Set the retention period of logs for the specified Logstore. Unit: days.
            - name: aliyun_logs_test-stdout_ttl
              value: "7"
            # Set the number of shards for the specified Logstore.
            - name: aliyun_logs_test-stdout_shard
              value: "2"

    In the preceding example, all environment variables that are related to log configurations are prefixed with aliyun_logs_. The environment variable aliyun_logs_test-stdout indicates that a Logstore named test-stdout is created. The standard outputs of the container are collected to test-stdout. The collection path is stdout.
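    The naming convention can be sketched in shell (an illustration only, not part of Log Service itself): the fixed aliyun_logs_ prefix is followed by the Logstore name, and an optional final suffix such as _project, _ttl, or _shard selects a per-Logstore option.

```shell
# Decompose an aliyun_logs_ environment variable name (illustration only;
# assumes the Logstore name itself contains no underscore).
var="aliyun_logs_test-stdout_ttl"

rest="${var#aliyun_logs_}"   # strip the fixed prefix  -> test-stdout_ttl
logstore="${rest%_*}"        # drop the option suffix  -> test-stdout
option="${rest##*_}"         # the option itself       -> ttl

echo "Logstore: $logstore, option: $option"
```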

  6. Click Create.

  7. View logs in the Log Service console.

    In the project of your cluster, find the Logstore where the logs are stored, and then click the name of the Logstore to view the logs. For more information, see View the configuration result.

View the configuration result

  1. Log on to the Log Service console.

  2. Click the name of the project where you want to view logs.

  3. Find the Logstore where the logs of your cluster are stored. Click the name of the Logstore to view the logs.

    • Standard outputs

    • Log files
