
Collect logs of an elastic container instance to Kafka

Last Updated: Jan 11, 2022

This topic describes how to collect logs of an elastic container instance to Kafka.

Prerequisites

  • A network connection is established between the elastic container instance and Kafka.

    If you deploy Kafka on Alibaba Cloud, we recommend that you deploy Kafka and the elastic container instance in the same VPC.

  • Log collection is configured for the elastic container instance.

Method 1: Configure log collection in the Log Service console

Logtail is the log collection agent provided by Alibaba Cloud Log Service. Logtail can collect both text file logs and standard output logs. To export logs to Kafka, configure the flushers plug-in in the Log Service console. The following code shows a sample flushers configuration:

"flushers": [
  {
    "detail": {
      "Brokers": [
        "The IP address of the Kafka broker"
      ],
      "Topic": "log"
    },
    "type": "flusher_kafka"
  }
]
  • If you use Logtail to collect standard output logs, you can directly modify the Logtail configuration and add the flushers plug-in to the Plug-in Config field, as shown in the following figure.

    (Figure: Log collection 1)
  • If you use Logtail to collect text file logs, expand Advanced Options, turn on the plug-in processing switch, and then add the flushers plug-in to the Plug-in Config field, as shown in the following figure.

    (Figure: Log collection 2)
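For reference, a complete Plug-in Config value for standard output collection might look like the following sketch. It mirrors the CRD example later in this topic; the broker addresses are placeholders that you must replace with your own values.

```json
{
  "inputs": [
    {
      "type": "service_docker_stdout",
      "detail": {
        "Stdout": true,
        "Stderr": true
      }
    }
  ],
  "flushers": [
    {
      "type": "flusher_kafka",
      "detail": {
        "Brokers": [
          "192.XX.XX.XX:9092"
        ],
        "Topic": "log"
      }
    }
  ]
}
```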

Method 2: Configure log collection by using the CRD for Log Service in Kubernetes

If the CustomResourceDefinition (CRD) for Log Service is installed in your Kubernetes cluster, you can use the CRD to configure the flushers plug-in and import logs to Kafka. Modify the YAML file of the CRD for Log Service based on the following examples and set the kind parameter to AliyunLogConfig. Then, run the kubectl apply command to apply the configuration.

  • Example: YAML configuration file for collecting standard output logs

    apiVersion: log.alibabacloud.com/v1alpha1
    kind: AliyunLogConfig
    metadata:
      name: test-stdout                     # The resource name, which must be unique in the cluster.
    spec:
      project: k8s-log-xxx                  # Optional. The name of the project. By default, the project that was configured when you installed the cluster is used. You can also specify an unused project.
      logstore: test-stdout                 # The name of the Logstore. If the specified Logstore does not exist, it is automatically created.
      shardCount: 2                         # Optional. The number of Logstore shards. Valid values: 1 to 10. Default value: 2.
      lifeCycle: 90                         # Optional. The retention period of log data in the Logstore. Valid values: 1 to 7300. Default value: 90. Unit: days. The value 7300 indicates that log data is permanently stored in the Logstore.
      logtailConfig:
        inputType: plugin                   # The type of the data source. Valid values: file (text file logs) and plugin (standard output logs).
        configName: test-stdout             # The name of the collection configuration. It must be the same as metadata.name.
        inputDetail:
          plugin:
            inputs:
              - type: service_docker_stdout
                detail:
                  Stdout: true
                  Stderr: true
                  IncludeEnv:
                    aliyun_logs_test-stdout: "stdout"
            flushers:
              - type: flusher_kafka         # The flusher type. Set the value to flusher_kafka.
                detail:
                  Brokers:
                    - 192.XX.XX.XX:9092     # The address of a Kafka broker.
                    - 192.XX.XX.XX:9092
                    - 192.XX.XX.XX:9092
                  Topic: log
  • Example: YAML configuration file for collecting text file logs

    apiVersion: log.alibabacloud.com/v1alpha1
    kind: AliyunLogConfig
    metadata:
      name: test-file                       # The resource name, which must be unique in the cluster.
    spec:
      project: k8s-log-xxx                  # Optional. The name of the project. By default, the project that was configured when you installed the cluster is used. You can also specify an unused project.
      logstore: test-file                   # The name of the Logstore. If the specified Logstore does not exist, it is automatically created.
      shardCount: 2                         # Optional. The number of Logstore shards. Valid values: 1 to 10. Default value: 2.
      lifeCycle: 90                         # Optional. The retention period of log data in the Logstore. Valid values: 1 to 7300. Default value: 90. Unit: days. The value 7300 indicates that log data is permanently stored in the Logstore.
      logtailConfig:
        inputType: file                     # The type of the data source. Valid values: file (text file logs) and plugin (standard output logs).
        configName: test-file               # The name of the collection configuration. It must be the same as metadata.name.
        inputDetail:
          logType: common_reg_log           # The type of the logs that you want to collect. For logs that are parsed in JSON mode, you can set the logType parameter to json_log.
          logPath: /log/                    # The path of the log folder.
          filePattern: "*.log"              # The name of the log file. Wildcards are supported. Example: log_*.log.
          dockerFile: true                  # Specifies that files are collected from containers.
          plugin:
            flushers:
              - type: flusher_kafka         # The flusher type. Set the value to flusher_kafka.
                detail:
                  Brokers:
                    - 192.XX.XX.XX:9092     # The address of a Kafka broker.
                    - 192.XX.XX.XX:9092
                    - 192.XX.XX.XX:9092
                  Topic: log
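After you edit the YAML file, apply it to the cluster with kubectl. The following commands are a sketch; the file name aliyun-log-config.yaml is a placeholder for your own file:

```shell
# Create or update the AliyunLogConfig resource (file name is a placeholder).
kubectl apply -f aliyun-log-config.yaml

# Confirm that the resource exists in the cluster.
kubectl get aliyunlogconfigs
```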

Result verification

After you complete the configuration, logs of the elastic container instance are collected to Kafka. For example, if you use Alibaba Cloud Message Queue for Apache Kafka, you can query the logs of the elastic container instance on the Message Query page, as shown in the following figure.

(Figure: kafka)
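If you run a self-managed Kafka cluster instead, you can verify the result from the command line by consuming the target topic with the console consumer that ships with Kafka. The broker address below is a placeholder:

```shell
# Read messages from the "log" topic to confirm that logs are arriving.
# Replace 192.XX.XX.XX:9092 with the address of one of your Kafka brokers.
kafka-console-consumer.sh --bootstrap-server 192.XX.XX.XX:9092 \
  --topic log --from-beginning
```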