
Integrate with Alibaba Cloud Log Service

Last Updated: Mar 05, 2018

Log Service supports collecting Kubernetes cluster logs by using Logtail. This document describes how to deploy the Logtail DaemonSet.

Configuration process


Step 1 Deploy Logtail DaemonSet

Step 2 Configure Logtail machine group

In the Log Service console, create a machine group with a custom identity. Then no additional operations and maintenance (O&M) work is needed when the Kubernetes cluster is scaled out or in.

Step 3 Create collection configurations in the console

Create collection configurations in the Log Service console. All collection configurations are applied on the server side; no local configuration is needed.

Step 1 Deploy the Logtail DaemonSet

  1. Connect to your Kubernetes cluster.

    See Connect to a Kubernetes cluster by using kubectl.
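
    For example, you can confirm that kubectl reaches the target cluster before deploying (standard kubectl commands, unrelated to Log Service itself):

    kubectl cluster-info
    kubectl get nodes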

  2. Configure parameters.

    1. Download the Log Service YAML file template (the curl command in the example below retrieves it), and then open it in the vi editor.

    2. Replace all ${your xxxx} parameters in the env section with your actual values.

      Parameter                      Description
      ${your_region_name}            The region name. Replace it with the region where your Log Service project resides. For the region name, see Region name selected when installing Logtail.
      ${your_aliyun_user_id}         The user identification. Replace it with the ID of your Alibaba Cloud main account, written as a string. For how to check the ID, see User identification settings.
      ${your_machine_group_name}     The machine group identifier of your cluster. Replace it with a value made up of characters in [0-9a-zA-Z-_]. For more information, see Custom machine groups.

    Note:

    • You must enable an AccessKey for your main account. For more information, see Create an AccessKey in 5-minute quick start.
    • Do not modify the volumeMounts or volumes sections in the template. Otherwise, Logtail cannot work normally.
    • You can customize the startup parameter configurations of the Logtail containers if the following conditions are met (see the sketch after this note):
      • The following three environment variables are set when starting the Logtail containers: ALIYUN_LOGTAIL_USER_DEFINED_ID, ALIYUN_LOGTAIL_USER_ID, and ALIYUN_LOGTAIL_CONFIG.
      • The Docker domain socket is mounted to /var/run/docker.sock.
      • The host root directory is mounted to the /logtail_host directory of the Logtail containers if you want to collect logs of other containers or host files.
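
    If you build a custom Logtail container instead of using the template as-is, the conditions above roughly correspond to a spec excerpt like the following. This is a minimal sketch, not the official template: the volume names docker-sock and host-root are illustrative, env and volumeMounts belong to the Logtail container spec, and volumes belongs to the pod spec of the DaemonSet.

    env:
      - name: "ALIYUN_LOGTAIL_CONFIG"
        value: "/etc/ilogtail/conf/cn_hangzhou/ilogtail_config.json"
      - name: "ALIYUN_LOGTAIL_USER_ID"
        value: "${your_aliyun_user_id}"
      - name: "ALIYUN_LOGTAIL_USER_DEFINED_ID"
        value: "${your_machine_group_name}"
    volumeMounts:
      - name: docker-sock              # Docker domain socket
        mountPath: /var/run/docker.sock
      - name: host-root                # host root, for collecting other containers' logs or host files
        mountPath: /logtail_host
    volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: host-root
        hostPath:
          path: /
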
  3. Deploy the Logtail DaemonSet.

    Example:

    [root@iZu kubernetes]# curl http://logtail-release.oss-cn-hangzhou.aliyuncs.com/docker/k8s/logtail-daemonset.yaml > logtail-daemonset.yaml
    [root@iZu kubernetes]# vi logtail-daemonset.yaml
    ...
    env:
      - name: "ALIYUN_LOGTAIL_CONFIG"
        value: "/etc/ilogtail/conf/cn_hangzhou/ilogtail_config.json"
      - name: "ALIYUN_LOGTAIL_USER_ID"
        value: "16542189653****"
      - name: "ALIYUN_LOGTAIL_USER_DEFINED_ID"
        value: "k8s-logtail"
    ...
    [root@iZu kubernetes]# kubectl apply -f logtail-daemonset.yaml

    You can use kubectl get ds -n kube-system to check the running status of your Logtail agent.

Step 2 Configure machine group

  1. Activate Log Service and create a project.

  2. Click Create Machine Group on the Machine Groups page in the Log Service console.

  3. Select User-defined Identity from the Machine Group Identification drop-down list. Enter the ALIYUN_LOGTAIL_USER_DEFINED_ID configured in the previous step in the User-defined Identity field.


    Click Confirm to create the machine group. About one minute later, click Machine Status to the right of the machine group on the Machine Groups page to view the heartbeat status of the nodes on which the Logtail DaemonSet is deployed. For more information, see View status in Configure machine groups.
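
    To verify that the identity has taken effect inside the containers, you can check the Logtail pods directly. This is a sketch: it assumes the image writes the ALIYUN_LOGTAIL_USER_DEFINED_ID value to /etc/ilogtail/user_defined_id, and logtail-gb92k is a placeholder pod name.

    kubectl get po -n kube-system -l k8s-app=logtail
    kubectl exec logtail-gb92k -n kube-system cat /etc/ilogtail/user_defined_id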

Step 3 Create collection configurations

Create Logtail collection configurations in the console as needed. For how to create collection configurations, see the corresponding collection configuration documentation.

Other operations

Check the status of the Logtail DaemonSet in the Kubernetes cluster

You can run the command kubectl get ds -n kube-system to check the running status of Logtail.

Note: The default namespace of Logtail is kube-system.

How to adjust Logtail resource limits

By default, Logtail can occupy at most 40% of a single CPU core and 200 MB of memory. To increase the processing speed, adjust the parameters in the following two places:

  • The limits and requests values under resources in the YAML template, as illustrated in the sketch below this list.
  • The Logtail startup configuration file, whose path is set by the ALIYUN_LOGTAIL_CONFIG environment variable in the YAML template. For the modification method, see Modify startup parameters.
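
For reference, the following is a hedged sketch of the resources section of the Logtail container in the YAML template. The limits mirror the stated defaults (40% of one core, 200 MB); the requests values are purely illustrative and should be adjusted to your cluster.

  resources:
    limits:
      cpu: 400m        # 40% of a single core
      memory: 200Mi    # about 200 MB
    requests:
      cpu: 100m        # illustrative
      memory: 100Mi    # illustrative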

Forcibly update the Logtail DaemonSet

Run the following commands to forcibly update the Logtail DaemonSet after modifying the logtail-daemonset.yaml file:

  kubectl --namespace=kube-system delete ds logtail
  kubectl apply -f ./logtail-daemonset.yaml

Note: Data duplication might occur during a forced update.

Check the configuration information of the Logtail DaemonSet

kubectl describe ds logtail -n kube-system
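
To inspect the full definition that is actually deployed, including the env, resources, and mount settings, you can also dump the DaemonSet as YAML:

  kubectl get ds logtail -n kube-system -o yaml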

Check the version number, IP address, and startup time of Logtail

Example:

  [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl get po -n kube-system -l k8s-app=logtail
  NAME            READY     STATUS    RESTARTS   AGE
  logtail-gb92k   1/1       Running   0          2h
  logtail-wm7lw   1/1       Running   0          4d
  [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-gb92k -n kube-system cat /usr/local/ilogtail/app_info.json
  {
     "UUID" : "",
     "hostname" : "logtail-gb92k",
     "instance_id" : "0EBB2B0E-0A3B-11E8-B0CE-0A58AC140402_172.20.4.2_1517810940",
     "ip" : "172.20.4.2",
     "logtail_version" : "0.16.2",
     "os" : "Linux; 3.10.0-693.2.2.el7.x86_64; #1 SMP Tue Sep 12 22:26:13 UTC 2017; x86_64",
     "update_time" : "2018-02-05 06:09:01"
  }

Check Logtail running logs

Logtail running logs are stored in the /usr/local/ilogtail/ directory. The file name is ilogtail.LOG. Rotated files are compressed and stored as ilogtail.LOG.x.gz.

Example:

  [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-gb92k -n kube-system tail /usr/local/ilogtail/ilogtail.LOG
  [2018-02-05 06:09:02.168693] [INFO] [9] [build/release64/sls/ilogtail/LogtailPlugin.cpp:104] logtail plugin Resume:start
  [2018-02-05 06:09:02.168807] [INFO] [9] [build/release64/sls/ilogtail/LogtailPlugin.cpp:106] logtail plugin Resume:success
  [2018-02-05 06:09:02.168822] [INFO] [9] [build/release64/sls/ilogtail/EventDispatcher.cpp:369] start add existed check point events, size:0
  [2018-02-05 06:09:02.168827] [INFO] [9] [build/release64/sls/ilogtail/EventDispatcher.cpp:511] add existed check point events, size:0 cache size:0 event size:0 success count:0
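
To locate the rotated files (ilogtail.LOG.x.gz), you can list the log directory in the same way, assuming basic shell utilities are available in the Logtail image:

  kubectl exec logtail-gb92k -n kube-system ls /usr/local/ilogtail/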

Restart Logtail in a pod

Example:

  [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-gb92k -n kube-system /etc/init.d/ilogtaild stop
  kill process Name: ilogtail pid: 7
  kill process Name: ilogtail pid: 9
  stop success
  [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-gb92k -n kube-system /etc/init.d/ilogtaild start
  ilogtail is running