This topic describes how to install the Logtail agent in a Kubernetes cluster.
Background information
When you install the Log Service components in a Kubernetes cluster, the following operations are performed (a quick verification sketch follows this list):
- Create an aliyunlogconfigs CustomResourceDefinition (CRD).
- Create a Deployment named alibaba-log-controller.
- Install Logtail as a DaemonSet.
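After the components are installed, you can confirm that these resources exist with checks like the following. This is a minimal sketch: it assumes the components run in the kube-system namespace and that the Logtail DaemonSet is named logtail-ds, which matches the FAQ examples later in this topic.
    # Check that the aliyunlogconfigs CRD was created
    kubectl get crd | grep aliyunlogconfigs
    # Check the alibaba-log-controller Deployment (namespace assumed to be kube-system)
    kubectl get deployment alibaba-log-controller -n kube-system
    # Check the Logtail DaemonSet (the name logtail-ds is assumed from the FAQ examples below)
    kubectl get ds logtail-ds -n kube-system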
Install Logtail on an Alibaba Cloud Container Service for Kubernetes cluster
Install Logtail on a user-created Kubernetes cluster
FAQ
- How can I create one project in Log Service for multiple Kubernetes clusters?
To collect logs from multiple Kubernetes clusters in one project, replace the ${your_k8s_cluster_id} parameter in the preceding installation command with the ID of the cluster in which you first installed the Log Service components. For example, if you have three Kubernetes clusters whose IDs are abc001, abc002, and abc003, replace the ${your_k8s_cluster_id} parameter with abc001 when you install the Log Service components in all three clusters.
Note: Logs from Kubernetes clusters that reside in different regions cannot be collected in one project.
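As a hedged illustration, the sketch below shows only the parameter substitution. The script name alicloud-log-k8s-install.sh and the project and region placeholders are assumptions for this example; use the exact installation command from the preceding sections.
    # On cluster abc001, and also on clusters abc002 and abc003,
    # pass abc001 (not each cluster's own ID) as the cluster ID parameter:
    sh ./alicloud-log-k8s-install.sh abc001 ${your_project_name} ${your_region_id}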
- How can I view the logs of the Logtail container?
The Logtail log files ilogtail.LOG and logtail_plugin.LOG are stored in the /usr/local/ilogtail/ directory of the Logtail container. The stdout logs of the container are insignificant. You can ignore the following stdout log entries:
    start umount useless mount points, /shm$|/merged$|/mqueue$
    umount: /logtail_host/var/lib/docker/overlay2/3fd0043af174cb0273c3c7869500fbe2bdb95d13b1e110172ef57fe840c82155/merged: must be superuser to unmount
    umount: /logtail_host/var/lib/docker/overlay2/d5b10aa19399992755de1f85d25009528daa749c1bf8c16edff44beab6e69718/merged: must be superuser to unmount
    umount: /logtail_host/var/lib/docker/overlay2/5c3125daddacedec29df72ad0c52fac800cd56c6e880dc4e8a640b1e16c22dbe/merged: must be superuser to unmount
    ...
    xargs: umount: exited with status 255; aborting
    umount done
    start logtail
    ilogtail is running
    logtail status: ilogtail is running
- How can I view the status of Log Service components in Kubernetes clusters?
Run the helm status alibaba-log-controller command to view the status of the Log Service components in the cluster.
- What can I do if alibaba-log-controller fails to start?
Check whether the command used to install the alibaba-log-controller component is correct:
- The command is run on the master node of the Kubernetes cluster.
- The cluster ID specified in the command is the ID of your Kubernetes cluster.
If the command used to install the alibaba-log-controller component is incorrect and causes the startup failure, run the helm del --purge alibaba-log-controller command to delete the alibaba-log-controller package. Make sure that the parameters in the installation command are valid, and then run the installation command again.
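The cleanup step, as a sketch (Helm v2 syntax, matching the helm del --purge command in this answer; run it on the master node):
    # Remove the failed alibaba-log-controller release before reinstalling
    helm del --purge alibaba-log-controller
    # Then rerun the installation command with the corrected cluster ID and parameters.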
- How can I view the status of Logtail DaemonSets in the Kubernetes cluster?
Run the kubectl get ds -n kube-system command to view the status of Logtail in the cluster.
Note: The default namespace of Logtail is kube-system.
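For example, you can filter the output to show only the Logtail DaemonSet. The name logtail-ds is assumed here based on the pod names shown later in this FAQ.
    # List DaemonSets in kube-system and filter for Logtail
    kubectl get ds -n kube-system | grep logtail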
- How can I view the version number, IP address, and startup time of Logtail?
Example:
    [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl get po -n kube-system | grep logtail
    NAME               READY     STATUS    RESTARTS   AGE
    logtail-ds-gb92k   1/1       Running   0          2h
    logtail-ds-wm7lw   1/1       Running   0          4d
    [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-ds-gb92k -n kube-system cat /usr/local/ilogtail/app_info.json
    {
       "UUID" : "",
       "hostname" : "logtail-ds-gb92k",
       "instance_id" : "0EBB2B0E-0A3B-11E8-B0CE-0A58AC140402_172.20.4.2_1517810940",
       "ip" : "172.20.4.2",
       "logtail_version" : "0.16.2",
       "os" : "Linux; 3.10.0-693.2.2.el7.x86_64; #1 SMP Tue Sep 12 22:26:13 UTC 2017; x86_64",
       "update_time" : "2018-02-05 06:09:01"
    }
- How can I view the operational logs of Logtail?
The operational logs of Logtail are stored in the ilogtail.LOG file in the /usr/local/ilogtail/ directory. If the log file is rotated, it is compressed and stored as ilogtail.LOG.x.gz.
Example:
    [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-ds-gb92k -n kube-system tail /usr/local/ilogtail/ilogtail.LOG
    [2018-02-05 06:09:02.168693] [INFO] [9] [build/release64/sls/ilogtail/LogtailPlugin.cpp:104] logtail plugin Resume:start
    [2018-02-05 06:09:02.168807] [INFO] [9] [build/release64/sls/ilogtail/LogtailPlugin.cpp:106] logtail plugin Resume:success
    [2018-02-05 06:09:02.168822] [INFO] [9] [build/release64/sls/ilogtail/EventDispatcher.cpp:369] start add existed check point events, size:0
    [2018-02-05 06:09:02.168827] [INFO] [9] [build/release64/sls/ilogtail/EventDispatcher.cpp:511] add existed check point events, size:0 cache size:0 event size:0 success count:0
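If you need to inspect a rotated file, a sketch like the following can work. It assumes gzip tools are available inside the Logtail container, and ilogtail.LOG.1.gz is only an example of the ilogtail.LOG.x.gz naming pattern.
    # View the tail of a rotated, compressed Logtail log file (assumes zcat exists in the container)
    kubectl exec logtail-ds-gb92k -n kube-system -- sh -c 'zcat /usr/local/ilogtail/ilogtail.LOG.1.gz | tail -n 20'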
- How can I restart Logtail that is installed in a pod?
Example:
    [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-ds-gb92k -n kube-system /etc/init.d/ilogtaild stop
    kill process Name: ilogtail pid: 7
    kill process Name: ilogtail pid: 9
    stop success
    [root@iZbp1dsu6v77zfb40qfbiaZ ~]# kubectl exec logtail-ds-gb92k -n kube-system /etc/init.d/ilogtaild start
    ilogtail is running
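Alternatively, you can delete the Logtail pod and let the DaemonSet recreate it, which also restarts Logtail. This relies on standard Kubernetes DaemonSet behavior rather than a step from this topic, so treat it as an optional sketch.
    # Delete the pod; the Logtail DaemonSet recreates it automatically
    kubectl delete pod logtail-ds-gb92k -n kube-system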
What to do next
- DaemonSet
- For information about how to collect logs by using CRDs, see Use CRDs to collect Kubernetes container logs in the DaemonSet mode.
- For information about how to collect stdout or stderr logs from a Kubernetes cluster by using the console, see Use the console to collect Kubernetes stdout and stderr logs in the DaemonSet mode.
- For information about how to collect Kubernetes text logs by using the console, see Use Log Service to collect Kubernetes logs.
- Sidecar
- For information about how to collect logs by using CRDs, see Use CRDs to collect Kubernetes container logs in the Sidecar mode.
- For information about how to collect logs by using the console, see Use the console to collect Kubernetes container logs in the Sidecar mode.