Alibaba Cloud Log Service allows you to collect logs of containers in elastic container instances by using sidecar containers. This topic describes how to deploy a sidecar container and configure Logtail to collect logs of containers.
Prerequisites
An ASK cluster is created. For more information, see Create an ASK cluster.
Log Service is activated for the ASK cluster.
If Log Service is not activated, you can activate Log Service as prompted when you log on to the Log Service console.
Background information
Alibaba Cloud Log Service allows you to collect logs of containers in elastic container instances by using sidecar containers. You can create a sidecar container and application container in an elastic container instance. The sidecar container is used to run a logging agent and collect the logs of the application container.
To use a sidecar container to collect container logs, you must enable Logtail. Logtail and the application container must share the same directory that is used to store log files. This way, Logtail can monitor the changes to log files in the shared directory and collect logs after the application container writes log data to the shared directory.
You can use one of the following methods to collect logs:
Standard outputs
To collect standard outputs, you must use the stdlog volume of the elastic container instance. When you create a pod, you can mount the stdlog volume to the sidecar container. The sidecar container can then access, as files, the standard outputs that are collected by the basic components of the elastic container instance.
Text files
To collect text files, you must use a shared volume in a pod. A volume can be mounted to multiple containers in a pod. The application container writes its logs to text files in the shared volume, and the sidecar container reads the files from the same volume.
Step 1: Deploy a sidecar container
Create a Deployment that contains a sidecar container.
The following YAML template provides an example of the Deployment. Replace the placeholder variables, such as ${RegionId} and ${Aliuid}, with actual values based on your requirements.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-log-sidecar-demo
  name: nginx-log-sidecar-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-log-sidecar-demo
  template:
    metadata:
      labels:
        app: nginx-log-sidecar-demo
    spec:
      containers:
      - name: nginx-log-demo
        image: registry-vpc.${RegionId}.aliyuncs.com/log-service/docker-log-test:latest
        command:
        - /bin/mock_log
        args:
        - '--log-type=nginx'
        - '--stdout=false'
        - '--stderr=true'
        - '--path=/var/log/nginx/access.log'
        - '--total-count=100000000'
        - '--logs-per-sec=100'
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /var/log/nginx
          name: nginx-log
      - name: logtail
        image: registry-vpc.${RegionId}.aliyuncs.com/log-service/logtail:latest
        env:
        - name: ALIYUN_LOGTAIL_USER_ID
          value: "${Aliuid}"
        - name: ALIYUN_LOGTAIL_USER_DEFINED_ID
          value: nginx-log-sidecar
        - name: ALIYUN_LOGTAIL_CONFIG
          value: /etc/ilogtail/conf/${RegionId}/ilogtail_config.json
        - name: aliyun_logs_machinegroup
          value: k8s-group-app-alpine
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /var/log/nginx
          name: nginx-log
        - mountPath: /stdlog
          name: stdlog
      volumes:
      - emptyDir: {}   # Collect text files to the emptyDir volume.
        name: nginx-log
      - name: stdlog   # Collect standard outputs to the stdlog volume.
        flexVolume:
          driver: alicloud/pod-stdlog
Run the following command to query information about the pods:
kubectl get pods -l app=nginx-log-sidecar-demo
Expected output:
NAME                                      READY   STATUS    RESTARTS   AGE
nginx-log-sidecar-demo-84587d9796-krn5z   2/2     Running   0          32m
nginx-log-sidecar-demo-84587d9796-vhnld   2/2     Running   0          32m
View logs by using one of the following methods:
- View logs by running the kubectl command.
- View logs in the Elastic Container Instance console.
Step 2: Configure Logtail to collect logs
After you deploy the sidecar container, you need to configure Logtail in the Log Service console to collect logs.
Log on to the Log Service console.
In the Import Data section, click RegEx - Text Log.
On the Specify Logstore wizard page, set the parameters and click Next.
Select a project and Logstore. If no project or Logstore is available, click Create Now to create one.
Note: By default, the system creates a project named k8s-log-{The ID of the Kubernetes cluster} for each Kubernetes cluster.
(Optional) Create a machine group.
If a machine group is available, click Use Existing Machine Groups to skip this step.
On the Create Machine Group wizard page, follow the instructions to confirm that the machine group is created, and then click Complete Installation.
Configure the machine group parameters and click Next.
Select Custom ID for the Identifier parameter. In the Custom ID section, enter the value of ALIYUN_LOGTAIL_USER_DEFINED_ID that you set in Step 1. In this example, the value is nginx-log-sidecar.
Configure the machine group.
Select and move the machine group that you want to use from Source Server Groups to Applied Server Groups and then click Next.
Configure Logtail.
Logtail can collect text files in the following modes: Simple Mode, Full Regex Mode, Delimiter Mode, and JSON Mode. For more information, see Overview.
Note: Turn off Docker File.
Example: collect standard outputs
If you want to collect standard outputs, the log path must be the same as the mount path of the stdlog volume. In this example, the log path is /stdlog.
Example: collect text files
If you want to collect text files, the log path must be the same as the mount path of the shared volume. In this example, the log path is /var/log/nginx.
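To see what a collection mode such as Full Regex Mode does with a log line, the sketch below applies an nginx-style access-log regex in Python. The sample line and the regex are illustrative assumptions for this topic, not the exact pattern that the Log Service console generates for you.

```python
import re

# An illustrative nginx access-log line, similar to what the mock_log
# container writes to /var/log/nginx/access.log.
line = '192.168.1.2 - - [10/Aug/2022:14:57:51 +0800] "GET /index.html HTTP/1.1" 200 612'

# A regex in the spirit of Full Regex Mode: named groups become log fields.
pattern = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<body_bytes_sent>\d+)'
)

fields = pattern.match(line).groupdict()
print(fields["remote_addr"])  # 192.168.1.2
print(fields["status"])       # 200
```

Simple Mode would instead store the whole line as a single field, and Delimiter or JSON Mode would split on a separator or parse JSON keys; the regex approach gives you named fields to query on.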
Configure log query and analysis.
By default, indexes are configured. You can modify the indexes based on your business requirements. For more information, see Configure indexes.
View the logs that are collected from the elastic container instance.
After you complete the preceding configurations, Log Service starts to collect logs of containers in the elastic container instance. The following figure shows an example of how standard outputs are collected to the Logstore of Log Service.
Note: The standard outputs in the stdlog volume of a pod are collected by the basic components of the elastic container instance, and the logs are in the same format as Kubernetes logs. Kubernetes adds a prefix, such as a timestamp, to each entry of standard output, so you must configure the log parser to remove the prefix. For more information, see Parse JSON logs.
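As a sketch of the prefix removal that the note describes, the snippet below parses one stdlog entry and keeps only the log message. The sample entry assumes Docker's json-file wrapper (log, stream, and time fields); check the actual format of the files in your /stdlog volume before relying on this.

```python
import json

# An illustrative stdlog entry: the container runtime wraps each line of
# standard output with a stream name and a timestamp prefix.
entry = ('{"log":"10.0.0.1 - - [10/Aug/2022:14:57:51 +0800] '
         '\\"GET / HTTP/1.1\\" 200 612\\n",'
         '"stream":"stderr","time":"2022-08-10T06:57:51.000000000Z"}')

record = json.loads(entry)
# Keep only the original log line; the JSON wrapper is the prefix to remove.
message = record["log"].rstrip("\n")
print(message)
```

In the console, the same result is achieved by configuring the collection as JSON logs (see Parse JSON logs) so that only the log field reaches your Logstore.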