This topic describes how to collect the logs of Jobs that run on elastic container instances in Container Service for Kubernetes (ACK) clusters.
An ACK cluster is created, and virtual nodes are deployed on the cluster. For more information, see Create a managed Kubernetes cluster and Deploy the virtual node controller and use it to create Elastic Container Instance-based pods.
The alibabacloud.com/eci=true label is added to the namespaces of the ACK cluster.
After the label is added, pods can be scheduled to run on elastic container instances. For more information, see Schedule pods to Elastic Container Instance.
An Apsara File Storage Network Attached Storage (NAS) file system is created, and a mount target is added to the file system. For more information, see Create a NAS file system and Manage mount targets.
In ACK clusters, the logs of Jobs that run on standard nodes can be collected by using DaemonSets. However, elastic container instances, which are attached to the cluster through virtual nodes, do not support DaemonSets. The pods of Jobs that run on elastic container instances exit when the Jobs are complete, and they may exit before log collection finishes.
To collect the logs of such Jobs, you can perform the following steps:
Mount NAS file systems to the Jobs and save the log output to the NAS file systems.
Mount the NAS file systems to another pod to obtain the logs of the Jobs that are stored in the NAS file systems.
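The two steps above follow a simple producer/consumer pattern over a shared file system: one workload writes its logs to the shared volume, and another reads them back. The following is a local sketch of that pattern only, with a hypothetical /tmp/eci-shared directory standing in for the NAS mount at /eci:

```shell
# Local sketch only: /tmp/eci-shared stands in for the NAS mount at /eci.
mkdir -p /tmp/eci-shared

# Step 1: the Job writes its log output to the shared volume.
sh -c 'echo "job finished: pi computed" > /tmp/eci-shared/a.log'

# Step 2: a second pod mounts the same volume and reads the log back.
sh -c 'cat /tmp/eci-shared/a.log'
```

In the cluster, the shared directory is the NFS-backed volume, so the log file survives after the Job's pod exits.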
If you are using Log Service, you can synchronize logs to Log Service by configuring environment variables and mounting volumes for the Jobs. For more information, see Configure log collection for an elastic container instance.
The following example describes how to collect the logs of a Job. In this example, the alibabacloud.com/eci=true label has been added to the namespace named vk. This way, the pods that are deployed in the namespace are scheduled to run on elastic container instances. Replace vk with the name of your own namespace when you deploy the Job.
Prepare the YAML configuration file of the Job.
The following example provides a configuration of a Job to calculate the value of π.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: resouer/ubuntu-bc
        command: ["sh", "-c", "echo 'scale=1000; 4*a(1)' | bc -l > /eci/a.log 2>&1"]  # Redirect the output to the specified file.
        volumeMounts:
        - name: log-volume
          mountPath: /eci
          readOnly: false
      restartPolicy: Never
      volumes:  # Mount a NAS file system to store the logs.
      - name: log-volume
        nfs:
          path: /eci
          server: 04edd48c7c-****.cn-hangzhou.nas.aliyuncs.com
          readOnly: false
  backoffLimit: 4
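The container command redirects both standard output and standard error into the file on the mounted volume; that is what the 2>&1 part does. The same redirection pattern can be tried in plain shell, with a hypothetical /tmp/eci-demo.log standing in for /eci/a.log:

```shell
# Demonstrate the `> file 2>&1` pattern used by the Job's command:
# both stdout and stderr end up in the same log file.
sh -c 'echo "result line"; echo "error line" >&2' > /tmp/eci-demo.log 2>&1
cat /tmp/eci-demo.log
```

Without 2>&1, error messages would go to the container's stderr and be lost when the pod exits instead of being persisted on the NAS file system.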
Deploy the Job on an elastic container instance.
kubectl apply -f job.yaml -n vk
View the status of the pod to check whether the Job runs smoothly.
kubectl get pod -n vk
Prepare the configuration file of the pod to collect the logs of the Job.
The following code provides an example of the YAML configuration file.
apiVersion: v1
kind: Pod
metadata:
  name: log-collection
spec:
  containers:
  - image: nginx:latest
    name: log-collection
    command: ['/bin/sh', '-c', 'echo $(cat /eci/a.log)']  # Show the logs of the Job.
    volumeMounts:
    - mountPath: /eci
      name: log-volume
  restartPolicy: Never
  volumes:  # Mount the NAS file system that stores the logs of the Job.
  - name: log-volume
    nfs:
      server: 04edd48c7c-****.cn-hangzhou.nas.aliyuncs.com
      path: /eci
      readOnly: false
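The pod's command uses shell command substitution, $(cat /eci/a.log), to print the stored log file to its own stdout, where it can be read with kubectl logs. The construct behaves the same locally, with a hypothetical /tmp/a-demo.log standing in for the NAS-stored log:

```shell
# Stand-in for the Job's log file stored on the NAS file system.
echo "3.141592653" > /tmp/a-demo.log

# Print the file contents via command substitution, as the pod does.
sh -c 'echo $(cat /tmp/a-demo.log)'
```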
Deploy the pod and view the logs of the Job. After the pod starts, you can run kubectl logs log-collection -n vk to view the collected logs.
kubectl apply -f log-collection.yaml -n vk