Kubernetes File Collection Practices: Sidecar + hostPath Volumes

This article introduces the configuration of hostPath volumes with Sidecar collection to ensure data security in extreme cases (node downtime, pod crash, etc.).

By Baoze

The advantage of the DaemonSet mode is that it minimizes the resources occupied by the collection agent and supports stdout collection. However, a DaemonSet pod is responsible for collection at the node level, so different pods on the same node may affect each other. For example, a pod that produces a large amount of data may consume more of Logtail's processing time slices and delay data collection for other pods.

Therefore, for business files that carry important data, we recommend deploying Logtail in Sidecar mode: add another container to the business pod that runs Logtail and focuses solely on collecting that pod's files.

Given the high data-security requirements that usually accompany Sidecar collection, this article describes how to combine the Sidecar mode with hostPath volumes to keep data safe in extreme cases (node downtime, pod crash, etc.). A sample application (Dockerfile) and its deployment configuration (YAML) are provided so that readers can understand, reproduce, and build on the setup.

Design Introduction

Pod Crash

First, consider the pod crash scenario. By default, files generated by business containers in a pod are temporary: when the pod crashes, these files are lost. A way to avoid this loss is to write them to persistent volumes, decoupling the data from the pod lifecycle.

This article uses a hostPath volume to mount the pod's data directory onto the host (node) where the pod runs. Since multiple business pods may run on one node, some space partitioning is needed: define a data directory (e.g., /data) on the host, and have each pod create a subdirectory under it at runtime as its own data space (such as /data/<pod_name>). The pod can then divide this space as needed, for example creating a logs directory for log files and a data directory for data generated by the business logic, as sketched below.
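
For instance, if two pods (hypothetical names below) are scheduled onto the same node, the host data directory would look roughly like this:

/data/
├── app-sidecar-logtail-aaaaa/    # data space of the first pod, named after the pod
│   ├── logs/                     # log files to be collected by the Sidecar
│   └── data/                     # other data generated by the business logic
└── app-sidecar-logtail-bbbbb/    # data space of the second pod
    ├── logs/
    └── data/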

Node Downtime

With the hostPath + host data directory approach in place, handling node downtime is straightforward: decouple the data directory from the node itself, for example by backing it with a cloud disk. Then, even if the node goes down, you can bring up a new node, mount the cloud disk on it, and recover the data. Please see Disk volume overview for more information.
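
As a minimal sketch of this variant, the node-level /data directory can simply be the mount point of a cloud disk, so the pod configuration below stays unchanged. The device name /dev/vdb is an assumption that depends on how the disk is attached to the node:

#!/bin/bash
# Hypothetical node-side setup: back /data with a cloud disk attached to the node.
mkfs.ext4 /dev/vdb                                 # format the newly attached disk (first use only)
mkdir -p /data
mount /dev/vdb /data                               # after node replacement, re-attach the disk and mount again
echo "/dev/vdb /data ext4 defaults 0 0" >> /etc/fstab   # remount automatically on reboot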

Example

A sample application and its deployment configuration (source code) are presented below to illustrate the preceding design further.

Application and Its Deployment Configuration

The application logic is simple. First, create the corresponding pod data directory and then generate logs:

#!/bin/bash
echo "Data directory: ${POD_DATA_DIR}"

# Create the log directory specified by the environment variable (the data directory is created along the way).
# If the directory already exists, there is a naming conflict: append the current Unix timestamp to make the path unique.
if [ -d "${POD_DATA_DIR}" ]; then
    echo "Data directory ${POD_DATA_DIR} already exists, appending timestamp"
    POD_DATA_DIR="${POD_DATA_DIR}-$(date +%s)"
    echo "New data directory: ${POD_DATA_DIR}"
fi
POD_LOG_DIR="${POD_DATA_DIR}/logs"
mkdir -p ${POD_LOG_DIR}

# Create a symbolic link to unify the log paths in the logtail collection configuration. 
ln -s ${POD_LOG_DIR} /share/logs

# Generate logs. 
LOG_FILE_PATH=${POD_LOG_DIR}/app.log
for((i=0;i<10000000000000;i++)); do
    echo "Log ${i} to file" >> ${LOG_FILE_PATH}
    sleep 1
done

The sample code run.sh includes the following steps:

  1. Create the corresponding data directory and log directory based on the environment variable POD_DATA_DIR (specified in the deployment configuration).
  2. Create a symbolic link to unify the log paths in the logtail collection configuration; this is explained together with the Sidecar configuration in the next section. This step is optional: an agent that manages its collection configuration locally (such as Filebeat) can collect from the POD_LOG_DIR directory directly (see the sketch after this list).
  3. Generate logs. The application can run whatever binaries it needs, as long as the relevant data and logs are written under the pod data directory.
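
For reference, here is a minimal sketch of the locally managed alternative mentioned in step 2. Filebeat is not part of this example; the fragment below is an assumption based on the paths created by run.sh, with a placeholder output:

# Hypothetical filebeat.yml fragment for a Filebeat sidecar sharing the same mounts.
filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /share/logs/*.log      # unified symlinked path; the resolved POD_LOG_DIR would also work
output.console:
  pretty: true             # placeholder output; replace with your real backend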

Using the following Dockerfile, we package run.sh into the application image registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:sidecar-app:

FROM registry.cn-hangzhou.aliyuncs.com/log-service/centos:centos7.7.1908
ADD run.sh /run.sh
RUN chmod +x /run.sh
ENTRYPOINT ["/run.sh"]
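
Building and pushing the image follows the usual Docker workflow; a sketch, assuming you replace the repository above with one you can push to:

$docker build -t registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:sidecar-app .
$docker push registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:sidecar-app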

Finally, let's look at the deployment configuration (YAML) of the application:

apiVersion: v1
kind: Pod
metadata:
  # The suffix is not fixed. The name is randomly generated. 
  generateName: app-sidecar-logtail-
  namespace: default
spec:
  volumes:
  # Define the shared directory of the application container and the Logtail Sidecar container. 
  - emptyDir: {}
    name: share
  # Define the data directory on the host. The application container will create a subdirectory under the directory as its data directory. 
  - hostPath:
      path: /data
      type: DirectoryOrCreate
    name: parent-data-dir-on-host
  containers:
  # The application containers output logs in file format. 
  - name: app
    # The execution logic of the application:
    # 1. Create the corresponding subdirectory under the host data directory as its data directory.
    # 2. Create the corresponding symbolic link for the data directory and share it with the Sidecar container through the shared directory.
    # 3. Execute the application logic (to continuously generate mock data).
    # Refer to the directory app for the Dockerfile and startup script of this image. 
    image: registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:sidecar-app
    imagePullPolicy: Always
    volumeMounts:
    # Mount the shared directory to share data with the Sidecar container. 
    - mountPath: /share
      name: share
    # Mount the host data directory to create the corresponding subdirectory. 
    - mountPath: /data
      name: parent-data-dir-on-host
    env:
    # Obtain the PodName to create the corresponding data directory for the pod on the host. 
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_DATA_DIR
      value: /data/$(POD_NAME)

Pay attention to the following two parts of the configuration:

  1. volumes/volumeMounts: The host directory /data serves as the parent directory of all pod data directories and is mounted into the pod at the same path.
  2. env POD_NAME, POD_DATA_DIR: The pod name is used as the name of the pod data directory, and the corresponding subdirectory is created under the parent directory at startup (a quick check is shown after this list).
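
To confirm that the dependent environment variable expansion behaves as expected, you can print the variable inside a running app container. The pod name below is a placeholder, and the second line is the expected output:

$kubectl exec app-sidecar-logtail-xxxxx -c app -- printenv POD_DATA_DIR
/data/app-sidecar-logtail-xxxxx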

The Complete Deployment Configuration (Including the Sidecar Configuration)

Next, add the Sidecar configuration to the preceding application deployment configuration. Here, we take Logtail as an example (the configuration can be adapted for other file collection agents as needed). The complete configuration is listed below:

apiVersion: v1
kind: Pod
metadata:
  # The suffix is not fixed. The name is randomly generated. 
  generateName: app-sidecar-logtail-
  namespace: default
spec:
  volumes:
  # Define the shared directory of the application container and the Logtail Sidecar container. 
  - emptyDir: {}
    name: share
  # Define the data directory on the host. The application container will create a subdirectory under the directory as its data directory. 
  - hostPath:
      path: /data
      type: DirectoryOrCreate
    name: parent-data-dir-on-host
  containers:
  # The application containers output logs in file format. 
  - name: app
    # The execution logic of the application:
    # 1. Create the corresponding subdirectory under the host data directory as its data directory.
    # 2. Create the corresponding symbolic link for the data directory and share it with the Sidecar container through the shared directory.
    # 3. Execute the application logic (to continuously generate mock data).
    # Refer to the directory app for the Dockerfile and startup script of this image. 
    image: registry.cn-hangzhou.aliyuncs.com/log-service/docker-log-test:sidecar-app
    imagePullPolicy: Always
    volumeMounts:
    # Mount the shared directory to share data with the Sidecar container. 
    - mountPath: /share
      name: share
    # Mount the host data directory to create the corresponding subdirectory. 
    - mountPath: /data
      name: parent-data-dir-on-host
    env:
    # Obtain the PodName to create the corresponding data directory for the pod on the host. 
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_DATA_DIR
      value: /data/$(POD_NAME)
  # The Logtail Sidecar container shares the log directory with the application container to collect logs. 
  - name: logtail
    image: registry-vpc.cn-hangzhou.aliyuncs.com/log-service/logtail:v1.0.25.0-eca7ef7-aliyun
    volumeMounts:
    # Mount the shared directory in a read-only manner to obtain log data. 
    - mountPath: /share
      name: share
      readOnly: true
    - mountPath: /data
      name: parent-data-dir-on-host
      readOnly: true
    env:
    # Attach pod-related properties to each log for traceability. 
    # Modify the values of the ALIYUN_LOG_ENV_TAGS to add or delete fields as needed. Separate fields with |. 
    # For more information about how to get pod properties, see: https://kubernetes.io/zh/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
    - name: ALIYUN_LOG_ENV_TAGS
      value: _node_name_|_node_ip_|_pod_name_|_pod_namespace_
    - name: _node_name_
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: _node_ip_
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP
    - name: _pod_name_
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: _pod_namespace_
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    # Set the configuration file used by Logtail to access the specified region of SLS. 
    # Rule: /etc/ilogtail/conf/<region>-<network_type>/ilogtail_config.json
    # - <region> indicates a region, such as cn-hangzhou, cn-shanghai.
    # - <network_type> indicates the network type used, including intranet, internet, and acceleration.
    # Example:
    # - Access Hangzhou public cloud by internet: /etc/ilogtail/conf/cn-hangzhou-internet/ilogtail_config.json
    # - Access Shanghai public cloud by acceleration: /etc/ilogtail/conf/cn-shanghai-acceleration/ilogtail_config.json
    - name: ALIYUN_LOGTAIL_CONFIG
      value: '/etc/ilogtail/conf/cn-hangzhou-internet/ilogtail_config.json'
    # Set the user-defined identifiers of the Logtail instance to associate it with the machine group and obtain the collection configuration. You can set multiple identifiers separated by commas (,). 
    - name: ALIYUN_LOGTAIL_USER_DEFINED_ID
      value: sidecar-logtail-1,sidecar-logtail-2
    # Set the ALIUID to access the corresponding SLS project. You can set multiple ALIUIDs and use the comma (,) to separate them. 
    - name: ALIYUN_LOGTAIL_USER_ID
      value: "123456789"
    # For more startup parameters, please refer to https://help.aliyun.com/document_detail/32278.html
    - name: cpu_usage_limit
      value: "2.0"
    - name: mem_usage_limit
      value: "1024"

Compared with the application-only configuration, the complete deployment configuration mainly adds the container definition that starts with - name: logtail. Similar to the application container, it has two parts worth noting:

  1. volumeMounts: The Sidecar container mounts the same two volumes as the application container. Since it only needs to read files for collection, both mounts are set to readOnly (a quick check is shown after this list).
  2. env: Logtail is configured through environment variables, including the SLS endpoint, the pod properties attached to each log, the user-defined identifiers, and the ALIUIDs. Adjust these parameters to match your environment.
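
Once a pod from the next section is running, you can verify that the Sidecar's mounts are indeed read-only (the pod name is a placeholder):

$kubectl exec app-sidecar-logtail-xxxxx -c logtail -- touch /share/probe
touch: cannot touch '/share/probe': Read-only file system
command terminated with exit code 1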

Create an Application Pod

Save the complete configuration above as sidecar.yaml and run kubectl create -f sidecar.yaml to create application pods:

$for((i=0;i<4;i++)); do kubectl create -f sidecar.yaml; done
pod/app-sidecar-logtail-c8gsg created
pod/app-sidecar-logtail-k74lp created
pod/app-sidecar-logtail-5fqrl created
pod/app-sidecar-logtail-764vm created
$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP          NODE                       NOMINATED NODE   READINESS GATES
app-sidecar-logtail-5fqrl   2/2     Running   0          16s   10.7.0.37   cn-hangzhou.172.16.0.171   <none>           <none>
app-sidecar-logtail-764vm   2/2     Running   0          15s   10.7.0.70   cn-hangzhou.172.16.0.172   <none>           <none>
app-sidecar-logtail-c8gsg   2/2     Running   0          16s   10.7.0.36   cn-hangzhou.172.16.0.171   <none>           <none>
app-sidecar-logtail-k74lp   2/2     Running   0          16s   10.7.0.68   cn-hangzhou.172.16.0.172   <none>           <none>

Two pods run on each node. Choose one pod and enter its app container to view the data directory:

$kubectl exec -it app-sidecar-logtail-5fqrl -c app bash
[root@app-sidecar-logtail-5fqrl /]# ls -al /data/
total 16
drwxr-xr-x 4 root root 4096 Nov  8 02:40 .
drwxr-xr-x 1 root root 4096 Nov  8 02:40 ..
drwxr-xr-x 3 root root 4096 Nov  8 02:40 app-sidecar-logtail-5fqrl
drwxr-xr-x 3 root root 4096 Nov  8 02:40 app-sidecar-logtail-c8gsg
[root@app-sidecar-logtail-5fqrl /]# tail /share/logs/app.log
Log 120 to file
Log 121 to file
Log 122 to file
Log 123 to file
Log 124 to file
Log 125 to file
Log 126 to file
Log 127 to file
Log 128 to file
Log 129 to file

  • Under /data/, each pod has created a data directory named after itself.
  • /share/logs is the symbolic link to the pod's log directory created inside each pod; the log files can be accessed through it (see the check below).
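
Continuing in the same container, the link target can be checked; given run.sh above, it resolves to the pod's own log directory:

[root@app-sidecar-logtail-5fqrl /]# readlink /share/logs
/data/app-sidecar-logtail-5fqrl/logs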

Collect to SLS (Console Operation)

Here is a brief description of the configuration process:

Create a Single-Line Text Collection Configuration

(Screenshot: creating a single-line text collection configuration in the SLS console)

We use /share/logs/app.log as the collection target.

Here is the explanation of the share mount that was deferred earlier. In the preceding steps, the name of each pod's data directory is dynamic, but all the pods are created from the same sidecar.yaml and logically belong to the same application. Generally, we prefer one SLS collection configuration per application, no matter how many pods it has. Therefore, we added the share mount and the /share/logs symbolic link to hide the dynamic per-pod directories behind a fixed path.

Create the Machine Group and Apply the Collection Configuration

(Screenshot: creating the machine group and applying the collection configuration in the SLS console)

Note: Create the machine group with user-defined identifiers (matching ALIYUN_LOGTAIL_USER_DEFINED_ID in the configuration) so that Sidecar Logtail instances generated by different applications can be distinguished logically and receive different collection configurations without conflicts.

View Logs on the Console

After you complete the configuration in the SLS console and the collection configuration is delivered to the Logtail instances, you can query the corresponding logs in the console (an index must be enabled).

(Screenshot: querying the collected logs in the SLS console)

With the pod fields we attached (pod_name, etc.), you can easily tell which pod each log entry came from.

More Information

With the aforementioned design, we ensure that business data will not be lost in extreme cases. This design requires some additional work, including:

  • Periodic Archiving of Host Data Directories: Since the lifecycle of pods and data is decoupled, the corresponding data directories remain on the host after pods are destroyed, so inspection scripts should archive them periodically. For example, scan the host data directory at intervals; if no file in a pod directory has been updated for a certain period, treat the directory as expired and archive it (for example, transfer it to OSS). A sketch of such a script is given after this list.
  • Monitor Abnormal Pod Crashes: Generally, you can use Kubernetes events to set alerts. We recommend the Kubernetes event center provided by SLS.
  • Monitor the Status of Logtail: This is mainly to detect sudden increases in application data volume and network instability in a timely manner. Please see Use built-in alert monitoring rules for Logtail for more information.
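
As a starting point for the first item, the inspection could be a script run periodically (e.g., via cron) on each node. This is only a minimal sketch: the 7-day retention period, the OSS bucket, and the use of ossutil are assumptions to be adapted to your environment.

#!/bin/bash
# Hypothetical archiving sketch: archive pod data directories under /data whose
# files have not been updated for 7 days, then remove them from the node.
DATA_DIR=/data
OSS_DEST=oss://my-archive-bucket/pod-data    # assumed OSS bucket and prefix

for dir in "${DATA_DIR}"/*/; do
    # Consider a directory expired if it contains no file modified within the last 7 days.
    if [ -z "$(find "${dir}" -type f -mtime -7 -print -quit)" ]; then
        echo "Archiving expired data directory: ${dir}"
        ossutil cp -r "${dir}" "${OSS_DEST}/$(basename "${dir}")/" && rm -rf "${dir}"
    fi
done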