
Container Service for Kubernetes: Use hostPath volumes

Last Updated: Oct 11, 2025

hostPath volumes mount a file or directory from a host node's file system directly into a pod. This allows a pod to read from and write to the node's file system, which is useful for scenarios such as accessing node-level logs, reading specific configuration files, and quickly sharing data in a development environment.

How it works

Process overview

After a pod is scheduled to a node, the kubelet on that node performs the hostPath mount operation before starting the containers. The kubelet validates and prepares the path specified in the hostPath field based on the defined type:

  • DirectoryOrCreate: Checks if the path exists on the host. If it does not, an empty directory is automatically created with permissions set to 0755. The directory has the same owner and group as the kubelet.

  • Directory: Checks if the path exists on the host and is a directory. If not, the pod will fail to start.

  • FileOrCreate: Checks if the host path exists. If not, an empty file is automatically created with permissions set to 0644. The file has the same owner and group as the kubelet.

  • File: Checks if the path exists on the host and is a file. If not, the pod will fail to start.

Once validated, the kubelet bind-mounts the host path to the container. All subsequent read and write operations from the container to the mount point will directly affect the host's file system.
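
If a pod that uses the Directory or File type fails to start, the reason for the failed type check is recorded in the pod's events. A quick way to check it (using the test-pod name from the example below):

  # Show the pod's details; the Events section at the end of the output
  # reports the failed hostPath mount and the type check that rejected it.
  kubectl describe pod test-pod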

Usage methods

  • Directly mount a hostPath volume in a pod: Defining the hostPath directly in the pod's volumes section is the simplest approach. However, it tightly couples the application to the node's storage, which makes it less suitable for production applications that require long-term maintenance or future storage changes.

  • Mount a hostPath volume using a PV and a PVC: This method decouples storage from the application. The hostPath is defined in a separate PersistentVolume (PV), which a pod then claims through a PersistentVolumeClaim (PVC). This lets you manage the underlying storage independently without modifying the application's pod definition.

Method 1: Directly mount hostPath in a pod

  1. Create a file named pod-hostpath-direct.yaml.

    This example mounts the node's /data directory to the /test directory in the pod.
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        name: test-container
        volumeMounts:
        - mountPath: /test
          name: test-volume
      volumes:
      - name: test-volume
        hostPath:
          # Specify the path on the host node
          path: /data
          # Specify the mount mode
          type: DirectoryOrCreate
  2. Deploy the pod.

    kubectl apply -f pod-hostpath-direct.yaml
  3. Verify the mount by creating a file inside the pod and then checking whether the file exists on the host node.

    1. Create a file in the pod.

      Create a file named test.txt in the /test directory (the mount point) of the pod.

      kubectl exec test-pod -- sh -c 'echo "This file was created from within the Pod." > /test/test.txt'
    2. Get the name of the node where the pod is running.

      NODE_NAME=$(kubectl get pod test-pod -o jsonpath='{.spec.nodeName}')
      echo "Pod is running on node: $NODE_NAME"
    3. Verify the file on the node.

      Log on to the node and run the ls /data command to check for the file in the host's /data directory.

      If the test.txt file exists, the hostPath volume was mounted successfully.
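
If you cannot log on to the node directly, an alternative is to start a temporary debug container on the node with kubectl debug (available in recent Kubernetes versions); the node's root file system is mounted at /host inside that container. The busybox image and the node-debugger pod naming below are the kubectl defaults and are only an example.

  # Open an interactive shell in a debug container on the node.
  kubectl debug node/$NODE_NAME -it --image=busybox
  # Inside the debug shell, the node's file system is available under /host:
  ls /host/data        # test.txt should be listed
  exit
  # Remove the debug pod afterwards (its name typically starts with node-debugger-):
  kubectl get pods
  kubectl delete pod <node-debugger-pod-name>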

Method 2: Mount hostPath using a PV and a PVC

  1. Create a file named pv-pvc-hostpath.yaml.

    This example creates a PV pointing to the host's /data directory, a PVC that requests this storage, and a pod that uses this PVC.
    # --- PV Definition ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: hostpath-pv
      labels:
        type: local
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/data"
    ---
    # --- PVC Definition ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hostpath-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      # The selector ensures this PVC binds to the specific PV created above
      selector:
        matchLabels:
          type: local
    ---
    # --- Pod Definition ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod-pvc
    spec:
      containers:
        - name: test-container
          image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: storage
      volumes:
        - name: storage
          persistentVolumeClaim:
            # Reference the PVC defined above
            claimName: hostpath-pvc
  2. Create the PV, PVC, and pod.

    kubectl apply -f pv-pvc-hostpath.yaml
  3. Verify the mount by creating a file inside the pod, then checking if the file exists on the node.

    1. Create a file in the pod.

      Create a file named test.txt in the /usr/share/nginx/html directory (the mount point) of the pod.

      kubectl exec test-pod-pvc -- sh -c 'echo "File from PV/PVC Pod." > /usr/share/nginx/html/test.txt'
    2. Get the name of the node where the pod is running.

      NODE_NAME=$(kubectl get pod test-pod-pvc -o jsonpath='{.spec.nodeName}')
      echo "Pod is running on node: $NODE_NAME"
    3. Verify the file on the node.

      Log on to the node and run the ls /data command to check for the file in the host's /data directory.

      If the test.txt file exists, the hostPath volume mounted using the PV and PVC is working properly.
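
You can also confirm that the PVC bound to the intended PV instead of a dynamically provisioned volume:

  kubectl get pv hostpath-pv
  kubectl get pvc hostpath-pvc
  # Both objects should report STATUS Bound, and the VOLUME column of the
  # PVC should show hostpath-pv.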

Apply in production

  • Security

    • Use read-only mounts: If your application only needs to read data from the node, mount the volume as read-only (set readOnly: true in the container's volumeMounts, or use the ReadOnlyMany access mode when mounting through a PV and PVC) to prevent accidental modification of host files. A minimal example follows this list.

    • Follow the principle of least privilege: Never mount the host's root directory (/) or sensitive system directories such as /etc or /var. Always use a dedicated, isolated directory for hostPath volumes.

  • Node resources

    • Monitor host disk usage: Writes to a hostPath volume consume the node's disk space. Monitor and configure alerts for the relevant disk partition to prevent node failures caused by disk exhaustion.

    • Evaluate I/O impact: High-frequency I/O to a hostPath volume consumes significant node resources, potentially impacting the performance of other pods or even the kubelet itself.

  • Data persistence and portability

    Note

    A hostPath volume tightly couples a pod's data to a specific node's physical storage. This means the data in the hostPath volume is tied to that node and does not persist across nodes.

    • Do not use hostPath for stateful applications that require high availability and persistent storage, such as databases or caches. The data in a hostPath volume exists only on a single node. If a pod is rescheduled to another node due to a failure or an update, it will lose access to its original data.

    • hostPath breaks container isolation. Granting a pod access to the host file system creates a significant security risk. A vulnerability in the containerized application could be exploited to compromise the entire node. Do not use hostPath on nodes with a read-only root file system, such as ContainerOS.
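
The following is a minimal sketch of the read-only mount mentioned above. It reuses the volume layout from Method 1; the pod name test-pod-readonly is only an example. With readOnly: true on the volume mount, the container can read /test but any write to it is rejected.

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod-readonly
  spec:
    containers:
    - name: test-container
      image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
      volumeMounts:
      - mountPath: /test
        name: test-volume
        # Reject writes from the container to the host path
        readOnly: true
    volumes:
    - name: test-volume
      hostPath:
        path: /data
        # Fail fast instead of creating the directory if it does not exist
        type: Directory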

FAQ

If a pod is deleted and recreated, is the data in the hostPath volume still there?

It depends on where the new pod is scheduled.

  • Scheduled to the same node: Yes. The new pod will mount the exact same directory on the node and will have access to all the previous data.

  • Scheduled to a different node: No. The new pod will mount an empty directory on the new node. The original data remains on the original node, but is inaccessible to the new pod.
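
If an application must keep using data that already exists on a specific node, one common workaround, at the cost of scheduling flexibility, is to pin the pod to that node, for example with a nodeSelector. The node name below is a placeholder; replace it with the actual node name, such as the $NODE_NAME value from the verification steps above.

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod-pinned
  spec:
    # Schedule the pod only onto the node that holds the hostPath data.
    nodeSelector:
      kubernetes.io/hostname: <node-name>
    containers:
    - name: test-container
      image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
      volumeMounts:
      - mountPath: /test
        name: test-volume
    volumes:
    - name: test-volume
      hostPath:
        path: /data
        type: Directory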
