Container Service for Kubernetes: Mount a NAS file system to a sandboxed container

Last Updated: Nov 18, 2025

Complex access paths and high latency in traditional storage solutions often degrade I/O performance. Directly mounting a Network Attached Storage (NAS) file system in a sandboxed container optimizes the storage path, enabling containers to read from and write to the NAS file system directly for significantly improved performance. This topic explains how this direct mounting works and shows you how to implement it.

Background information

Virtio-fs is a shared file system that allows resources such as volumes, Secrets, and ConfigMaps to be shared with the guest operating system of a virtual machine. Through virtio-fs, a NAS file system can be mounted natively as a volume.

However, in this setup, the NAS is mounted on the host node. When containers access the NAS, their I/O must pass through virtio-fs to reach the host-mounted file system, which introduces performance overhead.

Sandboxed containers support direct mounting of NAS file systems. To achieve this, the system unmounts the NAS mount target from the host, mounts the NAS file system inside the guest operating system, and then bind-mounts it into the container. This allows the container to read from and write to the NAS file system directly, providing near-native performance.


How it works


The direct mount process for a NAS file system in a sandboxed container is as follows:

  1. The Kubelet requests the CSI-Plugin to mount the NAS Volume.

  2. The CSI-Plugin mounts the NAS file system on the host.

  3. The Kubelet requests the Kangaroo-Runtime to create the container.

  4. The Kangaroo-Runtime parses the NAS mount information, passes it to the guest operating system, and simultaneously unmounts the NAS file system from the host.

  5. The Kangaroo-Runtime requests the Agent to create the container.

  6. The Agent mounts the NAS file system inside the guest operating system.

  7. The Agent bind-mounts the NAS file system from the guest operating system into the container (see the sketch after this list).
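
Steps 6 and 7 are conceptually equivalent to the following commands run inside the guest operating system. This is a simplified sketch for illustration only; the actual mount paths are managed internally by the Agent, and the NFS server address, path, and options come from the PV definition shown later in this topic.

  # Mount the NAS file system over NFS, using the server address, path,
  # and options defined in the PV (vers=3,noresvport,nolock).
  mount -t nfs -o vers=3,noresvport,nolock \
      file-system-id.region.nas.aliyuncs.com:/csi /mnt/nas-volume

  # Bind-mount the NAS directory into the container's root file system
  # at the volume's mountPath (illustrative path).
  mount --bind /mnt/nas-volume /run/container-rootfs/data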

Prerequisites

  * A cluster that supports sandboxed containers is created, and the runv RuntimeClass is available in the cluster.
  * The CSI plug-in is deployed in the cluster.
  * A NAS file system is created, and a mount target is created for the file system.

Procedure

  1. Create a statically provisioned PersistentVolume (PV).

    1. Save the following YAML as nas-pv-csi.yaml.

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        labels:
          alicloud-pvname: nas-pv-csi
        name: nas-pv-csi
      spec:
        accessModes:
          - ReadWriteMany
        capacity:
          storage: 5Gi
        csi:
          driver: nasplugin.csi.alibabacloud.com
          volumeAttributes:
            options: noresvport,nolock
            path: /csi
            server: ${nas-server-address}  # Replace with your actual NAS mount target address. 
                                     # Format: file-system-id.region.nas.aliyuncs.com
                                     # To get it: 1) Go to the NAS Console > File Systems. 
                                     # 2) Select your file system, go to "Mount Targets" tab. 
                                     # 3) Copy the "Mount Target Domain Name".
            vers: "3"
          volumeHandle: nas-pv-csi
        persistentVolumeReclaimPolicy: Retain
    2. Run the following command to create the statically provisioned PV.

      kubectl create -f nas-pv-csi.yaml
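
      You can optionally verify that the PV was created and is in the Available state:

      kubectl get pv nas-pv-csi

      Expected output (values such as AGE will differ):

      NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      nas-pv-csi   5Gi        RWX            Retain           Available                                   10s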
  2. Create a PersistentVolumeClaim (PVC) for the NAS storage. Use selector.matchLabels to bind the PVC to the PV by its label.

    1. Save the following YAML as nas-pvc-csi.yaml.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nas-pvc-csi
        namespace: default
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
        selector:
          matchLabels:
            alicloud-pvname: nas-pv-csi
    2. Run the following command to create the PVC.

      kubectl create -f nas-pvc-csi.yaml
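
      You can optionally verify that the PVC is bound to the PV:

      kubectl get pvc nas-pvc-csi

      Expected output (the STATUS column must show Bound):

      NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
      nas-pvc-csi   Bound    nas-pv-csi   5Gi        RWX                           15s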
  3. Create an application and mount the PVC.

    1. Save the following YAML as deploy-nas-csi.yaml.

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deploy-nas-csi
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: busybox
        template:
          metadata:
            labels:
              app: busybox
            annotations:
              storage.alibabacloud.com/enable_nas_passthrough: "true"
          spec:
            runtimeClassName: runv
            containers:
              - name: busybox
                image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
                command: 
                - tail
                - -f
                - /dev/null
                volumeMounts:
                  - name: nas-pvc
                    mountPath: "/data"
            restartPolicy: Always
            volumes:
              - name: nas-pvc
                persistentVolumeClaim:
                  claimName: nas-pvc-csi    # Must match the name of the PVC created in Step 2.

      Direct NAS mounting is disabled for pods by default. To enable the NAS passthrough feature, add the following annotation to the pod template:

      annotations:
        storage.alibabacloud.com/enable_nas_passthrough: "true"
    2. Run the following command to create the application.

      kubectl create -f deploy-nas-csi.yaml
  4. Verify the mount.

    1. Run the following command to view pod information.

      kubectl get pods

      Expected output:

      NAME                              READY   STATUS    RESTARTS   AGE
      deploy-nas-csi-847f8b****-qmv2m   1/1     Running   0          47s
      deploy-nas-csi-847f8b****-wj8k5   1/1     Running   0          47s
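
      You can also confirm that the pods use the sandboxed runtime by checking the runtimeClassName field. Replace the pod name with one from your own output:

      kubectl get pod deploy-nas-csi-847f8b****-qmv2m -o jsonpath='{.spec.runtimeClassName}'

      Expected output:

      runv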
    2. Run the following command to open a shell in the specified pod.

      kubectl exec -it deploy-nas-csi-847f8b****-qmv2m -- sh
    3. Run the following command to view the mount information.

      mount 

      If the output contains an entry for the /data mount point, the mount is successful. A successful mount produces output similar to the following:

      kataShared on / type virtio_fs (rw,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,dax=inode)
      proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
      tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
      mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
      sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime)
      tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime)
      cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,perf_event)
      cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,cpu,cpuacct)
      cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,blkio)
      cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,net_cls,net_prio)
      cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,memory)
      cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
      cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,freezer)
      cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,devices)
      cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,cpuset)
      cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,pids)
      cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,hugetlb)
      kataShared on /data type virtio_fs (rw,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,dax=inode)
      kataShared on /etc/hosts type virtio_fs (rw,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,dax=inode)   
      kataShared on /dev/termination-log type virtio_fs (rw,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,dax=inode)
      kataShared on /etc/hostname type virtio_fs (rw,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,dax=inode)
      kataShared on /etc/resolv.conf type virtio_fs (rw,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,dax=inode)
      shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=64000k)
      kataShared on /var/run/secrets/kubernetes.io/serviceaccount type virtio_fs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,dax=inode)
      tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)
      tmpfs on /proc/keys type tmpfs (rw,nosuid,size=65536k,mode=755)
      tmpfs on /proc/timer_list type tmpfs (rw,nosuid,size=65536k,mode=755)
      tmpfs on /proc/sched_debug type tmpfs (rw,nosuid,size=65536k,mode=755)
      proc on /proc/bus type proc (ro,relatime)
      proc on /proc/fs type proc (ro,relatime)
      proc on /proc/irq type proc (ro,relatime)
      proc on /proc/sys type proc (ro,relatime)
      proc on /proc/sysrq-trigger type proc (ro,relatime)
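
      To further verify that the container can read from and write to the NAS file system, you can create a test file in the mounted directory from inside the pod. The file name here is arbitrary:

      echo "hello nas" > /data/passthrough-test.txt
      cat /data/passthrough-test.txt

      If the second command prints hello nas, the mount is readable and writable. Because the PV uses the ReadWriteMany access mode and both replicas mount the same NAS path, the file is also visible in the /data directory of the other pod.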