
Container Service for Kubernetes:NAS volume FAQ

Last Updated: Mar 26, 2026

This topic covers common issues with Network Attached Storage (NAS) volumes in ACK and ACS clusters, including mounting failures, permission errors, and unmount timeouts.

Quick navigation

Mounting
  • chown: Operation not permitted during mount
  • Controller task queue is full when mounting a dynamically provisioned NAS volume
  • Mount times are longer than expected
  • unknown filesystem type "xxx" error during mount
  • Pod stuck in ContainerCreating when mounting two NAS PVCs
  • How do I mount a NAS file system with TLS using CSI?
  • How do I implement user or group isolation on NAS?
  • Can multiple applications share the same NAS volume?
  • failed to do setup volume error when mounting a NAS volume in ACS

Usage
  • Cannot create or modify directories on a NAS volume
  • NFS Stale File Handle error during read/write operations

Unmounting
  • Unmount times out and pod is stuck in Terminating state

Mounting

chown: Operation not permitted during mount

The container process lacks permission to change file ownership on the NAS volume. Fix this using one of the following approaches:

  • Run as root or use `fsGroup`: If accessModes is set to ReadWriteOnce, configure securityContext.fsGroup to set volume ownership for the pod. For details, see Configure a security context for a pod or container.

  • Check the NAS permission group: If the error persists when running as root, the mount target's permission rule may be mapping root to an anonymous user. Set the permission rule to no_squash to prevent this. For details, see Manage permission groups.
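As a sketch of the first approach, a pod spec that sets securityContext.fsGroup so the kubelet adjusts ownership of the volume at mount time (the pod name, image, and PVC name below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nas-app                 # Illustrative name.
spec:
  securityContext:
    fsGroup: 1000               # Files on the volume are chown'ed to GID 1000 when the volume is mounted.
  containers:
  - name: app
    image: nginx                # Illustrative image.
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nas-pvc        # Illustrative PVC; its accessModes must be ReadWriteOnce for fsGroup to take effect.
```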

Controller task queue is full when mounting a dynamically provisioned NAS volume

When reclaimPolicy is set to Delete and archiveOnDelete is set to false in your StorageClass, subdirectory deletions are slower than creations. This blocks the controller's task queue and prevents new persistent volumes (PVs) from being created.

Set archiveOnDelete to true in your StorageClass. With this setting, deleting a PV renames the corresponding subdirectory instead of deleting its contents — a much faster operation. Clean up the renamed directories separately, for example, by running a scheduled cleanup job or using multiple pods to concurrently delete subdirectories that match a specific naming pattern.
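As a sketch, a StorageClass with archiveOnDelete enabled might look like the following (the name is illustrative and the mount target address is a placeholder):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-archive            # Illustrative name.
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: subpath
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"  # Placeholder mount target.
  archiveOnDelete: "true"               # Rename the PV's subdirectory on deletion instead of deleting its contents.
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete                   # Deletion triggers the fast rename rather than a slow recursive delete.
```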

Mount times are longer than expected

When both of the following conditions are met, Kubernetes runs chmod and chown recursively on the mounted volume, which significantly slows down mount operations:

  • accessModes is set to ReadWriteOnce in the persistent volume (PV) and persistent volume claim (PVC)

  • securityContext.fsGroup is configured in the pod spec

Choose one of the following approaches:

  • Remove `fsGroup`: Remove the fsGroup field from securityContext if your workload doesn't require it.

  • Pre-set permissions manually: Mount the target directory to an ECS instance, run chown or chmod to set the required permissions, then use the NAS volume through the Container Storage Interface (CSI). See Mount a statically provisioned NAS volume or Use dynamically provisioned NAS volumes.

  • Use `fsGroupChangePolicy` (Kubernetes 1.20+): Set fsGroupChangePolicy to OnRootMismatch in the pod's securityContext. Kubernetes then runs chmod and chown only the first time the volume is mounted — subsequent mounts are much faster. For details, see Configure a security context for a pod or container.
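The third approach can be sketched as follows (the pod name, image, and PVC name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nas-app                              # Illustrative name.
spec:
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"    # Recursive chmod/chown runs only when the volume root's ownership doesn't match.
  containers:
  - name: app
    image: nginx                             # Illustrative image.
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nas-pvc                     # Illustrative PVC name.
```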

unknown filesystem type "xxx" error during mount

The node where the pod is scheduled is missing the client utilities for the filesystem type named in the error, or the PV specifies a filesystem type the node cannot mount. Verify that the volume configuration is correct and that the required storage dependencies (for example, the NFS client utilities for NFS mounts) are installed on the node.

Pod stuck in ContainerCreating when mounting two NAS PVCs

If a pod mounts two PVCs that point to the same NAS file system and gets stuck in ContainerCreating, the two PVs likely share the same spec.csi.volumeHandle. The kubelet treats them as the same PV, causing a mounting conflict — even though mounting either PVC individually works.

Set a unique spec.csi.volumeHandle for each PV. The recommended approach is to set it to the same value as the PV name (metadata.name).
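For example, two PVs can point at the same file system while keeping distinct handles (the server address and paths are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nas-a
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: pv-nas-a        # Same value as metadata.name, so the kubelet sees two distinct volumes.
    volumeAttributes:
      server: "2564f4****-ysu87.cn-shenzhen.nas.aliyuncs.com"   # Placeholder mount target.
      path: "/app-a"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nas-b
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: pv-nas-b        # Unique handle for the second PV.
    volumeAttributes:
      server: "2564f4****-ysu87.cn-shenzhen.nas.aliyuncs.com"
      path: "/app-b"
```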

How do I mount a NAS file system with TLS using CSI?

TLS encryption for NAS volumes uses the alinas mount protocol, which routes traffic through the Alibaba Cloud NAS client.

Important

The NAS client uses Stunnel for TLS encryption. For high-throughput workloads, this can consume significant CPU resources — in extreme cases, a single mount point may use an entire CPU core. For details, see Encryption in transit for NFS file systems.

  1. Install the cnfs-nas-daemon component.

  2. On the Add-ons page, edit the csi-plugin configuration to enable the AlinasMountProxy=true FeatureGate.

  3. Apply the following YAML to your cluster. Both examples use mountProtocol: alinas and the tls mount option.

    Dynamically provisioned volume

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-nas-tls
    mountOptions:
    - nolock,tcp,noresvport
    - vers=3
    - tls   # Enable TLS encryption.
    parameters:
      volumeAs: subpath
      server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
      mountProtocol: alinas  # Use the Alibaba Cloud NAS client.
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nas-tls
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: alicloud-nas-tls
      resources:
        requests:
          storage: 20Gi
    Parameter descriptions:

      • parameters.mountProtocol: Specifies the mount client. Set to alinas to use the Alibaba Cloud NAS client. Defaults to "" (standard NFS protocol).

      • mountOptions: List of mount options. Add tls to enable TLS encryption. The tls option is only effective when mountProtocol is set to alinas.

    Statically provisioned volume

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nas-tls
      labels:
        alicloud-pvname: pv-nas-tls
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      csi:
        driver: nasplugin.csi.alibabacloud.com
        volumeHandle: pv-nas-tls   # Must match the PV name.
        volumeAttributes:
          server: "2564f4****-ysu87.cn-shenzhen.nas.aliyuncs.com"
          path: "/csi"
          mountProtocol: alinas  # Use the Alibaba Cloud NAS client.
      mountOptions:
      - nolock,tcp,noresvport
      - vers=3
      - tls  # Enable TLS encryption.
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-nas-tls
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-nas-tls
    Parameter descriptions:

      • spec.csi.volumeAttributes.mountProtocol: Specifies the mount client. Set to alinas to use the Alibaba Cloud NAS client. Defaults to "" (standard NFS protocol).

      • spec.mountOptions: List of mount options. Add tls to enable TLS encryption. The tls option is only effective when mountProtocol is set to alinas.

How do I implement user or group isolation on NAS?

Run container processes as the nobody user (UID/GID 65534) to isolate access between different users and groups on a shared NAS volume.

  1. Add securityContext fields to your workload manifest:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nas-sts
    spec:
      selector:
        matchLabels:
          app: nginx
      serviceName: "nginx"
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          securityContext:
            fsGroup: 65534                        # New files and directories get UID/GID 65534 (nobody).
            fsGroupChangePolicy: "OnRootMismatch" # Only update ownership when root directory permissions don't match.
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            securityContext:
              runAsUser: 65534               # Run all processes as UID 65534 (nobody).
              runAsGroup: 65534              # Run all processes with primary GID 65534 (nobody).
              allowPrivilegeEscalation: false
            volumeMounts:
            - name: nas-pvc
              mountPath: /data
      volumeClaimTemplates:
      - metadata:
          name: nas-pvc
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "alicloud-nas-subpath"
          resources:
            requests:
              storage: 100Gi
  2. Verify that container processes run as nobody:

    kubectl exec nas-sts-0 -- top

    Expected output:

    Mem: 11538180K used, 52037796K free, 5052K shrd, 253696K buff, 8865272K cached
    CPU:  0.1% usr  0.1% sys  0.0% nic 99.7% idle  0.0% io  0.0% irq  0.0% sirq
    Load average: 0.76 0.60 0.58 1/1458 54
      PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
       49     0 nobody   R     1328  0.0   9  0.0 top
        1     0 nobody   S     1316  0.0  10  0.0 sleep 3600
  3. Verify that new files and directories in the NAS mount path have nobody ownership:

    kubectl exec nas-sts-0 -- sh -c "touch /data/test; mkdir /data/test-dir; ls -arlth /data/"

    Expected output:

    total 5K
    drwxr-xr-x    1 root     root        4.0K Aug 30 10:14 ..
    drwxr-sr-x    2 nobody   nobody      4.0K Aug 30 10:14 test-dir
    -rw-r--r--    1 nobody   nobody         0 Aug 30 10:14 test
    drwxrwsrwx    3 root     nobody      4.0K Aug 30 10:14 .

Can multiple applications share the same NAS volume?

Yes. NAS provides shared storage, so a single PVC can be mounted by multiple pods simultaneously.
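As a sketch, two pods can mount the same ReadWriteMany PVC (the pod names, image, and PVC name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  containers:
  - name: app
    image: nginx                 # Illustrative image.
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: nas-pvc         # Illustrative ReadWriteMany PVC.
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: nas-pvc         # Same claim; both pods read and write the same files.
```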

For concurrent write behavior and known limitations, see How do I prevent exceptions that may occur when multiple processes or clients concurrently write data to a log file? and How do I resolve the latency in writing data to an NFS file system?

For mounting instructions, see Use CNFS to manage NAS file systems (recommended), Mount a statically provisioned NAS volume, and Use dynamically provisioned NAS volumes.

failed to do setup volume error when mounting a NAS volume in ACS

This error usually means the NAS mount target is in a different virtual private cloud (VPC) from your cluster. Follow these steps to verify.

  1. Get the cluster's VPC ID.

    1. Log on to the ACS console. In the left navigation pane, click Clusters.

    2. Click the cluster name. In the left navigation pane, choose Configurations > ConfigMaps.

    3. Switch to the kube-system namespace and click acs-profile. Record the vpcId value (for example, vpc-gw87c9kdqs25al2z****).

  2. Get the VPC ID of the NAS mount target.

    1. Log on to the NAS console. In the left navigation pane, choose File System > File System List, and click the name of the file system.

    2. In the left navigation pane, click Mount Targets. In the Mount Target section, find the VPC ID for your mount target.

  3. Compare the two VPC IDs. If they don't match, the NAS mount target is not accessible from your cluster. Follow the instructions in Mount NAS file systems on ACS to create a mount target in the correct VPC or update your StorageClass to use the correct mount target address.

Usage

Cannot create or modify directories on a NAS volume

A non-root container process doesn't have write permissions on the mounted PV. Fix this using one of the following approaches:

  • Use an init container: Start an init container with root permissions that mounts the PV and runs chmod or chown to set the required permissions on the mount directory.

  • Use `fsGroupChangePolicy`: Set fsGroupChangePolicy to OnRootMismatch in the pod's securityContext. Kubernetes automatically runs chmod and chown the first time the volume is mounted.
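The init container approach can be sketched as follows (the UID/GID, images, and PVC name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nas-app
spec:
  initContainers:
  - name: fix-perms
    image: busybox                   # Illustrative image; any image that provides chown/chmod works.
    command: ["sh", "-c", "chown -R 1000:1000 /data && chmod -R 775 /data"]
    securityContext:
      runAsUser: 0                   # Run as root so the ownership change succeeds.
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: nginx                     # Illustrative image.
    securityContext:
      runAsUser: 1000                # The non-root app can now write under /data.
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nas-pvc             # Illustrative PVC name.
```

Note that if the mount target's permission group squashes root to an anonymous user, the chown in the init container will also fail; check the permission group first.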

NFS Stale File Handle error during read/write operations

This is standard NFS behavior. It occurs when one client deletes a file while another client still holds an open file descriptor for it:

  1. Client A opens /data/file.txt.

  2. Client B deletes /data/file.txt.

  3. Client A tries to read or write using its now-invalid file descriptor and gets the error.

NAS doesn't enforce data consistency at this level. Handle the error in your application logic — for example, by implementing file locking or by reopening file handles when this error occurs.

Unmounting

Unmount times out and pod is stuck in Terminating state

When a pod with a mounted NAS volume is deleted and gets stuck in Terminating, the cause is usually a misconfiguration in the csi-plugin DaemonSet where /var/run is mounted as a hostPath volume.

Run the following command to confirm:

kubectl get ds -n kube-system csi-plugin -ojsonpath='{.spec.template.spec.volumes[?(@.hostPath.path=="/var/run/")]}'

If the command returns any output, the misconfiguration is present. Patch the DaemonSet to fix it:

Warning

This patch causes csi-plugin pods to restart. Assess the impact on your workloads before applying this change in production.

kubectl patch -n kube-system daemonset csi-plugin -p '
spec:
  template:
    spec:
      containers:
        - name: csi-plugin
          volumeMounts:
            - mountPath: /host/var/run/efc
              name: efc-metrics-dir
            - mountPath: /host/var/run/ossfs
              name: ossfs-metrics-dir
            - mountPath: /host/var/run/
              $patch: delete
      volumes:
        - name: ossfs-metrics-dir
          hostPath:
            path: /var/run/ossfs
            type: DirectoryOrCreate
        - name: efc-metrics-dir
          hostPath:
            path: /var/run/efc
            type: DirectoryOrCreate
        - name: fuse-metrics-dir
          $patch: delete'

After the patch is applied, the csi-plugin pods restart with the correct configuration and the unmount issue is resolved.