
File Storage NAS: Mount a dynamically provisioned NAS volume using NFS

Last Updated: Mar 26, 2026

File Storage NAS (NAS) supports dynamic provisioning, where the Container Storage Interface (CSI) plug-in automatically creates persistent volumes (PVs) based on persistent volume claims (PVCs) and StorageClasses. Use dynamic provisioning for workloads that need shared, persistent file storage across multiple pods — such as big data analysis, log aggregation, and web application serving.

Prerequisites

Before you begin, make sure that:

  • The CSI plug-in is installed in the cluster. To update csi-plugin and csi-provisioner, see Update csi-plugin and csi-provisioner.

  • NAS is activated. If this is your first time using NAS, activate it on the NAS product page.

  • If your cluster uses FlexVolume, upgrade to CSI first — FlexVolume is deprecated. See Upgrade from FlexVolume to CSI. To check which storage component your cluster uses, go to Operations > Add-ons and click the Storage tab.

Limitations

  • NAS file systems using the Server Message Block (SMB) protocol cannot be mounted.

  • All pods sharing a NAS file system must be in the same virtual private cloud (VPC). Cross-VPC mounting is not supported.

  • NAS file systems can be mounted across zones within the same VPC.

Choose a mount mode

Subpath

  • Use when: Multiple apps or pods share the same NAS file system, or different pods need different subdirectories.

  • Pre-created NAS required: Yes.

  • PV maps to: A subdirectory of the NAS file system.

Sharepath

  • Use when: Pods in different namespaces must share the same NAS path.

  • Pre-created NAS required: Yes.

  • PV maps to: The same NAS directory (no new subdirectory created).

Filesystem

  • Use when: Your app needs to dynamically create and delete NAS file systems and mount targets.

  • Pre-created NAS required: No (the CSI plug-in creates them automatically).

  • PV maps to: A dedicated NAS file system.

Note: Filesystem mode supports kubectl only. The ACK console is not supported for this mode.
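
The mode is selected through the volumeAs parameter in the StorageClass, as the examples in Step 2 show. A minimal fragment (values taken from this guide):

```yaml
# volumeAs selects the dynamic provisioning mode for NAS volumes.
# Valid values covered in this guide: subpath | sharepath | filesystem.
parameters:
  volumeAs: subpath
```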

Step 1: Create a NAS file system and a mount target

Skip this step if you are using filesystem mode. The CSI plug-in creates the NAS file system and mount target automatically.

NAS file system types vary by region and zone. Select the type and zone that match your cluster region, VPC, and vSwitch.

  1. Log on to the NAS console.

  2. On the File System List page, click Create File System, then select Create General-purpose NAS File System or Create Extreme NAS File System.

  3. Configure the file system parameters and click Buy Now. The following table describes the key parameters. For the full parameter list, see Create a file system.

    Parameter

    Description

    Region

    Select the region where your cluster is located.

    Zone

    Select a zone. NAS can be mounted across zones within the same VPC, but for the best performance, select the zone where your cluster nodes reside.

    Protocol type

    Select NFS. SMB is not supported for Kubernetes mounting.

    VPC and vSwitch

    (General-purpose NAS only) Select the VPC and vSwitch used by pods in your ACK cluster.

Step 2: Mount a dynamically provisioned NAS volume

Subpath mode

Use subpath mode when multiple applications or pods need to share a NAS file system, or when different pods need access to different subdirectories of the same file system.

Each PVC creates a new subdirectory under the path specified in the StorageClass. The PV corresponds to that subdirectory.

Use kubectl

1. Create a StorageClass

  1. Create alicloud-nas-subpath.yaml with the following content, and update the parameters for your environment:

    • allowVolumeExpansion: (General-purpose NAS only) Set to true to enable NAS directory quotas on dynamically provisioned PVs, allowing you to expand the volume by modifying the PVC. The quota takes effect asynchronously and may be briefly exceeded under heavy writes. For details, see Manage directory quotas.

    • mountOptions: NFS mount options, including the NFS version.

    • volumeAs: The mount mode. Set to subpath.

    • server: The mount target domain and base path. Replace with your actual mount target. To find the domain name, see Manage mount targets. The base path defaults to / (the root directory).

    • provisioner: The CSI driver. Must be nasplugin.csi.alibabacloud.com.

    • reclaimPolicy: Controls what happens to PV data when a PVC is deleted. Default: Delete.

    • archiveOnDelete: (Takes effect when reclaimPolicy is Delete) Controls whether PV data is deleted or archived. true (default): renames the subdirectory to archived-{pvName}.{timestamp}. false: permanently deletes the data.

      Important: Do not set this to false during high-traffic periods. To delete backend data, you must set archiveOnDelete: false using kubectl; the console cannot configure this parameter.

    allowVolumeExpansion: true
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-nas-subpath
    mountOptions:
    - nolock,tcp,noresvport
    - vers=3
    parameters:
      volumeAs: subpath
      server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
  2. Create the StorageClass:

    kubectl create -f alicloud-nas-subpath.yaml
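
The archiveOnDelete: true behavior described above can be sketched as a plain rename. This is only an illustration of the naming pattern from this guide (archived-{pvName}.{timestamp}), not the CSI plug-in's actual code; the paths and PV name are hypothetical:

```shell
# Illustration only: with archiveOnDelete: true, the PV's subdirectory is
# renamed instead of deleted when the PVC is removed.
archive_subdir() {
  dir=$1; pv_name=$2
  mv "$dir" "$(dirname "$dir")/archived-${pv_name}.$(date +%s)"
}

mkdir -p /tmp/nas-demo/nas-79438493-example
archive_subdir /tmp/nas-demo/nas-79438493-example nas-79438493-example
ls /tmp/nas-demo   # shows archived-nas-79438493-example.<timestamp>
```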

2. Create a PVC

  1. Create pvc.yaml with the following content:

    • accessModes: Volume access mode. Default: ReadWriteMany.

    • storageClassName: The StorageClass to use.

    • storage: The requested capacity. This does not limit actual usage unless allowVolumeExpansion: true is set on the StorageClass and the NAS file system is general-purpose. The quota is measured in GiB and rounded up to the nearest integer. For volume expansion, see Expand a dynamically provisioned NAS volume.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nas-csi-pvc
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: alicloud-nas-subpath
      resources:
        requests:
          storage: 20Gi
  2. Create the PVC:

    kubectl create -f pvc.yaml
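
The rounding rule for the requested capacity (the quota is a whole number of GiB, rounded up) can be sketched with shell arithmetic. This is illustrative only, not the provisioner's actual code:

```shell
# Illustration: a capacity request in bytes is rounded up to an integer
# number of GiB before being applied as a directory quota.
to_gib_quota() {
  bytes=$1
  gib=$((1024 * 1024 * 1024))
  echo $(( (bytes + gib - 1) / gib ))
}

to_gib_quota 22011707392   # a 20.5Gi request rounds up to 21 (prints 21)
to_gib_quota 21474836480   # an exact 20Gi request stays 20 (prints 20)
```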

3. Deploy applications and verify

This example deploys two nginx applications that share the same NAS subdirectory.

  1. nginx-1.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-nas-1
      labels:
        app: nginx-1
    spec:
      selector:
        matchLabels:
          app: nginx-1
      template:
        metadata:
          labels:
            app: nginx-1
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc
  2. nginx-2.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-nas-2
      labels:
        app: nginx-2
    spec:
      selector:
        matchLabels:
          app: nginx-2
      template:
        metadata:
          labels:
            app: nginx-2
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc

    Both applications reference the same PVC (nas-csi-pvc).

  3. Deploy both applications:

    kubectl create -f nginx-1.yaml -f nginx-2.yaml
  4. Verify the pods are running:

    kubectl get pod

    Expected output:

    NAME                                READY   STATUS    RESTARTS   AGE
    deployment-nas-1-5b5cdb85f6-a****   1/1     Running   0          32s
    deployment-nas-2-c5bb4746c-b****    1/1     Running   0          32s

    Both pods mount the same NAS subdirectory at /data. The mount target path follows the pattern <mount-target>:/k8s/<pv-name>/, where:

    • /k8s/: the base path from the StorageClass

    • nas-79438493-f3e0-11e9-bbe5-00163e09****: the automatically created PV name

To mount different subdirectories to different pods, create a separate PVC for each pod.
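
For example, a second claim (hypothetical name nas-csi-pvc-2) against the same StorageClass would be provisioned with its own subdirectory:

```yaml
# Hypothetical second PVC: binds to a new PV that maps to its own
# subdirectory in the same NAS file system.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc-2
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-subpath
  resources:
    requests:
      storage: 20Gi
```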

Use the ACK console

1. Create a StorageClass

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. Click the cluster name, then choose Volumes > StorageClasses in the left-side pane.

  3. On the StorageClasses page, click Create.

  4. Configure the parameters and click Create.

    Parameter

    Description

    Example

    Name

    The StorageClass name. Follow the format requirements displayed in the console.

    alicloud-nas-subpath

    PV type

    Select NAS.

    NAS

    Select mount target

    The mount target domain and path. If no mount target exists, create a NAS file system first. See Step 1. To find the domain name, see Manage mount targets.

    0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/

    Volume mode

    Select Subdirectory for subpath mode. Each PVC creates a new subdirectory under the base path.

    Note

    Subdirectory mode requires CSI plug-in version 1.31.4 or later. Earlier versions use Shared Directory (sharepath) mode.

    Subdirectory

    Mount path

    The base path in the NAS file system. If the subdirectory does not exist, it is created automatically. Leave blank to use the root directory. For Extreme NAS, set this to a subdirectory of /share, such as /share/data.

    /

    Reclaim policy

    Controls what happens to data when a PVC is deleted. Use Retain for high data-security requirements.

    Important

    The archiveOnDelete parameter cannot be configured through the console. To delete backend data, use kubectl. In ACK Serverless clusters, the Delete policy does not delete NAS directories because CSI-Provisioner lacks the required Linux privileges.

    Retain

    Mount options

    NFS mount options. Use NFS v3 (recommended). Extreme NAS supports only NFS v3. See NFS.

    (default)

2. Create a PVC

  1. In the left-side pane, choose Volumes > Persistent Volume Claims, then click Create.

  2. Configure the parameters and click Create.

    Parameter

    Description

    Example

    PVC type

    Select NAS.

    NAS

    Name

    A unique name for the PVC within the cluster.

    pvc-nas

    Allocation mode

    Select Use StorageClass.

    Use StorageClass

    Existing storage class

    Select the StorageClass created in the previous step.

    alicloud-nas-subpath

    Capacity

    The requested storage capacity. Does not limit actual usage unless directory quotas are enabled. See Expand a dynamically provisioned NAS volume.

    20Gi

    Access mode

    The volume access mode.

    ReadWriteMany

3. Deploy an application

  1. In the left-side pane, choose Workloads > Deployments, then click Create from Image.

  2. Configure the key parameters. Use defaults for all other settings. For full configuration details, see Create a stateless application by using a Deployment.

    Section

    Parameter

    Description

    Example

    Basic information

    Name

    A custom name for the Deployment.

    deployment-nas-1

    Replicas

    Number of pod replicas.

    2

    Container

    Image name

    The container image address.

    anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6

    Required resources

    vCPU and memory.

    0.25 vCores, 512 MiB

    Volume

    Add PVC

    Select Mount Source (the PVC you created) and Container Path (the mount path in the container).

    Mount Source: pvc-nas; Container Path: /data

  3. On the Deployments page, click the application name. On the Pods tab, verify the pod is in the Running state.

Sharepath mode

Use sharepath mode when pods in different namespaces need to access the same NAS path. All PVs created from the same StorageClass map to the same NAS directory — no new subdirectory is created per PVC.

Important

The reclaim policy must be Retain for sharepath mode.

Use kubectl

1. Create a StorageClass

  1. Create alicloud-nas-sharepath.yaml:

    • mountOptions: NFS mount options. Use NFS v3 (recommended). Extreme NAS supports only NFS v3. See NFS.

    • volumeAs: Set to sharepath.

    • server: The mount target and path. All PVs share this path. Replace with your actual mount target. See Manage mount targets.

    • provisioner: Must be nasplugin.csi.alibabacloud.com.

    • reclaimPolicy: Must be Retain for sharepath mode.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-nas-sharepath
    mountOptions:
    - nolock,tcp,noresvport
    - vers=3
    parameters:
      volumeAs: sharepath
      server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/data"
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
  2. Create the StorageClass:

    kubectl create -f alicloud-nas-sharepath.yaml

2. Create PVCs in different namespaces

  1. Create the namespaces:

    kubectl create ns ns1
    kubectl create ns ns2
  2. Create pvc.yaml with PVCs in both namespaces. In sharepath mode, the storage parameter does not take effect:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nas-csi-pvc
      namespace: ns1
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: alicloud-nas-sharepath
      resources:
        requests:
          storage: 20Gi
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nas-csi-pvc
      namespace: ns2
    spec:
      accessModes:
      - ReadWriteMany
      storageClassName: alicloud-nas-sharepath
      resources:
        requests:
          storage: 20Gi
  3. Create the PVCs:

    kubectl create -f pvc.yaml

3. Deploy applications and verify

  1. nginx.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      namespace: ns1
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      namespace: ns2
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc
  2. Deploy the applications:

    kubectl create -f nginx.yaml
  3. Verify the pods are running:

    kubectl get pod -A -l app=nginx

    Expected output:

    NAMESPACE  NAME                    READY   STATUS    RESTARTS   AGE
    ns1        nginx-5b5cdb85f6-a****  1/1     Running   0          32s
    ns2        nginx-c5bb4746c-b****   1/1     Running   0          32s

    Both pods mount 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/data at /data, even though they are in different namespaces.

Use the ACK console

1. Create a StorageClass

  1. Log on to the ACK console. Click the cluster name, then choose Volumes > StorageClasses.

  2. Click Create and configure the parameters.

    Parameter

    Description

    Example

    Name

    The StorageClass name.

    alicloud-nas-sharepath

    PV type

    Select NAS.

    NAS

    Select mount target

    The mount target domain and path. See Manage mount targets.

    0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/data

    Volume mode

    Select Shared Directory. All PVs share the same NAS path.

    Shared Directory

    Mount path

    The path within the NAS file system. Leave blank to use the root. For Extreme NAS, use a subdirectory of /share.

    /

    Reclaim policy

    Must be Retain for sharepath mode.

    Retain

    Mount options

    NFS mount options. Use NFS v3.

    (default)

  3. Click Create.

2. Create PVCs in different namespaces

  1. Create the ns1 and ns2 namespaces. See Manage namespaces and resource quotas.

  2. In the left-side pane, choose Volumes > Persistent Volume Claims. Select ns1 in the Namespace section and click Create.

  3. Configure the parameters and click Create.

    Parameter

    Description

    Example

    PVC type

    Select NAS.

    NAS

    Name

    A unique name within the cluster.

    pvc-nas

    Allocation mode

    Select Use StorageClass.

    Use StorageClass

    Existing storage class

    Select the alicloud-nas-sharepath StorageClass.

    alicloud-nas-sharepath

    Capacity

    The requested capacity (does not take effect in sharepath mode).

    20Gi

    Access mode

    The volume access mode.

    ReadWriteMany

  4. Repeat the steps to create pvc-nas in the ns2 namespace.

3. Deploy an application

  1. In the left-side pane, choose Workloads > Deployments. Select ns1 in the Namespace section and click Create from Image.

  2. Configure the parameters.

    Section

    Parameter

    Description

    Example

    Basic information

    Name

    Deployment name.

    nginx

    Replicas

    Number of replicas.

    2

    Container

    Image name

    Container image.

    anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6

    Required resources

    vCPU and memory.

    0.25 vCores, 512 MiB

    Volume

    Add PVC

    Mount Source: the PVC in ns1. Container Path: the mount path.

    Mount Source: pvc-nas; Container Path: /data

  3. Repeat for the ns2 namespace.

  4. On the Deployments page, click the application name. On the Pods tab, verify pods are in the Running state.

Filesystem mode

Use filesystem mode when your application needs to create and delete NAS file systems and mount targets dynamically. When a PVC is created, CSI automatically creates a NAS file system and mount target. When the PVC is deleted, both are deleted if the reclaim policy is set to Delete and deleteVolume is true.

Important

By default, deleting a PV in filesystem mode retains the NAS file system and mount target. To delete them automatically, set reclaimPolicy: Delete and deleteVolume: "true" in the StorageClass.
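
The relevant StorageClass fields for automatic cleanup look like the following fragment. Use this combination with care, because a deleted NAS file system is not recoverable:

```yaml
# Fragment: with this combination, deleting the PVC deletes the NAS file
# system and its mount target.
parameters:
  deleteVolume: "true"
reclaimPolicy: Delete
```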

Only kubectl is supported for filesystem mode.

1. (ACK dedicated clusters only) Configure RAM permissions

In ACK dedicated clusters, grant the following permissions to CSI-Provisioner so it can create and delete NAS file systems and mount targets:

{
    "Action": [
        "nas:DescribeMountTargets",
        "nas:CreateMountTarget",
        "nas:DeleteFileSystem",
        "nas:DeleteMountTarget",
        "nas:CreateFileSystem"
    ],
    "Resource": ["*"],
    "Effect": "Allow"
}

Attach this RAM policy using one of the following methods:

  • Attach it to the master RAM role of the ACK dedicated cluster. See Modify the document and description of a custom policy.

  • Create a RAM user, attach the policy to the user, generate an AccessKey pair, and specify the credentials in the CSI-Provisioner env variable:

    env:
    - name: CSI_ENDPOINT
      value: unix://socketDir/csi.sock
    - name: ACCESS_KEY_ID
      value: ""
    - name: ACCESS_KEY_SECRET
      value: ""

2. Create a StorageClass

  1. Create alicloud-nas-fs.yaml:

    • volumeAs: Set to filesystem. Each PV corresponds to a dedicated NAS file system.

    • fileSystemType: The NAS file system type. standard: General-purpose NAS. extreme: Extreme NAS. Default: standard.

    • storageType: The storage type. For standard file systems: Performance or Capacity. For extreme file systems: standard or advance. Default: Performance (standard) or standard (extreme).

    • regionId: The region of the NAS file system.

    • zoneId: The zone of the NAS file system.

    • vpcId: The VPC for the mount target.

    • vSwitchId: The vSwitch for the mount target.

    • accessGroupName: The permission group for the mount target. Default: DEFAULT_VPC_GROUP_NAME.

    • deleteVolume: Whether to delete the NAS file system when the PV is deleted. Set to "true" with reclaimPolicy: Delete to enable automatic deletion. Default: "false".

    • provisioner: The CSI driver. Must be nasplugin.csi.alibabacloud.com.

    • reclaimPolicy: The reclaim policy for the PV. The NAS file system is automatically deleted only when both deleteVolume: "true" and reclaimPolicy: Delete are set. Default: Retain.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: alicloud-nas-fs
    mountOptions:
    - nolock,tcp,noresvport
    - vers=3
    parameters:
      volumeAs: filesystem
      fileSystemType: standard
      storageType: Performance
      regionId: cn-beijing
      zoneId: cn-beijing-e
      vpcId: "vpc-2ze2fxn6popm8c2mzm****"
      vSwitchId: "vsw-2zwdg25a2b4y5juy****"
      accessGroupName: DEFAULT_VPC_GROUP_NAME
      deleteVolume: "false"
    provisioner: nasplugin.csi.alibabacloud.com
    reclaimPolicy: Retain
  2. Create the StorageClass:

    kubectl create -f alicloud-nas-fs.yaml

3. Create a PVC and deploy an application

  1. Create pvc.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: nas-csi-pvc-fs
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: alicloud-nas-fs
      resources:
        requests:
          storage: 20Gi
  2. Create nginx.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deployment-nas-fs
      labels:
        app: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: nas-pvc
                mountPath: "/data"
          volumes:
            - name: nas-pvc
              persistentVolumeClaim:
                claimName: nas-csi-pvc-fs
  3. Create the PVC and deployment:

    kubectl create -f pvc.yaml -f nginx.yaml

Step 3: Verify NAS volume behavior

After deploying your application, verify that the NAS volume correctly persists data across pod restarts and shares data across pods.

Verify data persistence

  1. List the running pods:

    kubectl get pod

    Expected output:

    NAME                                READY   STATUS    RESTARTS   AGE
    deployment-nas-1-5b5cdb85f6-a****   1/1     Running   0          32s
    deployment-nas-2-c5bb4746c-b****    1/1     Running   0          32s
  2. Confirm the /data path is empty in the pod:

    kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- ls /data

    No output confirms the directory is empty.

  3. Create a test file:

    kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- touch /data/nas
  4. Delete the pod to trigger a restart:

    kubectl delete pod deployment-nas-1-5b5cdb85f6-a****

    In another terminal, watch the pod restart:

    kubectl get pod -w -l app=nginx
  5. After the pod restarts, verify the file is still there:

    kubectl get pod

    Expected output:

    NAME                                READY   STATUS    RESTARTS   AGE
    deployment-nas-1-5b5cdm2g5-c****    1/1     Running   0          32s
    deployment-nas-2-c5bb4746c-b****    1/1     Running   0          32s
    kubectl exec deployment-nas-1-5b5cdm2g5-c**** -- ls /data

    Expected output:

    nas

    The nas file persists after the pod restart, confirming data persistence.

Verify data sharing

  1. Check that both pods see the same empty /data directory:

    kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- ls /data
    kubectl exec deployment-nas-2-c5bb4746c-b**** -- ls /data

    No output from either command confirms both directories are empty.

  2. Create a file in one pod:

    kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- touch /data/nas
  3. Verify both pods see the file:

    kubectl exec deployment-nas-1-5b5cdb85f6-a**** -- ls /data

    Expected output:

    nas
    kubectl exec deployment-nas-2-c5bb4746c-b**** -- ls /data

    Expected output:

    nas

    The file created in one pod is immediately visible in the other, confirming shared storage.

FAQ

How do I enable user or group isolation in the NAS file system?

Set securityContext on your pod to run all containers as a specific user and group. The following example uses the nobody user (UID and GID: 65534):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nas-sts
spec:
  selector:
    matchLabels:
      app: busybox
  serviceName: "busybox"
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox
    spec:
      securityContext:
        fsGroup: 65534    # Directories are created as the nobody user (UID/GID 65534)
        fsGroupChangePolicy: "OnRootMismatch"    # Change permissions only if the root directory ownership does not match
      containers:
      - name: busybox
        image: busybox
        command:
        - sleep
        - "3600"
        securityContext:
          runAsUser: 65534    # All processes run as nobody (UID 65534)
          runAsGroup: 65534   # All processes run as nobody (GID 65534)
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: nas-pvc
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: nas-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "alicloud-nas-subpath"
      resources:
        requests:
          storage: 100Gi

Verify the user context in the running container:

kubectl exec nas-sts-0 -- top

Expected output:

Mem: 11538180K used, 52037796K free, 5052K shrd, 253696K buff, 8865272K cached
CPU:  0.1% usr  0.1% sys  0.0% nic 99.7% idle  0.0% io  0.0% irq  0.0% sirq
Load average: 0.76 0.60 0.58 1/1458 54
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
   49     0 nobody   R     1328  0.0   9  0.0 top
    1     0 nobody   S     1316  0.0  10  0.0 sleep 3600

Verify files and directories are created as the nobody user:

kubectl exec nas-sts-0 -- sh -c "touch /data/test; mkdir /data/test-dir; ls -arlth /data/"

Expected output:

total 5K
drwxr-xr-x    1 root     root        4.0K Aug 30 10:14 ..
drwxr-sr-x    2 nobody   nobody      4.0K Aug 30 10:14 test-dir
-rw-r--r--    1 nobody   nobody         0 Aug 30 10:14 test
drwxrwsrwx    3 root     nobody      4.0K Aug 30 10:14 .

Both test and test-dir are owned by the nobody user, confirming user isolation is active.
