
Container Service for Kubernetes:Mount a statically provisioned NAS volume

Last Updated: Mar 26, 2026

NAS volumes give your pods two things: data that survives pod restarts, and a shared file system that multiple pods can read and write at the same time. This guide walks through the full workflow for mounting an existing NAS file system as a static volume in an ACK cluster.

Static vs. dynamic volumes

With a static volume, you pre-create a persistent volume (PV) to represent an existing NAS file system, then create a persistent volume claim (PVC) to bind to it. This approach suits existing storage resources, but the bound PVC does not support online resizing by default.

With a dynamic volume, there is no pre-created PV — the system provisions one automatically when a PVC referencing a StorageClass is created. Dynamic volumes support resizing. See Use a dynamic NAS volume or Use CNFS to automatically resize a NAS volume.

How it works

Mounting a static NAS volume involves three steps, each performed by a different role:

  1. Create a PV (cluster admin): Register the existing NAS file system in the cluster by declaring its mount target address, capacity, and access mode.

  2. Create a PVC (developer): Request storage by creating a PVC. Kubernetes binds it to a matching PV automatically.

  3. Mount in a pod (developer): Reference the PVC in your workload manifest. Kubernetes mounts it as a directory inside the container.

Prerequisites

Before you begin, ensure that you have:

  • The csi-plugin and csi-provisioner components installed. These are installed by default in ACK clusters — check the Add-ons page to confirm they are present. For best results, upgrade the CSI components to the latest version.

  • An existing NAS file system that meets all of the following conditions. If you don't have one, create a new file system or use a dynamic NAS volume instead.

    • Protocol type: NFS only (SMB is not supported)

    • VPC: The NAS file system must be in the same VPC as your cluster. Cross-availability zone mounting is supported; cross-VPC mounting is not

    • Mount target: A mount target in the same VPC as your cluster, in the available state. See Manage mount targets and record the mount target address

    • (Optional) Encryption type: Configure at NAS file system creation time if you need encrypted storage

    NAS has limitations on mount connectivity, number of file systems, and supported protocol types.

Usage notes

  • Do not delete mount targets while in use. Deleting a mount target while a volume is mounted causes I/O exceptions on the node.

  • Handle concurrent writes in your application. NAS is a shared file system. When multiple pods mount the same volume, your application is responsible for managing data consistency from concurrent writes. See FAQ about read and write access to files.

  • Avoid setting `securityContext.fsGroup`. When it is set, kubelet recursively runs chmod and chown on the mounted file system after each pod start, which can significantly slow pod startup for volumes with many files. See NAS volume FAQ for optimization options.
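If your workload does require fsGroup, Kubernetes provides the `fsGroupChangePolicy` field to limit the recursive permission change. A minimal sketch, assuming the pod name and image are illustrative and that the CSI driver honors this policy:

```yaml
# Sketch: limit recursive ownership changes on a NAS mount.
# fsGroupChangePolicy: OnRootMismatch tells kubelet to skip the
# recursive chown/chmod when the volume root already has the
# expected ownership and permissions.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example           # Illustrative name.
spec:
  securityContext:
    fsGroup: 1000
    fsGroupChangePolicy: "OnRootMismatch"
  containers:
  - name: app
    image: nginx                  # Illustrative image.
    volumeMounts:
    - name: nas
      mountPath: /data
  volumes:
  - name: nas
    persistentVolumeClaim:
      claimName: pvc-nas
```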

Step 1: Create a PV

Register your existing NAS file system in the cluster by creating a PV.

kubectl

  1. Create a file named pv-nas.yaml.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nas                      # Must be unique within the cluster.
      labels:
        alicloud-pvname: pv-nas         # Used to bind a specific PVC to this PV.
    spec:
      capacity:
        storage: 5Gi                    # Matching value only; does not cap actual NAS capacity.
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: nasplugin.csi.alibabacloud.com   # Fixed value for the Alibaba Cloud NAS CSI driver.
        volumeHandle: pv-nas                     # Must match metadata.name; must be unique per PV.
        volumeAttributes:
          server: "0c47****-mpk25.cn-shenzhen.nas.aliyuncs.com"  # Replace with your mount target address.
          path: "/csi"                           # Subdirectory to mount; created automatically if absent.
      mountOptions:
        - nolock,tcp,noresvport   # Improves reliability for stateless NFS connections.
        - vers=3                  # NFS protocol version. Use vers=4.0 or vers=4.1 for file locking support.

    Key parameters:

    • storage (required): Capacity declaration used for PV-PVC matching. Does not limit the actual NAS capacity, which is governed by the NAS specification (General-purpose NAS or Extreme NAS).

    • accessModes (required): ReadWriteMany (read-write by many nodes), ReadWriteOnce (read-write by one node), or ReadOnlyMany (read-only by many nodes).

    • persistentVolumeReclaimPolicy (optional, default Retain): With Retain, the PV enters the Released state after PVC deletion and NAS data is preserved; clean up manually. Delete must be used with the archiveOnDelete parameter, which static PVs do not support, so for static PVs even with Delete set, the PV and the NAS files are not actually deleted when you delete the PVC. To configure archiveOnDelete, see Use a dynamic NAS volume.

    • driver (required): Fixed to nasplugin.csi.alibabacloud.com for the NAS CSI driver.

    • volumeHandle (required): Unique PV identifier; must match metadata.name.

    • server (required): NAS mount target address. To find it, see Manage mount targets.

    • path (optional, default /): NAS subdirectory to mount (General-purpose NAS). For Extreme NAS, the root is /share and all paths must start with /share (for example, /share/data). The directory is created automatically if it does not exist.

    • mountOptions (optional, default NFSv3): NFS mount options, including the protocol version. For supported versions by NAS type, see NFS protocols.

  2. Apply the manifest.

    kubectl create -f pv-nas.yaml
  3. Verify the PV is available.

    kubectl get pv

    Expected output:

    NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM    STORAGECLASS     VOLUMEATTRIBUTESCLASS   REASON   AGE
    pv-nas   5Gi        RWX            Retain           Available                             <unset>                          25s

    The Available status confirms the PV is created and not yet bound to a PVC.

ACK console

  1. Log on to the ACK console. Click Clusters, then click your cluster name.

  2. In the left navigation pane, choose Volumes > Persistent Volumes.

  3. Click Create and configure the parameters. The new PV appears on the Persistent Volumes page after creation.

    • PV type: Select NAS.

    • Volume name: A name that is unique within the cluster.

    • Capacity: Matching value for PV-PVC binding; does not cap the actual NAS capacity.

    • Access mode: ReadWriteMany or ReadWriteOnce.

    • Use CNFS: Optional. CNFS adds automated O&M, cache acceleration, and performance monitoring. To bring an existing NAS file system under CNFS management, see Create a CNFS-managed NAS file system (Recommended).

    • Mount target domain name: Available only when CNFS is disabled. Select a mount target from the list, or choose Custom and provide a DNS name that resolves to a NAS mount target.

    • Mount path (Advanced options): NAS subdirectory to mount. For Extreme NAS, paths must start with /share. Created automatically if absent.

    • Reclaim policy: Retain (default) or Delete. For static PVs, Delete must be used with archiveOnDelete, which static PVs do not support, so even with Delete set, deleting the PVC does not actually delete the PV or the NAS files.

    • Mount options: NFS mount options, including the protocol version. Default is NFSv3.

    • Label: Labels to assign to the PV.

Step 2: Create a PVC

Create a PVC to claim the PV.

kubectl

  1. Create a file named pvc-nas.yaml.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-nas       # Must be unique within the namespace.
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-nas   # Binds to the PV with this label.

    Key parameters:

    • accessModes: Must match the access mode declared on the PV.

    • storage: Must be less than or equal to the PV's declared capacity for binding to succeed. Does not limit the actual NAS capacity.

    • matchLabels: Label selector targeting the PV. Must match the label set on the PV (alicloud-pvname: pv-nas).

    The matchLabels selector binds this PVC to the specific PV you created. Without it, Kubernetes may bind to any PV that satisfies the capacity and access mode requirements.
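If the cluster has a default StorageClass, a PVC without an explicit storageClassName can be dynamically provisioned instead of binding to your pre-created PV. A hedged sketch of pinning the claim to static binding, relying on the standard Kubernetes behavior that an empty storageClassName disables dynamic provisioning:

```yaml
# Sketch: storageClassName set to "" means this claim can only
# bind to a pre-created PV; no StorageClass-based provisioning
# is attempted for it.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas
spec:
  storageClassName: ""          # Empty string: static binding only.
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-nas
```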

  2. Apply the manifest.

    kubectl create -f pvc-nas.yaml
  3. Verify the PVC is bound.

    kubectl get pvc

    Expected output:

    NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
    pvc-nas    Bound    pv-nas    5Gi        RWX                           <unset>                 5s

    The Bound status confirms the PVC is linked to pv-nas and ready to use.

ACK console

  1. In the left navigation pane, choose Storage > PVCs.

  2. Click Create and configure the parameters.

    • PVC type: Select NAS.

    • Name: A name that is unique within the namespace.

    • Allocation mode: Select Existing volume to bind to the PV you created.

    • Existing volumes: Select the PV created in Step 1.

    • Capacity: Must be less than or equal to the PV's declared capacity.

    • Access mode: Must match the PV's access mode.

Step 3: Deploy an application with the NAS volume

After creating the PVC, mount it in a workload.

kubectl

  1. Create a file named deploy.yaml. This Deployment creates two replicas, each mounting the same NAS file system — useful for verifying shared storage.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nas-test
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-nas
                mountPath: "/data"       # Path inside the container where NAS is mounted.
          volumes:
            - name: pvc-nas
              persistentVolumeClaim:
                claimName: pvc-nas       # References the PVC created in Step 2.
  2. Apply the manifest.

    kubectl create -f deploy.yaml
  3. Verify the pods are running.

    kubectl get pod -l app=nginx

    Expected output:

    NAME               READY   STATUS    RESTARTS   AGE
    nas-test-****-***a   1/1     Running   0          32s
    nas-test-****-***b   1/1     Running   0          32s

ACK console

  1. In the left navigation pane, choose Workloads > Deployments.

  2. Click Create from image and configure the parameters. For parameters not listed below, keep the defaults. See Create a Deployment for full details. After deployment, click the application name on the Deployments page and confirm that all pods show a Running status on the Pods tab.

    • Basic information > Name: The Deployment name.

    • Basic information > Replicas: Number of pod replicas. Use 2 or more to verify shared storage.

    • Container > Image name: The container image address.

    • Container > Required resources: vCPU and memory requests.

    • Volume > Add PVC: Set Mount source to the PVC from Step 2 and Container path to the mount path inside the container, such as /data.

Verify storage behavior

After deployment, run the following checks to confirm shared and persistent storage.

Verify shared storage

Create a file in one pod and check for it in another.

  1. Get the pod names.

    kubectl get pod | grep nas-test

    Expected output:

    nas-test-*****a   1/1   Running   0   40s
    nas-test-*****b   1/1   Running   0   40s
  2. Create a file in the first pod.

    kubectl exec nas-test-*****a -- touch /data/test.txt
  3. Check that the file is visible in the second pod.

    kubectl exec nas-test-*****b -- ls /data

    Expected output:

    test.txt

    The file is visible, which confirms that data can be shared between pods.

Verify persistent storage

Restart the pods and confirm the file survives.

  1. Trigger a rolling restart.

    kubectl rollout restart deploy nas-test
  2. Wait for the new pods to reach Running state.

    kubectl get pod | grep nas-test

    Expected output:

    nas-test-*****c   1/1   Running   0   67s
    nas-test-*****d   1/1   Running   0   49s
  3. Confirm the file is still present in a new pod.

    kubectl exec nas-test-*****c -- ls /data

    Expected output:

    test.txt

    The file persists across pod restarts, confirming that data is stored in the NAS file system and not in the container's local storage.

Apply in production

Security

  • Restrict mount target access with permission groups. NAS uses permission groups to control which IP addresses can mount the file system. Add only the private IP addresses of your cluster nodes or the vSwitch CIDR block. Avoid 0.0.0.0/0.

Performance and cost

  • Choose the right NAS type. See Select a file system type to match NAS type to your IOPS and throughput requirements.

  • Tune mount options. NFS protocol version vers=4.0 or vers=4.1 provides better file locking for some workloads. For large sequential I/O, tuning the rsize and wsize options can improve read and write throughput.
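As an illustration, rsize and wsize could be added to the PV's mountOptions as in the fragment below. The values are starting points to benchmark against your own workload, not recommendations:

```yaml
# Sketch: PV mountOptions fragment with tuned NFS options.
# rsize/wsize set the maximum transfer size per NFS read/write
# request; 1048576 (1 MiB) is a common starting point for large
# sequential I/O.
mountOptions:
  - nolock,tcp,noresvport
  - vers=4.0                  # NFSv4.0 for file locking support.
  - rsize=1048576
  - wsize=1048576
```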

Reliability

  • Set the reclaim policy to `Retain` for production data. This prevents accidental NAS data loss if a PVC is deleted.

  • Add a liveness probe. Configure a liveness probe that checks whether the mount point is accessible. If the mount fails, Kubernetes restarts the pod, which triggers a remount.

  • Monitor with Container Storage Monitoring. Use Container Storage Monitoring to set up alerts for volume anomalies or performance degradation.
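The liveness probe suggested above might look like the following container fragment. The probe command and timings are assumptions to adapt to your image; /data matches the mount path used in the earlier Deployment:

```yaml
# Sketch: liveness probe that checks whether the NAS mount point
# responds. If the NFS mount hangs, stat blocks, the probe times
# out, and kubelet restarts the container, triggering a remount.
livenessProbe:
  exec:
    command: ["stat", "/data"]   # /data is the NAS mountPath.
  initialDelaySeconds: 10
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
```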

Clean up resources

Delete resources in the following order to avoid unexpected charges.

  1. Delete the workload. This unmounts the volume from all pods.

    kubectl delete deployment <your-deployment-name>
  2. Delete the PVC. The subsequent behavior of the bound PV depends on the reclaim policy.

    • Retain: the PV enters Released state. The PV object and NAS data are preserved — clean up both manually.

    • Delete: the PV object is also deleted. Note the following:

      • If the PV points to the NAS root directory, the backend NAS data is preserved to prevent accidental deletion.

      • If the volumeHandle of a static PV is a suffix of the path value (for example, volumeHandle is app and path is /exports/app), deleting the PVC triggers the automatic deletion of the backend NAS subdirectory. Use this setting with caution.

    kubectl delete pvc <your-pvc-name>
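The risky volumeHandle/path combination described above can be illustrated with a fragment like the following, reusing the example values from the text:

```yaml
# Sketch: a static PV where volumeHandle is a suffix of path.
# As described above, deleting the bound PVC under the Delete
# reclaim policy would also delete the backend NAS subdirectory
# /exports/app. Avoid this pattern unless that is intended.
csi:
  driver: nasplugin.csi.alibabacloud.com
  volumeHandle: app             # Suffix of the path below.
  volumeAttributes:
    server: "0c47****-mpk25.cn-shenzhen.nas.aliyuncs.com"
    path: "/exports/app"
```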
  3. Delete the PV. Only delete a PV in the Available or Released state. This removes the PV definition from Kubernetes but does not affect data on the NAS file system.

    kubectl delete pv <your-pv-name>
  4. (Optional) Delete the NAS file system. This permanently deletes all data and cannot be undone. Confirm that no services depend on this file system before proceeding. See Delete a file system.
