NAS volumes give your pods two things: data that survives pod restarts, and a shared file system that multiple pods can read and write at the same time. This guide walks through the full workflow for mounting an existing NAS file system as a static volume in an ACK cluster.
Static vs. dynamic volumes
With a static volume, you pre-create a persistent volume (PV) to represent an existing NAS file system, then create a persistent volume claim (PVC) to bind to it. This approach suits existing storage resources, but the bound PVC does not support online resizing by default.
With a dynamic volume, there is no pre-created PV — the system provisions one automatically when a PVC referencing a StorageClass is created. Dynamic volumes support resizing. See Use a dynamic NAS volume or Use CNFS to automatically resize a NAS volume.
How it works
Mounting a static NAS volume involves three steps, each performed by a different role:
-
Create a PV (cluster admin): Register the existing NAS file system in the cluster by declaring its mount target address, capacity, and access mode.
-
Create a PVC (developer): Request storage by creating a PVC. Kubernetes binds it to a matching PV automatically.
-
Mount in a pod (developer): Reference the PVC in your workload manifest. Kubernetes mounts it as a directory inside the container.
Prerequisites
Before you begin, ensure that you have:
-
The csi-plugin and csi-provisioner components installed. These are installed by default in ACK clusters — check the Add-ons page to confirm they are present. For best results, upgrade the CSI components to the latest version.
-
An existing NAS file system that meets all of the following conditions. If you don't have one, create a new file system or use a dynamic NAS volume instead.
-
Protocol type: NFS only (SMB is not supported)
-
VPC: The NAS file system must be in the same VPC as your cluster. Cross-availability zone mounting is supported; cross-VPC mounting is not
-
Mount target: A mount target in the same VPC as your cluster, in the available state. See Manage mount targets and record the mount target address
-
(Optional) Encryption type: Configure at NAS file system creation time if you need encrypted storage
NAS has limitations on mount connectivity, number of file systems, and supported protocol types.
Usage notes
-
Do not delete mount targets while in use. Deleting a mount target while a volume is mounted causes I/O exceptions on the node.
-
Handle concurrent writes in your application. NAS is a shared file system. When multiple pods mount the same volume, your application is responsible for managing data consistency from concurrent writes. See FAQ about read and write access to files.
-
Avoid setting `securityContext.fsGroup`. When this is set, kubelet recursively runs `chmod` or `chown` on the mount after each pod start, which can significantly slow pod startup. See NAS volume FAQ for optimization options.
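To illustrate the concurrent-write note above, here is a minimal Python sketch of serializing writers on a shared mount with an advisory lock. This is an assumption-laden example, not part of the NAS driver: it assumes the volume is mounted with an NFS version that supports locking (for example `vers=4.0`, not `nolock`), and it uses a temporary directory as a stand-in for the NAS mount path such as `/data`.

```python
import fcntl
import os
import tempfile

# Stand-in for the shared NAS mount path (e.g. /data in this guide).
shared_dir = tempfile.mkdtemp()
lock_path = os.path.join(shared_dir, ".writer.lock")

with open(lock_path, "w") as lock_file:
    # Block until this process holds an exclusive advisory lock.
    # NOTE: with the nolock mount option, flock is local to one node only.
    fcntl.flock(lock_file, fcntl.LOCK_EX)
    with open(os.path.join(shared_dir, "counter.txt"), "a") as f:
        f.write("one safely serialized write\n")
    # Release the lock so the next writer can proceed.
    fcntl.flock(lock_file, fcntl.LOCK_UN)

with open(os.path.join(shared_dir, "counter.txt")) as f:
    print(f.read().strip())  # -> one safely serialized write
```

Each pod that writes to the shared directory would take the same lock before writing; readers that only list or read whole files typically do not need it.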
Step 1: Create a PV
Register your existing NAS file system in the cluster by creating a PV.
kubectl
-
Create a file named
`pv-nas.yaml`.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nas # Must be unique within the cluster.
  labels:
    alicloud-pvname: pv-nas # Used to bind a specific PVC to this PV.
spec:
  capacity:
    storage: 5Gi # Matching value only — does not cap actual NAS capacity.
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: nasplugin.csi.alibabacloud.com # Fixed value for the Alibaba Cloud NAS CSI driver.
    volumeHandle: pv-nas # Must match metadata.name; must be unique per PV.
    volumeAttributes:
      server: "0c47****-mpk25.cn-shenzhen.nas.aliyuncs.com" # Replace with your mount target address.
      path: "/csi" # Subdirectory to mount; created automatically if absent.
  mountOptions:
    - nolock,tcp,noresvport # Improves reliability for stateless NFS connections.
    - vers=3 # NFS protocol version. Use vers=4.0 or vers=4.1 for file locking support.
```

Key parameters:

| Parameter | Required | Default | Description |
| --- | --- | --- | --- |
| `storage` | Yes | — | Capacity declaration for PV-PVC matching. Does not limit actual NAS capacity, which is governed by NAS specifications (General-purpose NAS, Extreme NAS). |
| `accessModes` | Yes | — | `ReadWriteMany`: read-write by many nodes. `ReadWriteOnce`: read-write by one node. `ReadOnlyMany`: read-only by many nodes. |
| `persistentVolumeReclaimPolicy` | No | `Retain` | `Retain`: after PVC deletion, the PV enters the `Released` state and NAS data is preserved — clean up manually. `Delete`: this policy must be used with the `archiveOnDelete` parameter, which static PVs do not support. Therefore, for static PVs, even if this policy is set to `Delete`, the PV and the NAS files are not actually deleted when you delete the PVC. To configure `archiveOnDelete`, see Use a dynamic NAS volume. |
| `driver` | Yes | — | Fixed to `nasplugin.csi.alibabacloud.com` for NAS CSI. |
| `volumeHandle` | Yes | — | Unique PV identifier; must match `metadata.name`. |
| `server` | Yes | — | NAS mount target address. To find it, see Manage mount targets. |
| `path` | No | `/` (General-purpose NAS) | NAS subdirectory to mount. For Extreme NAS, the root is `/share` and all paths must start with `/share` (for example, `/share/data`). The directory is created automatically if it does not exist. |
| `mountOptions` | No | NFSv3 | NFS mount options, including the protocol version. For supported versions by NAS type, see NFS protocols. |
-
Apply the manifest.
```shell
kubectl create -f pv-nas.yaml
```

-
Verify the PV is available.
```shell
kubectl get pv
```

Expected output:

```
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pv-nas   5Gi        RWX            Retain           Available                          <unset>                          25s
```

The `Available` status confirms the PV is created and not yet bound to a PVC.
ACK console
-
Log on to the ACK console. Click Clusters, then click your cluster name.
-
In the left navigation pane, choose Volumes > Persistent Volumes.
-
Click Create and configure the parameters. The new PV appears on the Persistent Volumes page after creation.
| Parameter | Description |
| --- | --- |
| PV type | Select NAS. |
| Volume name | A name that is unique within the cluster. |
| Capacity | Matching value for PV-PVC binding; does not cap actual NAS capacity. |
| Access mode | `ReadWriteMany` or `ReadWriteOnce`. |
| Use CNFS | Optional. CNFS adds automated O&M, cache acceleration, and performance monitoring. To bring an existing NAS file system under CNFS management, see Create a CNFS-managed NAS file system (Recommended). |
| Mount target domain name | Available only when CNFS is disabled. Select a mount target from the list, or choose Custom and provide a DNS name that resolves to a NAS mount target. |
| Mount path (Advanced options) | NAS subdirectory to mount. For Extreme NAS, paths must start with `/share`. Created automatically if absent. |
| Reclaim policy | `Retain` (default) or `Delete`. For static PVs, `Delete` must be used with `archiveOnDelete`, which static PVs do not support. Therefore, for static PVs, even if the policy is set to `Delete`, deleting the PVC does not actually delete the PV or the NAS files. |
| Mount options | NFS mount options, including the protocol version. Default is NFSv3. |
| Label | Labels to assign to the PV. |
Step 2: Create a PVC
Create a PVC to claim the PV.
kubectl
-
Create a file named
`pvc-nas.yaml`.

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas # Must be unique within the namespace.
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-nas # Binds to the PV with this label.
```

Key parameters:

| Parameter | Description |
| --- | --- |
| `accessModes` | Must match the access mode declared on the PV. |
| `storage` | Must be less than or equal to the PV's declared capacity for binding to succeed. Does not limit actual NAS capacity. |
| `matchLabels` | Label selector targeting the PV. Must match the label set on the PV (`alicloud-pvname: pv-nas`). |

The `matchLabels` selector binds this PVC to the specific PV you created. Without it, Kubernetes may bind to any PV that satisfies the capacity and access mode requirements.

-
Apply the manifest.
```shell
kubectl create -f pvc-nas.yaml
```

-
Verify the PVC is bound.
```shell
kubectl get pvc
```

Expected output:

```
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
pvc-nas   Bound    pv-nas   5Gi        RWX                           <unset>                 5s
```

The `Bound` status confirms the PVC is linked to `pv-nas` and ready to use.
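As an alternative to label selectors, a PVC can pin a specific PV by name with the standard Kubernetes `spec.volumeName` field. This is a generic Kubernetes mechanism, sketched here for the PV and PVC names used in this guide:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nas
spec:
  volumeName: pv-nas   # Bind directly to this PV instead of selector-based matching.
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```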
ACK console
-
In the left navigation pane, choose Storage > PVCs.
-
Click Create and configure the parameters.
| Parameter | Description |
| --- | --- |
| PVC type | Select NAS. |
| Name | A name that is unique within the namespace. |
| Allocation mode | Select Existing volume to bind to the PV you created. |
| Existing volumes | Select the PV created in Step 1. |
| Capacity | Must be less than or equal to the PV's declared capacity. |
| Access mode | Must match the PV's access mode. |
Step 3: Deploy an application with the NAS volume
After creating the PVC, mount it in a workload.
kubectl
-
Create a file named
`deploy.yaml`. This Deployment creates two replicas, each mounting the same NAS file system — useful for verifying shared storage.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nas-test
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        ports:
        - containerPort: 80
        volumeMounts:
        - name: pvc-nas
          mountPath: "/data" # Path inside the container where NAS is mounted.
      volumes:
      - name: pvc-nas
        persistentVolumeClaim:
          claimName: pvc-nas # References the PVC created in Step 2.
```

-
Apply the manifest.
```shell
kubectl create -f deploy.yaml
```

-
Verify the pods are running.
```shell
kubectl get pod -l app=nginx
```

Expected output:

```
NAME                 READY   STATUS    RESTARTS   AGE
nas-test-****-***a   1/1     Running   0          32s
nas-test-****-***b   1/1     Running   0          32s
```
ACK console
-
In the left navigation pane, choose Workloads > Deployments.
-
Click Create from image and configure the parameters. For parameters not listed below, keep the defaults. See Create a Deployment for full details. After deployment, click the application name on the Stateless page and confirm all pods show a Running status on the Pods tab.
| Section | Parameter | Description |
| --- | --- | --- |
| Basic information | Name | The Deployment name. |
| Basic information | Replicas | Number of pod replicas. Use 2 or more to verify shared storage. |
| Container | Image name | The container image address. |
| Container | Required resources | vCPU and memory requests. |
| Volume | Add PVC | Mount source: the PVC from Step 2. Container path: the mount path inside the container, such as /data. |
Verify storage behavior
After deployment, run the following checks to confirm shared and persistent storage.
Verify shared storage
Create a file in one pod and check for it in another.
-
Get the pod names.
```shell
kubectl get pod | grep nas-test
```

Expected output:

```
nas-test-*****a   1/1   Running   0   40s
nas-test-*****b   1/1   Running   0   40s
```

-
Create a file in the first pod.
```shell
kubectl exec nas-test-*****a -- touch /data/test.txt
```

-
Check that the file is visible in the second pod.
```shell
kubectl exec nas-test-*****b -- ls /data
```

Expected output:

```
test.txt
```

The file is visible, which confirms that data can be shared between pods.
Verify persistent storage
Restart the pods and confirm the file survives.
-
Trigger a rolling restart.
```shell
kubectl rollout restart deploy nas-test
```

-
Wait for the new pods to reach the `Running` state.

```shell
kubectl get pod | grep nas-test
```

Expected output:

```
nas-test-*****c   1/1   Running   0   67s
nas-test-*****d   1/1   Running   0   49s
```

-
Confirm the file is still present in a new pod.
```shell
kubectl exec nas-test-*****c -- ls /data
```

Expected output:

```
test.txt
```

The file persists across pod restarts, confirming that data is stored in the NAS file system and not in the container's local storage.
Apply in production
Security
-
Restrict mount target access with permission groups. NAS uses permission groups to control which IP addresses can mount the file system. Add only the private IP addresses of your cluster nodes or the vSwitch CIDR block. Avoid `0.0.0.0/0`.
Performance and cost
-
Choose the right NAS type. See Select a file system type to match NAS type to your IOPS and throughput requirements.
-
Tune mount options. NFS protocol version `vers=4.0` or `vers=4.1` provides better file locking in some workloads. For large sequential I/O, tuning `rsize` and `wsize` can improve read and write throughput.
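For example, a PV's `mountOptions` could be tuned as follows. The values are illustrative assumptions, not recommendations for every workload; confirm that your NAS type supports the chosen NFS version before applying them:

```yaml
mountOptions:
  - vers=4.0          # NFSv4 adds working file locks for multi-pod writers.
  - noresvport        # Improves connection reliability.
  - rsize=1048576     # 1 MiB NFS read size for large sequential reads.
  - wsize=1048576     # 1 MiB NFS write size for large sequential writes.
```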
Reliability
-
Set the reclaim policy to `Retain` for production data. This prevents accidental loss of NAS data if a PVC is deleted.
-
Add a liveness probe. Configure a liveness probe that checks whether the mount point is accessible. If the mount fails, Kubernetes restarts the pod, which triggers a remount.
-
Monitor with Container Storage Monitoring. Use Container Storage Monitoring to set up alerts for volume anomalies or performance degradation.
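The liveness probe suggested above can be sketched as follows. This is a minimal example with illustrative timing values, assuming `/data` is the container mount path used in this guide:

```yaml
livenessProbe:
  exec:
    command: ["stat", "/data"]   # Fails or hangs if the NAS mount is broken.
  initialDelaySeconds: 10
  periodSeconds: 30
  timeoutSeconds: 5              # A hung NFS mount blocks stat; the timeout turns that into a probe failure.
```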
Clean up resources
Delete resources in the following order to avoid unexpected charges.
-
Delete the workload. This unmounts the volume from all pods.
```shell
kubectl delete deployment <your-deployment-name>
```

-
Delete the PVC. The subsequent behavior of the bound PV depends on the reclaim policy.
-
`Retain`: the PV enters the `Released` state. The PV object and NAS data are preserved — clean up both manually.
-
`Delete`: the PV object is also deleted. Note the following:
-
If the PV points to the NAS root directory, the backend NAS data is preserved to prevent accidental deletion.
-
If the `volumeHandle` of a static PV is a suffix of the `path` value (for example, `volumeHandle` is `app` and `path` is `/exports/app`), deleting the PVC triggers automatic deletion of the backend NAS subdirectory. Use this setting with caution.
```shell
kubectl delete pvc <your-pvc-name>
```

-
Delete the PV. Only delete a PV in the `Available` or `Released` state. This removes the PV definition from Kubernetes but does not affect data on the NAS file system.

```shell
kubectl delete pv <your-pv-name>
```

-
(Optional) Delete the NAS file system. This permanently deletes all data and cannot be undone. Confirm that no services depend on this file system before proceeding. See Delete a file system.
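The `volumeHandle`/`path` suffix rule described in the cleanup notes can be expressed as a small check. The helper below is hypothetical, written only to illustrate the documented rule; the actual CSI driver logic may differ:

```python
def nas_subdir_deleted_on_pvc_delete(volume_handle: str, path: str) -> bool:
    """Illustrative check of the documented rule: with the Delete reclaim
    policy, the backend NAS subdirectory is removed only when the static
    PV's volumeHandle is a suffix of its path. Data under the NAS root
    directory ("/") is always preserved."""
    return path != "/" and path.endswith(volume_handle)

# Documented example: volumeHandle "app" with path "/exports/app".
print(nas_subdir_deleted_on_pvc_delete("app", "/exports/app"))  # True
# The PV from Step 1: volumeHandle "pv-nas" with path "/csi".
print(nas_subdir_deleted_on_pvc_delete("pv-nas", "/csi"))       # False
```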
What's next
-
If you encounter mount errors or I/O issues, see Troubleshoot storage issues and NAS volume FAQ.
-
To manage NAS file systems with enhanced performance and QoS controls, see Manage NAS file systems by using CNFS.