Manually creating persistent volumes (PVs) for each workload does not scale when multiple pods need shared, persistent file storage. Dynamic provisioning automatically creates and binds PVs to persistent volume claims (PVCs) based on a StorageClass, which makes it ideal for workloads that require concurrent read/write access across pods.
Dynamic provisioning workflow
When you create a PVC, the system uses the referenced StorageClass to automatically create a PV and the corresponding NAS storage resource.
Create a StorageClass: Define a storage template that specifies the NAS mount target and mount mode.
Create a PVC: Request storage. The system automatically creates a PV and binds it to the PVC.
Mount the volume: Reference the PVC in your pod spec to mount the NAS volume inside the container.
Choose a mount mode
Set the mount mode with the volumeAs parameter in the StorageClass. This parameter defines how each PV maps to NAS storage.
Mount mode | How it works | Best for | Cost | Isolation |
subpath (recommended) | Each PV maps to a dedicated subdirectory in a single NAS file system. | Multiple pods that share or isolate data on a single NAS file system. | Low—one file system is shared across PVs. | Directory-level isolation. |
sharepath | All PVs map to the same directory defined in the StorageClass. | Multiple pods across namespaces that need access to the same NAS subdirectory. | Low—one file system, one directory. | No isolation—all PVs share one directory. |
filesystem (not recommended) | Each PV maps to a newly created, standalone NAS file system instance. | Workloads that require strict performance or security isolation. | High—one file system and mount target per PV. | Full file system isolation. |
For most use cases, subpath offers the best balance of isolation and cost. Use sharepath when you need cross-namespace access to the same directory. Use filesystem only when strict isolation is required, and you can accept the higher cost.
Prerequisites
Before you begin, make sure that the following requirements are met:
The csi-plugin and csi-provisioner components are installed in your cluster.
These CSI components are installed by default. You can verify their status on the Add-ons page. If necessary, upgrade the CSI components to the latest version.
(Subpath and sharepath only) A NAS file system that meets the following conditions. If no file system exists, create one.
Protocol type: NFS only.
VPC: The NAS file system must be in the same VPC as your cluster. NAS supports cross-zone mounting but does not support cross-VPC mounting.
Mount target: A mount target in the same VPC as your cluster with an active status. For more information, see Manage mount targets. Note the mount target address for later use.
(Optional) Encryption type: To encrypt data on the storage volume, configure the encryption type when you create the NAS file system.
NAS has limits on the number of mount connections, file systems, and supported protocol types.
(Subpath only) CSI component version 1.31.4 or later. For upgrade instructions, see Upgrade CSI components.
Before you start
Do not delete the mount target while volumes are in use. Deleting the mount target in the NAS console while volumes are mounted causes I/O errors on the node.
Handle concurrent writes at the application level. NAS is a shared storage service. When multiple pods write to the same volume, your application must manage data consistency. For more information, see Multiple processes writing to the same log file and File read and write issues.
Avoid securityContext.fsGroup when possible. Setting securityContext.fsGroup in your pod spec causes kubelet to recursively run chmod or chown after mounting, which can significantly slow down pod startup. For optimization options, see NAS Persistent Volume FAQ.
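For reference, a minimal sketch of the pod-spec pattern this note warns about — the pod name, image, and PVC name are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-example   # illustrative name
spec:
  securityContext:
    # fsGroup triggers a recursive chown/chmod of every file on the NAS
    # volume at mount time; avoid it on large volumes, or see the FAQ.
    fsGroup: 1000
  containers:
    - name: app
      image: nginx:1.25   # illustrative image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nas-csi-pvc
```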
Method 1: Mount using subpath (recommended)
In subpath mode, each PVC automatically gets a dedicated subdirectory in the NAS file system as its PV.
Step 1: Create a StorageClass
kubectl
Create a file named alicloud-nas-subpath.yaml with the following content.
StorageClass parameters (subpath)
Parameter | Description | Default |
mountOptions | Mount options for NAS, including the NFS protocol version. NAS uses NFS v3 by default. To use a different version, set vers=4.0. For supported NFS versions per NAS type, see NFS protocol. | vers=3 |
parameters.volumeAs | Mount mode. Set to subpath. | -- |
parameters.server | The NAS mount target address and subdirectory path, in the format <mount-target-address>:/<path>. <mount-target-address> is the mount target address (see Manage mount targets). :/<path> is the NAS subdirectory to mount; if unset, or if the subdirectory does not exist, the root directory is mounted by default. General-purpose NAS root: /. Extreme NAS root: /share (subdirectory paths must start with /share, for example, /share/data). | Root directory |
parameters.archiveOnDelete | Controls whether backend data is permanently deleted when you delete a PVC. Applies only when reclaimPolicy is Delete. true: data is not deleted; the subdirectory is archived and renamed to archived-{pvName}.{timestamp}. false: the subdirectory and its data are permanently deleted (only the subdirectory, not the NAS file system). Note: setting this to false with frequent PV creation and deletion may block the CSI controller task queue and prevent new PVs from being provisioned. See CSI controller task queue is full. | true |
provisioner | The CSI driver. Set to nasplugin.csi.alibabacloud.com for Alibaba Cloud NAS. | -- |
reclaimPolicy | Reclaim policy for the PV. Delete: handles backend data based on the archiveOnDelete setting. Retain: the PV and NAS data remain intact when you delete the PVC; you must delete them manually. Use Retain for production environments. | Delete |
allowVolumeExpansion | Enables online expansion of PVs by modifying the PVC capacity. Supported only for General-purpose NAS. The CSI driver uses NAS directory quota to enforce capacity limits. To expand, edit the spec.resources.requests.storage field in the PVC; see Set directory quotas for NAS dynamically provisioned volumes. Note: NAS directory quotas are applied asynchronously, so immediately after creating or expanding a PV, high-speed bulk writes may exceed the quota before it fully takes effect. See Limits. | -- |
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # StorageClass name. Must be unique in the cluster.
  name: alicloud-nas-subpath
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  # Mount mode: subpath creates a subdirectory per PV.
  volumeAs: subpath
  # NAS mount target address and base path. Format: <mount-target-address>:/<path>
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s"
  # Archive (not delete) subdirectory data when PVC is deleted with reclaimPolicy: Delete.
  archiveOnDelete: "true"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
allowVolumeExpansion: true
```

Create the StorageClass.
kubectl create -f alicloud-nas-subpath.yaml
Console
Log on to the ACK console. In the left navigation pane, click Clusters.
On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Volumes > StorageClasses.
Click Create. Enter a unique name for the StorageClass, set PV Type to NAS, and complete the remaining fields as described below.
Configuration item
Description
Select Mount Target
The mount target address of the NAS file system.
Volume Mode
Select Subdirectory (subpath). The system automatically creates a subdirectory under the mount path. Data is stored at <NAS mount target>:<mount path>/<pv-name>/.
Mount Path
The subdirectory of the NAS file system to mount. If you leave this blank, the root directory is mounted by default. If the directory does not exist, it is automatically created.
General-purpose NAS: Root directory is /.
Extreme NAS: Root directory is /share. Subdirectory paths must start with /share (for example, /share/data).
Reclaim Policy
Reclaim policy for the PV.
Delete (default): Handles backend data based on the archiveOnDelete setting. Retain: The PV and NAS data remain intact. You must delete them manually.
Mount Options
Mount options for NAS, including the NFS protocol version. The default version is NFS v3. To use a different version, set vers=4.0. For supported versions, see NFS protocol.
After the StorageClass is created, you can view it in the Storage Classes list on the console.
Step 2: Create a PVC
kubectl
Create a file named nas-pvc.yaml with the following content.
PVC parameters
Parameter | Description | Default |
accessModes | Volume access mode. This field is required. Valid values: ReadWriteMany (read-write by many nodes; recommended for NAS), ReadWriteOnce (read-write by a single node), ReadOnlyMany (read-only by many nodes). | ReadWriteMany |
storageClassName | The name of the StorageClass to bind. | -- |
storage | Requested volume capacity. By default, this is a resource request and does not restrict the actual storage available to the pod. The maximum NAS capacity depends on the specification; see General-purpose NAS and Extreme NAS. When allowVolumeExpansion is set to true in the StorageClass, this value becomes a hard limit enforced by NAS directory quota. | -- |
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
spec:
  accessModes:
    - ReadWriteMany
  # Reference the StorageClass created in Step 1.
  storageClassName: alicloud-nas-subpath
  resources:
    requests:
      # Requested volume capacity. See the storage parameter description above.
      storage: 20Gi
```

Create the PVC.
kubectl create -f nas-pvc.yaml

Verify that the PV was automatically created and bound:

kubectl get pv

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
nas-a7540d97-0f53-4e05-b7d9-557309****** 20Gi RWX Retain Bound default/nas-csi-pvc alicloud-nas-subpath <unset> 5m
Console
In the left navigation pane, choose Volumes > Persistent Volume Claims.
On the Persistent Volume Claims page, click Create. Configure the PVC as described below and click Create.
Configuration item
Description
PVC Type
Select NAS.
Name
PVC name. Must be unique within the namespace.
Allocation Mode
Select Use StorageClass.
Existing Storage Class
Click Select and choose the StorageClass you created earlier.
Capacity
Requested volume capacity. By default, this is a resource request and does not restrict the actual storage available to the pod. The maximum NAS capacity depends on the specification. See General-purpose NAS and Extreme NAS.
When allowVolumeExpansion is set to true in the StorageClass, this value becomes a hard limit enforced by NAS directory quota.
Access Mode
Volume access mode. Valid values:
ReadWriteMany (default): Read-write by many nodes.
ReadWriteOnce: Read-write by a single node.
ReadOnlyMany: Read-only by many nodes.
Step 3: Create an application and mount the volume
Mount the PVC to your application pods. The following example creates two Deployments that reference the same PVC to share a single NAS subdirectory.
To mount different subdirectories of the same NAS file system to multiple pods, create separate StorageClasses and PVCs for each subdirectory, then mount each PVC separately.
kubectl
Create two files named nginx-1.yaml and nginx-2.yaml with the following content. Both reference the same PVC.
Create the two Deployments.
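The Deployment manifests are not reproduced on this page. The following is a minimal sketch of nginx-1.yaml, consistent with the pod names and the /data mount path used in the verification steps; the image is illustrative, and nginx-2.yaml is identical except that the name is nas-test-2:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nas-test-1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # illustrative image
          volumeMounts:
            # Mount the NAS volume at /data inside the container.
            - name: nas-pvc
              mountPath: /data
      volumes:
        - name: nas-pvc
          persistentVolumeClaim:
            # PVC created in Step 2; both Deployments reference it.
            claimName: nas-csi-pvc
```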
kubectl create -f nginx-1.yaml -f nginx-2.yaml

Check the pod status:
kubectl get pod -l app=nginx

NAME READY STATUS RESTARTS AGE
nas-test-1-b75d5b6bc-***** 1/1 Running 0 51s
nas-test-2-b75d5b6bc-***** 1/1 Running 0 44s

Confirm that the PVC is mounted. Replace <podName> with the actual pod name. Both pods should show nas-csi-pvc as the claim name, confirming they share the same NAS subdirectory.

kubectl describe pod <podName> | grep "ClaimName:"
Console
Repeat the following steps to create two Deployments that mount the same PVC.
On the ACK Clusters page, click the name of the target cluster. In the left navigation pane, choose Workloads > Deployments.
Click Create from Image. Configure the application as described below. Keep other parameters at their default values. For more information, see Create a Deployment.
Configuration page
Parameter
Description
Basic Information
Replicas
Number of replicas for the Deployment.
Container
Image Name
Image address used to deploy the application.
Required Resources
vCPU and memory resources required.
Volume
Click Add PVC, then complete:
Mount Source: Select the PVC created earlier.
Container Path: Path in the container where the NAS file system is mounted.
After the Deployment completes, click the application name on the Stateless page and confirm that the pod status is Running on the Container Group tab.
To verify that storage is shared and persistent, see Verify shared and persistent storage.
Method 2: Mount using sharepath
In sharepath mode, all PVCs created from the same StorageClass map to the same NAS directory. No new directories are created for individual PVs. Use this mode when pods in different namespaces need to read from and write to the same files.
Step 1: Create a StorageClass
kubectl
Create a file named alicloud-nas-sharepath.yaml with the following content.
StorageClass parameters (sharepath)
Parameter | Description | Default |
mountOptions | Mount options for NAS, including the NFS protocol version. The default version is NFS v3. To use a different version, set vers=4.0. For supported versions, see NFS protocol. | vers=3 |
parameters.volumeAs | Mount mode. Set to sharepath. | -- |
parameters.server | The NAS mount target address and subdirectory path, in the format <mount-target-address>:/<path>. <mount-target-address> is the mount target address (see Manage mount targets). :/<path> is the NAS subdirectory to mount; if unset, or if the subdirectory does not exist, the root directory is mounted by default. General-purpose NAS root: /. Extreme NAS root: /share (subdirectory paths must start with /share). | Root directory |
provisioner | The CSI driver. Set to nasplugin.csi.alibabacloud.com. | -- |
reclaimPolicy | For sharepath, always set this to Retain to prevent data loss. The default Kubernetes value is Delete. | -- |
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-sharepath
mountOptions:
  - nolock,tcp,noresvport
  - vers=3
parameters:
  # Mount mode: sharepath maps all PVs to the same directory.
  volumeAs: sharepath
  # NAS mount target address and shared directory path.
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s"
provisioner: nasplugin.csi.alibabacloud.com
# Sharepath only supports Retain.
reclaimPolicy: Retain
```

Create the StorageClass.
kubectl create -f alicloud-nas-sharepath.yaml
Console
On the Clusters page, click the name of the target cluster. In the left navigation pane, choose Volumes > StorageClasses.
Click Create. Enter a unique name for the StorageClass, set PV Type to NAS, and complete the remaining fields as described below.
Configuration item
Description
Select mount target
The mount target address of the NAS file system.
Volume mode
Select Shared directory (sharepath).
Mount path
The subdirectory of the NAS file system to mount. If you leave this blank, the root directory is mounted by default. If the directory does not exist, it is automatically created.
- General-purpose NAS: Root directory is /.
- Extreme NAS: Root directory is /share. Subdirectory paths must start with /share.
Reclaim Policy
For sharepath, set this to Retain.
Mount options
Mount options for NAS, including the NFS protocol version. The default version is NFS v3. To use a different version, set vers=4.0. For supported versions, see NFS protocol.
Step 2: Create PVCs in multiple namespaces
To demonstrate cross-namespace sharing, create a PVC with the same name in two namespaces. Although the PVCs share a name, they are independent resources in separate namespaces. Both use the same StorageClass to obtain PVs that point to the same NAS directory.
kubectl
Create the ns1 and ns2 namespaces.

kubectl create ns ns1
kubectl create ns ns2

Create a file named pvc.yaml with the following content.
PVC parameters
Parameter | Description | Default |
accessModes | Volume access mode. This field is required. Valid values: ReadWriteMany (read-write by many nodes; recommended for NAS), ReadWriteOnce (read-write by a single node), ReadOnlyMany (read-only by many nodes). | ReadWriteMany |
storageClassName | The name of the StorageClass to bind. | -- |
storage | Requested volume capacity. By default, this is a resource request and does not restrict the actual storage available to the pod. The maximum NAS capacity depends on the specification; see General-purpose NAS and Extreme NAS. When allowVolumeExpansion is set to true in the StorageClass, this value becomes a hard limit enforced by NAS directory quota. | -- |
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
  namespace: ns1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: alicloud-nas-sharepath
  resources:
    requests:
      storage: 20Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
  namespace: ns2
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: alicloud-nas-sharepath
  resources:
    requests:
      storage: 20Gi
```

Create the PVCs.
kubectl create -f pvc.yaml

Verify that PVs were automatically created and bound. Both PVs should show Bound status, each linked to a PVC in a different namespace:

kubectl get pv

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
nas-0b448885-6226-4d22-8a5b-d0768c****** 20Gi RWX Retain Bound ns1/nas-csi-pvc alicloud-nas-sharepath <unset> 74s
nas-bcd21c93-8219-4a11-986b-fd934a****** 20Gi RWX Retain Bound ns2/nas-csi-pvc alicloud-nas-sharepath <unset> 74s
Console
Create namespaces.
On the Clusters page, click the cluster name. In the left navigation pane, click Namespaces and Quotas.
Click Create. Follow the prompts to create the ns1 and ns2 namespaces.
In the left navigation pane, choose Volumes > Persistent Volume Claims.
Create a PVC in the ns1 namespace. Set Namespace to ns1 and configure the PVC as described below.
Configuration item
Description
PVC Type
Select NAS.
Name
PVC name. Must be unique within the namespace.
Allocation Mode
Select Use StorageClass.
Existing Storage Class
Click Select and choose the StorageClass created earlier.
Capacity
Requested volume capacity. By default, this is a resource request and does not restrict actual storage.
When allowVolumeExpansion is set to true in the StorageClass, this value becomes a hard limit enforced by NAS directory quota.
Access Mode
Volume access mode. Valid values:
ReadWriteMany (default): Read-write by many nodes.
ReadWriteOnce: Read-write by a single node.
ReadOnlyMany: Read-only by many nodes.
Repeat the previous step to create a PVC in the ns2 namespace.
After the PVCs are created, return to the Persistent Volume Claims page and confirm that the PVCs in ns1 and ns2 are bound to automatically created PVs.
Step 3: Create applications and mount the volume
Create Deployments in both namespaces that mount their respective PVCs to share the NAS directory defined in the StorageClass.
kubectl
Create two files named nginx-ns1.yaml and nginx-ns2.yaml. Each binds the PVC in its namespace.
Create the two Deployments.
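The manifests are not reproduced on this page. The following is a minimal sketch of nginx-ns1.yaml, assuming the PVC name from Step 2; the image is illustrative, and nginx-ns2.yaml differs only in namespace: ns2:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nas-test
  namespace: ns1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # illustrative image
          volumeMounts:
            - name: nas-pvc
              mountPath: /data
      volumes:
        - name: nas-pvc
          persistentVolumeClaim:
            # Binds the PVC named nas-csi-pvc in this namespace (ns1).
            claimName: nas-csi-pvc
```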
kubectl create -f nginx-ns1.yaml -f nginx-ns2.yaml

Check the pod status:
kubectl get pod -A -l app=nginx

NAMESPACE NAME READY STATUS RESTARTS AGE
ns1 nas-test-b75d5b6bc-***** 1/1 Running 0 2m19s
ns2 nas-test-b75d5b6bc-***** 1/1 Running 0 2m11s

Confirm that the PVC is mounted. Replace <namespace-name> and <pod-name> with actual values. The two pods mount ns1/nas-csi-pvc and ns2/nas-csi-pvc, respectively. Both point to the same NAS directory.

kubectl describe pod -n <namespace-name> <pod-name> | grep "ClaimName:"
Console
On the ACK Clusters page, click the name of the target cluster. In the left navigation pane, choose Workloads > Deployments.
Create a Deployment in the ns1 namespace and mount the corresponding PVC.
Set Namespace to ns1 and click Create from Image.
Complete the application creation as described below. Keep other parameters at their default values. For more information, see Create a Deployment.
Configuration page
Parameter
Description
Basic Information
Replicas
Number of replicas for the Deployment.
Container
Image Name
Image address used to deploy the application.
Required Resources
vCPU and memory resources required.
Volumes
Click Add PVC, then complete:
Mount Source: Select the PVC created earlier.
Container Path: Path in the container where the NAS file system is mounted (for example, /data).
Repeat the previous step to create a Deployment in the ns2 namespace and mount the corresponding PVC.
Return to the Stateless page to check the Deployment status in both namespaces and confirm that pods are running with their PVCs mounted.
To verify that storage is shared and persistent, see Verify shared and persistent storage.
Method 3: Mount using filesystem (not recommended)
In filesystem mode, each PV maps to an independent NAS file system instance and mount target. Use this mode only when you require strict isolation for performance or security, and can accept the higher cost.
Each filesystem-mode PV creates one independent NAS file system and one mount target.
By default, NAS file systems and mount targets are retained when you delete a filesystem-mode PV. To delete them along with the PV, set both reclaimPolicy: Delete and parameters.deleteVolume: "true" in the StorageClass.
For ACK dedicated clusters, grant the required permissions to csi-provisioner. See the authorization steps below.
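This page does not include a full filesystem-mode example. The sketch below shows only the parameters named above (volumeAs, deleteVolume, reclaimPolicy); filesystem mode also needs network settings for the file system being created, and the vpcId and vSwitchId names shown here are placeholders to confirm against the CSI plugin documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-fs   # illustrative name
parameters:
  # Mount mode: filesystem creates one NAS file system and mount target per PV.
  volumeAs: filesystem
  # Delete the backend NAS file system together with the PV.
  # Takes effect only with reclaimPolicy: Delete.
  deleteVolume: "true"
  # Placeholders: network location of the new file system; confirm the
  # exact parameter names against the CSI plugin documentation.
  vpcId: "vpc-****"
  vSwitchId: "vsw-****"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete
```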
Verify shared and persistent storage
After you deploy your application, verify that the volume works as expected. The following examples use subpath mode with the nas-test-1 and nas-test-2 Deployments.
Verify shared storage
Create a file in one pod and check for it in another pod.
Get the pod names:
kubectl get pod | grep nas-test

nas-test-1-b75d5b6bc-***** 1/1 Running 0 50s
nas-test-2-b75d5b6bc-***** 1/1 Running 0 60s

Create a file in the first pod. Replace the pod name with the actual value.

kubectl exec nas-test-1-b75d5b6bc-***** -- touch /data/test.txt

Check for the file in the second pod. The file should appear in both pods, which confirms shared storage.

kubectl exec nas-test-2-b75d5b6bc-***** -- ls /data

test.txt
Verify persistent storage
Restart the Deployment and check for the file in the new pod.
Restart the Deployment to trigger pod recreation.
kubectl rollout restart deploy nas-test-1

Wait for the new pod to reach the Running state:

kubectl get pod | grep nas-test

nas-test-1-5bb845b795-***** 1/1 Running 0 115m
nas-test-2-5b6bccb75d-***** 1/1 Running 0 103m

Check for the previously created file in the new pod. Replace the pod name with the actual value. The file should persist after the pod restart, which confirms persistent storage.

kubectl exec nas-test-1-5bb845b795-***** -- ls /data

test.txt
Recommendations for production
Security and data protection
Use the Retain reclaim policy. Set reclaimPolicy to Retain in your StorageClass to prevent accidental data loss when a PVC is deleted.
Restrict access with NAS permission groups. NAS uses permission groups to manage network access. Follow the principle of least privilege by adding only the private IP addresses of cluster nodes or their vSwitch CIDR block. Avoid granting overly broad permissions such as 0.0.0.0/0.
Performance and cost
Select a suitable NAS type. See Select a file system type to choose a NAS type that matches the IOPS and throughput requirements of your application.
Optimize mount options. Adjust NFS mount parameters based on your workload characteristics. Using vers=4.0 or vers=4.1 may improve performance and file locking in some scenarios. For large-scale reads and writes, test different rsize and wsize values to optimize throughput.
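As an example of the tuning described above, a StorageClass mountOptions fragment — the values are illustrative starting points, not recommendations; benchmark with your own workload:

```yaml
mountOptions:
  - nolock,tcp,noresvport
  # NFS v4.0 can improve file locking behavior in some scenarios.
  - vers=4.0
  # Larger read/write block sizes can raise throughput for bulk I/O;
  # 1048576 (1 MiB) is a common upper bound for NFS.
  - rsize=1048576
  - wsize=1048576
```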
Operations and reliability
Configure liveness probes. Add liveness probes to your pods to check whether the mount target is reachable. If a mount fails, ACK can automatically restart the pod to trigger a remount.
Enable storage monitoring. Use container storage monitoring to set up alerts and detect volume exceptions or performance bottlenecks.
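The liveness probe recommended above can be sketched as follows; the image, the /data mount path, and the timing values are illustrative:

```yaml
containers:
  - name: app
    image: nginx:1.25   # illustrative image
    volumeMounts:
      - name: nas-pvc
        mountPath: /data
    livenessProbe:
      exec:
        # Fails if the NAS mount is unreachable, so kubelet restarts
        # the pod and triggers a remount.
        command: ["sh", "-c", "ls /data > /dev/null"]
      initialDelaySeconds: 10
      periodSeconds: 30
      timeoutSeconds: 10
```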
Clean up resources
To avoid unexpected charges, release resources in the following order when you no longer need the NAS volumes:
Step 1: Delete workloads
Delete all Deployments, StatefulSets, and other workloads that use the NAS volume. This unmounts the volume from all pods.
kubectl delete deployment <your-deployment-name>

Step 2: Delete PVCs
Delete the PVCs associated with your workloads. The behavior of the bound PV and backend NAS depends on the reclaimPolicy defined in the StorageClass.
kubectl delete pvc <your-pvc-name>

Step 3: Delete PVs
Delete a PV when its status is Available or Released. This removes only the PV definition from the cluster—it does not delete data on the backend NAS file system.
kubectl delete pv <your-pv-name>

Step 4: Delete the backend NAS file system (optional)
subpath and sharepath modes: See Delete a file system. This operation permanently deletes all data on the NAS file system and cannot be undone. Confirm that no workloads depend on the file system before proceeding.
filesystem mode: If the backend NAS file system was not automatically deleted when you deleted the PVC, see Delete a file system to locate and delete it manually.
Cleanup behavior by mode
The following table summarizes what happens to backend storage when you delete a PVC, based on mount mode and reclaim policy.
Mount mode | Reclaim policy | Key parameter | PV behavior | Backend NAS behavior |
subpath | Retain | -- | Enters the Released state. Delete manually. | Subdirectory and data retained. Delete manually. |
subpath | Delete | archiveOnDelete: "true" | Automatically deleted. | Subdirectory archived as archived-{pvName}.{timestamp}. |
subpath | Delete | archiveOnDelete: "false" | Automatically deleted. | Subdirectory and data permanently deleted. |
sharepath | Retain | -- | Enters the Released state. | Shared directory and data retained. |
filesystem | Retain | -- | Enters the Released state. | NAS file system and mount target retained. |
filesystem | Delete | deleteVolume: "false" | Automatically deleted. | NAS file system and mount target retained. Delete manually. |
filesystem | Delete | deleteVolume: "true" | Automatically deleted. | NAS file system and mount target automatically deleted. |
In ACK Serverless clusters, even with reclaimPolicy: Delete, the backend NAS subdirectory and data are not deleted or archived due to permission restrictions. Only the PV object is deleted.

References
Troubleshooting
Advanced features
Manage NAS file systems with CNFS—Manage NAS file systems independently for improved performance and QoS control.
Set directory quotas for NAS dynamically provisioned volumes—Enforce capacity limits on subpath-mode volumes.